Summary: There is already a Wired article about the project; this post provides the backstory of what went on behind the scenes.

Whenever someone asks me what I work on, I find nothing better than a single link to a YouTube series of 9 hours filming an actor reading Amazon Kindle's terms and conditions.

Yes, you read that correctly: 9 HOURS. Here is the trailer, and this is a link to the full series.

Hopefully, you're still with me at this early stage and you didn't get lost in an endless spiral of similar videos recommended by YouTube, such as people streaming themselves reading the privacy policies of Google, Facebook, and Apple for hours. Actually, the privacy policy, the equally annoying sibling of terms and conditions, is the subject of this article.

Have you ever wondered how long it would really take to read all the policies for the services we use in a year? The answer is 201 hours, according to research by McDonald and Cranor in 2008.

So, given the choice between this unpaid, exhausting task and anything else, it's no wonder that people prefer spending their time fulfilling their yearly exercise goals, reading an amusing book, or just relaxing. This holds even when they hear frightening stories about what's inside privacy policies (like this and this).

Researchers have tried to make these policies simpler, primarily through manual methods, like websites offering standardized versions of their policies, inspired by nutrition labels. Approaches relying on the wisdom of the crowd have received good traction too (like the "Terms of Service; Didn't Read" project). Yet these attempts didn't scale, due to the huge manual effort and human expertise involved.

Fixing Privacy Policies: the Backstory 🛠

Two years ago, my colleagues and I wrote a paper for a workshop on the future of privacy notices.
In it, we proposed a vision for turning privacy policies into a conversation via a chatbot called PriBot.

Let's admit it: 2016 was the year of chatbots, and we were — shamelessly — motivated by the hype. After all, the best interface is no interface. Right?

Our idea was that you could ask PriBot about any privacy policy the way you ask Siri for the capital of Ivory Coast (spoiler: Yamoussoukro). PriBot would then respond in real time with an answer from the policy itself.

Fast forward ⏩

Twenty months later, we have brought our vision to fruition in a new research paper, in which we first show how we realized the goal of automated question answering for privacy policies. This is a collaboration between Hamza Harkous (yours truly), Kassem Fawaz, Rémi Lebret, Florian Schaub, Kang G. Shin, and Karl Aberer.

As our first research outcome, we're introducing PriBot to the public, via a chatbot that you can converse with right now.

PriBot in action

On the way to building PriBot, we had a surprising byproduct, one with the potential to make an even wider impact: we built a general system for automatically analyzing privacy policies using machine learning. We call it Polisis.

Polisis gives you a glimpse of any privacy policy: the data being collected, the information shared with third parties, the security measures implemented by the company, the choices you have, etc. All that without your having to read a single line of the policy itself.

We're releasing Polisis too. You can alternatively download its Chrome extension or Firefox addon to analyze websites in a click.

Polisis in action

To our knowledge, Polisis is the first system to provide such in-depth automated analysis of privacy policies.

Who is this for?
👩‍⚖️🕵️‍👨‍💻

We have three audiences in mind:

General users: We designed PriBot and Polisis to be highly intuitive for the general user who is interested in the privacy aspects of the sites they use.

Regulators: We envision that the technology powering Polisis could be used for large-scale analysis of privacy policies by regulation agencies. For example, in our paper, we used Polisis to show how privacy certification companies (such as TRUSTe — now TrustArc) have been highly permissive with businesses.

Researchers: A lot of research has studied apps and websites in an automated way, based on their code, their embedded scripts, or what they share over the network. A missing piece of the puzzle is what these apps promise in their privacy policies. We hope our work can empower researchers with new insights from the policies' angle.

To that end, we would be glad to collaborate with regulators, researchers, and the industry at large. Feel free to reach out if you are interested.

No Magic Pill

Now to the research part! You might say: "Couldn't you do the above by combining a few APIs or open-source projects? Didn't IBM Watson beat humans at answering Jeopardy! questions? And why couldn't you use commercial services like Microsoft QnA Maker?"

The answer is that such systems are not magic pills. If they are trained on a specific domain, like insurance questions, they are rarely adequate for others. If they are trained on a general domain, they suffer when tested on specific problems.

Imagine asking the question "do you gather my address info?" about a privacy policy. Almost every QA system will favor the answer "for your info, we work hard to address your issues" over the answer "we use your location for customizing our service." Obviously, the second is the better answer. Yet it is not easy to get.

What makes things harder is that there are no public datasets of questions and answers about privacy policies ready to be trained on.
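To make the "address" ambiguity concrete, here is a toy illustration of my own (not from the paper, and not how any particular QA system is implemented) of why naive keyword overlap picks the wrong answer:

```python
def keyword_overlap(question, answer):
    # Naive bag-of-words scorer: count the words shared
    # between the question and a candidate answer.
    q = set(question.lower().replace('?', '').replace(',', '').split())
    a = set(answer.lower().replace('.', '').replace(',', '').split())
    return len(q & a)

question = "do you gather my address info?"
good = "we use your location for customizing our service."
bad = "for your info, we work hard to address your issues."

# The irrelevant answer shares "info" and "address" with the
# question; the correct answer shares no words at all.
print(keyword_overlap(question, bad))   # scores higher
print(keyword_overlap(question, good))  # scores zero
```

A matcher that only counts surface words has no way to know that "address" here means a location, which is exactly the gap a domain-specific system has to close.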
So traditional approaches to this problem were not the way forward.

A Hierarchical Approach 💤

We went the other way around. We first focused on solving the problem of automatically labeling segments of a privacy policy, producing Polisis. Then we leveraged that solution for the QA problem, producing PriBot. At a high level, our approach for automatically labeling segments was as follows:

Unsupervised learning step: We first trained a word-embedding model on 130K privacy policies that we collected.

Supervised learning step: We then trained a hierarchy of 22 classifiers (each a neural network) to label the different aspects of the policy. We relied on the valuable OPP-115 dataset from the Usable Privacy Project for this part.

Spoiler: if you just read these steps and try to reproduce the results, you will end up with horrible performance. Our paper discusses the devil in the details that leads to high accuracy, from data preprocessing to classifier selection, etc.

How can we move from classification to QA?

Let's say you had the question "Do you share my info?". To get an answer, we first break the policy into small standalone segments. Each segment is a candidate answer. Then we rank the answers by their similarity to the question.

The similarity is measured by seeing which answers receive labels "close" to the question's. To get these labels, we pass both the question and the answers through our classification hierarchy. How to define "close" is also important. For example, questions are frequently broad. You don't want to frustrate the user with close but generic answers. Hence, we came up with a new similarity algorithm to account for this issue (details in the paper).

Examples We Love

Hopefully, by now you are willing to give our tools a try.
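Before moving to the examples, the label-based ranking idea above can be sketched in a few lines. This is a hedged, toy illustration of mine, not the actual PriBot code: the `classify` stub stands in for the real classification hierarchy, and plain Jaccard overlap stands in for the paper's custom similarity metric.

```python
def label_similarity(q_labels, seg_labels):
    # Jaccard overlap between label sets, as a stand-in for the
    # paper's similarity metric.
    union = q_labels | seg_labels
    if not union:
        return 0.0
    return len(q_labels & seg_labels) / len(union)

def rank_answers(question, segments, classify):
    # Label the question once, label every candidate segment,
    # and sort segments by label similarity to the question.
    q_labels = classify(question)
    scored = [(label_similarity(q_labels, classify(s)), s) for s in segments]
    return [s for score, s in sorted(scored, key=lambda t: -t[0])]

def classify(text):
    # Toy "classification hierarchy": keyword-triggered labels.
    rules = {
        "share": "third-party-sharing",
        "collect": "data-collection",
        "location": "data-collection",
    }
    return {label for kw, label in rules.items() if kw in text.lower()}

segments = [
    "We collect your location to customize our service.",
    "We never share your information with advertisers.",
]
ranked = rank_answers("Do you share my info?", segments, classify)
print(ranked[0])  # the sharing-related segment ranks first
```

The real system replaces the keyword stub with the neural classifier hierarchy and the Jaccard score with the broad-question-aware similarity described in the paper.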
Head over to see them both in action. And to get inspired as you stress-test PriBot, check out the following examples.

Here, PriBot works although there are no common words between the question and the answer:

…or when our hands get sloppy and we misspell a few critical words (you can thank subword embeddings for this):

PriBot also notifies users when there is a contradiction in the potential answers:

…and it tries not to appear stupid when presented with irrelevant questions:

Likewise, you can give Polisis a go; you can find a few examples below.

With Fitbit's policy, you can get an overview of what data the company collects. By clicking on "Analytics/Research", you can see all the data being collected for that purpose, along with the options you get.

You can also see, in the second tab, that Fitbit shares health information with third parties. Hovering over the link will give you the exact evidence from the policy itself.

If the policy gives you choices to mitigate data collection, you can see these choices in the dedicated tab, along with the links to opt in or opt out.

Finally, if there is no information about a certain aspect in the policy, we give an explanation of why this is the case.

We hope you enjoy playing with these services, and we welcome your feedback. We know well the limitations of this technology. Hence, we do not claim that the results are legally binding or completely accurate. Yet we believe this is a great step forward on the road to Making Privacy Policies Cool (I'm tempted to end the sentence with 'Again', but they were never cool before 😀)!

And thanks for reading! You might also be interested in checking out my other articles on my Medium page:

Hamza Harkous - Medium

…or my website:

Hamza Harkous' Site

We Gave Privacy Policies an AI Overhaul, and You'll Never Have to Read Them Again! was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
“Why are we still releasing new features so slowly?” I was mulling this over a year after joining a fast-paced start-up. Every new feature was taking painstakingly long.

This was not what I expected. Start-ups should move fast, especially if you are before product/market fit and still bootstrapping.

When I joined, we had 3 development teams. One year and a multi-million seed funding round later, we had 9 development teams at our disposal. We had tripled our development capacity, but new features were being released at the same pace. What was going on?

I hypothesized we were suffering from Brooks' law, which states that adding more software developers to a late project makes it even later. We had hired a lot of amazing developers who still needed to ramp up. As long as they were still getting up to speed, they would be slowing other developers down.

Half a year later, we still were not delivering new features any faster, so Brooks' law was no longer a probable explanation. There had to be something else going on, but what could it be? Then we suffered a production issue that would open my eyes to the likely culprit.

Lessons learned from troubleshooting a production issue

We had just released a new feature for filtering on tags. Support received complaints that the new feature was not working. The first thing I did was upload a new picture myself and tag it with ‘Not Hotdog’. I tried to filter on ‘Not Hotdog’ and confirmed it was not working.

I then made a list of all the teams involved in building the feature. Each team was responsible for a specific component:

Uploads: responsible for all back-end functionality related to uploading and processing images.

Elasticsearch: responsible for all functionality related to Elasticsearch.

Photo album front-end: responsible for the front-end displaying photo albums and images.

DevOps: responsible for the infrastructure and for deploying new features in the cloud.

I decided to talk to the Uploads team first.
I asked the lead developer why their feature was not working. He ran some checks and concluded the tags were saved on upload. He suspected the tags were not being indexed and directed me to the Elasticsearch team.

The developer from the Elasticsearch team peeked into Elasticsearch. The ‘Not Hotdog’ tag I had added was indexed. Everything in Elasticsearch looked great, so he directed me to the Photo album front-end team: it must be something in the front-end, and it was not their problem. I began to get annoyed, as this was the second time I was being redirected.

The front-end team discovered that an older version of a front-end library had been deployed. The new filtering functionality depended on this library, so this was probably the origin of the production issue. The front-end team advised me to talk to the DevOps team to get the right front-end library deployed. Tired of being redirected, I told the front-end team they should talk to the DevOps team instead and get it fixed. Finally, the issue got resolved.

After these inefficient interactions, I realized the teams lacked ownership over what they were building. Our teams were organized around components. No team owned features; they just owned small parts of the whole puzzle.

The component team structure made it hard for the teams to deliver new features fast. All their efforts were tightly coupled. If one of the four teams was blocked or delayed, this would affect the delivery of the whole feature.

So why do companies start using component teams?

When product development starts, development teams often self-organize into component teams. When a product is vague, at least the technical components making it work are clear. It is easy to create your teams based on clear components.

The idea behind component teams is simple: assign system components to teams.
Each team is responsible for one or more components.

Component teams help maximize the output of your developers by having them work only on specific parts of the system. Building something new in Amazon Redshift? Only a single team needs to worry about gaining experience with Redshift.

The downside of component teams is that dependency management needs to be handled at the feature level. Features spanning component team boundaries immediately generate dependencies requiring active management. This is only manageable when you have few components or your features do not span many component team boundaries.

Imagine you are a Product Owner and want to pick up ‘Feature 2’ in the picture below. Feature 2 depends on two components assigned to different teams. Feature 2 can only go live when both teams have completed the necessary work on their components to make the feature work.

Component teams: a 1-to-1 relationship between teams and system components. Features are shared across multiple teams, based on the changes necessary in the components.

To go back to the filtering-on-tags example: just as it was hard to troubleshoot the feature, imagine how ineffective it was to have all these different teams coordinate to deliver such a small feature!

The Uploads team made it possible to add tags when uploading. The saved tags then needed to be indexed by Elasticsearch, so it became possible to filter on them. Then the Photo album front-end team needed to add the front-end that allows you to filter on tags from the user interface. The work of all four teams came together, and the feature was deployed.

In short, all four teams needed to coordinate to get the new feature live, as each was responsible for one component. Any delay in a component delays the release of the feature. The more components and teams involved, the more coordination problems and unnecessary waiting can arise.
If just one sprint of one team fails, the feature cannot be released.

The alternative: feature teams

Instead of making development teams responsible for components, you can make teams responsible for features. When using feature teams, you assign one or more features to a team instead of components.

Imagine you had a single team responsible for filtering & searching and another team responsible for adding metadata to pictures. When delivering the filtering-on-tags feature, you suddenly have only two teams that need to coordinate their efforts.

Feature teams: a 1-to-1 relationship between teams and features. Components are shared across multiple teams.

What makes feature teams hard is that ownership of components is shared. Multiple teams may be working on the delivery of new features requiring changes in the same components. The coordination efforts move from the feature level to the component level.

Managing dependencies at the component level is hard, but easier than doing it at the feature level. You need to make sure changes to components are reviewed by the people in the company who know the most about those components. There also needs to be adequate knowledge sharing between teams about all components.

Switch to feature teams when component teams hurt your development speed

Most companies start with component teams because, when the product is unclear, at least the components are clear. At some point, component teams might start preventing you from moving fast. There are clear symptoms that your component team structure might be slowing you down:

Small features take much longer to develop than expected, because they cross components owned by many different teams.

Finger-pointing when there are production issues; nobody owns any feature.

If this happens, you might consider switching to feature teams. Working with feature teams presents its own unique challenges. You need to figure out how to best share multiple components between multiple teams.
Knowledge transfer between all teams is often required. It might also be necessary to adjust your architecture to decouple the different components as much as possible.

One of the hardest parts of working with feature teams is actually making the switch. How do you structure the teams? How do you transfer knowledge? How do you make sure you get buy-in from the organisation to change the teams?

I will follow up with another article on how you can make the switch to feature teams.

Further reading

Feature Teams in LeSS

Why your development team structure might be slowing you down was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
BY RYAN HOLMES

Growth or profit? For tech startups, it's a million-dollar (and sometimes billion-dollar) question. If you want to expand your business, you generally have to spend money. But when does growth at all costs become a reckless strategy? How do you find the right balance between growth and burn?

The tech world's last major reckoning with this question was back in March 2014, when Silicon Valley darling Box filed an S-1 indicating it was ready to IPO. For years, Box, which offers cloud-based content-management software to enterprises, had been growing at a fast clip. Investors were willing to overlook the massive amounts of money the company was losing, citing its long-term potential.

But then something unexpected happened. The markets shifted. Investors hunkered down. Whispers went out that Box's "unit economics" weren't working. So the company waited nine long months before finally going public. The new attitude: growth was important, but companies needed to be profitable (or at least show a clear path to profitability), too.

So what's the optimal ratio between these two fundamentals? Ultimately, the answer is still fairly subjective, but there's an insanely simple chart that can help you sort it out.

MEET THE GROWTH-PROFIT MATRIX

There's no shortage of methods out there for assessing whether your startup has got the growth-profit balance just right — the real question is what "just right" actually means. The now-famous "Rule of 40," for example, suggests that a successful software-as-a-service (SaaS) startup's growth rate plus profit should add up to 40: if you're growing at a 60% rate, you can afford to lose 20%, for instance.

But I'm a highly visual person and started to wonder whether there was a way to express this dynamic graphically.
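As an aside, the Rule of 40 arithmetic is simple enough to sanity-check in a couple of lines. A minimal sketch (the function name and threshold parameter are mine, not part of any standard tool):

```python
def rule_of_40(growth_pct, profit_margin_pct, threshold=40):
    """True if growth rate plus profit margin meets the threshold."""
    return growth_pct + profit_margin_pct >= threshold

# Growing at 60% while losing 20% still passes (60 - 20 = 40)...
assert rule_of_40(60, -20)
# ...but growing at 30% at break-even falls short (30 + 0 = 30).
assert not rule_of_40(30, 0)
```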
Thinking about Hootsuite's own trajectory, I threw together this super-simple chart:

Now, I realize this is far from scientific, and I also know the underlying concept isn't new or revolutionary — everyone from Boston Consulting Group to the venture capitalist Tomasz Tunguz has used graphs like this to assess businesses. It's very basic, but — at a glance — it should let you know if your company is headed in the right direction.

Needless to say, the bottom-left quadrant here is the one you generally don't want to find yourself in. With few exceptions, you don't want your startup to be losing money and not really growing. That's a sure sign that you haven't mastered product-market fit yet.

The top-left quadrant is where most promising startups start off. It's definitely where Hootsuite was in its early years. We were losing money, but for all the right reasons: burning through our investments in order to grow fast. In retrospect, this approach let us gain a huge early lead on our competitors in the social-relationship platform space.

As we matured, our priorities shifted. Growth remained important, but investors and analysts became increasingly focused on seeing a path to profitability. So we reduced our spending, tightening belts and asking employees to do more with less. Last year, we achieved a cash-flow-positive milestone. There's no doubt we're a healthier company now, one built to make money and built to last doing it.

A CONSTANT GAME OF FOUR SQUARE

The idea of just "breaking even" may not sound like a milestone, but if you look at similar-sized companies in our space, it's actually kind of revolutionary. We're an eight-year-old business, we're still growing at a great pace, and we're actually cash-positive.
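To recap the quadrants described above in code form, here is a hedged sketch of my own (the function and label wording are mine; the chart itself simply plots growth against profit):

```python
def quadrant(growing, profitable):
    """Map a company's state onto the growth-profit matrix quadrants."""
    if growing and profitable:
        return "top-right: growing and profitable (the elusive goal)"
    if growing:
        return "top-left: growing but losing money (most promising startups)"
    if profitable:
        return "bottom-right: profitable but not growing"
    return "bottom-left: losing money and not growing (no product-market fit)"
```

For instance, `quadrant(True, False)` describes the typical early-stage startup burning investment to grow fast, while the move to cash-flow-positive shifts a company toward the right half of the chart.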
For most cloud companies — from Zendesk and Marketo to HubSpot and Shopify — the idea of breaking even doesn't enter the picture until anywhere from two to four years after IPO.

So when it comes to the growth-profit matrix, it's all about angling yourself into the right quadrant at the right time — knowing it might not be in your best interest to try and stay put there forever. For many startups, this is easy to miss.

If we were to think of these four types of growth-profit ratio as phases, stretched out chronologically, we might be setting ourselves up to fail. I hardly think of our high-growth days as over and done with. While it's not easy to achieve high profitability and high growth at the same time, there's one way to break into that elusive top-right quadrant of the chart (and back into it if you've had to slow down and refocus for a while): continuous innovation. By developing new product lines and finding new ways to bring real benefits to customers, it's possible to sustain high profits while also expanding market share.

In our case, for example, we've built new functionality into our core platform, including the ability to buy social media ads. And we're adding features that make our dashboard useful not just to marketers but to net new audiences — namely, sales and customer service teams. All these steps align with our long-term goal — one that's predicated on high growth and high profit — of becoming a $10 billion company.

The profit-growth question doesn't have easy answers. What's more, you can never stop asking it, because the right answer might change according to circumstance. Depending on your industry and the stage of your company, "success" might mean a very different ratio than it does for a different startup in a different phase. Ultimately, every investor cares primarily about one thing: how much cash flow your company generates over its lifetime.
You may not be profitable now, but there needs to be a clear route to profitability, ideally in your near future — no matter how groundbreaking your business idea may be.

This article originally appeared on Fast Company and is reprinted with permission.

More From Fast Company:

Making The Case For Hiring Less And Growing Slowly

Why This CEO Appointed An Employee To Change Dumb Company Rules

Three Myths About Successful Founders That Just Won't Die

The Quick and Dirty Growth-Profit Matrix was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
by Rupert Hargreaves | 12 February 2018 Over the past 12 months, monthly inflows into high-grade bond funds and ETFs have averaged between $20B and $30B. However, over the same period, these funds have produced a return of around 0%. The post International Investors Rush To Buy US “High-Grade” Bonds, But Inflows Could Crash Soon appeared first on We Study Billionaires.
12 February 2018 In the week ahead, market participants will eye fresh weekly information on U.S. stockpiles of crude and refined products on Tuesday and Wednesday to gauge the strength of demand in the world’s largest oil consumer and how fast output levels will continue to rise. The post Crude Oil Prices – Weekly Outlook: Feb. 12 – 16 appeared first on We Study Billionaires.
A compromised plugin for browsing aid BrowseAloud has infected thousands of websites with crypto mining malware. #NEWS
In this article, I will talk about how you can build a serverless application using the AWS Serverless Application Model (SAM) to perform log analytics on AWS CloudTrail data using Amazon Elasticsearch Service.

The serverless application will help you analyze AWS CloudTrail logs using Amazon Elasticsearch Service. The application creates a CloudTrail trail, sets log delivery to an S3 bucket that it creates, and configures SNS delivery whenever a CloudTrail log file has been written to S3. The app also creates an Amazon Elasticsearch domain and an AWS Lambda function which gets triggered by the SNS message, gets the S3 file location, reads the contents of the S3 file, and writes the data to Elasticsearch for analytics.

First, let's learn what AWS CloudTrail, Elasticsearch, Amazon Elasticsearch Service, AWS Lambda, and AWS SAM are.

What is AWS CloudTrail?

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

AWS CloudTrail

What is Elasticsearch?

Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data so you can discover the expected and uncover the unexpected.

Elasticsearch: RESTful, Distributed Search & Analytics | Elastic

What is Amazon Elasticsearch Service?

Amazon Elasticsearch Service makes it easy to deploy, secure, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more.
Amazon Elasticsearch Service is a fully managed service that delivers Elasticsearch’s easy-to-use APIs and real-time analytics capabilities alongside the availability, scalability, and security that production workloads require.Amazon Elasticsearch Service - Amazon Web Services (AWS)What is AWS Lambda?AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume — there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service — all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.AWS Lambda - Serverless Compute - Amazon Web ServicesWhat is AWS Serverless Application Model?AWS Serverless Application Model (AWS SAM) prescribes rules for expressing Serverless applications on AWS. The goal of AWS SAM is to define a standard application model for Serverless applications.awslabs/serverless-application-modelNow let’s look at how we can build a Serverless App to perform Log Analytics on AWS CloudTrail data using Amazon Elasticsearch Service.This is the architecture of the CloudTrail Log Analytics Serverless Application:Architecture for Serverless Application: CloudTrail Log Analytics using ElasticsearchAWS Serverless Application Model is a AWS Cloudformation template. Before we look at the code for SAM template, let’s work on packaging our AWS Lambda.On your workstation, create a working folder for building the Serverless Application.Create a file called for the AWS Lambda:""" This module reads the SNS message to get the S3 file location for cloudtrail log and stores into Elasticsearch. 
"""from __future__ import print_functionimport jsonimport boto3import loggingimport datetimeimport gzipimport urllibimport osimport tracebackfrom StringIO import StringIOfrom exceptions import *# from awses.connection import AWSConnectionfrom elasticsearch import Elasticsearch, RequestsHttpConnectionfrom requests_aws4auth import AWS4Authlogger = logging.getLogger()logger.setLevel(logging.INFO)s3 = boto3.client('s3', region_name=os.environ['AWS_REGION'])awsauth = AWS4Auth(os.environ['AWS_ACCESS_KEY_ID'], os.environ['AWS_SECRET_ACCESS_KEY'], os.environ['AWS_REGION'], 'es', session_token=os.environ['AWS_SESSION_TOKEN'])es = Elasticsearch( hosts=[{'host': os.environ['es_host'], 'port': 443}], http_auth=awsauth, use_ssl=True, verify_certs=True, connection_class=RequestsHttpConnection)def handler(event, context):'Event: ' + json.dumps(event, indent=2)) s3Bucket = json.loads(event['Records'][0]['Sns']['Message'])['s3Bucket'].encode('utf8') s3ObjectKey = urllib.unquote_plus(json.loads(event['Records'][0]['Sns']['Message'])['s3ObjectKey'][0].encode('utf8'))'S3 Bucket: ' + s3Bucket)'S3 Object Key: ' + s3ObjectKey) try: response = s3.get_object(Bucket=s3Bucket, Key=s3ObjectKey) content = gzip.GzipFile(fileobj=StringIO(response['Body'].read())).read() for record in json.loads(content)['Records']: recordJson = json.dumps(record) indexName = 'ct-' +"%Y-%m-%d") res = es.index(index=indexName, doc_type='record', id=record['eventID'], body=recordJson) return True except Exception as e: logger.error('Something went wrong: ' + str(e)) traceback.print_exc() return FalseCreate a file called requirements for the python packages that are needed:elasticsearch>=5.0.0,<6.0.0requests-aws4authWith the above requirements file created in your workspace, run the below command to install the required packages:python -m pip install -r requirements.txt -t ./Create a file called template.yaml that will store the code for AWS SAM:AWSTemplateFormatVersion: '2010-09-09'Transform: 
```yaml
Transform: 'AWS::Serverless-2016-10-31'
Description: >
  This SAM example creates the following resources:
  S3 Bucket: S3 bucket to hold the CloudTrail logs;
  CloudTrail: creates a CloudTrail trail for all regions and configures it to deliver logs to the above S3 bucket;
  SNS Topic: configures an SNS topic to receive notifications when a CloudTrail log file is created in S3;
  Elasticsearch Domain: creates an Elasticsearch domain to hold the CloudTrail logs for advanced analytics;
  IAM Role: creates an IAM role for Lambda execution and assigns it read-only S3 permission;
  Lambda Function: creates a function which gets triggered when SNS receives a notification, reads the contents from S3 and stores them in the Elasticsearch domain

Outputs:
  S3Bucket:
    Description: "S3 Bucket Name where CloudTrail Logs are delivered"
    Value: !Ref S3Bucket
  LambdaFunction:
    Description: "Lambda Function that reads CloudTrail logs and stores them into Elasticsearch Domain"
    Value: !GetAtt Function.Arn
  ElasticsearchUrl:
    Description: "Elasticsearch Domain Endpoint that you can use to access the CloudTrail logs and analyze them"
    Value: !GetAtt ElasticsearchDomain.DomainEndpoint

Resources:
  SNSTopic:
    Type: AWS::SNS::Topic
  SNSTopicPolicy:
    Type: "AWS::SNS::TopicPolicy"
    Properties:
      Topics:
        - Ref: "SNSTopic"
      PolicyDocument:
        Version: "2008-10-17"
        Statement:
          - Sid: "AWSCloudTrailSNSPolicy"
            Effect: "Allow"
            Principal:
              Service: "cloudtrail.amazonaws.com"
            Resource: "*"
            Action: "SNS:Publish"
  S3Bucket:
    Type: AWS::S3::Bucket
  S3BucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    Properties:
      Bucket:
        Ref: S3Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: "AWSCloudTrailAclCheck"
            Effect: "Allow"
            Principal:
              Service: "cloudtrail.amazonaws.com"
            Action: "s3:GetBucketAcl"
            Resource: !Sub |-
              arn:aws:s3:::${S3Bucket}
          - Sid: "AWSCloudTrailWrite"
            Effect: "Allow"
            Principal:
              Service: "cloudtrail.amazonaws.com"
            Action: "s3:PutObject"
            Resource: !Sub |-
              arn:aws:s3:::${S3Bucket}/AWSLogs/${AWS::AccountId}/*
            Condition:
              StringEquals:
                s3:x-amz-acl: "bucket-owner-full-control"
  CloudTrail:
    Type: AWS::CloudTrail::Trail
    DependsOn:
      - SNSTopicPolicy
      - S3BucketPolicy
    Properties:
      S3BucketName:
        Ref: S3Bucket
      SnsTopicName:
        Fn::GetAtt:
          - SNSTopic
          - TopicName
      IsLogging: true
      EnableLogFileValidation: true
      IncludeGlobalServiceEvents: true
      IsMultiRegionTrail: true
  FunctionIAMRole:
    Type: "AWS::IAM::Role"
    Properties:
      Path: "/"
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
        - "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: "AllowLambdaServiceToAssumeRole"
            Effect: "Allow"
            Action:
              - "sts:AssumeRole"
            Principal:
              Service:
                - "lambda.amazonaws.com"
  ElasticsearchDomain:
    Type: AWS::Elasticsearch::Domain
    DependsOn:
      - FunctionIAMRole
    Properties:
      DomainName: "cloudtrail-log-analytics"
      ElasticsearchClusterConfig:
        InstanceCount: "2"
      EBSOptions:
        EBSEnabled: true
        Iops: 0
        VolumeSize: 20
        VolumeType: "gp2"
      AccessPolicies:
        Version: "2012-10-17"
        Statement:
          - Sid: "AllowFunctionIAMRoleESHTTPFullAccess"
            Effect: "Allow"
            Principal:
              AWS: !GetAtt FunctionIAMRole.Arn
            Action: "es:ESHttp*"
            Resource: !Sub |-
              arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/cloudtrail-log-analytics/*
          - Sid: "AllowFullAccesstoKibanaForEveryone"
            Effect: "Allow"
            Principal:
              AWS: "*"
            Action: "es:*"
            Resource: !Sub |-
              arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/cloudtrail-log-analytics/_plugin/kibana
      ElasticsearchVersion: "5.5"
  Function:
    Type: 'AWS::Serverless::Function'
    DependsOn:
      - ElasticsearchDomain
      - FunctionIAMRole
    Properties:
      Handler: index.handler
      Runtime: python2.7
      CodeUri: ./
      Role: !GetAtt FunctionIAMRole.Arn
      Events:
        SNSEvent:
          Type: SNS
          Properties:
            Topic: !Ref SNSTopic
      Environment:
        Variables:
          es_host:
            Fn::GetAtt:
              - ElasticsearchDomain
              - DomainEndpoint
```

Packaging artifacts and uploading them to S3: run the following command to upload your artifacts to S3 and output a packaged template that can be readily deployed:

```shell
aws cloudformation package \
  --template-file template.yaml \
  --s3-bucket bucket-name \
  --output-template-file serverless-output.yaml
```

Deploying this AWS SAM app to AWS CloudFormation: you can use the aws cloudformation deploy CLI command to deploy the SAM template. Under the hood, it creates and executes a changeset and waits until the deployment completes. It also prints debugging hints when the deployment fails. Run the following command to deploy the packaged template to a stack called cloudtrail-log-analytics:

```shell
aws cloudformation deploy \
  --template-file serverless-output.yaml \
  --stack-name cloudtrail-log-analytics \
  --capabilities CAPABILITY_IAM
```

Refer to the documentation for more details. I recommend reading about Elasticsearch Service access policies in the documentation and modifying the access policy of the Elasticsearch domain to fine-tune access further.

Once the serverless application is deployed in your AWS account, it will automatically store the AWS CloudTrail data into Amazon Elasticsearch Service as soon as each log file is delivered to S3. With the data in Elasticsearch, you can use Kibana to visualize it and create the dashboards you need on the AWS CloudTrail data.

The above Serverless Application Model app is available at the GitHub repo: ExpediaDotCom/cloudtrail-log-analytics

Serverless App: AWS CloudTrail Log Analytics using Amazon Elasticsearch Service was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
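A note on the function code: the template above wires index.handler to the SNS topic, but the handler itself is in the repo, not the article. As a rough, hypothetical sketch (this is not the code from the ExpediaDotCom repo; the function names are my own, and the s3Bucket/s3ObjectKey fields come from the standard CloudTrail SNS notification format), the parsing such a handler performs before indexing into Elasticsearch looks like this:

```python
import gzip
import io
import json

def extract_s3_objects(sns_event):
    """Pull (bucket, key) pairs out of the SNS notification that
    CloudTrail publishes when it delivers a new log file to S3."""
    objects = []
    for record in sns_event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        bucket = message["s3Bucket"]
        for key in message["s3ObjectKey"]:
            objects.append((bucket, key))
    return objects

def parse_cloudtrail_log(gzipped_bytes):
    """CloudTrail log files are gzipped JSON with a top-level
    'Records' array; return one document per API event."""
    with gzip.GzipFile(fileobj=io.BytesIO(gzipped_bytes)) as f:
        body = json.load(f)
    return body.get("Records", [])
```

The real handler would fetch each object with the S3 read-only role, run it through parsing like the above, and then bulk-index the resulting documents against the es_host endpoint from the function's environment variables.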
Let’s fall in love with the problem, not the solution.

“Hi, I’m Bitcoin — a really noble thought by my master Satoshi, who brought me to life (and then left me?!) as a solution to a merchant’s pain of handling micro-transactions through centralized financial institutions. But look what this toxic and greedy community has done to me. A selfish few, disguised as saints, are driving their own propaganda and promoting me as an oracle of a kind, a panacea to all of humanity’s problems… well, I’ve had enough and I want to tell them very politely to f%$& *@!”

Bitcoin to humanity, Feb 2018.

Let me do the honors and try to translate what Bitcoin might have said looking at the situation today. I can already hear the disapproving comments, but before you go off on a critical tangent, (at least try to) hear me out.

I love the idea of decentralization — giving more power to people, a world without borders, an economy without middlemen and more. So much so that I’m working on a similar mission myself! But when it comes to Bitcoin — which truly is an innovative technological marvel of modern times — it’s important not to lose sight of the bigger picture or live in denial about something that was not designed to be what some people claim and try to make us believe.

Advocates of the blockchain, like those of any other promising technology, often draw comparisons to the rise of the Internet in the 90s and point out that it took at least a few years before the first killer website or app came around. With blockchain, however, we’re yet to even find a killer use case, let alone an implementation of one. It’s been nine years since the first blockchain network, Bitcoin, came through; by now you’d expect to see at least a few platforms/apps that solve real-world problems and are adopted by the masses. Plus, today, with easy access to the internet (a.k.a. information and knowledge), shouldn’t the pace of innovation be much, much faster than it was more than 20 years ago?

On paper, blockchain and subsequently a
cryptographic currency sound like the solution to all our monetary grievances — finally, a war on the governments, banks and centralized institutions that have been exploiting the common man since the invention of money — power back to the people, yay!

However, take a close, hard look at the core workings of the Bitcoin blockchain and it becomes evident that it’s far from being the solution it’s touted to be. To clarify, I’m not talking about cryptocurrencies and the problems they aim to address — cryptocurrencies, simply put, are still good! In this article, I’m trying to question how Bitcoin as a solution addresses the critical issues we face with other payment methods, and whether or not its core advantages actually solve real end-user problems. So let’s just jump in.

Value #01: Limited/No fees

Banks and financial institutions, centralized in nature, have been playing middlemen for decades, charging hefty fees and utilizing our assets for their own benefit in the name of fractional-reserve banking.

Is Bitcoin solving the problem?

A public network with a distributed, secure and transparent ledger that can be viewed by anyone, with no central body acting as a middleman and charging unnecessary fees. As noble and effective as this sounds, in reality upholding and updating a massive decentralized system presents its own share of loopholes. For starters, this transparent distributed ledger demands the verification of transactions so they can be added to the ledger. In essence, the miners who verify these transactions and add them to the ledger take on the partial role of a bank, and thus charge mining (fair) and transaction (unfair) fees.
While banks charge anything between $0.20 and $0.60 per transaction, miners on cryptocurrency networks such as the popular Bitcoin charge anything between $20 and $60 per transaction. Banks charging hefty fees is definitely a problem worth solving, but how does replacing them with miners who charge even higher transaction fees solve anything? Moreover, the original Bitcoin vision was not to replace the financial institutions completely but to provide merchants a better (and cheaper) way to manage micro-transactions.

Value #02: Speed and agility

Bitcoin claims to be the end-all solution for global transactions and peer-to-peer transfers, providing a way for two willing parties across the globe to exchange monetary value securely and instantly.

Is Bitcoin solving the problem?

The time taken to verify bitcoin transactions often ranges anywhere from 10 minutes to 2 full days. While this might seem acceptable at first, it is extremely slow compared to the 2000 transactions credit/debit card networks verify every second, and hence far from being an instant process. Even overwhelmingly hyped solutions such as the Lightning Network have their own fundamental deficiencies, which are outside the scope of this post.

Value #03: Anonymity and Censorship Resistance

Other than the obvious controls governments exercise, time and again we’ve seen them intervene in extreme ways, like freezing accounts or restricting access to funds at will.

Is Bitcoin solving the problem?

One of the most appealing features of the blockchain technology, anonymity on the network, offered through cryptographic hashing, is meant to (sort of) protect us from the direct reach of any government. Additionally, transactions recorded on the ledger can never be removed or manipulated, making it resistant to all kinds of censorship. Yet I feel like we’re almost fooling ourselves when we say that we are anonymous on the blockchain.
We’re still prone to being identified — even the Satoshi whitepaper warned of this! If a government really wanted to track someone down, what is stopping them? Nothing. They did it with Ross William Ulbricht after criminalizing Silk Road. Plus, how long does it even take a government to ban cryptocurrencies altogether? China did it, and how! The point is, there is a clear lack of defensibility, with nothing stopping governments from exercising control when they really want to.

Moreover, I have no idea why we’re celebrating when this pseudo-anonymous state gives rise to problems of its own — from money laundering to easy terrorism financing. In short, what exactly is the fuss about? Instead, we should be focusing on delivering defensibility, security and efficiency, all of which are lacking in existing systems.

Value #04: Community governance and fair incentives

With governments having the final authority and central banks controlling the monetary supply, the big corporations affiliated with either of them are the ones who stand incentivized.

Is Bitcoin solving the problem?

Bitcoin aims to tackle the central-authority and incentivization problem by giving everyone in the community a voice and an equal chance to gain incentives by becoming developers or miners in a distributed economy. Yet today, getting into the mining scene is more difficult than it was ever intended to be.
With mining pools inevitably beating the small guys to the finish line, more power now lies in the hands of these pools thanks to their superior computational ability. Plus, mining is definitely not for the technologically challenged — in fact, it requires quite a bit of tech expertise, shows zero love for good UX, and has been made unnecessarily complex and almost impossible for the common man to participate in.

Sure, we wanted the power back in our hands, but today, where individual miners stood, there are mining pools (mining companies with superior computational ability) in whose hands lies more authority than initially desired — they not only choose which transactions to verify first but also have the power to game the system. Further, we’ve blindly given crypto exchanges an immense amount of control over market prices too — I mean, come on, we saw how Coinbase employees influenced the price of BCH very recently. The point is, how are we doing things any differently when we’ve only replaced corporations with more corporations and total governance chaos?

The claim that the money supply is fixed is itself not entirely true if you take into account how various hard forks can simply lead to more money being printed out of thin air and handed to existing holders. An open and ethical system with a more optimal solution could help prevent this chaos and lack of direction (like the recent SegWit2X drama) often faced by the crypto community.

Value #05: Security

Since the ledger is distributed, there is no single point whose failure can affect users on the network, making it more secure than bank servers.

Is Bitcoin solving the problem?

While the blockchain per se is very difficult to hack in the absence of 51% control, time and again crypto exchanges and wallets have been hacked, losing millions. In 2014 we saw the fall of Mt.Gox, from handling almost 70% of all bitcoin transactions to going bankrupt after losing $450 million to a hack; last year we saw Bitfinex lose $65 million to a hack, and NiceHash lost $62
million to “a professional hack” as recently as last month. I am, like many others, all in for a cryptocurrency where no central body or institution controls its flow, supply or demand, but is complete decentralization really an efficient answer, especially when we are opening ourselves up to all kinds of manipulation, hacks and threats?

Side effects of Bitcoin

The way I see it, instead of efficiently addressing the problems it claims to, bitcoin has given rise to problems of its own:

1/ Speculative Bubbles

There are good old market-manipulation tactics that come along with anonymity, like painting the tape, where traders who hold large volumes of a coin trade among themselves, making traded volumes go up. This in turn makes more people buy the coin because, hey, FOMO is real — demand goes up, the price of the coin goes up, ultimately creating a big win-win situation for large holders. The fact is, with the top 20 richest wallet addresses holding more than 7% (approximately 2 million bitcoins) of the coins in circulation right now, Satoshi rumored to hold more than 1 million bitcoins, and top and early hodlers spreading their assets out into countless wallets, it’s extremely difficult to tell whether a hike or dip in a coin’s price is genuine or just a manipulation trick. I don’t understand why we’re celebrating anonymity and victory against governments when, evidently, we’re still being governed (read: played).

2/ Energy inefficiencies

Ridiculous amounts of electricity and machine power get consumed daily to maintain and update the almighty ledger. In fact, this year alone bitcoin mining used more energy than what 159 countries consumed! How are we suggesting that a decentralized currency be adopted by the masses when mass adoption is far from sustainable?

3/ Complex experience

As technologically advanced as the core crypto community is, very few people in the world actually know what blockchain is.
For that matter, even traders have confessed to not knowing a thing about the tech per se. Only a mere 3 million people worldwide use cryptocurrencies; that’s hardly 0.04% of the total global population, which is nothing given that it’s been around for 9 years! And can you blame the common man for not jumping right in? The community has made it unnecessarily difficult for him to grasp the dynamics of the cryptoverse, with little to zero effort invested in improving user experience or awareness across the various blockchain platforms.

Final Thoughts

To wrap up: cryptocurrencies are here and going to stay, since they address all of the issues we face with fiat currency (I will write a follow-up post on this), but we seriously need to rethink how we’re going to reach our end goal. The core technology powering these amazing virtual assets might not be the golden goose it’s hyped up to be — in fact, the way I see it, it opens us up to more liabilities and inefficiencies in the long run.

If you made it this far, don’t shy away from showing the love by tapping 👏 and sharing 🙏. Would love to hear your thoughts in the comment section 👇!

The way we live and work today will fundamentally change in the next 5 years. Want to be a part of this revolution? Do join us in building the future at Bottr!

Bitcoin to us: If you love me, leave me alone. was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
“Occlusion” means hiding virtual objects behind real things.

One of the biggest and most elusive pieces of the augmented reality puzzle is occlusion. In other words, the ability to hide virtual objects behind real things. This post is about why occlusion in AR is so hard and why deep learning might be the key to solving it in the future.

When you look at a purely real or a purely virtual world, you tend to accept the “rules” of that world or suspend disbelief, as long as it satisfies some basic notions of reality like gravity, lighting, shadows, etc. You’ll notice when these rules are broken because it’s jarring and feels like something “doesn’t look right”. That’s why it’s so instinctual to cringe at bad special effects in movies.

In VR, it’s actually quite easy to achieve suspension of disbelief because you have complete control over all elements in the scene. Unfortunately, as an AR developer, you don’t have this luxury because most of your app’s screen real estate (i.e. the real world) is totally out of your control.

In the mobile world, Apple’s ARKit has achieved incredibly fast motion tracking as well as realistic lighting and shadows, but it’s still lacking when it comes to occlusion. Here’s an example: does the screenshot below look strange to you? It’s because the dragon looks like it’s further away from the chair but still appears in front of the chair. Without occlusion, this dragon looks weird overlapping the chair.

This isn’t just a problem with mobile AR. It’s also a problem on every headset available today.

How does occlusion in AR work?

The goal of occlusion is to preserve the rules of line-of-sight when creating AR scenes. That means any virtual object that is behind a real object should be “occluded”, or hidden behind that real object. So how is this done in AR?
Basically, we selectively prevent parts of the virtual scene from rendering on the screen, based on some knowledge of the 3D structure of the real world. Doing this involves 3 main functions:

1. Sensing the 3D structure of the real world.
2. Reconstructing a digital 3D model of the world.
3. Rendering that model as a transparent mask that hides virtual objects.

But what is so hard about it?

Assuming you have a good reconstruction of all the objects in your real environment, occlusion involves simply rendering that model as a transparent mask in your scene. That’s the easy part. It’s getting to that point where things start to get unwieldy. Consider a common street scene. There are people, vehicles, trees and all kinds of objects at various distances from you. Further away, there are larger structures like bridges and buildings, each with their own unique features. The real world is a complicated and dynamic 3D scene.

The hardest thing about creating a realistic occlusion mask is actually reconstructing a good enough model of the real world to apply that mask. That’s because no AR device available today has the ability to perceive its environment precisely or quickly enough for realistic occlusion.

How does 3D sensing work?

Sensing 3D structure really boils down to one important ability — depth sensing. Depth sensors come in many flavors, the common ones being Structured Light, Time of Flight and Stereo Cameras. In terms of hardware, Structured Light and Time of Flight involve an infrared projector and sensor pair, while Stereo requires two cameras at a fixed distance from each other, pointing in the same direction. At a high level, here’s how they work:

Structured Light Sensor: Structured Light sensing works by projecting an IR light pattern onto a 3D surface and using the distortions to reconstruct surface contours.

Time-of-Flight Sensor: This sensor works by emitting rapid pulses of IR light that are reflected by objects in its field of view.
The delay in the reflected light is used by an image sensor to calculate the depth at each pixel.

Stereo Cameras: Stereo cameras simulate human binocular vision by measuring the displacement of pixels between two cameras placed a fixed distance apart, and use that to triangulate distances to points in the scene.

Of course, all these sensors have their limitations. IR-based sensors have a harder time functioning outdoors because bright sunlight (lots of IR) can wash out or add noise to the measurements. Stereo cameras have no problems working outdoors and consume less power, but they work best in well-lit areas with a lot of features and stark contrast. All you need to do to confuse a stereo camera is point it at a flat white wall.

Since all these sensors work on pixel-based measurements, any noise or error in the measurements creates holes in the depth image. Also, at the size and power capacity of phones and headset devices today, the maximum range achieved so far has been about 3–4 meters.

The image below is an example of a depth map created with a stereo camera. The colors represent distance from the camera. See how the measurements are good at close range while further objects are too noisy or ignored?

3D perception doesn’t end at depth sensing. The next step is to take the 2D depth image and turn it into a 3D point cloud, where each pixel in the depth image gets a 3D position relative to the camera. Next, all the camera-relative point clouds are fused with an estimate of camera motion to create a 3D point cloud map of the world around the sensor. The video below illustrates the complete point cloud mapping process.
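To make this pipeline concrete, here is a minimal, illustrative Python sketch of three core steps: turning stereo disparity into depth, back-projecting a pixel into a camera-space 3D point, and the per-pixel test for whether a virtual fragment should be hidden. The function names, the 4 m range cutoff, and the use of 0 to encode a depth-map hole are my own assumptions for illustration, not any device's actual API:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Classic stereo relation: depth = focal * baseline / disparity.
    Zero (or negative) disparity means unmatched or at infinity."""
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_m / disparity_px

def unproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: give pixel (u, v) with a known depth
    a 3D position (x, y, z) in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def hide_virtual_pixel(real_depth, virtual_depth, max_range=4.0):
    """Per-pixel occlusion test: hide the virtual fragment only when a
    real surface was actually measured (0 encodes a sensor hole) in
    front of it. Pixels beyond the sensor's range can't occlude."""
    return 0.0 < real_depth <= max_range and real_depth < virtual_depth
```

Note how holes and the range limit force the test to fail open (the virtual object stays visible), which is exactly the kind of artifact the rest of this article is about.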
Now that you understand the complete pipeline of 3D perception, let’s look at how this translates to implementing occlusion.

Using Depth Sensor Data for Occlusion

There are a few ways 3D depth information can be used to occlude virtual objects.

Method 1: Directly use the 2D depth map coming in from the sensor.

In this method, we align the camera image and the depth map and hide parts of the scene that should be behind any pixels of the depth map. This method doesn’t really need a full 3D reconstruction, since it just uses the depth image. This makes it faster, but it has a few problems:

1. The sensor can only detect objects up to 4 meters away. Anything further won’t be occluded.
2. The depth map has holes and isn’t perfect.
3. The resolution of the depth map is much lower than the camera’s, which means scaling and aligning the two images will create pixelated, jagged edges for occlusion.

The video below is an example of depth-map-based occlusion. Notice the irregularities in the occlusion mask as the red cube is moved around.

Method 2: Reconstruct and use the 3D point cloud model.

Since the point cloud is a geometrically accurate map of the real world, we can use it to create an occlusion mask. Note that a point cloud itself isn’t sufficient for occlusion, but point clouds can be processed to create meshes that essentially fit a surface onto the point map (like a blanket covering your 3D point cloud). Meshes are much less computationally intensive than point clouds and are the go-to mechanism for calculations like detecting collisions in 3D games.

Environment mesh created with a Microsoft Hololens.

This mesh can now be used to create the transparent mask we need to occlude virtual elements in our scene. Well, that sounds like we have a good enough solution for occlusion! So what’s the problem?

The problem with AR devices today

The 3 AR devices that I think have the most impressive tracking and mapping capabilities today are Google Tango, Microsoft Hololens, and Apple iPhone X.
Here’s how their sensors stack up against each other.

Google Tango (discontinued by Google)
Depth Sensor — IR time-of-flight
Range — 4m

Microsoft Hololens
Depth Sensor — IR time-of-flight
Range — 4m

Apple iPhone X
Forward-facing depth sensor — IR structured light
Back-facing depth sensor — stereo cameras
Range — 4m

The main problem with all the above systems is that, in terms of depth sensing, they have:

1. Poor range (<4m): The size and power limitations of mobile devices restrict the range of IR and stereo depth sensors.
2. Low resolution: Smaller objects in the scene are not discernible in the point cloud, and it’s really hard to achieve crisp and reliable occlusion surfaces.
3. Slow mesh reconstruction: Current methods of generating meshes from point clouds are too slow for real-time occlusion on any tablet or headset device.

So how does a developer today hack together a reasonable solution to get around these issues?

How can you hack occlusion today?

Perfect occlusion is an elusive target, but we can get close to it in some situations, especially when we can relax the real-time constraint. If the application allows pre-mapping the environment, it’s possible to use a pre-built mesh as an occlusion mask for the larger prominent objects in the scene, provided they don’t move. This means that you’re not limited to the 4m range of the depth sensor, at least for occlusion behind static objects. Moving objects are still a problem, and the only solution right now is to use the depth-map masking method for close-range moving objects like your hands.

A sample 3D environment mesh built by pre-mapping an indoor space.

Now, it’s clear from the example mesh above that a big problem with pre-built meshes is that although they’re lighter than point clouds, they can cause more than a ten-fold increase in the complexity of your 3D content. The way to simplify a 3D mesh is to approximate its structure with
simpler objects like walls and blocks that envelop complex structures. At Placenote, we’ve built guided tours of large museums in AR, and the way we hacked occlusion was to manually draw planes to cover specific walls in the space that might get in the way of our virtual content.

A Unity scene that shows how we hacked occlusion at Placenote.

Here’s an example of occlusion with a high-quality, pre-built environment mesh on Google Tango. Of course, this method assumes that either the developer or the user will take the time to map the environment before the AR session. Since this might be a bit overwhelming for the average user, it likely works best in location-based AR experiences where the map can be pre-built by the developer.

In an extreme scenario, you might want to occlude an AR experience at a much larger scale, like rendering a dinosaur walking among buildings in New York City. Perhaps the way to do this is to use known 3D models of buildings from services like Google Maps or Mapbox to create occlusion surfaces at the city scale. Our friends at Sturfee have built a unique way of creating city-scale augmented reality experiences, using satellite imagery to reconstruct large buildings and static structures in 3D. Sheng Huang at Sturfee has written about their platform here.

Of course, this means you need to be able to accurately localize the device in 3D, which is quite challenging at that scale. GPS position is simply not good enough for occlusion, since it’s slow (1Hz) and highly inaccurate (measurement error of 5–20 meters). In fact, centimeter-level position tracking indoors and outdoors is a critical component of occlusion, and through our work with Placenote, we’re working towards a cloud-based visual positioning system that can solve some of these problems.

Occluding buildings is a crazy idea. Or is it?
We already have 3D maps (like Google Maps) that could be used for occlusion.

What does the future look like?

While pre-built meshes are great for AR experiences tied to a single location, occluding moving objects still requires instant depth measurements at a range greater than just 4m. What’s needed to create a realistic AR experience is a sensor that produces a high-resolution depth map with near-infinite range. Improvements in sensing hardware can certainly help squeeze greater resolution and range from IR or stereo sensors, but these improvements will likely hit a ceiling and produce diminishing returns in the near future.

Interestingly, an alternative approach has emerged in 3D sensing research that turns this hardware problem into a software problem, by leveraging deep learning to improve the speed and quality of 3D reconstruction. Neural networks might be the key to solving occlusion in the future. This method uses neural networks that can pick out visual cues in the scene to estimate 3D structure, much like the way we as humans estimate distance (i.e. by using our general knowledge of the sizes of things in the real world).
The networks are trained on a large dataset of images and are capable of segmenting out objects in a scene and then recognizing them to estimate depth. That means that if we can design the neural networks and train them on a good enough dataset, we might be able to bypass a lot of the limitations in resolution and range present in current depth sensing technologies, with no added hardware costs.

Neural network scene segmentation. The image above is from a paper that explores methods to segment and label scenes using neural networks, in combination with depth sensors, to improve the quality of generated maps. You can find the full text of the paper here.

In Summary

Occlusion is, by far, one of the biggest pieces of the AR puzzle because it makes the biggest leap towards realism for AR experiences.

Depth sensors today are too slow, and have too limited a range and too low a resolution, for real-time occlusion.

You can get around these limitations by building AR apps in areas with pre-built environment meshes. Try Placenote SDK to build location-based AR experiences.

The key to solving the range and speed limitations of depth sensors in the future might be deep learning, and this approach is already showing promising results.

If you’re a new AR developer looking to build compelling AR experiences, don’t let occlusion stop you. Remember Pokemon Go? Poor occlusion in Pokemon Go resulted in some hilarious AR screenshots that spread all over the internet and helped with the meteoric rise of the game. So have fun with it!

If you want to build amazing AR experiences on iOS or Unity, partner with us, or join our team, let’s connect! Just fill out the form below.

Who are we? We’re building an SDK for persistent, shared augmented reality experiences. We call it Placenote SDK.

Special thanks to Sheng Huang, Dominikus Baur, David Smooke and Peter Feld for your help with reviewing this article!

Why is Occlusion in Augmented Reality So Hard?
was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
CME has filed a patent to work around the difficulty in gaining a consensus to modify Blockchain rules. #NEWS
