Below is a transcript from our Slack channel, where we asked Ryan Hickman a few questions about blockchain.

Me: Can you tell us a little bit about the history of crypto tech, where it lies in its current form, and where it's going?

Ryan: That question is a rabbit hole. However, when you begin to peel back the layers of the onion, you realize that the essence of crypto/blockchain tech is all about collaboration across a community with game theory at its core. Without miners there is no Bitcoin; without nodes there is no Ethereum. Therein lies the greatest strength: various people across the ecosystem finding ways to innovate on top of core ideas to forge new concepts, where some find success and others fail. You would be shocked if you read much of the source code from coin to coin to coin; the underlying technology is very much the same. The birth of Bitcoin comes from the failure of the likes of b-money and Bit Gold. Adjustments in block size to remedy speed bring to life forks like _Litecoin_ and pure forks such as _Bitcoin Cash_. While some would challenge these ideals and suggest you could implement many of these use cases without blockchain (arguably true), there are killer cases that these technologies enable. As devs and other innovators continue to find ways to identify those cases, surrounding markets are going to speculate toward the future, supporting such innovations (game theory again) in the form of markets. It's funny, because you could write an entire book on that subject; it's very hard to summarize in short form.

Me: Can you tell us about the current problems with blockchain tech?

Ryan: Purely as tech: speed, scale, and fragmentation. Blockchain as cryptocurrency: liquidity, volatility, and acceptance. Last year we developed a transactional protocol for artificial intelligence on Ethereum. When it was time to scale it, it was a disaster.
We needed transaction speeds between machines to happen in milliseconds; Ethereum was bottlenecking in minutes.

Me: In terms of scalability and building on top of a platform like EOS, ETH, NEO, or others, which one is better? ADA is doing well and focusing very much on the tech side.

Ryan: Lisk/Ark are my preferred: lightweight, Node.js based (more suitable for web applications), dPoS (delegated proof of stake), and fixed time intervals.

Me: Can you tell us what kind of innovation you are seeing in blockchain tech?

Ryan: Identity has some of the coolest cases I've seen; they have some of the best chances. A.I., of course, as that is near and dear to my heart. Many of the projects are money grabs and simply don't make sense to "blockchain-ify".

Me: Can you tell us what it will look like when A.I. meets blockchain? What kinds of problems will it solve?

Ryan: The biggest problem it will solve is overcoming bias. (Shameless plug → )

Me: How do you see miners in the future blockchain industry?

Ryan: You have to rethink miners. Hardware mining as we know it will become a thing of the past due to speed limits. The future of hardware mining will be distributed/pooled infrastructure, such as GPUs used to train ML models or to process predictions. Miners will become people who provide votes, stake-based audits, and contributions to the game theory mechanics in newer yet very applicable ways. Bottom line: yes, they are critical; we just need to elevate our thinking around what the term `miner` really means.

Me: Just one more thing, for our investor members: if you had to choose one coin for the future, which would it be?

Ryan: I don't think the coin of the future has been developed yet. I'm a firm believer in `the code never lies`, which it doesn't. When blockchain tech can perform at the scale and speed of the existing protocols of the web, then I will hedge.

We thank Ryan for giving us his time and sharing his views.
He is also a member of our community.

Ryan Hickman is the founder of Epic.AI, passionately focused on building and investing in artificial intelligence and blockchain. His latest project is CoinDealer, "The World's Most Secure Way to Buy, Sell and Store Digital Currency" (web, iOS, and Android app; services: buy digital currency, secure storage).

If you are an investor, trader, developer, or crypto enthusiast, come join us on our Slack community (here) and our crypto forum CoinMonks, and also check out CoinCodeCap, our website that ranks cryptocurrencies according to their development progress.

If you liked this post, don't forget to share it with your friends and colleagues and leave your comment below.

Past, Present & Future of Blockchain: Interview with EpicAi Founder Ryan Hickman was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
SaaS Growth: No End in Sight

Last year, 2017, saw a slight slowdown in the pace of new SaaS subscriptions, but an increase in the amount being spent on SaaS. So while it may look like we are starting to see some consolidation around tools, it's also true that organizations are not shy about shelling out for software as a service that will help them grow and manage their businesses.

SaaS Trends: Why They Matter

It's important to pay close attention to how software is being used in the modern business, because it's the backbone of our economy today. Understanding SaaS trends can help businesses make good decisions about where to invest their own money. Additionally, digging deep on what's working and what's not can help uncover opportunities to improve.

As we've written about before, SaaS security has a long way to go. Additionally, there are often major opportunities for cost savings when teams pay attention to where they are investing their dollars, to avoid feature overlap and optimize resources. Productivity can be either dramatically improved or hampered by technology, depending on how workflows are structured, so periodically evaluating your SaaS usage (and taking a look at the wider industry) is a good way to understand whether you are moving in the right direction.

Overall, it's a good idea to pay attention to where SaaS momentum lies before you make any big decisions about where to invest, where to cut back, and how to better integrate and structure the technology your team already uses. Our hope is that this guide will help you do just that.

SaaS Usage By Organizational Department

The last ten years have resulted in a penetration of SaaS across all departments. While engineering was an unsurprising early frontrunner, and still a spending leader, Business Ops has taken the crown when it comes to number of subscriptions in recent years.
In 2018, it's clear that SaaS is used widely across the entire organization. On average, we are seeing 18 SaaS subscriptions and about $136,000 in total spend at each company. However, it's worth noting that as of Q3 2017, the run rate was closer to 20 subscriptions and $186,000 in annualized spending, so we expect these numbers to grow in 2018, just as they have in previous years.

Let's take a look at a departmental breakdown of SaaS usage and trends.

💻 Engineering

Engineering teams are naturally powered by technology, and since they are of course the pioneers of SaaS, it makes sense that they were the first to jump in headlong. In 2009, engineering far outpaced other departments, with an average of 20 SaaS subscriptions per organization. Today, that growth continues unabated, with 2017 seeing about 1,108 subscriptions per org. Of course, there's some overlap between engineering and DevOps, but here are the top three favorite products among pure-engineering teams:

AWS: No one who works in the cloud will be surprised to find AWS at the top of this list, as the top cloud infrastructure provider out there.

Google Cloud: Google Cloud hasn't captured quite the market share that AWS has, but it is also a frontrunner in the cloud infrastructure game.

New Relic: A "digital intelligence platform," New Relic offers deep and wide insight into how your entire stack is performing, and has long been a favorite among devs.

💼 Business Ops

Just behind engineering, business ops stands out as a heavy user of SaaS products. In 2010, they already had an average of four subscriptions, and today that number has skyrocketed to 1,370 per organization. It makes a lot of sense: business ops pros are charged with keeping the whole organization humming along smoothly, and investing in tools that streamline, simplify, and facilitate communication is at the heart of what they do.
Here are the top three business ops SaaS applications:

GSuite: It's not shocking to find the suite of apps that runs so many of today's businesses here. GSuite is a favorite of ours for its ease of use, comprehensiveness, and security features.

NetSuite: Need to manage your business's financials, operations, and customer relations all in one place? NetSuite has your back.

Slack: Perhaps the hottest communication tool on the market today, Slack is popular for its ability to replace email and bring teams into closer alignment. Plus, it's fun to use.

📣 Marketing

Marketing has gone from an average of one SaaS subscription per organization to a whopping 647 today. As marketing becomes an increasingly technical and KPI-focused discipline, it's no surprise that technology has been relied upon to support growth. This ranges from the big marketing automation platforms to a plethora of tailored solutions for everything from landing pages to design. Some of the top SaaS products for marketers include:

Hubspot: No surprise this all-in-one marketing automation tool rises to the top, given that it is able to scale up or down for everyone from SMBs to the enterprise.

Marketo: Another marketing automation SaaS winner, Marketo is a popular tool among enterprises.

Clearbit: Clearbit enables marketers to better understand their prospects, empowering the marketing department with data-driven insights.

😀 Customer Support

If your customers are tech-savvy, you certainly better be as well. It took a little while for customer support teams to start adopting SaaS, with 2012 seeing an average of just five apps per organization. They are still one of the lowest-subscription-rate departments, with just 168 on average in 2017, but it's clear that the SaaS tools they are using are making a big difference. Customers today expect excellent support, and doing this at scale quite simply requires good technology.
Some favorites for customer support include:

Zendesk: A popular all-in-one customer service platform that offers everything from live chat to help desk to ticketing and more.

Help Scout: Help desk software that aims to make customer service interactions more human (bonus points for being HIPAA compliant!).

Front: A shared inbox for teams, Front makes it easier to interact with customers without dropping the ball along the way.

💵 Finance

Time to get your financial house in order? In 2018, you'd be hard-pressed to do this successfully without the support of some excellent SaaS finance tools. While security and compliance concerns (and perhaps a bit of an old-school attitude) led to slow growth in this department, which averaged just one subscription as late as 2011, it has clearly taken off in recent years. Today, the average finance department has 216 SaaS subscriptions under its belt. The best apps for their money?

Recurly: As its name hints, Recurly offers a smart platform for companies that bill on a subscription basis.

Zuora: Similar to Recurly, Zuora offers a subscription management platform that is well liked by businesses that need to manage revenue. This service not only helps businesses get paid, but also helps them manage payments to vendors and contractors.

👩🏾‍💻 Product

Building a SaaS product? You're gonna want some SaaS products to help you manage the process… In fact, even companies that aren't building technology themselves can benefit from some of the powerful product development and management tools on the market today. Below are three popular SaaS tools for product teams:

FullStory: Want to understand how your customers are interacting with your platform so you can improve the experience?
FullStory's the tool for you.

Typeform: Most businesses need at least some forms on their websites, and Typeform offers a well-designed, streamlined way to build them.

Zeplin: Need to shorten the path between design and development? Zeplin makes it easy to collaborate and ensure the final product looks exactly how it should.

👨🏼‍🔧 DevOps

DevOps teams are natural early adopters, since they're steeped in technology. Behind only engineering and business ops, they have grown from an average of two subscriptions per team in 2009 to 767 today. Since DevOps teams are tasked with both development and operations, it makes sense that their SaaS investments run the gamut, with a focus on simplifying infrastructure to support continuous application delivery. Here are some stand-out SaaS tools that DevOps teams rely on:

Heroku: This cloud platform enables companies to build, deliver, monitor, and scale apps without worrying about infrastructure requirements.

Joyent: Billing itself as "next-generation cloud," Joyent offers computing, storage, and analytics for any type of infrastructure, including containers.

Datadog: Monitoring and analytics for your entire technology stack, offering visibility and insights.

💙 HR

Human resources teams have been the slowest to start adopting SaaS, with just two subscriptions on average per team as late as 2012. Today, it's closer to 240. That growth comes from increasingly well-operationalized HR processes (like onboarding and offboarding) that require support in the form of technology to keep them running smoothly.
Some big hits with HR professionals include:

BambooHR: A powerful platform that includes everything from applicant tracking to self-onboarding to HR reporting.

Gusto: HR, payroll, and benefits all in one, plus access to HR experts on demand, make this a popular choice for SMBs.

Zenefits: This tool has something for everyone, from HR pros to customers to benefits brokers, offering a streamlined way to handle payroll, benefits, compliance, and more.

🤝 Sales

Last but not least, sales teams were a bit late to the game, with just four subscriptions on average in 2011. By last year, they were up to 331. That's no surprise, given that sales teams are continually measured and KPI'd on how fast, efficiently, and effectively they are able to sell. There's always room for improvement and an opportunity to trim the fat, and technology can be a huge enabler here. The three most popular SaaS tools for sales are:

Salesforce: A lumbering SaaS giant, Salesforce offers pretty much anything a sales team could ask for, from opportunity tracking to proposals to analytics and beyond.

Outreach: This sales engagement platform helps teams fill the pipeline, book meetings, and find out which sales tactics actually work.

InsightSquared: Time to show the board how things are going? InsightSquared turns data into clear, actionable revenue reports.

With Great Power Comes Great Responsibility

The SaaS explosion is a good thing. SaaS is easier to deploy and faster to adopt, and it doesn't require IT or developers to get started; even less tech-savvy team members can deploy many SaaS products. People prefer it because apps can solve a wide range of business challenges. But when an organization invests in a lot of apps, it can lead to chaos pretty quickly.

You have to consider how certain tools get along (or don't) with other tools. You need to look for ways to streamline tech-heavy workflows.
You need to ensure you are meeting security and compliance requirements and responsibilities, and treating customers' data with the level of respect and sensitivity it deserves. You also need to ensure you are optimizing both operations and spending around SaaS tools, so that proliferation doesn't equal waste.

Investing in software as a service responsibly means addressing these challenges. That's the only way to ensure your organization sees a net benefit from all that amazing technology.

Blissfully: The Antidote to SaaS Chaos

Sound hard? Don't worry. Blissfully helps hundreds of companies effortlessly manage their SaaS vendors, across thousands of subscriptions and millions in monthly spend. Once installed, Blissfully displays both historical and up-to-the-minute representations of SaaS usage, spend, and data management.

Blissfully offers:

Automatic SaaS tool inventorying (including free and unsanctioned apps)

Spend tracking and optimization

Security monitoring to reduce risk

Built-in IT workflow streamlining and automation

Try it free today. You can view the full report and download a PDF at the link in the original post.

2018 Annual SMB SaaS Trends Report was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
Default choices, irrational human beings, and lessons for building products.

I've been reading a lot of behavioral economics books lately. I came across Richard Thaler's book 'Nudge: Improving Decisions About Health, Wealth and Happiness' when I was reading Daniel Kahneman's Thinking, Fast and Slow. Nudge was a natural progression from Kahneman's book.

The art of the nudge is especially relevant to product managers, who depend on soft influence to affect decisions. In a team or organization, different stakeholders usually have different aims: the designer wants to optimize for users, the engineer wants to solve for elegance of the solution, the business person wants to optimize for revenue. Building consensus requires a lot of subtle nudges, whether it's calling a meeting to discuss a design decision, an email listing the minutes of a meeting, or presenting user feedback to emphasize the urgency of a problem to the engineer. Thus I was excited to read Thaler's work.

In the book, Thaler discusses an idea called 'libertarian paternalism': gently, non-coercively pushing people toward doing something that they really want to do. For example, a company might, by default, enroll new employees in a 401(k) plan and put a certain salary percentage into that plan. The employees can opt out or change their contribution amount at any time, but by enrolling everyone by default, the company does an end run around its workers' natural procrastination tendencies, without forcing them into anything.

The power of defaults

Experience shows that most users don't bother to change the default settings, whether it's your WhatsApp background or your phone ringtone. Thus designing good default settings is crucial to improving the overall user experience.

Another good example of the power of default settings is a SIP (systematic investment plan).
Surely we'd be richer if we timed the market, studied stocks, and invested in the right ones. However, we're usually too busy or too lazy to spend any meaningful time researching stocks. A SIP ensures we diligently invest a percentage of our salary every month, avoiding extra expenditure and ensuring a compounding return on our investments.

Consider MakeMyTrip, which adds insurance to your ticket by default.

How does one choose the 'right' defaults? Does one optimize for the user or the business? In some instances the choice is clear: the right choice for the user and the business are one and the same, like the Google widget on the home screen. However, it isn't always so clear. Consider autoplaying videos on Facebook (why oh why?): while autoplay helps increase video views for Facebook, it consumes precious data for the user. How does one choose between the two?

Private products and companies are answerable to their users and the stock market. Free market competition ensures that companies strive to build products optimal for users. Thus if a default choice is detrimental to users, it gets panned on the Play Store, forcing the developer to make changes. However, in the case of public policy, how does one ensure the default choice is in the interest of the citizen?

Thaler says that the individual should be nudged towards the more rational choice, something a rational human being, or in his language an 'econ', would choose. There are two variants of such a choice: 1. Helping an individual make a choice that is rationally better for himself. 2. Nudging an individual towards a choice that is rationally better for society but not necessarily for the individual (for example, everyone being an organ donor by default).

Consider the example of retirement savings: contributing to your 401(k), or your PF account in the Indian context.
While contributing more to your PF account might be the more 'rational' choice in the long term, it reduces the amount of money available to people who might need it to send their children to school or to buy food.

The second variant is more controversial. No framework is presented on when and how to decide between the 'good for society' and 'good for the individual' cases, or on how to inform the user about the choices being made. We see that in cases of public policy, a nudge can turn into a mandatory requirement; Aadhaar is an example.

In conclusion, Richard Thaler makes an important point: we should pay keen attention to default choices, as they have a big effect on our product experience and well-being. Awareness of Thaler's work helps us understand the innumerable default choices that governments and other products make for us and how they affect us. However, we need to think about frameworks that ensure nudging is used in a constructive way, especially by governments.

If you liked my article, please hit the clap button multiple times.

At Comic Con, 2 years ago! Hello 👋

References: — echoes similar thoughts on the pros and cons of the Nudge approach.

Nudge — the pros and cons of Nobel Prize winner Richard Thaler's work was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
Devs, stop complaining about recruiters. You sound like an asshole.

Every so often, my Twitter stream (and probably yours) will include some annoyed or perversely entertained developer sharing a tale of sorrow and woe. The tragedy? They've been spammed by a recruiter. Horror!

Look. Are many technical recruiters clueless*? Sure. But aren't you glad you have your job and not theirs? Then try gratefulness as a response instead of complaining/showboating to Twitter.

Do you make over $60,000? Yes? Then you're in the top 0.19% richest people in the world. No? Then respond to the recruiter!

Either way, be grateful that while millions upon millions of people are looking for work (and in many parts of the world, actual fucking water), an annoyance that registers on your radar is that from time to time someone sends you an email about a job. Tempted to justify your frustration by pointing out how blatantly irrelevant some recruiter spam can be? Reread that last paragraph.

We are insanely lucky. We find ourselves in the midst of a thriving industry at a point in time when our skills are valuable and demand outweighs supply. That will not always be the case. When the tide turns and you find yourself knocking on doors, brushing up your resume, and sending personalized cover letters to position your background as remotely relevant in the brave new world, you'll remember rolling your eyes at another email from yet another clueless recruiter, and you may think, "What an asshole."

* I love @dhh. I read everything he writes, I watch every talk he posts, and I agree with almost all of it. I'm not exaggerating when I say that I would place him among my top 10 most influential authors. What I have never agreed with is how hard he is on recruiters. Yes, it's comical that recruiters approach him for mid-level Rails positions considering that he, you know, invented it!
But ridiculing an actual human being who wasn't good at their job (when nobody was harmed) strikes me as borderline elitist, with all the bad vibes.

Complaining About Recruiters was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
Yes! There is a kid around Yashu who is trying to mess up the interview. It's not the usual picture, but that shouldn't distract viewers from this latest episode of We Need to CryptoTalk, in which Anuj Bairathi, founder and CEO of HashGains, speaks about India's growing cryptocurrency culture. In this tell-all conversation, Bairathi also discusses his upcoming cloud mining venture. Watch the full video, and subscribe to NewsBTC.

About HashGains: HashGains is a group company of Cyfuture, with 1,500+ professionals, two mega data centers, and over 15 years of experience in the data center industry, serving ten Fortune 500 customers. You can also interact with their team on their social media channels.
At this year's World Economic Forum in Davos, John Wise met up with Cointelegraph to talk about how an idea can be not only valuable but also profitable. John has a new take on economics and wants to see it grow into something more ethical and beneficial to everyone. Subscribe to our channel for even more videos!
Coincheck has met the FSA deadline, handing in a report on the $534 mln NEM hack last month. #ANALYSIS
Why another webpack tutorial? Because almost everything about it is confusing, and I've suffered a lot, just like you. That's why I've decided to write this step-by-step procedure for building a better webpack configuration, with a good understanding of how webpack works. I hope you won't get confused and give up in the middle of configuring webpack again.

WEBPACK! Simply put, it's a module bundler, meaning you can create as many modules as you like (not only JS, but HTML & CSS too!) while developing an app. Webpack handles the responsibility of bundling your application and giving you the simplest customized version of your app (which contains only HTML, CSS & JS), even for customized output settings.

I've realized that it's a habit you have to develop: understand the core concepts of how webpack works, then apply them to different projects/demos. After reading and following this guide, you can easily configure webpack in just 5 minutes for any kind of project. No headaches anymore! At the end of this article, I'm going to give you the simplest cheat sheet to get ready for any project. Follow along with me. This blog post is going to be a big one, so I suggest you grab some coffee and stay with me patiently :)

Or just go to this repo and clone it to use it in your project: webpack-boilerplate

Before starting, let's see what's inside this write-up:

1. Project Setup
1.1. Create project folder and land on the project root (via terminal)
1.2. Initialize a project via npm
1.3. Create project directories
1.4. Populate our project directories
1.5. Install webpack
2. Configuring package.json
3. Land on the webpack config file
3.1. Exporting our config object
3.2. Define entry point(s)
3.3. Define output point
3.4. Define context
4. Setting up webpack-dev-server
4.1. Installing webpack-dev-server
4.2. Adding a command to package.json
4.3. Configuring webpack-dev-server
5. Devtool configuration
6. Loaders and plugins
6.1. Loaders
6.2. Plugins
7. Some loaders and plugins in action
7.1.
clean-webpack-plugin (PLUGIN)
7.2. babel-loader (LOADER)
7.3. html-loader (LOADER) & html-webpack-plugin (PLUGIN)
7.4. css-loader (LOADER), style-loader (LOADER), sass-loader (LOADER), extract-text-webpack-plugin (PLUGIN)
7.5. file-loader (LOADER)

NOTE: I'm using Linux on my PC. If you're using Windows, I recommend looking up the equivalent commands for Windows. Mac users should have the same commands as mine.

========== 1. PROJECT SETUP ==========

Follow along with me to create a basic project. App architecture is important.

1.1. Create project folder and land on the project root (via terminal)

> mkdir project_name && cd project_name

1.2. Initialize a project via npm

> npm init

1.3. Create project directories

> mkdir src dist src/assets src/assets/media src/assets/js src/assets/scss

1.4. Populate our project directories

> touch webpack.config.js .babelrc src/index.html src/app.js src/assets/scss/app.scss

NOTE:
.babelrc is for configuring Babel (which is going to transpile our ES6/ES2015 code to ES5).
webpack.config.js is for configuring webpack.

1.5. Install webpack

Install webpack locally as a dev dependency in your project. (You can install webpack globally if you want.)

> npm i -D webpack

npm i -D webpack is just the shortcut for npm install --save-dev webpack.

Project hierarchy for our setup: (image: project_architecture)

========== 2. CONFIGURING PACKAGE.JSON ==========

Let's get some extra headaches out of our brain. We need to build our project for development and for production. And we don't want to refresh our browser again and again while modifying our code every time. So why don't we put something in place to watch our code?
😉

If webpack is already installed globally on your machine, simply write these commands into your package.json scripts:

"scripts": {
  "build": "webpack",
  "build:prod": "webpack -p",
  "watch": "webpack --watch"
}

I think the better approach is to install an up-to-date webpack locally for every project, which gives you complete freedom in the development process. If you're following the steps and installed webpack locally, then the changes in package.json would be:

"scripts": {
  "build": "./node_modules/.bin/webpack",
  "build:prod": "./node_modules/.bin/webpack -p",
  "watch": "./node_modules/.bin/webpack --watch"
}

You can see the directory used in those script commands below; the webpack executable binary lives in this directory. (image: webpack_destination)

NOTE: If you're interested in the details of webpack-cli, check its documentation. You can run those commands right now, but you'll get some errors, since ignorant webpack still doesn't know where to start from, where to finish, and what to do 😐

Let's make some configurations in webpack.config.js to make webpack a li'l bit educated 😃

====================== 3. LAND ON THE WEBPACK CONFIG FILE ======================

First things first: webpack simply needs 4 core things to execute properly: 1. Entry 2. Output 3. Loaders 4. Plugins. We're going to define entry and output for webpack in this section and watch our first bundled output.

3.1. Exporting our config object

Since we're using Node.js and webpack uses the modular pattern, we first need to export the configuration object from our webpack.config.js:

module.exports = {
  // configurations here
}

Or, this approach:

const config = {
  // configurations here
};
module.exports = config;

3.2. Define entry point(s)

Single entry point: the app starts executing from this point if your app is an SPA (single-page application).
We'll define a path relative to our project root:

const config = {
  entry: './src/app.js',
};

Multiple entry points: if your app has multiple entry points (like a multi-page application), then you have to define your entry points inside an entry object, each with its own name. Multiple entry points are called chunks, and the properties (individual entry points) of this entry object are called entryChunkNames. So, let's create it:

const config = {
  entry: {
    app: './src/app.js',
    vendors: './src/vendors.js'
  }
}

Look carefully: our entry property is not a string anymore; it's now a plain JavaScript object, with the different entries being its properties. This is very helpful when we want to separate our app entry and a vendor entry (like jQuery/lodash) into different bundles.

3.3. Define output point

Webpack needs to know where to write the compiled files on disk. That's why we need to define an output point for webpack. NOTE: while there can be multiple entry points, only ONE output configuration is specified.

We define our output point as an object. This object must include at least these two things: 1. filename (for our output files) 2. path (an absolute path to the preferred output directory).

We could write the output path by hand, but that wouldn't be a wise choice, since our project root's name and location may change later. Besides, you or a collaborator may clone the project onto a different machine, in which case a hand-written absolute path won't work either. To solve this, we use the Node.js path module, which gives us the absolute path of our project root in a more convenient way. To use the path module, we need to import it in our config file and then use it in our output object:

const path = require('path');

const config = {
  output: {
    filename: 'bundle.js',
    // Output path using the Node.js path module
    path: path.resolve(__dirname, 'dist')
  }
};

You can use path.join or path.resolve; both work about the same here.
To keep this article from getting bigger and distracting, I'm going to skip how node.js path.join and path.resolve work internally, but here are the resources:

path.resolve resource: here
path.join resource: here

NOTE: when creating multiple bundles for multiple entry points, you should use one of the following substitutions to give each bundle a unique name.

Using the entry name:

filename: "[name].bundle.js"

Using hashes based on each chunk's content:

filename: "[chunkhash].bundle.js"

For more naming options, see the docs. Also note that you may give a relative path to your output file in output.filename.

3.4. Define Context

Context is the base directory, an absolute path, for resolving entry points and loaders from the configuration. By default the current directory is used, but it's recommended to pass a value in your configuration. This makes your configuration independent from the CWD (current working directory).

const config = {
  context: path.resolve(__dirname, "src")
};

With our basic setup done so far, fire this command into your terminal to watch your first bundle:

> npm run build

For a production-ready bundle:

> npm run build:prod

For developing with watch mode ON:

> npm run watch

Voila! 😎 We've just landed on the ground of awesomeness! Now, after finishing two more setup steps, we're going to add some loaders and plugins to build our actual configuration object, using some of the automation that webpack provides.

=====================
4. SETTING UP WEBPACK-DEV-SERVER
=====================

Hey! Wasn't that an easy setup to get our first bundle? Now we're going to add something amazing that's going to boost up our development process and save us a lot of time. We're going to get a real server! Yeah, webpack provides a built-in server for development purposes, so we can see how our application will behave when it's eventually deployed to a real server. But we need to install this server setup first.

NOTE: This should be used for development only.

4.1.
Installing webpack-dev-server

Install webpack-dev-server via terminal:

> npm i -D webpack-dev-server

4.2. Adding a command to package.json

Add this command to scripts in package.json:

"dev": "./node_modules/.bin/webpack-dev-server"

4.3. Configuring webpack-dev-server

There are many configuration options for webpack-dev-server; we're going to look at some important ones. In your config object, create a new property named devServer (the syntax is important):

devServer: {}

This object is ready to take configuration options such as:

#1 devServer.contentBase
Tells the server where to serve content from. This is only necessary if you want to serve static files. NOTE: it is recommended to use an absolute path. It is also possible to serve content from multiple directories. For our project architecture, we want all our static images to be stored in the dist/assets/media directory:

contentBase: path.resolve(__dirname, "dist/assets/media")

#2 devServer.stats
This option lets you precisely control what bundle information is displayed. To show only errors in your bundle:

stats: 'errors-only'

(See the docs for other stats options.)

#3 devServer.open
If you want dev-server to open the app in your browser the first time, and just refresh afterwards as we change our code:

open: true

#4 devServer.port
The port number on which you want your application served by webpack-dev-server:

port: 12000

#5 devServer.compress
Enable gzip compression for everything served:

compress: true

Finally, our devServer configuration looks like:

devServer: {
  contentBase: path.resolve(__dirname, "./dist/assets/media"),
  compress: true,
  port: 12000,
  stats: 'errors-only',
  open: true
}

=================
5. DEVTOOL CONFIGURATION
=================

This option controls if and how source maps are generated. With this feature, we know exactly where to look in order to fix/debug issues in our application.
Very, very useful for development purposes, but it should NOT be used in production.

devtool: 'inline-source-map'

There are many more options for devtool here.

We've now set up most of what's required to configure webpack for the first time. Here's the updated snippet of what we've done so far, and what our package.json looks like. It's worth mentioning that at the time of writing this article I was on webpack 3.6.0 and webpack-dev-server 2.9.1. Your version numbers may differ from mine.

===============
6. LOADERS AND PLUGINS
===============

We've come far. Now the fun part begins: we're actually going to explore what webpack can do by itself through some configuration.

6.1. Loaders

Webpack enables the use of loaders to pre-process files. This allows you to bundle any static resource way beyond JavaScript. Since webpack doesn't yet know what to do with these loaders, the webpack config object uses the module property to declare which loaders to run and how to execute them.

The module property of the config object is itself an object. It works with some extra options mentioned below:

# module.noParse
Prevents webpack from parsing any files matching the given RegExp. Ignored files should not have calls to import, require, define or any other importing mechanism. This can boost build performance when ignoring large libraries.

module: {
  noParse: /jquery|lodash/
}

# module.rules
Takes every loader as a set of rules inside an array, where every element of that array is an object containing an individual loader and its respective configuration.

From webpack's documentation:

A Rule can be separated into three parts — Conditions, Results and Nested Rules.

1. Conditions: There are two input values for the conditions:
   a. The resource: an absolute path to the file requested.
   b. The issuer: the location of the import.
In a Rule, the properties test, include, exclude and resource are matched against the resource, and the property issuer is matched against the issuer.

2.
Results: Rule results are used only when the Rule condition matches. There are two output values of a rule:
   a. Applied loaders: an array of loaders applied to the resource.
   b. Parser options: an options object which should be used to create the parser for this module.

3. Nested Rules: Nested rules can be specified under the properties rules and oneOf. These rules are evaluated when the Rule condition matches.

Okay! Let's simplify, since the webpack doc always confuses us 😖😓

A loader needs some additional information to work correctly and efficiently in a module. We provide it in module.rules with the configuration parameters stated below:

test: (required) A loader needs to know which file extension it's going to work with. We give the name with the help of a RegExp:

test: /\.js$/

include: (optional) A loader needs a directory to locate where its working files are stored:

include: /src/

exclude: (optional) We can avoid a lot of unwanted processing (for example, we don't want to parse modules inside the node_modules directory) and save a lot of memory and execution time:

exclude: /node_modules/

use: (required) A rule must have a loader property being a string. Mention the loaders you want to use for that particular task. Loaders can be chained by passing multiple loaders, which will be applied from right to left (last to first configured). It can have an options property being a string or object; this value is passed to the loader, which should interpret it as loader options. For compatibility, a query property is also possible, which is an alias for the options property. Use the options property instead.

use: {
  loader: "babel-loader",
  options: {
    presets: ['env']
  }
}

Detailed configuration setup for module.rules: here.

6.2. Plugins

The plugins option is used to customize the webpack build process in a variety of ways.
webpack comes with a variety of built-in plugins available under webpack.[plugin-name]. Webpack has a plugin configuration setup in its config object via the plugins property.

NOTE: Every plugin needs an instance to be created in order to be used in the config object.

We're going to see them in action now!

=========================
7. SOME LOADERS AND PLUGINS IN ACTION
=========================

7.1. clean-webpack-plugin (PLUGIN)

Every time we want to see our production-ready dist folder, we need to delete the previous one. Such a pain! clean-webpack-plugin removes/cleans your build folder(s) before building. It's very easy to set up.

Install via npm:

> npm i -D clean-webpack-plugin

Import it into your webpack.config.js file:

const CleanWebpackPlugin = require('clean-webpack-plugin');

Now, we're going to use a plugin for the first time. In our plugins property:

plugins: [
  new CleanWebpackPlugin(['dist'])
]

Here, in the instance of clean-webpack-plugin, we mention dist as an array element. Webpack now knows that we want to clean/remove the dist folder every time before building our bundle. This instance can take multiple directories/paths as array elements and multiple options as an object.

Syntax for clean-webpack-plugin usage:

plugins: [
  new CleanWebpackPlugin(paths [, {options}])
]

You should watch the process of removing and recreating your dist folder live in your directory and in your IDE too.

Reference doc: GitHub Doc

Until now, our webpack.config.js looks like this.

7.2. babel-loader (LOADER)

We all want to write some ES2015/ES6 code, right? But browsers have not yet fully adopted ES6 syntax, so we first need to transpile our ES6 code to ES5 before we can use it in our production bundle. Babel takes that responsibility for us.
We just need to include babel-loader in our configuration through some easy steps.

Install babel-loader:

> npm i -D babel-loader babel-core

Create a .babelrc file in the project root to enable some babel presets. (We've already done this in the project setup section; if you've been following along, you'll find a .babelrc file in your project root already.)

Install babel-preset-env to use for environment-dependent compilation:

> npm i -D babel-preset-env

In order to enable the preset you have to define it in your .babelrc file, like this:

{
  "presets": ["env"]
}

NOTE: If we add a top-level "babel" key to our package.json instead, then the .babelrc file is not needed anymore:

"babel": {
  "presets": ["env"]
}

Include the rule in the config file's module:

module: {
  rules: [
    {
      test: /\.js$/,
      include: /src/,
      exclude: /node_modules/,
      use: {
        loader: "babel-loader",
        options: {
          presets: ['env']
        }
      }
    }
  ]
}

From our loader section we know that:

test: lets the loader know which file format it's going to work on
include: lets the loader know which directory it should work in
exclude: lets the loader know which directory it should avoid while parsing
use: lets the loader know which specific loader it's using via use.loader, and what its configuration options are via use.options

7.3. html-loader (LOADER) & html-webpack-plugin (PLUGIN)

Since we want to edit our index.html in the src directory and see the changes in the output dist folder, we'd otherwise need to create and update index.html in dist every time webpack compiles our project. Well, we should remove that painful job!

We need to use a loader and a plugin together to solve this problem, because:

html-loader: exports HTML as a string.
HTML is minimized when the compiler demands it.

html-webpack-plugin: simplifies the creation of HTML files to serve your webpack bundles.

Install the dependencies:

> npm i -D html-loader html-webpack-plugin

Configuring html-loader:

{
  test: /\.html$/,
  use: ['html-loader']
}

Importing html-webpack-plugin:

const HtmlWebpackPlugin = require('html-webpack-plugin');

Using our plugin:

plugins: [
  new HtmlWebpackPlugin({
    template: 'index.html'
  })
]

Now we've set everything up. Write something in your src/index.html and then run some build commands:

npm run dev: You'll see that our app now works and you can see the html element in the browser. The bundled js file has been injected into the html just before the end of the body tag.

npm run build:prod: Watch the process of building the output index.html and the changes applied in dist/index.html.

Resources:
html-loader: webpack-doc, github-doc
html-webpack-plugin: webpack-doc, github-doc

7.4. css-loader (LOADER), style-loader (LOADER), sass-loader (LOADER), extract-text-webpack-plugin (PLUGIN)

Using CSS and SASS with webpack may look like some extra headache with some extra steps. Webpack compiles css and pushes the code into the bundled js. But we need to extract it from the bundle, create a matching .css file, push it into our dist/index.html and add the css to the DOM. A lot of work, right? Not really...

I've combined 3 loaders and 1 plugin to see them work together for our required output:

style-loader: adds CSS to the DOM by injecting a <style> tag
css-loader: interprets @import and url() like import/require() and resolves them (stay on the article, don't go to twitter's "@ import". medium's mistake, not mine 😒)
sass-loader: loads a SASS/SCSS file and compiles it to CSS
node-sass: provides bindings for Node.js to LibSass. The sass-loader requires node-sass and webpack as peerDependencies.
Thus you are able to control the versions accurately.

extract-text-webpack-plugin: extracts text from a bundle, or bundles, into a separate file.

Now install the dependencies:

> npm i -D sass-loader node-sass css-loader style-loader extract-text-webpack-plugin

We need to import our app.scss into our app.js to let webpack know about the dependency. So in our app.js we're going to write:

import './assets/scss/app.scss';

Additionally, to check that our sass modules work fine, add one more .scss file in our src/assets/scss directory via terminal:

> touch src/assets/scss/_colors.scss

Import the newly created _colors.scss file into app.scss and add some styling:

@import '_colors';

body {
  background: $bgcolor;
}

And define $bgcolor in the _colors.scss file:

$bgcolor: #e2e2e2;

Import extract-text-webpack-plugin into the config file:

const ExtractTextPlugin = require('extract-text-webpack-plugin');

It's a good and safe practice to import webpack itself into our webpack.config.js in order to use webpack's built-in plugins in our project:

const webpack = require('webpack');

Now our workflow splits into two ways for two types of requirements:
(1) We want just a single .css file in our output
(2) We want more than one .css file as output

For a single stylesheet (we're working on this):

First, we need to create an instance of ExtractTextPlugin in which we define our output filename:

const extractPlugin = new ExtractTextPlugin({
  filename: './assets/css/app.css'
});

Secondly, while configuring our css/sass loaders we need to use the previously created instance with its extract() method (i.e.
extractPlugin.extract()), passing the required loaders as an argument in the form of an object.

So our configuration of these loaders is going to be:

{
  test: /\.scss$/,
  include: [path.resolve(__dirname, 'src', 'assets', 'scss')],
  use: extractPlugin.extract({
    use: ['css-loader', 'sass-loader'],
    fallback: 'style-loader'
  })
}

And now add the instance of ExtractTextPlugin (which is extractPlugin) to the plugins section:

plugins: [
  extractPlugin
]

NOTE: If you are not using html-loader, include an index.html file in the dist folder and link the output stylesheet and js urls in that file, like:

<link rel="stylesheet" href="/./assets/app.css">
<script src="/./assets/main.bundle.js"></script>

For multiple stylesheets (just to give you an example):

If you followed the previous direction for creating a single extracted stylesheet, there's nothing really new here for creating multiple stylesheets. Just create as many instances of ExtractTextPlugin as the number of stylesheets you want.
We're creating two instances here, one for plain css and one for compiled sass:

const extractCSS = new ExtractTextPlugin('./assets/css/[name]-one.css');
const extractSASS = new ExtractTextPlugin('./assets/css/[name]-two.css');

NOTE: ExtractTextPlugin generates a file per entry, so you must use [name], [id] or [contenthash] when using multiple entries.

Now our loaders configuration looks like:

{
  test: /\.css$/,
  use: extractCSS.extract([
    'css-loader',
    'style-loader'
  ])
},
{
  test: /\.scss$/,
  use: extractSASS.extract({
    use: ['css-loader', 'sass-loader'],
    fallback: 'style-loader'
  })
}

Now add the instances to the plugins:

plugins: [
  extractCSS,
  extractSASS
]

Now run npm run dev to see it in action in your browser. If you're working with a single stylesheet like mine, you should see that the background changes to #e2e2e2 😂 If you view the source of the output app, you can see the stylesheet injected into the head of our html and the app bundle js file injected just before the end of the body tag.

Now what? Well, we need to debug things from our browser, right? We just need a source map to see the actual line number from the source code rather than being lost in the minified stylesheets! In the options property for a loader, you can switch ON the source map with sourceMap: true:

{
  test: /\.scss$/,
  include: [path.resolve(__dirname, 'src', 'assets', 'scss')],
  use: extractPlugin.extract({
    use: [
      {
        loader: 'css-loader',
        options: { sourceMap: true }
      },
      {
        loader: 'sass-loader',
        options: { sourceMap: true }
      }
    ],
    fallback: 'style-loader'
  })
}

Change your styling and inspect it in your browser with inspect element. You'll find the actual line number showing where you made your changes!

Here's my configuration.

Resources:
extract-text-webpack-plugin: webpack, github
css-loader: webpack, github
sass-loader: webpack, github
style-loader: webpack, github

7.5. file-loader (LOADER)

Well, we've set up configuration for everything except static files like images and fonts.
Now, we're going to set up static files with the very useful loader file-loader.

Install file-loader:

> npm i -D file-loader

Configuring file-loader:

{
  test: /\.(jpg|png|gif|svg)$/,
  use: [
    {
      loader: 'file-loader',
      options: {
        name: '[name].[ext]',
        outputPath: './assets/media/',
        publicPath: './assets/media/'
      }
    }
  ]
}

NOTE: BE VERY, VERY CAREFUL about outputPath and publicPath in the file-loader configuration. You need to add a '/' at the end of outputPath so that it is treated as a directory rather than concatenated with the file name. VERY CAREFUL! And we don't need to add publicPath (I guess), since we've already defined it in our output path.

// file-loader (for images)
{
  test: /\.(jpg|png|gif|svg)$/,
  use: [
    {
      loader: 'file-loader',
      options: {
        name: '[name].[ext]',
        outputPath: './assets/media/'
      }
    }
  ]
},
// file-loader (for fonts)
{
  test: /\.(woff|woff2|eot|ttf|otf)$/,
  use: ['file-loader']
}

With the loader configured and fonts in place, you can import them via an @font-face declaration. The local url(...) directive will be picked up by webpack just as it was with the image.

Adding font-face to the stylesheet:

@font-face {
  font-family: 'MyFont';
  src: url('./my-font.woff2') format('woff2'),
       url('./my-font.woff') format('woff');
  font-weight: 600;
  font-style: normal;
}

Now add a sample image to your src/assets/media directory and create an img element in your src/index.html to see that it works. Run npm run dev and npm run build:prod to check that everything is working fine and clear.

That's it! Now you should find everything in place and working just fine. :)

We have come to the end of this guide. That's not everything about webpack, though, but it's the starting point that you have to understand deeply and correctly to grasp the working philosophy of webpack.
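To tie all the sections together, here is a sketch of the complete webpack.config.js assembled from the pieces above (webpack 3 / extract-text-webpack-plugin era; paths follow the project layout used throughout this guide):

```javascript
const path = require('path');
const CleanWebpackPlugin = require('clean-webpack-plugin');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const ExtractTextPlugin = require('extract-text-webpack-plugin');

// Instance that writes the extracted CSS to dist/assets/css/app.css
const extractPlugin = new ExtractTextPlugin({
  filename: './assets/css/app.css'
});

module.exports = {
  // Base directory for resolving the entry point
  context: path.resolve(__dirname, 'src'),
  entry: './app.js', // resolved relative to context, i.e. src/app.js
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
  devtool: 'inline-source-map', // development only
  devServer: {
    contentBase: path.resolve(__dirname, 'dist/assets/media'),
    compress: true,
    port: 12000,
    stats: 'errors-only',
    open: true
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        include: /src/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: { presets: ['env'] }
        }
      },
      {
        test: /\.html$/,
        use: ['html-loader']
      },
      {
        test: /\.scss$/,
        include: [path.resolve(__dirname, 'src', 'assets', 'scss')],
        use: extractPlugin.extract({
          use: [
            { loader: 'css-loader', options: { sourceMap: true } },
            { loader: 'sass-loader', options: { sourceMap: true } }
          ],
          fallback: 'style-loader'
        })
      },
      {
        test: /\.(jpg|png|gif|svg)$/,
        use: [
          {
            loader: 'file-loader',
            options: { name: '[name].[ext]', outputPath: './assets/media/' }
          }
        ]
      },
      {
        test: /\.(woff|woff2|eot|ttf|otf)$/,
        use: ['file-loader']
      }
    ]
  },
  plugins: [
    new CleanWebpackPlugin(['dist']),
    new HtmlWebpackPlugin({ template: 'index.html' }),
    extractPlugin
  ]
};
```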
In upcoming blogs, I'll try to simplify the actual powers/features of webpack, like Hot Module Replacement, Tree-shaking, Output management for development and production environments, Code splitting, Lazy loading and other stuff. And I highly recommend following the awesome article rajaraodv wrote to save our lives.

I promised to give you the simplest cheat-sheet. Here it is. You can simply clone this repo into your project root to get everything in place, or follow the next steps to set up your project in just 5 minutes. Don't hesitate to fork this repo; I'd love to see others contributing to this boilerplate to make the configuration more robust and usable for everyone who has suffered as much as me.

Cheat-sheet

1. Run these commands one after another in the terminal to create the project folders and install all dependencies:

> mkdir project_name && cd project_name
> npm init
> mkdir src dist src/assets src/assets/media src/assets/js src/assets/scss
> touch webpack.config.js .babelrc src/index.html src/app.js src/assets/scss/app.scss
> npm i -D webpack
> npm i -D webpack-dev-server clean-webpack-plugin babel-loader babel-core babel-preset-env html-loader html-webpack-plugin sass-loader node-sass css-loader style-loader extract-text-webpack-plugin file-loader

2. Add this snippet to .babelrc:

{
  "presets": ["env"]
}

3. Configure package.json > scripts with this snippet:

"scripts": {
  "build": "./node_modules/.bin/webpack",
  "build:prod": "./node_modules/.bin/webpack -p",
  "watch": "./node_modules/.bin/webpack --watch",
  "dev": "./node_modules/.bin/webpack-dev-server"
}

4. Import app.scss into your app.js:

import './assets/scss/app.scss';

5. Populate your src/index.html (with an image too) and src/assets/scss/app.scss, and add the image to src/assets/media.

6. Copy and paste the configuration file into your webpack.config.js (make sure the project folder hierarchy is the same).

7. Run the npm scripts to see your application in action in the browser and in production format.
Don't forget to inspect element in the browser console to make sure everything's working great without any error.

🎉🎉🎉🎉 End of this long exhaustive guide 🎉🎉🎉🎉

If you find something that's not right in this guide, mention it in the comment section. I'd love to get your feedback on this write-up. If you like it, give some 👏 💓 and share it on Medium and Twitter.

Thank you for your patience 🙇

Webpack 3 quickstarter: Configure webpack from scratch was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
In this series of posts, I will walk you through architecting, building and deploying a large-scale, multi-region, active-active architecture, all while trying to break it. My initial idea is to split the series into the following structure:

The Quest for Availability (this post)
Why and how do we build a Multi-Region, Active-Active Architecture?
Building a Multi-Region, Active-Active Serverless Backend.
Breaking things with Chaos Engineering.

Of course, it might and probably will change as I start writing, so feel free to steer the course of (t)his (s)tory :)

System Failure.

One of my favourite quotes, and also one that influenced my thinking on software engineering, is from Werner Vogels, CTO at Amazon:

"Failures are a given and everything will eventually fail over time."

Indeed, we live in a chaotic world, where failure is a first-class citizen. Failure usually comes in three flavours: early failures, wear-out (or late) failures and random failures, each coming at a different stage in the life of any given system.

The "bathtub" curve of failure.

Early failures are essentially related to programming and configuration bugs (typos, variable mutations, networking issues like port and IP routing misconfiguration, security, etc.). Over time, as the product (or version) matures and as automation kicks in, those failures tend to naturally diminish.

Note: I just mentioned "automation kicks in"! This really means that you have to be using automation to experience this natural declining behaviour of early failures. Doing things manually won't allow for that luxury.

Wear-out (or late) failures: you often read online that software systems, unlike physical components, are not subject to wear-out failures. Well, software runs on hardware, right? Even in the cloud, software is subject to hardware failure, and that should therefore be accounted for. But that's not all: wear-out failures are also, and most often, related to configuration drift.
Indeed, configuration drift accounts for the majority of reasons why disaster recovery and high-availability systems fail.

Random failures are basically, well, random. A squirrel eating your cables. A shark brushing its teeth on transatlantic cables. A drunk truck driver aiming at the data centre. Zeus playing with lightning. Don't be a fool: over time, you too will eventually fall victim to ridiculous, unexpected failures.

BUT we live in a world where velocity is critical, and by that I mean being able to deliver software continuously. To give you an idea of velocity at scale: Amazon, in 2014, was doing approximately 50 million deployments a year; that's roughly 1.6 deployments per second. Of course, not everyone needs to do that, but the velocity of software delivery, even at smaller scale, does have a big impact on customer satisfaction and retention.

So how does velocity impact our "bathtub" failure-rate curve? Well, it now looks more like the mouth of a shark ready to eat you raw. And indeed, each new deployment throws new early failures at you, hoping to take your system down.

How it really looks.

As you can easily notice, this creates a tension between the pursuit of high availability and the speed of innovation. If you develop and ship new features slowly, you will have better availability, but your customers will probably seek innovation from someone else. On the other hand, if you go fast and innovate constantly on behalf of your customers, you risk failures and downtime, which they will not like.

To help you grasp what you are fighting against, I included the table of "The Infamous Nines" of availability. Let that table sink in for a minute. If you want 5 nines of availability, you can only afford about 5 minutes of downtime a year!!

"The Infamous Nines" of Availability

A few years ago, I experienced first-hand a complete system meltdown.
It took our team a few minutes just to realise what was happening, another few minutes to get our sh*t together and slow our heart rates down, and another couple of hours to complete a full system restore.

Lesson learned: If __any__ humans are involved in restoring your system, you can say bye-bye to the Infamous Nines.

So how can you reconcile availability and velocity for the greater good of your customers? There are three important things, namely:

Architecting highly reliable and available systems.
Tooling, automation and continuous delivery.
Culture.

Simply put, what you should aim for is having everyone in the team confident enough to push things into production without being scared of failure. And the best way to do that is by first having highly available and reliable systems, having the right tooling in place, and nurturing a culture where failure is accepted and cherished. In the following, I will focus on the availability and reliability aspect of things.

It is worth remembering that, generally speaking, a reliable system has high availability, but an available system may or may not be very reliable.

Understanding Availability.

Consider you have 2 components, X and Y, with 99% and 99.99% availability respectively. If you put those two components in series, the overall availability of the system gets worse.

Availability in series.

It is worth noting that the common wisdom "the chain is as strong as its weakest link" is wrong here: the chain is actually worse than its weakest link.

On the other hand, if you take the worse of these components (in this case X, with 99% availability) and put it in parallel, you increase your overall system availability dramatically.
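The two combinations are easy to check numerically. A small sketch, using the 99% and 99.99% figures from above:

```javascript
// In series, the system works only if ALL components work,
// so the availabilities multiply.
function seriesAvailability(...availabilities) {
  return availabilities.reduce((acc, a) => acc * a, 1);
}

// In parallel, the system fails only if ALL redundant components fail,
// so the unavailabilities multiply.
function parallelAvailability(...availabilities) {
  return 1 - availabilities.reduce((acc, a) => acc * (1 - a), 1);
}

console.log(seriesAvailability(0.99, 0.9999));  // ≈ 0.9899: worse than the weakest link
console.log(parallelAvailability(0.99, 0.99));  // ≈ 0.9999: two 99% components give four nines
```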
The beauty of math at work, my friends!

Availability in parallel.

What is the takeaway from this? Component redundancy increases availability significantly!

Note: you can also calculate availability with the following equation:

Calculating System Availability

Alright, now that we understand that part, let's take a look at how AWS Regions are designed.

AWS Regions.

From the AWS website, you can read the following:

The AWS Cloud infrastructure is built around Regions and Availability Zones ("AZs"). A Region is a physical location in the world where we have multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking and connectivity, housed in separate facilities.

Since a picture is worth 48 words, an AWS Region looks something like this.

An example AWS Region with 3 AZs.

Now you probably understand why AWS is always, always talking to and advising its customers to deploy their applications across multiple AZs, preferably three of them. Precisely because of this equation, my friends. By deploying your application across multiple AZs, you magically increase its availability, and with minimal effort.

Application deployed across multiple AZs using an Elastic Load Balancer (ELB).

This is also the reason why using AWS regional services like S3, DynamoDB, SQS, Kinesis, Lambda or ELBs, just to name a few, is a good idea: by default, they use multiple AZs under the hood. And this is also why using RDS configured in multi-AZ deployment is neat!

The price of Availability

One thing to remember, though, is that availability has a cost associated with it. The more available your application needs to be, the more complexity is required and therefore the more expensive it becomes.

The price of Availability.

Indeed, highly available applications have stringent requirements for development, test and validation.
But above all, they must be reliable, and by that I mean fully automated and self-healing, which is the capability of a system to auto-magically recover from failure. They must dynamically acquire computing resources to meet demand, but they should also be able to mitigate disruptions such as misconfigurations or transient network issues. Finally, all aspects of this automation and self-healing capability must be developed, tested and validated to the same highest standards as the application itself. This takes time, money and the right people, and thus costs more.

Taking it up a notch

While there are tens, or even hundreds, of techniques used to increase application reliability and availability, I want to mention two that in my opinion stand out.

Exponential backoff

Typical components in a software system include multiple (service) servers, load balancers, databases, DNS servers, etc. In operation, and subject to the potential failures discussed earlier, any of these can start generating errors. The default technique for dealing with these errors is to implement retries on the requester side. This simple technique increases the reliability of the application and reduces operational costs for the developer.

However, at scale, if requesters attempt to retry a failed operation as soon as the error occurs, the network can quickly become saturated with new and retried requests, each competing for network bandwidth, and the pattern would continue until a full system meltdown occurs.

To avoid such scenarios, exponential backoff algorithms must be used. Exponential backoff algorithms gradually increase the wait time between retries, thus avoiding network congestion scenarios.

In its simplest form, a pseudo exponential backoff algorithm looks like this:

Simple exponential backoff algorithm

Note: If you use concurrent clients, you can add jitter to the wait function to help your requests succeed faster.
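A minimal JavaScript sketch of the idea, with "full jitter" added as the note suggests (the base, cap and function names here are illustrative, not taken from any particular SDK):

```javascript
// Wait time before retry attempt i: exponential growth, capped, with
// "full jitter" (a random wait in [0, backoff)) so that concurrent
// clients do not retry in lockstep.
function jitteredBackoff(attempt, baseMs = 100, capMs = 10000) {
  const backoff = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * backoff;
}

// Example retry loop around an unreliable async operation.
async function withRetries(operation, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // out of attempts: give up
      const waitMs = jitteredBackoff(attempt);
      await new Promise(resolve => setTimeout(resolve, waitMs));
    }
  }
}
```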
See here.

Luckily, many SDKs and software libraries, including the AWS ones, implement a version (often more sophisticated) of these algorithms. However, don't assume it: always verify and test for it.

Queues

Another important pattern for increasing your application's reliability is using queues, in what is often called a message-passing architecture. The queue sits between the API and the workers, allowing for the decoupling of components.

Message-passing pattern with queues.

Queues give clients the ability to fire-and-forget requests, letting the task, now in the queue, be handled by the workers when the right time comes. This asynchronous pattern is incredibly powerful for increasing the reliability of complex distributed applications, but it is unfortunately not as straightforward to put in place as the exponential backoff algorithms, since it requires re-designing the client side. Indeed, requests no longer return the result, but a JobID, which can be used to retrieve the result when it is ready.

Cherry on the cake

Combining message-passing patterns with exponential backoff will take you a long way in your journey to minimise the effect of failures on your availability; together they are in the top 10 of the most important things I have learned to architect for.

That's it for this part. I hope you have enjoyed it. Please do not hesitate to give feedback, share your own opinion or simply clap your hands. The next part will hopefully be published next week. Stay tuned!

-Adrian

The Quest for Availability. was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.
Add in-depth, production-ready analytics to your application in minutes using AWS Amplify & AWS Mobile Hub.

Usually, tracking and analytics are a part of the app that is not considered at build time, despite their critical importance for the success of the product. Tracking is critically important to understand how your users are interacting with your app, answering questions like:

Which app features are being used?
What is the time spent within the app, and using certain features?
How frequently is my app visited?
How are users interacting with UI elements (swipes, gestures, etc.)?

Amplify helps developers with out-of-the-box support for these types of analytics and more. In this tutorial, we will be adding analytics to a React Native application using Amplify. Amplify currently works with React, React Native, Angular & Ionic, with Vue coming soon!

Getting Started

The first thing we will do is create a new React Native project. You can use either Expo (create-react-native-app) or the React Native CLI.

    react-native init RNAnalytics

Next, we will need to install the AWS Mobile CLI. This will allow us to create and interact with mobile projects directly from our command line.

    npm i -g awsmobile-cli

Now, we need to configure the CLI with our credentials. If you already have the AWS SDK installed and configured, the AWS Mobile CLI will automatically inherit these settings.

    awsmobile configure

Here, you will need to enter your AWS region, IAM accessKeyId, and IAM secretAccessKey. To see a walkthrough of how to get these credentials and configure the CLI, watch this video.

Creating a new AWS Mobile Project

Now that we have the CLI installed and the React Native project created, we can add analytics using the AWS Mobile CLI. You can also go into Mobile Hub, create your own project, and configure your aws-exports file manually, but we will be using the command line to automate this process.
Both processes will produce the same result. Change into the root directory of the project, create a folder called src, and run awsmobile init:

    cd RNAnalytics
    mkdir src
    awsmobile init

Once you run awsmobile init, you will be prompted with a few options regarding the configuration of your project. You can choose the default for all of these by just pressing enter, or feel free to give your project a custom name when prompted. This has automatically created and configured a new AWS Mobile Hub project for you and provisioned S3 as well as Pinpoint Analytics! You should also now see an aws-exports.js file in the src folder of your root directory. You can view your new application in the AWS Mobile Hub console if you would like.

Tracking Events and Sessions

Now, we are ready to start tracking! Open App.js and add the following code below the last React Native import:

    import Amplify, { Analytics } from 'aws-amplify'
    import aws_exports from './src/aws-exports'

    Amplify.configure(aws_exports)

Now, let's go ahead and refresh our app. That is it, we now have Analytics installed and tracking! Out of the box, this configuration will begin tracking things like sessions and device type, and will give you information on active users.

You should now be able to go to the Pinpoint console, click on the app you just created, click on Analytics in the left side menu, and see the new session show up along with some information about the device.

Now, let's start tracking a few custom events! We can use the Analytics.record() method to track custom events. One event that may make sense to track is when a user opens the app, as in when it goes from the background into the foreground. Let's use AppState from the React Native API to listen for the current application state.
If it is active, we will record an "App Opened" event! In App.js, let's also import the AppState component from React Native, and set up a couple of new methods in the class:

    import React from 'react'
    import { Platform, StyleSheet, Text, AppState, View } from 'react-native'
    import Amplify, { Analytics } from 'aws-amplify'
    import aws_exports from './src/aws-exports'

    Amplify.configure(aws_exports)

    export default class App extends React.Component {
      componentDidMount() {
        AppState.addEventListener('change', this.onAppStateChange)
      }
      onAppStateChange(appState) {
        if (appState === 'active') {
          Analytics.record('App opened')
        }
      }
      render() {
        // rest of class
      }
    }

Now, let's refresh our application, place the app in the background and then back into the foreground a few times, and then open up the Pinpoint console, click on Analytics, and then the Events tab. You should now be able to choose the new event from the Event dropdown menu in the console and see the data from the new event!

Tracking Attributes and Metrics

We also have the ability to track attributes and metrics. Attributes are often things like information about the current user, or a dynamic value such as the type of item a user is viewing in a shopping application, while metrics are often things like the computed time spent on a certain page, or the number of times a user has viewed a certain item within that same shopping application.

To track attributes, we pass a second argument to record:

    Analytics.record(name: string, attributes?: object, metrics?: object)

So, let's try to manually simulate the tracking of a user sign-in.
To do so, we will create a username, store it in the state, and send this event to Pinpoint:

    state = { username: 'naderdabit' }

    trackUser = () => {
      Analytics.record('userSignin', { username: this.state.username })
    }

    render() {
      // ...
      // <Button title='Sign In' onPress={this.trackUser} />
      // ...
    }

You should now be able to go back into your Analytics dashboard, choose userSignin from the Event dropdown menu, then view the available attributes on the right, choosing the user you would like to view and seeing the information about that user.

The method for tracking metrics is exactly the same, just passing the object in as the third argument. If you would like to track only the name and metrics, you can pass an empty object as the second argument:

    Analytics.record('timeSpentOnPage', {}, { time: 23000 })

To view the documentation for Analytics, click here.

Roadmap

Crash analytics
Exception logging
Actions based on users' app activity (e.g. send a one-time notification to users who have not visited for 30 days ….)
Pinpoint campaigns (push, SMS, email)

What we have covered in this short tutorial is only a small part of what you can do with the Amplify library. With the project we have already created, it's also pretty simple to add things like Authentication! To learn more about how to add Authentication, check out this blog post. To learn more about what Amplify can do, check out the docs.

My name is Nader Dabit. I am a Developer Advocate at AWS Mobile, working with projects like AppSync and Amplify, and the founder of React Native Training. If you like React and React Native, check out our podcast, React Native Radio, and my book, React Native in Action, now available from Manning Publications.

If you enjoyed this article, please recommend and share it! Thanks for your time.

Adding Analytics to Your Next Mobile JavaScript Application was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.