Wednesday, January 31, 2018

JavaScript Module Definition

Over the years there’s been a steadily growing ecosystem of JavaScript components to choose from. The sheer number of choices is fantastic, but it also, infamously, presents a difficulty when components are mixed and matched. And it doesn’t take long for budding developers to discover that not all components are built to play nicely together.

To address these issues, the competing module specs AMD and CommonJS have appeared on the scene, allowing developers to write their code in an agreed-upon sandboxed and modularized way, so as not to “pollute the ecosystem”.

Asynchronous module definition (AMD)
Asynchronous module definition (AMD) is a specification for JavaScript. It defines an application programming interface (API) for declaring code modules and their dependencies, and for loading them asynchronously if desired. Implementations of AMD provide the following benefits:
  • Website performance improvements. AMD implementations load smaller JavaScript files, and then only when they are needed.
  • Fewer page errors. AMD implementations allow developers to define dependencies that must load before a module is executed, so the module does not try to use outside code that is not available yet.
In addition to loading multiple JavaScript files at runtime, AMD implementations allow developers to encapsulate code in smaller, more logically-organized files, in a way similar to other programming languages such as Java. For production and deployment, developers can concatenate and minify JavaScript modules based on an AMD API into one file, the same as traditional JavaScript.
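The shape of an AMD module is easiest to see in code. Below is a minimal sketch: a real loader such as RequireJS fetches each dependency file asynchronously over the network, whereas this toy `define` just resolves names from an in-memory registry, and the module names (`math`, `calculator`) are purely illustrative.

```javascript
// Minimal sketch of the AMD pattern. A real loader (e.g. RequireJS)
// fetches dependency files asynchronously; this toy `define` only
// resolves names from an in-memory registry to show the shape.
const registry = {};
function define(name, deps, factory) {
  // Resolve each dependency name, then invoke the factory with them.
  registry[name] = factory(...deps.map((d) => registry[d]));
}

define('math', [], function () {
  return { add: (a, b) => a + b };
});

define('calculator', ['math'], function (math) {
  // The factory only runs once its dependencies are available.
  return { sum: (...xs) => xs.reduce(math.add, 0) };
});

console.log(registry['calculator'].sum(1, 2, 3)); // 6
```

The key point is that the factory function for `calculator` is not executed until `math` has loaded, which is what prevents the "outside code not available yet" class of page errors mentioned above.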

CommonJS is a style you may be familiar with if you’ve written anything in Node (which uses a slight variant). It’s also been gaining traction on the frontend with Browserify.

Universal Module Definition (UMD)
Since the CommonJS and AMD styles have both been popular, there is still no consensus. This has brought about the push for a “universal” pattern that supports both styles, which brings us to none other than the Universal Module Definition.
The pattern is admittedly ugly, but is both AMD and CommonJS compatible, as well as supporting the old-style “global” variable definition.

Thursday, January 25, 2018

webpack : module bundler

Webpack is the latest and greatest in front-end development tools. It is a module bundler that works great with the most modern front-end workflows, including Babel, ReactJS, and CommonJS, among others.
At its core, webpack is a static module bundler for modern JavaScript applications. When webpack processes your application, it recursively builds a dependency graph that includes every module your application needs, then packages all of those modules into one or more bundles.
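A minimal `webpack.config.js` makes the description above concrete: you name an entry point (the root of the dependency graph) and an output bundle. The paths, filenames, and the use of babel-loader below are a common setup, not the only one.

```javascript
// A minimal webpack.config.js sketch. The entry path, output names,
// and the use of babel-loader are illustrative, not prescriptive.
const path = require('path');

module.exports = {
  entry: './src/index.js',          // root of the dependency graph
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',          // all reachable modules end up here
  },
  module: {
    rules: [
      {
        test: /\.jsx?$/,            // run .js/.jsx files through Babel
        exclude: /node_modules/,
        use: 'babel-loader',
      },
    ],
  },
};
```

Everything reachable from `./src/index.js` — whether written in CommonJS, AMD, or ES module syntax — is walked recursively and packaged into `dist/bundle.js`.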


Wednesday, December 20, 2017

.NET Core revolution

.NET Core is a modular framework that is accessible across platforms because it is a refactored set of base class libraries (CoreFX) and a runtime (CoreCLR). Along with these you can also bring your own out-of-band libraries. This is a key characteristic of .NET Core: you choose the packages you need to deploy with your app. This means your apps can be deployed and run in isolation, and machine-wide versions of the full .NET Framework do not hinder the running of your apps.
.NET Core can be deployed both modularly and locally, and is supported by Microsoft on the Windows, Linux, and Mac OSX platforms. It targets the traditional Windows desktop as well as Windows devices and phones. .NET Core is also portable to iOS and Android devices through third-party tools such as Xamarin.
.NET Core introduces a common layer known as the Unified Base Class Library (BCL), which sits on top of a thin runtime layer. The API surface area is the same for .NET Core and .NET Native. There are basically two implementations: the .NET Native runtime, and CoreCLR, which is specific to ASP.NET 5. The majority of the APIs are the same; they don’t just look similar, they share the same implementation. For example, there need not be different implementations for collections.
The .NET Core platform is a new fork of the .NET Framework which aims to provide code reusability and to maximize code sharing across all the verticals of the framework as a whole. It is an open-source platform which accepts contributions from the community in pursuit of its goal of constant improvement and optimization.

Tuesday, December 19, 2017

Redis Cache

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

Redis maps keys to types of values. An important difference between Redis and other structured storage systems is that Redis supports not only strings, but also abstract data types:
  • Lists of strings
  • Sets of strings (collections of non-repeating unsorted elements)
  • Sorted sets of strings (collections of non-repeating elements ordered by a floating-point number called score)
  • Hash tables where keys and values are strings
  • HyperLogLogs, used for approximate set-cardinality estimation.
  • Geospatial data through the implementation of the geohash technique since Redis 3.2.
The type of a value determines what operations (called commands) are available for the value itself. Redis supports high-level, atomic, server-side operations like intersection, union, and difference between sets and sorting of lists, sets and sorted sets.
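The set algebra behind those server-side commands (SINTER, SUNION, and SDIFF in Redis’s vocabulary) can be sketched with plain JavaScript Sets. Redis computes these atomically on the server over keys you name; the member values below are made up for illustration.

```javascript
// Sketch of the set algebra behind Redis's SINTER, SUNION and SDIFF,
// modeled on plain JavaScript Sets. Redis performs these atomically
// on the server; this only illustrates the results.
const online = new Set(['alice', 'bob', 'carol']);
const admins = new Set(['bob', 'dave']);

const inter = [...online].filter((m) => admins.has(m));        // SINTER
const union = [...new Set([...online, ...admins])];            // SUNION
const diff  = [...online].filter((m) => !admins.has(m));       // SDIFF online admins

console.log(inter); // ['bob']
console.log(union); // ['alice', 'bob', 'carol', 'dave']
console.log(diff);  // ['alice', 'carol']
```

Doing this work server-side is the point: the client never has to pull both sets over the network just to intersect them.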
Redis also supports trivial-to-set-up master-slave asynchronous replication, with very fast non-blocking first synchronization and auto-reconnection with partial resynchronization after a net split.
Other features include:
  • Transactions
  • Pub/Sub
  • Lua scripting
  • Keys with a limited time-to-live
  • LRU eviction of keys
  • Automatic failover

Monday, November 27, 2017

OAuth : Grant Types

The OAuth 2.0 specification is a flexible authorization framework that describes a number of grants (“methods”) by which a client application can acquire an access token (which represents a user’s permission for the client to access their data) that can then be used to authenticate a request to an API endpoint. There are many grant types in the OAuth 2.0 specification, and many implementations allow for the addition of custom grant types as well.
Supported grant types are as follows:
Authorization code grant
    The authorization code grant should be very familiar if you’ve ever signed into an application using your Facebook or Google account.
Implicit grant
    The implicit grant is similar to the authorization code grant, with two distinct differences. First, it is intended for user-agent-based clients (e.g. single-page web apps) that can’t keep a client secret, because all of the application code and storage is easily accessible. Second, instead of the authorization server returning an authorization code that is exchanged for an access token, the authorization server returns an access token directly.
Resource owner credentials grant
    This grant offers a great user experience for trusted first-party clients, both on the web and in native device applications.
Client credentials grant
    The simplest of all of the OAuth 2.0 grants, this grant is suitable for machine-to-machine authentication where a specific user’s permission to access data is not required.
Refresh token grant
    Access tokens eventually expire; however some grants respond with a refresh token which enables the client to get a new access token without requiring the user to be redirected.
A grant is a method of acquiring an access token. Deciding which grants to implement depends on the type of client the end user will be using, and the experience you want for your users.   
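The simplest grant to show end to end is client credentials: a single POST to the token endpoint. The sketch below builds that request with Node built-ins but makes no network call; the endpoint URL, client id, secret, and scope are all made up for illustration.

```javascript
// Sketch of the HTTP request a client credentials grant sends to the
// token endpoint (RFC 6749 §4.4). The URL and credentials are made up,
// and no network call is made here; we only build the request parts.
const clientId = 'my-client-id';
const clientSecret = 'my-client-secret';

// Client authentication: HTTP Basic with id:secret, base64-encoded.
const authHeader =
  'Basic ' + Buffer.from(`${clientId}:${clientSecret}`).toString('base64');

// Form-encoded body naming the grant (and optionally a scope).
const body = new URLSearchParams({
  grant_type: 'client_credentials',
  scope: 'read:reports',
}).toString();

// A real client would now POST this, e.g.:
//   fetch('https://auth.example.com/token', {
//     method: 'POST',
//     headers: { Authorization: authHeader,
//                'Content-Type': 'application/x-www-form-urlencoded' },
//     body,
//   });
console.log(body); // grant_type=client_credentials&scope=read%3Areports
```

No resource owner appears anywhere in the exchange, which is exactly why this grant fits machine-to-machine authentication and nothing user-facing.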

OAuth terms:
Resource owner (a.k.a. the User) - An entity capable of granting access to a protected resource. When the resource owner is a person, it is referred to as an end-user.
Resource server (a.k.a. the API server) - The server hosting the protected resources, capable of accepting and responding to protected resource requests using access tokens.
Client - An application making protected resource requests on behalf of the resource owner and with its authorization. The term client does not imply any particular implementation characteristics (e.g. whether the application executes on a server, a desktop, or other devices).
Authorization server - The server issuing access tokens to the client after successfully authenticating the resource owner and obtaining authorization.

Thursday, November 23, 2017

Linux vs Windows Containers

Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. Docker uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS and others to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines (VMs).


Docker containers on Linux and Windows are similar in the following ways:
  • They are designed to function as application containers.
  • They run natively, meaning they do not depend on hypervisors or virtual machines.
  • They can be administered through Docker (although you can also use PowerShell to manage containers on Windows).
  • They are limited to containing applications that are natively supported by the host operating system. In other words, Docker for Windows can only host Windows applications inside Docker containers, and Docker on Linux supports only Linux apps.
  • They provide the same portability and modularity features on both operating systems.


And here’s what makes Docker on Windows different:
  • Docker supports only certain versions of Windows (namely, Windows Server 2016 and Windows 10). In contrast, Docker can run on any type of modern Linux-based operating system.
  • Even on Windows versions that are supported by Docker, Windows has stricter requirements regarding image compatibility.
  • Some Docker networking features for containers are not yet supported on Windows.
  • Most of the container orchestration systems that are used for Docker on Linux are not supported on Windows. The exception is Docker Swarm, which is supported. (If you want to use a different orchestrator on Windows, however, fret not; Windows support for orchestrators such as Kubernetes and Apache Mesos is under development.)

Thursday, October 26, 2017

AWS Service : Parameter Store

Parameter Store provides a centralized store to manage your configuration data, whether plain-text data such as database connection strings or secrets such as passwords, encrypted through AWS KMS. With Parameter Store, your critical information stays within your environment, saving you the manual overhead of storing and managing it in configuration files. Parameters can easily be reused across your AWS configuration and automation workflows without having to type them in plain text, improving your security posture. Parameters can be easily referenced across AWS services such as Amazon ECS and AWS Lambda, as well as other EC2 Systems Manager capabilities such as Run Command, State Manager, and Automation.
Through integration with AWS Identity and Access Management, you can provide access control to specific parameters, letting you provide access to the data only to the users who need them and on which resources they can be used. AWS Key Management Service (KMS) integration helps you encrypt your sensitive information and protect the security of your keys. Additionally, all calls to the parameter store are recorded with AWS CloudTrail so that they can be audited.
Parameter Store offers the following benefits and features.
  • Use a secure, scalable, hosted secrets management service (no servers to manage).
  • Improve your security posture by separating your data from your code.
  • Store configuration data and secure strings in hierarchies and track versions.
  • Control and audit access at granular levels.
  • Configure change notifications and trigger automated actions.
  • Tag parameters individually, and then secure access from different levels, including operational, parameter, EC2 tag, or path levels.
  • Reference parameters across AWS services such as Amazon EC2, Amazon EC2 Container Service, AWS Lambda, AWS CloudFormation, AWS CodeBuild, AWS CodeDeploy, and other Systems Manager capabilities.
  • Configure integration with AWS KMS, Amazon SNS, Amazon CloudWatch, and AWS CloudTrail for encryption, notification, monitoring, and audit capabilities.
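The hierarchical storage mentioned above is easiest to see with a sketch. In the real service you query hierarchies through the SSM API (for example `GetParametersByPath` with `Recursive=true`); the in-memory model below only illustrates the naming scheme, and every parameter name and value in it is made up.

```javascript
// In-memory sketch of Parameter Store's hierarchical naming. The real
// service is queried through the SSM API (e.g. GetParametersByPath);
// the parameter names and values below are purely illustrative.
const parameters = new Map([
  ['/myapp/prod/db/url', 'postgres://prod-db:5432/app'],
  ['/myapp/prod/db/password', '<encrypted with KMS>'],
  ['/myapp/dev/db/url', 'postgres://dev-db:5432/app'],
]);

// Rough analogue of GetParametersByPath with Recursive=true:
function getParametersByPath(path) {
  const prefix = path.endsWith('/') ? path : path + '/';
  return [...parameters].filter(([name]) => name.startsWith(prefix));
}

console.log(getParametersByPath('/myapp/prod').map(([name]) => name));
// ['/myapp/prod/db/url', '/myapp/prod/db/password']
```

The path prefixes are also what IAM policies and parameter tagging attach to, which is how the granular, per-level access control described above is expressed.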