Wednesday, December 20, 2017

.NET Core revolution

.NET Core is a modular, cross-platform framework built from a refactored set of base class libraries (CoreFX) and a runtime (CoreCLR). It can be extended with your own out-of-band libraries, and a key characteristic of .NET Core is that you choose only the packages you need to deploy with your app. This means your apps can be deployed and run in isolation, unaffected by whichever machine-wide version of the full .NET Framework happens to be installed.
.NET Core can be deployed both modularly and locally, and is supported by Microsoft on Windows, Linux and Mac OSX. It targets traditional desktop Windows as well as Windows devices and phones, and it provides portability to iOS and Android devices through third-party tools such as Xamarin.
.NET Core introduces a common layer known as the Unified Base Class Library (BCL), which sits on top of a thin runtime layer. The API surface area is the same for .NET Core and .NET Native. There are essentially two runtime implementations: the .NET Native runtime and CoreCLR, which is specific to ASP.NET 5. The majority of the APIs are not merely similar; they share the same implementation. For example, there is no need for separate implementations of the collection types.
The .NET Core platform is a new fork of the .NET Framework that aims to provide code reusability and to maximize code sharing across all verticals of the framework. It is an open-source platform that accepts contributions from the community in pursuit of constant improvement and optimization.

Tuesday, December 19, 2017

Redis Cache

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

Redis maps keys to types of values. An important difference between Redis and other structured storage systems is that Redis supports not only strings, but also abstract data types:
  • Lists of strings
  • Sets of strings (collections of non-repeating unsorted elements)
  • Sorted sets of strings (collections of non-repeating elements ordered by a floating-point number called score)
  • Hash tables where keys and values are strings
  • HyperLogLogs used for approximated set cardinality size estimation.
  • Geospatial data through the implementation of the geohash technique since Redis 3.2.
The type of a value determines what operations (called commands) are available for the value itself. Redis supports high-level, atomic, server-side operations like intersection, union, and difference between sets and sorting of lists, sets and sorted sets.
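As a rough illustration of how those commands map onto the data types (a minimal sketch assuming a local Redis instance and the Node.js ioredis client; the key names are made up):

var Redis = require('ioredis');
var redis = new Redis(); // defaults to 127.0.0.1:6379

// String value
redis.set('page:home:title', 'Welcome');

// List of strings
redis.rpush('recent:logins', 'alice', 'bob');

// Set of strings (non-repeating, unsorted)
redis.sadd('tags:post:1', 'redis', 'cache', 'nosql');

// Sorted set ordered by a floating-point score
redis.zadd('leaderboard', 100, 'alice', 250, 'bob');

// Hash of field/value pairs
redis.hmset('user:1', { name: 'Alice', country: 'IN' });

// Commands are type-specific: ZRANGE only applies to sorted sets
redis.zrange('leaderboard', 0, -1, 'WITHSCORES').then(function (members) {
    console.log(members); // [ 'alice', '100', 'bob', '250' ]
});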
Redis also supports trivial-to-setup master-slave asynchronous replication, with very fast non-blocking first synchronization, auto-reconnection with partial resynchronization on net split.
Other features include:
  • Transactions
  • Pub/Sub
  • Lua scripting
  • Keys with a limited time-to-live
  • LRU eviction of keys
  • Automatic failover

Monday, November 27, 2017

OAuth : Grants Types

The OAuth 2.0 specification is a flexible authorization framework that describes a number of grants (“methods”) by which a client application can acquire an access token (which represents a user’s permission for the client to access their data); the token can then be used to authenticate requests to an API endpoint. The specification defines several grant types, and most server libraries also allow custom grant types to be added.
Supported grant types are as follows:
Authorization code grant
    The authorization code grant should be very familiar if you’ve ever signed into an application using your Facebook or Google account.
Implicit grant
    The implicit grant is similar to the authorization code grant, with two distinct differences. First, it is intended for user-agent-based clients (e.g. single-page web apps) that can’t keep a client secret because all of the application code and storage is easily accessible. Second, instead of the authorization server returning an authorization code that is exchanged for an access token, the authorization server returns an access token directly.
Resource owner credentials grant
    This grant, in which the resource owner supplies their username and password directly to the client in exchange for an access token, offers a great user experience for trusted first-party clients both on the web and in native device applications.
Client credentials grant
    The simplest of all of the OAuth 2.0 grants, this grant is suitable for machine-to-machine authentication where a specific user’s permission to access data is not required.
Refresh token grant
    Access tokens eventually expire; however some grants respond with a refresh token which enables the client to get a new access token without requiring the user to be redirected.
A grant is a method of acquiring an access token. Deciding which grants to implement depends on the type of client the end user will be using, and the experience you want for your users.   
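As a concrete illustration, a client credentials grant boils down to a single HTTP POST to the authorization server's token endpoint. The sketch below uses the Node.js request module; the token URL, client id, client secret and scope are placeholders:

var request = require('request');

request.post({
    url: 'https://auth.example.com/oauth/token',   // hypothetical token endpoint
    form: {
        grant_type: 'client_credentials',
        client_id: 'my-client-id',
        client_secret: 'my-client-secret',
        scope: 'api.read'
    }
}, function (error, response, body) {
    if (!error && response.statusCode === 200) {
        var token = JSON.parse(body);
        // The access token is then sent as "Authorization: Bearer <token>" on API requests.
        console.log('Access token: ' + token.access_token);
    }
});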

OAuth terms:
Resource owner (a.k.a. the User) - An entity capable of granting access to a protected resource. When the resource owner is a person, it is referred to as an end-user.
Resource server (a.k.a. the API server) - The server hosting the protected resources, capable of accepting and responding to protected resource requests using access tokens.
Client - An application making protected resource requests on behalf of the resource owner and with its authorization. The term client does not imply any particular implementation characteristics (e.g. whether the application executes on a server, a desktop, or other devices).
Authorization server - The server issuing access tokens to the client after successfully authenticating the resource owner and obtaining authorization.

Thursday, November 23, 2017

Linux vs Windows Containers

Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. Docker uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS and others to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines (VMs).

Similarities

Docker containers on Linux and Windows are similar in the following ways:
  • They are designed to function as application containers.
  • They run natively, meaning they do not depend on hypervisors or virtual machines.
  • They can be administered through Docker (although you can also use PowerShell to manage containers on Windows).
  • They are limited to containing applications that are natively supported by the host operating system. In other words, Docker for Windows can only host Windows applications inside Docker containers, and Docker on Linux supports only Linux apps.
  • They provide the same portability and modularity features on both operating systems.

Differences

And here’s what makes Docker on Windows different:
  • Docker supports only certain versions of Windows (namely, Windows Server 2016 and Windows 10). In contrast, Docker can run on any type of modern Linux-based operating system.
  • Even on Windows versions that are supported by Docker, Windows has stricter requirements regarding image compatibility.
  • Some Docker networking features for containers are not yet supported on Windows.
  • Most of the container orchestration systems that are used for Docker on Linux are not supported on Windows. The exception is Docker Swarm, which is supported. (If you want to use a different orchestrator on Windows, however, fret not; Windows support for orchestrators such as Kubernetes and Apache Mesos is under development.)

Thursday, October 26, 2017

AWS Service : Parameter Store

Parameter Store provides a centralized store to manage your configuration data, whether plain-text data such as database strings or secrets such as passwords, encrypted through AWS KMS. With Parameter Store, your critical information stays within your environment, saving you the manual overhead of storing and managing it in configuration files. Parameters can be easily reused across your AWS configuration and automation workflows without having to type them in plain text, improving your security posture. Parameters can be easily referenced across AWS services such as Amazon ECS and AWS Lambda, as well as other EC2 Systems Manager capabilities such as Run Command, State Manager, and Automation.
Through integration with AWS Identity and Access Management, you can provide access control to specific parameters, letting you provide access to the data only to the users who need them and on which resources they can be used. AWS Key Management Service (KMS) integration helps you encrypt your sensitive information and protect the security of your keys. Additionally, all calls to the parameter store are recorded with AWS CloudTrail so that they can be audited.
Parameter Store offers the following benefits and features.
  • Use a secure, scalable, hosted secrets management service (No servers to manage).
  • Improve your security posture by separating your data from your code.
  • Store configuration data and secure strings in hierarchies and track versions.
  • Control and audit access at granular levels.
  • Configure change notifications and trigger automated actions.
  • Tag parameters individually, and then secure access from different levels, including operational, parameter, EC2 tag, or path levels.
  • Reference parameters across AWS services such as Amazon EC2, Amazon EC2 Container Service, AWS Lambda, AWS CloudFormation, AWS CodeBuild, AWS CodeDeploy, and other Systems Manager capabilities.
  • Configure integration with AWS KMS, Amazon SNS, Amazon CloudWatch, and AWS CloudTrail for encryption, notification, monitoring, and audit capabilities.
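A rough sketch of storing and reading a parameter with the AWS SDK for JavaScript (the parameter name, value and region below are placeholders):

var AWS = require('aws-sdk');
var ssm = new AWS.SSM({ region: 'us-east-1' });

// Store a secret as a SecureString, encrypted with the account's default AWS KMS key.
ssm.putParameter({
    Name: '/myapp/prod/db-password',
    Value: 'S3cr3t!',
    Type: 'SecureString',
    Overwrite: true
}, function (err) {
    if (err) { return console.error(err); }

    // Read it back, asking Parameter Store to decrypt the value via KMS.
    ssm.getParameter({ Name: '/myapp/prod/db-password', WithDecryption: true },
        function (err, data) {
            if (err) { return console.error(err); }
            console.log('Decrypted value: ' + data.Parameter.Value);
        });
});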

Monday, October 16, 2017

NGINX - High Performance Load Balancer, Web Server, & Reverse Proxy

NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. It is known for its high performance, stability, rich feature set, simple configuration, and low resource consumption. It can be deployed to serve dynamic HTTP content on the network using FastCGI, SCGI handlers for scripts, WSGI application servers or Phusion Passenger modules, and it can serve as a software load balancer.
Nginx uses an asynchronous event-driven approach to handling requests. Nginx's modular event-driven architecture can provide more predictable performance under high loads.
It is licensed under the 2-clause BSD-like license and it runs on Linux, BSD variants, Mac OS X, Solaris, AIX, HP-UX, as well as on other *nix flavors. It also has a proof of concept port for Microsoft Windows.
The official NGINX build for Docker is also available in the GitHub repo: https://github.com/nginxinc/docker-nginx

Friday, September 29, 2017

Version Control : Git vs TFS

Git (distributed)
Git is a distributed version control system. Each developer has a copy of the source repository on their dev machine. Developers can commit each set of changes on their dev machine and perform version control operations such as history and compare without a network connection. Branches are lightweight. When you need to switch contexts, you can create a private local branch. You can quickly switch from one branch to another to pivot among different variations of your codebase. Later, you can merge, publish, or dispose of the branch.
Git has two benefits:
Automatic backup of the whole repo - every time someone pulls from the central repo, they get the full history of the changes. If one repo is lost, don't worry: simply take one of the copies present on every workstation.
Offline repo access - when I'm working at home (or on an airplane or train), I can see the full history of the project, every single check-in, without starting up my VPN connection to work, and I can work just as I would at the office: check in, check out, branch, anything.
TFVC (centralized)
Team Foundation Version Control (TFVC) is a centralized version control system. Typically, team members have only one version of each file on their dev machines. Historical data is maintained only on the server. Branches are path-based and created on the server.
TFVC has two workflow models:
Server workspaces - Before making changes, team members publicly check out files. Most operations require developers to be connected to the server. This system facilitates locking workflows. Other systems that work this way include Visual SourceSafe, Perforce, and CVS. With server workspaces, you can scale up to very large codebases with millions of files per branch and large binary files.
Local workspaces - Each team member takes a copy of the latest version of the codebase with them and works offline as needed. Developers check in their changes and resolve conflicts as necessary. Another system that works this way is Subversion.

Thursday, September 28, 2017

Node.JS Processing Model

Node.js processes user requests differently when compared to a traditional web server model. Node.js runs in a single process and the application code runs in a single thread and thereby needs less resources than other platforms. All the user requests to your web application will be handled by a single thread and all the I/O work or long running job is performed asynchronously for a particular request. So, this single thread doesn't have to wait for the request to complete and is free to handle the next request. When asynchronous I/O work completes then it processes the request further and sends the response.
An event loop constantly watches for events raised by asynchronous jobs and executes the corresponding callback function when a job completes. Internally, Node.js uses libuv for the event loop, which in turn uses an internal thread pool to provide asynchronous I/O.
This is how a single thread can handle multiple requests at once; receiving a request and either serving static/simple content or delegating it to an I/O thread from a thread pool are both very cheap and quick operations. When the thread pool thread that is doing the long-running I/O work signals to the single listener thread that the work is done, the listener thread picks up the response and sends it back to the user; this is another very cheap operation. The core idea is that the single listener thread never blocks: it only does fast, cheap processing or delegation of requests to other threads and the serving of responses to clients.
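A minimal sketch of this model (the file name is a placeholder): the handler below never blocks on the file read; it registers a callback and returns to the event loop, which invokes the callback once the I/O has completed.

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    // The read is delegated to the I/O subsystem and this function returns immediately,
    // leaving the single listener thread free to accept the next request.
    fs.readFile('./data.json', 'utf8', function (err, contents) {
        // The event loop invokes this callback when the asynchronous work is done.
        if (err) {
            res.writeHead(500);
            return res.end('error');
        }
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(contents);
    });
}).listen(3000);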

Monday, August 28, 2017

Jenkins - Automation Tool

Jenkins is free software released under MIT License. Jenkins is an open source automation server written in Java. Jenkins helps to automate the non-human part of software development process, with continuous integration and facilitating technical aspects of continuous delivery. It is a server-based system that runs in servlet containers such as Apache Tomcat. It supports version control tools, including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase and RTC, and can execute Apache Ant, Apache Maven and sbt based projects as well as arbitrary shell scripts and Windows batch commands.
Features:
  • Continuous Integration and Continuous Delivery - As an extensible automation server, Jenkins can be used as a simple CI server or turned into the continuous delivery hub for any project.
  • Easy installation - Jenkins is a self-contained Java-based program, ready to run out-of-the-box, with packages for Windows, Mac OS X and other Unix-like operating systems.
  • Easy configuration - Jenkins can be easily set up and configured via its web interface, which includes on-the-fly error checks and built-in help.
  • Plugins - With hundreds of plugins in the Update Center, Jenkins integrates with practically every tool in the continuous integration and continuous delivery toolchain.
  • Extensible - Jenkins can be extended via its plugin architecture, providing nearly infinite possibilities for what Jenkins can do.
  • Distributed - Jenkins can easily distribute work across multiple machines, helping drive builds, tests and deployments across multiple platforms faster.

Thursday, August 17, 2017

Azure Functions

Azure Functions is a solution for easily running small pieces of code, or "functions," in the cloud. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it. Functions can make development even more productive, and you can use your development language of choice, such as C#, F#, Node.js, Python or PHP. Pay only for the time your code runs and trust Azure to scale as needed. Azure Functions lets you develop serverless applications on Microsoft Azure.

Here are some key features of Azure Functions:
  • Choice of language - Write functions using C#, F#, Node.js, Python, PHP, batch, bash, or any executable.
  • Pay-per-use pricing model - Pay only for the time spent running your code. See the Consumption hosting plan option in the pricing section.
  • Bring your own dependencies - Functions supports NuGet and NPM, so you can use your favorite libraries.
  • Integrated security - Protect HTTP-triggered functions with OAuth providers such as Azure Active Directory, Facebook, Google, Twitter, and Microsoft Account.
  • Simplified integration - Easily leverage Azure services and software-as-a-service (SaaS) offerings. See the integrations section for some examples.
  • Flexible development - Code your functions right in the portal or set up continuous integration and deploy your code through GitHub, Visual Studio Team Services, and other supported development tools.
  • Open-source - The Functions runtime is open-source and available on GitHub.
Azure Functions integrates with various Azure and 3rd-party services. These services can trigger your function and start execution, or they can serve as input and output for your code.
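For example, an HTTP-triggered JavaScript function might look like the sketch below; the req and res names refer to the HTTP bindings that would be declared in the accompanying function.json.

// index.js - sketch of an HTTP-triggered Azure Function (Node.js)
module.exports = function (context, req) {
    var name = req.query.name || (req.body && req.body.name);

    // context.res is the HTTP output binding.
    context.res = {
        status: 200,
        body: name ? 'Hello ' + name : 'Please pass a name on the query string or in the request body'
    };

    context.done(); // signal that the function has finished
};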

Azure Functions has two kinds of pricing plans:
  • Consumption plan - When your function runs, Azure provides all of the necessary computational resources. You don't have to worry about resource management, and you only pay for the time that your code runs.
  • App Service plan - Run your functions just like your web, mobile, and API apps. When you are already using App Service for your other applications, you can run your functions on the same plan at no additional cost.

Wednesday, July 26, 2017

JWT : JSON Web Token

JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA.
For example, a server could generate a token that has the claim "logged in as admin" and provide that to a client. The client could then use that token to prove that it is logged in as admin. The tokens are signed by the server's key, so the client and server are both able to verify that the token is legitimate. The tokens are designed to be compact, URL-safe and usable especially in a web browser single sign-on (SSO) context. JWT claims can typically be used to pass the identity of authenticated users between an identity provider and a service provider, or to carry any other type of claims required by business processes. The tokens can also be authenticated and encrypted.
Some concepts of this definition:
Compact: Because of their smaller size, JWTs can be sent through a URL, POST parameter, or inside an HTTP header. Additionally, the smaller size means transmission is fast.
Self-contained: The payload contains all the required information about the user, avoiding the need to query the database more than once.
JSON Web Token structure:
JSON Web Tokens consist of three parts separated by dots (.), which are:
  • Header - identifies which algorithm is used to generate the signature
  • Payload - contains the claims to make
  • Signature - calculated by base64url-encoding the header and payload, joining them with a period, and signing the result with the algorithm specified in the header
To put it all together, the signature is base64url encoded. The three separate parts are concatenated using periods:
token = encodeBase64Url(header) + '.' + encodeBase64Url(payload) + '.' + encodeBase64Url(signature) 
# token is now: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJsb2dnZWRJbkFzIjoiYWRtaW4iLCJpYXQiOjE0MjI3Nzk2Mzh9.gzSraSYS8EXBxLN_oWnFSRgCzcmJmMjLiuyu5CSpyHI 
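A quick sketch of issuing and verifying such a token in Node.js, assuming the jsonwebtoken npm package and an HMAC (HS256) shared secret; the secret and claims are placeholders:

var jwt = require('jsonwebtoken');
var secret = 'my-shared-secret';   // placeholder; keep real secrets out of source code

// Sign: produces header.payload.signature, each part base64url-encoded and dot-separated.
var token = jwt.sign({ loggedInAs: 'admin' }, secret, {
    algorithm: 'HS256',
    expiresIn: '1h'
});

// Verify: recomputes the signature and checks standard claims such as expiry.
try {
    var claims = jwt.verify(token, secret);
    console.log(claims.loggedInAs); // "admin"
} catch (e) {
    console.log('Token is invalid or expired');
}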
Some scenarios where JSON Web Tokens are useful:
Authentication: This is the most common scenario for using JWT. Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token. Single Sign On is a feature that widely uses JWT nowadays, because of its small overhead and its ability to be easily used across different domains.
Information Exchange: JSON Web Tokens are a good way of securely transmitting information between parties. Because JWTs can be signed—for example, using public/private key pairs—you can be sure the senders are who they say they are. Additionally, as the signature is calculated using the header and the payload, you can also verify that the content hasn't been tampered with.

Monday, July 17, 2017

What is Docker?

Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run. Amazon ECS uses Docker images in task definitions to launch containers on EC2 instances in your clusters.
Running Docker on AWS provides developers and admins a highly reliable, low-cost way to build, ship, and run distributed applications at any scale. AWS supports both Docker licensing models: open source Docker Community Edition (CE) and subscription-based Docker Enterprise Edition (EE).
Docker is available on many different operating systems, including most modern Linux distributions, like Ubuntu, and even Mac OSX and Windows.
Docker Benefits
Ship More Software Faster
Docker users on average ship software 7x more frequently than non-Docker users. Docker enables developers to ship isolated services as often as needed by eliminating the headaches of software dependencies.
Improve Developer Productivity
Docker reduces the time spent setting up new environments or troubleshooting differences between environments.
Seamlessly Move Applications
Docker-based applications can be seamlessly moved from local development machines to production deployments on AWS.
Standardize Application Operations
Small containerized applications make it easy to deploy, identify issues, and roll back for remediation.
Docker Use Cases
Continuous Integration & Delivery
Accelerate application delivery by standardizing environments and removing conflicts between language stacks and versions.
Data Processing
Provide big data processing as a service. Package data and analytics packages into portable containers that can be executed by non-technical users
Containers as a Service
Build and ship distributed applications with content and infrastructure that is IT-managed and secured.

Tuesday, June 27, 2017

Amazon CloudWatch

Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are the variables you want to measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define.
Amazon CloudWatch Architecture
Metrics
A metric is the fundamental concept in CloudWatch and represents a time-ordered set of data points. These data points can be either your custom metrics or metrics from other services in AWS. You or AWS products publish metric data points into CloudWatch and you retrieve statistics about those data points as an ordered set of time-series data. You can use metrics to calculate statistics and present the data graphically in the CloudWatch console. Metrics exist only in the region in which they are created.

Namespaces
CloudWatch namespaces are containers for metrics. Metrics in different namespaces are isolated from each other, so that metrics from different applications are not mistakenly aggregated into the same statistics.

Dimensions
A dimension is a name/value pair that helps you to uniquely identify a metric. Every metric has specific characteristics that describe it, and you can think of dimensions as categories for those characteristics. Dimensions help you design a structure for your statistics plan. Because dimensions are part of the unique identifier for a metric, whenever you add a unique name/value pair to one of your metrics, you are creating a new metric.
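Putting these three concepts together, the sketch below publishes a single custom metric data point with the AWS SDK for JavaScript; the namespace, metric name and dimension are placeholders:

var AWS = require('aws-sdk');
var cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

cloudwatch.putMetricData({
    Namespace: 'MyApp/Orders',   // the container that isolates this metric
    MetricData: [{
        MetricName: 'OrdersProcessed',
        Dimensions: [{ Name: 'Environment', Value: 'Production' }],   // part of the metric's identity
        Timestamp: new Date(),
        Unit: 'Count',
        Value: 42
    }]
}, function (err) {
    if (err) { console.error(err); }
});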

DynamoDB Data Model

In Amazon DynamoDB, a table is a collection of items and each item is a collection of attributes. In a relational database, a table has a predefined schema such as the table name, primary key, list of its column names and their data types. All records stored in the table must have the same set of columns. In contrast, DynamoDB only requires that a table has a primary key, but does not require you to define all of the attribute names and data types in advance. Individual items in a DynamoDB table can have any number of attributes, although there is a limit of 400 KB on the item size. An item size is the sum of lengths of its attribute names and values (binary and UTF-8 lengths).
Each attribute in an item is a name-value pair. An attribute can be a scalar (single-valued), a JSON document, or a set. For example, consider storing a catalog of products in DynamoDB. You can create a table, ProductCatalog, with the Id attribute as its primary key. The primary key uniquely identifies each item, so that no two products in the table can have the same Id.

Primary Key
When you create a table, in addition to the table name, you must specify the primary key of the table. The primary key uniquely identifies each item in the table, so that no two items can have the same key. DynamoDB supports two different kinds of primary keys: 
Partition Key – A simple primary key, composed of one attribute known as the partition key. DynamoDB uses the partition key's value as input to an internal hash function; the output from the hash function determines the partition where the item will be stored. No two items in a table can have the same partition key value.
Partition Key and Sort Key – A composite primary key, composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key. DynamoDB uses the partition key value as input to an internal hash function; the output from the hash function determines the partition where the item will be stored. All items with the same partition key are stored together, in sorted order by sort key value. It is possible for two items to have the same partition key value, but those two items must have different sort key values.
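A sketch of creating a table with a composite primary key using the AWS SDK for JavaScript; the table name, attribute names and throughput values are illustrative:

var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

dynamodb.createTable({
    TableName: 'Orders',
    AttributeDefinitions: [
        { AttributeName: 'CustomerId', AttributeType: 'S' },   // partition key attribute
        { AttributeName: 'OrderDate', AttributeType: 'S' }     // sort key attribute
    ],
    KeySchema: [
        { AttributeName: 'CustomerId', KeyType: 'HASH' },      // partition key
        { AttributeName: 'OrderDate', KeyType: 'RANGE' }       // sort key
    ],
    ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
}, function (err) {
    if (err) { console.error(err); }
});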

Secondary Indexes
When you create a table with a composite primary key (partition key and sort key), you can optionally define one or more secondary indexes on that table. A secondary index lets you query the data in the table using an alternate key, in addition to queries against the primary key.

DynamoDB Data Types
Amazon DynamoDB supports the following data types:
Scalar types – Number, String, Binary, Boolean, and Null.
Document types – List and Map.
Set types – String Set, Number Set, and Binary Set.

Item Distribution
DynamoDB stores data in partitions. A partition is an allocation of storage for a table, backed by solid state drives (SSDs) and automatically replicated across three facilities within an AWS region. Partition management is handled entirely by DynamoDB—customers never need to manage partitions themselves. If your storage requirements exceed a partition's capacity, DynamoDB allocates additional partitions automatically.

Monday, May 15, 2017

AWS CodeStar

AWS CodeStar is a cloud-based service for creating, managing, and working with software development projects on AWS. You can quickly develop, build, and deploy applications on AWS with an AWS CodeStar project. An AWS CodeStar project creates and integrates AWS services for your project development toolchain. Depending on your choice of AWS CodeStar project template, that toolchain might include source control, build, deployment, virtual servers or serverless resources, and more. AWS CodeStar also manages the permissions required for project users (called team members). By adding users as team members to an AWS CodeStar project, project owners can quickly and simply grant each team member role-appropriate access to a project and its resources.
AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, allowing you to easily manage access and add owners, contributors, and viewers to your projects. Each AWS CodeStar project comes with a project management dashboard, including an integrated issue tracking capability powered by Atlassian JIRA Software. With the AWS CodeStar project dashboard, you can easily track progress across your entire software development process, from your backlog of work items to teams’ recent code deployments.

Wednesday, May 10, 2017

Amazon DynamoDB Accelerator (DAX)

DynamoDB Accelerator (DAX) is a new caching service for DynamoDB.
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management.
Amazon DynamoDB is designed for scale and performance. In most cases, the DynamoDB response times can be measured in single-digit milliseconds. However, there are certain use cases that require response times in microseconds. For these use cases, DynamoDB Accelerator (DAX) delivers fast response times for accessing eventually consistent data.
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
  • As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  • DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application (see the sketch after this list).
  • For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
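A rough sketch of wiring DAX into an existing application, assuming the amazon-dax-client npm package and an existing DAX cluster endpoint (the endpoint, table and key below are placeholders); the DocumentClient calls themselves do not change:

var AWS = require('aws-sdk');
var AmazonDaxClient = require('amazon-dax-client');

var dax = new AmazonDaxClient({
    endpoints: ['mycluster.abc123.clustercfg.dax.use1.cache.amazonaws.com:8111'],   // placeholder
    region: 'us-east-1'
});

// Point the DocumentClient at the DAX cluster instead of DynamoDB directly.
var doc = new AWS.DynamoDB.DocumentClient({ service: dax });

doc.get({ TableName: 'Orders', Key: { CustomerId: 'C-1', OrderDate: '2017-05-01' } },
    function (err, data) {
        if (err) { return console.error(err); }
        console.log(data.Item);   // served from the DAX in-memory cache when possible
    });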

Friday, April 7, 2017

JSON Schema

In computing, JSON (JavaScript Object Notation) is an open-standard format that uses human-readable text to transmit data objects consisting of attribute–value pairs. It is the most common data format used for asynchronous browser/server communication, largely replacing XML, and is used by AJAX.
JSON is a language-independent data format. It was derived from JavaScript, but as of 2017 many programming languages include code to generate and parse JSON-format data. The official Internet media type for JSON is application/json. JSON filenames use the extension .json.
JSON Schema specifies a JSON-based format to define the structure of JSON data for validation, documentation, and interaction control. A JSON Schema provides a contract for the JSON data required by a given application, and describes how that data can be modified. JSON Schema is based on concepts from XML Schema (XSD), but is JSON-based. As in XSD, the same serialization/deserialization tools can be used for both the schema and the data, and the schema is self-describing. There is no standard file extension, but .schema.json has been suggested.
The official MIME type for JSON text is "application/json". Although most modern implementations have adopted the official MIME type, many applications continue to provide legacy support for other MIME types. Many service providers, browsers, servers, web applications, libraries, frameworks, and APIs use, expect, or recognize the (unofficial) MIME type "text/json" or the content-type "text/javascript".
Example:
{
  "$schema": "http://json-schema.org/schema#",
  "title": "Product",
  "type": "object",
  "required": ["id", "name", "price"],
  "properties": {
    "id": {
      "type": "number",
      "description": "Product identifier"
    },
    "name": {
      "type": "string",
      "description": "Name of the product"
    },
    "price": {
      "type": "number",
      "minimum": 0
    },
    "tags": {
      "type": "array",
      "items": {
        "type": "string"
      }
    },
    "stock": {
      "type": "object",
      "properties": {
        "warehouse": {
          "type": "number"
        },
        "retail": {
          "type": "number"
        }
      }
    }
  }
}
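The schema above can then be used to validate documents programmatically. A minimal sketch using the ajv validator for Node.js (the file name is a placeholder; any JSON Schema validator would work):

var Ajv = require('ajv');
var ajv = new Ajv();

var schema = require('./product.schema.json');   // the schema shown above
var validate = ajv.compile(schema);

var product = { id: 1, name: 'Widget', price: 9.99, tags: ['tools'] };

if (validate(product)) {
    console.log('Valid product');
} else {
    console.log(validate.errors);   // details of each validation failure
}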
There are several websites that let you generate a JSON Schema online; one example is http://jsonschema.net/

Tuesday, April 4, 2017

OpenAPI Specification : Swagger

The OpenAPI Specification (originally known as the Swagger Specification) is a specification for machine-readable interface files for describing, producing, consuming, and visualizing RESTful Web services. A variety of tools can generate code, documentation and test cases given an interface file. Development of the OpenAPI Specification (OAS) is overseen by the Open API Initiative, an open source collaborative project of the Linux Foundation.
Swagger is a project used to describe and document RESTful APIs. The Swagger specification defines a set of files required to describe such an API. These files can then be used by the Swagger-UI project to display the API and Swagger-Codegen to generate clients in various languages. Additional utilities can also take advantage of the resulting files, such as testing tools.
Applications implemented based on OpenAPI interface files can automatically generate documentation of methods, parameters and models. This helps keep the documentation, client libraries, and source code in sync.
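For reference, a minimal Swagger 2.0 definition describing a single endpoint might look like the sketch below (the path, parameter and descriptions are illustrative):

{
  "swagger": "2.0",
  "info": { "title": "Product API", "version": "1.0.0" },
  "basePath": "/v1",
  "paths": {
    "/products/{id}": {
      "get": {
        "summary": "Get a product by id",
        "parameters": [
          { "name": "id", "in": "path", "required": true, "type": "string" }
        ],
        "responses": {
          "200": { "description": "The requested product" },
          "404": { "description": "Product not found" }
        }
      }
    }
  }
}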
Features:
  • The OpenAPI Specification is language-agnostic. It is also extensible into new technologies and protocols beyond HTTP.
  • With Swagger's declarative resource specification, clients can understand and consume services without knowledge of server implementation or access to the server code.
  • The Swagger UI framework allows both developers and non-developers to interact with the API in a sandbox UI that gives clear insight into how the API responds to parameters and options. Swagger may utilize both JSON and XML.
  • Swagger-Codegen contains a template-driven engine to generate documentation, API clients and server stubs in different languages by parsing the OpenAPI/Swagger definition.
How to start:
If you're an API provider and want to use Swagger to describe your APIs - there are several approaches available:
  • A top-down approach where you would use the Swagger Editor to create your Swagger definition and then use the integrated Swagger Codegen tools to generate a server implementation.
  • A bottom-up approach where you have an existing REST API for which you want to create a Swagger definition. Either you create the definition manually (using the same Swagger Editor mentioned above), or if you are using one of the supported frameworks (JAX-RS, node.js, etc), you can get the Swagger definition generated automatically for you. If you're doing JAX-RS have a look at the example at https://github.com/swagger-api/swagger-core/wiki/Swagger-Core-JAX-RS-Project-Setup-1.5.X.
If on the other hand you're an API Consumer who wants to integrate with an API that has a Swagger definition you can use the online version of the Swagger UI to explore the API (given that you have a URL to the APIs Swagger definition) - and then use Swagger Codegen to generate the client library of your choice.

Monday, March 20, 2017

Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the AWS Management Console to monitor resource utilization and performance metrics.
DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an AWS region, providing built-in high availability and data durability.
Tables, Items, and Attributes
In Amazon DynamoDB, a table is a collection of items and each item is a collection of attributes. Each attribute in an item is a name-value pair. An attribute can be a scalar (single-valued), a JSON document, or a set.
Primary Key
When you create a table, in addition to the table name, you must specify the primary key of the table. The primary key uniquely identifies each item in the table, so that no two items can have the same key.
DynamoDB supports two different kinds of primary keys:
  • Partition Key – A simple primary key, composed of one attribute known as the partition key. DynamoDB uses the partition key's value as input to an internal hash function; the output from the hash function determines the partition where the item will be stored. No two items in a table can have the same partition key value.
  • Partition Key and Sort Key – A composite primary key, composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key. DynamoDB uses the partition key value as input to an internal hash function; the output from the hash function determines the partition where the item will be stored. All items with the same partition key are stored together, in sorted order by sort key value. It is possible for two items to have the same partition key value, but those two items must have different sort key values.
Secondary Indexes
DynamoDB supports two kinds of secondary indexes:
  • Global secondary index – an index with a partition key and sort key that can be different from those on the table.
  • Local secondary index – an index that has the same partition key as the table, but a different sort key.
DynamoDB Data Types
Amazon DynamoDB supports the following data types:
  • Scalar types – Number, String, Binary, Boolean, and Null.
  • Document types – List and Map.
  • Set types – String Set, Number Set, and Binary Set.
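Putting the key concepts above together, a sketch of querying items by partition key and a sort-key range with the AWS SDK for JavaScript DocumentClient (the table and attribute names are illustrative):

var AWS = require('aws-sdk');
var doc = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

doc.query({
    TableName: 'Orders',
    // All items with the same partition key, narrowed by a sort-key range.
    KeyConditionExpression: 'CustomerId = :c AND OrderDate BETWEEN :from AND :to',
    ExpressionAttributeValues: {
        ':c': 'C-1',
        ':from': '2017-01-01',
        ':to': '2017-12-31'
    }
}, function (err, data) {
    if (err) { return console.error(err); }
    console.log(data.Items);   // returned in sort-key order
});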

Saturday, March 18, 2017

ExtJS : JavaScript application framework

Ext JS is a pure JavaScript application framework for building interactive, cross-platform web applications using techniques such as Ajax, DHTML and DOM scripting. Ext JS helps you build data-intensive, cross-platform web apps for desktops, tablets, and smartphones. It was originally built as an add-on library extension of the Yahoo! User Interface Library (YUI) and includes interoperability with jQuery and Prototype. Beginning with version 1.1, Ext JS retains no dependencies on external libraries, instead making their use optional.
Sencha Ext JS provides everything a developer needs to build data-intensive, cross-platform web applications. Ext JS leverages HTML5 features on modern browsers while maintaining compatibility and functionality for legacy browsers.
Ext JS features hundreds of high-performance, pre-tested and integrated UI components including calendar, grids, charts and more. The Ext JS Grid and Advanced Charting package can handle millions of records with ease. The framework includes a robust data package that can consume data from any back-end data source. With Sencha Pivot Grid and D3 adapter, organizations can add leading-edge visualization and analytics capabilities to their web applications.
The rich set of Ext JS tools and themes help improve development productivity and accelerate the delivery of great looking web applications. Tools are available to help with application design, development, theming, and debugging as well as build optimization and deployment.
Ext JS includes a set of GUI-based form controls (or "widgets") for use within web applications:
  • text field and textarea input controls
  • date fields with a pop-up date-picker
  • numeric fields
  • list box and combo boxes
  • radio and checkbox controls
  • html editor control
  • grid control
  • tree control
  • tab panels
  • toolbars
  • desktop application-style menus
  • region panels to allow a form to be divided into multiple sub-sections
  • sliders
  • vector graphics charts
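As a small sketch of a few of these components working together (the field names and data are made up), an in-memory store bound to a grid panel:

Ext.onReady(function () {
    // A simple in-memory store; in practice the data package can load from any back end.
    var store = Ext.create('Ext.data.Store', {
        fields: ['name', 'email'],
        data: [
            { name: 'Alice', email: 'alice@example.com' },
            { name: 'Bob', email: 'bob@example.com' }
        ]
    });

    // A grid control bound to the store.
    Ext.create('Ext.grid.Panel', {
        title: 'Users',
        store: store,
        columns: [
            { text: 'Name', dataIndex: 'name', flex: 1 },
            { text: 'Email', dataIndex: 'email', flex: 2 }
        ],
        height: 200,
        renderTo: Ext.getBody()
    });
});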
Ext.NET is an ASP.NET component framework that integrates the Ext JS library; its current version (as of September 2015) is 3.2.1, which integrates Ext JS 5.1.1.
Ext JS 6.0 was released on March 29, 2016. It merges the Sencha Touch (mobile) framework into Ext JS.

Monday, February 27, 2017

AWS Step Functions

AWS Step Functions is a newer AWS service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly. Step Functions is a reliable way to coordinate components and step through the functions of your application. Step Functions provides a graphical console to arrange and visualize the components of your application as a series of steps. This makes it simple to build and run multi-step applications. Step Functions automatically triggers and tracks each step, and retries when there are errors, so your application executes in order and as expected. Step Functions logs the state of each step, so when things do go wrong, you can diagnose and debug problems quickly. You can change and add steps without even writing code, so you can easily evolve your application and innovate faster.

AWS Step Functions manages the operations and underlying infrastructure for you to help ensure your application is available at any scale.
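State machines are defined declaratively in the Amazon States Language (JSON). A minimal sketch chaining two Lambda tasks, with a retry on the first (the function ARNs and state names are placeholders):

{
  "Comment": "Sketch: two Lambda tasks executed in sequence",
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
      "Retry": [{ "ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2 }],
      "Next": "ChargeCustomer"
    },
    "ChargeCustomer": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:charge-customer",
      "End": true
    }
  }
}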

With regard to pricing, you pay only for state transitions, that is, the steps between tasks.

Friday, February 17, 2017

Custom Authorization HTTP Request in AWS NodeJS Lambda function

Below is sample code for an AWS Node.js Lambda function. It shows how to make an HTTP request with a custom Authorization header using the "request" Node module.
request v2.79.0 - Simplified HTTP request client.
The request package is designed to be the simplest way possible to make HTTP calls. It supports HTTPS and follows redirects by default.

/// Module to test HTTP POST Request
/// Created By: Sohan Fegade
/// Created Date: 17-Feb-2017

var request = require('request');
var buffer = require('buffer');

console.log('Loading http-test-module');

exports.handler = function (event, context, callback) {

    if (event != null) {
        console.log('event = ' + JSON.stringify(event));

        var username = 'sampleuser';
        var password = 'samplepassword';
       
        var auth = 'Basic ' + buffer.Buffer.from(username + ':' + password).toString('base64'); // Buffer.from replaces the deprecated Buffer() constructor
       
        var options = {
            url: 'https://www.sample-auth-url.com/oauth/token',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': auth
            },
            json: {
                'type': 'test',
                'input': 'test',
                'field': 'test',
                'scope': 'urn:scope-api',
                'token-type': 'auth-token'
            }
        }

        request.post(options, function (error, response, body) {
            if (!error && response.statusCode === 200) {
                console.log('Access Token : ' + body['access-token']); // bracket notation is required for a hyphenated property name
            }
               
            context.callbackWaitsForEmptyEventLoop = false;
            callback(null, 'SUCCESS');
       });
    }
    else {
        console.log('No event object');
    }
};