Thursday, August 17, 2017

Azure Functions

Azure Functions is a solution for easily running small pieces of code, or "functions," in the cloud. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it. Functions can make development even more productive, and you can use your development language of choice, such as C#, F#, Node.js, Python or PHP. Pay only for the time your code runs and trust Azure to scale as needed. Azure Functions lets you develop serverless applications on Microsoft Azure.

Here are some key features of Azure Functions:
  • Choice of language - Write functions using C#, F#, Node.js, Python, PHP, batch, bash, or any executable.
  • Pay-per-use pricing model - Pay only for the time spent running your code. See the Consumption hosting plan option in the pricing section.
  • Bring your own dependencies - Functions supports NuGet and NPM, so you can use your favorite libraries.
  • Integrated security - Protect HTTP-triggered functions with OAuth providers such as Azure Active Directory, Facebook, Google, Twitter, and Microsoft Account.
  • Simplified integration - Easily leverage Azure services and software-as-a-service (SaaS) offerings. See the integrations section for some examples.
  • Flexible development - Code your functions right in the portal or set up continuous integration and deploy your code through GitHub, Visual Studio Team Services, and other supported development tools.
  • Open-source - The Functions runtime is open-source and available on GitHub.
Azure Functions integrates with various Azure and third-party services. These services can trigger your function and start execution, or they can serve as input and output for your code.

Azure Functions has two kinds of pricing plans:
  • Consumption plan - When your function runs, Azure provides all of the necessary computational resources. You don't have to worry about resource management, and you only pay for the time that your code runs.
  • App Service plan - Run your functions just like your web, mobile, and API apps. When you are already using App Service for your other applications, you can run your functions on the same plan at no additional cost.
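For illustration, here is a minimal sketch of an HTTP-triggered function written in Node.js (one of the supported languages above); the HTTP trigger and response binding are assumed to be configured in the function's accompanying function.json:

module.exports = function (context, req) {
    // read an optional "name" query parameter from the HTTP request
    var name = (req.query && req.query.name) || 'world';
    context.res = {
        status: 200,
        body: 'Hello, ' + name + '!'
    };
    context.done(); // signal completion to the Functions runtime
};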

Wednesday, July 26, 2017

JWT : JSON Web Token

JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA.
For example, a server could generate a token that has the claim "logged in as admin" and provide that to a client. The client could then use that token to prove that it is logged in as admin. The tokens are signed by the server's key, so the client and server are both able to verify that the token is legitimate. The tokens are designed to be compact and URL-safe, which makes them especially usable in a web-browser single sign-on (SSO) context. JWT claims are typically used to pass the identity of authenticated users between an identity provider and a service provider, or to carry any other type of claims required by business processes. The tokens can also be authenticated and encrypted.
Some concepts of this definition:
Compact: Because of their smaller size, JWTs can be sent through a URL, POST parameter, or inside an HTTP header. Additionally, the smaller size means transmission is fast.
Self-contained: The payload contains all the required information about the user, avoiding the need to query the database more than once.
JSON Web Token structure:
JSON Web Tokens consist of three parts separated by dots (.), which are:
  • Header - identifies which algorithm is used to generate the signature
  • Payload - contains the claims to make
  • Signature - computed by signing the base64url-encoded header and payload, joined with a period, using the algorithm named in the header
To put it all together, the signature is base64url encoded. The three separate parts are concatenated using periods:
token = encodeBase64Url(header) + '.' + encodeBase64Url(payload) + '.' + encodeBase64Url(signature) 
# token is now: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJsb2dnZWRJbkFzIjoiYWRtaW4iLCJpYXQiOjE0MjI3Nzk2Mzh9.gzSraSYS8EXBxLN_oWnFSRgCzcmJmMjLiuyu5CSpyHI 
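To make the signing step concrete, here is a minimal HS256 sketch in Node.js using only the built-in crypto module (the secret and claims are illustrative):

var crypto = require('crypto');

// base64url-encode a string or Buffer: standard base64 with padding stripped
// and the URL-unsafe characters '+' and '/' replaced
function base64url(input) {
    return new Buffer(input).toString('base64')
        .replace(/=+$/, '')
        .replace(/\+/g, '-')
        .replace(/\//g, '_');
}

function signJwt(payload, secret) {
    var header = { alg: 'HS256', typ: 'JWT' };
    var unsigned = base64url(JSON.stringify(header)) + '.' + base64url(JSON.stringify(payload));
    // sign the encoded header and payload, then base64url-encode the raw HMAC bytes
    var signature = base64url(crypto.createHmac('sha256', secret).update(unsigned).digest());
    return unsigned + '.' + signature;
}

console.log(signJwt({ loggedInAs: 'admin', iat: 1422779638 }, 'secret-key'));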
Some scenarios where JSON Web Tokens are useful:
Authentication: This is the most common scenario for using JWT. Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token. Single Sign On is a feature that widely uses JWT nowadays, because of its small overhead and its ability to be easily used across different domains.
Information Exchange: JSON Web Tokens are a good way of securely transmitting information between parties. Because JWTs can be signed—for example, using public/private key pairs—you can be sure the senders are who they say they are. Additionally, as the signature is calculated using the header and the payload, you can also verify that the content hasn't been tampered with.

Monday, July 17, 2017

What is Docker?

Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run. Amazon ECS uses Docker images in task definitions to launch containers on EC2 instances in your clusters.
Running Docker on AWS provides developers and admins a highly reliable, low-cost way to build, ship, and run distributed applications at any scale. AWS supports both Docker licensing models: open source Docker Community Edition (CE) and subscription-based Docker Enterprise Edition (EE).
Docker is available on many different operating systems, including most modern Linux distributions, like Ubuntu, and even Mac OSX and Windows.
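To show what "packaging software into a container" looks like in practice, here is a minimal, hypothetical Dockerfile for a Node.js service (the server.js entry point is an assumption):

# Base image with Node.js preinstalled
FROM node:6

# Copy the application and install its dependencies inside the image
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .

# Document the listening port and define the startup command
EXPOSE 8080
CMD ["node", "server.js"]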
Docker Benefits
Ship More Software Faster
Docker users on average ship software 7x more frequently than non-Docker users. Docker enables developers to ship isolated services as often as needed by eliminating the headaches of software dependencies.
Improve Developer Productivity
Docker reduces the time spent setting up new environments or troubleshooting differences between environments.
Seamlessly Move Applications
Docker-based applications can be seamlessly moved from local development machines to production deployments on AWS.
Standardize Application Operations
Small containerized applications make it easy to deploy, identify issues, and roll back for remediation.
Docker Use Cases
Continuous Integration & Delivery
Accelerate application delivery by standardizing environments and removing conflicts between language stacks and versions.
Data Processing
Provide big data processing as a service. Package data and analytics tools into portable containers that can be executed by non-technical users.
Containers as a Service
Build and ship distributed applications with content and infrastructure that is IT-managed and secured.

Tuesday, June 27, 2017

Amazon CloudWatch

Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are the variables you want to measure for your resources and applications. CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define.
Amazon CloudWatch Architecture
Metrics
A metric is the fundamental concept in CloudWatch and represents a time-ordered set of data points. These data points can be either your custom metrics or metrics from other services in AWS. You or AWS products publish metric data points into CloudWatch and you retrieve statistics about those data points as an ordered set of time-series data. You can use metrics to calculate statistics and present the data graphically in the CloudWatch console. Metrics exist only in the region in which they are created.

Namespaces
CloudWatch namespaces are containers for metrics. Metrics in different namespaces are isolated from each other, so that metrics from different applications are not mistakenly aggregated into the same statistics.

Dimensions
A dimension is a name/value pair that helps you to uniquely identify a metric. Every metric has specific characteristics that describe it, and you can think of dimensions as categories for those characteristics. Dimensions help you design a structure for your statistics plan. Because dimensions are part of the unique identifier for a metric, whenever you add a unique name/value pair to one of your metrics, you are creating a new metric.
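To tie metrics, namespaces, and dimensions together, here is a small sketch that publishes a custom metric with the AWS SDK for Node.js (the namespace, metric name, and dimension are illustrative):

var AWS = require('aws-sdk');
var cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

var params = {
    Namespace: 'MyApp/Orders',         // custom namespace isolating this app's metrics
    MetricData: [{
        MetricName: 'OrdersProcessed',
        Dimensions: [{ Name: 'Environment', Value: 'Production' }], // name/value pair
        Value: 42,
        Unit: 'Count'
    }]
};

cloudwatch.putMetricData(params, function (err, data) {
    if (err) console.log(err);
    else console.log('Metric published');
});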

DynamoDB Data Model

In Amazon DynamoDB, a table is a collection of items and each item is a collection of attributes. In a relational database, a table has a predefined schema such as the table name, primary key, list of its column names and their data types. All records stored in the table must have the same set of columns. In contrast, DynamoDB only requires that a table has a primary key, but does not require you to define all of the attribute names and data types in advance. Individual items in a DynamoDB table can have any number of attributes, although there is a limit of 400 KB on the item size. An item size is the sum of lengths of its attribute names and values (binary and UTF-8 lengths).
Each attribute in an item is a name-value pair. An attribute can be a scalar (single-valued), a JSON document, or a set. For example, consider storing a catalog of products in DynamoDB. You can create a table, ProductCatalog, with the Id attribute as its primary key. The primary key uniquely identifies each item, so that no two products in the table can have the same Id.

Primary Key
When you create a table, in addition to the table name, you must specify the primary key of the table. The primary key uniquely identifies each item in the table, so that no two items can have the same key. DynamoDB supports two different kinds of primary keys: 
Partition Key – A simple primary key, composed of one attribute known as the partition key. DynamoDB uses the partition key's value as input to an internal hash function; the output from the hash function determines the partition where the item will be stored. No two items in a table can have the same partition key value.
Partition Key and Sort Key – A composite primary key, composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key. DynamoDB uses the partition key value as input to an internal hash function; the output from the hash function determines the partition where the item will be stored. All items with the same partition key are stored together, in sorted order by sort key value. It is possible for two items to have the same partition key value, but those two items must have different sort key values.
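As a sketch of how a composite primary key is declared, the following uses the AWS SDK for Node.js to create a hypothetical Music table with Artist as the partition key and SongTitle as the sort key:

var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

var params = {
    TableName: 'Music',
    KeySchema: [
        { AttributeName: 'Artist', KeyType: 'HASH' },     // partition key
        { AttributeName: 'SongTitle', KeyType: 'RANGE' }  // sort key
    ],
    AttributeDefinitions: [
        { AttributeName: 'Artist', AttributeType: 'S' },
        { AttributeName: 'SongTitle', AttributeType: 'S' }
    ],
    ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
};

dynamodb.createTable(params, function (err, data) {
    if (err) console.log(err);
    else console.log('Table status: ' + data.TableDescription.TableStatus);
});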

Secondary Indexes
When you create a table with a composite primary key (partition key and sort key), you can optionally define one or more secondary indexes on that table. A secondary index lets you query the data in the table using an alternate key, in addition to queries against the primary key.

DynamoDB Data Types
Amazon DynamoDB supports the following data types:
Scalar types – Number, String, Binary, Boolean, and Null.
Document types – List and Map.
Set types – String Set, Number Set, and Binary Set.

Item Distribution
DynamoDB stores data in partitions. A partition is an allocation of storage for a table, backed by solid state drives (SSDs) and automatically replicated across three facilities within an AWS region. Partition management is handled entirely by DynamoDB—customers never need to manage partitions themselves. If your storage requirements exceed a partition's capacity, DynamoDB allocates additional partitions automatically.

Monday, May 15, 2017

AWS CodeStar

AWS CodeStar is a cloud-based service for creating, managing, and working with software development projects on AWS. You can quickly develop, build, and deploy applications on AWS with an AWS CodeStar project. An AWS CodeStar project creates and integrates AWS services for your project development toolchain. Depending on your choice of AWS CodeStar project template, that toolchain might include source control, build, deployment, virtual servers or serverless resources, and more. AWS CodeStar also manages the permissions required for project users (called team members). By adding users as team members to an AWS CodeStar project, project owners can quickly and simply grant each team member role-appropriate access to a project and its resources.
AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, allowing you to easily manage access and add owners, contributors, and viewers to your projects. Each AWS CodeStar project comes with a project management dashboard, including an integrated issue tracking capability powered by Atlassian JIRA Software. With the AWS CodeStar project dashboard, you can easily track progress across your entire software development process, from your backlog of work items to teams’ recent code deployments.

Wednesday, May 10, 2017

Amazon DynamoDB Accelerator (DAX)

DynamoDB Accelerator (DAX) is a new caching service for DynamoDB.
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds – even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management.
Amazon DynamoDB is designed for scale and performance. In most cases, the DynamoDB response times can be measured in single-digit milliseconds. However, there are certain use cases that require response times in microseconds. For these use cases, DynamoDB Accelerator (DAX) delivers fast response times for accessing eventually consistent data.
DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
  • As an in-memory cache, DAX reduces the response times of eventually-consistent read workloads by an order of magnitude, from single-digit milliseconds to microseconds.
  • DAX reduces operational and application complexity by providing a managed service that is API-compatible with Amazon DynamoDB, and thus requires only minimal functional changes to use with an existing application.
  • For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to over-provision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.
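A sketch of how an application might opt in, assuming the amazon-dax-client npm module and a hypothetical cluster endpoint; because DAX is API-compatible, the DocumentClient calls themselves stay the same:

var AmazonDaxClient = require('amazon-dax-client');
var AWS = require('aws-sdk');

// The cluster endpoint below is illustrative; use your cluster's configuration endpoint
var dax = new AmazonDaxClient({
    endpoints: ['mycluster.abc123.clustercfg.dax.use1.cache.amazonaws.com:8111'],
    region: 'us-east-1'
});

// Wrap DAX in a DocumentClient; existing get/put/query calls are unchanged
var docClient = new AWS.DynamoDB.DocumentClient({ service: dax });

docClient.get({ TableName: 'Music', Key: { Artist: 'Some Artist', SongTitle: 'Some Song' } },
    function (err, data) {
        if (err) console.log(err);
        else console.log(JSON.stringify(data.Item));
    });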

Friday, April 7, 2017

JSON Schema

In computing, JSON (JavaScript Object Notation) is an open-standard format that uses human-readable text to transmit data objects consisting of attribute–value pairs. It is the most common data format used for asynchronous browser/server communication, largely replacing XML (which is used by AJAX).
JSON is a language-independent data format. It was derived from JavaScript, but as of 2017 many programming languages include code to generate and parse JSON-format data. The official Internet media type for JSON is application/json. JSON filenames use the extension .json.
JSON Schema specifies a JSON-based format to define the structure of JSON data for validation, documentation, and interaction control. A JSON Schema provides a contract for the JSON data required by a given application, and how that data can be modified. JSON Schema is based on the concepts from XML Schema (XSD), but is JSON-based. The JSON data schema can be used to validate JSON data. As in XSD, the same serialization/deserialization tools can be used both for the schema and data. The schema is self-describing. There is no standard file extension, although .schema.json has been suggested.
The official MIME type for JSON text is "application/json". Although most modern implementations have adopted the official MIME type, many applications continue to provide legacy support for other MIME types. Many service providers, browsers, servers, web applications, libraries, frameworks, and APIs use, expect, or recognize the (unofficial) MIME type "text/json" or the content-type "text/javascript".
Example:
{
  "$schema": "http://json-schema.org/schema#",
  "title": "Product",
  "type": "object",
  "required": ["id", "name", "price"],
  "properties": {
    "id": {
      "type": "number",
      "description": "Product identifier"
    },
    "name": {
      "type": "string",
      "description": "Name of the product"
    },
    "price": {
      "type": "number",
      "minimum": 0
    },
    "tags": {
      "type": "array",
      "items": {
        "type": "string"
      }
    },
    "stock": {
      "type": "object",
      "properties": {
        "warehouse": {
          "type": "number"
        },
        "retail": {
          "type": "number"
        }
      }
    }
  }
}
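As a quick sketch, a document can be validated against this schema in Node.js using a validator library such as ajv (the sample data is illustrative):

var Ajv = require('ajv');
var ajv = new Ajv();

// A trimmed version of the Product schema above
var schema = {
    type: 'object',
    required: ['id', 'name', 'price'],
    properties: {
        id: { type: 'number' },
        name: { type: 'string' },
        price: { type: 'number', minimum: 0 }
    }
};

var validate = ajv.compile(schema);
console.log(validate({ id: 1, name: 'Widget', price: 9.99 })); // true
console.log(validate({ id: 2, name: 'Widget', price: -5 }));   // false
console.log(validate.errors);  // explains the failed "minimum" constraint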
Several websites allow you to generate a JSON Schema online; one example is http://jsonschema.net/.

Tuesday, April 4, 2017

OpenAPI Specification : Swagger

The OpenAPI Specification (originally known as the Swagger Specification) is a specification for machine-readable interface files for describing, producing, consuming, and visualizing RESTful Web services. A variety of tools can generate code, documentation and test cases given an interface file. Development of the OpenAPI Specification (OAS) is overseen by the Open API Initiative, an open source collaborative project of the Linux Foundation.
Swagger is a project used to describe and document RESTful APIs. The Swagger specification defines a set of files required to describe such an API. These files can then be used by the Swagger-UI project to display the API and Swagger-Codegen to generate clients in various languages. Additional utilities can also take advantage of the resulting files, such as testing tools.
Applications implemented based on OpenAPI interface files can automatically generate documentation of methods, parameters and models. This helps keep the documentation, client libraries, and source code in sync.
Features:
  • The OpenAPI Specification is language-agnostic. It is also extensible into new technologies and protocols beyond HTTP.
  • With Swagger's declarative resource specification, clients can understand and consume services without knowledge of server implementation or access to the server code.
  • The Swagger UI framework allows both developers and non-developers to interact with the API in a sandbox UI that gives clear insight into how the API responds to parameters and options. Swagger definitions may be written in both JSON and YAML.
  • Swagger-Codegen contains a template-driven engine to generate documentation, API clients and server stubs in different languages by parsing the OpenAPI/Swagger definition.
How to start:
If you're an API provider and want to use Swagger to describe your APIs - there are several approaches available:
  • A top-down approach where you would use the Swagger Editor to create your Swagger definition and then use the integrated Swagger Codegen tools to generate server implementation.
  • A bottom-up approach where you have an existing REST API for which you want to create a Swagger definition. Either you create the definition manually (using the same Swagger Editor mentioned above), or if you are using one of the supported frameworks (JAX-RS, node.js, etc.), you can get the Swagger definition generated automatically for you. If you're doing JAX-RS, have a look at the example at https://github.com/swagger-api/swagger-core/wiki/Swagger-Core-JAX-RS-Project-Setup-1.5.X.
If, on the other hand, you're an API consumer who wants to integrate with an API that has a Swagger definition, you can use the online version of the Swagger UI to explore the API (given that you have a URL to the API's Swagger definition) and then use Swagger Codegen to generate the client library of your choice.
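For illustration, here is a minimal, hypothetical Swagger 2.0 definition describing a single GET operation:

{
  "swagger": "2.0",
  "info": {
    "title": "Product API",
    "version": "1.0.0"
  },
  "paths": {
    "/products": {
      "get": {
        "summary": "List all products",
        "produces": ["application/json"],
        "responses": {
          "200": { "description": "An array of products" }
        }
      }
    }
  }
}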

Monday, March 20, 2017

Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.
With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the AWS Management Console to monitor resource utilization and performance metrics.
DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an AWS region, providing built-in high availability and data durability.
Tables, Items, and Attributes
In Amazon DynamoDB, a table is a collection of items and each item is a collection of attributes. Each attribute in an item is a name-value pair. An attribute can be a scalar (single-valued), a JSON document, or a set.
Primary Key
When you create a table, in addition to the table name, you must specify the primary key of the table. The primary key uniquely identifies each item in the table, so that no two items can have the same key.
DynamoDB supports two different kinds of primary keys:
  • Partition Key – A simple primary key, composed of one attribute known as the partition key. DynamoDB uses the partition key's value as input to an internal hash function; the output from the hash function determines the partition where the item will be stored. No two items in a table can have the same partition key value.
  • Partition Key and Sort Key – A composite primary key, composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key. DynamoDB uses the partition key value as input to an internal hash function; the output from the hash function determines the partition where the item will be stored. All items with the same partition key are stored together, in sorted order by sort key value. It is possible for two items to have the same partition key value, but those two items must have different sort key values.
Secondary Indexes
DynamoDB supports two kinds of secondary indexes:
  • Global secondary index – an index with a partition key and sort key that can be different from those on the table.
  • Local secondary index – an index that has the same partition key as the table, but a different sort key.
DynamoDB Data Types
Amazon DynamoDB supports the following data types:
  • Scalar types – Number, String, Binary, Boolean, and Null.
  • Document types – List and Map.
  • Set types – String Set, Number Set, and Binary Set.
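The following sketch stores a hypothetical item using the DocumentClient from the AWS SDK for Node.js, exercising scalar, document, and set types:

var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

var params = {
    TableName: 'ProductCatalog',                      // hypothetical table with primary key "Id"
    Item: {
        Id: 101,                                      // scalar: Number
        Title: 'Bicycle',                             // scalar: String
        InStock: true,                                // scalar: Boolean
        Dimensions: { Length: 170, Width: 60 },       // document: Map
        Features: ['21 gears', 'disc brakes'],        // document: List
        Colors: docClient.createSet(['Red', 'Black']) // set: String Set
    }
};

docClient.put(params, function (err) {
    if (err) console.log(err);
    else console.log('Item stored');
});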

Saturday, March 18, 2017

ExtJS : JavaScript application framework

Ext JS is a pure JavaScript application framework for building interactive cross-platform web applications using techniques such as Ajax, DHTML and DOM scripting. Ext JS helps you build data-intensive, cross-platform web apps for desktops, tablets, and smartphones. It was originally built as an add-on library extension of the Yahoo! User Interface Library (YUI). Ext JS includes interoperability with jQuery and Prototype. Beginning with version 1.1, Ext JS retains no dependencies on external libraries, instead making their use optional.
Sencha Ext JS provides everything a developer needs to build data-intensive, cross-platform web applications. Ext JS leverages HTML5 features on modern browsers while maintaining compatibility and functionality for legacy browsers.
Ext JS features hundreds of high-performance, pre-tested and integrated UI components including calendar, grids, charts and more. The Ext JS Grid and Advanced Charting package can handle millions of records with ease. The framework includes a robust data package that can consume data from any back-end data source. With Sencha Pivot Grid and D3 adapter, organizations can add leading-edge visualization and analytics capabilities to their web applications.
The rich set of Ext JS tools and themes help improve development productivity and accelerate the delivery of great looking web applications. Tools are available to help with application design, development, theming, and debugging as well as build optimization and deployment.
Ext JS includes a set of GUI-based form controls (or "widgets") for use within web applications:
  • text field and textarea input controls
  • date fields with a pop-up date-picker
  • numeric fields
  • list box and combo boxes
  • radio and checkbox controls
  • html editor control
  • grid control
  • tree control
  • tab panels
  • toolbars
  • desktop application-style menus
  • region panels to allow a form to be divided into multiple sub-sections
  • sliders
  • vector graphics charts
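As a minimal sketch, an Ext JS application that renders a simple panel looks like this:

Ext.application({
    name: 'HelloApp',
    launch: function () {
        Ext.create('Ext.panel.Panel', {
            renderTo: Ext.getBody(),   // render directly into the page body
            title: 'Hello Ext',
            width: 300,
            html: 'Hello World!'
        });
    }
});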
Ext.NET is an ASP.NET component framework that integrates the Ext JS library; its current version (as of September 2015) is 3.2.1, which integrates Ext JS version 5.1.1.
Version 6.0 of the Ext JS framework was released on March 29, 2016. It merges the Sencha Touch (mobile) framework into Ext JS.

Monday, February 27, 2017

AWS Step Functions

Step Functions is a new AWS service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly. Step Functions is a reliable way to coordinate components and step through the functions of your application. Step Functions provides a graphical console to arrange and visualize the components of your application as a series of steps. This makes it simple to build and run multi-step applications. Step Functions automatically triggers and tracks each step, and retries when there are errors, so your application executes in order and as expected. Step Functions logs the state of each step, so when things do go wrong, you can diagnose and debug problems quickly. You can change and add steps without even writing code, so you can easily evolve your application and innovate faster.

AWS Step Functions manages the operations and underlying infrastructure for you to help ensure your application is available at any scale.

With regard to pricing, you pay only for state transitions, that is, the steps between tasks.
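State machines are defined in the JSON-based Amazon States Language. A minimal, hypothetical two-state workflow (the Lambda ARN is illustrative) might look like:

{
  "Comment": "A minimal two-state workflow",
  "StartAt": "HelloWorld",
  "States": {
    "HelloWorld": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:HelloWorld",
      "Next": "Done"
    },
    "Done": {
      "Type": "Succeed"
    }
  }
}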

Friday, February 17, 2017

Custom Authorization HTTP Request in AWS NodeJS Lambda function

Below is the sample code for AWS NodeJS lambda function. This will show how to make an HTTP request with Custom Authorization using "request" node module.
request v2.79.0 - Simplified HTTP request client.
Request package is designed to be the simplest way possible to make http calls. It supports HTTPS and follows redirects by default.

/// Module to test HTTP POST Request
/// Created By: Sohan Fegade
/// Created Date: 17-Feb-2017

var request = require('request');

console.log('Loading http-test-module');

exports.handler = function (event, context, callback) {

    if (event != null) {
        console.log('event = ' + JSON.stringify(event));

        var username = 'sampleuser';
        var password = 'samplepassword';
       
        var auth = 'Basic ' + new Buffer(username + ':' + password).toString('base64'); // Buffer is a Node.js global
       
        var options = {
            url: 'https://www.sample-auth-url.com/oauth/token',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': auth
            },
            json: {
                'type': 'test',
                'input': 'test',
                'field': 'test',
                'scope': 'urn:scope-api',
                'token-type': 'auth-token'
            }
        };

        request.post(options, function (error, response, body) {
            if (!error && response.statusCode === 200) {
                console.log('Access Token : ' + body['access-token']); // bracket notation: the property name contains a hyphen
            }
               
            context.callbackWaitsForEmptyEventLoop = false;
            callback(null, 'SUCCESS');
       });
    }
    else {
        console.log('No event object');
    }
};
 

Tuesday, January 31, 2017

AWS Route 53

Amazon Route 53 (Route 53) is part of Amazon.com's cloud computing platform, Amazon Web Services (AWS). Route 53 provides a scalable and highly available Domain Name System (DNS). The name is a reference to TCP or UDP port 53, where DNS server requests are addressed. In addition to being able to route users to various AWS services, including EC2 instances, Route 53 also enables AWS customers to route users to non-AWS infrastructure. Route 53's servers are distributed throughout the world. Amazon Route 53 supports full, end-to-end DNS resolution over IPv6. Recursive DNS resolvers on IPv6 networks can use either IPv4 or IPv6 transport to send DNS queries to Amazon Route 53.
One of the key features of Route 53 is programmatic access to the service that allows customers to modify DNS records via web service calls. Combined with other features in AWS, this allows a developer to programmatically bring up a machine and point to components that have been created via other service calls such as those to create new S3 buckets or EC2 instances.
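As a sketch of that programmatic access, the following uses the AWS SDK for Node.js to upsert a hypothetical A record (the zone ID, record name, and IP address are illustrative):

var AWS = require('aws-sdk');
var route53 = new AWS.Route53();

var params = {
    HostedZoneId: 'Z1EXAMPLE',              // hypothetical hosted zone
    ChangeBatch: {
        Changes: [{
            Action: 'UPSERT',               // create the record, or update it if it exists
            ResourceRecordSet: {
                Name: 'app.example.com',
                Type: 'A',
                TTL: 300,
                ResourceRecords: [{ Value: '203.0.113.10' }]
            }
        }]
    }
};

route53.changeResourceRecordSets(params, function (err, data) {
    if (err) console.log(err);
    else console.log('Change submitted: ' + data.ChangeInfo.Id);
});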
AWS supposedly named the service Route 53 because all DNS requests are handled through port 53, and the "route" part is a nod to the historic U.S. Route 66. In essence, Route 53 routes traffic through Amazon's DNS.

Monday, January 30, 2017

Developer Guide for AWS Lambda using Node.JS

Developing a simple Lambda function using Node.js

Prerequisites:
• AWS account
• Visual Studio
• Node.js tooling for Visual Studio

Step 1: Create Lambda function in Visual Studio
Step 1.1: Open Visual Studio (run as administrator) and add a Lambda project:
File -> Add New Project
Name the new project "Hello World".
On the next screen, select "Create Simple Project".
Click Finish; the new project opens.
By default, some files are added to the project, which can be found in Solution Explorer:
  1. _testdriver.js : a utility file that helps invoke and debug the Lambda function. It is not included in the bundle uploaded to Lambda.
  2. _sampleEvent.json : a JSON file used to supply sample event data for testing and debugging.
  3. app.js : the main program file in which we write the application logic.
  4. npm : the Node package manager, used to install npm packages.
  5. node_modules : the folder containing supporting Node.js module files. By default it contains the "aws-sdk" module; when you add a new module, the folder is updated automatically.
Open the app.js file; it contains the following skeleton:
exports.handler : the entry point where execution of your Lambda function begins

Step 1.2 – Update _sampleEvent.json file
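A simple sample event with hypothetical values might look like this:

{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
}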
Step 1.3 – Update app.js file with below code
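For example, a minimal handler sketch (the exact code may differ) would be:

exports.handler = function (event, context) {
    console.log('Received event: ' + JSON.stringify(event));
    context.succeed('Hello World!');   // report success back to Lambda
};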
Step 1.4 – Run the application (press F5); the output shows the event information from the _sampleEvent.json file.
This completes our Hello World program.

Step 2: Deploy Lambda function in AWS console
Step 2.1: Open AWS console and find Lambda under Compute service.

Step 2.2: The screen shows the list of Lambda functions. Click the "Create a Lambda function" button.
Step 2.3: The screen shows the list of blueprints that can be used to create a Lambda function. You can select an existing blueprint or skip this part and move on.
Click the Skip button (bottom right corner).
Step 2.4: Configure the Lambda function.
Configure function - set the Name, Description, and Runtime.
Set the Lambda function code details as follows:
Code entry type: Upload ZIP file
Handler: the main JavaScript file; in our case it is app.js, so update the Handler textbox to app.handler.
Role: set the role to lambda_basic_execution

Step 2.5: Create a ZIP file containing the code and Node modules.
To create the ZIP file, open the physical location of the Lambda code (the HelloWorld Lambda developed in VS2013) and select the following files/folders for the ZIP:
1. node_modules
2. app.js
Step 2.6: Set the advanced settings.
Review the Lambda function.
Click the Create Function button. This creates your first Lambda function.

Step 3: Configure Test Event and Test Lambda function
Step 3.1: To set up a test event, click the Actions button and select the "Configure test event" option.

Select an event template and set the data. This is the same data we used for testing in debug mode; you can find it in the _sampleEvent.json file. Click the Save button (bottom right corner).
Click the Test button to run the Lambda function. Scroll down to see the results:
one section shows the output generated by the context object, and another shows the output log.

Step 4: See Test result logs in CloudWatch
Step 4.1: To check the logs, click the Monitoring tab, then click "View logs in CloudWatch".
The detailed logs appear on the new screen.