Monday, December 22, 2014


Ember.js is a framework that enables developers to build “ambitious” web applications. The term “ambitious” can mean different things to different people, but as a general rule, Ember.js aims to help developers push the envelope of what can be built for the web, while ensuring that application source code remains structured and sane. Ember.js achieves this goal by structuring an application into logical abstraction layers and keeping the development model as object-oriented as possible. At its core, Ember.js has built-in support for the following features:
  • Bindings - Enable changes to one variable to propagate into another variable, and vice versa
  • Computed properties - Enable functions to be marked as properties that automatically update along with the properties they rely on
  • Automatically updated templates - Ensure that the GUI stays up to date whenever the underlying data changes
Combining these features with a strong and well-planned Model-View-Controller (MVC) architecture results in a framework that delivers on its promise.
The image below shows the internal architecture of the Ember framework:
An Ember.js application includes a complete MVC implementation: Ember.js itself enriches the controller and view layers, while Ember Data enriches the model layer.
  • Model layer - Built with Ember objects
  • View layer - Built with a combination of templates and views
  • Controller layer - Built with a combination of routes and controllers
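To make the computed-properties idea concrete, here is a minimal plain-JavaScript sketch of a derived property that stays in sync with its dependencies. This illustrates the concept only, not Ember's actual API (in Ember you would use Ember.Object.extend with Ember.computed):

```javascript
// A toy "computed property": fullName is derived from firstName/lastName
// and always reflects their current values, much like an Ember computed property.
function createPerson(firstName, lastName) {
  var person = { firstName: firstName, lastName: lastName };
  Object.defineProperty(person, "fullName", {
    get: function () {
      // Recomputed on every access, so it stays in sync with its dependencies.
      return this.firstName + " " + this.lastName;
    }
  });
  return person;
}

var p = createPerson("Tom", "Dale");
console.log(p.fullName); // "Tom Dale"
p.lastName = "Smith";
console.log(p.fullName); // "Tom Smith" - updated automatically
```

Ember goes further than this sketch: it tracks dependencies so that templates bound to fullName re-render automatically when either name changes.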

Saturday, November 29, 2014

Wednesday, October 29, 2014

Representational state transfer (REST)

Roy Fielding generalized the Web’s architectural principles and presented them as a framework of constraints, or an architectural style in his Ph.D. dissertation. Through this framework, Fielding described how distributed information systems such as the Web are built and operated. He described the interplay between resources, and the role of unique identifiers in such systems. He also talked about using a limited set of operations with uniform semantics to build a ubiquitous infrastructure that can support any type of application. Fielding referred to this architectural style as REpresentational State Transfer, or REST. REST describes the Web as a distributed hypermedia application whose linked resources communicate by exchanging representations of resource state.
Representational state transfer (REST) is an abstraction of the architecture of the World Wide Web; more precisely, REST is an architectural style consisting of a coordinated set of architectural constraints applied to components, connectors, and data elements, within a distributed hypermedia system. REST ignores the details of component implementation and protocol syntax in order to focus on the roles of components, the constraints upon their interaction with other components, and their interpretation of significant data elements.
The description of the Web, as captured in W3C’s “Architecture of the World Wide Web” and other IETF RFC documents, was heavily influenced by Fielding’s work. The architectural abstractions and constraints he established led to the introduction of hypermedia as the engine of application state.
The idea is simple, and yet very powerful. A distributed application makes forward progress by transitioning from one state to another, just like a state machine. The difference from traditional state machines, however, is that the possible states and the transitions between them are not known in advance. Instead, as the application reaches a new state, the next possible transitions are discovered.
In a hypermedia system, application states are communicated through representations of uniquely identifiable resources. The identifiers of the states to which the application can transition are embedded in the representation of the current state in the form of links.
Example of hypermedia as the engine for application state in action
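As a sketch of this idea, a hypothetical order resource might embed the identifiers of the states the client can transition to as links. The URIs and link relations below are invented for illustration:

```javascript
// A representation of the current application state: an order that has
// been placed but not yet paid. The "links" advertise the transitions the
// client may take next; the client discovers them instead of knowing them upfront.
var orderRepresentation = {
  id: 42,
  status: "placed",
  total: "25.00",
  links: [
    { rel: "self",    href: "http://example.com/orders/42" },
    { rel: "payment", href: "http://example.com/orders/42/payment" },
    { rel: "cancel",  href: "http://example.com/orders/42" }
  ]
};

// A generic client helper: find the URI for a transition by its link relation.
function linkFor(representation, rel) {
  var matches = representation.links.filter(function (l) { return l.rel === rel; });
  return matches.length ? matches[0].href : null;
}

console.log(linkFor(orderRepresentation, "payment"));
// "http://example.com/orders/42/payment"
```

Because the next states travel inside the representation, the client needs no out-of-band knowledge of the server's URI structure: it simply follows the links it is given.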

REST - An Architectural Style, Not a Standard. While REST is not a standard, it does use the following standards:
  • HTTP
  • URL
  • XML/HTML/GIF/JPEG/etc (Resource Representations)
  • text/xml, text/html, image/gif, image/jpeg, etc (MIME Types)

Principles of REST Web Service Design
  1. The key to creating Web Services in a REST network (i.e., the Web) is to identify all of the conceptual entities that you wish to expose as services.
  2. Create a URL for each resource. The URLs should identify nouns, not verbs. For example, do not use a URL like http://example.com/parts/getPart?id=00345.
        Note the verb, getPart. Instead, use a noun: http://example.com/parts/00345.
  3. Categorize your resources according to whether clients can just receive a representation of the resource, or whether clients can modify (add to) the resource. For the former, make those resources accessible using an HTTP GET. For the latter, make those resources accessible using HTTP POST, PUT, and/or DELETE.
  4. All resources accessible via HTTP GET should be side-effect free. That is, the resource should just return a representation of the resource. Invoking the resource should not result in modifying the resource.
  5. No man/woman is an island. Likewise, no representation should be an island. In other words, put hyperlinks within resource representations to enable clients to drill down for more information, and/or to obtain related information.
  6. Design to reveal data gradually. Don't reveal everything in a single response document. Provide hyperlinks to obtain more details.
  7. Specify the format of response data using a schema (DTD, W3C Schema, RelaxNG, or Schematron). For those services that require a POST or PUT, also provide a schema to specify the format of the data they accept.
  8. Describe how your services are to be invoked using either a WSDL document, or simply an HTML document.
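The noun-based URLs and side-effect-free GET of points 2-4 can be sketched as a simple in-memory resource handler. The resource names and data here are hypothetical:

```javascript
// In-memory "parts" resource. GET is safe: it only returns a representation.
// Modification is confined to a separate POST handler.
var parts = { "101": { id: "101", name: "widget" } };

function handleGet(path) {
  // Noun-style URL: /parts/101 identifies the resource directly.
  var id = path.replace("/parts/", "");
  return parts[id] || null; // no side effects, just a representation (or null)
}

function handlePost(path, body) {
  var id = path.replace("/parts/", "");
  parts[id] = body; // modification is explicit and happens only here
  return parts[id];
}

console.log(handleGet("/parts/101").name); // "widget"
```

Calling handleGet any number of times never changes the resource, which is exactly the guarantee point 4 asks for.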

Tuesday, September 30, 2014

MVC 4 : Dependency Injection

Dependency Injection
ASP.NET MVC includes a dependency resolver (introduced in version 3), which dramatically improves the ability of an application to participate in dependency injection, both for services consumed by MVC and for commonly created classes like controllers and view pages.
The dependency injection (DI) pattern is another form of the inversion of control pattern, wherein there is no intermediary object like the service locator. Instead, components are written in a way that allows their dependencies to be stated explicitly, usually by way of constructor parameters or property setters.
Constructor Injection
The most common form of dependency injection is called constructor injection. This technique involves creating a constructor for your class that expresses all of its dependencies explicitly:
    public class NotificationSystem {
        private IMessagingService svc;

        public NotificationSystem(IMessagingService service) {
            this.svc = service;
        }
        public void InterestingEventHappened() {
            // Uses the injected service (SendMessage is a hypothetical method).
            svc.SendMessage("Something interesting happened.");
        }
    }
In this code, the first benefit is that the implementation of the constructor is dramatically simplified. The component is always expecting whoever creates it to pass the required dependencies. It only needs to store the instance of IMessagingService for later use. Another benefit is that you’ve reduced the number of things NotificationSystem needs to know about.

Property Injection
A less common form of dependency injection is called property injection. As the name implies, dependencies for a class are injected by setting public properties on the object rather than through the use of constructor parameters.
    public class NotificationSystem {
        public IMessagingService MessagingService { get; set; }
        public void InterestingEventHappened() {
            MessagingService.SendMessage("Something interesting happened.");
        }
    }
This code removes the constructor arguments (in fact, it removes the constructor entirely) and replaces them with a property. This class expects any consumers to provide its required dependencies via the property rather than the constructor.

Dependency Resolution
Using a dependency injection container is one way to make the resolution of these dependencies simpler. A dependency injection container is a software library that acts as a factory for components, automatically inspecting and fulfilling their dependency requirements. The consumption portion of the API for a dependency injection container looks a lot like a service locator, because the primary action you ask it to perform is to provide you with some component, usually based on its type. The primary way that MVC talks to containers is through an interface created for MVC applications: IDependencyResolver.
The interface is defined as follows:
    public interface IDependencyResolver {
        object GetService(Type serviceType);
        IEnumerable<object> GetServices(Type serviceType);
    }

This interface is consumed by the MVC framework itself. If you want to register a dependency injection container (or a service locator, for that matter), you need to provide an implementation of this interface. You can typically register an instance of the resolver inside your Global.asax file as follows:
       DependencyResolver.Current = new MyDependencyResolver();

Tuesday, August 26, 2014


SignalR

The official website of the project presents SignalR as:
“ASP.NET SignalR is a new library for ASP.NET developer that makes it incredibly simple to add real-time web functionality to your applications”

And Wiki says:
"SignalR is a server-side software system designed for writing scalable Internet applications, notably web servers. Programs are written on the server side in C#, using event-driven, asynchronous I/O to minimize overhead and maximize scalability."

Basically, SignalR isolates developers from low-level details, giving the impression of working on a permanently open persistent connection. To achieve this, SignalR includes components specific to both ends of the communication, which facilitate message delivery and reception in real time between the two. SignalR is in charge of determining the best technique available at both the client and the server (Long Polling, Forever Frame, WebSockets...) and uses it to create an underlying connection and keep it continuously open, automatically managing disconnections and reconnections when necessary. SignalR provides a virtual persistent connection and makes sure that everything works correctly behind the scenes.

SignalR includes a set of transports or techniques to keep the underlying connection to the server open, and it determines which one it should use based on certain factors, such as the availability of the technology at both ends. SignalR will always try to use the most efficient transport, and will keep falling back until it finds one that is compatible with the context. This decision is made automatically during an initial stage in the communication between the client and the server, known as “negotiation”. It is also possible to force the use of a specific transport using the client libraries of the framework.
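The negotiation described above can be pictured as a simple fallback loop. This is a conceptual sketch only; the transport names match SignalR's, but the code is not SignalR's actual implementation:

```javascript
// Transports ordered from most to least efficient, as SignalR prefers them.
var preferredTransports = ["webSockets", "serverSentEvents", "foreverFrame", "longPolling"];

// Pick the first transport both ends support, falling back down the list.
function negotiateTransport(clientSupports, serverSupports) {
  for (var i = 0; i < preferredTransports.length; i++) {
    var t = preferredTransports[i];
    if (clientSupports.indexOf(t) !== -1 && serverSupports.indexOf(t) !== -1) {
      return t;
    }
  }
  return null; // no compatible transport found
}

console.log(negotiateTransport(["longPolling", "webSockets"], ["webSockets", "longPolling"]));
// "webSockets" - the most efficient transport both ends share
```

In real SignalR this decision happens during the "negotiation" stage, and the client libraries also let you force a specific transport explicitly.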

SignalR also includes a messaging bus capable of managing data transmission and reception between the server and the clients connected to the service. That is, the server is able to keep track of its clients and detect their connections and disconnections, and it will also have mechanisms to easily send messages to all clients connected or part of them, automatically managing all issues concerning communications (different speeds, latency, errors…) and ensuring the delivery of messages. Moreover, it includes powerful libraries on the client side that allow the consumption of services from virtually any kind of application, allowing us to manage our end of the virtual connection and send or receive data asynchronously.

In short, SignalR provides a rich platform for creating multiuser real-time applications.

SignalR offers two different levels of abstraction above the transports used to maintain the connection with the server.

The first one, called Persistent Connections, is the lower level, closer to the reality of the connections. In fact, it creates a development surface quite similar to programming with sockets, although here it is done on the virtual connection established by SignalR.
The second level of abstraction, based on components called Hubs, is much further removed from the underlying connections and protocols, offering a programming model where the boundaries traditionally separating the client and the server melt away.

Saturday, June 28, 2014

JavaScript Engine : V8

A JavaScript engine is a process virtual machine that interprets and executes JavaScript. Although there are several uses for a JavaScript engine, it is most commonly used in web browsers. Web browsers typically use the engine's public application programming interface (API) to create "host objects" responsible for reflecting the Document Object Model (DOM) into JavaScript.
The web server is another common application of the engine. A JavaScript web server exposes host objects representing HTTP request and response objects, which a JavaScript program then manipulates to dynamically generate web pages.


The V8 JavaScript Engine is an open source JavaScript engine developed by Google for the Google Chrome web browser. V8 compiles JavaScript to native machine code (IA-32, x86-64, ARM, or MIPS ISAs) before executing it, instead of more traditional techniques such as interpreting bytecode or compiling the whole program to machine code and executing it from a filesystem. The compiled code is additionally optimized (and re-optimized) dynamically at runtime, based on heuristics of the code's execution profile. Optimization techniques used include inlining, elision of expensive runtime properties, and inline caching, among many others.
V8 is a new JavaScript engine specifically designed for fast execution of large JavaScript applications. It handles memory allocation for objects, and garbage collects objects it no longer needs. V8's stop-the-world, generational, accurate garbage collector is one of the keys to V8's performance.
There are three key areas to V8's performance:
  • Fast Property Access
  • Dynamic Machine Code Generation
  • Efficient Garbage Collection
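Fast property access, for example, relies on V8's hidden classes: objects created with the same properties in the same order share a layout, so property lookups can compile down to fixed offsets. The snippet below illustrates the coding guideline that follows from this (it demonstrates the style, not V8's internals, which are not observable from JavaScript):

```javascript
// Objects built with the same properties in the same order share a
// hidden class in V8, keeping property access on the fast path.
function Point(x, y) {
  this.x = x; // always initialize every property, in the same order
  this.y = y;
}

var a = new Point(1, 2);
var b = new Point(3, 4);

// Adding properties in varying orders (or conditionally) would give the
// objects different hidden classes and make call sites polymorphic.
function distanceSquared(p, q) {
  var dx = p.x - q.x;
  var dy = p.y - q.y;
  return dx * dx + dy * dy;
}

console.log(distanceSquared(a, b)); // 8
```

Because every Point has the same shape, V8's inline caches can serve p.x and p.y without repeated dynamic lookups.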
Some Facts about V8:
  • V8 is Google's open source JavaScript engine. 
  • V8 is written in C++ and is used in Google Chrome, the open source browser from Google. 
  • V8 can run standalone, or can be embedded into any C++ application. 
  • The garbage collector of V8 is a generational incremental collector.
  • V8 assembler is based on the Strongtalk assembler.
  • V8 is intended to be used both in a browser and as a standalone high-performance engine that can be integrated into independent projects, for example server-side JavaScript in Node.js, or client side JavaScript in .NET/Mono using V8.NET.

Wednesday, May 21, 2014

Message-oriented middleware - RabbitMQ

RabbitMQ is open source message broker software (sometimes called message-oriented middleware) that implements the Advanced Message Queuing Protocol (AMQP). The RabbitMQ server is written in the Erlang programming language and is built on the Open Telecom Platform framework for clustering and failover. Client libraries to interface with the broker are available for all major programming languages.

Message-oriented middleware (MOM) is software or hardware infrastructure supporting sending and receiving messages between distributed systems. MOM allows application modules to be distributed over heterogeneous platforms and reduces the complexity of developing applications that span multiple operating systems and network protocols. The middleware creates a distributed communications layer that insulates the application developer from the details of the various operating systems and network interfaces. APIs that extend across diverse platforms and networks are typically provided by MOM.

The Advanced Message Queuing Protocol (AMQP) is an open standard application layer protocol for message-oriented middleware. The defining features of AMQP are message orientation, queuing, routing (including point-to-point and publish-and-subscribe), reliability and security. AMQP mandates the behavior of the messaging provider and client to the extent that implementations from different vendors are truly interoperable, in the same way as SMTP, HTTP, FTP, etc. have created interoperable systems. Previous attempts to standardize middleware have happened at the API level (e.g. JMS) and thus did not ensure interoperability. Unlike JMS, which merely defines an API, AMQP is a wire-level protocol. A wire-level protocol is a description of the format of the data that is sent across the network as a stream of octets. Consequently any tool that can create and interpret messages that conform to this data format can interoperate with any other compliant tool irrespective of implementation language.
RabbitMQ highlights:
  • Robust messaging for applications
  • Easy to use
  • Runs on all major operating systems
  • Supports a huge number of developer platforms
  • Open source
Messaging enables software applications to connect and scale. Applications can connect to each other, as components of a larger application, or to user devices and data. Messaging is asynchronous, decoupling applications by separating sending and receiving data.
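As a toy illustration of the decoupling that messaging provides, here is an in-memory sketch of the queue-and-consume pattern that AMQP formalizes. This is not RabbitMQ's API; a real client would use a library such as amqplib against a running broker:

```javascript
// A minimal in-memory message queue: producers and consumers never
// reference each other, only the named queue - the essence of MOM.
var queues = {};

function publish(queueName, message) {
  (queues[queueName] = queues[queueName] || []).push(message);
}

function consume(queueName) {
  var q = queues[queueName] || [];
  return q.length ? q.shift() : null; // null when the queue is empty
}

publish("orders", { id: 1, item: "book" });
publish("orders", { id: 2, item: "pen" });

console.log(consume("orders").item); // "book" - FIFO delivery
```

The producer can run long before the consumer does (or on a different machine, behind a real broker); neither side needs to know the other exists, which is what "decoupling applications by separating sending and receiving data" means in practice.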

Tuesday, April 22, 2014

Service Virtualization

Service Virtualization is a method to emulate the behavior of specific components in heterogeneous component-based applications such as API-driven applications, cloud-based applications and service-oriented architectures. Service virtualization emulates the behavior of software components to remove dependency constraints on development and testing teams which enable end-to-end testing of the application as a whole. 
Test environments can use virtual services in lieu of the production services to conduct integration testing earlier in the development process. Service virtualization can be useful for anyone involved in developing and delivering software applications. Integration testing of these applications is often delayed because some of the components the application depends on aren’t available. Service virtualization enables earlier and more frequent integration testing by emulating the unavailable component dependencies.
  • Application emulation: Virtual components can simulate the behavior of an entire application or a specific component.
  • Multiple test environments: Developers and quality professionals may create test environments by using virtual components configured for their needs.
  • Same testing tools: Developers and quality professionals can use the same testing tools that they have used in the past — the tools can’t tell the difference between a real system and a virtual service. 
Benefits :
  • Reducing costs: Test lab infrastructure costs can be pricey. Instead of provisioning large servers or mainframes, a virtual test environment can run on low-cost commodity hardware. The environment can easily be reconfigured for different testing needs or projects.
  • Improving productivity: With service virtualization you don’t have restraints in the way you do testing or development. Virtual components are available 24/7. This means that productivity can be greatly increased, and resources can be freed up for other value add activities or additional testing process improvements like the inclusion of exploratory testing.
  • Reducing risk: Service virtualization can also help reduce risk. You can test software earlier in the process, which means defects can be addressed earlier, producing fewer surprises toward the end of the schedule. The final product may be put into production earlier and with fewer errors.
  • Increasing quality: Service virtualization can improve the overall quality of the application because it increases the efficiency of any testing being performed. As a result, teams are able to do a more thorough job of testing their applications and get higher quality software to market faster.
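In its simplest form, a virtual service is just a stub that emulates the responses of an unavailable dependency so integration tests can run against it. The service and responses below are hypothetical:

```javascript
// A virtual "inventory service": emulates the real component's
// request/response behavior without the real backend being available.
function createVirtualInventoryService(cannedResponses) {
  return {
    getStock: function (sku) {
      // Return the configured canned response, or a default "out of stock"
      // reply - much as a virtualization tool replays recorded traffic.
      return cannedResponses[sku] !== undefined
        ? { sku: sku, inStock: cannedResponses[sku] }
        : { sku: sku, inStock: 0 };
    }
  };
}

var virtualService = createVirtualInventoryService({ "ABC-1": 7 });
console.log(virtualService.getStock("ABC-1").inStock); // 7
```

Because the stub honors the same interface as the real service, the application under test cannot tell the difference, which is exactly what lets integration testing start before the real dependency is available.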

Sunday, March 23, 2014

JQuery: Deferred Object

The Deferred object, introduced in jQuery 1.5, is a chainable utility object created by calling the jQuery.Deferred() method. It can register multiple callbacks into callback queues, invoke callback queues, and relay the success or failure state of any synchronous or asynchronous function.
In computer science, future, promise, and delay refer to constructs used for synchronizing in some concurrent programming languages. They describe an object that acts as a proxy for a result that is initially unknown, usually because the computation of its value is yet incomplete. A deferred is an object representing work that is not yet done and a promise is an object representing a value that is not yet known. In other words, promises / deferreds allow us to represent ‘simple’ tasks and can be easily combined to represent complex tasks and their flows, allowing for fine-grained control over sequencing. This means we can write asynchronous JavaScript parallel to how we write synchronous code. Additionally, promises make it relatively simple to abstract small pieces of functionality shared across multiple asynchronous tasks.
Similar to the jQuery object, the Deferred object is chainable. $.Deferred() / jQuery.Deferred() is a constructor that creates a new deferred object. A Deferred object starts in the pending state. Any callbacks added to the object with deferred.then(), deferred.always(), deferred.done(), or deferred.fail() are queued to be executed later. Calling deferred.resolve() or deferred.resolveWith() transitions the Deferred into the resolved state and immediately executes any doneCallbacks that are set. Calling deferred.reject() or deferred.rejectWith() transitions the Deferred into the rejected state and immediately executes any failCallbacks that are set. Once the object has entered the resolved or rejected state, it stays in that state. Callbacks can still be added to a resolved or rejected Deferred; they will execute immediately.
Key Points:
  • deferred.always(), deferred.done(), and deferred.fail() return the deferred object.
  • deferred.then() and deferred.promise() return a promise; $.when() also returns a promise.
  • $.ajax() and $.get() return promise objects
  • instead of using .resolveWith() and .rejectWith(), you can call resolve with the context you want it to inherit
  • pass the deferred.promise() around instead of the deferred itself as the deferred object itself cannot be resolved or rejected through it.
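The state machine described above can be sketched in a few lines of plain JavaScript. This is a toy, not jQuery's implementation; it covers only done-callbacks and the resolve transition:

```javascript
// A toy deferred: pending -> resolved, and callbacks added after resolution
// still run immediately, mirroring jQuery's documented behavior.
function createDeferred() {
  var state = "pending";
  var value;
  var doneCallbacks = [];
  return {
    state: function () { return state; },
    done: function (cb) {
      if (state === "resolved") {
        cb(value); // already resolved: execute immediately
      } else {
        doneCallbacks.push(cb); // still pending: queue for later
      }
      return this; // chainable, like jQuery's Deferred
    },
    resolve: function (v) {
      if (state !== "pending") { return this; } // final states never change
      state = "resolved";
      value = v;
      doneCallbacks.forEach(function (cb) { cb(v); });
      return this;
    }
  };
}

var d = createDeferred();
var results = [];
d.done(function (v) { results.push("first: " + v); });
d.resolve(42);
d.done(function (v) { results.push("second: " + v); }); // runs immediately
console.log(results); // ["first: 42", "second: 42"]
```

jQuery's real Deferred adds the rejected state, fail/always/then, context control via resolveWith/rejectWith, and the restricted promise() view mentioned in the last bullet above.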

Friday, February 28, 2014

Principles of Software Architecture

Key Design Principles
When getting started with application design, keep in mind the key principles that will help to create an architecture that adheres to proven principles, minimizes costs and maintenance requirements, and promotes usability and extendability. The key principles are:
  • Separation of concerns. Divide your application into distinct features with as little overlap in functionality as possible. The important factor is minimizing interaction points to achieve high cohesion and low coupling. However, separating functionality at the wrong boundaries can result in high coupling and complexity between features, even though the contained functionality within a feature does not significantly overlap.
  • Single Responsibility principle. Each component or module should be responsible for only a specific feature or functionality, or aggregation of cohesive functionality.
  • Principle of Least Knowledge (also known as the Law of Demeter or LoD). A component or object should not know about internal details of other components or objects.
  • Don’t repeat yourself (DRY). In terms of application design, specific functionality should be implemented in only one component; the functionality should not be duplicated in any other component.
  • Minimize upfront design. Only design what is necessary. In some cases, comprehensive upfront design and testing may be required, if the cost of development or of a failure in the design is very high. In other cases, especially for agile development, big design upfront (BDUF) can be avoided. If application requirements are unclear, or if there is a possibility of the design evolving over time, avoid making a large design effort prematurely. This principle is sometimes known as YAGNI ("You ain't gonna need it").
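The single responsibility and DRY principles above can be shown in a few lines. The component names here are hypothetical:

```javascript
// Before: one object would format reports AND send them - two reasons to change.
// After: each responsibility lives in exactly one component.
function createReportFormatter() {
  return {
    format: function (data) {
      return "Report: " + data.join(", "); // formatting logic lives only here (DRY)
    }
  };
}

function createReportSender(send) {
  return {
    // Delivery is a separate concern; the transport function is injected,
    // so the sender knows nothing about formatting (Law of Demeter).
    deliver: function (text) { return send(text); }
  };
}

var formatter = createReportFormatter();
var sent = [];
var sender = createReportSender(function (text) { sent.push(text); return true; });

sender.deliver(formatter.format(["a", "b"]));
console.log(sent[0]); // "Report: a, b"
```

Changing the report layout now touches only the formatter, and swapping the delivery channel touches only the sender: each component has a single reason to change.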
Key Design Considerations
The major design considerations are listed as follows:
  • Determine the Application Type
  • Determine the Deployment Strategy
  • Determine the Appropriate Technologies
  • Determine the Quality Attributes
  • Determine the Crosscutting Concerns
The figure below illustrates a common application architecture with components grouped by different areas of concern.

Thursday, January 9, 2014

JSLint : Static program analysis tool

JSLint is a static program analysis tool developed by Douglas Crockford. Static program analysis is the analysis of computer software that is performed without actually executing it. JSLint is used for checking whether JavaScript source code complies with coding rules. It can be referred to as a code quality tool.
JavaScript is a young-for-its-age language. It was originally intended to do small tasks in webpages, tasks for which Java was too heavy and clumsy. But JavaScript is a surprisingly capable language, and it is now being used in larger projects. Many of the features that were intended to make the language easy to use are troublesome when projects become complicated. A lint for JavaScript is needed: JSLint, a JavaScript syntax checker and validator.
JSLint takes a JavaScript source and scans it. If it finds a problem, it returns a message describing the problem and an approximate location within the source. The problem is not necessarily a syntax error, although it often is. JSLint looks at some style conventions as well as structural problems. It does not prove that your program is correct. It just provides another set of eyes to help spot problems.
JSLint is provided primarily as an online tool, but there are also command-line adaptations. The JSLint.VS2012 extension is also available for Microsoft Visual Studio 2012.
Also, JSONLint is a JSON validator available for online use.

JSLint.NET is a wrapper for Douglas Crockford's JSLint, the JavaScript code quality tool. It can validate JavaScript anywhere .NET runs.
At its core, the JSLint.NET project aims to provide:
  • a complete, accurate and up-to-date wrapper for JSLint
  • an interface that's natural for .NET developers while staying true-to-source
  • a suite of valuable tools that assist JavaScript developers working on Microsoft platforms
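As an example of the kind of problem such a tool spots, JSLint flags the loose == operator because of its surprising type coercions, preferring ===. The snippet below shows the difference (the warning text in the comment approximates JSLint's message):

```javascript
// Loose equality coerces types: "0" == 0 is true, which often hides bugs.
// JSLint-style rules push you toward strict equality instead.
function looseIsZero(value) {
  return value == 0; // JSLint would warn: expected '===' and instead saw '=='
}

function strictIsZero(value) {
  return value === 0; // passes the lint rule; no type coercion
}

console.log(looseIsZero("0"));  // true - "0" is coerced to a number
console.log(strictIsZero("0")); // false - different types never compare equal
```

Neither version is a syntax error; this is exactly the "another set of eyes" role described above, catching style and correctness hazards the parser accepts.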