Friday, January 2, 2015

Welcome 2015

Along with all the new hopes and promises
that the New Year would bring
Hope it also brings us a lot more opportunities
New Aim, New Dreams, New Achievements
Everything Waiting for You
Forget The Failures, Correct Your Mistakes
Surely Success is yours.
Wish you a Happy New Year 2015!

Monday, December 22, 2014


Ember.js

Ember.js is a framework that enables developers to build “ambitious” web applications. The term “ambitious” can mean different things to different people, but as a general rule, Ember.js aims to help developers push the envelope of what can be built for the web, while ensuring that application source code remains structured and sane. Ember.js achieves this goal by structuring an application into logical abstraction layers and forcing the development model to be as object-oriented as possible. At its core, Ember.js has built-in support for the following features:
  • Bindings - Enable changes to one variable to propagate into another variable, and vice versa
  • Computed properties - Enable functions to be marked as properties that automatically update along with the properties they rely on
  • Automatically updated templates - Ensure that the GUI stays up to date whenever changes occur in the underlying data
The combination of these features with a strong, well-planned Model-View-Controller (MVC) architecture results in a framework that delivers on its promise.
The image below shows the internal architecture of the Ember framework:
An Ember.js application includes a complete MVC implementation, which enriches both the controller and the view layers, while Ember Data enriches the model layer:
  • Model layer - Built with Ember objects
  • View layer - Built with a combination of templates and views
  • Controller layer - Built with a combination of routes and controllers
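The computed-property idea above can be sketched in plain JavaScript; this is an illustration of the concept only, not Ember's actual API:

```javascript
// Illustrative sketch: fullName is derived from firstName and lastName,
// so it is always up to date when either one changes.
function createPerson(firstName, lastName) {
  return {
    firstName: firstName,
    lastName: lastName,
    get fullName() {
      return this.firstName + ' ' + this.lastName;
    }
  };
}

var person = createPerson('John', 'Doe');
console.log(person.fullName); // "John Doe"
person.lastName = 'Smith';
console.log(person.fullName); // "John Smith"
```

In Ember itself, a computed property additionally caches its value and notifies observers and templates when its dependencies change, which is what keeps the GUI in sync automatically.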

Saturday, November 29, 2014

Wednesday, October 29, 2014

Representational state transfer (REST)

Roy Fielding generalized the Web’s architectural principles and presented them as a framework of constraints, or an architectural style in his Ph.D. dissertation. Through this framework, Fielding described how distributed information systems such as the Web are built and operated. He described the interplay between resources, and the role of unique identifiers in such systems. He also talked about using a limited set of operations with uniform semantics to build a ubiquitous infrastructure that can support any type of application. Fielding referred to this architectural style as REpresentational State Transfer, or REST. REST describes the Web as a distributed hypermedia application whose linked resources communicate by exchanging representations of resource state.
Representational state transfer (REST) is an abstraction of the architecture of the World Wide Web; more precisely, REST is an architectural style consisting of a coordinated set of architectural constraints applied to components, connectors, and data elements, within a distributed hypermedia system. REST ignores the details of component implementation and protocol syntax in order to focus on the roles of components, the constraints upon their interaction with other components, and their interpretation of significant data elements.
The description of the Web, as captured in W3C’s “Architecture of the World Wide Web” and other IETF RFC documents, was heavily influenced by Fielding’s work. The architectural abstractions and constraints he established led to the introduction of hypermedia as the engine of application state.
The idea is simple, and yet very powerful. A distributed application makes forward progress by transitioning from one state to another, just like a state machine. The difference from traditional state machines, however, is that the possible states and the transitions between them are not known in advance. Instead, as the application reaches a new state, the next possible transitions are discovered.
In a hypermedia system, application states are communicated through representations of uniquely identifiable resources. The identifiers of the states to which the application can transition are embedded in the representation of the current state in the form of links.
Example of hypermedia as the engine for application state in action
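As a sketch, consider an order resource whose representation embeds links to the next possible state transitions; the resource, its fields, and all URLs here are hypothetical:

```javascript
// Illustrative sketch: a representation of the current state carries the
// identifiers of the states the application can transition to, as links.
var orderRepresentation = {
  id: 1234,
  status: 'open',
  items: [{ product: 'widget', quantity: 2 }],
  links: [
    { rel: 'self',    href: '/orders/1234' },
    { rel: 'payment', href: '/orders/1234/payment' }, // transition: pay the order
    { rel: 'cancel',  href: '/orders/1234' }          // transition: cancel (DELETE)
  ]
};

// The client discovers the next possible transitions from the
// representation itself, rather than knowing them in advance.
var next = orderRepresentation.links.map(function (l) { return l.rel; });
console.log(next); // [ 'self', 'payment', 'cancel' ]
```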

REST is an architectural style, not a standard. While REST itself is not a standard, it does use the following standards:
  • HTTP
  • URL
  • XML/HTML/GIF/JPEG/etc. (resource representations)
  • text/xml, text/html, image/gif, image/jpeg, etc. (MIME types)

Principles of REST Web Service Design
  1. The key to creating Web Services in a REST network (i.e., the Web) is to identify all of the conceptual entities that you wish to expose as services.
  2. Create a URL for each resource. The resources should be nouns, not verbs. For example, do not use a URL like this (the address is illustrative):
        http://www.example.com/parts/getPart?id=00345
     Note the verb, getPart. Instead, use a noun:
        http://www.example.com/parts/00345
  3. Categorize your resources according to whether clients can just receive a representation of the resource, or whether clients can modify (add to) the resource. For the former, make those resources accessible using an HTTP GET. For the latter, make those resources accessible using HTTP POST, PUT, and/or DELETE.
  4. All resources accessible via HTTP GET should be side-effect free. That is, the resource should just return a representation of the resource. Invoking the resource should not result in modifying the resource.
  5. No man/woman is an island. Likewise, no representation should be an island. In other words, put hyperlinks within resource representations to enable clients to drill down for more information, and/or to obtain related information.
  6. Design to reveal data gradually. Don't reveal everything in a single response document. Provide hyperlinks to obtain more details.
  7. Specify the format of response data using a schema (DTD, W3C Schema, RelaxNG, or Schematron). For those services that require a POST or PUT, also provide a schema to specify the format of the request data.
  8. Describe how your services are to be invoked using either a WSDL document, or simply an HTML document.
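Principles 2 through 4 can be sketched with a tiny in-memory handler; the URLs and data below are illustrative:

```javascript
// Noun URLs identify resources, GET is side-effect free, and POST
// modifies the resource collection.
var parts = { '00345': { name: 'widget' } };

function handle(method, url, body) {
  var match = url.match(/^\/parts\/(\w+)$/);
  if (method === 'GET' && match) {
    // Just returns a representation; invoking it never modifies the resource.
    return parts[match[1]] || null;
  }
  if (method === 'POST' && url === '/parts') {
    // Modifies the collection resource.
    parts[body.id] = { name: body.name };
    return { created: '/parts/' + body.id };
  }
  return null;
}

console.log(handle('GET', '/parts/00345')); // { name: 'widget' }
console.log(handle('POST', '/parts', { id: '00999', name: 'gear' })); // { created: '/parts/00999' }
```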

Tuesday, September 30, 2014

MVC 4 : Dependency Injection

Dependency Injection
ASP.NET MVC has included a dependency resolver since version 3, which dramatically improves the ability of an application to participate in dependency injection, both for services consumed by MVC and for commonly created classes like controllers and view pages.
The dependency injection (DI) pattern is another form of the inversion of control pattern, wherein there is no intermediary object like the service locator. Instead, components are written in a way that allows their dependencies to be stated explicitly, usually by way of constructor parameters or property setters.
Constructor Injection
The most common form of dependency injection is called constructor injection. This technique involves creating a constructor for your class that expresses all of its dependencies explicitly:
    public class NotificationSystem
    {
        private IMessagingService svc;

        public NotificationSystem(IMessagingService service)
        {
            this.svc = service;
        }

        public void InterestingEventHappened()
        {
            // The exact service call is illustrative.
            svc.SendMessage("Interesting event happened");
        }
    }
In this code, the first benefit is that the implementation of the constructor is dramatically simplified. The component is always expecting whoever creates it to pass the required dependencies. It only needs to store the instance of IMessagingService for later use. Another benefit is that you’ve reduced the number of things NotificationSystem needs to know about.

Property Injection
A less common form of dependency injection is called property injection. As the name implies, dependencies for a class are injected by setting public properties on the object rather than through the use of constructor parameters.
    public class NotificationSystem
    {
        public IMessagingService MessagingService { get; set; }

        public void InterestingEventHappened()
        {
            // The exact service call is illustrative.
            MessagingService.SendMessage("Interesting event happened");
        }
    }
This code removes the constructor arguments (in fact, it removes the constructor entirely) and replaces them with a property. This class expects any consumers to provide the required dependencies via properties rather than through the constructor.

Dependency Resolution
Using a dependency injection container is one way to make the resolution of these dependencies simpler. A dependency injection container is a software library that acts as a factory for components, automatically inspecting and fulfilling their dependency requirements. The consumption portion of the API for a dependency injection container looks a lot like a service locator, because the primary action you ask it to perform is to provide you with a component, usually based on its type. The primary way that MVC talks to containers is through an interface created for MVC applications: IDependencyResolver.
The interface is defined as follows:
    public interface IDependencyResolver
    {
        object GetService(Type serviceType);
        IEnumerable<object> GetServices(Type serviceType);
    }

This interface is consumed by the MVC framework itself. If you want to register a dependency injection container (or a service locator, for that matter), you need to provide an implementation of this interface. You can typically register an instance of the resolver inside your Global.asax file, as shown below:
       DependencyResolver.Current = new MyDependencyResolver();

Tuesday, August 26, 2014


SignalR

The official website of the project presents SignalR as:
“ASP.NET SignalR is a new library for ASP.NET developers that makes it incredibly simple to add real-time web functionality to your applications”

And Wiki says:
"SignalR is a server-side software system designed for writing scalable Internet applications, notably web servers. Programs are written on the server side in C#, using event-driven, asynchronous I/O to minimize overhead and maximize scalability."

Basically, SignalR isolates the developer from low-level details, giving the impression of working on a permanently open persistent connection. To achieve this, SignalR includes components specific to both ends of the communication, which facilitate message delivery and reception in real time between the two. SignalR is in charge of determining the best technique available both at the client and at the server (Long Polling, Forever Frame, WebSockets...) and uses it to create an underlying connection and keep it continuously open, also automatically managing disconnections and reconnections when necessary. In short, SignalR provides a virtual persistent connection and ensures that everything works correctly behind the scenes.

SignalR includes a set of transports or techniques to keep the underlying connection to the server open, and it determines which one it should use based on certain factors, such as the availability of the technology at both ends. SignalR will always try to use the most efficient transport, and will keep falling back until it finds one that is compatible with the context. This decision is made automatically during an initial stage in the communication between the client and the server, known as “negotiation”. It is also possible to force the use of a specific transport using the client libraries of the framework.
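The fallback behavior described above can be sketched in plain JavaScript; this mimics the idea only and is not SignalR's real negotiation code:

```javascript
// Illustrative sketch: try the most efficient transport first, then keep
// falling back until one is supported by both client and server.
var preferredTransports = ['webSockets', 'serverSentEvents', 'foreverFrame', 'longPolling'];

function negotiate(supportedByBothEnds) {
  for (var i = 0; i < preferredTransports.length; i++) {
    if (supportedByBothEnds.indexOf(preferredTransports[i]) !== -1) {
      return preferredTransports[i];
    }
  }
  throw new Error('no compatible transport found');
}

console.log(negotiate(['foreverFrame', 'longPolling'])); // "foreverFrame"
```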

SignalR also includes a messaging bus capable of managing data transmission and reception between the server and the clients connected to the service. That is, the server is able to keep track of its clients and detect their connections and disconnections, and it will also have mechanisms to easily send messages to all clients connected or part of them, automatically managing all issues concerning communications (different speeds, latency, errors…) and ensuring the delivery of messages. Moreover, it includes powerful libraries on the client side that allow the consumption of services from virtually any kind of application, allowing us to manage our end of the virtual connection and send or receive data asynchronously.

In short, SignalR provides a rich platform for creating multiuser, real-time applications.

SignalR offers two different levels of abstraction above the transports used to maintain the connection with the server.

The first one, called Persistent connections, is the lower level, closer to the reality of the connections. In fact, it creates a development surface which is quite similar to programming with sockets, although here it is done on the virtual connection established by SignalR.
The second level of abstraction, based on components called Hubs, is quite further removed from the underlying connections and protocols, offering a very imperative programming model, where the boundaries traditionally separating the client and the server melt away.
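The Hub idea can be sketched in plain JavaScript; this is an illustration of the concept, not SignalR's actual API. The server keeps track of connected clients and can push messages to all of them:

```javascript
// Illustrative sketch: a "hub" tracks connected clients; invoking a
// server-side method pushes a callback to every connected client.
function createHub() {
  var clients = [];
  return {
    connect: function (client) { clients.push(client); },
    // Server-side method that broadcasts to all connected clients.
    send: function (from, message) {
      clients.forEach(function (c) { c.onMessage(from, message); });
    }
  };
}

var hub = createHub();
var received = [];
hub.connect({ onMessage: function (from, msg) { received.push(from + ': ' + msg); } });
hub.connect({ onMessage: function (from, msg) { received.push(from + ': ' + msg); } });
hub.send('alice', 'hello');
console.log(received); // [ 'alice: hello', 'alice: hello' ]
```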

Friday, June 27, 2014

JavaScript Engine : V8

A JavaScript engine is a process virtual machine that interprets and executes JavaScript. Although there are several uses for a JavaScript engine, it is most commonly used in web browsers. Web browsers typically use the engine's public application programming interface (API) to create "host objects" responsible for reflecting the Document Object Model (DOM) into JavaScript.
The web server is another common application of the engine. A JavaScript web server exposes host objects representing HTTP request and response objects, which a JavaScript program then manipulates to dynamically generate web pages.


The V8 JavaScript Engine is an open source JavaScript engine developed by Google for the Google Chrome web browser. V8 compiles JavaScript to native machine code (IA-32, x86-64, ARM, or MIPS ISAs) before executing it, instead of more traditional techniques such as interpreting bytecode or compiling the whole program to machine code and executing it from a filesystem. The compiled code is additionally optimized (and re-optimized) dynamically at runtime, based on heuristics of the code's execution profile. Optimization techniques used include inlining, elision of expensive runtime properties, and inline caching, among many others.
V8 is a new JavaScript engine specifically designed for fast execution of large JavaScript applications. It handles memory allocation for objects, and garbage collects objects it no longer needs. V8's stop-the-world, generational, accurate garbage collector is one of the keys to V8's performance.
There are three key areas to V8's performance:
  • Fast Property Access
  • Dynamic Machine Code Generation
  • Efficient Garbage Collection
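As a small illustration of the first point, a common V8 guideline is to initialize object properties in a consistent order, so that objects share the same internal hidden class and property access stays fast. The example below sketches that guideline, not V8's internals:

```javascript
// Every Point initializes x then y, so all Points share one object "shape",
// which lets the engine reuse its hidden class for fast property access.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

var a = new Point(1, 2);
var b = new Point(3, 4);

function distanceSquared(p, q) {
  // Property loads here stay monomorphic because p and q share a shape.
  var dx = p.x - q.x;
  var dy = p.y - q.y;
  return dx * dx + dy * dy;
}

console.log(distanceSquared(a, b)); // 8
```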
Some Facts about V8:
  • V8 is Google's open source JavaScript engine. 
  • V8 is written in C++ and is used in Google Chrome, the open source browser from Google. 
  • V8 can run standalone, or can be embedded into any C++ application. 
  • The garbage collector of V8 is a generational incremental collector.
  • The V8 assembler is based on the Strongtalk assembler.
  • V8 is intended to be used both in a browser and as a standalone high-performance engine that can be integrated into independent projects, for example, server-side JavaScript in Node.js, or client-side JavaScript in .NET/Mono using V8.NET.