Saturday, January 30, 2016

.NET Core

.NET Core is a modular version of the .NET Framework designed to be portable across platforms for maximum code reuse and code sharing. In addition, .NET Core is open source and accepts contributions from the community.
What is .NET Core?
.NET Core is portable across platforms because, although it is a subset of the full .NET Framework, it provides key functionality that lets you implement the app features you need and reuse that code regardless of your platform target. In the past, different versions of .NET for different platforms lacked shared functionality for key tasks such as reading local files. Microsoft platforms that you will be able to target with .NET Core include traditional desktop Windows, as well as Windows devices and phones. When used with third-party tools such as Xamarin, .NET Core should be portable to iOS and Android devices. In addition, .NET Core will soon be available for the Mac and Linux operating systems to enable web apps to run on those systems.
.NET Core is modular because it is released through NuGet in smaller assembly packages. Rather than one large assembly that contains most of the core functionality, .NET Core is made available as smaller, feature-centric packages. This enables a more agile development model and allows you to pick and choose the functionality pieces you need for your apps and libraries. For more information about .NET packages that release on NuGet, see The .NET Framework and Out-of-Band Releases.
For existing apps, using Portable Class Libraries (PCL) and Universal app projects, and separating business logic from platform-specific code, is the best way to take advantage of .NET Core and maximize your code reuse. For apps, the Model-View-Controller (MVC) and Model-View-ViewModel (MVVM) patterns are good choices for making your apps easy to migrate to .NET Core.
In addition to the modularization of the .NET Framework, Microsoft is open-sourcing the .NET Core packages on GitHub, under the MIT license. This means you can clone the Git repo, read and compile the code and submit pull requests just like any other open source package you might find on GitHub.
.NET Core 5 is a modular runtime and library implementation that includes a subset of the .NET Framework. Currently it is feature complete on Windows, and in-progress builds exist for both Linux and OS X. .NET Core consists of a set of libraries, called “CoreFX”, and a small, optimized runtime, called “CoreCLR”. .NET Core is open-source, so you can follow progress on the project and contribute to it on GitHub:
  • .NET Core Libraries (CoreFX)
  • .NET Core Common Language Runtime (CoreCLR)
The CoreCLR runtime (Microsoft.CoreCLR) and CoreFX libraries are distributed via NuGet. The CoreFX libraries are factored as individual NuGet packages according to functionality, named “System.[module]” on nuget.org.
One of the key benefits of .NET Core is its portability. You can package and deploy the CoreCLR with your application, eliminating your application’s dependency on an installed version of .NET (e.g. .NET Framework on Windows). You can host multiple applications side-by-side using different versions of the CoreCLR, and upgrade them individually, rather than being forced to upgrade all of them simultaneously.
CoreFX has been built as a componentized set of libraries, each requiring the minimum set of library dependencies (e.g. System.Collections only depends on System.Runtime, not System.Xml). This approach enables minimal distributions of CoreFX libraries (just the ones you need) within an application, alongside CoreCLR. CoreFX includes collections, console access, diagnostics, IO, LINQ, JSON, XML, and regular expression support, just to name a few libraries. Another benefit of CoreFX is that it allows developers to target a single common set of libraries that are supported by multiple platforms.
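As a rough sketch of what that factoring means in practice (the program itself is made up purely for illustration), each namespace used below ships as its own NuGet package on .NET Core, so an app only pulls in the functionality it actually uses:

// Illustrative sketch: each namespace below is delivered by its own
// "System.[module]" NuGet package on .NET Core.
using System;                     // System.Console package provides Console
using System.Collections.Generic; // System.Collections package provides List<T>
using System.Linq;                // System.Linq package provides the LINQ operators

public class PackagesDemo
{
    public static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4, 5 };
        var evens = numbers.Where(n => n % 2 == 0);
        Console.WriteLine(string.Join(", ", evens)); // prints: 2, 4
    }
}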

AMQP Essentials

WHAT IS AMQP?
AMQP (Advanced Message Queuing Protocol) is a binary transfer protocol that was designed for enterprise applications and server-to-server communication (e.g., for financial businesses), but today it can be very useful in the Internet of Things world thanks to the following primary features. AMQP is binary and avoids much of the overhead sent on the wire by a text-based protocol like HTTP; because of this, it can be considered compact, too. Thanks to its multiplexed nature, only one connection (over a reliable stream transport protocol) is needed to allow separate data flows between the two peers. It is also symmetric, providing both a client-server communication style and peer-to-peer exchange. Finally, it is secure and reliable, providing three different levels of QoS (Quality of Service).
The last ratified version of AMQP (1.0) is the only one standardized by OASIS (since October 2012) and ISO/IEC (since May 2014), and it is completely broker-model agnostic, as it doesn't impose any requirements on broker internals (this is the main difference from previous, non-standardized versions such as 0.9.1); the protocol focuses on how the data is transferred on the wire.

AMQP ARCHITECTURE
AMQP has a layered model defined in the following way from a bottom-up perspective:
• TRANSPORT/FRAMING:
Defines the connection behavior and the security layer between peers on top of an underlying network transport protocol (TCP, for example). It also defines the framing protocol and how the exchanged data is formatted and encoded.
• MESSAGING:
Provides messaging capabilities at the application level on top of the previous layer, defining the message entity as built of one or more frames.
Regarding the network transport layer, AMQP isn’t strongly tied to TCP, and as such can be used with any reliable stream transport protocol; so, for example, SCTP (Stream Control Transmission Protocol) and pipes are suitable.

AMQP COMMUNICATIONS
All the AMQP concepts—from connection, session, and link to performatives and messages—fit together to define how the communication happens between two peers. The main steps involved are listed below, with a client sketch after the list:
• OPEN/CLOSE a connection (respectively after opening a network connection and before closing it) using “open” and “close” performatives
• BEGIN/END a session inside the connection thanks to “begin” and “end” performatives
• ATTACH/DETACH a link inside the session using “attach” and “detach” performatives
• SEND/RECEIVE messages with flow control thanks to "transfer," "disposition," and "flow" performatives
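As a concrete illustration, the following minimal sketch uses the open-source AMQP.Net Lite client for .NET; the library choice, broker address, and queue name are assumptions made for the example, not part of the AMQP specification itself. Each call maps to one of the performative pairs above.

// Minimal sketch with the AMQP.Net Lite client (NuGet package "AMQPNetLite").
// The broker address and the node name "my_queue" are placeholders.
using Amqp;

class AmqpSketch
{
    static void Main()
    {
        // OPEN: network connection plus the "open" performative
        var connection = new Connection(new Address("amqp://guest:guest@localhost:5672"));

        // BEGIN: a session inside the connection ("begin" performative)
        var session = new Session(connection);

        // ATTACH: a sender link inside the session ("attach" performative)
        var sender = new SenderLink(session, "sender-link", "my_queue");

        // SEND: message transfer with flow control ("transfer"/"disposition"/"flow")
        sender.Send(new Message("Hello AMQP"));

        // DETACH / END / CLOSE, in reverse order
        sender.Close();
        session.Close();
        connection.Close();
    }
}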

Thursday, December 31, 2015

Best Wishes for New Year 2016

A good beginning makes a good end

yield Keyword Explained

When you use the yield keyword in a statement, you indicate that the method, operator, or get accessor in which it appears is an iterator. Using yield to define an iterator removes the need for an explicit extra class when you implement the IEnumerable and IEnumerator pattern for a custom collection type. The yield keyword was introduced in C# 2.0 (an equivalent Yield statement was added to VB.NET later) and is useful for iteration. In iterator blocks it is used to produce the next value.
“The yield keyword helps us to do custom, stateful iteration over .NET collections.” There are two scenarios where the “yield” keyword is useful:
  • Customized iteration through a collection without creating a temporary collection
  • Stateful iteration
It is a contextual keyword used in iterator methods in C#. Basically, it has two use cases, sketched in the example after this list:
  • yield return obj; returns the next item in the sequence.
  • yield break; stops returning sequence elements (this happens automatically if control reaches the end of the iterator method body).
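Here is a tiny sketch showing both forms together; the method and collection names are made up for illustration.

using System;
using System.Collections.Generic;

static class YieldDemo
{
    // Yields values from the source until a negative number is seen.
    static IEnumerable<int> TakeUntilNegative(IEnumerable<int> source)
    {
        foreach (int value in source)
        {
            if (value < 0)
                yield break;       // stop producing the sequence early

            yield return value;    // hand the next item back to the caller
        }
    }

    static void Main()
    {
        foreach (int i in TakeUntilNegative(new[] { 1, 2, -5, 7 }))
            Console.Write("{0} ", i);   // Output: 1 2
    }
}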
When you use the "yield return" keyphrase, .NET is wiring up a whole bunch of plumbing code for you, but for now you can pretend it's magic. When you start to loop in the calling code, this function actually gets called over and over again, but each time it resumes execution where it left off.
Typical Implementation
  1. Caller calls function
  2. Function executes and returns list
  3. Caller uses list
Yield Implementation
  1. Caller calls function
  2. Caller requests item
  3. Next item returned
  4. Goto step #2
Although the execution of the yield implementation is a little more complicated, what we end up with is an implementation that "pulls" items one at a time instead of having to build an entire list before returning to the client.
Example:
using System;

public class PowersOf2
{
    static void Main()
    {
        // Display powers of 2 up to the exponent of 8:
        foreach (int i in Power(2, 8))
        {
            Console.Write("{0} ", i);
        }
    }

    public static System.Collections.Generic.IEnumerable<int> Power(int number, int exponent)
    {
        int result = 1;

        for (int i = 0; i < exponent; i++)
        {
            result = result * number;
            yield return result;
        }
    }
    // Output: 2 4 8 16 32 64 128 256
}

Wednesday, December 30, 2015

RabbitMQ : Tackling the broker SPOF

Though RabbitMQ brokers are extremely stable, a crash is always possible. Losing an instance altogether due to a virtual instance glitch is a likely possibility that can't be ignored. Therefore, it is essential to tackle the broker single point of failure (SPOF) before something bad happens, to prevent losing data or annoying users. RabbitMQ can easily be configured to run in an active/active deployment, where several brokers are engaged in a cluster to act as a single highly available AMQP middleware. The active/active aspect is essential, because it means that no manual fail-over operation is needed if one broker goes down.
Mirroring queues
With clustering, the configuration gets synchronized across all RabbitMQ nodes. This means that clients can now connect to one node or the other and find the exchanges and queues they're expecting. However, there is one thing that is not carried over the cluster by default: the messages themselves. By default, queue data is local to a particular node. When a queue is mirrored, its instances across the network organize themselves around one master and several slaves. All interaction (message queuing and dequeuing) happens with the master; the slaves receive the updates via synchronization over the cluster. If you interact with a node that hosts a slave queue, the interaction would actually be forwarded across the cluster to the master and then synchronized back to the slave.
Activating queue mirroring is done via a policy that is applied to each queue concerned.
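For example (a sketch; the policy name and the queue-name pattern are arbitrary), a policy like the following, applied with rabbitmqctl, mirrors every queue whose name starts with "ha." across all nodes of the cluster:

rabbitmqctl set_policy ha-all "^ha\." '{"ha-mode":"all"}'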
Federating brokers
If you think beyond the notion of a single centralized enterprise resource and instead think in terms of distributed components, the idea of creating a topology of RabbitMQ brokers will emerge. RabbitMQ offers the following two plugins that allow the connection of brokers:
• The shovel plugin, which connects queues in one broker to exchanges in another broker
• The federation plugin, which connects queues to queues or exchanges to exchanges across brokers 
Both plugins ensure reliable delivery of messages across brokers; if messages can't be routed to the target broker, they'll remain safely accumulated. Neither requires brokers to be clustered, which simplifies setup and management (RabbitMQ and Erlang versions can even mismatch). The federation plugin needs to be installed on all RabbitMQ nodes that will engage in the topology.
Keep in mind...
  • RabbitMQ relies on Erlang's clustering feature, which allows several Erlang nodes to communicate with each other locally or over the network.
  • The Erlang cluster requires a so-called security cookie as a means of cross-node authentication. 
  • If RabbitMQ instances are firewalled from each other, it is necessary to open specific ports in addition to the one used by AMQP (5672); otherwise, the cluster will not work.
  • Make sure the same version of Erlang is used by all the RabbitMQ nodes that engage in a cluster; otherwise, the join_cluster command will fail with an OTP version mismatch error.
  • Similarly, the same major/minor version of RabbitMQ should be used across nodes, but patch versions can differ; this means that versions 3.2.1 and 3.2.0 can be used in the same cluster, but not 3.2.1 and 3.1.0.
  • In a federation, only the node where messages converge needs to be manually configured; its upstream nodes get automatically configured for the topology. Conversely with shovels, each source node needs to be manually configured to send to a destination node, which itself is unaware of the fact that it's engaged in a particular topology.

Thursday, December 24, 2015

WS-Policy

WS-Policy is a specification that allows web services to use XML to advertise their policies (on security, quality of service, etc.) and for web service consumers to specify their policy requirements. WS-Policy is a W3C recommendation as of September 2007. WS-Policy represents a set of specifications that describe the capabilities and constraints of the security (and other business) policies on intermediaries and end points (for example, required security tokens, supported encryption algorithms, and privacy rules) and how to associate policies with services and end points.
  • To integrate software systems with web services, there must be a way to express web service characteristics
  • Without this standard, developers need to rely on out-of-band documentation
  • WS-Policy provides a flexible and extensible grammar for expressing the capabilities, requirements, and general characteristics of Web Service entities
  • It defines a model to express these properties as policies
  • It provides the mechanisms needed to enable Web Services applications to specify policies
WS-Policy specifies:
  • An XML-based structure called a policy expression containing policy information
  • Grammar elements to indicate how the contained policy assertions apply
Terminology:
  • Policy: refers to the set of information being expressed as policy assertions
  • Policy Assertion: represents an individual preference, requirement, capability, etc.
  • Policy Expression: set of one or more policy assertions
  • Policy Subject: an entity to which a policy expression can be bound
Policy Namespaces:
  • The WS-Policy schema defines all constructs that can be used in a policy expression
Prefix | Description | Namespace
wsp | WS-Policy, WS-PolicyAssertions, and WS-PolicyAttachment | http://schemas.xmlsoap.org/ws/2002/12/policy
wsse | WS-SecurityPolicy | http://schemas.xmlsoap.org/ws/2002/12/secext
wsu | WS utility schema | http://schemas.xmlsoap.org/ws/2002/07/utility
msp | WSE 2.0 policy schema | http://schemas.microsoft.com/wse/2003/06/policy
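To make the structure concrete, here is a minimal, purely illustrative policy expression using the wsp and wsse namespaces from the table above; the specific assertion shown (a required Kerberos security token) is only an example of what such an assertion might look like, not a definitive form.

<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2002/12/policy"
            xmlns:wsse="http://schemas.xmlsoap.org/ws/2002/12/secext">
  <wsp:ExactlyOne>
    <!-- One acceptable alternative: the requester must present a Kerberos token -->
    <wsse:SecurityToken wsp:Usage="wsp:Required">
      <wsse:TokenType>wsse:Kerberosv5TGT</wsse:TokenType>
    </wsse:SecurityToken>
  </wsp:ExactlyOne>
</wsp:Policy>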

Monday, November 30, 2015

MongoDB Fundamentals

MongoDB is a document-oriented DBMS. Its data model is made up of JSON-like documents rather than RDBMS tables. Significantly, MongoDB supports neither joins nor transactions. However, it features secondary indexes, an expressive query language, atomic writes on a per-document level, and fully consistent reads. Operationally, MongoDB features master-slave replication with automated failover and built-in horizontal scaling via automated range-based partitioning.
Instead of tables, a MongoDB database stores its data in collections, which are the rough equivalent of RDBMS tables. A collection holds one or more documents, each of which corresponds to a record or a row in a relational database table, and each document has one or more fields, which correspond to the columns in a relational database table.
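For instance, a single document in a hypothetical "contacts" collection might look like the following; every field name and value here is made up purely for illustration, with each field playing the role a column would play in a relational table:

{
    "_id": ObjectId("565a1234abcd5678ef901234"),
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "tags": ["math", "computing"],
    "address": { "city": "London", "country": "UK" }
}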
MongoDB Storage
A storage engine is the part of a database that is responsible for managing how data is stored on disk. Many databases support multiple storage engines, where different engines perform better for specific workloads. For example, one storage engine might offer better performance for read-heavy workloads, and another might support higher throughput for write operations. MMAPv1 is the default storage engine in MongoDB 3.0. With multiple storage engines, you can decide which storage engine is best for your application.
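For example (a sketch; the data path is a placeholder), MongoDB 3.0 lets you pick an alternative engine such as WiredTiger when starting the server:

mongod --storageEngine wiredTiger --dbpath /data/db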
MMAPv1 Storage Engine
A memory-mapped file is a file with data that the operating system places in memory by way of the mmap() system call. mmap() thus maps the file to a region of virtual memory. Memory-mapped files are the critical piece of the MMAPv1 storage engine in MongoDB. By using memory mapped files, MongoDB can treat the contents of its data files as if they were in memory. This provides MongoDB with an extremely fast and simple method for accessing and manipulating data.
MongoDB uses memory mapped files for managing and interacting with all data. Memory mapping assigns files to a block of virtual memory with a direct byte-for-byte correlation. MongoDB memory maps data files to memory as it accesses documents. Unaccessed data is not mapped to memory. Once mapped, the relationship between file and memory allows MongoDB to interact with the data in the file as if it were memory.

Database Properties
Atomicity and Transactions
In MongoDB, a write operation is atomic on the level of a single document, even if the operation modifies multiple embedded documents within a single document. When a single write operation modifies multiple documents, the modification of each document is atomic, but the operation as a whole is not atomic and other operations may interleave. However, we can isolate a single write operation that affects multiple documents using the $isolated operator.
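As a sketch (the collection and field names are hypothetical), a multi-document update can include $isolated in its query document so that no other operation interleaves with it:

// Debit every active account by 10 as one isolated multi-document update
db.accounts.update(
    { status: "active", $isolated: 1 },
    { $inc: { balance: -10 } },
    { multi: true }
)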
Get more information at Concurrency.
Consistency
MongoDB is consistent by default: reads and writes are issued to the primary member of a replica set. Applications can optionally read from secondary replicas, where data is eventually consistent by default. Reads from secondaries can be useful in scenarios where it is acceptable for data to be slightly out of date, such as some reporting applications. Applications can also read from the closest copy of the data (as measured by ping distance) when latency is more important than consistency.
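For example (the hostnames and replica set name are placeholders), a driver connection string can opt in to reading from secondaries when they are available:

mongodb://db1.example.com,db2.example.com/?replicaSet=rs0&readPreference=secondaryPreferred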
Get more information at Consistency.