Friday, April 22, 2016

Comparison of RabbitMQ, ActiveMQ, and ZeroMQ Message Brokers

RabbitMQ is one of the leading implementations of the AMQP protocol (along with Apache Qpid). It therefore implements a broker architecture, meaning that messages are queued on a central node before being sent to clients. This approach makes RabbitMQ very easy to use and deploy, because advanced scenarios like routing, load balancing or persistent message queuing are supported in just a few lines of code. However, it also makes it less scalable and “slower”, because the central node adds latency and the message envelopes are quite big.

RabbitMQ is a message queue server written in Erlang

It stores jobs in memory (message queue) by default; durable queues can also persist messages to disk
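
To illustrate the "few lines of code" point, here is a minimal sketch of publishing a message to a queue from C#, using the official RabbitMQ.Client library. The host name and queue name are placeholders for this example:

```csharp
using System.Text;
using RabbitMQ.Client;

class Send
{
    static void Main()
    {
        // Connect to a local broker; the broker node queues messages centrally.
        var factory = new ConnectionFactory() { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Declare a queue on the central node (idempotent).
            channel.QueueDeclare(queue: "jobs",
                                 durable: false,
                                 exclusive: false,
                                 autoDelete: false,
                                 arguments: null);

            var body = Encoding.UTF8.GetBytes("Hello from the producer");

            // Publish via the default exchange; the routing key is the queue name.
            channel.BasicPublish(exchange: "",
                                 routingKey: "jobs",
                                 basicProperties: null,
                                 body: body);
        }
    }
}
```
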
ZeroMQ is a very lightweight messaging system specially designed for high throughput/low latency scenarios like the ones you find in the financial world. ZeroMQ supports many advanced messaging scenarios but, contrary to RabbitMQ, you'll have to implement most of them yourself by combining various pieces of the framework (e.g. sockets and devices). ZeroMQ is very flexible, but you'll have to study the 80 pages or so of the guide (which I recommend reading for anybody writing distributed systems, even if you don't use ZeroMQ) before being able to do anything more complicated than sending messages between two peers.

The socket library that acts as a concurrency framework

Faster than TCP, for clustered products and supercomputing

Carries messages across inproc, IPC, TCP, and multicast

Connect N-to-N via fanout, pubsub, pipeline, request-reply

Asynch I/O for scalable multicore message-passing apps
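
As a taste of the brokerless model, here is a minimal request-reply sketch using NetMQ, a C# port of ZeroMQ. The endpoint is a placeholder and the exact API varies between NetMQ versions, so treat this as an illustration rather than a reference:

```csharp
using NetMQ;
using NetMQ.Sockets;

class RequestReply
{
    static void Main()
    {
        // No broker: the replying peer binds, the requesting peer connects directly.
        using (var server = new ResponseSocket())
        using (var client = new RequestSocket())
        {
            server.Bind("tcp://127.0.0.1:5556");
            client.Connect("tcp://127.0.0.1:5556");

            client.SendFrame("ping");                      // request
            string request = server.ReceiveFrameString();  // server reads the request
            server.SendFrame("pong: " + request);          // reply
            string reply = client.ReceiveFrameString();    // client reads the reply
        }
    }
}
```
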
ActiveMQ is in the middle ground. Like ZeroMQ, it can be deployed with both broker and P2P topologies. Like RabbitMQ, it makes advanced scenarios easier to implement, but usually at the cost of raw performance.

ActiveMQ is an open source message broker written in Java

Supports many advanced features such as Message Groups, Virtual Destinations, Wildcards and Composite Destinations.
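
From .NET, ActiveMQ is usually reached through the Apache.NMS client. The following is a rough sketch of sending a text message to a queue; the broker URL and queue name are placeholders, and the details should be checked against the NMS documentation:

```csharp
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class NmsProducer
{
    static void Main()
    {
        // Connect to a broker listening on the default OpenWire port.
        IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");
        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            connection.Start();

            IDestination queue = session.GetQueue("test.queue");
            using (IMessageProducer producer = session.CreateProducer(queue))
            {
                ITextMessage message = session.CreateTextMessage("Hello ActiveMQ");
                producer.Send(message);
            }
        }
    }
}
```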

Wednesday, April 20, 2016

Object Modeling

The goal in object modeling is to render a precise, concise, understandable object-oriented model, or "blueprint," of the system to be automated. This model will serve as an important tool for communication:
  • To the future users of the system that we are about to build, an object model communicates our understanding of the system requirements.
  • To the software development team, an object model communicates the structure and function of the software that needs to be built in order to satisfy those requirements. This benefits not only the software engineers themselves, but also the folks who are responsible for quality assurance, testing, and documentation.
  • Long after the application is operational, an object model lives on as a "schematic diagram" to help the myriad folks responsible for supporting and maintaining an application understand its structure and function.
The design of complex systems invariably changes during their construction, so care should be taken to keep the object model up-to-date as the system is built.
Modeling Methodology = Process + Notation + Tool
According to Webster's dictionary, a methodology is
    A set of systematic procedures used by a discipline (to achieve a particular desired outcome).
A modeling methodology ideally involves three components:
  • A process: The "how to" steps for gathering the requirements and determining the abstraction to be modeled
  • A notation: A graphical "language" for communicating the model
  • A tool: An automated way of rendering the notation, typically in "drag-and-drop" fashion
Although these constitute the ideal components of a modeling methodology, they are not all of equal importance.
  • Adhering to a sound process is certainly critical.
  • However, we can sometimes get by with a narrative text description of an abstraction without having to resort to portraying it with formal graphical notation.
  • And, when we do choose to depict an abstraction formally via a graphical notation, it isn't mandatory that we use a specialized tool for doing so.
Object modeling tools fall under the general heading of Computer-Aided Software Engineering, or CASE, tools. CASE tools afford us many advantages, but they aren't without their drawbacks.

The Advantages of Using CASE Tools:
  • Ease of Use - CASE tools provide a quick drag-and-drop way to create visual models.
  • Added Information Content - CASE tools produce "intelligent" drawings that enforce the syntax rules of a particular notation. This is in contrast to a generic drawing package, which will pretty much let you draw whatever you like, whether it adheres to the notational syntax or not.
  • Automated Code Generation - Most CASE tools provide code generation capabilities, enabling you to transition from a diagram to skeletal C# (or other) code with the push of a button (see the sketch after this list).
  • Project Management Aids - Many CASE tools provide some sort of version control, enabling you to maintain different generations of the same model.
  • Flexibility - Some CASE tools support multiple graphical notations, enabling you to initially create a diagram in one notation but to then convert the diagram to another notation quickly and effortlessly.
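
As a rough idea of what such generated skeletal code looks like, suppose a class diagram shows a Student class with a name attribute and an EnrollIn operation (both names are made up for this example); a CASE tool might emit something along these lines:

```csharp
using System;

// Hypothetical skeleton generated from a class diagram; the method
// bodies are left for the developer to fill in.
public class Course
{
}

public class Student
{
    private string name;

    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public void EnrollIn(Course course)
    {
        // TODO: implement the operation modeled on the diagram
        throw new NotImplementedException();
    }
}
```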

Some Drawbacks of CASE Tools:
  • CASE tools can be expensive; it's not unusual for a high-end CASE tool to cost hundreds or even thousands of dollars per "seat."
  • It's easy to get caught up with form over substance! This is true of any automated tool - even a word processor tends to lure people into spending more time on the cosmetics of a document than is warranted, long after the substantive content is rock solid.

Thursday, March 31, 2016

A Few C# Terms

Accessor
An accessor is a method which provides access to a value managed within a class. Effectively the access is read only, in that the data is held securely in the class, but code in other classes may need to read the value itself. An accessor is implemented as a public method which returns a value to the caller. Note that if the thing being given access to is managed by reference, the programmer must make sure that it is acceptable for a reference to the object to be passed out. If the object is not to be changed, it may be necessary to return a copy of the object to the caller.
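
A small, made-up example: the Account class below exposes a simple accessor for a value type and returns a defensive copy for a reference type, so the internal list cannot be modified from outside:

```csharp
using System.Collections.Generic;

public class Account
{
    private decimal balance;
    private List<string> history = new List<string>();

    // Simple accessor: hands back the value held inside the class.
    public decimal GetBalance()
    {
        return balance;
    }

    // The history is managed by reference, so return a copy to keep
    // the internal list safe from modification by callers.
    public List<string> GetHistory()
    {
        return new List<string>(history);
    }
}
```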

Coupling
If a class is dependent on another, the two classes are said to be coupled. Generally speaking, a programmer should strive to have as little coupling in their designs as possible, since coupling makes the system harder to update. Coupling is often discussed alongside cohesion, in that you should aim for high cohesion and low coupling.

Mutator
A mutator is a method which is called to change the value of a member inside an object. The change will hopefully be managed, in that invalid values will be rejected in some way. This is implemented in the form of a public method which is supplied with a new value and may return an error code.
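
Continuing the made-up Account example from above, a mutator might validate the incoming value and report whether the change was accepted:

```csharp
public class Account
{
    private decimal balance;

    // Mutator: rejects invalid values and signals the outcome to the caller.
    public bool SetBalance(decimal newBalance)
    {
        if (newBalance < 0)
        {
            return false;   // invalid value rejected, state unchanged
        }
        balance = newBalance;
        return true;
    }
}
```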

Stream
A stream is an object which represents a connection to something which is going to move data for us. The movement might be to a disk file, to a network port or even to the system console. Streams remove the need to modify a program depending on where the output is to be sent or input received from.
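
For example, a method written against TextWriter (the text-oriented counterpart of a raw Stream) does not need to change when its output is redirected from the console to a disk file; the file name below is just a placeholder:

```csharp
using System;
using System.IO;

class StreamDemo
{
    // The method neither knows nor cares where the text ends up.
    static void WriteReport(TextWriter output)
    {
        output.WriteLine("Report generated");
    }

    static void Main()
    {
        WriteReport(Console.Out);                          // system console
        using (var file = new StreamWriter("report.txt"))  // disk file
        {
            WriteReport(file);
        }
    }
}
```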

Subscript
This is a value which is used to identify an element in an array. It must be an integer value. Subscripts in C# always start at 0 (this locates the first element of the array) and extend up to the size of the array minus 1. This means that if you create a four-element array, you get hold of elements in the array by subscript values of 0, 1, 2 or 3. The best way to regard a subscript is as the distance down the array you must move to get the element that you want, which is why the first element in the array has a subscript value of 0.
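
In code, the four-element case looks like this:

```csharp
class SubscriptDemo
{
    static void Main()
    {
        int[] values = new int[4];   // four elements, subscripts 0 to 3
        values[0] = 10;              // first element: distance 0 from the start
        values[3] = 40;              // last element: size minus 1
        // values[4] = 50;           // would throw IndexOutOfRangeException
    }
}
```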

Typesafe
Type-safe code accesses only the memory locations it is authorized to access. For example, type-safe code cannot read values from another object's private fields. It accesses types only in well-defined, allowable ways. When code is type safe, the common language runtime can completely isolate assemblies from each other. This isolation helps ensure that assemblies cannot adversely affect each other and it increases application reliability. Type-safe components can execute safely in the same process even if they are trusted at different levels.
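
As a small illustration of the "private fields" point, the made-up code below will not compile, because the field is not accessible outside its own class:

```csharp
public class Wallet
{
    private decimal cash = 100m;   // private: visible only inside Wallet

    public bool IsEmpty()
    {
        return cash == 0m;
    }
}

public class Thief
{
    public decimal Steal(Wallet wallet)
    {
        // return wallet.cash;     // compile error: 'cash' is inaccessible
        return 0m;
    }
}
```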

Continuous Delivery

Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery.
"Continuous Delivery is a software development discipline where you build software in such a way that the software can be released to production at any time" — Martin Fowler
Continuous integration (CI) is the practice, in software engineering, of merging all developer working copies to a shared mainline several times a day. Continuous Integration provides a framework for efficiently validating software in a predictable way. But to get the most out of it, you need to look at how it fits into the overall process of delivering software. In an agile project, you want to deliver working software at every iteration. Unfortunately, this is easier said than done; it often turns out that even if you implement CI and get the build process to produce a new installation package in a few minutes, it takes several days to get a new piece of software tested and released into production. To make this work better, the key is process. In order to deliver working software faster, you need a good cohesive set of tools and practices. So you need to add planning, environment management, deployment, and automated validation to get a great solution for your product, and this is just what Continuous Delivery is about.
Continuous Deployment is the practice of continuously pushing to production new versions of software under development. So Continuous Integration is all about quick feedback and validation of the commit phase, and Continuous Delivery is about establishing a mindset where you can deliver features at customer demand. Continuous Deployment is a third term that’s sometimes confused with both Continuous Integration and Continuous Delivery. Continuous Deployment can be viewed as the next level of Continuous Delivery. Where Continuous Delivery provides a process to create frequent releases but not necessarily deploy them, Continuous Deployment means that every change you make automatically gets deployed through the deployment pipeline. When you have established a Continuous Delivery solution, you are ready to move to Continuous Deployment if that’s something your business would benefit from.

Summary:
Continuous Integration is a software development practice in which you build and unit-test software every time a developer checks in new code.
Continuous Delivery (CD) is a software development practice in which continuous integration, automated testing, and automated deployment capabilities allow software to be developed and deployed rapidly, reliably and repeatedly with minimal manual overhead.
Continuous Deployment is a software development practice in which every code change goes through the entire pipeline and is put into production, automatically, resulting in many production deployments every day.

Monday, February 29, 2016

A Few CPU Facts

64-bit computing : In computer architecture, 64-bit computing is the use of processors that have datapath widths, integer size, and memory address widths of 64 bits (eight octets). Also, 64-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size. From the software perspective, 64-bit computing means the use of code with 64-bit virtual memory addresses.
Processor registers are typically divided into several groups: integer, floating-point, SIMD, control, and often special registers for address arithmetic which may have various uses and names such as address, index or base registers. However, in modern designs, these functions are often performed by more general purpose integer registers. In most processors, only integer or address-registers can be used to address data in memory; the other types of registers cannot. The size of these registers therefore normally limits the amount of directly addressable memory, even if there are registers, such as floating-point registers, that are wider.
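
From managed code you can observe the pointer width and process bitness directly; a quick check in C#:

```csharp
using System;

class PointerSize
{
    static void Main()
    {
        // 8 bytes (64 bits) in a 64-bit process, 4 bytes in a 32-bit process.
        Console.WriteLine("Pointer size: {0} bytes", IntPtr.Size);
        Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
        Console.WriteLine("64-bit OS:      {0}", Environment.Is64BitOperatingSystem);
    }
}
```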

What is the difference between Intel Core i3, Core i5 and Core i7:
The Core name itself is a bit misleading. All CPUs have one or more cores, with each core being a processor in itself. Most commonly an Intel Core processor will have two physical cores (dual-core) plus two virtual cores (which Intel calls Hyper-Threading). Some, though, have four physical cores: quad-core. If you buy a Core i7 Extreme Edition, you will find up to 12 physical cores. Physical cores are better than virtual cores in performance terms.
Feature              Core i3   Core i5   Core i7
Number of cores      2         4         4
Hyper-Threading      Yes       No        Yes
Turbo Boost          No        Yes       Yes
K model              No        Yes       Yes

Notes:
  • A core can be thought of as an individual processor.
  • Hyper-Threading is Intel's technology for creating two logical cores in each physical core.
  • Turbo Boost is Intel's technology for automatically overclocking a processor, boosting its clock speed higher than the default setting.
  • A model number ending with a K means the CPU is unlocked.

CPU Registers:
There are 16 general purpose registers in the x86-64 architecture.

ARM architecture:
ARM, originally Acorn RISC Machine, later Advanced RISC Machine, is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments. British company ARM Holdings develops the architecture and licenses it to other companies, who design their own products that implement one of those architectures, including systems-on-chips (SoC) that incorporate memory, interfaces, radios, etc. It also designs cores that implement this instruction set and licenses these designs to a number of companies that incorporate those core designs into their own products.

Open source platform for continuous inspection of code quality: SonarQube

SonarQube can perform analysis on 20+ different languages. The outcome of this analysis will be quality measures and issues (instances where coding rules were broken). SonarQube is an open platform to manage code quality. There are three different paradigms for SonarQube analysis. You switch among the three modes using the sonar.analysis.mode analysis parameter with one of these three values:
  • publish - the default. This mode analyzes everything that is analyzable for the languages in question and pushes the results to the server for processing.
  • preview - typically used to determine whether code changes are good enough to move forward with, e.g. merge into the Git master.
  • issues - a "preview" equivalent intended for use by tools. You should never need to use it manually.
SonarQube covers the 7 axes of code quality:
  • Architecture & Design
  • Comments
  • Coding rules
  • Potential bugs
  • Complexity
  • Unit tests
  • Duplications
Features
  • Supports languages: Java, C/C++, Objective-C, C#, PHP, Flex, Groovy, JavaScript, Python, PL/SQL, COBOL, etc.
  • Can also be used in Android development.
  • Offers reports on duplicated code, coding standards, unit tests, code coverage, code complexity, potential bugs, comments and design and architecture.
  • Records metrics history and provides evolution graphs ("time machine") and differential views.
  • Provides fully automated analyses: integrates with Maven, Ant, Gradle and continuous integration tools (Atlassian Bamboo, Jenkins, Hudson, etc.).
  • Integrates with the Eclipse development environment
  • Integrates with external tools: JIRA, Mantis, LDAP, Fortify, etc.
  • Is expandable with the use of plugins.
  • Implements the SQALE methodology to compute technical debt.

Saturday, January 30, 2016

.NET Core

.NET Core is a modular version of the .NET Framework designed to be portable across platforms for maximum code reuse and code sharing. In addition, .NET Core will be open-sourced and accept contributions from the community.
What is .NET Core?
.NET Core is portable across platforms because, although it is a subset of the full .NET Framework, it provides key functionality to implement the app features you need and lets you reuse this code regardless of your platform target. In the past, different versions of .NET for different platforms lacked shared functionality for key tasks such as reading local files. Microsoft platforms that you will be able to target with .NET Core include traditional desktop Windows, as well as Windows devices and phones. When used with third-party tools such as Xamarin, .NET Core should be portable to iOS and Android devices. In addition, .NET Core will soon be available for the Mac and Linux operating systems to enable web apps to run on those systems.
.NET Core is modular because it is released through NuGet in smaller assembly packages. Rather than one large assembly that contains most of the core functionality, .NET Core is made available as smaller feature-centric packages. This enables a more agile development model for Microsoft and allows you to pick and choose the functionality pieces that you need for your apps and libraries. For more information about .NET packages that release on NuGet, see The .NET Framework and Out-of-Band Releases.
For existing apps, using Portable Class Libraries (PCL), Universal app projects, and separating business logic from platform-specific code is the best way to take advantage of .NET Core and maximize your code reuse. For apps, the Model-View-Controller (MVC) or Model-View-ViewModel (MVVM) patterns are good choices to make your apps easy to migrate to .NET Core.
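
As an illustration of keeping business logic platform-neutral, a view model like the hypothetical one below depends only on INotifyPropertyChanged from the base class library, so it can live in a portable library and be shared across platform-specific front ends:

```csharp
using System.ComponentModel;

// Platform-neutral view model: no UI framework types, so it can sit in a
// portable/.NET Core class library and be reused on every target platform.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            if (name != value)
            {
                name = value;
                OnPropertyChanged("Name");
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
```
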
In addition to the modularization of the .NET Framework, Microsoft is open-sourcing the .NET Core packages on GitHub, under the MIT license. This means you can clone the Git repo, read and compile the code and submit pull requests just like any other open source package you might find on GitHub.
.NET Core 5 is a modular runtime and library implementation that includes a subset of the .NET Framework. Currently it is feature complete on Windows, and in-progress builds exist for both Linux and OS X. .NET Core consists of a set of libraries, called “CoreFX”, and a small, optimized runtime, called “CoreCLR”. .NET Core is open-source, so you can follow progress on the project and contribute to it on GitHub:
  • .NET Core Libraries (CoreFX)
  • .NET Core Common Language Runtime (CoreCLR)
The CoreCLR runtime (Microsoft.CoreCLR) and CoreFX libraries are distributed via NuGet. The CoreFX libraries are factored as individual NuGet packages according to functionality, named “System.[module]” on nuget.org.
One of the key benefits of .NET Core is its portability. You can package and deploy the CoreCLR with your application, eliminating your application’s dependency on an installed version of .NET (e.g. .NET Framework on Windows). You can host multiple applications side-by-side using different versions of the CoreCLR, and upgrade them individually, rather than being forced to upgrade all of them simultaneously.
CoreFX has been built as a componentized set of libraries, each requiring the minimum set of library dependencies (e.g. System.Collections only depends on System.Runtime, not System.Xml). This approach enables minimal distributions of CoreFX libraries (just the ones you need) within an application, alongside CoreCLR. CoreFX includes collections, console access, diagnostics, IO, LINQ, JSON, XML, and regular expression support, just to name a few libraries. Another benefit of CoreFX is that it allows developers to target a single common set of libraries that are supported by multiple platforms.