Thursday, December 22, 2016

C# 7: New Features

C# is a multi-paradigm programming language encompassing strong typing, imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines. C# 7.0 adds a number of new features and brings a focus on data consumption, code simplification, and performance. Perhaps the biggest features are tuples, which make it easy to have multiple results, and pattern matching, which simplifies code that is conditional on the shape of data.
Feature List in C# 7.0
  • Local functions – code currently available on GitHub
  • Tuple types and literals
  • Record types
  • Pattern matching
  • Non-nullable reference types
  • Immutable types
  • Binary literals
  • Digit separators
  • Type switch
  • Out var
  • Arbitrary async returns
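The snippet below is a quick, illustrative sketch (not taken from the official release notes) that exercises several of the features listed above: tuple types, a local function, pattern matching with a type switch, out variables, binary literals and digit separators.

using System;

class CSharp7Demo
{
    // Tuple return type with named elements.
    static (int Min, int Max) MinMax(int[] values)
    {
        // Local function, visible only inside MinMax.
        void Validate(int[] v)
        {
            if (v == null || v.Length == 0)
                throw new ArgumentException("values must not be empty");
        }

        Validate(values);

        int min = values[0], max = values[0];
        foreach (int v in values)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        return (min, max);
    }

    static void Describe(object o)
    {
        // Pattern matching with "is" and a type switch.
        if (o is int n && n > 0)
            Console.WriteLine($"positive int {n}");

        switch (o)
        {
            case string s:
                Console.WriteLine($"string of length {s.Length}");
                break;
            case null:
                Console.WriteLine("null");
                break;
            default:
                Console.WriteLine("something else");
                break;
        }
    }

    static void Main()
    {
        int mask = 0b1010_1010;        // binary literal with a digit separator
        int billion = 1_000_000_000;   // digit separator in a decimal literal

        // Out variable declared inline at the call site.
        if (int.TryParse("42", out int parsed))
            Console.WriteLine(parsed + mask + billion);

        var (min, max) = MinMax(new[] { 3, 1, 4, 1, 5 });
        Console.WriteLine($"min={min}, max={max}");
        Describe("hello");
    }
}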

Monday, December 19, 2016

Visual Studio 2017 Features

Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft.
Codename: Dev15
Version number: 15.0
Version of cl.exe: 19.10
Supported .NET Framework versions: 2.0 – 4.6.2; Core 1.0
Below is the list of features in Visual Studio 2017:
  • New Installation Experience:
  • Visual Studio IDE:
    • Improved Code Navigation
    • Connecting to Services Using Connected Services
    • EditorConfig Support
    • New Extensibility Format
    • Modify Extensions in Bulk
    • Ngen Support
    • Roaming Extension Manager
    • Sign in and Identity Improvements
    • Lightweight Solution Load
  • Debugging and Diagnostics:
    • Attach to Process Filter
    • Reattach to Process
    • The New Exception Helper
    • Add Conditions to Exception Settings
    • Debugger Accessibility Improvements
    • IntelliTrace Events for .NET Core
    • Diagnostic Tools Window Updates
    • Performance Profiler Updates
    • CPU Usage Tool Updates
    • Chrome Debugging Support
  • Team Explorer:
    • Connect to VSTS
    • Work Item Forms
  • C# and Visual Basic:
    • IDE Experience and Productivity
    • Language Extensions and Analyzers
  • JavaScript and TypeScript:
    • TypeScript 2.1
    • JavaScript Language Service

Monday, November 28, 2016

TypeScript : JavaScript that scales

TypeScript is a free and open-source programming language developed and maintained by Microsoft. It is a strict superset of JavaScript, and adds optional static typing and class-based object-oriented programming to the language. Anders Hejlsberg, lead architect of C# and creator of Delphi and Turbo Pascal, has worked on the development of TypeScript. TypeScript may be used to develop JavaScript applications for client-side or server-side (Node.js) execution.
TypeScript is designed for development of large applications and transcompiles to JavaScript. As TypeScript is a superset of JavaScript, any existing JavaScript programs are also valid TypeScript programs.
TypeScript supports definition files that can contain type information of existing JavaScript libraries, much like C/C++ header files can describe the structure of existing object files. This enables other programs to use the values defined in the files as if they were statically typed TypeScript entities. There are third-party header files for popular libraries like jQuery, MongoDB, and D3.js. TypeScript headers for the Node.js basic modules are also available, allowing development of Node.js programs within TypeScript.
The TypeScript compiler is itself written in TypeScript, transcompiled to JavaScript and licensed under the Apache 2 License.
Starts and ends with JavaScript
TypeScript starts from the same syntax and semantics that millions of JavaScript developers know today. Use existing JavaScript code, incorporate popular JavaScript libraries, and call TypeScript code from JavaScript.
TypeScript compiles to clean, simple JavaScript code which runs on any browser, in Node.js, or in any JavaScript engine that supports ECMAScript 3 (or newer).
Strong tools for large apps
Types enable JavaScript developers to use highly-productive development tools and practices like static checking and code refactoring when developing JavaScript applications.
Types are optional, and type inference allows a few type annotations to make a big difference to the static verification of your code. Types let you define interfaces between software components and gain insights into the behavior of existing JavaScript libraries.
State of the art JavaScript
TypeScript offers support for the latest and evolving JavaScript features, including those from ECMAScript 2015 and future proposals, like async functions and decorators, to help build robust components.
These features are available at development time for high-confidence app development, but are compiled into simple JavaScript that targets ECMAScript 3 (or newer) environments.

Friday, November 25, 2016

Git

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. Git is easy to learn and has a tiny footprint with lightning fast performance. It outclasses SCM tools like Subversion, CVS, Perforce, and ClearCase with features like cheap local branching, convenient staging areas, and multiple workflows.
Git is a version control system (VCS) that is used for software development and other version control tasks. As a distributed revision control system it is aimed at speed, data integrity, and support for distributed, non-linear workflows. Git was created by Linus Torvalds in 2005 for development of the Linux kernel, with other kernel developers contributing to its initial development.
As with most other distributed version control systems, and unlike most client–server systems, every Git directory on every computer is a full-fledged repository with complete history and full version tracking abilities, independent of network access or a central server.
In computer programming, distributed version control, also known as distributed revision control or decentralized version control, allows many software developers to work on a given project without requiring them to share a common network. The software revisions are stored in a distributed revision control system (DRCS), also known as a distributed version control system (DVCS).
Distributed revision control takes a peer-to-peer approach to version control, as opposed to the client-server approach of centralized systems. Rather than a single, central repository on which clients synchronize, each peer's working copy of the codebase is a complete repository. Distributed revision control synchronizes repositories by exchanging patches (sets of changes) from peer to peer. This results in some important differences from a centralized system:
  • No canonical, reference copy of the codebase exists by default; only working copies.
  • Common operations (such as commits, viewing history, and reverting changes) are fast, because there is no need to communicate with a central server.
  • Communication is only necessary when sharing changes among other peers.
  • Each working copy effectively functions as a remote backup of the codebase and of its change-history, protecting against data loss.
Why the 'Git' name?
Quoting Linus: "I'm an egotistical bastard, and I name all my projects after myself. First 'Linux', now 'Git'".
('git' is British slang for "pig headed, think they are always correct, argumentative").
Alternatively, in Linus' own words as the inventor of Git: "git" can mean anything, depending on your mood:
  • Random three-letter combination that is pronounceable, and not actually used by any common UNIX command. The fact that it is a mispronunciation of "get" may or may not be relevant.
  • Stupid. Contemptible and despicable. Simple. Take your pick from the dictionary of slang.
  • "Global information tracker": you're in a good mood, and it actually works for you. Angels sing and light suddenly fills the room.
  • "Goddamn idiotic truckload of sh*t": when it breaks

Thursday, October 6, 2016

Threats to wireless security : Rogue access point

Of all of the threats faced by your network security, few are as potentially dangerous as the rogue Access Point (AP). A rogue AP is a WiFi Access Point that is set up by an attacker for the purpose of sniffing wireless network traffic in an effort to gain unauthorized access to your network environment. Ironically, though, this breach in security typically isn't implemented by a malicious hacker or other malcontent. Instead, it's usually installed by someone who is simply looking for the same convenience and flexibility at work that they've grown accustomed to using on their own home wireless network.
To prevent the installation of rogue access points, organizations can install wireless intrusion prevention systems to monitor the radio spectrum for unauthorized access points. The presence of a large number of wireless access points can be sensed in the airspace of a typical enterprise facility. These include managed access points in the secure network plus access points in the neighborhood. A wireless intrusion prevention system facilitates the job of auditing these access points on a continuous basis to learn whether there are any rogue access points among them.
In order to detect rogue access points, two conditions need to be tested:
  • whether or not the access point is in the managed access point list
  • whether or not it is connected to the secure network
The first of the above two conditions is easy to test: compare the wireless MAC address (also called the BSSID) of the access point against the managed access point BSSID list. However, automated testing of the second condition can become challenging in light of the following factors: a) the need to cover different types of access point devices such as bridging, NAT (router), unencrypted wireless links, encrypted wireless links, different types of relations between the wired and wireless MAC addresses of access points, and soft access points; b) the necessity to determine access point connectivity with acceptable response time in large networks; and c) the requirement to avoid both false positives and false negatives.
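The first condition amounts to a simple membership test. A minimal C# sketch of that check is shown below; the BSSID list and the observed access point are made-up values for illustration.

using System;
using System.Collections.Generic;

class RogueApCheck
{
    static void Main()
    {
        // Hypothetical list of managed (authorized) access point BSSIDs.
        var managedBssids = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "00:11:22:33:44:55",
            "66:77:88:99:AA:BB"
        };

        // BSSID observed during a wireless scan (made up).
        string observedBssid = "DE:AD:BE:EF:00:01";

        // Condition 1: is the access point in the managed list?
        bool isManaged = managedBssids.Contains(observedBssid);
        Console.WriteLine(isManaged
            ? "Access point is managed."
            : "Unknown access point: test condition 2 (connectivity to the secure network) next.");
    }
}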

Tuesday, October 4, 2016

Puppet on the AWS Cloud

Puppet is a declarative, model-based configuration management solution from Puppet Labs that lets you define the state of your IT infrastructure, and automatically enforces that desired state on your systems. This Quick Start automates the deployment of a Puppet master and Puppet agents from scratch, using AWS CloudFormation templates.
Puppet Enterprise is a commercially supported version of Puppet Labs' open-source configuration management tool, Puppet. Puppet IT automation software uses Puppet's declarative language to manage various stages of the IT infrastructure lifecycle, including the provisioning, patching, configuration, and management of operating system and application components across enterprise data centers and cloud infrastructures.
Built as cross-platform software, Puppet and Puppet Enterprise operate on Linux distributions, including RHEL (and clones such as CentOS and Oracle Linux), Fedora, Debian, Mandriva, Ubuntu, and SUSE, as well as on multiple Unix systems (Solaris, BSD, Mac OS X, AIX, HP-UX), and on Microsoft Windows. It is a model-driven solution that requires limited programming knowledge to use.
Puppet is a server management application that can use ServiceNow configuration item (CI) data to bring computers into a desired state by managing files, services, or packages installed on physical or virtual machines. ServiceNow can interact with Puppet systems that run Linux. ServiceNow identifies a Puppet Master, which controls Puppet nodes, and uses a standalone utility to discover the components in the Puppet environment. ServiceNow uses information about server CIs from the Puppet Master to classify those servers as Puppet nodes. Puppet then evaluates a node's current state and modifies the node to achieve the desired state.
Note: A group of configuration items is called a configuration template in ServiceNow Provider and a node definition in Puppet.

Wednesday, September 28, 2016

Chef

Chef is the platform for automating your infrastructure on Amazon Web Services. Chef is a configuration management tool written in Ruby and Erlang. It uses a pure-Ruby, domain-specific language (DSL) for writing system configuration "recipes". Chef is used to streamline the task of configuring and maintaining a company's servers, and can integrate with cloud-based platforms such as Internap, Amazon EC2, Google Cloud Platform, OpenStack, SoftLayer, Microsoft Azure and Rackspace to automatically provision and configure new machines. Chef contains solutions for both small and large scale systems, with features and pricing for the respective ranges.
The user writes "recipes" that describe how Chef manages server applications and utilities (such as Apache HTTP Server, MySQL, or Hadoop) and how they are to be configured. These recipes (which can be grouped together as a "cookbook" for easier management) describe a series of resources that should be in a particular state: packages that should be installed, services that should be running, or files that should be written. These various resources can be configured to specific versions of software to run and can ensure that software is installed in the correct order based on dependencies. Chef makes sure each resource is properly configured and corrects any resources that are not in the desired state.
Chef can run in client/server mode, or in a standalone configuration named "chef-solo". In client/server mode, the Chef client sends various attributes about the node to the Chef server. The server uses Solr to index these attributes and provides an API for clients to query this information. Chef recipes can query these attributes and use the resulting data to help configure the node.
Chef allows you to define, create, and manage your entire application stack on AWS. With a single recipe you can manage and orchestrate a multi-tier application that relies on a variety of AWS services such as Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing (ELB), Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS). Because Chef recipes describe resources in the order of their execution, you can ensure that, for example, an RDS database is created and populated before enabling the Amazon EC2 instances that connect to it.
Website: chef.io

Monday, September 26, 2016

Google Cloud Platform

Google Cloud Platform is a cloud computing service by Google that offers hosting on the same supporting infrastructure that Google uses internally for end-user products like Google Search and YouTube. Cloud Platform provides developer products to build a range of programs from simple websites to complex applications. 
Google Cloud Platform is part of a suite of enterprise services from Google for Work and provides a set of modular cloud-based services with a host of development tools, for example hosting and computing, cloud storage, data storage, translation APIs, and prediction APIs.

The core cloud computing services in the Google Cloud Platform include:
  • Google Compute Engine: An infrastructure as a service (IaaS) offering that provides users with virtual machine (VM) instances for workload hosting.
  • Google App Engine: A platform as a service (PaaS) offering that gives software developers access to Google's scalable hosting. Developers can also use a software developer kit (SDK) to develop software products that run on App Engine.
  • Google Cloud Storage: A cloud storage platform designed to store large, unstructured data sets. Google also offers database storage options including Cloud Datastore for NoSQL non-relational storage, Cloud SQL for MySQL fully-relational storage and Google's native Cloud Bigtable database.
  • Google Container Engine: A management and orchestration system for Docker containers that run within Google's public cloud. Google Container Engine is based on the Google Kubernetes container orchestration engine.

Sunday, August 7, 2016

Nice device - Puck.js

Recently I came across an article which mentions this new small smart device, similar to the Raspberry Pi.
Puck.js - the ground-breaking Bluetooth beacon
Puck.js is a low energy smart device which can be programmed wirelessly and comes pre-installed with JavaScript. It is both multi-functional and easy to use, with a custom circuit board, the latest Nordic chip, Bluetooth Smart, Infra-red and much more, all enclosed in a tiny silicone case.

Friday, August 5, 2016

SOLID Design Principles C#

In computer programming, SOLID (single responsibility, open-closed, Liskov substitution, interface segregation and dependency inversion) is a mnemonic acronym introduced by Michael Feathers for the "first five principles" named by Robert C. Martin in the early 2000s that stands for five basic principles of object-oriented programming and design. The intention is that these principles, when applied together, will make it more likely that a programmer will create a system that is easy to maintain and extend over time. The principles of SOLID are guidelines that can be applied while working on software to remove code smells by causing the programmer to refactor the software's source code until it is both legible and extensible. It is part of an overall strategy of agile and Adaptive Software Development.
SOLID comprises five basic principles which help to create good software architecture. As an acronym, SOLID stands for:
  1. SRP The Single Responsibility Principle: - a class should have one, and only one, reason to change.
  2. OCP The Open Closed Principle: - you should be able to extend a class's behavior, without modifying it.
  3. LSP The Liskov Substitution Principle: - derived classes must be substitutable for their base classes.
  4. ISP The Interface Segregation Principle: - make fine grained interfaces that are client specific.
  5. DIP The Dependency Inversion Principle - depend on abstractions not on concrete implementations.
1. Single Responsibility Principle (SRP)
The single responsibility principle states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. 
  • All its services should be narrowly aligned with that responsibility
As per SRP, there should not be more than one reason for a class to change; a class should always handle a single piece of functionality. If you put more than one piece of functionality into one class in C#, you introduce coupling between them, and if you change one piece of functionality there is a chance you will break the coupled one, which requires another round of testing to avoid surprises in the production environment.
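As a minimal sketch (the Order, ReportGenerator and ReportSaver classes are invented for illustration), generating a report and persisting it are two separate responsibilities, so they live in two separate classes:

using System.Collections.Generic;
using System.IO;
using System.Linq;

public class Order
{
    public decimal Amount { get; set; }
}

// One responsibility: producing the report text.
public class ReportGenerator
{
    public string Generate(IEnumerable<Order> orders) =>
        "Total: " + orders.Sum(o => o.Amount);
}

// A different responsibility: persisting the report.
// Each class now has exactly one reason to change.
public class ReportSaver
{
    public void Save(string report, string path) => File.WriteAllText(path, report);
}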

2. Open Closed Principle (OCP)
In object-oriented programming, the open/closed principle states “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”; that is, such an entity can allow its behaviour to be extended without modifying its source code.
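A small illustrative sketch (the shape classes are hypothetical): the total-area calculation is closed for modification, because supporting a new shape means adding a subclass rather than editing existing code.

using System;
using System.Collections.Generic;
using System.Linq;

public abstract class Shape
{
    public abstract double Area();
}

public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }
    public override double Area() => Width * Height;
}

public class Circle : Shape
{
    public double Radius { get; set; }
    public override double Area() => Math.PI * Radius * Radius;
}

public static class AreaCalculator
{
    // Works for any current or future Shape subclass without modification.
    public static double TotalArea(IEnumerable<Shape> shapes) => shapes.Sum(s => s.Area());
}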

3. Liskov substitution principle (LSP)
The Liskov substitution principle (LSP) is a particular definition of a subtyping relation, called (strong) behavioral subtyping, that was initially introduced by Barbara Liskov in a 1987 conference keynote address entitled Data abstraction and hierarchy
  • if S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program
In order to follow this principle we need to make sure that the subtypes respect the parent class.
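The classic illustration is a Square that inherits from Rectangle: code written against Rectangle (setting width and height independently) no longer behaves as expected when handed a Square, so the subtype is not substitutable. A short sketch:

public class Rectangle
{
    public virtual double Width { get; set; }
    public virtual double Height { get; set; }
    public double Area() => Width * Height;
}

public class Square : Rectangle
{
    // Keeping the sides equal silently changes the base class contract.
    public override double Width
    {
        get => base.Width;
        set { base.Width = value; base.Height = value; }
    }

    public override double Height
    {
        get => base.Height;
        set { base.Width = value; base.Height = value; }
    }
}

// Client code expecting Rectangle semantics:
//   Rectangle r = new Square();
//   r.Width = 2; r.Height = 3;   // the caller expects an area of 6...
//   double a = r.Area();         // ...but gets 9, so LSP is violated.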

4. Interface Segregation Principle (ISP)
The interface-segregation principle (ISP) states that no client should be forced to depend on methods it does not use.
  • ISP splits interfaces which are very large into smaller and more specific ones so that clients will only have to know about the methods that are of interest to them.
  • ISP is intended to keep a system decoupled and thus easier to refactor, change, and redeploy.
ISP is one of the five SOLID principles of Object-Oriented Design, similar to the High Cohesion Principle of GRASP.
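A minimal sketch (the printer and scanner interfaces are invented for illustration): splitting a fat device interface means a client that only scans is not forced to depend on printing.

public interface IPrinter
{
    void Print(string document);
}

public interface IScanner
{
    string Scan();
}

// A multi-function device implements both roles...
public class MultiFunctionDevice : IPrinter, IScanner
{
    public void Print(string document) { /* send to the print engine */ }
    public string Scan() => "scanned content";
}

// ...while a simple scanner depends only on the interface it actually needs.
public class SimpleScanner : IScanner
{
    public string Scan() => "scanned content";
}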

5. Dependency Inversion principle (DIP)
The Dependency Inversion principle refers to a specific form of decoupling software modules. It states:
  • High-level modules should not depend on low-level modules. Both should depend on abstractions.
  • Abstractions should not depend on details. Details should depend on abstractions.
The Dependency Inversion Principle (DIP) helps us to develop loosely coupled code by ensuring that high-level modules depend on abstractions rather than concrete implementations of lower-level modules.
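A brief sketch (the OrderProcessor and INotifier types are made up for illustration): the high-level class depends on an abstraction, and the concrete implementation is injected from the outside.

using System;

public interface INotifier
{
    void Notify(string message);
}

public class EmailNotifier : INotifier
{
    public void Notify(string message) => Console.WriteLine("Email: " + message);
}

public class OrderProcessor
{
    private readonly INotifier _notifier;

    // The concrete notifier is supplied by the caller (dependency injection).
    public OrderProcessor(INotifier notifier) => _notifier = notifier;

    public void Process()
    {
        // ...business logic...
        _notifier.Notify("Order processed");
    }
}

// Usage: new OrderProcessor(new EmailNotifier()).Process();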
 

Tuesday, July 26, 2016

U-SQL

U-SQL is a data processing language that unifies the benefits of SQL with the expressive power of your own code to process all data at any scale. U-SQL's scalable distributed query capability enables you to efficiently analyze data in the store and across relational stores such as Azure SQL Database. It enables you to process unstructured data by applying schema on read and inserting custom logic and UDFs, and it includes extensibility that gives you fine-grained control over how to execute at scale.
U-SQL is the new big data query language of the Azure Data Lake Analytics service. It evolved out of Microsoft's internal big data language called SCOPE and combines a familiar SQL-like declarative language with the extensibility and programmability provided by C# types and the C# expression language, and with big data processing concepts such as “schema on read”, custom processors, and reducers. It also provides the ability to query and combine data from a variety of data sources, including Azure Data Lake Storage, Azure Blob Storage, Azure SQL Database, Azure SQL Data Warehouse, and SQL Server instances running in Azure VMs. It is, however, not ANSI SQL.
U-SQL script:
The main unit of a U-SQL “program” is a U-SQL script. A script consists of an optional script prolog and a sequence of U-SQL statements.
@t = EXTRACT date string
        , time string
        , author string
        , tweet string
    FROM "/input/MyTwitterHistory.csv"
    USING Extractors.Csv();

@res = SELECT author
    , COUNT(*) AS tweetcount
    FROM @t
    GROUP BY author;

OUTPUT @res TO "/output/MyTwitterAnalysis.csv"
ORDER BY tweetcount DESC
USING Outputters.Csv();
The above U-SQL script shows the three major steps of processing data with U-SQL:
  • Extract data from your source using the EXTRACT statement in the query. The data types are based on C# data types, and the script uses the built-in Extractors library to read and schematize the CSV file.
  • Transform using SQL and/or custom user defined operators.
  • Output the result either into a file or into a U-SQL table to store it for further processing.
U-SQL combines some familiar concepts from a variety of languages: It is a declarative language like SQL, it follows a dataflow-like composition of statements and expressions like Cascading, and provides simple ways to extend the language with user-defined operators, user-defined aggregators and user-defined functions using C#, and provides a SQL database-like metadata object model to manage, discover and secure structured data and user-code.

Sunday, July 10, 2016

SQL Server Management Objects (SMO)

SQL Server Management Objects (SMO) are .NET objects introduced by Microsoft as of Microsoft SQL Server 2005, designed to allow for easy and simple programmatic management of Microsoft SQL Server. You can use SMO to build customized SQL Server management applications. The SMO object model extends and supersedes the Distributed Management Objects (SQL-DMO) object model. Compared to SQL-DMO, SMO increases performance, control, and ease of use. Most SQL-DMO functionality is included in SMO, and there are various new classes that support new features in SQL Server. The object model is intuitive and uses SQL-DMO terminology, where it is possible, to help transfer your skills.
New features in SMO for SQL Server 2016 include the following:
  • Cached object model and optimized object instance creation. Objects are loaded only when specifically referenced. Object properties are only partially loaded when the object is created. The remaining objects and properties are loaded when they are referenced directly.
  • Batched execution of Transact-SQL statements. Statements are batched to improve network performance.
  • Capture Transact-SQL statements. Allows any operation to be captured into a script. Management Studio uses this capability to script an operation instead of executing it immediately.
  • Management of SQL Server services with the WMI Provider. SQL Server services can be started, stopped, and paused programmatically.
  • Advanced Scripting. Transact-SQL scripts can be generated to re-create SQL Server objects that describe relationships to other objects on the instance of SQL Server.
  • Use of Unique Resource Names (URNs). A URN allows you to create instances of and reference SMO objects.
SMO also represents as new objects or properties many features and components that were introduced in SQL Server 2005. These new features and components include the following:
  • Table and index partitioning for storage of data on a partition scheme.
  • HTTP endpoints for managing SOAP requests.
  • Snapshot isolation and row level versioning for increased concurrency.
  • XML Schema collection, XML indexes and XML datatype provide validation and storage of XML data. For more information, see XML Schema Collections (SQL Server) and Using XML Schemas.
  • Snapshot databases for creating read-only copies of databases.
  • Service Broker support for message-based communication.
  • Synonym support for multiple names of SQL Server database objects.
  • The management of Database Mail that lets you create e-mail servers, e-mail profiles, and e-mail accounts in SQL Server.
  • Registered Servers support for registering connection information.
  • Trace and replay of SQL Server events.
  • Support for certificates and keys for security control.
  • DDL triggers for adding functionality when DDL events occur.
The SMO namespace is Microsoft.SqlServer.Management.Smo. SMO is implemented as a Microsoft .NET Framework assembly. This means that the common language runtime from the Microsoft .NET Framework version 2.0 must be installed before using the SMO objects. The SMO assemblies are installed by default into the Global Assembly Cache (GAC) with the SQL Server SDK option.
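As a rough sketch of what the object model looks like in code (the server name is an assumption; objects and properties are loaded lazily as described above):

using System;
using Microsoft.SqlServer.Management.Smo;

class SmoDemo
{
    static void Main()
    {
        // Connect to a local default instance using Windows authentication.
        Server server = new Server("localhost");

        Console.WriteLine("Edition: " + server.Information.Edition);

        // Iterating the collection loads each Database object on demand.
        foreach (Database db in server.Databases)
        {
            Console.WriteLine(db.Name + " (created " + db.CreateDate + ")");
        }
    }
}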

Wednesday, June 29, 2016

SQL SERVER 2016 - JSON Data

JSON is a popular textual data format used for exchanging data in modern web and mobile applications. JSON is also used for storing unstructured data in log files or NoSQL databases. Many REST web services return results formatted as JSON text or accept data formatted as JSON. JSON is also the main format for exchanging data between web pages and web servers using AJAX calls.

SQL Server provides built-in functions and operators for the following JSON data manipulation tasks:
  • Parse JSON text and read or modify values.
  • Transform arrays of JSON objects into table format.
  • Use any Transact-SQL query on the converted JSON objects.
  • Format the results of Transact-SQL queries in JSON format.

Transform JSON text to relational table:
OPENJSON is a table-valued function (TVF) that seeks into JSON text, locates an array of JSON objects, iterates through the elements of the array, and generates one row in the output result for each element. This feature will be available in CTP3. One example of the OPENJSON function in a T-SQL query is shown below:
SELECT Number, Customer, Quantity
FROM OPENJSON (@JSalestOrderDetails, '$.OrdersArray')
WITH (
Number varchar(200),
Customer varchar(200),
Quantity int
) AS OrdersArray

Exporting data as JSON:
The first feature available in SQL Server 2016 CTP2 is the ability to format query results as JSON text using the FOR JSON clause. If you are familiar with the FOR XML clause, you will easily understand FOR JSON. When you add a FOR JSON clause at the end of a T-SQL SELECT query, SQL Server takes the results, formats them as JSON text, and returns them to the client. Every row is formatted as one JSON object, cell values are generated as the values of the JSON object, and column names or aliases are used as key names. Below is the syntax:
SELECT column, expression, column as alias
FROM table1, table2, table3
FOR JSON [AUTO | PATH]
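As a rough illustration of consuming FOR JSON output from client code (the connection string and the dbo.Orders table are made up for this sketch):

using System;
using System.Data.SqlClient;

class ForJsonDemo
{
    static void Main()
    {
        // Hypothetical connection string and table.
        const string connectionString = "Server=.;Database=Sales;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT TOP 10 Number, Customer, Quantity FROM dbo.Orders FOR JSON PATH",
            connection))
        {
            connection.Open();

            // FOR JSON returns the result set as JSON text; for small results a
            // single value is returned (large results may be split across rows,
            // in which case a reader would need to concatenate the fragments).
            string json = (string)command.ExecuteScalar();
            Console.WriteLine(json);
        }
    }
}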

Tuesday, June 28, 2016

AngularJS

AngularJS is an open source, JavaScript based web application development framework. It can be added to an HTML page with a <script> tag. AngularJS extends HTML attributes with Directives, and binds data to HTML with Expressions. The AngularJS framework works by first reading the HTML page, which has embedded into it additional custom tag attributes. Angular interprets those attributes as directives to bind input or output parts of the page to a model that is represented by standard JavaScript variables. The values of those JavaScript variables can be manually set within the code, or retrieved from static or dynamic JSON resources.

Definition of AngularJS as put by its official documentation is as follows:
"AngularJS is a structural framework for dynamic web applications. It lets you use HTML as your template language and lets you extend HTML's syntax to express your application components clearly and succinctly. Its data binding and dependency injection eliminate much of the code you currently have to write. And it all happens within the browser, making it an ideal partner with any server technology."

Angular allows your application to have an expanded HTML library.

Features of AngularJS are:
  • AngularJS is an efficient framework that can create Rich Internet Applications (RIA).
  • AngularJS provides developers the option to write client-side applications using JavaScript in a clean Model View Controller (MVC) way.
  • Applications written in AngularJS are cross-browser compliant. AngularJS automatically handles JavaScript code suitable for each browser.
  • AngularJS is open source, completely free, and used by thousands of developers around the world. It is licensed under the Apache license version 2.0.

Thursday, June 16, 2016

Common Language Infrastructure (CLI)

The Common Language Infrastructure (CLI) provides a specification for executable code and the execution environment (the Virtual Execution System) in which it runs. Executable code is presented to the VES as modules. At the center of the CLI is a unified type system, the Common Type System that is shared by compilers, tools, and the CLI itself. It is the model that defines the rules the CLI follows when declaring, using, and managing types. The CTS establishes a framework that enables cross-language integration, type safety, and high performance code execution. This clause describes the architecture of the CLI by describing the CTS. The following four areas are covered in this clause:
  • The Common Type System (CTS) - The CTS provides a rich type system that supports the types and operations found in many programming languages. The CTS is intended to support the complete implementation of a wide range of programming languages.
  • Metadata - The CLI uses metadata to describe and reference the types defined by the CTS. Metadata is stored (that is, persisted) in a way that is independent of any particular programming language. Thus, metadata provides a common interchange mechanism for use between tools (such as compilers and debuggers) that manipulate programs, as well as between these tools and the VES.
  • The Common Language Specification (CLS) - The CLS is an agreement between language designers and framework (that is, class library) designers. It specifies a subset of the CTS and a set of usage conventions. Languages provide their users the greatest ability to access frameworks by implementing at least those parts of the CTS that are part of the CLS. Similarly, frameworks will be most widely used if their publicly exported aspects (e.g., classes, interfaces, methods, and fields) use only types that are part of the CLS and that adhere to the CLS conventions.
  • The Virtual Execution System (VES) - The VES implements and enforces the CTS model. The VES is responsible for loading and running programs written for the CLI. It provides the services needed to execute managed code and data, using the metadata to connect separately generated modules together at runtime (late binding).
Together, these aspects of the CLI form a unifying infrastructure for designing, developing, deploying, and executing distributed components and applications. The appropriate subset of the CTS is available from each programming language that targets the CLI. Language-based tools communicate with each other and with the VES using metadata to define and reference the types used to construct the application. The VES uses the metadata to create instances of the types as needed and to provide data type information to other parts of the infrastructure (such as remoting services, assembly downloading, and security).
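A small illustration of the CLS in practice: marking an assembly as CLS-compliant makes the C# compiler warn when a publicly exported member uses a type outside the CLS subset (the class below is invented for the example).

using System;

[assembly: CLSCompliant(true)]

public class Measurements
{
    // uint is a valid CTS type but is not part of the CLS, so exposing it
    // publicly produces a CLS-compliance warning.
    public uint RawValue;

    // int is CLS-compliant, so any CLS language can consume this member.
    public int Value;
}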

Thursday, June 9, 2016

Test-driven development

Test-driven development (TDD, a.k.a. test-first programming) represents an important change to the way most programmers think about writing code. The practice involves writing a test before each new piece of functionality is coded into the system. It is a software development process that relies on the repetition of a very short development cycle: requirements are turned into very specific test cases, then the software is improved just enough to pass the new tests. This is opposed to software development that allows software to be added that isn't proven to meet requirements.
An interesting fact about bug-free code is that as the code is further developed, the bug count in it is less likely to increase than in code that is known to have bugs. Developers are more careful to ensure they do not introduce any errors to a solution that is bug free than they are when the solution is riddled with issues. It is what is known as "broken window syndrome". When code starts to fall into disrepair (has bugs), a typical belief follows that a few more bugs will not make any difference because the code is already broken. Therefore, it is important to keep the quality of the code high all the way through the process.
To develop zero-defect software, you need some tools and the correct way of thinking about quality. One of the most powerful tools is unit testing; and by developing the tests before writing the solution code, you are "turning the volume up." NUnit is a .NET Framework class library that can locate unit tests that have been written in your project. NUnit comes with a GUI windows application and a console-based application that both enable you to run the tests and see the results. Tests either pass or fail, as indicated by a green or red progress bar, respectively.
The tests were intended to increase your confidence in the code you are writing. The tests are driving the development process. By writing the tests first, you should find your focus on the code you are writing changes substantially. Your goal now is to define how the class will be used with a test method and then to get the tests to pass.
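A minimal test-first sketch with NUnit (the StringCalculator class is invented for the example): the test is written first, then the class is implemented just far enough to make the bar go green.

using NUnit.Framework;

[TestFixture]
public class StringCalculatorTests
{
    [Test]
    public void Add_TwoNumbers_ReturnsSum()
    {
        var calculator = new StringCalculator();
        Assert.AreEqual(5, calculator.Add("2,3"));
    }
}

// Written after the test, and only as far as the test demands.
public class StringCalculator
{
    public int Add(string numbers)
    {
        int sum = 0;
        foreach (string part in numbers.Split(','))
            sum += int.Parse(part);
        return sum;
    }
}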

Tuesday, May 31, 2016

eCommerce : Structure, Information Architecture, SEO

Electronic commerce, commonly written as e-commerce or eCommerce, is the trading or facilitation of trading in products or services using computer networks, such as the Internet. Electronic commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. Modern electronic commerce typically uses the World Wide Web for at least one part of the transaction's life cycle, although it may also use other technologies such as e-mail.

Information Architecture (IA) may sound dull but it’s a critical component of ecommerce and helps put the right data structures and standards in place to enable, amongst other things:
  • Site & catalogue structure
  • Core processes & functions e.g. site search
  • Business reporting & web analytics
  • SEO
Search Engine Optimization (SEO) is really just a series of educated guesses. It is educated guessing mixed with data-driven decisions that lead to more educated guesses and more data-driven decisions. Fear not though because even within all this guessing remain some basic principles and best practices. The most important SEO consideration for an eCommerce website is the website’s categorical structure / website architecture.

nopCommerce - ASP.NET open-source ecommerce software
nopCommerce is the leading open source shopping cart, allowing anyone to set up an online store quickly and easily.
One key feature of nopCommerce is its pluggable modular/layered architecture, which allows additional functionality and presentation elements to be dynamically added to the application at run-time. This pluggable, modularized architecture makes it easy to create and manage your web sites.
 

Topshelf : Easy to create a Windows service

Topshelf is a Windows service framework for the .NET platform. Topshelf makes it easy to create a Windows service, test the service, debug the service, and ultimately install it into the Windows Service Control Manager (SCM). Topshelf does this by allowing developers to focus on service logic instead of the details of interacting with the built-in service support in the .NET framework. Developers don’t need to understand the complex details of service classes, perform installation via InstallUtil, or learn how to attach the debugger to services for troubleshooting issues.
Topshelf works with Mono, making it possible to deploy services to Linux. The service installation features are currently Windows only, but others are working on creating native host environment support so that installation and management features are available as well.
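A minimal sketch of the Topshelf hosting API (the TimedService class and the service names are made up for illustration):

using System;
using System.Timers;
using Topshelf;

public class TimedService
{
    private readonly Timer _timer = new Timer(1000) { AutoReset = true };

    public TimedService()
    {
        _timer.Elapsed += (sender, args) => Console.WriteLine("Tick: " + DateTime.Now);
    }

    public void Start() => _timer.Start();
    public void Stop() => _timer.Stop();
}

public class Program
{
    public static void Main()
    {
        HostFactory.Run(configure =>
        {
            configure.Service<TimedService>(service =>
            {
                service.ConstructUsing(name => new TimedService());
                service.WhenStarted(s => s.Start());
                service.WhenStopped(s => s.Stop());
            });

            // These settings are what InstallUtil and manual SCM configuration
            // would otherwise have to handle.
            configure.RunAsLocalSystem();
            configure.SetServiceName("TimedService");
            configure.SetDisplayName("Timed Service");
            configure.SetDescription("Sample service hosted with Topshelf.");
        });
    }
}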

Reference Link: http://topshelf-project.com/

Friday, April 22, 2016

Comparison of RabbitMQ, ActiveMQ, and ZeroMQ Message Brokers

RabbitMQ
RabbitMQ is one of the leading implementations of the AMQP protocol (along with Apache Qpid). It therefore implements a broker architecture, meaning that messages are queued on a central node before being sent to clients. This approach makes RabbitMQ very easy to use and deploy, because advanced scenarios like routing, load balancing or persistent message queuing are supported in just a few lines of code. However, it also makes it less scalable and “slower”, because the central node adds latency and message envelopes are quite big.
  • RabbitMQ is a message queue server written in Erlang.
  • It stores jobs in memory (message queue).

ZeroMQ
ZeroMQ is a very lightweight messaging system specially designed for high-throughput/low-latency scenarios like the ones you find in the financial world. ZeroMQ supports many advanced messaging scenarios but, contrary to RabbitMQ, you will have to implement most of them yourself by combining various pieces of the framework (e.g. sockets and devices). ZeroMQ is very flexible, but you will have to study the 80 or so pages of the guide (which I recommend reading for anybody writing a distributed system, even if you don't use ZeroMQ) before being able to do anything more complicated than sending messages between two peers.
  • A socket library that acts as a concurrency framework
  • Faster than TCP, for clustered products and supercomputing
  • Carries messages across inproc, IPC, TCP, and multicast
  • Connects N-to-N via fanout, pub-sub, pipeline, and request-reply
  • Asynchronous I/O for scalable multicore message-passing apps

ActiveMQ
ActiveMQ is in the middle ground. Like ZeroMQ, it can be deployed with both broker and P2P topologies. Like RabbitMQ, it makes advanced scenarios easier to implement, but usually at the cost of raw performance.
  • ActiveMQ is an open source message broker written in Java.
  • Supports many advanced features such as Message Groups, Virtual Destinations, Wildcards and Composite Destinations.
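To make the broker model concrete, a minimal publish with the official RabbitMQ .NET client looks roughly like this (the host name and queue name are assumptions):

using System;
using System.Text;
using RabbitMQ.Client;

class Send
{
    static void Main()
    {
        var factory = new ConnectionFactory() { HostName = "localhost" };

        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // The broker owns the queue; producers and consumers only ever talk to the broker.
            channel.QueueDeclare(queue: "tasks",
                                 durable: false,
                                 exclusive: false,
                                 autoDelete: false,
                                 arguments: null);

            var body = Encoding.UTF8.GetBytes("Hello from the producer");
            channel.BasicPublish(exchange: "",
                                 routingKey: "tasks",
                                 basicProperties: null,
                                 body: body);

            Console.WriteLine("Message queued on the broker.");
        }
    }
}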

Wednesday, April 20, 2016

Object Modeling

The goal in object modeling is to render a precise, concise, understandable object-oriented model, or "blueprint," of the system to be automated. This model will serve as an important tool for communication:
  • To the future users of the system that we are about to build, an object model communicates our understanding of the system requirements.
  • To the software development team, an object model communicates the structure and function of the software that needs to be built in order to satisfy those requirements. This benefits not only the software engineers themselves, but also the folks who are responsible for quality assurance, testing, and documentation.
  • Long after the application is operational, an object model lives on as a "schematic diagram" to help the myriad folks responsible for supporting and maintaining an application understand its structure and function.
The design of complex systems invariably changes during their construction, so care should be taken to keep the object model up-to-date as the system is built.
Modeling Methodology = Process + Notation + Tool
According to Webster's dictionary, a methodology is
    A set of systematic procedures used by a discipline (to achieve a particular desired outcome).
A modeling methodology ideally involves three components:
  • A process: The "how to" steps for gathering the requirements and determining the abstraction to be modeled
  • A notation: A graphical "language" for communicating the model
  • A tool: An automated way of rendering the notation, typically in "drag-and-drop" fashion
Although these constitute the ideal components of a modeling methodology, they are not all of equal importance.
  • Adhering to a sound process is certainly critical.
  • However, we can sometimes get by with a narrative text description of an abstraction without having to resort to portraying it with formal graphical notation.
  • And, when we do choose to depict an abstraction formally via a graphical notation, it isn't mandatory that we use a specialized tool for doing so.
Object modeling tools fall under the general heading of Computer-Aided Software Engineering, or CASE, tools. CASE tools afford us many advantages, but they aren't without their drawbacks.

The Advantages of Using CASE Tools:
  • Ease of Use - CASE tools provide a quick drag-and-drop way to create visual models.
  • Added Information Content - CASE tools produce "intelligent" drawings that enforce the syntax rules of a particular notation. This is in contrast to a generic drawing package, which will pretty much let you draw whatever you like, whether it adheres to the notational syntax or not.
  • Automated Code Generation - Most CASE tools provide code generation capabilities, enabling you to transition from a diagram to skeletal C# (or other) code with the push of a button.
  • Project Management Aids - Many CASE tools provide some sort of version control, enabling you to maintain different generations of the same model.
  • Flexibility - Some CASE tools support multiple graphical notations, enabling you to initially create a diagram in one notation but to then convert the diagram to another notation quickly and effortlessly.

Some Drawbacks of CASE Tools:
  • CASE tools can be expensive; it's not unusual for a high-end CASE tool to cost hundreds or even thousands of dollars per "seat."
  • It's easy to get caught up with form over substance! This is true of any automated tool - even a word processor tends to lure people into spending more time on the cosmetics of a document than is warranted, long after the substantive content is rock solid.

Thursday, March 31, 2016

Few C# Terms

Accessor
An accessor is a method which provides access to a value managed within a class. Effectively the access is read only, in that the data is held securely in the class but code in other classes may need to have access to the value itself. An accessor is implemented as a public method which will return a value to the caller. Note that if the thing being given access to is managed by reference, the programmer must make sure that it is OK for a reference to the object to be passed out. If the object is not to be changed, it may be necessary to make a copy of the object to return to the caller.

Coupling
If a class is dependent on another the two classes are said to be coupled. Generally speaking a programmer should strive to have as little coupling in their designs as possible, since it makes it harder to update the system. Coupling is often discussed alongside cohesion, in that you should aim for high cohesion and low coupling.

Mutator
A mutator is a method which is called to change the value of a member inside an object. The change will hopefully be managed, in that invalid values will be rejected in some way. This is implemented in the form of a public method which is supplied with a new value and may return an error code.
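A small sketch combining both ideas (the Account class is invented for illustration): the read-only property acts as the accessor, and the Deposit method is the managed mutator that rejects invalid values.

using System;

public class Account
{
    private decimal _balance;

    // Accessor: read-only access to the value held inside the class.
    public decimal Balance
    {
        get { return _balance; }
    }

    // Mutator: the change is managed, so invalid values are rejected.
    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException(nameof(amount), "Deposit must be positive.");
        _balance += amount;
    }
}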

Stream
A stream is an object which represents a connection to something which is going to move data for us. The movement might be to a disk file, to a network port or even to the system console. Streams remove the need to modify a program depending on where the output is to be sent or input received from.

Subscript
This is a value which is used to identify an element in an array. It must be an integer value. Subscripts in C# always start at 0 (this locates the first element of the array) and extend up to the size of the array minus 1. This means that if you create a four-element array, you get hold of elements in the array by subscript values of 0, 1, 2 or 3. The best way to regard a subscript is as the distance down the array you have to move to get the element that you want. This means that the first element in the array must have a subscript value of 0.
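For example:

using System;

class SubscriptDemo
{
    static void Main()
    {
        // A four-element array is indexed with subscripts 0, 1, 2 and 3.
        int[] scores = { 10, 20, 30, 40 };
        Console.WriteLine(scores[0]);                  // first element
        Console.WriteLine(scores[scores.Length - 1]);  // last element (subscript 3, not 4)
    }
}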

Typesafe
Type-safe code accesses only the memory locations it is authorized to access. For example, type-safe code cannot read values from another object's private fields. It accesses types only in well-defined, allowable ways. When code is type safe, the common language runtime can completely isolate assemblies from each other. This isolation helps ensure that assemblies cannot adversely affect each other and it increases application reliability. Type-safe components can execute safely in the same process even if they are trusted at different levels.

Continuous Delivery

Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery.
"Continuous Delivery is a software development discipline where you build software in such a way that the software can be released to production at any time" — Martin Fowler
Continuous integration (CI) is the practice, in software engineering, of merging all developer working copies to a shared mainline several times a day. Continuous Integration provides a framework for efficiently validating software in a predictable way. But to get the most out of it, you need to look at how it fits into the overall process of delivering software. In an agile project, you want to deliver working software at every iteration. Unfortunately, this is easier said than done; it often turns out that even if you implement CI and get the build process to produce a new installation package in a few minutes, it takes several days to get a new piece of software tested and released into production. To make this work better, the key is process. In order to deliver working software faster, you need a good cohesive set of tools and practices. So you need to add planning, environment management, deployment, and automated validation to get a great solution for your product, and this is just what Continuous Delivery is about.
Continuous Deployment is the practice of continuously pushing to production new versions of software under development. So Continuous Integration is all about quick feedback and validation of the commit phase, and Continuous Delivery is about establishing a mindset where you can deliver features at customer demand. Continuous Deployment is a third term that’s sometimes confused with both Continuous Integration and Continuous Delivery. Continuous Deployment can be viewed as the next level of Continuous Delivery. Where Continuous Delivery provides a process to create frequent releases but not necessarily deploy them, Continuous Deployment means that every change you make automatically gets deployed through the deployment pipeline. When you have established a Continuous Delivery solution, you are ready to move to Continuous Deployment if that’s something your business would benefit from.

Summary:
Continuous Integration is a software development practice in which you build and unit-test software every time a developer checks in new code.
Continuous Delivery (CD) is a software development practice in which continuous integration, automated testing, and automated deployment capabilities allow software to be developed and deployed rapidly, reliably and repeatedly with minimal manual overhead.
Continuous Deployment is a software development practice in which every code change goes through the entire pipeline and is put into production, automatically, resulting in many production deployments every day.

Monday, February 29, 2016

Few CPU Facts

64-bit computing : In computer architecture, 64-bit computing is the use of processors that have datapath widths, integer size, and memory address widths of 64 bits (eight octets). Also, 64-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size. From the software perspective, 64-bit computing means the use of code with 64-bit virtual memory addresses.
Processor registers are typically divided into several groups: integer, floating-point, SIMD, control, and often special registers for address arithmetic which may have various uses and names such as address, index or base registers. However, in modern designs, these functions are often performed by more general purpose integer registers. In most processors, only integer or address-registers can be used to address data in memory; the other types of registers cannot. The size of these registers therefore normally limits the amount of directly addressable memory, even if there are registers, such as floating-point registers, that are wider.
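A quick way to see the pointer width from managed code (a small sketch, not specific to any particular CPU):

using System;

class Bitness
{
    static void Main()
    {
        // IntPtr.Size is 8 bytes in a 64-bit process and 4 bytes in a 32-bit one.
        Console.WriteLine("Pointer size: " + (IntPtr.Size * 8) + "-bit");
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
        Console.WriteLine("64-bit OS: " + Environment.Is64BitOperatingSystem);
    }
}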

What the difference between Intel Core i3, Core i5, Core i7:
The Core name itself is a bit misleading. All CPUs have one or more cores, with each core being a processor itself. Most commonly an Intel Core processor will have two physical cores (dual-core) and also two virtual cores (which Intel calls Hyper-Threading). Some, though, have four physical cores: quad-core. If you buy a Core i7 Extreme Edition, you will find up to 12 physical cores. Physical cores are better than virtual cores in performance terms.
Model: Core i3 | Core i5 | Core i7
  • Number of cores (a core can be thought of as an individual processor): 2 | 4 | 4
  • Hyper-Threading (Intel's technology for creating two logical cores in each physical core): Yes | No | Yes
  • Turbo Boost (Intel's technology for automatically overclocking a processor, boosting its clock speed higher than the default setting): No | Yes | Yes
  • K model (a model number ending with a K means that the CPU is unlocked): No | Yes | Yes

CPU Registers :
There are 16 general purpose registers in the x86-64 architecture.

ARM architecture:
ARM, originally Acorn RISC Machine, later Advanced RISC Machine, is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments. The British company ARM Holdings develops the architecture and licenses it to other companies, who design their own products that implement one of those architectures, including systems-on-chips (SoC) that incorporate memory, interfaces, radios, etc. It also designs cores that implement this instruction set and licenses these designs to a number of companies that incorporate those core designs into their own products.

Open source platform for continuous inspection of code quality : SonarQube

SonarQube can perform analysis on 20+ different languages. The outcome of this analysis will be quality measures and issues (instances where coding rules were broken). SonarQube is an open platform to manage code quality. There are three different paradigms for SonarQube analysis. You switch among the three modes using the sonar.analysis.mode analysis parameter with one of these three values:
  • publish - this is the default. This mode analyzes everything that is analyzable for the languages in question and pushes the results to the server for processing.
  • preview - typically used to determine whether code changes are good enough to move forward with, e.g. merge into the Git master.
  • issues - a "preview" equivalent intended for use by tools. You should never need to use it manually.
SonarQube covers the 7 axes of code quality:
  • Architecture & Design
  • Comments
  • Coding rules
  • Potential bugs
  • Complexity
  • Unit tests
  • Duplications
Features
  • Supports languages: Java, C/C++, Objective-C, C#, PHP, Flex, Groovy, JavaScript, Python, PL/SQL, COBOL, etc.
  • Can also be used in Android development.
  • Offers reports on duplicated code, coding standards, unit tests, code coverage, code complexity, potential bugs, comments and design and architecture.
  • Records metrics history and provides evolution graphs ("time machine") and differential views.
  • Provides fully automated analyses: integrates with Maven, Ant, Gradle and continuous integration tools (Atlassian Bamboo, Jenkins, Hudson, etc.).
  • Integrates with the Eclipse development environment
  • Integrates with external tools: JIRA, Mantis, LDAP, Fortify, etc.
  • Is expandable with the use of plugins.
  • Implements the SQALE methodology to compute technical debt.

Saturday, January 30, 2016

.NET Core

.NET Core is a modular version of the .NET Framework designed to be portable across platforms for maximum code reuse and code sharing. In addition, .NET Core will be open-sourced and accept contributions from the community.
What is .NET Core?
.NET Core is portable across platforms because, although a subset of the full .NET Framework, it provides key functionality to implement the app features you need and lets you reuse this code regardless of your platform target. In the past, different versions of .NET for different platforms lacked shared functionality for key tasks such as reading local files. Microsoft platforms that you will be able to target with .NET Core include traditional desktop Windows, as well as Windows devices and phones. When used with third-party tools such as Xamarin, .NET Core should be portable to iOS and Android devices. In addition, .NET Core will soon be available for the Mac and Linux operating systems to enable web apps to run on those systems.
.NET Core is modular because it is released through NuGet in smaller assembly packages. Rather than one large assembly that contains most of the core functionality, .NET Core is made available as smaller feature-centric packages. This enables a more agile development model for us and allows you to pick and choose the functionality pieces that you need for your apps and libraries. For more information about .NET packages that release on NuGet, see The .NET Framework and Out-of-Band Releases.
For existing apps, using Portable Class Libraries (PCL), Universal app projects and separating business logic from platform specific code is the best way to take advantage of .NET Core, and maximize your code reuse. For apps, Model-View-Controller (MVC) or the Model View-ViewModel (MVVM) patterns are good choices to make your apps easy to migrate to .NET Core.
In addition to the modularization of the .NET Framework, Microsoft is open-sourcing the .NET Core packages on GitHub, under the MIT license. This means you can clone the Git repo, read and compile the code and submit pull requests just like any other open source package you might find on GitHub.
.NET Core 5 is a modular runtime and library implementation that includes a subset of the .NET Framework. Currently it is feature complete on Windows, and in-progress builds exist for both Linux and OS X. .NET Core consists of a set of libraries, called “CoreFX”, and a small, optimized runtime, called “CoreCLR”. .NET Core is open-source, so you can follow progress on the project and contribute to it on GitHub:
  • .NET Core Libraries (CoreFX)
  • .NET Core Common Language Runtime (CoreCLR)
The CoreCLR runtime (Microsoft.CoreCLR) and CoreFX libraries are distributed via NuGet. The CoreFX libraries are factored as individual NuGet packages according to functionality, named “System.[module]” on nuget.org.
One of the key benefits of .NET Core is its portability. You can package and deploy the CoreCLR with your application, eliminating your application’s dependency on an installed version of .NET (e.g. .NET Framework on Windows). You can host multiple applications side-by-side using different versions of the CoreCLR, and upgrade them individually, rather than being forced to upgrade all of them simultaneously.
CoreFX has been built as a componentized set of libraries, each requiring the minimum set of library dependencies (e.g. System.Collections only depends on System.Runtime, not System.Xml). This approach enables minimal distributions of CoreFX libraries (just the ones you need) within an application, alongside CoreCLR. CoreFX includes collections, console access, diagnostics, IO, LINQ, JSON, XML, and regular expression support, just to name a few libraries. Another benefit of CoreFX is that it allows developers to target a single common set of libraries that are supported by multiple platforms.

AMQP Essentials

WHAT IS AMQP?
AMQP (Advanced Message Queuing Protocol) is a binary transfer protocol that was made for enterprise applications and server-to-server communication (e.g., for financial businesses), but today it can be very useful in the Internet of Things world, thanks to the following primary features. AMQP is binary and avoids a lot of the useless data sent on the wire when using a text-based protocol like HTTP; because of this, it can be considered compact, too. Thanks to its multiplexed nature, only one connection (over a reliable stream transport protocol) is needed to allow separated data flows between the two peers; and of course it’s symmetric and provides both a client-server communication style and peer-to-peer exchange. Finally, it’s secure and reliable, providing three different levels of QoS (Quality of Service).
The last ratified version of AMQP (1.0) is the only one standardized by OASIS (since 2012/10) and ISO/IEC (since 2014/05), and it’s totally broker-model agnostic, as it doesn’t define any requirements on broker internals (this is the main difference with previous “not standard” versions like 0.9.1); the protocol is focused on how the data is transferred on the wire.

AMQP ARCHITECTURE
AMQP has a layered model defined in the following way from a bottom-up perspective:
• TRANSPORT/FRAMING:
Defines the connection behavior and the security layer between peers on top of an underlying network transport protocol (TCP, for example). It also adds the framing protocol and defines how the exchanged data is formatted and encoded.
• MESSAGING:
Provides messaging capabilities at the application level on top of the previous layer, defining the message entity as built of one or more frames.
Regarding the network transport layer, AMQP isn’t strongly tied to TCP, and as such can be used with any reliable stream transport protocol; so, for example, SCTP (Stream Control Transmission Protocol) and pipes are suitable.

AMQP COMMUNICATIONS
All the AMQP concepts—from connection, session, and link to performatives and messages—fit together to define how the communication happens between two peers. The main steps involved are:
• OPEN/CLOSE a connection (respectively after opening a network connection and before closing it) using “open” and “close” performatives
• BEGIN/END a session inside the connection thanks to “begin” and “end” performatives
• ATTACH/DETACH a link inside the session using “attach” and “detach” performatives
• SEND/RECEIVE messages with flow control thanks to “transfer,” “disposition,” and “flow” performatives
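For instance, with the AMQP .NET Lite client library (the library choice, broker address and queue name are assumptions; any AMQP 1.0 library exposes the same concepts), the connection, session and link described above map directly onto objects:

using Amqp;

class AmqpSend
{
    static void Main()
    {
        // Hypothetical broker address and target queue.
        var address = new Address("amqp://guest:guest@localhost:5672");

        var connection = new Connection(address);                         // OPEN
        var session = new Session(connection);                            // BEGIN
        var sender = new SenderLink(session, "sender-link", "examples");  // ATTACH

        sender.Send(new Message("Hello AMQP 1.0"));                       // TRANSFER

        sender.Close();       // DETACH
        session.Close();      // END
        connection.Close();   // CLOSE
    }
}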