Thursday, December 31, 2015
yield Keyword Explained
When you use the yield keyword in a statement, you indicate that the method, operator, or get accessor in which it appears is an iterator. Using yield to define an iterator removes the need for an explicit extra class when you implement the IEnumerable and IEnumerator pattern for a custom collection type. The yield keyword was introduced in C# 2.0 (VB.NET gained an equivalent Yield statement later, in Visual Studio 2012) and is useful for iteration. In iterator blocks it is used to produce the next value in the sequence.
“Yield keyword helps us to do custom stateful iteration over .NET collections.” There are two scenarios where the yield keyword is useful:
- Customized iteration through a collection without creating a temporary collection
- Stateful iteration
It is a contextual keyword used in iterator methods in C#. Basically, it has two use cases:
- yield return obj; returns the next item in the sequence.
- yield break; stops returning sequence elements (this happens automatically if control reaches the end of the iterator method body).
When you use the "yield return" keyphrase, .NET is wiring up a whole bunch of plumbing code for you, but for now you can pretend it's magic. When you start to loop in the calling code, this function actually gets called over and over again, but each time it resumes execution where it left off.
Typical Implementation
- Caller calls function
- Function executes and returns list
- Caller uses list
Yield Implementation
- Caller calls function
- Caller requests item
- Next item returned
- Goto step #2
Although the execution of the yield implementation is a little more complicated, what we end up with is an implementation that "pulls" items one at a time instead of having to build an entire list before returning to the client.
Example:
using System;

public class PowersOf2
{
    static void Main()
    {
        // Display powers of 2 up to the exponent of 8:
        foreach (int i in Power(2, 8))
        {
            Console.Write("{0} ", i);
        }
    }

    public static System.Collections.Generic.IEnumerable<int> Power(int number, int exponent)
    {
        int result = 1;
        for (int i = 0; i < exponent; i++)
        {
            result = result * number;
            yield return result;
        }
    }

    // Output: 2 4 8 16 32 64 128 256
}
Wednesday, December 30, 2015
RabbitMQ : Tackling the broker SPOF
Though RabbitMQ brokers are extremely stable, a crash is always possible. Losing an instance altogether due to a virtual instance glitch is a likely possibility that can't be ignored. Therefore, it is essential to tackle the broker single point of failure (SPOF) before something bad happens, to prevent losing data or annoying users. RabbitMQ can easily be configured to run in an active/active deployment, where several brokers are engaged in a cluster to act as a single highly-available AMQP middleware. The active/active aspect is essential, because it means that no manual fail-over operation is needed if one broker goes down.
Mirroring queues
With clustering, the configuration gets synchronized across all RabbitMQ nodes. This means that clients can now connect to one node or the other and find the exchanges and queues they're expecting. However, there is one thing that is not carried over the cluster by default: the messages themselves. By default, queue data is local to a particular node. When a queue is mirrored, its instances across the network organize themselves around one master and several slaves. All interaction (message queuing and dequeuing) happens with the master; the slaves receive the updates via synchronization over the cluster. If you interact with a node that hosts a slave queue, the interaction would actually be forwarded across the cluster to the master and then synchronized back to the slave.
Activating queue mirroring is done via a policy that is applied to each queue concerned; for example, a policy whose definition sets ha-mode to all mirrors the matching queues across every node in the cluster.
Federating brokers
If you think beyond the notion of a single centralized enterprise resource and instead think in terms of distributed components, the idea of creating a topology of RabbitMQ brokers will emerge. RabbitMQ offers the following two plugins that allow the connection of brokers:
• The shovel plugin, which connects queues in one broker to exchanges in another broker
• The federation plugin, which connects queues to queues or exchanges to exchanges across brokers
Both plugins ensure reliable delivery of messages across brokers: if messages can't be routed to the target broker, they'll remain safely accumulated. Neither requires brokers to be clustered, which simplifies setup and management (RabbitMQ and Erlang versions can mismatch). The federation plugin needs to be installed on all RabbitMQ nodes that will engage in the topology.
Keep in mind...
- RabbitMQ relies on Erlang's clustering feature, which allows several Erlang nodes to communicate with each other locally or over the network.
- The Erlang cluster requires a so-called security cookie as a means of cross-node authentication.
- If RabbitMQ instances are firewalled from each other, specific ports must be opened on top of the one used by AMQP (5672); otherwise, the cluster will not work.
- Make sure the same version of Erlang is used by all the RabbitMQ nodes that engage in a cluster; otherwise, the join_cluster command will fail with an OTP version mismatch error.
- Similarly, the same major/minor version of RabbitMQ should be used across nodes, but patch versions can differ; this means that versions 3.2.1 and 3.2.0 can be used in the same cluster, but not 3.2.1 and 3.1.0.
- In a federation, only the node where messages converge needs to be manually configured; its upstream nodes get automatically configured for the topology. Conversely with shovels, each source node needs to be manually configured to send to a destination node, which itself is unaware of the fact that it's engaged in a particular topology.
Thursday, December 24, 2015
WS-Policy
WS-Policy is a specification that allows web services to use XML to advertise their policies (on security, quality of service, etc.) and for web service consumers to specify their policy requirements. WS-Policy is a W3C recommendation as of September 2007. WS-Policy represents a set of specifications that describe the capabilities and constraints of the security (and other business) policies on intermediaries and end points (for example, required security tokens, supported encryption algorithms, and privacy rules) and how to associate policies with services and end points.
- To integrate software systems with web services
- Need a way to express web services characteristics
- Without this standard, developers need docs
- Provides a flexible and extensible grammar for expressing the capabilities, requirements, and general characteristics of Web Service entities
- Defines a model to express these properties as policies
- Provide the mechanisms needed to enable Web Services applications to specify policies
WS-Policy specifies:
- An XML-based structure called a policy expression containing policy information
- Grammar elements to indicate how the contained policy assertions apply
Terminology:
- Policy: refers to the set of information being expressed as policy assertions
- Policy Assertion: represents an individual preference, requirement, capability, etc.
- Policy Expression: set of one or more policy assertions
- Policy Subject: an entity to which a policy expression can be bound
Policy Namespaces:
- The WS-Policy schema defines all constructs that can be used in a policy expression
Prefix | Description | Namespace |
---|---|---|
wsp | WS-Policy, WS-PolicyAssertions, and WS-PolicyAttachment | http://schemas.xmlsoap.org/ws/2002/12/policy |
wsse | WS-SecurityPolicy | http://schemas.xmlsoap.org/ws/2002/12/secext |
wsu | WS utility schema | http://schemas.xmlsoap.org/ws/2002/07/utility |
msp | WSE 2.0 policy schema | http://schemas.microsoft.com/wse/2003/06/policy |
Monday, November 30, 2015
MongoDB Fundamentals
MongoDB is a document-oriented DBMS. It uses JSON-like objects comprising the data model, rather than RDBMS tables. Significantly, MongoDB supports neither joins nor transactions. However, it features secondary indexes, an expressive query language, atomic writes on a per-document level, and fully-consistent reads. Operationally, MongoDB features master-slave replication with automated failover and built-in horizontal scaling via automated range-based partitioning.
Instead of tables, a MongoDB database stores its data in collections, which are the rough equivalent of RDBMS tables. A collection holds one or more documents, each of which corresponds to a record or a row in a relational database table, and each document has one or more fields, which correspond to the columns in a relational database table.
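As a rough sketch of this model (the collection and field names are invented for illustration), inserting and querying a document from the mongo shell looks like this:

// Insert a document into the "people" collection (created implicitly on first use)
db.people.insert({
    name: "Ada Lovelace",                    // a field, analogous to a column
    contact: { email: "ada@example.com" },   // fields can hold nested documents
    skills: ["math", "analytical engines"]   // or arrays
});

// Query by field value, roughly a WHERE clause against a table
db.people.find({ name: "Ada Lovelace" });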
MongoDB Storage
A storage engine is the part of a database that is responsible for managing how data is stored on disk. Many databases support multiple storage engines, where different engines perform better for specific workloads. For example, one storage engine might offer better performance for read-heavy workloads, and another might support a higher-throughput for write operations. MMAPv1 is the default storage engine in 3.0. With multiple storage engines, you can decide which storage engine is best for your application.
MMAPv1 Storage Engine
A memory-mapped file is a file with data that the operating system places in memory by way of the mmap() system call. mmap() thus maps the file to a region of virtual memory. Memory-mapped files are the critical piece of the MMAPv1 storage engine in MongoDB. By using memory mapped files, MongoDB can treat the contents of its data files as if they were in memory. This provides MongoDB with an extremely fast and simple method for accessing and manipulating data.
MongoDB uses memory mapped files for managing and interacting with all data. Memory mapping assigns files to a block of virtual memory with a direct byte-for-byte correlation. MongoDB memory maps data files to memory as it accesses documents. Unaccessed data is not mapped to memory. Once mapped, the relationship between file and memory allows MongoDB to interact with the data in the file as if it were memory.
Database Properties
Atomicity and Transactions
In MongoDB, a write operation is atomic on the level of a single document, even if the operation modifies multiple embedded documents within a single document. When a single write operation modifies multiple documents, the modification of each document is atomic, but the operation as a whole is not atomic and other operations may interleave. However, we can isolate a single write operation that affects multiple documents using the $isolated operator.
Get more information at Concurrency.
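As a hedged sketch (the collection is hypothetical), $isolated is added to the query document of a multi-document update; note that it does not make the operation transactional and it is not supported on sharded clusters, it only prevents other writes from interleaving:

// Decrement the balance of every active account in one isolated pass
db.accounts.update(
    { status: "A", $isolated: 1 },   // $isolated keeps other writes from interleaving
    { $inc: { balance: -10 } },      // each individual document update is atomic anyway
    { multi: true }                  // apply to all matching documents
);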
Consistency
MongoDB is consistent by default: reads and writes are issued to the primary member of a replica set. Applications can optionally read from secondary replicas, where data is eventually consistent by default. Reads from secondaries can be useful in scenarios where it is acceptable for data to be slightly out of date, such as some reporting applications. Applications can also read from the closest copy of the data (as measured by ping distance) when latency is more important than consistency.
Get more information at Consistency.
SOA : Service-Oriented Architecture
What is Service-Oriented Architecture?
Service-Oriented Architecture (SOA) is an architectural style. Applications built using an SOA style deliver functionality as services that can be used or reused when building applications or integrating within the enterprise or trading partners. It is a method of design, deployment, and management of both applications and the software infrastructure where:
- All software is organized into business services that are network accessible and executable.
- Service interfaces are based on public standards for interoperability.
Key Characteristics of SOA
- Uses open standards to integrate software assets as services
- Standardizes interactions of services
- Services become building blocks that form business flows
- Services can be reused by other applications
- Quality of service, security and performance are specified
- The software infrastructure is responsible for managing the services
- Services are catalogued and discoverable
- Data are catalogued and discoverable
- Protocols use only industry standards
What is an Enterprise Service Bus (ESB)?
- An enterprise service bus is an infrastructure used for building composite applications
- The enterprise service bus is the glue that holds the composite application together
- The enterprise service bus is an emerging style for integrating enterprise applications in an implementation-independent fashion
- An enterprise service bus can be thought of as an abstraction layer on top of an Enterprise Messaging System
Key Characteristics of an ESB
- Streamlines development
- Supports multiple binding strategies
- Performs data transformation
- Intelligent routing
- Real time monitoring
- Exception handling
- Service security
Tuesday, October 20, 2015
Real-time Transport Protocol
The Real-time Transport Protocol (RTP) is a network protocol for delivering audio and video over IP networks. RTP is used extensively in communication and entertainment systems that involve streaming media, such as telephony, video teleconference applications, television services and web-based push-to-talk features.
RTP is used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the media streams (e.g., audio and video), RTCP is used to monitor transmission statistics and quality of service (QoS) and aids synchronization of multiple streams. RTP is one of the technical foundations of Voice over IP and in this context is often used in conjunction with a signaling protocol such as the Session Initiation Protocol (SIP) which establishes connections across the network.
RTP was developed by the Audio-Video Transport Working Group of the Internet Engineering Task Force (IETF) and first published in 1996 as RFC 1889, superseded by RFC 3550 in 2003.
RTP combines its data transport with a control protocol (RTCP), which makes it possible to monitor data delivery for large multicast networks. Typically, RTP runs on top of the UDP protocol, although the specification is general enough to support other transport protocols. An RTP session is established for each multimedia stream. A session consists of an IP address with a pair of ports for RTP and RTCP. For example, audio and video streams use separate RTP sessions, enabling a receiver to deselect a particular stream.
Monday, October 5, 2015
Node.js Module
A module encapsulates related code into a single unit of code. When creating a module, this can be interpreted as moving all related functions into a file. Node.js has a simple module loading system. In Node.js, files and modules are in one-to-one correspondence. As an example, foo.js loads the module circle.js in the same directory.
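A minimal sketch of that foo.js / circle.js pair (the function names are illustrative):

// circle.js - exposes two functions by attaching them to the exports object
exports.area = function (r) {
    return Math.PI * r * r;
};
exports.circumference = function (r) {
    return 2 * Math.PI * r;
};

// foo.js - loads circle.js from the same directory via a relative path
var circle = require('./circle.js');
console.log('The area of a circle of radius 4 is ' + circle.area(4));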
The semantics of Node.js's require() function were designed to be general enough to support a number of reasonable directory structures. Package manager programs such as dpkg, rpm, and npm will hopefully find it possible to build native packages from Node.js modules without modification.
Core Modules
Node.js has several modules compiled into the binary. The core modules are defined within Node.js's source and are located in the lib/ folder. Core modules are always preferentially loaded if their identifier is passed to require(). For instance, require('http') will always return the built in HTTP module, even if there is a file by that name.
exports VS. module.exports
- exports is an alias to module.exports.
- node automatically creates it as a convenient shortcut.
- For assigning named properties, use either one.
> module.exports.fiz = "fiz";
> exports.buz = "buz";
> module.exports === exports;
true
- Assigning anything to exports directly (instead of exports.something) will overwrite the exports alias.
Named exports - one module, many exported things
Anonymous exports - simpler client interface
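A short sketch of the difference (the file names are hypothetical); the last snippet shows the pitfall of assigning directly to exports:

// namedExports.js - one module, many exported things
exports.info = function (msg) { console.log('INFO: ' + msg); };
exports.warn = function (msg) { console.log('WARN: ' + msg); };

// anonymousExport.js - the whole module is a single function
module.exports = function (msg) { console.log(msg); };

// pitfall.js - this only rebinds the local variable "exports";
// require('./pitfall') still returns the original (now empty) module.exports object
exports = function (msg) { console.log(msg); };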
Wednesday, September 30, 2015
Mobile Backend as a Service (MBaaS)
Mobile Backend as a service (MBaaS), also known as "backend as a service" (BaaS), is a model for providing web and mobile app developers with a way to link their applications to backend cloud storage and APIs exposed by back end applications while also providing features such as user management, push notifications, and integration with social networking services. These services are provided via the use of custom software development kits (SDKs) and application programming interfaces (APIs). Backend as a Service is also referred to as “turn-on infrastructure” for mobile and web apps.
BaaS providers tend to fall into one of two categories: consumer BaaS or enterprise BaaS. The former focuses largely on “lighter-weight” brand apps (and games), whereas the latter centers on mobilizing sensitive, business-critical data from enterprise systems. As a whole, BaaS providers are disrupting the on-premise “Mobile Enterprise Application Platform,” or MEAP, category, while providing a lot more turn-key functionality for your mobile strategy than traditional API management and Platform as a Service vendors.
Web and mobile apps require a similar set of features on the backend, including push notifications, integration with social networks, and cloud storage. Each of these services has its own API that must be individually incorporated into an app, a process that can be time-consuming and complicated for app developers. BaaS providers form a bridge between the frontend of an application and various cloud-based backends via a unified API and SDK.
Providing a consistent way to manage backend data means that developers do not need to redevelop their own backend for each of the services that their apps need to access, potentially saving both time and money.
Although similar to other cloud-computing developer tools, such as software as a service (SaaS), infrastructure as a service (IaaS), and platform as a service (PaaS), BaaS is distinct from these other services in that it specifically addresses the cloud-computing needs of web and mobile app developers by providing a unified means of connecting their apps to cloud services.
Thursday, September 10, 2015
Compress LOB data in the database
SQL Server stores the data in regular B-Tree indexes in three different sets of the data pages called allocation units. The main data row structure and fixed-length data are stored in IN-ROW data pages. Variable-length data greater than 8,000 bytes in size is stored in LOB (large object) pages. Such data includes (max) columns, XML, CLR UDT and a few other data types. Finally, variable-length data, which does not exceed 8,000 bytes, is stored either in IN-ROW data pages when it fits into the page, or in ROW-OVERFLOW data pages.
Enterprise Edition of SQL Server allows you to reduce the size of the data by implementing data compression. However, data compression is applied to IN-ROW data only and it does not compress ROW-OVERFLOW and LOB data. Any large objects that do not fit into IN-ROW data pages remain uncompressed.
The approach to address such an overhead is to manually compress LOB data in the code. You can create methods to compress and decompress data utilizing one of the classes from the System.IO.Compression namespace, for example the GZipStream or DeflateStream classes. Moreover, those methods could be implemented in CLR stored procedures and used directly in T-SQL code. The drawback to this approach is that compression is CPU intensive, so it is better to run such code on the client whenever possible. The second important consideration is performance: obviously, decompression adds an overhead, which you would like to avoid at a large scale.
Compressing LOB data in the database can help you significantly reduce the database size in a large number of cases. However, it adds the overhead of compressing and decompressing data. In some cases, such overhead is easily offset by the smaller data size, less I/O, and lower buffer pool usage.
Monday, August 31, 2015
Understanding Firewalls
Firewalls enable you to define an access control requirement and ensure that only traffic or data that meets that requirement can traverse the firewall (in the case of a network-based firewall) or access the protected system (in the case of a host-based firewall). Firewalls are used to create security checkpoints at the boundaries of private networks. At these checkpoints, firewalls inspect all packets passing between the private network and the Internet and determine whether to pass or drop the packets depending on how they match the policy rules programmed into the firewall.
Firewalls sit on the borders of your network, connected directly to the circuits that provide access to other networks. For that reason, firewalls are frequently referred to as border security. The concept of border security is important—without it, every host on your network would have to perform the functions of a firewall itself, needlessly consuming computer resources and increasing the amount of time required to connect, authenticate, and encrypt data in local area, high-speed networks. Firewalls allow you to centralize all external security services in machines that are optimized for and dedicated to the task. Inspecting traffic at the border gateways also has the benefit of preventing hacking traffic from consuming the bandwidth on your internal network.
Fundamentally, firewalls need to be able to perform the following tasks:
- Manage and control network traffic
- Authenticate access
- Act as an intermediary
- Protect resources
- Record and report on events
And Firewalls function primarily by using three fundamental methods:
- Packet Filtering Rejects TCP/IP packets from unauthorized hosts and rejects connection attempts to unauthorized services.
- Network Address Translation (NAT) Translates the IP addresses of internal hosts to hide them from outside monitoring. You may hear of NAT referred to as IP masquerading.
- Proxy Services Makes high-level application connections on behalf of internal hosts in order to completely break the network layer connection between internal and external hosts.
Packet Filters
Filters compare network protocols (such as IP) and transport protocol packets (such as TCP) to a database of rules and forward only those packets that conform to the criteria specified in the database of rules. Filters can either be implemented in routers or in the TCP/IP stacks of servers. Filters implemented inside routers prevent suspicious traffic from reaching the destination network, whereas TCP/IP filter modules in servers merely prevent that specific machine from responding to suspicious traffic. The traffic still reaches the network and could target any machine on it. Filtered routers protect all the machines on the destination network from suspicious traffic. For that reason, filtering in the TCP/IP stacks of servers (such as that provided by Windows NT) should only be used in addition to router filtering, not instead of it.
Network Address Translation
Network Address Translation (NAT) solves the problem of hiding internal hosts. NAT is actually a network layer proxy: A single host makes requests on behalf of all internal hosts, thus hiding their identity from the public network. NAT hides internal IP addresses by converting all internal host addresses to the address of the firewall. The firewall then retransmits the data payload of the internal host from its own address using the TCP port number to keep track of which connections on the public side map to which hosts on the private side. To the Internet, all the traffic on your network appears to be coming from one extremely busy computer.
Proxies
Application-level proxies allow you to completely disconnect the flow of network-level protocols through your firewall and restrict traffic only to higher-level protocols like HTTP, FTP, and SMTP. Application proxies don't have to run on firewalls; any server, either inside or outside your network, can perform the role of a proxy. Security proxies are even capable of performing application-level filtering for specific content. Proxies are extremely specific because they can only work for a specific application. For instance, you must have a proxy software module for HTTP, another proxy module for FTP, and another module for Telnet.
Wednesday, August 26, 2015
Node.js Architecture
Node is an open source toolkit for developing server side applications based on the V8 JavaScript engine. Like Node, V8 is written in C++ and is mostly known for being used in Google Chrome.
Node is part of the server-side JavaScript ecosystem and extends the JavaScript API to offer the usual server-side functionality. Node's base API can be extended by using the CommonJS module system.
NodeJS is divided into two main components: the core and its modules. The core is built in C and C++. It combines Google’s V8 JavaScript engine with Node’s Libuv library and protocol bindings including sockets and HTTP.
V8 Runtime Environment
Google’s V8 engine is an open-source Just In Time (JIT) compiler written in C++. In recent benchmarks, V8’s performance has surpassed other JavaScript interpreters, including SpiderMonkey and Nitro. It has additionally surpassed PHP, Ruby, and Python performance; due to Google's approach, it has even been predicted that it could approach the speed of C. The engine compiles JavaScript directly into assembly code ready for execution, avoiding intermediary representations like tokens and opcodes which are further interpreted. The runtime environment is itself divided into three major components: a compiler, an optimizer and a garbage collector.
Libuv
The C++ Libuv library is responsible for Node's asynchronous I/O operations and main event loop. It is composed of a fixed-size thread pool from which a thread is allocated for each I/O operation. By delegating these time-consuming operations to the Libuv module, the V8 engine and the remainder of NodeJS are free to continue executing other requests. Before 2012, Node relied on two separate libraries, libeio and libev, in order to provide asynchronous I/O and support the main event loop. However, libev was only supported on Unix. In order to add Windows support, the Libuv library was fashioned as an abstraction around libev.
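A minimal sketch of the non-blocking style this enables (the file path is arbitrary): the read is handed off to Libuv, and the callback runs once the data is ready while the main JavaScript thread keeps executing.

var fs = require('fs');

// The read is queued with Libuv's thread pool; JavaScript execution does not block here.
fs.readFile('/tmp/example.txt', 'utf8', function (err, data) {
    if (err) { return console.error(err); }
    console.log('file contents:', data);
});

console.log('this line prints before the file contents arrive');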
Sunday, July 26, 2015
What is Node.js?
Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
Node.js is an open source, cross-platform runtime environment for server-side and networking applications. Node.js applications are written in JavaScript and can be run within the Node.js runtime on OS X, Microsoft Windows, Linux, FreeBSD, NonStop, IBM AIX, IBM System z and IBM i. Its work is hosted and supported by the Node.js Foundation, a Collaborative Project at Linux Foundation. Node.js provides an event-driven architecture and a non-blocking I/O API that optimizes an application's throughput and scalability. These technologies are commonly used for real-time web applications.
Node.js uses the Google V8 JavaScript engine to execute code, and a large percentage of the basic modules are written in JavaScript. Node.js contains a built-in library to allow applications to act as a Web server without software such as Apache HTTP Server or IIS. Node.js is gaining adoption as a server-side platform and is used by IBM, Microsoft, Yahoo!, Walmart, Groupon, SAP, LinkedIn, Rakuten, PayPal, Voxer, and GoDaddy.
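As a sketch of that built-in capability, a minimal HTTP server needs nothing beyond the core http module (the port number is arbitrary):

var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello World\n');
}).listen(8124);

console.log('Server running at http://127.0.0.1:8124/');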
Node.js was invented in 2009 by Ryan Dahl and other developers working at Joyent, and was first published for Linux. Its development and maintenance were spearheaded by Dahl and sponsored by Joyent, the firm where he worked.
Node.js allows the creation of web servers and networking tools, using JavaScript and a collection of "modules" that handle various core functionality. Modules handle file system I/O, networking (HTTP, TCP, UDP, DNS, or TLS/SSL), binary data (buffers), cryptography functions, data streams, and other core functions. Node's modules have a simple and elegant API, reducing the complexity of writing server applications.
Frameworks can be used to accelerate the development of applications, and common frameworks are Express.js, Socket.IO and Connect. Node.js applications can run on Microsoft Windows, Unix and Mac OS X servers. Node.js applications can alternatively be written with CoffeeScript (a more readable form of JavaScript), Dart or Microsoft TypeScript (strongly typed forms of JavaScript), or any language that can compile to JavaScript.
Monday, July 13, 2015
Evolution of the Microsoft NOS
"NOS" is the term used to describe a networked environment in which various types of resources, such as user, group, and computer accounts, are stored in a central repository that is controlled and accessible to end users. Typically a NOS environment is comprised of one or more servers that provide NOS services, such as authentication and account manipulation, and multiple end users that access those services.
Microsoft's first integrated NOS environment became available in 1990 with the release of Windows NT 3.0, which combined many features of the LAN Manager protocols and OS/2 operating system. The NT NOS slowly evolved over the next eight years until Active Directory was first released in beta in 1997.
Under Windows NT, the "domain" concept was introduced, providing a way to group resources based on administrative and security boundaries. NT domains are flat structures limited to about 40,000 objects (users, groups, and computers). For large organizations, this limitation imposed superficial boundaries on the design of the domain structure. Often, domains were geographically limited as well because the replication of data between domain controllers (i.e., servers providing the NOS services to end users) performed poorly over high-latency or low-bandwidth links. Another significant problem with the NT NOS was delegation of administration, which typically tended to be an all-or-nothing matter at the domain level.
Microsoft was well aware of these limitations and needed to rearchitect their NOS model into something that would be much more scalable and flexible. For that reason, they looked to LDAP-based directory services as a possible solution.
In generic terms, a directory service is a repository of network, application, or NOS information that is useful to multiple applications or users. Under this definition, the Windows NT NOS is a type of directory service. In fact, there are many different types of directories, including Internet white pages, email systems, and even the Domain Name System (DNS). While each of these systems have characteristics of a directory service, X.500 and the Lightweight Directory Access Protocol (LDAP) define the standards for how a true directory service is implemented and accessed.
Windows NT and Active Directory both provide directory services to clients (Windows NT in a more generic sense). And while both share some common concepts, such as Security Identifiers (SIDs) to identify security principals, they are very different from a feature, scalability, and functionality point of view. Below Table contains a comparison of features between Windows NT and Active Directory.
Windows NT | Active Directory |
---|---|
Single-master replication is used, from the PDC master to the BDC subordinates. | Multimaster replication is used between all domain controllers. |
Domain is the smallest unit of partitioning. | Naming Contexts and Application Partitions are the smallest unit of partitioning. |
System policies can be used locally on machines or set at the domain level. | Group policies can be managed centrally and used by clients throughout the forest based on domain, site or OU criteria. |
Data cannot be stored hierarchically within a domain. | Data can be stored in a hierarchical manner using OUs. |
Domain is the smallest unit of security delegation and administration. | A property of an object is the smallest unit of security delegation/administration. |
NetBIOS and WINS used for name resolution. | DNS is used for name resolution. |
Object is the smallest unit of replication. | Attribute is the smallest unit of replication. In Windows Server 2003 Active Directory, some attributes replicate on a per-value basis (such as the member attribute of group objects). |
Maximum recommended database size for SAM is 40 MB. | Recommended maximum database size for Active Directory is 70 TB. |
Maximum effective number of users is 40,000 (if you accept the recommended 40 MB maximum). | The maximum number of objects is in the tens of millions. |
Four domain models (single, single-master, multimaster, complete-trust) required to solve per-domain admin-boundary and user-limit problems. | No domain models required as the complete-trust model is implemented. One-way trusts can be implemented manually. |
Schema is not extensible. | Schema is fully extensible. |
Data can only be accessed through a Microsoft API. | Supports LDAP, which is the standard protocol used by directories, applications, and clients that want to access directory data. Allows for cross-platform data access and management. |
Table: A comparison between Windows NT and Active Directory
MS SQL Server 2016
SQL Server 2016 delivers breakthrough mission-critical capabilities with in-memory performance and operational analytics built in. Comprehensive security features like the new Always Encrypted technology help protect your data at rest and in motion, and a world-class high availability and disaster recovery solution adds new enhancements to AlwaysOn technology.
Benefits:
- Enhanced in-memory performance provides up to 30x faster transactions, more than 100x faster queries than disk-based relational databases, and real-time operational analytics
- New Always Encrypted technology helps protect your data at rest and in motion, on-premises and in the cloud, with master keys sitting with the application, without application changes
- Built-in advanced analytics provide the scalability and performance benefits of building and running your advanced analytics algorithms directly in the core SQL Server transactional database
- Business insights through rich visualizations on mobile devices with native apps for Windows, iOS and Android
- Simplify management of relational and non-relational data with ability to query both through standard T-SQL using PolyBase technology
- Stretch Database technology keeps more of your customer’s historical data at your fingertips by transparently stretching your warm and cold OLTP data to Microsoft Azure in a secure manner without application changes
- Faster hybrid backups, high availability and disaster recovery scenarios to backup and restore your on-premises databases to Microsoft Azure and place your SQL Server AlwaysOn secondaries in Azure
Always Encrypted
Always Encrypted, based on technology from Microsoft Research, protects data at rest and in motion. With Always Encrypted, SQL Server can perform operations on encrypted data and, best of all, the encryption key resides with the application in the customer's trusted environment. Encryption and decryption of data happens transparently inside the application, which minimizes the changes that have to be made to existing applications.
Stretch Database
This new technology allows you to dynamically stretch your warm and cold transactional data to Microsoft Azure, so your operational data is always at hand, no matter the size, and you benefit from the low cost of Azure. You can use Always Encrypted with Stretch Database to extend data in a more secure manner for greater peace of mind.
Real-time Operational Analytics & In-Memory OLTP
For In-Memory OLTP, which customers today are using for up to 30x faster transactions, you will now be able to apply this tuned transaction performance technology to a significantly greater number of applications and benefit from increased concurrency. With these enhancements, we introduce the unique capability to use our in-memory columnstore delivering 100X faster queries on top of in-memory OLTP to provide real-time operational analytics while accelerating transaction performance.
Additional capabilities in SQL Server 2016 CTP2 include:
- PolyBase – More easily manage relational and non-relational data with the simplicity of T-SQL.
- AlwaysOn Enhancements – Achieve even higher availability and performance of your secondaries, with up to 3 synchronous replicas, DTC support and round-robin load balancing of the secondaries.
- Row Level Security – Enables customers to control access to data based on the characteristics of the user. Security is implemented inside the database, requiring no modifications to the application.
- Dynamic Data Masking – Supports real-time obfuscation of data so data requesters do not get access to unauthorized data. Helps protect sensitive data even when it is not encrypted.
- Native JSON support – Allows easy parsing and storing of JSON and exporting relational data to JSON.
- Temporal Database support – Tracks historical data changes with temporal database support.
- Query Data Store – Acts as a flight data recorder for a database, giving full history of query execution so DBAs can pinpoint expensive/regressed queries and tune query performance.
- MDS enhancements – Offer enhanced server management capabilities for Master Data Services.
- Enhanced hybrid backup to Azure – Enables faster backups to Microsoft Azure and faster restores to SQL Server in Azure Virtual Machines. Also, you can stage backups on-premises prior to uploading to Azure.
Advanced Message Queuing Protocol (AMQP)
The Advanced Message Queuing Protocol (AMQP) is an open standard that defines a protocol for systems to exchange messages. AMQP defines not only the interaction that happens between a consumer/producer and a broker, but also the over-the-wire representation of the messages and commands that are being exchanged. Since it specifies the wire format for messages, AMQP is truly interoperable - nothing is left to the interpretation of a particular vendor or hosting platform.
AMQP originated in 2003 with John O'Hara at JPMorgan Chase in London, UK. From the beginning AMQP was conceived as a co-operative open effort. Initial development was by JPMorgan Chase. AMQP is a binary, application layer protocol, designed to efficiently support a wide variety of messaging applications and communication patterns. It provides flow controlled, message-oriented communication with message-delivery guarantees such as at-most-once (where each message is delivered once or never), at-least-once (where each message is certain to be delivered, but may do so multiple times) and exactly-once (where the message will always certainly arrive and do so only once), and authentication and/or encryption based on SASL and/or TLS. It assumes an underlying reliable transport layer protocol such as Transmission Control Protocol (TCP).
List of core concepts of AMQP (a short client sketch tying them together follows the list):
- Broker: This is a middleware application that can receive messages produced by publishers and deliver them to consumers or to another broker.
- Virtual host: This is a virtual division in a broker that allows the segregation of publishers, consumers, and all the AMQP constructs they depend upon, usually for security reasons (such as multitenancy).
- Connection: This is a physical network (TCP) connection between a publisher/consumer and a broker. The connection only closes on client disconnection or in the case of a network or broker failure.
- Channel: This is a logical connection between a publisher/consumer and a broker. Multiple channels can be established within a single connection. Channels allow the isolation of the interaction between a particular client and broker so that they don't interfere with each other. This happens without opening costly individual TCP connections. A channel can close when a protocol error occurs.
- Exchange: This is the initial destination for all published messages and the entity in charge of applying routing rules for these messages to reach their destinations. Routing rules include the following: direct (point-to-point), topic (publish-subscribe) and fanout (multicast).
- Queue: This is the final destination for messages ready to be consumed. A single message can be copied and can reach multiple queues if the exchange's routing rule says so.
- Binding: This is a virtual connection between an exchange and a queue that enables messages to flow from the former to the latter. A routing key can be associated with a binding in relation to the exchange routing rule.
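As a hedged sketch of how these pieces fit together, here is a small example using the amqplib Node.js client (the exchange, queue, and routing-key names are invented): it opens a connection and a channel, declares a direct exchange and a queue, binds them, publishes a message, and consumes it.

var amqp = require('amqplib/callback_api');

amqp.connect('amqp://localhost', function (err, conn) {              // connection (TCP)
    if (err) { return console.error(err); }
    conn.createChannel(function (err, ch) {                          // channel (logical connection)
        if (err) { return console.error(err); }
        ch.assertExchange('orders', 'direct', { durable: false });   // exchange with a direct routing rule
        ch.assertQueue('order-emails', { durable: false }, function (err, q) {
            if (err) { return console.error(err); }
            ch.bindQueue(q.queue, 'orders', 'order.created');        // binding with a routing key
            ch.consume(q.queue, function (msg) {                     // consumer reading from the queue
                console.log('received:', msg.content.toString());
            }, { noAck: true });
            ch.publish('orders', 'order.created',                    // publisher targets the exchange
                Buffer.from('order #42'));
        });
    });
});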
Comparison of some main features between AMQP and another protocols:
- Java Message Service (JMS): Unlike AMQP, this only defines a programming interface for Java, not a wire-level protocol for messages. As such, JMS is not interoperable and only works when compatible clients and brokers are used. Moreover, unlike AMQP, it does not define the commands necessary to completely configure messaging routes, leaving too much room for vendor-specific approaches. Finally, in JMS, message producers target a particular destination (queue or topic), meaning the clients need to know about the target topology. In AMQP, the routing logic is encapsulated in exchanges, sparing the publishers from this knowledge.
- MQ Telemetry Transport (MQTT): This is an extremely lightweight message-queuing protocol. MQTT focuses only on the publish-subscribe model. Like AMQP, it is interoperable and is very well suited for massive deployments in embedded systems. Like AMQP, it relies on a broker for subscription management and message routing. RabbitMQ can speak the MQTT protocol—thanks to an extension.
- ØMQ (also known as ZeroMQ): This offers messaging semantics without the need for a centralized broker (but without the persistence and delivery guarantees that a broker provides). At its core, it is an interoperable networking library. Implemented in many languages, it's a tool of choice for the construction of high-performance and highly-available distributed systems.
- Process inboxes: Programming languages and platforms such as Erlang or Akka offer messaging semantics too. They rely on a clustering technology to distribute messages between processes or actors. Since they are embedded in the hosting applications, they are not designed for interoperability.
Wednesday, May 20, 2015
Comparison - Elasticsearch vs MongoDB
 | Elasticsearch | MongoDB |
---|---|---|
Description | A modern enterprise search engine based on Apache Lucene | One of the most popular document stores |
DB-Engines Ranking | Rank - 14, Score - 64.83 | Rank - 4, Score - 277.32 |
Database model | Search engine | Document store |
Developer | Elastic | MongoDB, Inc |
Initial release | 2010 | 2009 |
License | Open Source | Open Source |
Database as a Service | No | No |
Implementation language | Java | C++ |
Server operating systems | All OS with a Java VM | Linux, OS X, Solaris, Windows |
Data scheme | schema-free | schema-free |
APIs and other access methods | Java API, RESTful HTTP/JSON API | proprietary protocol using JSON |
Server-side scripts | No | JavaScript |
Triggers | Yes | No |
Partitioning methods | Sharding | Sharding |
Replication methods | Yes | Master-slave replication |
MapReduce | No | Yes |
Consistency concepts | Eventual Consistency | Eventual Consistency, Immediate Consistency |
Foreign keys | No | No |
Transaction concepts | No | No |
Durability | Yes | Yes |
Tuesday, May 19, 2015
NoSQL Database
NoSQL encompasses a wide variety of different database technologies that were developed in response to a rise in the volume of data stored about users, objects and products, the frequency in which this data is accessed, and performance and processing needs. Relational databases, on the other hand, were not designed to cope with the scale and agility challenges that face modern applications, nor were they built to take advantage of the cheap storage and processing power available today.
NoSQL DEFINITION: Next Generation Databases mostly addressing some of the points: being non-relational, distributed, open-source and horizontally scalable.
A NoSQL (often interpreted as Not only SQL) database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, horizontal scaling, and finer control over availability. The data structures used by NoSQL databases (e.g. key-value, graph, or document) differ from those used in relational databases, making some operations faster in NoSQL and others faster in relational databases. The particular suitability of a given NoSQL database depends on the problem it must solve.
NoSQL Database Types
- Document databases pair each key with a complex data structure known as a document. Documents can contain many different key-value pairs, or key-array pairs, or even nested documents (a small sketch follows this list).
- Graph stores are used to store information about networks, such as social connections. Graph stores include Neo4J and HyperGraphDB.
- Key-value stores are the simplest NoSQL databases. Every single item in the database is stored as an attribute name (or "key"), together with its value. Examples of key-value stores are Riak and Voldemort. Some key-value stores, such as Redis, allow each value to have a type, such as "integer", which adds functionality.
- Wide-column stores such as Cassandra and HBase are optimized for queries over large datasets, and store columns of data together, instead of rows.
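A hedged sketch of what such a document might look like (the fields are invented for illustration):

var userDocument = {
    _id: "u1001",
    name: "Ada Lovelace",
    roles: ["admin", "author"],        // a key-array pair
    address: {                         // a nested document
        city: "London",
        country: "UK"
    }
};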
The Benefits of NoSQL
When compared to relational databases, NoSQL databases are more scalable and provide superior performance, and their data model addresses several issues that the relational model is not designed to address:
- Large volumes of structured, semi-structured, and unstructured data
- Agile sprints, quick iteration, and frequent code pushes
- Object-oriented programming that is easy to use and flexible
- Efficient, scale-out architecture instead of expensive, monolithic architecture
Saturday, April 25, 2015
Elasticsearch
Elasticsearch is a great open source search engine built on top of Apache Lucene. Its features and upgrades allow it to basically function just like a schema-less JSON datastore that can be accessed using both search-specific methods and regular database CRUD-like commands. It is a real-time distributed search and analytics engine. It allows you to explore your data at a speed and at a scale never before possible. It is used for full-text search, structured search, analytics, and all three in combination:
- Wikipedia uses Elasticsearch to provide full-text search with highlighted search snippets, and search-as-you-type and did-you-mean suggestions.
- The Guardian uses Elasticsearch to combine visitor logs with social-network data to provide real-time feedback to its editors about the public’s response to new articles.
- Stack Overflow combines full-text search with geolocation queries and uses more-like-this to find related questions and answers.
- GitHub uses Elasticsearch to query 130 billion lines of code.
Elasticsearch is much more than just Lucene and much more than “just” full-text search. It can also be described as follows:
- A distributed real-time document store where every field is indexed and searchable
- A distributed search engine with real-time analytics
- Capable of scaling to hundreds of servers and petabytes of structured and unstructured data
And it packages up all this functionality into a standalone server that your application can talk to via a simple RESTful API, using a web client from your favorite programming language, or even from the command line.
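As a hedged sketch of talking to that RESTful API from Node.js (the index and field names are invented), a full-text match query posted to the _search endpoint might look like this:

var http = require('http');

var query = JSON.stringify({
    query: { match: { title: 'distributed search' } }   // full-text match query
});

var req = http.request({
    host: 'localhost',
    port: 9200,                        // Elasticsearch's default HTTP port
    path: '/articles/_search',         // hypothetical index name
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }
}, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        console.log(JSON.parse(body).hits.total + ' matching documents');
    });
});

req.write(query);
req.end();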
Elasticsearch can be used to search all kinds of documents. It provides scalable search, has near real-time search, and supports multitenancy. Elasticsearch is distributed, which means that indices can be divided into shards and each shard can have zero or more replicas. Each node hosts one or more shards, and acts as a coordinator to delegate operations to the correct shard(s). Rebalancing and routing are done automatically.
Another feature is called "gateway" and handles the long term persistence of the index; for example, an index can be recovered from the gateway in a case of a server crash. Elasticsearch supports real-time GET requests, which makes it suitable as a NoSQL solution, but it lacks distributed transactions.
website: https://www.elastic.co/products/elasticsearch
HTML5 Canvas
Canvas element
The canvas element is part of HTML5 and allows for dynamic, scriptable rendering of 2D shapes and bitmap images. It is a low level, procedural model that updates a bitmap and does not have a built-in scene graph. Canvas was initially introduced by Apple for use inside their own Mac OS X WebKit component in 2004, powering applications like Dashboard widgets and the Safari browser. Later, in 2005 it was adopted in version 1.8 of Gecko browsers, and Opera in 2006, and standardized by the Web Hypertext Application Technology Working Group (WHATWG) on new proposed specifications for next generation web technologies.
Canvas consists of a drawable region defined in HTML code with height and width attributes. JavaScript code may access the area through a full set of drawing functions similar to those of other common 2D APIs, thus allowing for dynamically generated graphics. Some anticipated uses of canvas include building graphs, animations, games, and image composition.
What is HTML Canvas?
The HTML <canvas> element is used to draw graphics, on the fly, via scripting (usually JavaScript).
The <canvas> element is only a container for graphics; it has no drawing abilities of its own. You must use a script to actually draw the graphics.
The getContext() method returns an object that provides methods and properties for drawing on the canvas. Canvas has several methods for drawing paths, boxes, circles, text, and adding images. A canvas is a rectangular area on an HTML page. By default, a canvas has no border and no content.
Internet Explorer 9, Firefox, Opera, Chrome, and Safari support <canvas> and its properties and methods.
Canvas vs SVG (Scalable Vector Graphics)
SVG is an earlier standard for drawing shapes in browsers. However, unlike canvas, which is raster-based, SVG is vector-based, i.e., each drawn shape is remembered as an object in a scene graph or Document Object Model, which is subsequently rendered to a bitmap. This means that if attributes of an SVG object are changed, the browser can automatically re-render the scene.
How to use Canvas
The following code creates a Canvas element in an HTML page:
<canvas id="canvasSample" width="200" height="200"> This text is displayed if your browser does not support HTML5 Canvas. </canvas>Using JavaScript, we can draw on the canvas:
var element = document.getElementById('canvasSample'); var context = element.getContext('2d'); context.fillStyle = 'red'; context.fillRect(30, 30, 50, 50);
Sunday, March 15, 2015
What's New in Visual Studio 2015
The following are the new features of Visual Studio 2015:
UI debugging tools for XAML
Two new tools, the Live Visual Tree and the Live Property Explorer, can be used to inspect the visual tree of your running WPF application, as well as the properties of any element in the tree. In short, these tools allow you to select any element in your running app and show the final, computed and rendered properties.
Single sign-in
The new release reduces the authentication prompts required to access many integrated cloud services in Visual Studio. Now, when you authenticate to the first cloud service in Visual Studio, you will automatically be signed in, or see fewer authentication prompts, for the other integrated cloud services.
CodeLens
With CodeLens, you can find out more about your code while staying focused on your work in the editor. You can now see the history of C++, SQL, or JavaScript files versioned in Git repositories by using CodeLens file-level indicators. When working with source control in Git and work items in TFS, you can also get information about the work items associated with C++, SQL, or JavaScript files by using CodeLens file-level work item indicators.
Code Maps
When you need to understand specific dependencies in your code, visualize them by creating Code Maps. You can then navigate these relationships by using the map, which appears next to the code.
Diagnostics Tools
The Diagnostic Tools debugger window has the following improvements:
1. Supports 64-bit Windows Store apps
2. The timeline zooms as necessary so the most recent break event is always visible
Exception Settings
You can configure debugger exception settings by using the Exception Settings tool window. The new window is non-modal and includes improved performance, search, and filter capabilities.
JavaScript Editor
The JavaScript editor now provides you with IntelliSense suggestions when passing an object literal to functions documented using JSDoc. You can use the Task List feature to review task comments, such as // TODO, in your JavaScript code.
Unit Tests
The Visual Studio 2015 preview introduces Smart Unit Tests, which explore your .NET code to generate test data and a suite of unit tests. Smart Unit Tests also provides an API that you can use to guide test data generation, specify correctness properties of the code under test, and direct the exploration of the code under test.
XAML Language Service
The XAML language service has been rebuilt on top of .NET Compiler Platform so that it can provide you with a fast, reliable, and modern XAML editing experience that includes IntelliSense.
Timeline Tool
The new Timeline tool provides you with a scenario-centric view of the resources that your applications consume, which you can use to inspect, diagnose, and improve the performance of your WPF and Windows Store 8.1 applications.
What's New in .NET Framework 4.6
The following are the new features of the .NET Framework 4.6:
64-bit JIT Compiler for managed code
The .NET Framework 4.6 features a new version of the 64-bit JIT compiler. This compiler provides significant performance improvements over the existing 64-bit JIT compiler.
Changes in Base Class Library
Many new APIs have been added to this release of the .NET Framework to enable key scenarios, especially for cross-platform development. The CultureInfo.CurrentCulture and CultureInfo.CurrentUICulture properties are now read-write rather than read-only. Additional collections, such as System.Collections.Generic.Queue<T> and System.Collections.Generic.Stack<T>, now implement IReadOnlyCollection<T>.
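For example (a minimal sketch assuming the .NET Framework 4.6), the current culture can now be assigned directly instead of going through the current thread:

using System;
using System.Globalization;

class CultureDemo
{
    static void Main()
    {
        // Settable in .NET Framework 4.6; earlier versions required
        // Thread.CurrentThread.CurrentCulture instead.
        CultureInfo.CurrentCulture = new CultureInfo("fr-FR");
        Console.WriteLine(DateTime.Now.ToString("D")); // date formatted with the French culture
    }
}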
Resizing in Windows Forms controls
The .NET Framework 4.6 expands support for resizing Windows Forms controls. This is an opt-in feature.
To enable it, set the EnableWindowsFormsHighDpiAutoResizing element to true in the application configuration (app.config) file.
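A sketch of what that opt-in looks like in app.config, based on the documented setting name:

<configuration>
  <appSettings>
    <add key="EnableWindowsFormsHighDpiAutoResizing" value="true" />
  </appSettings>
</configuration>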
Support for code page encodings
.NET Core primarily supports Unicode encodings and, by default, provides only limited support for code page encodings. You can add support for code page encodings that are available in the .NET Framework but unsupported in .NET Core by registering them with the Encoding.RegisterProvider method.
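A short sketch of registering the provider and then requesting a code page encoding (the code page number is just an example):

using System;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        // Makes the code page encodings available to code that otherwise
        // only sees the default Unicode encodings.
        Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

        Encoding win1252 = Encoding.GetEncoding(1252); // Windows-1252
        Console.WriteLine(win1252.EncodingName);
    }
}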
Open-source .NET Framework packages
Some great .NET packages, such as the Immutable Collections and the SIMD APIs, are now available as open source on GitHub. The GitHub repository contains the foundation libraries that make up the .NET Core development stack.
Improvements to event tracing
An EventSource object can now be constructed directly and you can call one of the Write() methods to emit a self-describing event.
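A minimal sketch of this (the source and event names below are made up for illustration):

using System.Diagnostics.Tracing;

class TracingDemo
{
    static void Main()
    {
        // New in .NET 4.6: EventSource can be constructed directly...
        using (var eventSource = new EventSource("Demo-Company-App"))
        {
            // ...and Write() emits a self-describing event with an anonymous payload.
            eventSource.Write("RequestStarted", new { Url = "/home", Milliseconds = 42 });
        }
    }
}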
.NET Native
.NET Native is a pre-compilation technology for building and deploying Windows Store apps. It compiles apps that are written in managed code (such as C#) and that target the .NET Framework to native code. It is quite different from both Just-In-Time (JIT) compilation and the Native Image Generator (NGEN).
Friday, February 27, 2015
RASPBERRY PI
Did you know?
The Raspberry Pi sprang out of a desire by colleagues at the University of Cambridge’s Computer Laboratory to see a return to the days of kids programming inexpensive computers. The rise of expensive PCs and games consoles put paid to the BBC B, Spectrum and C64 generation of home programmers, leading to applicants for computer studies courses lacking the necessary skills.
The basic concepts behind the Raspberry Pi were to make it as affordable as possible, supply only the basics and provide it with a programming environment and hardware connections for electronics projects. The Pi is a series of credit card-sized single-board computers which run a modified version of Linux called Raspbian, with Wheezy Raspbian being the preferred option for newcomers to the device. Raspbian runs directly from an SD card and provides a command-line interface for using the operating system.
The original Raspberry Pi and Raspberry Pi 2 are manufactured in several board configurations through licensed manufacturing agreements with Newark element14 (Premier Farnell), RS Components and Egoman.
Fig: Raspberry Pi (Model B)
Programming the Pi
There are two programming languages supplied by default with the Pi: Scratch v1.4 and Python. They both have shortcut icons on the graphical desktop. Scratch was designed by the Lifelong Kindergarten Group at the MIT Media Lab as a first step in programming. It uses tile-based language commands that can be strung together without having to worry about syntax. Commands are split up into easy-to-understand groups and dragged onto the scripting area. Sounds, graphics and animation can be added. Projects can be saved or uploaded to the Scratch website and shared for remixing by other users.
The other language is Python v3.2.3, which starts with a Python Shell. Python is an interpreted language, where commands are read and executed one line at a time. Its high-level commands and data structures make it ideal for scripting. The previous version of Python is still quite popular and has more development aids, so it is also supplied. The icons for these are IDLE 3 and IDLE, Python's integrated development environment.
The Pi Store
www.store.raspberrypi.com/projects
The Pi Store isn’t just home to apps and tools to help you use your Raspberry Pi. There’s also a selection of home-grown video games, developed by the Raspberry Pi community, to get your teeth into.
Monday, February 9, 2015
CQRS (Command Query Responsibility Segregation) Pattern
Command–query separation (CQS) is a principle of imperative computer programming. It was devised by Bertrand Meyer. It states that every method should either be a command that performs an action, or a query that returns data to the caller, but not both. In other words, methods should return a value only if they are referentially transparent and hence possess no side effects.
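As a rough illustration of the principle (the shopping-cart type below is hypothetical), a command changes state and returns nothing, while a query returns data and has no side effects:

using System.Collections.Generic;

public class ShoppingCart
{
    private readonly List<string> _items = new List<string>();

    // Command: performs an action, returns nothing.
    public void AddItem(string item)
    {
        _items.Add(item);
    }

    // Query: returns data, does not modify state.
    public int GetItemCount()
    {
        return _items.Count;
    }
}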
Command Query Responsibility Segregation (CQRS) applies the CQS principle by using separate Query and Command objects to retrieve and modify data, respectively. The CQRS pattern segregates operations that read data from operations that update data by using separate interfaces. This pattern can maximize performance, scalability, and security; support evolution of the system over time through higher flexibility; and prevent update commands from causing merge conflicts at the domain level.
CQRS is a pattern that segregates the operations that read data (Queries) from the operations that update data (Commands) by using separate interfaces. This implies that the data models used for querying and updates are different. The models can then be isolated, although this is not an absolute requirement.
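A minimal sketch of that separation in C#, using hypothetical order types (the names below are illustrative, not from any library):

using System;

// Read side: returns data only.
public interface IOrderQueries
{
    OrderSummary GetOrderSummary(Guid orderId);
}

// Write side: accepts commands that change state.
public interface IOrderCommands
{
    void PlaceOrder(Guid orderId, string customerName);
    void CancelOrder(Guid orderId);
}

// A simple DTO shaped for what the query side needs to display.
public class OrderSummary
{
    public Guid OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}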
Compared to the single model of the data that is inherent in CRUD-based systems, the use of separate query and update models for the data in CQRS-based systems considerably simplifies design and implementation.
The query model for reading data and the update model for writing data may access the same physical store, perhaps by using SQL views or by generating projections on the fly. However, it is common to separate the data into different physical stores to maximize performance, scalability, and security.
The read store can be a read-only replica of the write store, or the read and write stores may have a different structure altogether. Using multiple read-only replicas of the read store can considerably increase query performance and application UI responsiveness, especially in distributed scenarios where read-only replicas are located close to the application instances. Some database systems, such as SQL Server, provide additional features such as failover replicas to maximize availability. Separation of the read and write stores also allows each to be scaled appropriately to match the load.
When to Use this Pattern
This pattern is ideally suited to:
- Collaborative domains where multiple operations are performed in parallel on the same data.
- Use with task-based user interfaces (where users are guided through a complex process as a series of steps), with complex domain models, and for teams already familiar with domain-driven design (DDD) techniques.
- Scenarios where performance of data reads must be fine-tuned separately from performance of data writes, especially when the read/write ratio is very high, and when horizontal scaling is required
- Scenarios where one team of developers can focus on the complex domain model that is part of the write model, and another less experienced team can focus on the read model and the user interfaces.
- Scenarios where the system is expected to evolve over time and may contain multiple versions of the model, or where business rules change regularly.
- Integration with other systems, especially in combination with Event Sourcing, where the temporal failure of one subsystem should not affect the availability of the others.
However, this pattern is not recommended in the following situations:
- Where the domain or the business rules are simple.
- Where a simple CRUD-style user interface and the related data access operations are sufficient.
- For implementation across the whole system. There are specific components of an overall data management scenario where CQRS can be useful, but it can add considerable and often unnecessary complexity where it is not actually required.
Event Sourcing and CQRS
The CQRS pattern is often used in conjunction with the Event Sourcing pattern. CQRS-based systems use separate read and write data models, each tailored to relevant tasks and often located in physically separate stores. When used with Event Sourcing, the store of events is the write model, and this is the authoritative source of information. The read model of a CQRS-based system provides materialized views of the data, typically as highly denormalized views. These views are tailored to the interfaces and display requirements of the application, which helps to maximize both display and query performance.
Using the stream of events as the write store, rather than the actual data at a point in time, avoids update conflicts on a single aggregate and maximizes performance and scalability. The events can be used to asynchronously generate materialized views of the data that are used to populate the read store.
Because the event store is the authoritative source of information, it is possible to delete the materialized views and replay all past events to create a new representation of the current state when the system evolves, or when the read model must change. The materialized views are effectively a durable read-only cache of the data.
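A self-contained sketch of that idea in C# (the order events and summary type below are hypothetical): deleting the materialized view and replaying the event stream reproduces the current read model.

using System;
using System.Collections.Generic;

// Hypothetical events held in the write store.
public abstract class OrderEvent { public Guid OrderId { get; set; } }
public class OrderPlaced : OrderEvent { public decimal Amount { get; set; } }
public class OrderCancelled : OrderEvent { }

// Hypothetical denormalized entry in the read model.
public class OrderSummary { public decimal Total { get; set; } public bool Cancelled { get; set; } }

public class OrderSummaryView
{
    private readonly Dictionary<Guid, OrderSummary> _view = new Dictionary<Guid, OrderSummary>();

    // Clearing the view and replaying every past event rebuilds the read model.
    public void Rebuild(IEnumerable<OrderEvent> eventStream)
    {
        _view.Clear();
        foreach (var e in eventStream)
        {
            Apply(e);
        }
    }

    private void Apply(OrderEvent e)
    {
        var placed = e as OrderPlaced;
        if (placed != null)
        {
            _view[placed.OrderId] = new OrderSummary { Total = placed.Amount };
            return;
        }

        var cancelled = e as OrderCancelled;
        if (cancelled != null)
        {
            OrderSummary summary;
            if (_view.TryGetValue(cancelled.OrderId, out summary))
            {
                summary.Cancelled = true;
            }
        }
    }
}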
Friday, January 30, 2015
Codenamed "Threshold" : Windows 10
Threshold was the codename for an update to Windows 8, described as a "wave of operating systems" across multiple Microsoft platforms and services and scheduled for release around the second quarter of 2015. The goals for Threshold were to create a unified application platform and development toolkit for Windows, Windows Phone and Xbox One (which all use a similar Windows NT kernel). It was speculated that Threshold would be branded as "Windows 9", but it was officially unveiled during a media event on September 30, 2014, under the name "Windows 10".
- Windows 10 designed with enterprise customers in mind
- One converged Windows platform
- Designed for the way you live and work
- Helping protect against modern security threats
- Managed for continuous innovation
- Windows 10 helps keep customers secure and up to date
- Enabling management for every scenario
- An app store that is open for business
Windows 10 Features:
User interface and desktop
Windows 10's user interface changes its behavior depending on the type of device and available inputs, and provides transitions between interface modes on convertible laptops and tablets with docking keyboards.
Online services
Windows 10 incorporates Microsoft's intelligent personal assistant Cortana, as introduced on Windows Phone. It is implemented as a search box located next to the Start button.
Multimedia and gaming
Windows 10 will provide heavier integration with the Xbox ecosystem: an updated Xbox app succeeds Xbox SmartGlass on Windows, allowing users to browse their game library (including both PC and Xbox console games), and Game DVR is also available using a keyboard shortcut, allowing users to save the last 30 seconds of gameplay as a video that can be shared to Xbox Live, OneDrive, or elsewhere.
Update system
Windows 10 will be serviced in a significantly different manner from previous releases of Windows. By default, users receive critical updates, security patches and non-critical updates to the operating system and its functionality as they are released, but can optionally opt out of or delay the installation of non-critical updates.
Other cool stuff:
- Start Menu is Back
- New Task View
- Snap Assist helps you snap windows
- The Command Prompt Got Way More Than Just Ctrl-V
- Notifications Are Getting a Makeover
- Pin the Recycle Bin to The Start Menu and Taskbar
- Explorer Has a New "Home" With Your Most-Used Folder Locations
- Keyboard Shortcuts Make Virtual Desktops Super Easy to Use
- File History is Now Its Own Tab in Properties
- Windows 10 Universal apps now float in windows on the desktop