Tuesday, October 20, 2015

Real-time Transport Protocol

The Real-time Transport Protocol (RTP) is a network protocol for delivering audio and video over IP networks. RTP is used extensively in communication and entertainment systems that involve streaming media, such as telephony, video teleconference applications, television services and web-based push-to-talk features.
RTP is used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the media streams (e.g., audio and video), RTCP is used to monitor transmission statistics and quality of service (QoS) and aids synchronization of multiple streams. RTP is one of the technical foundations of Voice over IP and in this context is often used in conjunction with a signaling protocol such as the Session Initiation Protocol (SIP) which establishes connections across the network.
RTP was developed by the Audio-Video Transport Working Group of the Internet Engineering Task Force (IETF) and first published in 1996 as RFC 1889, superseded by RFC 3550 in 2003.
RTP combines its data transport with a control protocol (RTCP), which makes it possible to monitor data delivery for large multicast networks. Typically, RTP runs on top of UDP, although the specification is general enough to support other transport protocols. An RTP session is established for each multimedia stream. A session consists of an IP address with a pair of ports for RTP and RTCP. For example, audio and video streams use separate RTP sessions, enabling a receiver to deselect a particular stream.
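To make the wire format concrete, here is a minimal TypeScript (Node.js) sketch of parsing the 12-byte fixed RTP header defined by RFC 3550. The function name is an illustrative assumption; a receiver would typically feed it datagrams read from the session's RTP port.

```typescript
// Minimal sketch: parsing the 12-byte fixed RTP header (RFC 3550).
// Field layout: V(2) P(1) X(1) CC(4) | M(1) PT(7) | sequence(16) | timestamp(32) | SSRC(32)

interface RtpHeader {
  version: number;        // always 2 for RFC 3550
  padding: boolean;
  extension: boolean;
  csrcCount: number;
  marker: boolean;
  payloadType: number;    // e.g., 0 = PCMU audio
  sequenceNumber: number; // increments per packet; lets the receiver detect loss
  timestamp: number;      // sampling instant; used for playout timing and sync
  ssrc: number;           // identifies the media source
}

function parseRtpHeader(packet: Buffer): RtpHeader {
  if (packet.length < 12) throw new Error("packet too short for an RTP header");
  const b0 = packet.readUInt8(0);
  const b1 = packet.readUInt8(1);
  return {
    version: b0 >> 6,
    padding: (b0 & 0x20) !== 0,
    extension: (b0 & 0x10) !== 0,
    csrcCount: b0 & 0x0f,
    marker: (b1 & 0x80) !== 0,
    payloadType: b1 & 0x7f,
    sequenceNumber: packet.readUInt16BE(2),
    timestamp: packet.readUInt32BE(4),
    ssrc: packet.readUInt32BE(8),
  };
}
```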

Wednesday, September 30, 2015

Mobile Backend as a Service (MBaaS)

Mobile Backend as a Service (MBaaS), also known as "backend as a service" (BaaS), is a model for providing web and mobile app developers with a way to link their applications to backend cloud storage and APIs exposed by backend applications, while also providing features such as user management, push notifications, and integration with social networking services. These services are provided via custom software development kits (SDKs) and application programming interfaces (APIs). Backend as a Service is also referred to as “turn-on infrastructure” for mobile and web apps.
BaaS providers tend to fall into one of two categories: consumer BaaS or enterprise BaaS. The former focuses largely on “lighter-weight” brand apps (and games), whereas the latter centers on mobilizing sensitive, business-critical data from enterprise systems. As a whole, BaaS providers are disrupting the on-premises “Mobile Enterprise Application Platform” (MEAP) category, while providing far more turnkey functionality for a mobile strategy than traditional API management and Platform as a Service vendors.
Web and mobile apps require a similar set of features on the backend, including push notifications, integration with social networks, and cloud storage. Each of these services has its own API that must be individually incorporated into an app, a process that can be time-consuming and complicated for app developers. BaaS providers form a bridge between the frontend of an application and various cloud-based backends via a unified API and SDK.
Providing a consistent way to manage backend data means that developers do not need to redevelop their own backend for each of the services that their apps need to access, potentially saving both time and money.
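As an illustration of what such a unified surface looks like, here is a hedged TypeScript sketch. The BackendClient interface and every method on it are hypothetical, not any particular vendor's SDK; the point is that one SDK call replaces a separate hand-rolled integration for each of auth, storage, and push.

```typescript
// Illustrative only: "BackendClient" and its methods are hypothetical,
// not a real BaaS vendor's API. One unified SDK stands in for three
// separate backend integrations.

interface BackendClient {
  signIn(provider: "facebook" | "google", token: string): Promise<{ userId: string }>;
  save(collection: string, doc: object): Promise<{ id: string }>;
  push(userId: string, message: string): Promise<void>;
}

async function onScoreSubmitted(client: BackendClient, fbToken: string, score: number) {
  const { userId } = await client.signIn("facebook", fbToken); // social login
  await client.save("scores", { userId, score });              // cloud storage
  await client.push(userId, `Score ${score} saved!`);          // push notification
}
```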
Although similar to other cloud-computing developer tools, such as software as a service (SaaS), infrastructure as a service (IaaS), and platform as a service (PaaS), BaaS is distinct from these other services in that it specifically addresses the cloud-computing needs of web and mobile app developers by providing a unified means of connecting their apps to cloud services.

Thursday, September 10, 2015

Compress LOB data in the database

SQL Server stores the data in regular B-Tree indexes in three different sets of data pages called allocation units. The main data row structure and fixed-length data are stored in IN-ROW data pages. Variable-length data greater than 8,000 bytes in size is stored in LOB (large object) pages. Such data includes (max) columns, XML, CLR UDTs, and a few other data types. Finally, variable-length data that does not exceed 8,000 bytes is stored either in IN-ROW data pages, when it fits into the page, or in ROW-OVERFLOW data pages.

The Enterprise Edition of SQL Server allows you to reduce the size of the data by implementing data compression. However, data compression applies to IN-ROW data only; it does not compress ROW-OVERFLOW and LOB data. Any large objects that do not fit into IN-ROW data pages remain uncompressed.

One approach to address this overhead is to compress LOB data manually in code. You can create methods to compress and decompress data using one of the classes from the System.IO.Compression namespace, such as GZipStream or DeflateStream. Those methods could also be implemented in CLR stored procedures and used directly in T-SQL code. The drawback to this approach is that compression is CPU intensive, so it is better to run such code on the client whenever possible. The second important consideration is performance: decompression adds overhead, which you would like to avoid at a large scale.
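Below is a minimal sketch of the client-side variant of this approach in TypeScript (Node.js), using the built-in zlib module in place of GZipStream. The table and column names in the comment are assumptions for illustration only.

```typescript
import { gzipSync, gunzipSync } from "zlib";

// Minimal sketch of client-side LOB compression. It mirrors the
// GZipStream/DeflateStream idea described above, but runs on the client
// so the database server does not pay the CPU cost.
// Table/column names ("Documents", "Body") are assumptions for illustration.

function compressLob(text: string): Buffer {
  return gzipSync(Buffer.from(text, "utf8")); // store the result in varbinary(max)
}

function decompressLob(blob: Buffer): string {
  return gunzipSync(blob).toString("utf8");
}

// e.g., with a parameterized statement:
//   INSERT INTO Documents (Body) VALUES (@compressedBody)
// passing compressLob(document) as @compressedBody.
```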

Compressing LOB data in the database can significantly reduce the database size in a large number of cases. However, it adds the overhead of compressing and decompressing data. In some cases, that overhead is easily offset by the smaller data size, reduced I/O, and lower buffer pool usage.

Monday, August 31, 2015

Understanding Firewalls

Firewalls enable you to define an access control requirement and ensure that only traffic or data that meets that requirement can traverse the firewall (in the case of a network-based firewall) or access the protected system (in the case of a host-based firewall). Firewalls are used to create security checkpoints at the boundaries of private networks. At these checkpoints, firewalls inspect all packets passing between the private network and the Internet and determine whether to pass or drop the packets depending on how they match the policy rules programmed into the firewall.
Firewalls sit on the borders of your network, connected directly to the circuits that provide access to other networks. For that reason, firewalls are frequently referred to as border security. The concept of border security is important: without it, every host on your network would have to perform the functions of a firewall itself, needlessly consuming computing resources and increasing the time required to connect, authenticate, and encrypt data in high-speed local area networks. Firewalls allow you to centralize all external security services in machines that are optimized for and dedicated to the task. Inspecting traffic at the border gateways also has the benefit of preventing attack traffic from consuming the bandwidth on your internal network.
Fundamentally, firewalls need to be able to perform the following tasks:
  • Manage and control network traffic
  • Authenticate access
  • Act as an intermediary
  • Protect resources
  • Record and report on events
Firewalls function primarily by using three fundamental methods:
  • Packet Filtering: Rejects TCP/IP packets from unauthorized hosts and rejects connection attempts to unauthorized services.
  • Network Address Translation (NAT): Translates the IP addresses of internal hosts to hide them from outside monitoring. You may also hear NAT referred to as IP masquerading.
  • Proxy Services: Makes high-level application connections on behalf of internal hosts in order to completely break the network-layer connection between internal and external hosts.
Packet Filters
Filters compare network protocol packets (such as IP) and transport protocol packets (such as TCP) to a database of rules and forward only those packets that conform to the criteria specified in the rules. Filters can be implemented either in routers or in the TCP/IP stacks of servers. Filters implemented inside routers prevent suspicious traffic from reaching the destination network, whereas TCP/IP filter modules in servers merely prevent that specific machine from responding to suspicious traffic; the traffic still reaches the network and could target any machine on it. Filtering routers protect all the machines on the destination network from suspicious traffic. For that reason, filtering in the TCP/IP stacks of servers (such as that provided by Windows NT) should be used only in addition to router filtering, not instead of it.
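A first-match filter of the kind described above can be sketched in a few lines of TypeScript. The rule and packet shapes here are simplified assumptions; real filters match on many more fields (source port, flags, interfaces, and so on).

```typescript
// Minimal sketch of first-match packet filtering with a default-deny policy.
// Packet and rule shapes are simplified for illustration.

interface Packet { srcIp: string; dstIp: string; protocol: "tcp" | "udp"; dstPort: number; }
interface Rule   { action: "pass" | "drop"; protocol?: "tcp" | "udp"; dstPort?: number; srcIp?: string; }

function filter(packet: Packet, rules: Rule[]): "pass" | "drop" {
  for (const rule of rules) {
    const matches =
      (rule.protocol === undefined || rule.protocol === packet.protocol) &&
      (rule.dstPort === undefined || rule.dstPort === packet.dstPort) &&
      (rule.srcIp === undefined || rule.srcIp === packet.srcIp);
    if (matches) return rule.action; // first matching rule wins
  }
  return "drop"; // default-deny: anything not explicitly allowed is dropped
}

// Example policy: allow web traffic, drop everything else.
const policy: Rule[] = [
  { action: "pass", protocol: "tcp", dstPort: 80 },
  { action: "pass", protocol: "tcp", dstPort: 443 },
];
```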
Network Address Translation
Network Address Translation (NAT) solves the problem of hiding internal hosts. NAT is actually a network layer proxy: A single host makes requests on behalf of all internal hosts, thus hiding their identity from the public network. NAT hides internal IP addresses by converting all internal host addresses to the address of the firewall. The firewall then retransmits the data payload of the internal host from its own address using the TCP port number to keep track of which connections on the public side map to which hosts on the private side. To the Internet, all the traffic on your network appears to be coming from one extremely busy computer.
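The port-mapping bookkeeping this paragraph describes can be sketched as follows; the class name and the starting port number are arbitrary assumptions for illustration.

```typescript
// Minimal sketch of NAT bookkeeping: outbound flows are rewritten to the
// firewall's public address, and the assigned public port routes replies
// back to the right private host. Heavily simplified for illustration.

interface Flow { privateIp: string; privatePort: number; }

class NatTable {
  private next = 40000;                          // next public port to hand out
  private byPublicPort = new Map<number, Flow>();

  outbound(flow: Flow, publicIp: string): { ip: string; port: number } {
    const port = this.next++;
    this.byPublicPort.set(port, flow);           // remember which host owns this port
    return { ip: publicIp, port };               // source address the Internet sees
  }

  inbound(publicPort: number): Flow | undefined {
    return this.byPublicPort.get(publicPort);    // translate a reply back to the private host
  }
}
```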
Proxy Services
Application-level proxies allow you to completely disconnect the flow of network-level protocols through your firewall and restrict traffic to higher-level protocols like HTTP, FTP, and SMTP. Application proxies don't have to run on firewalls; any server, either inside or outside your network, can perform the role of a proxy. Security proxies are even capable of performing application-level filtering for specific content. Proxies are extremely specific because they can only work for a single application: for instance, you must have one proxy software module for HTTP, another for FTP, and another for Telnet.
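For example, a minimal (and deliberately incomplete) HTTP forward proxy in Node.js shows how the client's connection terminates at the proxy, which then opens a separate connection to the destination, so no network-layer path exists between the two hosts. Error handling and HTTPS CONNECT support are omitted for brevity.

```typescript
import * as http from "http";

// Minimal sketch of an application-level (HTTP) proxy. The client connects
// to the proxy; the proxy opens its own connection to the destination and
// relays data between the two. Not production code.

http.createServer((clientReq, clientRes) => {
  const url = new URL(clientReq.url!);           // proxy requests carry absolute URLs
  const upstream = http.request(
    { host: url.hostname, port: url.port || 80, path: url.pathname + url.search,
      method: clientReq.method, headers: clientReq.headers },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);               // relay the response body
    }
  );
  clientReq.pipe(upstream);                      // relay the request body
}).listen(8080);
```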

Wednesday, August 26, 2015

Node.js Architecture

Node is an open source toolkit for developing server side applications based on the V8 JavaScript engine. Like Node, V8 is written in C++ and is mostly known for being used in Google Chrome.
Node is part of the server-side JavaScript ecosystem and extends the JavaScript API to offer the usual server-side functionality. Node's base API can be extended by using the CommonJS module system.
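As a minimal sketch of that module system (file names are illustrative):

```typescript
// greeting.ts — a CommonJS module: whatever is attached to module.exports
// becomes the module's public API. (With plain Node, this file would be
// greeting.js and identical minus the type annotation.)
function greet(name: string): string {
  return `Hello, ${name}!`;
}
module.exports = { greet };

// app.ts — consumers load the module with require():
//   const { greet } = require("./greeting");
//   console.log(greet("Node")); // -> "Hello, Node!"
```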

NodeJS is divided into two main components: the core and its modules. The core is built in C and C++. It combines Google’s V8 JavaScript engine with Node’s Libuv library and protocol bindings including sockets and HTTP.
V8 Runtime Environment
Google’s V8 engine is an open-source just-in-time (JIT) compiler written in C++. In recent benchmarks, V8’s performance has surpassed that of other JavaScript interpreters, including SpiderMonkey and Nitro; it has also surpassed PHP, Ruby, and Python. Because of Google’s approach, some predict that its performance could eventually approach that of C. The engine compiles JavaScript directly into machine code ready for execution, avoiding intermediary representations such as tokens and opcodes that would then be interpreted. The runtime environment is itself divided into three major components: a compiler, an optimizer, and a garbage collector.
Libuv Library
The C++ Libuv library is responsible for Node’s asynchronous I/O operations and main event loop. It is composed of a fixed-size thread pool from which a thread is allocated for each I/O operation. By delegating these time-consuming operations to Libuv, the V8 engine and the remainder of NodeJS are free to continue executing other requests. Before 2012, Node relied on two separate libraries, libeio and libev, to provide asynchronous I/O and support the main event loop. However, libev was supported only on Unix. In order to add Windows support, Libuv was initially fashioned as an abstraction around libev.
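A two-line experiment makes this division of labor visible: the file read is delegated to Libuv's thread pool while V8 keeps running JavaScript, so the second log statement prints first. The file path is an arbitrary example.

```typescript
import { readFile } from "fs";

// The read is handed to Libuv; its callback is queued for the event loop
// once the I/O completes, while V8 continues executing the script.

readFile("/etc/hosts", "utf8", (err, contents) => {
  if (err) throw err;
  console.log("read finished:", contents.length, "characters"); // runs later
});

console.log("read scheduled, still responsive"); // runs first
```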

Sunday, July 26, 2015

What is Node.js?

Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

Node.js is an open source, cross-platform runtime environment for server-side and networking applications. Node.js applications are written in JavaScript and can be run within the Node.js runtime on OS X, Microsoft Windows, Linux, FreeBSD, NonStop, IBM AIX, IBM System z and IBM i. Its work is hosted and supported by the Node.js Foundation, a Collaborative Project at the Linux Foundation. Node.js provides an event-driven architecture and a non-blocking I/O API that optimizes an application's throughput and scalability. These technologies are commonly used for real-time web applications.

Node.js uses the Google V8 JavaScript engine to execute code, and a large percentage of the basic modules are written in JavaScript. Node.js contains a built-in library to allow applications to act as a Web server without software such as Apache HTTP Server or IIS. Node.js is gaining adoption as a server-side platform and is used by IBM, Microsoft, Yahoo!, Walmart, Groupon, SAP, LinkedIn, Rakuten, PayPal, Voxer, and GoDaddy.
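The canonical illustration of that built-in capability is the few-line web server built from the http module alone, with no Apache or IIS in front of it:

```typescript
import * as http from "http";

// A minimal web server using only Node's built-in http module.

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from Node.js\n");
});

server.listen(3000, () => console.log("listening on http://localhost:3000/"));
```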

Node.js was invented in 2009 by Ryan Dahl and other developers working at Joyent, and was first published for Linux use that year. Its development and maintenance were spearheaded by Dahl and sponsored by Joyent, the firm where he worked.

Node.js allows the creation of web servers and networking tools, using JavaScript and a collection of "modules" that handle various core functionality. Modules handle file system I/O, networking (HTTP, TCP, UDP, DNS, or TLS/SSL), binary data (buffers), cryptography functions, data streams, and other core functions. Node's modules have a simple and elegant API, reducing the complexity of writing server applications.

Frameworks can be used to accelerate the development of applications, and common frameworks are Express.js, Socket.IO and Connect. Node.js applications can run on Microsoft Windows, Unix and Mac OS X servers. Node.js applications can alternatively be written with CoffeeScript (a more readable form of JavaScript), Dart or Microsoft TypeScript (strongly typed forms of JavaScript), or any language that can compile to JavaScript.

Monday, July 13, 2015

Evolution of the Microsoft NOS

"NOS" is the term used to describe a networked environment in which various types of resources, such as user, group, and computer accounts, are stored in a central repository that is controlled and accessible to end users. Typically a NOS environment is comprised of one or more servers that provide NOS services, such as authentication and account manipulation, and multiple end users that access those services.
Microsoft's first integrated NOS environment became available in 1993 with the release of Windows NT 3.1, which combined many features of the LAN Manager protocols and the OS/2 operating system. The NT NOS slowly evolved over the next several years until Active Directory was first released in beta in 1997.
Under Windows NT, the "domain" concept was introduced, providing a way to group resources based on administrative and security boundaries. NT domains are flat structures limited to about 40,000 objects (users, groups, and computers). For large organizations, this limitation imposed artificial boundaries on the design of the domain structure. Often, domains were also geographically limited, because the replication of data between domain controllers (i.e., servers providing the NOS services to end users) performed poorly over high-latency or low-bandwidth links. Another significant problem with the NT NOS was delegation of administration, which tended to be an all-or-nothing matter at the domain level.
Microsoft was well aware of these limitations and needed to rearchitect their NOS model into something that would be much more scalable and flexible. For that reason, they looked to LDAP-based directory services as a possible solution.
In generic terms, a directory service is a repository of network, application, or NOS information that is useful to multiple applications or users. Under this definition, the Windows NT NOS is a type of directory service. In fact, there are many different types of directories, including Internet white pages, email systems, and even the Domain Name System (DNS). While each of these systems has characteristics of a directory service, X.500 and the Lightweight Directory Access Protocol (LDAP) define the standards for how a true directory service is implemented and accessed.
Windows NT and Active Directory both provide directory services to clients (Windows NT in a more generic sense). While both share some common concepts, such as Security Identifiers (SIDs) to identify security principals, they are very different from a feature, scalability, and functionality point of view. The table below compares Windows NT and Active Directory.
Windows NT: Single-master replication is used, from the PDC master to the BDC subordinates.
Active Directory: Multimaster replication is used between all domain controllers.

Windows NT: Domain is the smallest unit of partitioning.
Active Directory: Naming Contexts and Application Partitions are the smallest units of partitioning.

Windows NT: System policies can be used locally on machines or set at the domain level.
Active Directory: Group policies can be managed centrally and used by clients throughout the forest based on domain, site, or OU criteria.

Windows NT: Data cannot be stored hierarchically within a domain.
Active Directory: Data can be stored in a hierarchical manner using OUs.

Windows NT: Domain is the smallest unit of security delegation and administration.
Active Directory: A property of an object is the smallest unit of security delegation/administration.

Windows NT: NetBIOS and WINS are used for name resolution.
Active Directory: DNS is used for name resolution.

Windows NT: Object is the smallest unit of replication.
Active Directory: Attribute is the smallest unit of replication. In Windows Server 2003 Active Directory, some attributes replicate on a per-value basis (such as the member attribute of group objects).

Windows NT: Maximum recommended database size for the SAM is 40 MB.
Active Directory: Recommended maximum database size for Active Directory is 70 TB.

Windows NT: Maximum effective number of users is 40,000 (if you accept the recommended 40 MB maximum).
Active Directory: The maximum number of objects is in the tens of millions.

Windows NT: Four domain models (single, single-master, multimaster, complete-trust) are required to solve per-domain admin-boundary and user-limit problems.
Active Directory: No domain models are required, as the complete-trust model is implemented. One-way trusts can be implemented manually.

Windows NT: Schema is not extensible.
Active Directory: Schema is fully extensible.

Windows NT: Data can only be accessed through a Microsoft API.
Active Directory: Supports LDAP, the standard protocol used by directories, applications, and clients that want to access directory data, allowing for cross-platform data access and management.

Table: A comparison between Windows NT and Active Directory