Thursday, October 6, 2016

Threats to wireless security: Rogue access point

Of all the threats faced by your network security, few are as potentially dangerous as the rogue Access Point (AP). A rogue AP is a WiFi access point installed on your network without authorization; it can be used to sniff wireless network traffic and to gain unauthorized access to your network environment. Ironically, though, this breach in security typically isn't introduced by a malicious hacker or other malcontent. Instead, it's usually installed by someone who is simply looking for the same convenience and flexibility at work that they've grown accustomed to on their own home wireless network.
To prevent the installation of rogue access points, organizations can deploy wireless intrusion prevention systems to monitor the radio spectrum for unauthorized access points. In the airspace of a typical enterprise facility, a large number of wireless access points can be detected: managed access points on the secure network as well as access points in the neighborhood. A wireless intrusion prevention system makes it practical to audit these access points continuously and determine whether any rogue access points are among them.
In order to detect rogue access points, two conditions need to be tested:
  • whether or not the access point is in the managed access point list
  • whether or not it is connected to the secure network
The first of the two conditions is easy to test: compare the wireless MAC address (also called the BSSID) of the access point against the managed access point BSSID list. Automated testing of the second condition, however, can be challenging in light of the following factors:
  • the need to cover different types of access point devices, such as bridging, NAT (router), unencrypted and encrypted wireless links, different relations between the wired and wireless MAC addresses of access points, and soft access points;
  • the need to determine access point connectivity with acceptable response times in large networks; and
  • the requirement to avoid both false positives and false negatives.
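To illustrate the first condition, here is a minimal C# sketch (the class and field names are hypothetical, not taken from any particular WIPS product) that screens an observed BSSID against the managed access point list; anything not on the list still needs the connectivity test before it can be declared rogue.

using System;
using System.Collections.Generic;

// Minimal sketch with hypothetical names: screen observed BSSIDs against the
// managed list (condition 1). Unmanaged APs still need the connectivity
// test (condition 2) before being flagged as rogue.
class RogueApScreening
{
    static readonly HashSet<string> ManagedBssids =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "00:11:22:33:44:55",
            "00:11:22:33:44:66"
        };

    static bool IsManaged(string observedBssid)
    {
        return ManagedBssids.Contains(observedBssid);
    }

    static void Main()
    {
        var observed = new[] { "00:11:22:33:44:55", "AA:BB:CC:DD:EE:FF" };
        foreach (var bssid in observed)
        {
            Console.WriteLine(bssid + ": " +
                (IsManaged(bssid) ? "managed" : "unmanaged - test connectivity"));
        }
    }
}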

Tuesday, October 4, 2016

Puppet on the AWS Cloud

Puppet is a declarative, model-based configuration management solution from Puppet Labs that lets you define the state of your IT infrastructure, and automatically enforces that desired state on your systems. This Quick Start automates the deployment of a Puppet master and Puppet agents from scratch, using AWS CloudFormation templates.
Puppet Enterprise is the commercially supported version of Puppet Labs' open-source configuration management tool, Puppet. Puppet IT automation software uses Puppet's declarative language to manage various stages of the IT infrastructure lifecycle, including the provisioning, patching, configuration, and management of operating system and application components across enterprise data centers and cloud infrastructures.
Built as cross-platform software, Puppet and Puppet Enterprise operate on Linux distributions, including RHEL (and clones such as CentOS and Oracle Linux), Fedora, Debian, Mandriva, Ubuntu, and SUSE, as well as on multiple Unix systems (Solaris, BSD, Mac OS X, AIX, HP-UX) and on Microsoft Windows. It is a model-driven solution that requires limited programming knowledge to use.
Puppet is a server management application that can use ServiceNow configuration item (CI) data to bring computers into a desired state by managing files, services, or packages installed on physical or virtual machines. ServiceNow can interact with Puppet systems that run Linux. ServiceNow identifies a Puppet Master, which controls Puppet nodes, and uses a standalone utility to discover the components in the Puppet environment. ServiceNow uses information about server CIs from the Puppet Master to classify those servers as Puppet nodes. Puppet then evaluates a node's current state and modifies the node to achieve the desired state.
Note: A group of configuration items is called a configuration template in ServiceNow Provider and a node definition in Puppet.

Wednesday, September 28, 2016

Chef on the AWS Cloud

Chef is a platform for automating your infrastructure on Amazon Web Services. Chef is a configuration management tool written in Ruby and Erlang. It uses a pure-Ruby domain-specific language (DSL) for writing system configuration "recipes". Chef streamlines the task of configuring and maintaining a company's servers and can integrate with cloud-based platforms such as Internap, Amazon EC2, Google Cloud Platform, OpenStack, SoftLayer, Microsoft Azure, and Rackspace to automatically provision and configure new machines. Chef offers solutions for both small- and large-scale systems, with features and pricing for the respective ranges.
The user writes "recipes" that describe how Chef manages server applications and utilities (such as Apache HTTP Server, MySQL, or Hadoop) and how they are to be configured. These recipes (which can be grouped together as a "cookbook" for easier management) describe a series of resources that should be in a particular state: packages that should be installed, services that should be running, or files that should be written. These various resources can be configured to specific versions of software to run and can ensure that software is installed in the correct order based on dependencies. Chef makes sure each resource is properly configured and corrects any resources that are not in the desired state.
Chef can run in client/server mode, or in a standalone configuration named "chef-solo". In client/server mode, the Chef client sends various attributes about the node to the Chef server. The server uses Solr to index these attributes and provides an API for clients to query this information. Chef recipes can query these attributes and use the resulting data to help configure the node.
Chef allows you to define, create, and manage your entire application stack on AWS. With a single recipe you can manage and orchestrate a multi-tier application that relies on a variety of AWS services such as Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing (ELB), Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS). Because Chef recipes describe resources in the order of their execution, you can ensure that, for example, an RDS database is created and populated before the Amazon EC2 instances that connect to it are enabled.

Monday, September 26, 2016

Google Cloud Platform

Google Cloud Platform is a cloud computing service by Google that offers hosting on the same supporting infrastructure that Google uses internally for end-user products like Google Search and YouTube. Cloud Platform provides developer products to build a range of programs from simple websites to complex applications. 
Google Cloud Platform is part of a suite of enterprise services from Google for Work and provides a set of modular cloud-based services with a host of development tools, such as hosting and computing, cloud storage, data storage, and translation and prediction APIs.

The core cloud computing services in the Google Cloud Platform include:
  • Google Compute Engine: An infrastructure as a service (IaaS) offering that provides users with virtual machine (VM) instances for workload hosting.
  • Google App Engine: A platform as a service (PaaS) offering that gives software developers access to Google's scalable hosting. Developers can also use a software developer kit (SDK) to develop software products that run on App Engine.
  • Google Cloud Storage: A cloud storage platform designed to store large, unstructured data sets. Google also offers database storage options, including Cloud Datastore for NoSQL non-relational storage, Cloud SQL for fully relational MySQL storage, and Google's native Cloud Bigtable database.
  • Google Container Engine: A management and orchestration system for Docker containers that run within Google's public cloud. Google Container Engine is based on the Google Kubernetes container orchestration engine.

Sunday, August 7, 2016

Nice device - Puck.js

Recently I came across an article that mentions this new small smart device, similar to the Raspberry Pi.
Puck.js - the ground-breaking Bluetooth beacon
Puck.js is a low-energy smart device that can be programmed wirelessly and comes pre-installed with JavaScript. It is both multi-functional and easy to use, with a custom circuit board, the latest Nordic chip, Bluetooth Smart, infrared and much more, all enclosed in a tiny silicone case.

Friday, August 5, 2016

SOLID Design Principles C#

In computer programming, SOLID (single responsibility, open-closed, Liskov substitution, interface segregation, and dependency inversion) is a mnemonic acronym introduced by Michael Feathers for the "first five principles" named by Robert C. Martin in the early 2000s, which stand for five basic principles of object-oriented programming and design. The intention is that these principles, when applied together, make it more likely that a programmer will create a system that is easy to maintain and extend over time. The SOLID principles are guidelines that can be applied while working on software to remove code smells by prompting the programmer to refactor the source code until it is both legible and extensible. They are part of an overall strategy of agile and adaptive software development.
SOLID comprises five basic principles that help create good software architecture. The acronym stands for:
  1. SRP, the Single Responsibility Principle: a class should have one, and only one, reason to change.
  2. OCP, the Open Closed Principle: you should be able to extend a class's behavior without modifying it.
  3. LSP, the Liskov Substitution Principle: derived classes must be substitutable for their base classes.
  4. ISP, the Interface Segregation Principle: make fine-grained interfaces that are client specific.
  5. DIP, the Dependency Inversion Principle: depend on abstractions, not on concrete implementations.
1. Single Responsibility Principle (SRP)
The single responsibility principle states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. 
  • All its services should be narrowly aligned with that responsibility
As per SRP, there should not be more than one reason for a class to change; a class should always handle a single piece of functionality. If you put more than one piece of functionality into a single C# class, you introduce coupling between them, and changing one of them risks breaking the other, which requires another round of testing to avoid surprises in the production environment.
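As a minimal C# sketch (the classes are purely illustrative, not from a real project), report generation and report persistence are kept in separate classes so that each has exactly one reason to change:

using System.Collections.Generic;
using System.IO;
using System.Linq;

// Illustrative sketch: generation and persistence are separate classes,
// so each one has a single reason to change.
class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

class ReportGenerator
{
    // Changes only when the report content or format changes.
    public string Generate(IEnumerable<Order> orders)
    {
        return string.Join("\n", orders.Select(o => o.Id + ": " + o.Total));
    }
}

class ReportWriter
{
    // Changes only when the persistence mechanism changes.
    public void SaveToFile(string report, string path)
    {
        File.WriteAllText(path, report);
    }
}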

2. Open Closed Principle (OCP)
In object-oriented programming, the open/closed principle states “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”; that is, such an entity can allow its behaviour to be extended without modifying its source code.
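A small illustrative C# sketch (hypothetical types): new shapes are added by writing a new subclass, while the calculator that consumes them stays closed for modification:

using System;
using System.Collections.Generic;
using System.Linq;

// The abstraction is the extension point: add a new shape by adding a class,
// not by editing AreaCalculator.
abstract class Shape
{
    public abstract double Area();
}

class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }
    public override double Area() { return Width * Height; }
}

class Circle : Shape
{
    public double Radius { get; set; }
    public override double Area() { return Math.PI * Radius * Radius; }
}

class AreaCalculator
{
    // Closed for modification: works unchanged for any future Shape subclass.
    public double TotalArea(IEnumerable<Shape> shapes)
    {
        return shapes.Sum(s => s.Area());
    }
}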

3. Liskov substitution principle (LSP)
The Liskov substitution principle (LSP) is a particular definition of a subtyping relation, called (strong) behavioral subtyping, that was initially introduced by Barbara Liskov in a 1987 conference keynote address entitled "Data abstraction and hierarchy".
  • if S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program
In order to follow this principle, we need to make sure that subtypes respect the contract of their parent class.
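The classic illustration in C# (a hypothetical sketch) is a Square derived from Rectangle: code written against Rectangle assumes width and height vary independently, so substituting a Square silently changes behaviour:

using System;

// Sketch of an LSP violation: Square keeps its sides equal, breaking an
// assumption that callers of Rectangle rely on.
class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    public int Area() { return Width * Height; }
}

class Square : Rectangle
{
    public override int Width
    {
        set { base.Width = value; base.Height = value; }
    }
    public override int Height
    {
        set { base.Width = value; base.Height = value; }
    }
}

class Program
{
    static void Main()
    {
        Rectangle r = new Square();
        r.Width = 2;
        r.Height = 5;
        // A caller expecting Rectangle semantics would predict 10 but gets 25,
        // so Square is not a safe substitute for Rectangle here.
        Console.WriteLine(r.Area());
    }
}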

4. Interface Segregation Principle (ISP)
The interface-segregation principle (ISP) states that no client should be forced to depend on methods it does not use.
  • ISP splits interfaces which are very large into smaller and more specific ones so that clients will only have to know about the methods that are of interest to them.
  • ISP is intended to keep a system decoupled and thus easier to refactor, change, and redeploy.
ISP is one of the five SOLID principles of object-oriented design, similar to the High Cohesion Principle of GRASP.
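A brief C# sketch (hypothetical interfaces): instead of one fat device interface, smaller interfaces let a simple printer depend only on the methods it actually uses:

// Sketch: splitting a fat device interface so clients depend only on
// what they use.
interface IPrinter
{
    void Print(string document);
}

interface IScanner
{
    void Scan(string document);
}

class SimplePrinter : IPrinter
{
    // Not forced to provide a meaningless Scan implementation.
    public void Print(string document) { /* send to printer */ }
}

class MultiFunctionDevice : IPrinter, IScanner
{
    public void Print(string document) { /* send to printer */ }
    public void Scan(string document) { /* read from scanner */ }
}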

5. Dependency Inversion principle (DIP)
The Dependency Inversion principle refers to a specific form of decoupling software modules. It states:
  • High-level modules should not depend on low-level modules. Both should depend on abstractions.
  • Abstractions should not depend on details. Details should depend on abstractions.
The Dependency Inversion Principle (DIP) helps us develop loosely coupled code by ensuring that high-level modules depend on abstractions rather than on concrete implementations of lower-level modules.
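A minimal C# sketch (hypothetical types): the high-level OrderService depends on an IMessageSender abstraction, and the concrete EmailSender detail also depends on that abstraction, so either side can change independently:

using System;

// Both the high-level OrderService and the low-level EmailSender depend on
// the IMessageSender abstraction rather than on each other.
interface IMessageSender
{
    void Send(string message);
}

class EmailSender : IMessageSender
{
    public void Send(string message)
    {
        Console.WriteLine("Email: " + message);
    }
}

class OrderService
{
    private readonly IMessageSender _sender;

    // The dependency is injected; OrderService never references EmailSender directly.
    public OrderService(IMessageSender sender)
    {
        _sender = sender;
    }

    public void PlaceOrder(string item)
    {
        _sender.Send("Order placed: " + item);
    }
}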

Tuesday, July 26, 2016

U-SQL

U-SQL is a data processing language that unifies the benefits of SQL with the expressive power of your own code to process all data at any scale. U-SQL's scalable distributed query capability enables you to efficiently analyze data in the store and across relational stores such as Azure SQL Database. It enables you to process unstructured data by applying schema on read and inserting custom logic and UDFs, and it includes extensibility to give you fine-grained control over how to execute at scale.
U-SQL is the new big data query language of the Azure Data Lake Analytics service. It evolved out of Microsoft's internal big data language called SCOPE, and it combines a familiar SQL-like declarative language with the extensibility and programmability provided by C# types and the C# expression language, as well as big data processing concepts such as "schema on read", custom processors, and reducers. It also provides the ability to query and combine data from a variety of data sources, including Azure Data Lake Storage, Azure Blob Storage, Azure SQL Database, Azure SQL Data Warehouse, and SQL Server instances running in Azure VMs. It is, however, not ANSI SQL.
U-SQL script:
The main unit of a U-SQL “program” is a U-SQL script. A script consists of an optional script prolog and a sequence of U-SQL statements.
@t = EXTRACT date string
        , time string
        , author string
        , tweet string
    FROM "/input/MyTwitterHistory.csv"
    USING Extractors.Csv();

@res = SELECT author
    , COUNT(*) AS tweetcount
    FROM @t
    GROUP BY author;

OUTPUT @res TO "/output/MyTwitterAnalysis.csv"
ORDER BY tweetcount DESC
USING Outputters.Csv();
The above U-SQL script shows the three major steps of processing data with U-SQL:
  • Extract data from your source using the EXTRACT statement in the query. The data types are based on C# data types, and the script uses the built-in Extractors library to read and schematize the CSV file.
  • Transform using SQL and/or custom user defined operators.
  • Output the result either into a file or into a U-SQL table to store it for further processing.
U-SQL combines some familiar concepts from a variety of languages: it is a declarative language like SQL; it follows a dataflow-like composition of statements and expressions, like Cascading; it provides simple ways to extend the language with user-defined operators, user-defined aggregators, and user-defined functions using C#; and it provides a SQL database-like metadata object model to manage, discover, and secure structured data and user code.
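As a rough illustration of that C# extensibility (the namespace, class, and method names below are hypothetical, not part of the script above), a code-behind file can expose a static method that a U-SQL expression then calls, for example as MyUdfs.Helpers.ExtractMentions(tweet) inside a SELECT clause:

// Hypothetical code-behind file (e.g. MyTwitterHistory.usql.cs) for a U-SQL script.
using System.Linq;

namespace MyUdfs
{
    public static class Helpers
    {
        // Returns a comma-separated list of the @mentions found in a tweet.
        public static string ExtractMentions(string tweet)
        {
            if (string.IsNullOrEmpty(tweet))
            {
                return string.Empty;
            }

            var mentions = tweet.Split(' ')
                                .Where(w => w.StartsWith("@"))
                                .Select(w => w.TrimEnd(',', '.', ':', '!'));
            return string.Join(",", mentions);
        }
    }
}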