Monday, September 30, 2013

Software Architecture

Software application architecture is the process of defining a structured solution that meets all of the technical and operational requirements, while optimizing common quality attributes such as performance, security, and manageability. It involves a series of decisions based on a wide range of factors, and each of these decisions can have considerable impact on the quality, performance, maintainability, and overall success of the application.

Why Is Architecture Important?
Like any other complex structure, software must be built on a solid foundation. Failing to consider key scenarios, failing to design for common problems, or failing to appreciate the long-term consequences of key decisions can put your application at risk. Modern tools and platforms help to simplify the task of building applications, but they do not replace the need to design your application carefully, based on your specific scenarios and requirements. The risks exposed by poor architecture include software that is unstable, is unable to support existing or future business requirements, or is difficult to deploy or manage in a production environment. Systems should be designed with consideration for the user, the system (the IT infrastructure), and the business goals. For each of these areas, you should outline key scenarios and identify important quality attributes (for example, reliability or scalability) and key areas of satisfaction and dissatisfaction. Where possible, develop and consider metrics that measure success in each of these areas.
Architecture focuses on how the major elements and components within an application are used by, or interact with, other major elements and components within the application. The selection of data structures and algorithms or the implementation details of individual components are design concerns. Architecture and design concerns very often overlap.

The Goals of Architecture
Application architecture seeks to build a bridge between business requirements and technical requirements by understanding use cases, and then finding ways to implement those use cases in the software. The goal of architecture is to identify the requirements that affect the structure of the application. Good architecture reduces the business risks associated with building a technical solution.
The architecture should:
  • Expose the structure of the system but hide the implementation details.
  • Realize all of the use cases and scenarios.
  • Try to address the requirements of various stakeholders.
  • Handle both functional and quality requirements.
Key Architecture Principles:
  • Build to change instead of building to last. Consider how the application may need to change over time to address new requirements and challenges, and build in the flexibility to support this.
  • Model to analyze and reduce risk. Use design tools, modeling systems such as Unified Modeling Language (UML), and visualizations where appropriate to help you capture requirements and architectural and design decisions, and to analyze their impact. However, do not formalize the model to the extent that it suppresses the capability to iterate and adapt the design easily.
  • Use models and visualizations as a communication and collaboration tool. Efficient communication of the design, the decisions you make, and ongoing changes to the design, is critical to good architecture. Use models, views, and other visualizations of the architecture to communicate and share your design efficiently with all the stakeholders, and to enable rapid communication of changes to the design.
  • Identify key engineering decisions. Use the information in this guide to understand the key engineering decisions and the areas where mistakes are most often made. Invest in getting these key decisions right the first time so that the design is more flexible and less likely to be broken by changes.

Cloud computing

In the past few years cloud computing has infiltrated the computing industry; the cloud has been variously hailed as the savior of IT and reviled as a risky and chaotic environment. “The cloud” is simply a business model for the creation and delivery of compute resources. The model’s reliance on shared resources and virtualization allows cloud users to achieve levels of economy and scalability that would be difficult in a traditional data center. It is also a colloquial expression used to describe a variety of different computing concepts that involve a large number of computers connected through a real-time communication network such as the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.
Common cloud options include:
  • Public cloud, in which multiple companies share physical servers and networking resources hosted in a provider’s data center.
  • Private cloud, in which companies do not share resources (although efficiencies may be realized by hosting multiple virtual applications from the same company on a single physical server). Private clouds can be located either in a provider’s data center or in the company’s own on-premises data center.
  • Hybrid cloud, in which virtualized applications can be moved among private and public cloud environments.

Common cloud benefits include:
  • Scalable, on-demand resources: The ability to launch a cloud application in minutes, without having to purchase and configure hardware, enables enterprises to significantly cut their time to market. By taking advantage of cloud options for “bursting” during peak work periods, enterprises can also cost-effectively improve application performance and availability.
  • Budget-friendly: Cloud computing services require no capital investment, instead tapping into the operating budget. As many companies tighten up their processes for approval of capital expenditures, a service can be easier and faster to approve and deploy.
  • Utility pricing: The pay-per-use model that characterizes most cloud services appeals to enterprises that want to avoid overinvesting. It also can shorten the time to recoup the investment.
A few cloud computing providers:
Google, Oracle, Amazon Web Services, Microsoft, Rackspace, Salesforce, VMware, Joyent, Citrix, Bluelock, CenturyLink/Savvis, Verizon/Terremark

Tuesday, August 27, 2013

Web crawler

A Web crawler is a computer program that, given one or more seed URLs, downloads the pages associated with those URLs, extracts any hyperlinks contained in them, and recursively continues to download the pages identified by those hyperlinks. Web content changes and is updated rapidly, without any notice, so a Web crawler searches the web for new or updated information. Web crawlers are an important component of web search engines, where they are used to collect the corpus of web pages indexed by the search engine. They are also used in many other applications that process large numbers of web pages, such as web data mining, comparison shopping engines, and so on.
A Web crawler is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing. A Web crawler may also be called a Web spider, an ant, an automatic indexer, or a Web scutter.

An Internet bot, also known as web robot, WWW robot or simply bot, is a software application that runs automated tasks over the Internet. Typically, bots perform tasks that are both simple and structurally repetitive, at a much higher rate than would be possible for a human alone.
Why do we need a web crawler?
  • To maintain mirror sites for popular Web sites.
  • To test web pages and links for valid syntax and structure.
  • To monitor sites to see when their structure or contents change.
  • To search for copyright infringements.
  • To build a special-purpose index. For example, one that has some understanding of the content stored in multimedia files on the Web.
How does a web crawler work?
A typical web crawler starts by parsing a specified web page, noting any hypertext links on that page that point to other web pages. The crawler then parses those pages for new links, and so on, recursively. A crawler is a piece of software or an automated script that resides on a single machine. The crawler simply sends HTTP requests for documents to other machines on the Internet, just as a web browser does when the user clicks on links. All the crawler really does is automate the process of following links.
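As a rough illustration of this fetch-parse-follow loop, here is a minimal breadth-first crawler sketch in TypeScript (it relies on the global fetch available in Node 18+ or a browser). The seed URL, the page limit, and the naive regex-based link extraction are illustrative assumptions, not how a production crawler is built; real crawlers also respect robots.txt, throttle requests, and persist the downloaded pages.

// Minimal breadth-first crawler sketch. Illustrative only.
async function crawl(seedUrl: string, maxPages: number = 10): Promise<void> {
  const visited = new Set<string>();   // URLs already fetched
  const queue: string[] = [seedUrl];   // frontier of URLs still to fetch

  while (queue.length > 0 && visited.size < maxPages) {
    const url = queue.shift()!;
    if (visited.has(url)) continue;
    visited.add(url);

    try {
      const response = await fetch(url);
      const html = await response.text();
      console.log(`Fetched ${url} (${html.length} bytes)`);

      // Extract absolute links from href attributes and queue any unseen ones.
      const linkPattern = /href="(https?:\/\/[^"]+)"/g;
      for (const match of html.matchAll(linkPattern)) {
        const link = match[1];
        if (!visited.has(link)) queue.push(link);
      }
    } catch (err) {
      console.error(`Failed to fetch ${url}:`, err);
    }
  }
}

crawl("https://example.com");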
List of published crawler architectures for general-purpose crawlers:
  1. Yahoo! Slurp
  2. Bingbot
  3. Googlebot
  4. PolyBot
  5. RBSE
  6. WebCrawler
  7. World Wide Web Worm
  8. WebFountain
  9. WebRACE

Tuesday, August 6, 2013

Agile Software Development using Scrum

Many software development organizations are striving to become more agile, because successful agile teams produce higher-quality software that better meets user needs, more quickly and at a lower cost, than traditional teams do.
The following attributes make the transition to Scrum more difficult than other changes:
  • Successful change is not entirely top-down or bottom-up.
  • The end state is unpredictable.
  • Scrum is pervasive.
  • Scrum is dramatically different.
  • Change is coming more quickly than ever before.
  • Best practices are dangerous.
Despite all the reasons why transitioning to Scrum can be particularly difficult, it is worth the effort because it reduces time to market thanks to the higher productivity of agile teams. The following reasons show why transitioning to an agile process like Scrum is worthwhile:
  • Higher productivity and lower costs
  • Improved employee engagement and job satisfaction
  • Faster time to market
  • Higher quality
  • Improved stakeholder satisfaction
  • What we've been doing no longer works
The five common activities necessary for a successful and lasting Scrum adoption:
  • Awareness that the current process is not delivering acceptable results
  • Desire to adopt Scrum as a way to address current problems
  • Ability to succeed with Scrum
  • Promotion of Scrum through sharing experiences so that we remember and others can see our successes
  • Transfer of the implications of using Scrum throughout the company
Conveniently, these five activities - Awareness, Desire, Ability, Promotion, and Transfer - can be remembered by the acronym ADAPT.

Wednesday, July 17, 2013

Agile : Planning Poker

Planning poker, also called Scrum poker, is a consensus-based technique for estimating, mostly used to estimate the effort or relative size of user stories in software development. It is a popular and straightforward way to determine user story size and to build consensus among the development team members.
The method was first defined and named by James Grenning in 2002 and later popularized by Mike Cohn in the book Agile Estimating and Planning, whose company trademarked the term.
To start a planning poker session, the product owner or customer reads an agile user story or describes a feature to the estimators. Each estimator holds a deck of Planning Poker cards with values like 0, 1, 2, 3, 5, 8, 13, 20, 40 and 100, which is the sequence we recommend. The values represent the number of story points, ideal days, or other units in which the team estimates. The estimators discuss the feature, asking questions of the product owner as needed. When the feature has been fully discussed, each estimator privately selects one card to represent his or her estimate. All cards are then revealed at the same time. Only the development team plays estimation poker; the team lead and product owner don't get a deck and don't provide estimates. However, the team lead can act as a facilitator, and the product owner reads the user stories and provides details on them as needed.
"Planning Poker is a good way to come to a consensus without spending too much time on any one topic. It allows, or forces, people to voice their opinions, thoughts and concerns."
- Lori Schubring, Manager, Bemis Manufacturing Company

Planning poker benefits
Planning poker is a tool for estimating software development projects. It is a technique that minimizes anchoring by asking each team member to play their estimate card such that it cannot be seen by the other players. After each player has selected a card, all cards are exposed at once. Anchoring can occur if a team estimate is based on discussion alone. A team normally has a mix of conservative and optimistic estimators and there may be people who have agendas; developers are likely to want as much time as they can to do the job and the product owner or customer is likely to want it as quickly as possible. Planning poker exposes the potentially influential team member as being isolated in his or her opinion among the group. It then demands that she or he argue the case against the prevailing opinion. If a group is able to express its unity in this manner they are more likely to have faith in their original estimates. If the influential person has a good case to argue everyone will see sense and follow, but at least the rest of the team won't have been anchored; instead they will have listened to reason.
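As a rough illustration of the reveal-at-once mechanic described above, the TypeScript sketch below models a single estimation round. The estimator names, the cards played, and the simple spread check are made up for the example; in practice the "discuss and re-estimate" step is a conversation, not code.

// Illustrative sketch of one planning poker round: estimates are collected
// privately, revealed simultaneously, and a wide spread triggers more discussion.
const deck = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]; // recommended card values

interface Estimate { estimator: string; card: number; }

function revealRound(estimates: Estimate[]): void {
  // Reveal all cards at once, only after everyone has selected one.
  estimates.forEach(e => console.log(`${e.estimator}: ${e.card}`));

  const cards = estimates.map(e => e.card);
  if (cards.some(c => !deck.includes(c))) {
    console.log("Every estimate must be one of the deck's card values");
    return;
  }

  const low = Math.min(...cards);
  const high = Math.max(...cards);
  if (low === high) {
    console.log(`Consensus reached: ${low} story points`);
  } else {
    // The high and low estimators explain their reasoning, then the team plays again.
    console.log(`Spread from ${low} to ${high}: discuss and play another round`);
  }
}

revealRound([
  { estimator: "Dev A", card: 5 },
  { estimator: "Dev B", card: 8 },
  { estimator: "Dev C", card: 5 },
]);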

Friday, July 12, 2013

lawnchair : simple JSON storage

Persistent local storage is one of the areas where native client applications have held an advantage over web applications. Cookies were invented early in the web’s history, and indeed they can be used for persistent local storage of small amounts of data. But they have three potentially deal-breaking downsides:
  • Cookies are included with every HTTP request, thereby slowing down your web application by needlessly transmitting the same data over and over
  • Cookies are included with every HTTP request, thereby sending data unencrypted over the internet (unless your entire web application is served over SSL)
  • Cookies are limited to about 4 KB of data — enough to slow down your application (see above), but not enough to be terribly useful
What's really needed is:
  • a lot of storage space
  • on the client
  • that persists beyond a page refresh
  • and isn’t transmitted to the server
What is Web Storage?
With HTML5, web pages can store data locally within the user's browser. Compared with cookies, Web Storage is more secure and faster: the data is not included with every server request, but is used only when asked for. It is also possible to store large amounts of data without affecting the website's performance. The data is stored in key/value pairs, and a web page can only access data stored by itself.
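For example, a page can read and write its own localStorage entries like this (a TypeScript sketch for browser code; the key names and values are illustrative):

// localStorage persists across sessions; sessionStorage lasts only for the tab.
const visits = Number(localStorage.getItem("visitCount") ?? "0") + 1;
localStorage.setItem("visitCount", String(visits));   // values are always stored as strings
console.log(`You have visited this page ${visits} times.`);

// Structured data must be serialized to a string, for example as JSON.
localStorage.setItem("settings", JSON.stringify({ theme: "dark", fontSize: 14 }));
const settings = JSON.parse(localStorage.getItem("settings") ?? "{}");
console.log(settings.theme);

// Remove a single key, or clear everything stored by this origin.
localStorage.removeItem("settings");
// localStorage.clear();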
Lawnchair
A lawnchair is sorta like a couch except smaller and outside. Perfect for HTML5 web/mobile apps that need a lightweight, adaptive, simple and elegant persistence solution.
  • Collections. A lawnchair instance is really just an array of objects.
  • Adaptive persistence. The underlying store is abstracted behind a consistent interface.
  • Pluggable collection behavior. Sometimes we need collection helpers but not always.
Features
  • Super micro tiny storage without the nasty SQL: pure and delicious JSON.
  • Default build weighs in at 3.4K minified; 1.5 gzip'd!
  • Adapters for any client side store.
  • Designed with mobile in mind.
  • Clean and simple API.
  • Key/value store ...key is optional.
  • Terse syntax for searching/finding.
  • Battle tested in app stores and on the open mobile web.
  • Framework agnostic. (If not a framework atheist!)
By default, Lawnchair will persist data using DOM Storage. If DOM Storage isn't supported by the currently executing JavaScript runtime and other adapters are available, Lawnchair will attempt each successive adapter until it finds one that works.
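A minimal usage sketch, based on Lawnchair's documented save/get/all calls (written as TypeScript, assuming lawnchair.js is already loaded on the page; the store name, keys, and values are illustrative):

// Lawnchair is provided as a page global by lawnchair.js.
declare const Lawnchair: any;

// The callback receives the store once an adapter (DOM Storage by default) is ready.
new Lawnchair({ name: "notes" }, function (store: any) {
  // Save an object; the key is optional and one is generated if omitted.
  store.save({ key: "greeting", text: "hello from lawnchair" });

  // Read a single object back by key.
  store.get("greeting", function (obj: any) {
    console.log(obj.text);
  });

  // Iterate the whole collection.
  store.all(function (records: any[]) {
    console.log(`store holds ${records.length} record(s)`);
  });
});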

Sunday, June 30, 2013

Reporting Services Script File

The Reporting Services rs utility (RS.exe) processes a script that you provide in an input file. Developers and report server administrators can use it to perform operations on a report server, programmatically administering the server with Visual Basic .NET scripts.
Reporting Services scripts can be used to run any of the Reporting Services Web service operations. Script files take a certain format and are written in Visual Basic .NET. Scripting can be used to copy security to multiple reports on a server, to add and delete items, to copy report server items from one server to another and more.
RS.exe is located in the \Program Files\Microsoft SQL Server\110\Tools\Binn folder.
To run the tool, you must have permission to connect to the report server instance you are running the script against. You can run scripts to make changes to the local computer or a remote computer.
Reporting Services Script File
A Reporting Services script is a Visual Basic .NET code file, written against a proxy that is built on Web Service Description Language (WSDL), which defines the Reporting Services SOAP API. A script file is stored as a Unicode or UTF-8 text file with the extension .rss.
The script file acts as a Visual Basic module and can contain user-defined procedures and module-level variables. For the script file to run successfully, it must contain the Main procedure. The Main procedure is the starting point for your script file, and it is the first procedure that is accessed when your script file runs. Main is where you add your Web service operations and run your user-defined subprocedures. The minimum structure you need to execute a report server script file is the Main procedure.
Sample script.rss file:
' The rs utility supplies the "rs" proxy object implicitly, so the script
' needs no explicit declaration or connection code.
Public Sub Main()
    ' Retrieve every catalog item under the root folder, recursing into subfolders.
    Dim items() As CatalogItem
    items = rs.ListChildren("/", True)

    ' Print the name of each item to the console.
    Dim item As CatalogItem
    For Each item In items
        Console.WriteLine(item.Name)
    Next item
End Sub
To run Script.rss in the script environment specifying a user name and password for authenticating the Web service calls: 
rs -i Script.rss -s http://servername/reportserver -u myusername -p mypassword
The script.rss file above lists all children of the root (/) folder.