Tuesday, December 31, 2013

Happy New Year 2014



I wish you a
Sweetest Sunday,
Marvelous Monday,
Tasty Tuesday,
Wonderful Wednesday,
Thankful Thursday,
Friendly Friday,
Successful Saturday,
Have a great Year,
HAPPY NEW YEAR

Web API

What is Web API?
Web API is a new web application runtime that builds on the lessons and patterns proven in ASP.NET MVC. Using a simple controller paradigm, Web API enables a developer to create simple HTTP web services with very little code and configuration. Web API is an ideal platform for building pure HTTP-based services in which requests and responses travel over the HTTP protocol. The client can make GET, PUT, POST, and DELETE requests and receive an appropriate response from the Web API.
ASP.NET Web API is a framework that makes it easy to build HTTP services that reach a broad range of clients, including browsers and mobile devices. ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework.
WebAPI is
  • An HTTP Service
  • Designed for broad reach
  • Uses HTTP as an Application protocol, not a transport protocol
Why Web API?
Today, a web-based application alone is not enough to reach all of its customers. People are smart; they use iPhones, other mobile phones, and tablets in their daily lives, and these devices carry a lot of apps that make life easier. We are, in effect, moving from the web towards an apps world.
Web API is a great framework for exposing your data and services to many different devices. Moreover, Web API is open source and an ideal platform for building RESTful services on the .NET Framework. Unlike a WCF REST service, it uses the full features of HTTP (such as URIs, request/response headers, caching, versioning, and various content formats), and you don't need to define any extra configuration settings for different devices.
How is Web API different from WCF?
  • Transport: WCF enables building services that support multiple transport protocols (HTTP, TCP, UDP, and custom transports) and allows switching between them. Web API is HTTP only, with a first-class programming model for HTTP; it is more suitable for access from various browsers, mobile devices, etc., enabling wide reach.
  • Encodings and media types: WCF enables building services that support multiple encodings (Text, MTOM, and Binary) of the same message type and allows switching between them. Web API enables building services that support a wide variety of media types, including XML, JSON, etc.
  • Protocol standards: WCF supports building services with WS-* standards such as Reliable Messaging, Transactions, and Message Security. Web API uses basic protocols and formats such as HTTP, WebSockets, SSL, jQuery, JSON, and XML; there is no support for higher-level protocols such as Reliable Messaging or Transactions.
  • Message exchange patterns: WCF supports Request-Reply, One-Way, and Duplex patterns. Web API's HTTP is request/response, but additional patterns can be supported through SignalR and WebSockets integration.
  • Service description: WCF SOAP services can be described in WSDL, allowing automated tools to generate client proxies even for services with complex schemas. For Web API there is a variety of ways to describe a service, ranging from an auto-generated HTML help page with snippets to structured metadata for OData-integrated APIs.
  • Availability: WCF ships with the .NET Framework. Web API also ships with the .NET Framework, but it is open source and is available out-of-band as an independent download as well.
Web API Features
  • It supports convention-based CRUD actions, since it works with the HTTP verbs GET, POST, PUT, and DELETE (see the controller sketch after this list).
  • Responses carry an HTTP status code, and content negotiation honors the request's Accept header.
  • Responses are formatted by Web API's MediaTypeFormatter into JSON, XML, or whatever format you want to add as a MediaTypeFormatter.
  • It can accept and generate content that is not object-oriented, such as images and PDF files.
  • It has automatic support for OData: by placing the new [Queryable] attribute on a controller method that returns IQueryable, clients can use the method for OData query composition.
  • It can be hosted within the application (self-hosting) or on IIS.
  • It also supports MVC features such as routing, controllers, action results, filters, model binders, and IoC containers/dependency injection, which make it simpler and more robust.
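To make the convention-based CRUD actions above concrete, here is a minimal sketch of a Web API controller. The Product type, its members, and the in-memory list are illustrative assumptions rather than part of any official sample, and routing assumes the default api/{controller}/{id} route.

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Web.Http;

// Illustrative model type (an assumption for this sketch).
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// Because the class derives from ApiController and the action names start with
// HTTP verbs, GET/POST/PUT/DELETE requests are mapped to them by convention.
public class ProductsController : ApiController
{
    private static readonly List<Product> Store = new List<Product>();

    // GET api/products
    public IEnumerable<Product> GetAllProducts()
    {
        return Store;
    }

    // GET api/products/5
    public Product GetProduct(int id)
    {
        var product = Store.FirstOrDefault(p => p.Id == id);
        if (product == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return product; // serialized to JSON or XML by a MediaTypeFormatter
    }

    // POST api/products
    public void PostProduct(Product product)
    {
        Store.Add(product);
    }

    // DELETE api/products/5
    public void DeleteProduct(int id)
    {
        Store.RemoveAll(p => p.Id == id);
    }
}

With this in place, a GET to /api/products returns the whole collection, and content negotiation picks JSON or XML based on the request's Accept header.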

Friday, December 27, 2013

XBRL : eXtensible Business Reporting Language

XBRL (eXtensible Business Reporting Language) is a freely available and global standard for exchanging business information. XBRL allows the expression of semantic meaning commonly required in business reporting. The goal of XBRL is to standardize the automation of business intelligence (BI).
Highlights:
  • XBRL International is a not-for-profit consortium of approximately 650 companies and agencies worldwide working together to build the XBRL language and promote and support its adoption
  • XBRL is XML-based, it uses the XML syntax and related XML technologies
  • The effort began in 1998 and has produced a variety of specifications and taxonomies
  • It is a language for the electronic communication of business and financial data
  • It is an open standard which supports information modeling and the expression of semantic meaning commonly required in business reporting
  • It provides major benefits in the preparation, analysis and communication of business information.
  • It offers cost savings, greater efficiency and improved accuracy and reliability to all those involved in supplying or using financial data. 
  • XBRL is a standards-based way to communicate business and financial information.
  • These communications are defined by metadata set out in taxonomies.
  • Taxonomies capture the definition of individual reporting concepts as well as the relationships between concepts and other semantic meaning.
  • Provides an identifying tag for each individual item of data so that it can be processed by name
  • The basis for this technology is a "tagging" process by which each value, item, and descriptor, etc. in the exchanged information can be given a unique set of tags with which to describe it. Using these tags, computer programs can read the data without human intervention.
For more information: http://xbrl.squarespace.com/

Monday, December 9, 2013

Hekaton : In-Memory Optimization

Hekaton : The codename for a Microsoft project that will provide support for mixing in-memory database tables with more traditional on-disk tables in the same database. These hybrid databases will be able to dynamically handle both in-memory and on-disk storage in the same database for optimal performance and reliability. Traditional RDBMS architecture was designed when memory resources were expensive, and was optimized for disk I/O. Modern hardware has much more memory, which affects database design principles dramatically. Modern design can now optimize for a working set stored entirely in main memory. Hekaton fully provides ACID database properties.
Project Hekaton will enable Microsoft to compete in the in-memory database market with products like Oracle Database's Exadata and Exalytics appliance options and SAP Hana.
This SQL Server in-memory OLTP capability is released with SQL Server 2014. In-Memory OLTP is a memory-optimized OLTP database engine for SQL Server. Depending on the reason for poor performance with your disk-based tables, In-Memory OLTP can help you achieve significant performance and scalability gains by using:
  • Algorithms that are optimized for accessing memory-resident data.
  • Optimistic concurrency control that eliminates logical locks.
  • Lock free objects are used to access all data. Threads that perform transactional work don’t use locks or latches for concurrency control.
  • Natively compiled stored procedures result in orders of magnitude reduction in the engine code path.
Depending on the workload, moving the working set into main memory can yield anywhere from a few percentage points of performance improvement up to a 20x performance improvement.

Thursday, November 28, 2013

Open Authentication (OAuth)

OAuth is a free and open protocol, built on IETF standards and licenses from the Open Web Foundation, and is the right solution for securing open platforms. OAuth is a simple way to publish and interact with protected data. It's also a safer and more secure way for people to give you access. An open protocol to allow secure authorization in a simple and standard method from web, mobile and desktop applications.
OAuth is ‘An API access delegation protocol’
The heart of OAuth is an authorization token with limited rights, which the user can revoke at any time should they become suspicious or dissatisfied. OAuth supports "delegated authentication" between web apps using a security token called an "access token". Delegated authorization is granting access to another person or application to perform actions on your behalf. An OAuth token gives one app access to one API on behalf of one user.
Below is the pictorial representation of OAuth Authorization Flow:
Microsoft provides the "Microsoft.Web.WebPages.OAuth.dll" for OAuth implementation via .Net applications. The Microsoft.Web.WebPages.OAuth namespace contains core classes that are used to work with OAuth and OpenID authentication. The classes in this namespace interact with the classes from the open-source DotNetOpenAuth library.
OAuthWebSecurity Class
This class manages security that uses OAuth authentication providers like Facebook, Twitter, LinkedIn, and Windows Live, and OpenID authentication providers like Google and Yahoo. Below are the main APIs commonly used in an implementation (a short usage sketch follows this list).
  1. Register...Client (... : Microsoft, Facebook, etc.) : These register methods allow you to register a specific identity provider.
  2. RegisteredClientData : This API provides the list of the registered identity providers. This is necessary for the ProviderName property when requesting authentication.
  3. RequestAuthentication : This is the API to invoke to trigger a login with one of the identity providers. The parameters are the identity provider name (so one of the ProviderName values from the RegisteredClientData collection) and the return URL where you will receive the authentication token from the identity provider. Internally it does a Response.Redirect to take the user to the identity provider consent screen.
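As a minimal sketch (not the full ASP.NET MVC template), these APIs can be wired up roughly as below. The controller name, action names, and the placeholder app id/secret are assumptions for illustration; the calls themselves come from Microsoft.Web.WebPages.OAuth.

using System.Web.Mvc;
using Microsoft.Web.WebPages.OAuth;

public class AccountController : Controller
{
    // Typically called once at application start-up to register the
    // identity providers the site wants to offer.
    public static void RegisterProviders()
    {
        OAuthWebSecurity.RegisterFacebookClient(
            appId: "your-app-id",            // placeholder values
            appSecret: "your-app-secret");
        OAuthWebSecurity.RegisterGoogleClient();
        // OAuthWebSecurity.RegisteredClientData now lists both providers.
    }

    [HttpPost]
    public ActionResult ExternalLogin(string provider, string returnUrl)
    {
        // Redirects the browser to the chosen provider's consent screen.
        OAuthWebSecurity.RequestAuthentication(provider,
            Url.Action("ExternalLoginCallback", new { ReturnUrl = returnUrl }));
        return new EmptyResult();
    }

    public ActionResult ExternalLoginCallback(string returnUrl)
    {
        // The identity provider redirects back here with the authentication token.
        var result = OAuthWebSecurity.VerifyAuthentication(
            Url.Action("ExternalLoginCallback", new { ReturnUrl = returnUrl }));
        if (result.IsSuccessful)
        {
            // Signs the user in if this external account is already linked locally.
            OAuthWebSecurity.Login(result.Provider, result.ProviderUserId,
                createPersistentCookie: false);
        }
        return Redirect(returnUrl ?? Url.Action("Index", "Home"));
    }
}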

Reference Links: http://hueniverse.com/2007/09/explaining-oauth/

Tuesday, November 5, 2013

Kanban Vs Scrum

Difference between Kanban and Scrum:
  • Iterations : Kanban sees development as a continuously ongoing flow of work, whereas in Scrum you work in iterations.
  • Commitment : Kanban work is ongoing, whereas in Scrum a team commits to what it will do during a sprint.
  • Estimations : In Kanban estimation is optional, since the focus is on time-to-market. In Scrum you need estimates to be able to calculate a velocity.
  • Cross-functional teams : That’s one of the pillars of Scrum. For Kanban it’s optional.
  • Workflow : The Kanban Method does not prescribe any workflow. Scrum prescribes a set of activities that are performed within a Sprint.
  • Roles : Kanban does not prescribe any roles. Scrum generally prescribes three roles, Scrum Master, Product Owner, and Team Member.
  • System Thinking : The Kanban Method takes a system thinking approach to process problems. Scrum is team-centric.
Kanban vs Scrum at a glance:
  • Board / artifacts: Kanban uses a board only; Scrum uses a board, backlogs, and burn-downs.
  • Ceremonies: Kanban has the daily scrum, review/retrospective on a set frequency, and ongoing planning; Scrum has the daily scrum, sprint planning, sprint review, and sprint retrospective.
  • Iterations: none in Kanban (continuous flow); yes in Scrum (sprints).
  • Estimation: not required in Kanban (similarly sized items); required in Scrum.
  • Teams: can be specialized in Kanban; must be cross-functional in Scrum.
  • Roles: in Kanban, the team plus whatever roles are needed; in Scrum, Product Owner, Scrum Master, and Team.
  • Teamwork: swarming to achieve goals in Kanban; collaborative as needed by task in Scrum.
  • WIP: controlled by workflow state in Kanban; controlled by sprint content in Scrum.
  • Changes: added as needed on the board (to do) in Kanban; should wait for the next sprint in Scrum.
  • Product backlog: just-in-time cards in Kanban; a list of prioritized and estimated stories in Scrum.
  • Impediments: avoided in Kanban; dealt with immediately in Scrum.

Wednesday, October 30, 2013

Hybrid Cloud Storage

Cloud Storage is a form of  networked online storage where data is stored in virtualized pools of storage. The data center operators, in the background, virtualize the resources according to the requirements of the customer and expose them as storage pools, which the customers can themselves use to store files or data objects. Physically, the resource may span across multiple servers and multiple locations. Cloud storage services may be accessed through a web service application programming interface (API) or by applications that utilize the API, such as cloud desktop storage, a cloud storage gateway or Web-based content management systems.
Hybrid cloud storage is a type of cloud storage model that derives and combines the functionality of public and private cloud storage models to provide storage services. This storage technique uses internal and external cloud applications, infrastructure and storage systems to form integrated storage architecture.
Hybrid cloud storage overcomes the problems of managing data and storage by integrating on-premises storage with cloud storage services. In this architecture, on-premises storage uses the capacity on internal SSDs and HDDs, as well on the expanded storage resources that are provided by cloud storage. A key element of the architecture is that the distance over which data is stored is extended far beyond the on-premises data center, thereby providing disaster protection.
Cloud Storage Provider's on-premise storage components vary widely in architecture, functionality, cloud integration and form factors.  There are huge differences between what are called cloud storage gateways and cloud-integrated storage.  Gateways provide an access point from on-premises to cloud data  transfers, whereas cloud-integrated storage is an automated solution to some of IT’s biggest problems such as managing data growth, backup and recovery, archiving data and disaster preparation.
Cloud-integrated storage (CiS), like Microsoft’s recent StorSimple acquisition, is a full-featured on-premises storage system that integrates with the cloud (i.e. Windows Azure Storage). Enterprises can create a complete hybrid cloud storage architecture where they store their data locally and protect it with snapshots on-premises and in the cloud, and where dormant data can be seamlessly tiered to the cloud to make room for new data locally. This way, CiS gives IT an automated  “safety valve” to prevent running out of disk capacity, while also providing rapid disaster recovery via cloud snapshots.

Wednesday, October 9, 2013

Microsoft SQL Server 2014

Microsoft SQL Server 2014 brings to market lots of new features and enhancements over prior versions. SQL Server 2014 delivers mission-critical performance across all workloads with in-memory capabilities built in, faster insights from any data with familiar tools, and a platform for hybrid cloud enabling organizations to easily build, deploy, and manage solutions that span on-premises and cloud.
SQL Server 2014 delivers new in-memory capabilities built into the core database for OLTP and data warehousing, which complement our existing in-memory data warehousing and BI capabilities for the most comprehensive in-memory database solution in the market.
The Power of SQL Server 2014 includes:
Performance Enhancements:
  • In-Memory OLTP: Average 10x and up to 50x performance gains
  • Enhanced In-Memory ColumnStore for DW: Updatable, faster, better compression
  • In-Memory BI with PowerPivot: Fast insights
  • Buffer Pool Extension to SSDs: Faster paging
  • Enhanced Query Processing: Faster performance without any app changes
Data Retrieval Enhancements:
  • Power Query (codename “Data Explorer”): Easy access to internal and external data
  • Power Map (codename “Geo Flow”): Richer insights with 3D visualization
  • Parallel Data Warehouse with Polybase: Query big data with T-SQL
  • Data Mining Add-ins for Excel: Predictive analytics
Cloud based Enhancements:
  • Simplified cloud Backup: Reduce CAPEX & OPEX
  • Windows Azure SQL Database service: Develop new variable demand cloud apps quickly with HA built-in
  • HA/Always On in Azure: Helps to handle disaster recovery
  • Extend on-premises apps to the cloud: Gain cloud scale on demand
You can download  Microsoft SQL Server 2014 Community Technology Preview 1 (CTP1) at below location:  

Monday, September 30, 2013

Software Architecture

Software application architecture is the process of defining a structured solution that meets all of the technical and operational requirements, while optimizing common quality attributes such as performance, security, and manageability. It involves a series of decisions based on a wide range of factors, and each of these decisions can have considerable impact on the quality, performance, maintainability, and overall success of the application.

Why Is Architecture Important?
Like any other complex structure, software must be built on a solid foundation. Failing to consider key scenarios, failing to design for common problems, or failing to appreciate the long term consequences of key decisions can put your application at risk. Modern tools and platforms help to simplify the task of building applications, but they do not replace the need to design your application carefully, based on your specific scenarios and requirements. The risks exposed by poor architecture include software that is unstable, is unable to support existing or future business requirements, or is difficult to deploy or manage in a production environment. Systems should be designed with consideration for the user, the system (the IT infrastructure), and the business goals. For each of these areas, you should outline key scenarios and identify important quality attributes (for example, reliability or scalability) and key areas of satisfaction and dissatisfaction. Where possible, develop and consider metrics that measure success in each of these areas.
Architecture focuses on how the major elements and components within an application are used by, or interact with, other major elements and components within the application. The selection of data structures and algorithms or the implementation details of individual components are design concerns. Architecture and design concerns very often overlap.

The Goals of Architecture
Application architecture seeks to build a bridge between business requirements and technical requirements by understanding use cases, and then finding ways to implement those use cases in the software. The goal of architecture is to identify the requirements that affect the structure of the application. Good architecture reduces the business risks associated with building a technical solution.
The architecture should:
  • Expose the structure of the system but hide the implementation details.
  • Realize all of the use cases and scenarios.
  • Try to address the requirements of various stakeholders.
  • Handle both functional and quality requirements.
Key Architecture Principles:
  • Build to change instead of building to last. Consider how the application may need to change over time to address new requirements and challenges, and build in the flexibility to support this.
  • Model to analyze and reduce risk. Use design tools, modeling systems such as Unified Modeling Language (UML), and visualizations where appropriate to help you capture requirements and architectural and design decisions, and to analyze their impact. However, do not formalize the model to the extent that it suppresses the capability to iterate and adapt the design easily.
  • Use models and visualizations as a communication and collaboration tool. Efficient communication of the design, the decisions you make, and ongoing changes to the design, is critical to good architecture. Use models, views, and other visualizations of the architecture to communicate and share your design efficiently with all the stakeholders, and to enable rapid communication of changes to the design.
  • Identify key engineering decisions. Use the information in this guide to understand the key engineering decisions and the areas where mistakes are most often made. Invest in getting these key decisions right the first time so that the design is more flexible and less likely to be broken by changes.

Cloud computing

In the past few years cloud computing has infiltrated the computer industry; the cloud has been variously hailed as the savior of IT and reviled as a risky and chaotic environment. “The cloud” is simply a business model for the creation and delivery of compute resources. The model’s reliance on shared resources and virtualization allows cloud users to achieve levels of economy and scalability that would be difficult in a traditional data center. It is a colloquial expression used to describe a variety of different types of computing concepts that involve a large number of computers connected through a real-time communication network such as the Internet. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.
Common cloud options include:
  • Public cloud, in which multiple companies share physical servers and networking resources hosted in a provider’s data center.
  • Private cloud, in which companies do not share resources (although efficiencies may be realized by hosting multiple virtual applications from the same company on a single physical server). Private clouds can be located either in a provider’s data center or in the company’s own on-premises data center.
  • Hybrid cloud, in which virtualized applications can be moved among private and public cloud environments.

Common cloud benefits include:
  • Scalable, on-demand resources: The ability to launch a cloud application in minutes, without having to purchase and configure hardware, enables enterprises to significantly cut their time to market. By taking advantage of cloud options for “bursting” during peak work periods, enterprises can also cost-effectively improve application performance and availability.
  • Budget-friendly: Cloud computing services require no capital investment, instead tapping into the operating budget. As many companies tighten up their processes for approval of capital expenditures, a service can be easier and faster to approve and deploy.
  • Utility pricing: The pay-per-use model that characterizes most cloud services appeals to enterprises that want to avoid overinvesting. It also can shorten the time to recoup the investment.
A few cloud computing providers: Google, Oracle, Amazon Web Services, Microsoft, Rackspace, Salesforce, VMware, Joyent, Citrix, Bluelock, CenturyLink/Savvis, and Verizon/Terremark.

Tuesday, August 27, 2013

Web crawler

A web crawler is a computer program that, given one or more seed URLs, downloads the pages associated with those URLs from the World Wide Web, extracts any hyperlinks contained in them, and recursively continues to download the pages identified by those hyperlinks. Web content changes and is updated rapidly and without notice, and a crawler searches the web for that new or updated information. Web crawlers are an important component of web search engines, where they are used to collect the corpus of web pages indexed by the search engine. Moreover, they are used in many other applications that process large numbers of web pages, such as web data mining, comparison shopping engines, and so on.
A Web crawler is an Internet bot that systematically browses the World Wide Web, typically for the purpose of Web indexing. A Web crawler may also be called a Web spider, an ant, an automatic indexer, or a Web scutter.

An Internet bot, also known as web robot, WWW robot or simply bot, is a software application that runs automated tasks over the Internet. Typically, bots perform tasks that are both simple and structurally repetitive, at a much higher rate than would be possible for a human alone.
Why do we need a web crawler?
  • To maintain mirror sites for popular Web sites.
  • To test web pages and links for valid syntax and structure.
  • To monitor sites to see when their structure or contents change.
  • To search for copyright infringements.
  • To build a special-purpose index. For example, one that has some understanding of the content stored in multimedia files on the Web.
How does a web crawler work?
A typical web crawler starts by parsing a specified web page, noting any hypertext links on that page that point to other web pages. The crawler then parses those pages for new links, and so on, recursively. A crawler is a piece of software, a script, or an automated program that resides on a single machine. The crawler simply sends HTTP requests for documents to other machines on the Internet, just as a web browser does when the user clicks on links. All the crawler really does is automate the process of following links. The picture below shows the architecture of a web crawler.
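For illustration only, here is a minimal single-threaded crawler sketch in C# that follows the fetch-parse-enqueue loop described above. The seed URL, the page limit, and the regex-based link extraction are simplifying assumptions; a real crawler would use a proper HTML parser, respect robots.txt, and throttle its requests.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

class MiniCrawler
{
    static readonly HttpClient Http = new HttpClient();
    static readonly Regex LinkPattern =
        new Regex("href\\s*=\\s*\"(http[^\"]+)\"", RegexOptions.IgnoreCase);

    static async Task Main()
    {
        var frontier = new Queue<string>();       // URLs waiting to be fetched
        var visited = new HashSet<string>();      // URLs already fetched
        frontier.Enqueue("https://example.com/"); // seed URL (placeholder)

        while (frontier.Count > 0 && visited.Count < 50)
        {
            string url = frontier.Dequeue();
            if (!visited.Add(url)) continue;      // skip URLs we have already seen

            string html;
            try { html = await Http.GetStringAsync(url); }
            catch (HttpRequestException) { continue; }  // ignore pages we cannot fetch

            Console.WriteLine($"Fetched {url} ({html.Length} chars)");

            // Extract absolute hyperlinks and add unseen ones to the frontier.
            foreach (Match m in LinkPattern.Matches(html))
            {
                string link = m.Groups[1].Value;
                if (!visited.Contains(link)) frontier.Enqueue(link);
            }
        }
    }
}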
List of published crawler architectures for general-purpose crawlers:
  1. Yahoo! Slurp
  2. Bingbot
  3. Googlebot
  4. PolyBot
  5. RBSE
  6. WebCrawler
  7. World Wide Web Worm
  8. WebFountain
  9. WebRACE

Tuesday, August 6, 2013

Agile Software Development using Scrum

Many software development organizations are striving to become more agile, because successful agile teams are producing higher-quality software that better meets user needs more quickly and at a lower cost than are traditional teams.
Below attributes makes transition to Scrum more difficult than other changes:
  • Successful change is not entirely top-down or bottom-up.
  • The end state is unpredictable.
  • Scrum is pervasive.
  • Scrum is dramatically different.
  • Change is coming more quickly than ever before.
  • Best practices are dangerous.
Despite all the reasons why transitioning to Scrum can be particularly difficult, it is worth the effort because it reduces time-to-market thanks to the higher productivity of agile teams. The following reasons show why transitioning to an agile process like Scrum is worthwhile:
  • Higher productivity and lower costs
  • Improved employee engagement and job satisfaction
  • Faster time to market
  • Higher quality
  • Improved stakeholder satisfaction
  • What we've been doing no longer works
The five common activities necessary for a successful and lasting Scrum adoption:
  • Awareness that the current process is not delivering acceptable results
  • Desire to adopt Scrum as a way to address current problems
  • Ability to succeed with Scrum
  • Promotion of Scrum through sharing experiences so that we remember and others can see our successes
  • Transfer of the implications of using Scrum throughout the company
Conveniently, these five activities - Awareness, Desire, Ability, Promotion, and Transfer - can be remembered by the acronym ADAPT. These activities are also summarized in below figure.

Wednesday, July 17, 2013

Agile : Planning Poker

Planning poker, also called Scrum poker, is a consensus-based technique for estimating, mostly used to estimate effort or relative size of user stories in software development. Planning Poker is a technique to determine user story size and to build consensus with the development team members. Planning poker is a popular and straightforward approach to estimating story size.
The method was first defined and named by James Grenning in 2002 and later popularized by Mike Cohn in the book Agile Estimating and Planning, whose company trademarked the term.
To start a planning poker session, the product owner or customer reads an agile user story or describes a feature to the estimators. Each estimator holds a deck of Planning Poker cards with values like 0, 1, 2, 3, 5, 8, 13, 20, 40 and 100, which is the sequence we recommend. The values represent the number of story points, ideal days, or other units in which the team estimates. The estimators discuss the feature, asking questions of the product owner as needed. When the feature has been fully discussed, each estimator privately selects one card to represent his or her estimate. All cards are then revealed at the same time. Only the development team plays estimation poker. The team lead and product owner don’t get a deck and don’t provide estimates. However, the team lead can act as a facilitator, and the product owner reads the user stories and provides details on user stories as needed.
"Planning Poker is a good way to come to a consensus without spending too much time on any one topic. It allows, or forces, people to voice their opinions, thoughts and concerns."
- Lori Schubring, Manager, Bemis Manufacturing Company

Planning poker benefits
Planning poker is a tool for estimating software development projects. It is a technique that minimizes anchoring by asking each team member to play their estimate card such that it cannot be seen by the other players. After each player has selected a card, all cards are exposed at once. Anchoring can occur if a team estimate is based on discussion alone. A team normally has a mix of conservative and optimistic estimators and there may be people who have agendas; developers are likely to want as much time as they can to do the job and the product owner or customer is likely to want it as quickly as possible. Planning poker exposes the potentially influential team member as being isolated in his or her opinion among the group. It then demands that she or he argue the case against the prevailing opinion. If a group is able to express its unity in this manner they are more likely to have faith in their original estimates. If the influential person has a good case to argue everyone will see sense and follow, but at least the rest of the team won't have been anchored; instead they will have listened to reason.
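As a toy illustration of the private-selection-then-simultaneous-reveal protocol described above (the estimator names and the hard-coded votes are made up for this sketch), a single round could look like this:

using System;
using System.Collections.Generic;
using System.Linq;

class PlanningPokerRound
{
    // The commonly recommended deck values mentioned above.
    static readonly int[] Deck = { 0, 1, 2, 3, 5, 8, 13, 20, 40, 100 };

    static void Main()
    {
        // Each estimator picks a card privately; hard-coded here for illustration.
        var votes = new Dictionary<string, int>
        {
            ["Dev A"] = 5,
            ["Dev B"] = 8,
            ["Dev C"] = 5
        };

        // Only legal card values may be played.
        if (votes.Values.Any(v => !Deck.Contains(v)))
            throw new InvalidOperationException("Every vote must be a card from the deck.");

        // All cards are revealed at the same time, so no estimate anchors the others.
        foreach (var vote in votes)
            Console.WriteLine($"{vote.Key} shows {vote.Value}");

        // Consensus means everyone shows the same card; otherwise the outliers
        // explain their reasoning and the team votes again.
        bool consensus = votes.Values.Distinct().Count() == 1;
        Console.WriteLine(consensus
            ? $"Consensus: {votes.Values.First()} story points"
            : "No consensus - discuss the highest and lowest estimates, then re-vote.");
    }
}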

Friday, July 12, 2013

lawnchair : simple JSON storage

Persistent local storage is one of the areas where native client applications have held an advantage over web applications. Cookies were invented early in the web’s history, and indeed they can be used for persistent local storage of small amounts of data. But they have three potentially deal-breaking downsides:
  • Cookies are included with every HTTP request, thereby slowing down your web application by needlessly transmitting the same data over and over
  • Cookies are included with every HTTP request, thereby sending data unencrypted over the internet (unless your entire web application is served over SSL)
  • Cookies are limited to about 4 KB of data — enough to slow down your application (see above), but not enough to be terribly useful
What’s really needed is:
  • a lot of storage space
  • on the client
  • that persists beyond a page refresh
  • and isn’t transmitted to the server
What is Web Storage?
With HTML5, web pages can store data locally within the user's browser. Web Storage is more secure and faster. The data is not included with every server request, but used ONLY when asked for. It is also possible to store large amounts of data, without affecting the website's performance. The data is stored in key/value pairs, and a web page can only access data stored by itself.
Lawnchair
A lawnchair is sorta like a couch except smaller and outside. Perfect for HTML5 web/mobile apps that need a lightweight, adaptive, simple and elegant persistence solution.
  • Collections. A lawnchair instance is really just an array of objects.
  • Adaptive persistence. The underlying store is abstracted behind a consistent interface.
  • Pluggable collection behavior. Sometimes we need collection helpers but not always.
Features
  • Super micro tiny storage without the nasty SQL: pure and delicious JSON.
  • Default build weighs in at 3.4K minified; 1.5 gzip'd!
  • Adapters for any client side store.
  • Designed with mobile in mind.
  • Clean and simple API.
  • Key/value store ...key is optional.
  • Terse syntax for searching/finding.
  • Battle tested in app stores and on the open mobile web.
  • Framework agnostic. (If not a framework atheist!)
By default, Lawnchair will persist data using DOM Storage. If other adapters are available and DOM Storage isn't supported by the currently executing JavaScript runtime, Lawnchair will attempt each successive adapter until it finds one that works.

Sunday, June 30, 2013

Reporting Services Script File

The Reporting Services rs utility (RS.exe) processes script that you provide in an input file. Developers and report server administrators can perform operations on a report server through the use of this utility. Using it, you can programmatically administer a report server with Visual Basic .NET scripts.
Reporting Services scripts can be used to run any of the Reporting Services Web service operations. Script files take a certain format and are written in Visual Basic .NET. Scripting can be used to copy security to multiple reports on a server, to add and delete items, to copy report server items from one server to another and more.
RS.exe is located at \Program Files\Microsoft SQL Server\110\Tools\Binn folder.  
To run the tool, you must have permission to connect to the report server instance you are running the script against. You can run scripts to make changes to the local computer or a remote computer.
Reporting Services Script File
A Reporting Services script is a Visual Basic .NET code file, written against a proxy that is built on Web Service Description Language (WSDL), which defines the Reporting Services SOAP API. A script file is stored as a Unicode or UTF-8 text file with the extension .rss.
The script file acts as a Visual Basic module and can contain user defined procedures and module-level variables. For the script file to run successfully, it must contain the Main procedure. The Main procedure is the starting point for your script file, and it is the first procedure that is accessed when your script file runs. Main is where you can add your Web service operations and run your user defined subprocedures. The minimum structure you need to execute a report server script file is the Main procedure.
Sample script.rss file:
Public Sub Main()
    Dim items() As CatalogItem
    items = rs.ListChildren("/", True)

    Dim item As CatalogItem
    For Each item In items
        Console.WriteLine(item.Name)
    Next item
End Sub 
To run Script.rss in the script environment specifying a user name and password for authenticating the Web service calls: 
rs -i Script.rss -s http://servername/reportserver -u myusername -p mypassword
The Script.rss file above lists all children of the root (/) folder.

Tuesday, June 25, 2013

Microformats

Tagline: “humans first, machines second”
Microformats are a collection of vocabularies for extending HTML with additional machine-readable semantics. Designed for humans first and machines second, microformats are a set of simple, open data formats built upon existing and widely adopted standards. A microformat is a web-based approach to semantic markup which seeks to re-use existing HTML/XHTML tags to convey metadata and other attributes in web pages and other contexts that support (X)HTML, such as RSS. This approach allows software to process information intended for end-users (such as contact information, geographic coordinates, calendar events, and similar information) automatically.
Being machine readable means a robot or script that understands the microformat vocabulary being used can understand and process the marked-up data. Each microformat defines a specific type of data and is usually based on existing data formats — like vcard (address book data; RFC2426) and icalendar (calendar data; RFC 2445) — or common coding patterns. Microformats are extensions to HTML for marking up people, organizations, events, locations, blog posts, products, reviews, resumes, recipes etc. Sites use microformats to publish a standard API that is consumed and used by search engines, browsers, and other tools.
Microformats are:
  • A way of thinking about data
  • Design principles for formats
  • Adapted to current behaviors and usage patterns (“Pave the cow paths.”)
  • Highly correlated with semantic XHTML, AKA the real world semantics, AKA lowercase semantic web, AKA lossless XHTML
  • A set of simple open data format standards that many are actively developing and implementing for more/better structured blogging and web microcontent publishing in general.
  • “An evolutionary revolution”
Microformats are not:
  • A new language
  • Infinitely extensible and open-ended
  • An attempt to get everyone to change their behavior and rewrite their tools
  • A whole new approach that throws away what already works today
  • A panacea for all taxonomies, ontologies, and other such abstractions
  • Defining the whole world, or even just boiling the ocean
Microformats principles:
  • Solve a specific problem
  • Start as simple as possible
  • Design for humans first, machines second
  • Reuse building blocks from widely adopted standards
  • Modularity / embeddability
  • Enable and encourage decentralized development, content, services
Example:
Contact information marked up using the 'hCard' microformat looks like this:
<ul class="vcard">
  <li class="fn">Joe Doe</li>
  <li class="org">The Example Company</li>
  <li class="tel">604-555-1234</li>
  <li><a class="url" href="http://example.com/">http://example.com/</a></li>
</ul>
Here, the formatted name (fn), organisation (org), telephone number (tel) and web address (url) have been identified using specific class names and the whole thing is wrapped in class="vcard".

Website: http://microformats.org/
Wiki: http://en.wikipedia.org/wiki/Microformat 

Monday, June 24, 2013

Hibernate Query Language (HQL)

Hibernate Query Language (HQL) is an object-oriented query language, similar to SQL, but instead of operating on tables and columns, HQL works with persistent objects and their properties. HQL queries are translated by Hibernate into conventional SQL queries, which in turn perform the action on the database. HQL is an SQL-inspired language that allows SQL-like queries to be written against persistent data objects. HQL is fully object-oriented and understands notions such as inheritance, polymorphism, and association.
Advantage of HQL:
  • database independent
  • supports polymorphic queries
Query Interface:
Query is an object-oriented representation of a Hibernate query. A Query object is obtained by calling the createQuery() method of the Session interface. Keywords such as SELECT, FROM, and WHERE are not case-sensitive, but the names of persistent classes and their properties are case-sensitive in HQL.
Examples of HQL:
  • FROM Clause
    You will use FROM clause if you want to load a complete persistent objects into memory. Following is the simple syntax of using FROM clause:
    String hql = "FROM Employee";
    Query query = session.createQuery(hql);
    List results = query.list();
  • SELECT Clause
    The SELECT clause provides more control over the result set than the from clause. If you want to obtain few properties of objects instead of the complete object, use the SELECT clause. Following is the simple syntax of using SELECT clause to get just first_name field of the Employee object: (It is notable here that Employee.firstName is a property of Employee object rather than a field of the EMPLOYEE table.)
    String hql = "SELECT E.firstName FROM Employee E";
    Query query = session.createQuery(hql);
    List results = query.list();
  • WHERE Clause
    If you want to narrow the specific objects that are returned from storage, you use the WHERE clause. Following is the simple syntax of using WHERE clause:
    String hql = "FROM Employee E WHERE E.id = 10";
    Query query = session.createQuery(hql);
    List results = query.list();