Tuesday, June 25, 2013

Microformats

Tagline: “humans first, machines second”
Microformats are a collection of vocabularies for extending HTML with additional machine-readable semantics. Designed for humans first and machines second, microformats are a set of simple, open data formats built upon existing and widely adopted standards. A microformat is a web-based approach to semantic markup which seeks to re-use existing HTML/XHTML tags to convey metadata and other attributes in web pages and other contexts that support (X)HTML, such as RSS. This approach allows software to process information intended for end-users (such as contact information, geographic coordinates, calendar events, and similar information) automatically.
Being machine readable means a robot or script that understands the microformat vocabulary being used can understand and process the marked-up data. Each microformat defines a specific type of data and is usually based on existing data formats, like vCard (address book data; RFC 2426) and iCalendar (calendar data; RFC 2445), or on common coding patterns. Microformats are extensions to HTML for marking up people, organizations, events, locations, blog posts, products, reviews, resumes, recipes, etc. Sites use microformats to publish a standard API that is consumed and used by search engines, browsers, and other tools.
Microformats are:
  • A way of thinking about data
  • Design principles for formats
  • Adapted to current behaviors and usage patterns (“Pave the cow paths.”)
  • Highly correlated with semantic XHTML, AKA the real world semantics, AKA lowercase semantic web, AKA lossless XHTML
  • A set of simple open data format standards that many are actively developing and implementing for more/better structured blogging and web microcontent publishing in general.
  • “An evolutionary revolution”
Microformats are not:
  • A new language
  • Infinitely extensible and open-ended
  • An attempt to get everyone to change their behavior and rewrite their tools
  • A whole new approach that throws away what already works today
  • A panacea for all taxonomies, ontologies, and other such abstractions
  • Defining the whole world, or even just boiling the ocean
Microformats principles:
  • Solve a specific problem
  • Start as simple as possible
  • Design for humans first, machines second
  • Reuse building blocks from widely adopted standards
  • Modularity / embeddability
  • Enable and encourage decentralized development, content, services
Example:
Contact information marked up with the 'hCard' microformat looks like this:
<ul class="vcard">
  <li class="fn">Joe Doe</li>
  <li class="org">The Example Company</li>
  <li class="tel">604-555-1234</li>
  <li><a class="url" href="http://example.com/">http://example.com/</a></li>
</ul>
Here, the formatted name (fn), organisation (org), telephone number (tel) and web address (url) have been identified using specific class names and the whole thing is wrapped in class="vcard".
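To see the "machines second" part in action, here is a minimal sketch of how a script could extract the contact details from the hCard above. It uses the jsoup HTML parser for Java, which is my choice for the illustration (microformats do not require any particular parser); the class names simply match the markup shown above.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class HCardReader {
    public static void main(String[] args) {
        // The hCard markup from the example above.
        String html = "<ul class=\"vcard\">"
                + "<li class=\"fn\">Joe Doe</li>"
                + "<li class=\"org\">The Example Company</li>"
                + "<li class=\"tel\">604-555-1234</li>"
                + "<li><a class=\"url\" href=\"http://example.com/\">http://example.com/</a></li>"
                + "</ul>";

        Document doc = Jsoup.parse(html);
        // Each element with class "vcard" is one contact.
        for (Element card : doc.select(".vcard")) {
            String name = card.select(".fn").text();        // formatted name
            String org  = card.select(".org").text();       // organisation
            String tel  = card.select(".tel").text();       // telephone number
            String url  = card.select(".url").attr("href"); // web address
            System.out.println(name + " | " + org + " | " + tel + " | " + url);
        }
    }
}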

Website: http://microformats.org/
Wikipedia: http://en.wikipedia.org/wiki/Microformat

Monday, June 24, 2013

Hibernate Query Language (HQL)

Hibernate Query Language (HQL) is an object-oriented query language, similar to SQL, but instead of operating on tables and columns, HQL works with persistent objects and their properties. HQL queries are translated by Hibernate into conventional SQL queries, which in turn perform the actions on the database. HQL is an SQL-inspired language that allows SQL-like queries to be written against persistent data objects. HQL is fully object-oriented and understands notions like inheritance, polymorphism and association.
Advantages of HQL:
  • Database independent
  • Supports polymorphic queries
Query Interface:
The Query interface is an object-oriented representation of a Hibernate query. A Query object is obtained by calling the createQuery() method of the Session interface. Keywords like SELECT, FROM and WHERE are not case sensitive, but the names of Java classes and their properties are case sensitive in HQL. A complete usage sketch follows the clause examples below.
Examples of HQL:
  • FROM Clause
    Use the FROM clause when you want to load complete persistent objects into memory. Following is the simple syntax of using the FROM clause:
    String hql = "FROM Employee";
    Query query = session.createQuery(hql);
    List results = query.list();
  • SELECT Clause
    The SELECT clause provides more control over the result set than the FROM clause. If you want to obtain a few properties of objects instead of the complete object, use the SELECT clause. Following is the simple syntax of using the SELECT clause to get just the first_name field of the Employee object. (Note that Employee.firstName is a property of the Employee object rather than a field of the EMPLOYEE table.)
    String hql = "SELECT E.firstName FROM Employee E";
    Query query = session.createQuery(hql);
    List results = query.list();
  • WHERE Clause
    Use the WHERE clause when you want to narrow down the specific objects that are returned from storage. Following is the simple syntax of using the WHERE clause:
    String hql = "FROM Employee E WHERE E.id = 10";
    Query query = session.createQuery(hql);
    List results = query.list(); 
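Putting the pieces together, here is a minimal end-to-end sketch. It assumes a configured hibernate.cfg.xml on the classpath and a mapped Employee entity whose id is a long (the entity, property and parameter names are illustrative, not taken from any particular project), and it uses the classic Hibernate 3/4-style Query API shown in the examples above.

import java.util.List;
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class HqlExample {
    public static void main(String[] args) {
        // Build the SessionFactory from hibernate.cfg.xml.
        SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            // Bind the id as a named parameter instead of concatenating it into the HQL string.
            Query query = session.createQuery("FROM Employee E WHERE E.id = :employeeId");
            query.setParameter("employeeId", 10L);
            List results = query.list();
            System.out.println("Matching employees: " + results.size());
            tx.commit();
        } finally {
            session.close();
            sessionFactory.close();
        }
    }
}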

What is an ORM?

Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language. There are both free and commercial packages available that perform object-relational mapping, although some programmers opt to create their own ORM tools.
Overview
Many popular database products such as structured query language database management systems (SQL DBMS) can only store and manipulate scalar values such as integers and strings organized within tables. The programmer must either convert the object values into groups of simpler values for storage in the database (and convert them back upon retrieval), or only use simple scalar values within the program. Object-relational mapping is used to implement the first approach.
The heart of the problem is translating the logical representation of the objects into an atomized form that can be stored in the database, while preserving the properties of the objects and their relationships so that they can be reloaded as objects when needed. If this storage and retrieval functionality is implemented, the objects are said to be persistent.
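As a tiny illustration of that problem, consider a hypothetical Employee object that holds a reference to an Address object. The reference is not a scalar value, so the mapping layer has to flatten the pair into something like an EMPLOYEE row plus an ADDRESS row linked by a foreign key, and rebuild the object graph from those rows on retrieval (the class and column names here are made up for the example):

// Hypothetical domain classes; an ORM would map them to e.g. EMPLOYEE and ADDRESS tables.
class Address {
    String street;
    String city;
}

class Employee {
    long id;
    String firstName;
    Address address;  // an object reference, not a scalar value: this is what must be translated
}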
NHibernate
NHibernate is an object-relational mapping (ORM) solution for the Microsoft .NET platform: it provides a framework for mapping an object-oriented domain model to a traditional relational database. Its purpose is to relieve the developer of a significant portion of relational data persistence-related programming tasks. NHibernate is free, open-source software distributed under the GNU Lesser General Public License, and is a port of Hibernate to .NET.
Comparison with traditional data access techniques
Compared to traditional techniques of exchange between an object-oriented language and a relational database, ORM often reduces the amount of code that needs to be written.
Disadvantages of O/R mapping tools generally stem from the high level of abstraction obscuring what is actually happening under the hood. Also, heavy reliance on ORM software has been cited as a major factor in producing poorly designed databases.

Friday, May 24, 2013

MongoDB : Agile and Scalable

MongoDB (from "humongous") is an open-source document database and the leading database in the NoSQL family. Instead of storing data in tables as a "classical" relational database does, MongoDB stores structured data as JSON-like documents with dynamic schemas (MongoDB calls the format BSON), making the integration of data in certain types of applications easier and faster. MongoDB is written in C++.
Features:
Document-Oriented Storage
JSON-style documents with dynamic schemas offer simplicity and power.
Full Index Support
Index on any attribute. Any field in a MongoDB document can be indexed (indices in MongoDB are conceptually similar to those in RDBMSes). Secondary indices are also available.
Replication & High Availability
Mirror across LANs and WANs for scale and peace of mind. MongoDB supports master-slave replication. A master can perform reads and writes. A slave copies data from the master and can only be used for reads or backup (not writes). The slaves have the ability to select a new master if the current one goes down.
Auto-Sharding
Scale horizontally without compromising functionality.
Querying, Ad hoc queries
Rich, document-based queries. MongoDB supports search by field, range queries and regular expression searches. Queries can return specific fields of documents and can also include user-defined JavaScript functions.
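As a sketch of what such a query looks like from application code, here is a field match combined with a regular expression search and a projection, using the MongoDB Java driver (the driver choice and the database, collection and field names are my own for the illustration):

import static com.mongodb.client.model.Filters.and;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Filters.regex;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class QueryExample {
    public static void main(String[] args) {
        // Assumes a local mongod listening on the default port.
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        MongoCollection<Document> people = client.getDatabase("test").getCollection("people");

        // Exact field match plus a regular expression search, returning only two fields.
        // Range queries work the same way, e.g. with Filters.gt / Filters.lt.
        for (Document doc : people.find(and(eq("City", "VERSAILLES"), regex("Last Name", "^PEL")))
                                  .projection(new Document("Last Name", 1).append("First Name", 1))) {
            System.out.println(doc.toJson());
        }

        client.close();
    }
}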
Fast In-Place Updates
Atomic modifiers for contention-free performance.
Aggregation, Map/Reduce
Flexible aggregation and data processing. MapReduce can be used for batch processing of data and aggregation operations. The aggregation framework enables users to obtain the kind of results for which the SQL GROUP BY clause is used.
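Here is a small aggregation framework sketch with the Java driver, roughly the equivalent of SELECT City, COUNT(*) ... GROUP BY City in SQL (the database, collection and field names are again illustrative):

import java.util.Arrays;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Accumulators;
import com.mongodb.client.model.Aggregates;
import org.bson.Document;

public class GroupByCity {
    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        MongoCollection<Document> people = client.getDatabase("test").getCollection("people");

        // Group documents by the City field and count how many fall into each group.
        for (Document doc : people.aggregate(Arrays.asList(
                Aggregates.group("$City", Accumulators.sum("count", 1))))) {
            System.out.println(doc.toJson());
        }

        client.close();
    }
}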
GridFS
Store files of any size without complicating your stack. MongoDB can be used as a file system, taking advantage of load balancing and data replication over multiple machines for storing files. This function, called GridFS, is included with the MongoDB drivers and is readily available in the supported development languages. MongoDB exposes functions for file manipulation and content handling to developers. GridFS is used, for example, in plugins for NGINX and lighttpd. In a multi-machine MongoDB system, files can be distributed and copied multiple times between machines transparently, thus effectively creating a load-balanced and fault-tolerant system.
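A minimal GridFS sketch with the Java driver, storing one local file (the file, database and bucket details are made up for the example):

import java.io.FileInputStream;
import java.io.InputStream;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.gridfs.GridFSBucket;
import com.mongodb.client.gridfs.GridFSBuckets;
import org.bson.types.ObjectId;

public class GridFsExample {
    public static void main(String[] args) throws Exception {
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        MongoDatabase db = client.getDatabase("files");

        GridFSBucket bucket = GridFSBuckets.create(db);
        try (InputStream in = new FileInputStream("report.pdf")) {
            // The file is split into chunks and stored in the fs.files / fs.chunks collections.
            ObjectId fileId = bucket.uploadFromStream("report.pdf", in);
            System.out.println("Stored file with id " + fileId);
        }

        client.close();
    }
}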
Load balancing
MongoDB scales horizontally using sharding. The developer chooses a shard key, which determines how the data in a collection will be distributed. The data is split into ranges (based on the shard key) and distributed across multiple shards. (A shard is a master with one or more slaves.) MongoDB can run over multiple servers, balancing the load and/or duplicating data to keep the system up and running in case of hardware failure. Automatic configuration is easy to deploy and new machines can be added to a running database.
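Sharding is configured on the cluster rather than in application code, but as an illustration of choosing a shard key, here is roughly how it could be done through the Java driver's runCommand. This assumes a sharded cluster reachable through a mongos router and sharding already enabled for the database; the host, database, collection and key names are invented for the example.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class ShardCollectionExample {
    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://mongos-host:27017");

        // Choose "City" as the shard key for test.people; documents are then split into
        // ranges of City values and distributed across the shards.
        client.getDatabase("admin").runCommand(
                new Document("shardCollection", "test.people")
                        .append("key", new Document("City", 1)));

        client.close();
    }
}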
Server-side JavaScript execution
JavaScript can be used in queries and in aggregation functions (such as MapReduce), and such functions are sent directly to the database to be executed.
Capped collections
MongoDB supports fixed-size collections called capped collections. This type of collection maintains insertion order and, once the specified size has been reached, behaves like a circular queue.
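A capped collection can be created from the Java driver like this (the collection name and the 1 MB size are arbitrary choices for the sketch):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.CreateCollectionOptions;

public class CappedCollectionExample {
    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        MongoDatabase db = client.getDatabase("test");

        // A 1 MB capped collection: inserts keep their order and, once the size limit
        // is reached, the oldest documents are overwritten first.
        db.createCollection("events",
                new CreateCollectionOptions().capped(true).sizeInBytes(1024 * 1024));

        client.close();
    }
}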
Data manipulation:
MongoDB stores structured data as JSON-like documents with dynamic schemas (serialized as BSON) rather than requiring predefined schemas. In MongoDB, an element of data is called a document, and documents are stored in collections. One collection may hold any number of documents. This arrangement of data differs markedly from that of traditional relational databases. A typical MongoDB collection would look like this (a short insertion sketch follows the sample documents):
{
    "_id": ObjectId("4efa8d2b7d284dad101e4bc9"),
    "Last Name": "DUMONT",
    "First Name": "Jean",
    "Date of Birth": "01-22-1963"
},
{
    "_id": ObjectId("4efa8d2b7d284dad101e4bc7"),
    "Last Name": "PELLERIN",
    "First Name": "Franck",
    "Date of Birth": "09-19-1983",
    "Address": "1 chemin des Loges",
    "City": "VERSAILLES"
}
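For illustration, here is how the two sample documents above could be inserted with the Java driver. No schema has to be declared beforehand, and the two documents with different fields land in the same collection (the database and collection names are my own for the sketch; the _id values are generated automatically):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class InsertExample {
    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        MongoCollection<Document> people = client.getDatabase("test").getCollection("people");

        // Two documents with different sets of fields stored in one collection.
        people.insertOne(new Document("Last Name", "DUMONT")
                .append("First Name", "Jean")
                .append("Date of Birth", "01-22-1963"));
        people.insertOne(new Document("Last Name", "PELLERIN")
                .append("First Name", "Franck")
                .append("Date of Birth", "09-19-1983")
                .append("Address", "1 chemin des Loges")
                .append("City", "VERSAILLES"));

        client.close();
    }
}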

Wednesday, May 15, 2013

Axure : Make Interactive HTML Prototypes

Axure RP is the standard in interactive wireframe software and gives you the power to quickly and easily deliver much more than typical mockup tools. Generate an interactive HTML website wireframe or UI mockup without coding.
Axure RP gives you the wireframing, prototyping and specification tools needed to make informed design choices, persuade any skeptics, get your design built to spec... and maybe win a few fans along the way. AxShare is an easy way to share Axure RP prototypes with your team and with clients. It lets users create a free account to quickly share prototypes and hold discussions right in the prototype.
Features:
Sketch & Design
Quickly create wire-frames that evolve from sketch to ready-for-dev designs.
Interact
Experience and present prototypes that go beyond basic links.
Share
Share Axure RP prototypes with your team and with clients.
Document
Annotate your wireframes and automatically generate Word specifications.
Collaborate
Design together, keeping a history of changes, and keep track of feedback.

You can explore more on the website. Link: http://www.axure.com/

Wednesday, April 10, 2013

User stories - WHO wants them, WHAT are they and WHY use them?

User stories are a popular technique for capturing high-level requirements. User stories provide the rationale and basic description of a new feature request. The following format is the most recognizable user story template:
As a <WHO>
I want <WHAT>
So that <WHY>
Agile teams adopting the user story technique often struggle with questions such as: WHO are user stories produced for? WHAT do good user stories look like? WHY maintain user stories instead of more detailed requirement specifications? To answer these questions – I will recycle the who/what/why template of user stories.
WHO WANTS USER STORIES?
  • Project stakeholders: these individuals want an easy method to pin ideas to the product backlog. With user stories - ideas don’t need to be defined in detail – the user story will provide a “placeholder for a conversation”.
  • The end user: teams that are able to elicit requirements directly from end users can use this technique to facilitate the discussion and documentation of feature requests. What does the user want to do? Why?
  • Project Manager/Product Owner: when grooming the product backlog, user stories are much easier to prioritise than detailed requirement specifications. User stories provide a non-technical, concise summary for the product team to decide the priority of a feature.
WHAT ARE USER STORIES?
  • Definition: User stories describe the desired interaction/dialogue between a user and the system. User stories provide the user’s rationale for a feature.
  • Typical format:
    1. AS A [actor/user role] – this can be referred to as the WHO section. Who wants this feature? The user could be a generic actor (e.g. AS A user of the website), or a specific user role (e.g. AS A frequent business traveler), or even another system (AS A BACS payment system). Actors can be identified by internal discussions within the project team – identifying user roles may require more sophisticated analysis (e.g. profiling activities by the marketing department, transaction analysis, industry segmentations etc).
    2. I WANT [feature/action] – this can be referred to as the WHAT section. What does the user want? The user will typically want the system to perform a new behaviour e.g. I WANT the ability to track an order, I WANT to pay for orders using an AMEX card, I WANT to cancel an order without any hassle.
    3. SO THAT [benefit] – this can be referred to as the WHY section. Why does a user want this functionality? This section provides the justification/benefit of the feature.
  • Characteristics of good user stories: the INVEST acronym is frequently used to describe attributes of a good user story:
    1. Independent
    2. Negotiable
    3. Valuable/Vertical
    4. Estimable
    5. Sized Appropriately/Small
    6. Testable
WHY USE USER STORIES?
  • Requirements as an emergent property: user stories provide the Business Analyst with a springboard for analysis. A single user story (e.g. AS A price sensitive user, I WANT to be able to cancel my order, SO THAT I do not get charged by the bank for exceeding their overdraft limit) can lead to multiple scenarios for the BA. What is the happy path of this user story? What are the edge cases (e.g. what if some of the items were reduced as part of a promotion)? What are the business rules (e.g. full refunds are only provided up to 3 days from when the transaction was processed)? Requirements should emerge from user stories (not vice versa) - all requirements should have a user justification.
  • Maintenance of the backlog: the detail of a feature is abstracted a level below user stories. In addition – user stories should have few/no dependencies (refer to the INVEST acronym) – this means that user stories are lightweight additions to the product backlog and are therefore easy to maintain.
  • Available for discussion: user stories should be understandable by business users/end users/developers/all team members. User stories facilitate cross-role discussions and encourage open communication between various project silos.
  • Trees and forests: Working at a detailed level can occasionally mean that some requirements are not identified. User stories provide a way to mitigate the probability that user journeys are missed by the team.
User stories provide the team with a method to capture and discuss high-level requirements. Good user stories follow the INVEST acronym and provide the user’s justification for a new feature. Within Scrum/Agile teams – user stories provide an abstraction of requirement details – this facilitates maintenance of the backlog and provides the team with a “placeholder for a conversation”.

Monday, April 8, 2013

Hybrid Memory Cube spec makes DRAM 15 times faster

The three largest memory makers, Micron, Samsung and Hynix, have announced the final specifications for three-dimensional DRAM, which is aimed at increasing performance for the networking and high-performance computing markets. The technology, called the Hybrid Memory Cube (HMC), stacks multiple volatile memory dies on top of a DRAM controller. The DRAM is connected to the controller by way of the relatively new through-silicon via (TSV) technology, a method of passing an electrical connection vertically through a silicon wafer.
The logic portion of the DRAM functionality is dropped into a logic chip that sits at the base of the 3D stack. That logic process allows the device to take advantage of higher-performance transistors ... not only to interact up through the DRAM on top of it, but to do so in a high-performance, efficient manner across a channel to a host processor. This logic layer serves both as the host interface connection and as the memory controller for the DRAM sitting on top of it. The DRAM is broken into 16 partitions, each one with two I/O channels back to the controller. Each Hybrid Memory Cube - there are two prototypes - has either 128 or 256 memory banks available to the host system.
The first Hybrid Memory Cube specification will deliver 2GB and 4GB of capacity, providing aggregate bi-directional bandwidth of up to 160GBps, compared with DDR3's 11GBps of aggregate bandwidth and DDR4's 18GBps to 20GBps.
The Hybrid Memory Cube technology solves some significant memory issues. Today's DRAM chips are burdened with having to drive circuit board traces or copper electrical connections, and the I/O pins of numerous other chips, to force data down the bus at gigahertz speeds, which consumes a lot of energy. The Hybrid Memory Cube reduces this burden: the DRAM drives only tiny TSVs, which present much lower loads over shorter distances. A logic chip at the bottom is the only one burdened with driving the circuit board traces and the processor's I/O pins.
The interface is 15 times as fast as standard DRAMs, while reducing power by 70 percent! The beauty of it is that it gets rid of the issues that were keeping DDR3 and DDR4 from going as fast as they could. For example, instead of having multiple DIMMs (anywhere from one to four) on a motherboard, you would need only one Hybrid Memory Cube, cutting down on the number of interfaces to the CPU.
The HMC has defined two physical interfaces back to a host system processor: a short reach and an ultra-short reach.
The short reach is similar to most motherboard technologies today, where the DRAM is within eight to 10 inches of the CPU. That technology is aimed mainly at network applications and has the goal of boosting throughput from as much as 15Gbps to 28Gbps per lane in a four-lane configuration.
The ultra-short reach interconnection definition is focused on a low-energy, close-proximity memory design supporting FPGAs, ASICs and ASSPs, for uses such as high-performance networking and test and measurement applications. It will have a one- to three-inch channel back to the CPU, with a throughput goal of 15Gbps per lane.