Friday, December 18, 2009

DirectX

Microsoft DirectX is a collection of application programming interfaces (APIs) for handling tasks related to multimedia, especially game programming and video, on Microsoft platforms. Originally, the names of these APIs all began with Direct, such as Direct3D, DirectDraw, DirectMusic, DirectPlay, DirectSound, and so forth. The name DirectX was coined as a shorthand term for all of these APIs (the X standing in for the particular API names) and soon became the name of the collection. 
Windows supports DirectX 8.0, which enhances the multimedia capabilities of your computer. DirectX provides access to the capabilities of your display and audio cards, which enables programs to provide realistic three-dimensional (3-D) graphics and immersive music and audio effects. DirectX is a set of low-level APIs that provides Windows programs with high-performance, hardware-accelerated multimedia support.
DirectX enables a program to determine the hardware capabilities of your computer and then set its parameters to match. This allows multimedia programs to run on any Windows-based computer with DirectX-compatible hardware and drivers, and ensures that they take full advantage of high-performance hardware.
DirectX contains a set of APIs that provide access to the advanced features of high-performance hardware, such as 3-D graphics acceleration chips and sound cards. These APIs control low-level functions, including two-dimensional (2-D) graphics acceleration; support for input devices such as joysticks, keyboards, and mice; and control of sound mixing and sound output.
DirectX is composed of multiple APIs:
  • Direct3D (D3D): for drawing 3D graphics.
  • DXGI: for enumerating adapters and monitors and managing swap chains for Direct3D 10 and up.
  • Direct2D: for 2D graphics.
  • DirectWrite: for fonts.
  • DirectCompute: for GPU Computing.
  • DirectSound3D (DS3D): for the playback of 3D sounds.
  • DirectX Media: comprising DirectAnimation for 2D/3D web animation, DirectShow for multimedia playback and streaming media, DirectX Transform for web interactivity, and Direct3D Retained Mode for higher level 3D graphics. DirectShow contains DirectX plugins for audio signal processing and DirectX Video Acceleration for accelerated video playback.
  • DirectX Diagnostics (DxDiag): a tool for diagnosing and generating reports on components related to DirectX, such as audio, video, and input drivers.
  • DirectX Media Objects: support for streaming objects such as encoders, decoders, and effects.
  • DirectSetup: for the installation of DirectX components, and the detection of the current DirectX version.
  • XACT3: higher-level audio API.
  • XAudio2: low-level audio API.
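To give a feel for how a program actually reaches these APIs, here is a minimal C++ sketch that creates a Direct3D 11 device with D3D11CreateDevice and asks the runtime which feature level the hardware supports. It is only an illustration: the creation flags and feature-level choices are assumptions, and real code would add proper error handling and a DXGI swap chain for presenting frames.

// Minimal sketch: creating a Direct3D 11 device and immediate context.
// Assumes the Windows SDK is installed; link with d3d11.lib.
#include <d3d11.h>
#include <cstdio>

int main()
{
    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    level;

    // Ask for the default hardware adapter and let the runtime report
    // the highest feature level the display hardware and driver support.
    HRESULT hr = D3D11CreateDevice(
        nullptr,                   // default adapter (enumerated through DXGI)
        D3D_DRIVER_TYPE_HARDWARE,  // use the GPU rather than a software rasterizer
        nullptr,                   // no software rasterizer module
        0,                         // no creation flags
        nullptr, 0,                // accept any available feature level
        D3D11_SDK_VERSION,
        &device, &level, &context);

    if (FAILED(hr)) {
        std::printf("Direct3D device creation failed (hr = 0x%08lx)\n",
                    static_cast<unsigned long>(hr));
        return 1;
    }

    std::printf("Created a Direct3D device at feature level 0x%04x\n",
                static_cast<unsigned>(level));

    context->Release();
    device->Release();
    return 0;
}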

Wednesday, December 2, 2009

Ethical Hacker

An ethical hacker is a computer and networking expert who systematically attempts to penetrate a computer system or network on behalf of its owners for the purpose of finding security vulnerabilities that a malicious hacker could potentially exploit. Ethical hackers use the same methods and techniques to test and bypass a system's defenses as their less-principled counterparts, but rather than taking advantage of any vulnerabilities found, they document them and provide actionable advice on how to fix them so the organization can improve its overall security.
"Ethical hackers are becoming a mainstay of the effort to make corporate networks more secure."
For hacking to be deemed ethical, the hacker must obey the following rules:
  • You have expressed (often written) permission to probe the network and attempt to identify potential security risks.
  • You respect the individual's or company's privacy.
  • You close out your work, not leaving anything open for you or someone else to exploit at a later time.
  • You let the software developer or hardware manufacturer know of any security vulnerabilities you locate in their software or hardware, if not already known by the company.
Difference between Security and Protection
  • Security and protection are extremely close concepts, though not the same.
  • Security measures are adopted to increase the level of protection.
  • The feeling of protection arises when one has enough security measures.
  • Security is a type of protection against external threats.
When it comes to cybersecurity, hacking comes in many colors: white, grey, black, and shades in between. White hat hackers use their skills for good. They practice ethical hacking, which involves testing whether an organization's network is vulnerable to outside attacks. Ethical hacking is key to strengthening network security, and it's one of the most desired skills for any IT security professional.

Wednesday, November 11, 2009

TSV File Format

TSV (tab-separated values) is a file extension for a tab-delimited file used with spreadsheet software. TSV files hold raw data and can be imported into and exported from spreadsheet software. They are essentially text files, so the raw data can be viewed in any text editor, and they are often used to move raw data between spreadsheets. A TSV file is a simple text format for storing data in a tabular structure (e.g. database or spreadsheet data). Each record in the table is one line of the text file, and each field value of a record is separated from the next by a tab character; it is a form of the more general delimiter-separated values format.
File extension: TSV
File type: Tab-Separated Values
MIME type: text/tab-separated-values
TSV is an alternative to the common comma-separated values (CSV) format, which often causes difficulties because of the need to escape commas: literal commas are very common in text data, but literal tab stops are infrequent in running text. The IANA registration for TSV achieves simplicity by disallowing tab characters within fields.
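As a rough sketch of how simple the format is to consume, the following C++ snippet reads a TSV file line by line and splits each record on tab characters (the file name data.tsv is just an example):

// Minimal sketch: reading a tab-separated values file.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    std::ifstream in("data.tsv");   // example file name
    std::string line;

    while (std::getline(in, line)) {                // one record per line
        std::vector<std::string> fields;
        std::istringstream record(line);
        std::string field;
        while (std::getline(record, field, '\t'))   // fields separated by tabs
            fields.push_back(field);

        for (std::size_t i = 0; i < fields.size(); ++i)
            std::cout << (i ? " | " : "") << fields[i];
        std::cout << '\n';
    }
    return 0;
}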

Tuesday, October 6, 2009

Intel's Mobile Core i7s: 10 Things You Should Know

  1. Hyper-Threading Arrives: Hyper-Threading was first introduced in Intel's Pentium 4 desktop processors, but never made it into a mobile part until now. This latest batch of Core i7s has four physical cores, with two threads per core, so the operating system sees eight logical cores.
  2. L3 Cache: A huge L3 cache is a first for mobile processors, and it'll play a significant role in communicating with the built-in memory controller, as well as speed up processes. The new Core i7s will have a shared L3 cache that ranges from 6MB to 8MB.
  3. Goodbye NorthBridge: Getting rid of a physical chip that housed the memory and graphics controllers means a speedier system, smaller motherboard form factors, and sleeker designs in the future.
  4. DDR3 Memory Only: The mobile Core i7s mark the end of DDR2 memory for Intel mobile chips. Their dual-channel memory controller can support DDR3 memory speeds of up to 1333MHz.
  5. 16GB of Memory: PC makers can add up to four memory slots on the latest Core i7 motherboards, which means these systems can support up to 16GB of DDR3 memory.
  6. Turbo Boost: It's like an energy drink for lightly-threaded processes. Inactive cores lend their power to active ones and require no customization from the user.
  7. High Performance First: This batch of Core i7s does not target commuters and long-battery-life seekers. It targets gamers and high-end users instead, and can deliver decent battery life provided the laptop comes with a large battery. Later on, the mainstream Core i7s and Core i5s (codenamed Arrandale) will focus on battery efficiency and thinner form factors.
  8. SouthBridge Gets A Makeover: The Intel PM55 Express chipset holds the key to future motherboard designs. It will support more internal and external peripherals than the previous Intel G45 chipset, and will be the platform for the mainstream Core i7s and Core i5s.
  9. Frugal Gamers, Rejoice: These systems are not only fast, but they're inexpensive, too. Upcoming laptops with Core i7 will easily come in under $2,000.
  10. SLI and CrossFire Support: For those seeking the ultimate gaming experience, Core i7 laptops can support dual graphics chipsets using Nvidia's SLI or ATI's CrossFire technology at the processor die, something the previous generation of Intel mobile CPUs couldn't do.

Wednesday, September 23, 2009

Regular Expression Engines

A regular expression is a string containing a combination of normal characters and special metacharacters or metasequences. The normal characters match themselves. Metacharacters and metasequences are characters or sequences of characters that represent ideas such as quantity, location, or type of characters.

Pattern matching consists of finding a section of text that is described (matched) by a regular expression. The underlying code that searches the text is the regular expression engine. You can predict the results of most matches by keeping two rules in mind:
1. The earliest (leftmost) match wins
Regular expressions are applied to the input starting at the first character and proceeding toward the last. As soon as the regular expression engine finds a match, it returns.
2. Standard quantifiers are greedy
Quantifiers specify how many times something can be repeated. The standard quantifiers attempt to match as many times as possible. They settle for less than the maximum only if this is necessary for the success of the match. The process of giving up characters and trying less-greedy matches is called backtracking.
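A small C++ sketch makes the greediness rule concrete. It uses std::regex, whose default ECMAScript grammar behaves like the traditional NFA described below; the input string is made up for illustration:

// Sketch: greedy vs. non-greedy quantifiers and backtracking.
#include <iostream>
#include <regex>
#include <string>

int main()
{
    const std::string text = "<a><b>";
    std::smatch m;

    // Greedy: ".*" first grabs everything up to the end of the string,
    // then backtracks just enough for the final ">" to match.
    std::regex greedy("<(.*)>");
    if (std::regex_search(text, m, greedy))
        std::cout << "greedy capture:     " << m[1] << '\n';   // prints a><b

    // Non-greedy: ".*?" takes as little as possible.
    std::regex lazy("<(.*?)>");
    if (std::regex_search(text, m, lazy))
        std::cout << "non-greedy capture: " << m[1] << '\n';   // prints a

    return 0;
}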

Regular expression engines have differences based on their type. There are two classes of engines: Deterministic Finite Automaton (DFA) and Nondeterministic Finite Automaton (NFA). DFAs are faster, but lack many of the features of an NFA, such as capturing, lookaround, and nongreedy quantifiers. In the NFA world, there are two types: traditional and POSIX.

DFA engines
DFAs compare each character of the input string to the regular expression, keeping track of all matches in progress. Since each character is examined at most once, the DFA engine is the fastest. One additional rule to remember with DFAs is that the alternation metasequence is greedy. When more than one option in an alternation (foo|foobar) matches, the longest one is selected. So, rule No. 1 can be amended to read "the longest leftmost match wins".

Traditional NFA engines
Traditional NFA engines compare each element of the regex to the input string, keeping track of positions where it chose between two options in the regex. If an option fails, the engine backtracks to the most recently saved position. For standard quantifiers, the engine chooses the greedy option of matching more text; however, if that option leads to the failure of the match, the engine returns to a saved position and tries a less greedy path. The traditional NFA engine uses ordered alternation, where each option in the alternation is tried sequentially. A longer match may be ignored if an earlier option leads to a successful match. So, here rule #1 can be amended to read "the first leftmost match after greedy quantifiers have had their fill wins".

POSIX NFA engines
POSIX NFA Engines work similarly to Traditional NFAs with one exception: a POSIX engine always picks the longest of the leftmost matches. For example, the alternation cat|category would match the full word "category" whenever possible, even if the first alternative ("cat") matched and appeared earlier in the alternation.
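The difference is easy to see with a quick C++ sketch. std::regex with its default ECMAScript grammar follows the traditional NFA rule of ordered alternation, so it picks the first alternative that succeeds; a POSIX engine would be required to return the longest leftmost match instead:

// Sketch: ordered alternation in a traditional-NFA style engine.
#include <iostream>
#include <regex>
#include <string>

int main()
{
    const std::string text = "category";
    std::smatch m;

    // Traditional NFA (ECMAScript grammar): alternatives are tried in order,
    // so "cat" wins even though "category" would also match here.
    std::regex ordered("cat|category");
    if (std::regex_search(text, m, ordered))
        std::cout << "matched: " << m[0] << '\n';   // prints "cat"

    // A POSIX NFA would return "category", the longest of the leftmost matches.
    return 0;
}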

Friday, August 28, 2009

Am I a hacker? - A Nice One

I used to call myself a hacker all the time. That was before the term was usurped by a different breed of people. Maybe I should back up and explain.
If you read Steven Levy's book, Hackers, it paints a picture of people who love and understand the technology they use. These are people who can "hack" together a solution to a problem with a soldering iron and a few paper clips. People who modify operating systems because they don't like the way they work. People who follow the "hacker ethic" that information should be free.
If that's a hacker, that's me.

Thursday, August 13, 2009

A Good Programmer’s key indicators

#1 : Passion
#2 : Self-teaching and love of learning
#3 : Intelligence
#4 : Hidden experience
#5 : Variety of technologies
#6 : Formal qualifications

Positive indicators:
• Passionate about technology
• Programs as a hobby
• Will talk your ear off on a technical subject if encouraged
• Significant (and often numerous) personal side-projects over the years
• Learns new technologies on his/her own
• Opinionated about which technologies are better for various usages
• Very uncomfortable about the idea of working with a technology he/she doesn’t believe to be “right”
• Clearly smart, can have great conversations on a variety of topics
• Started programming long before university/work
• Has some hidden “icebergs”, large personal projects under the CV radar
• Knowledge of a large variety of unrelated technologies (may not be on CV)

Thursday, July 16, 2009

Regular Expressions

Regular expressions are the key to powerful, flexible, and efficient text processing. Regular expressions themselves, with a general pattern notation almost like a mini programming language, allow you to describe and parse text. With additional support provided by the particular tool being used, regular expressions can add, remove, isolate, and generally fold, spindle, and mutilate all kinds of text and data. It might be as simple as a text editor's search command or as powerful as a full text processing language.
The Filename Analogy
You know that report.txt is a specific filename, and the pattern "*.txt" can be used to select multiple files. In filename patterns like this (called file globs), a few characters such as "*" have special meanings. The star means "match anything", and a question mark "?" means "match any one character". With "*.txt", we start with a match-anything "*" and end with the literal ".txt", so we end up with a pattern that means "select the files whose names start with anything and end with .txt".
The Language Analogy
Full regular expressions are composed of two types of characters. The special characters (like the * from the filename analogy) are called metacharacters, while everything else is called literal, or normal, text characters. What sets regular expressions apart from filename patterns is the scope of power their metacharacters provide. Filename patterns provide limited metacharacters for limited needs, but a regular expression "language" provides rich and expressive metacharacters for advanced uses. It might help to consider regular expressions as their own language, with literal text acting as the words and metacharacters as the grammar. The words are combined with grammar according to a set of rules to create an expression which communicates an idea. For example, the expression used to find lines beginning with "From:" or "Subject:" is written as ^(From|Subject):.
A very simple use of a regular expression in this syntax would be to locate the same word spelled two different ways in a text editor: the regular expression seriali[sz]e matches both "serialise" and "serialize". Wildcards could also achieve this, but they are more limited in what they can match, having fewer metacharacters and a simpler language base.
The usual context of wildcard characters is in globbing similar names in a list of files, whereas regexps are usually employed in applications that pattern-match text strings in general. For example, the regexp ^[ \t]+|[ \t]+$ matches excess whitespace at the beginning or end of a line. An advanced regexp used to match any numeral is ^[+-]?(\d+(\.\d+)?|\.\d+)([eE][+-]?\d+)?$.
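For anyone who wants to try these patterns out, here is a short C++ sketch using std::regex; the sample strings are invented, and the backslashes are doubled because the patterns are written as C++ string literals:

// Sketch: exercising the patterns mentioned above with std::regex.
#include <iostream>
#include <regex>
#include <string>

int main()
{
    std::cout << std::boolalpha;

    // seriali[sz]e matches both spellings.
    std::regex spelling("seriali[sz]e");
    std::cout << std::regex_search("serialise", spelling) << ' '
              << std::regex_search("serialize", spelling) << '\n';    // true true

    // ^[ \t]+|[ \t]+$ strips leading and trailing whitespace.
    std::regex trim("^[ \\t]+|[ \\t]+$");
    std::string padded = "   some text\t ";
    std::cout << '[' << std::regex_replace(padded, trim, "") << "]\n"; // [some text]

    // ^[+-]?(\d+(\.\d+)?|\.\d+)([eE][+-]?\d+)?$ matches a numeral.
    std::regex numeral("^[+-]?(\\d+(\\.\\d+)?|\\.\\d+)([eE][+-]?\\d+)?$");
    std::cout << std::regex_match("-3.14e10", numeral) << '\n';        // true

    return 0;
}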

Tuesday, June 9, 2009

Structuring Projects and Solutions in Team Foundation Source Control

Many of the default folder conventions used by Visual Studio when creating new solutions and projects are not optimized for team development and for use with TFS source control. Rather than accepting the defaults when you create new Visual Studio projects and solutions, you should give careful consideration to your local and server-based folder structure.
Objectives
  • Structure projects for effective team development in Microsoft Visual Studio Team Foundation Server (TFS) source control.
  • Know when to use multiple solutions and when to use a single solution.
  • Identify appropriate structures for small, medium-size and very large teams.
  • Keep server-side and client-side folder structures synchronized.
  • Choose a strategy for unit test structure.
  • Create a folder structure that supports various branching scenarios.
  • Identify which files are added to source control and how local files map to source control folders.
Strategies for Solution and Project Structure
The three most common strategies used to structure solution and project files are:
  • Single solution. If you work on a small system, create a single solution and place all of your projects within it.
  • Partitioned solution. If you work on a large system, use multiple solutions to group related projects together. Create solutions to logically group subsets of projects that a developer would be most likely to modify as a set, and then create one master solution to contain all of your projects. This approach reduces the amount of data that needs to be pulled from source control when you only need to work on specific projects.
  • Multiple solutions. If you are working on a very large system that requires dozens of projects or more, use multiple solutions to work on sub-systems but for dependency mapping and performance reasons do not create a master solution that contains all projects.
Keep the following considerations in mind when designing a project and solution structure:
  • Each project generates an assembly at build time. Start by determining what assemblies you want to create, and then use that to decide what projects you need and how to factor your codebase into them.
  • Start with the simplest single solution structure. Only add complexity to your structure when it is really necessary.
  • When designing a multi-solution structure:
    • Consider project dependencies. Try to group those projects that have dependencies on one another as part of the same solution. This enables you to use project references within your solution. By using project references instead of file references, you enable Visual Studio to keep build configurations (debug/release) synchronized, and to track versioning to determine when projects need to be rebuilt. Try to minimize the number of cross-solution project references.
    • Consider source sharing. Place projects that share the same source in the same solution.
    • Consider team structure. Structure your solutions to make it easy for teams to work on a set of related projects together.
  • Keep a flat project structure so that it is easy for you to group projects into solutions without needing to make file system or source control folder structure changes.

Wednesday, May 6, 2009

Microsoft Office PerformancePoint Server

Microsoft Office PerformancePoint Server is a business intelligence software product released in 2007 by Microsoft. Enterprises gather data related to line-of-business (LOB) operations. Data as-is is not useful. You need to transform it into information so you can understand your business and make decisions. PerformancePoint Server 2007 is a product that can help you to build a performance management application.
Designing a Microsoft Office PerformancePoint Server 2007 topology should begin at the back end and move towards the front end: define the number of users, define the number of PerformancePoint Planning applications, determine the number of databases and cubes needed, and then build the topology plan.
Setting up PerformancePoint Server for an enterprise includes the following:
Planning Server
When determining your Planning Server deployment, research and install additional application dependencies. Any applications that Planning Server depends on should be installed first. For example, if you are using Microsoft Office SharePoint Server 2007 for the templates and reports repository, then Office SharePoint Server 2007 should be installed before you install PerformancePoint Server. The same is true for Microsoft SQL Server 2005 and SQL Server 2005 Analysis Services.
SQL Server
Installing and configuring Planning Server creates two system databases. Creating a Planning application creates another database (an application database) and an Analysis Services cube. If you create a staging database for the application, that is one more database. Therefore, consider deploying the system databases on one computer running SQL Server and the application databases on another. Also, when you are building multiple model sites, multiple Analysis Services computers should be used.
Monitoring Server
Before you install Monitoring Server and build a dashboard, you should first plan your development process. Begin by defining performance management goals. Decide if you want to build one dashboard or a set of dashboards. Define the scorecard and any related key performance indicators (KPIs) that you require to build those scorecards, and determine whether those scorecards will share a set of common KPIs. Design any reports that contribute to the information provided in the scorecard or that might help in understanding the KPIs.
Next steps include:
•    Identify contributors and users. In particular, identify the contributors who will participate in dashboard development.
•    Identify information technology resources. Locate existing databases and documents that can provide business data for KPIs, and determine whether existing SQL Server or Microsoft SharePoint Products and Technologies installations can be used. Determine whether you require new databases in order to develop and deploy dashboards.

Thursday, April 30, 2009

Programming Language Origin

A list of programming languages is modest when compared with the catalogue of natural languages. Beginning in the 1940s with Plankalkül, more than 1,000 programming languages have been documented. Like natural languages, the variety and diversity of these languages is impressive: the succinctness of assembler, the verbosity of COBOL, and the efficiency of C.
The motivations that inspire the creation of languages are diverse: FORTRAN was created for scientific analysis, COBOL for building business applications, RPG for report generation, and so on. Some languages serve as refinements of earlier languages. CPL combined the best ingredients of several languages, including ALGOL, FORTRAN, and COBOL. C# is an independently developed, object-oriented language and a member of the C family of languages. It shares similar syntax and some concepts with other C-family languages; more important, however, C# has few if any vestiges of procedural programming, in which the basic programming element is the procedure (that is, a named sequence of statements, such as a routine, subroutine, or function). Unfortunately, C++ inherited many of the artifacts of procedural programming from C. C#, however, was designed to be a purely object-oriented language.
ALGOL is arguably the most influential programming language in history. The language was introduced in 1958 but became popular when ALGOL-60 was released in 1960. Its impact on future languages such as Pascal, C, and Java is undeniable—these languages' grammatical syntax borrows heavily from ALGOL. The major design goals of ALGOL were portability, a formal grammar, and support for algorithms. ALGOL-68 extended the language, but the additions increased complexity and furthered abstraction from hardware. This abstraction prevented developers from easily accessing devices and the lower tiers of the operating environment. Soon, languages were introduced that were less complex and not as abstracted from the architecture.
The journey from ALGOL to C began with CPL. CPL, a derivative of ALGOL-60, was developed at the Computer Lab of Cambridge University. CPL was created in 1963 by David Barron, Christopher Strachey, and Martin Richards. Although CPL is not as abstracted as ALGOL, it did maintain one characteristic of ALGOL: complexity. Martin Richards introduced Basic CPL (BCPL) in 1967 as a lean version of CPL. Ken Thompson of Bell Labs drafted B in 1970 as the successor to BCPL. B was lighter, faster, and more appropriate for systems programming. C was developed by Dennis Ritchie, also of Bell Labs, in 1972. C returned some of the abstraction removed from B while keeping that language simple and quick. Although initially consigned to the UNIX operating system and systems programming, C is a general-purpose language and has been used for a diverse assortment of applications across a variety of platforms and operating systems.
FORTRAN, ALGOL, and COBOL dominated the procedural programming landscape in the 1960s. On a separate track, Simula was created between 1962 and 1965 by Ole-Johan Dahl and Kristen Nygaard at the Norwegian Computing Center. Simula is notable for being the first object-oriented programming (OOP) language. It was designed for simulation, but evolved into a general-purpose language. Simula introduced the important OOP concepts of classes, inheritance, and dynamic binding.
Combining aspects of C and Simula, Bjarne Stroustrup introduced C with Classes in 1979 as an enhancement of the C programming language. Later, under Stroustrup's stewardship, C++ was created as a direct descendant of C with Classes and was publicly recognized in 1983. C++ rapidly became the premier object-oriented programming language and introduced structured exception handling, templates, and much more.
C# premiered at the Professional Developers Conference (PDC) held in Orlando, Florida, in 2000. The primary architects of C# were Anders Hejlsberg, Scott Wiltamuth, Peter Sollich, Eric Gunnerson, and Peter Golde. C# was designed to be a fully object-oriented language focusing on developing components in a distributed environment and was launched as part of a larger initiative by Microsoft called Microsoft .NET.

Tuesday, April 21, 2009

Here is my Blog

This is my first post!!!