Monday, October 29, 2012

Understanding SQL Server Reporting Services

History of Reporting Services:
Initial development began in 2000. Figure-1 below tracks, in chronological order, the major events that have shaped Reporting Services. The black milestones represent the reporting technologies that Microsoft acquired to enhance Reporting Services.
Figure-1 : The Reporting Services history
Components of Reporting Services:
  • Report Server : At the heart of the Reporting Services architecture is the report server, a web-based middle-tier layer that receives incoming report requests and generates reports. Figure-2 shows a simplified logical view of the report server. To facilitate integration with external client applications, the report server provides two communication interfaces: URL access and a Web service. Consequently, both off-the-shelf and custom tools can communicate with the report server via HTTP or SOAP. The Report Processor component of the report server processes reports at run time.
Figure-2 : The major Reporting Services components
  • Report Server Database : In Reporting Services, report definitions and properties are saved in the report server database, a logical term for two physical SQL Server databases that are installed when you configure the report server: ReportServer and ReportServerTempDB. The ReportServer database hosts the report catalog; it stores the report definitions and management settings. The ReportServerTempDB database stores temporary information about report execution.
Reporting Services supports two deployment modes.
  • Native mode (default) : The report server runs as a stand-alone application server that provides all processing and management capability exclusively through Reporting Services components.
  • SharePoint mode : The report server is integrated with Windows SharePoint Services or Microsoft Office SharePoint Server. Report viewing and management happens inside the SharePoint portal.
Report Definition Language (RDL) is an XML-based schema for defining reports. RDL is an open standard proposed by Microsoft to promote interoperability of commercial reporting products.
Reporting Services Architecture:
Figure-3 : The Reporting Services Service hosts the three report server applications: Report Manager, Report Server Web Service, and Background Processor
  • Report Manager : Report Manager is an ASP.NET web application that provides report management and viewing capabilities for a Reporting Services instance configured in native mode. You can view Report Manager as a client application that integrates with the report server.
  • Report Server Web Service : The Report Server Web service handles on-demand report processing and is the primary programmatic interface for custom applications that integrate with Reporting Services. It provides four Web service endpoints that expose the Reporting Services feature set to external clients.
  • Background Processor : The Background Processor application is responsible for handling all tasks that run in an unattended mode.

Tuesday, October 16, 2012

SSIS: Read customized configuration data from SQL table to multiple package variables

Microsoft SQL Server Integration Services provides multiple ways to configure the package. These package configuration types are as below:
  • XML configuration file
  • Environment variable
  • Registry entry
  • Parent package variable
  • SQL server
If you select the SQL Server configuration type, a default "SSIS Configurations" table is used to store the configurable data. But sometimes you need to use a customized configuration table, which SSIS does not support for automatic configuration. The solution below is one way to implement this.
Let's assume our configuration table is as below:
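A minimal sketch of what such a table could look like, using the column names that the queries in the steps below reference (the data types are assumptions):

    -- Hypothetical definition of the custom configuration table;
    -- column names match the queries used in the steps below.
    CREATE TABLE [dbo].[CFG_SCHEMA_CONN]
    (
        [SCHEMA_NAME]     VARCHAR(50)  NOT NULL,  -- e.g. 'SCHEMA1'
        [REPORT_CONN_STR] VARCHAR(255) NOT NULL,  -- reporting connection string
        [PROD_CONN_STR]   VARCHAR(255) NOT NULL,  -- production connection string
        [ISACTIVE]        BIT          NOT NULL   -- only active rows are read
    );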
And the list of our SSIS package variables is as below:
Here the variables "V_SCHEMA_NAME", "V_SCHEMA_RESULTSET" and "V_SCHEMA_ROW" are used internally for looping and data storage.
Below are the steps to create an SSIS package that reads the configuration values from the SQL Server table into our SSIS package variables.
Steps:
1. Add "Execute SQL Task" and set the properties as below:
General tab:
  Name: Execute SQL Task - Get Schema List
  ResultSet: Full result set
  ConnectionType: ADO.NET
  Connection: SSISTest
  SQLSourceType: Direct input
  SQLStatement: SELECT DISTINCT [SCHEMA_NAME] FROM [dbo].[CFG_SCHEMA_CONN] WHERE [ISACTIVE] = 1
Result Set tab:
  Result Name: 0
  Variable Name: User::V_SCHEMA_RESULTSET
2. Add "Foreach Loop Container" and set the properties as below:
General tab:
  Name: Foreach Loop Container - Loop through each Schema
Collection tab:
  Enumerator: Foreach ADO Enumerator
  ADO object source variable: User::V_SCHEMA_RESULTSET
  Enumeration mode: Rows in the first table
Variable Mappings tab:
  Variable: User::V_SCHEMA_NAME
  Index: 0
3. Add "Execute SQL Task" inside the foreach loop container and set the properties as below:
General tab:
  Name: Execute SQL Task - Get data for schema
  ResultSet: Full result set
  ConnectionType: ADO.NET
  Connection: SSISTest
  SQLSourceType: Direct input
  SQLStatement: SELECT [REPORT_CONN_STR], [PROD_CONN_STR] FROM [dbo].[CFG_SCHEMA_CONN] WHERE [SCHEMA_NAME] = @SCHEMA_NAME
Parameter Mapping tab:
  Variable Name: User::V_SCHEMA_NAME
  Direction: Input
  Data Type: String
  Parameter Name: @SCHEMA_NAME
  Parameter Size: -1
Result Set tab:
  Result Name: 0
  Variable Name: User::V_SCHEMA_ROW
4.Add "Script Task" inside the foreach loop container and set the properties as below:
Tab Properties Value
General Name Script Task - Store data in package variable
Script ScriptLanguage Microsoft Visual C# 2008
EntryPoint Main
ReadOnlyVariables User::V_SCHEMA_NAME,
User::V_SCHEMA_ROW
ReadWriteVariables User::V_SCHEMA1_PROD_CONN_STR,
User::V_SCHEMA1_REPORT_CONN_STR,
User::V_SCHEMA2_PROD_CONN_STR,
User::V_SCHEMA2_REPORT_CONN_STR,
User::V_SCHEMA3_PROD_CONN_STR,
User::V_SCHEMA3_REPORT_CONN_STR,
User::V_SCHEMA4_PROD_CONN_STR,
User::V_SCHEMA4_REPORT_CONN_STR,
User::V_SCHEMA5_PROD_CONN_STR,
User::V_SCHEMA5_REPORT_CONN_STR
Edit Script:

public void Main()
{
    try
    {
        // The "Get data for schema" task stores its full result set in
        // V_SCHEMA_ROW as an ADO.NET DataSet.
        DataSet ds = Dts.Variables["V_SCHEMA_ROW"].Value as DataSet;

        if (ds != null && ds.Tables.Count > 0 && ds.Tables[0].Rows.Count > 0)
        {
            DataRow dr = ds.Tables[0].Rows[0];

            // Build the name fragments for the current schema's variables,
            // e.g. "SCHEMA1_REPORT" and "SCHEMA1_PROD".
            string schemaRptConnStr = Dts.Variables["V_SCHEMA_NAME"].Value.ToString() + "_REPORT";
            string schemaProdConnStr = Dts.Variables["V_SCHEMA_NAME"].Value.ToString() + "_PROD";

            // Copy the connection strings from the current row into the
            // matching package variables: column 0 is REPORT_CONN_STR,
            // column 1 is PROD_CONN_STR.
            foreach (Variable v in Dts.Variables)
            {
                if (v.Namespace.Equals("User") && v.Name.Contains(schemaRptConnStr))
                {
                    v.Value = dr[0].ToString();
                }
                if (v.Namespace.Equals("User") && v.Name.Contains(schemaProdConnStr))
                {
                    v.Value = dr[1].ToString();
                }
            }
        }

        Dts.TaskResult = (int)ScriptResults.Success;
    }
    catch
    {
        Dts.TaskResult = (int)ScriptResults.Failure;
    }
}
The finished SSIS package will look as below:

Monday, October 15, 2012

Creating YTD Calculation using MDX

The Ytd function returns a set of sibling members from the same level as a given member, starting with the first sibling and ending with the given member, as constrained by the Year level in the Time dimension.
And
The PeriodsToDate function returns a set of sibling members from the same level as a given member, starting with the first sibling and ending with the given member, as constrained by a specified level in the Time dimension.
Either of these functions can be used for YTD calculations in MDX queries.
Prior to using these functions, please check your Time dimension as below:
1. Time dimension attributes should have properties as shown below:
2. Time dimension should have hierarchies specified.
3. Time dimension should also have attribute relationships specified.
Now, you can add the calculated members within the cube to get the Year-to-date values, as below:

New Calculated Member
Name : [YTD Amount]
Parent hierarchy: Measures
Expression: SUM(PERIODSTODATE([Dim Time].[Fiscal].[Fiscal Year], [Dim Time].[Fiscal].CURRENTMEMBER),[Measures].[Amount])
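Equivalently, the Ytd function can be used. A sketch, assuming the [Dim Time] dimension is marked as a Time dimension with a Year-typed attribute (for fiscal hierarchies the explicit PeriodsToDate form above is usually the safer choice):

Expression: SUM(YTD([Dim Time].[Fiscal].CURRENTMEMBER), [Measures].[Amount])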

Thursday, September 6, 2012

Windows Management Instrumentation (WMI)

Microsoft Systems Management Server (SMS) 2003 is a Windows-based product that you can use to manage, support, and maintain a distributed network of computer resources. SMS enhances its abilities by using Windows Management Instrumentation (WMI), which is a management infrastructure that supports monitoring and controlling system resources through a common set of interfaces.
WMI is a middle-layer technology that enables standardized management of Windows-based computers. It collects computer management data from a wide variety of sources and makes it accessible by using standard interfaces. WMI can be accessed remotely, but it does not consolidate the management data in a central location - that is one of the functions of SMS. You can also use WMI to set configuration details on your computer and to detect and respond to changes in the configuration of your computer (using WMI events).
You can manage WMI by using the following tools:
  • WMI Control
  • MOF Compiler (MOFComp.exe)
  • WinMgmt.exe
MOF Files:
The Managed Object Format (MOF) file describes Common Information Model (CIM) classes, defines textual descriptions of the classes that a provider uses to return data, and is commonly used to change WMI settings and to transfer them between computers.
MOF files are written in a language derived from the Microsoft Visual C++ syntax. The contents of MOF files only become effective when the files are compiled; they can be compiled into the WMI repository using the program Mofcomp.exe. They are typically bundled with a corresponding .DLL file, which contains the skeleton code that returns the data defined in the MOF file.
The MOF file format and compiler were developed by the Desktop Management Task Force (DMTF).

MOF Compiler (MOFComp.exe):
MOF files are usually automatically compiled during the installation of the systems with which they are provided, but you can also compile MOF files by using the MOF Compiler (Mofcomp.exe). The MOF Compiler is available in the %Windir%\System32\wbem directory. You must specify the MOF file as the parameter of the MOF Compiler. The MOF compiler parses a file containing MOF statements and adds the classes and class instances defined in the file to the WMI repository. You can also specify an Autorecover switch if you want the MOF file to be automatically recompiled if the CIM Repository ever has to be automatically recovered.
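For example, a typical invocation might look like the following (the file path is hypothetical):

    REM Compile the MOF file into the WMI repository and mark it for autorecovery
    mofcomp.exe -autorecover C:\MyProvider\MyClasses.mof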

Monday, August 27, 2012

.NET : Partial Methods

A partial class or struct may contain a partial method. One part of the class contains the signature of the method. An optional implementation may be defined in the same part or another part. If the implementation is not supplied, then the method and all calls to the method are removed at compile time.
Partial methods enable the implementer of one part of a class to define a method, similar to an event. The implementer of the other part of the class can decide whether to implement the method or not. If the method is not implemented, then the compiler removes the method signature and all calls to the method. The calls to the method, including any results that would occur from evaluation of arguments in the calls, have no effect at run time. Therefore, any code in the partial class can freely use a partial method, even if the implementation is not supplied. No compile-time or run-time errors will result if the method is called but not implemented.
A partial method declaration consists of two parts: the definition, and the implementation. These may be in separate parts of a partial class, or in the same part. If there is no implementation declaration, then the compiler optimizes away both the defining declaration and all calls to the method.
Example:
// Definition in file1.cs
partial void onNameChanged();

// Implementation in file2.cs
partial void onNameChanged()
{
  // method body
}
The Rules:
  • Partial method declarations must begin with the contextual keyword partial and the method must return void.
  • Partial methods can have ref but not out parameters.
  • Partial methods are implicitly private, and therefore they cannot be virtual.
  • Partial methods cannot be extern, because the presence of the body determines whether they are defining or implementing.
  • Partial methods can have static and unsafe modifiers.
  • Partial methods can be generic. Constraints are put on the defining partial method declaration, and may optionally be repeated on the implementing one. Parameter and type parameter names do not have to be the same in the implementing declaration as in the defining one.
  • You can make a delegate to a partial method that has been defined and implemented, but not to a partial method that has only been defined.
Source : http://msdn.microsoft.com/en-us/library/wa80x488.aspx

.NET : Extension Method

An Extension method is a new language feature of C# starting with the 3.0 specification, as well as Visual Basic.NET starting with 9.0. Extension methods enable you to "add" methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type. An extension method is a static method of a static class that you can call as though it were an instance method of a different class. 
For example, you could create an extension method named ToDouble that is a static method in a static class you create named StringConversions, but that is called as though it were a method of an object of type string.
Extension Method Declarations and Invocations:
Specifying a method’s first argument with the this keyword modifier will make that method an extension method. The extension method will appear as an instance method of any object with the same type as the extension method’s first argument’s data type. For example, if the extension method’s first argument is of type string, the extension method will appear as a string instance method and can be called on any string object. Also, extension methods can only be declared in static classes.
Example of an extension method:
namespace Netsplore.Utilities
{
    public static class StringConversions
    {
        public static double ToDouble(this string s)
        {
            return Double.Parse(s);
        }
        public static bool ToBool(this string s)
        {
            return Boolean.Parse(s);
        }
    }
}
Here both the class and every method it contains are static. ToDouble is an extension method because the method is static and its first argument specifies the this keyword.
Calling an extension method:
    using Netsplore.Utilities;
    double pi = "3.1415926535".ToDouble();
    Console.WriteLine(pi);
This produces the following results:
    3.1415926535
Extension Method Precedence:
Normal object instance methods take precedence over extension methods when their signature matches the calling signature. Extension methods seem like a really useful concept, especially when you want to be able to extend a class you cannot, such as a sealed class or one for which you do not have source code. The previous extension method examples all effectively add methods to the string class. Without extension methods, you couldn’t do that because the string class is sealed.

Thursday, August 23, 2012

T-SQL Syntax

Recently I came across a very good question on SQLServerCentral.com.
Different T-SQL constructs can assign a value to a regular identifier with a leading '@' without using SET or SELECT:
  1. An Input argument to a Procedure or Function - http://msdn.microsoft.com/en-us/library/ms187926
  2. An Output argument to a Procedure - http://msdn.microsoft.com/en-us/library/ms187926
  3. EXECUTE a Function: EXEC @return = udfFunct() - http://msdn.microsoft.com/en-us/library/ms188332
  4. The Stored Procedure return status: EXEC @status = uspProc (This is very different from a Function return.) - http://msdn.microsoft.com/en-us/library/ms188332
  5. The OUTPUT clause: OUTPUT INTO @tablevar (Could count this 4 times but it is really one construct) http://msdn.microsoft.com/en-us/library/ms177564
  6. RECEIVE .... FROM INTO @tablevar - http://msdn.microsoft.com/en-us/library/ms186963.aspx
  7. FETCH NEXT FROM cursor INTO @varname - http://msdn.microsoft.com/en-us/library/ms180152
  8. DECLARE @varname INT = 0; - http://msdn.microsoft.com/en-us/library/ms188927
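A short script sketching a few of these constructs (the function and procedure names are hypothetical):

    -- 8. DECLARE with an inline initial value (SQL Server 2008+)
    DECLARE @varname INT = 0;

    -- 3. EXECUTE a scalar function into a variable (dbo.udfFunct is hypothetical)
    DECLARE @return INT;
    EXEC @return = dbo.udfFunct;

    -- 4. Capture a stored procedure's return status (dbo.uspProc is hypothetical)
    DECLARE @status INT;
    EXEC @status = dbo.uspProc;

    -- 7. FETCH a cursor row into a variable
    DECLARE @id INT;
    DECLARE c CURSOR FOR SELECT object_id FROM sys.objects;
    OPEN c;
    FETCH NEXT FROM c INTO @id;
    CLOSE c;
    DEALLOCATE c;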

Wednesday, August 8, 2012

WCF Endpoints : Addresses

All communication with a Windows Communication Foundation (WCF) service occurs through the endpoints of the service. Endpoints provide clients access to the functionality offered by a WCF service.
Each endpoint consists of four properties:
  • An address that indicates where the endpoint can be found.
  • A binding that specifies how a client can communicate with the endpoint.
  • A contract that identifies the operations available.
  • A set of behaviours that specify local implementation details of the endpoint.
Addresses:
In WCF, every service is associated with a unique address. The address provides two important elements: the location of the service and the transport protocol, or transport scheme, used to communicate with the service. The location portion of the address indicates the name of the target machine, site, or network; a communication port, pipe, or queue; and an optional specific path, or URI (Uniform Resource Identifier). A URI can be any unique string, such as the service name or a globally unique identifier (GUID).
WCF supports the following transport schemes:
  • HTTP/HTTPS
  • TCP
  • IPC
  • Peer network
  • MSMQ
  • Service bus
Addresses always have the following format:
    [base address]/[optional URI]
The base address is always in this format:
    [transport]://[machine or domain][:optional port]
Here are a few sample addresses:
    http://localhost:8001
    http://localhost:8001/MyService
    net.tcp://localhost:8002/MyService
    net.pipe://localhost/MyPipe
    net.msmq://localhost/private/MyQueue
    net.msmq://localhost/MyQueue
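To tie the pieces together, here is a minimal self-hosting sketch that wires one of these addresses to an endpoint (the contract and service names are hypothetical):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IMyContract
    {
        [OperationContract]
        string GetData(int value);
    }

    public class MyService : IMyContract
    {
        public string GetData(int value) { return value.ToString(); }
    }

    class Program
    {
        static void Main()
        {
            using (ServiceHost host = new ServiceHost(typeof(MyService)))
            {
                // Endpoint = Address + Binding + Contract.
                host.AddServiceEndpoint(typeof(IMyContract),
                                        new NetTcpBinding(),
                                        "net.tcp://localhost:8002/MyService");
                host.Open();
                Console.WriteLine("Service running. Press Enter to exit.");
                Console.ReadLine();
            }
        }
    }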


TCP Addresses
TCP addresses use net.tcp for transport and typically include a port number, as in:
    net.tcp://localhost:8002/MyService
When a port number is not specified, the TCP address defaults to port 808:
    net.tcp://localhost/MyService
It is possible for two TCP addresses (from the same host) to share a port:
    net.tcp://localhost:8002/MyService
    net.tcp://localhost:8002/MyOtherService
You can configure TCP-based addresses from different service hosts to share a port.

HTTP Addresses
HTTP addresses use http for transport and can also use https for secure transport. You typically use HTTP addresses with outward-facing Internet-based services, and you can specify a port as shown here:
    http://localhost:8001
If you do not specify the port number, it defaults to 80 (and port 443 for HTTPS). As with TCP addresses, two HTTP addresses from the same host can share a port, even on the same machine.

IPC Addresses
IPC (Inter-Process Communication) addresses use net.pipe for transport, to indicate the use of the Windows named pipe mechanism. In WCF, services that use IPC can only accept calls from the same machine. Consequently, you must specify either the explicit local machine name or localhost for the machine name, followed by a unique string for the pipe name:
    net.pipe://localhost/MyPipe
You can open a named pipe only once per machine, so it is not possible for two named pipe addresses to share a pipe name on the same machine.

MSMQ Addresses
MSMQ addresses use net.msmq for transport, to indicate the use of the Microsoft Message Queue (MSMQ). You must specify the queue name. When you’re dealing with private queues, you must also specify the queue type, but you can omit that for public queues:
    net.msmq://localhost/private/MyService
    net.msmq://localhost/MyService

Service Bus Addresses
Windows Azure AppFabric Service Bus addresses use sb, http, or https for transport, and must include the service bus address along with the service namespace, for example:
    sb://MyNamespace.servicebus.windows.net/

Monday, July 23, 2012

#temp table Vs @table variable

Some facts about Table Variables and Temp Tables are:
  • You can create local and global temporary tables. Local temporary tables (#table_name) are visible only in the current session, and global temporary tables (##table_name) are visible to all sessions. 
  • Table Variables and Temp Tables both use the tempdb database.
  • Table variables are Transaction neutral. They are variables and thus aren't bound to a transaction.
  • Temp tables behave same as normal tables and are bound by transactions.
  • Assignment operation between table variables is not supported.
  • Clustered indexes can be created on both table variables and temporary tables.
  • Both are logged in the transaction log.
  • Temporary tables cannot be partitioned.
  • Table variables are only allowed in SQL Server 2000+, with compatibility level set to 80 or higher.
Difference:
  • Truncate: You can truncate a temp table. You cannot truncate a table variable.
  • Alter: You can alter a temp table. Table variables cannot be altered after they have been declared.
  • Indexes: One of the most valuable assets of a temp table is the ability to add either a clustered or non-clustered index. You cannot explicitly add an index to a table variable; however, you can create a system index through a PRIMARY KEY constraint, and you can add as many indexes via UNIQUE constraints as you like.
  • INSERT ... EXEC and SELECT ... INTO: Temp tables can be used in the following situations:
    INSERT #temp EXEC sp_someProcedure
    SELECT * INTO #temp FROM someTable
    You cannot use a table variable in either of those situations.
  • User-defined functions: You cannot create temp tables inside a user-defined function. You can declare a table variable inside a user-defined function.
  • Statistics: Temp tables allow auto-generated statistics to be created against them, which helps the optimizer determine cardinality. The system will not generate automatic statistics on table variables, and likewise you cannot manually create statistics on them; a table variable will always have an estimated cardinality of 1, because the table does not exist at compile time.
  • Dropping: Temp tables are automatically dropped when they go out of scope, unless explicitly dropped by using DROP TABLE. You cannot drop a table variable; it is cleaned up automatically at the end of the function, stored procedure, or batch in which it is defined.
  • Aliasing: A temp table can be referenced by its name or by an alias, e.g.: SELECT id FROM #t1 t INNER JOIN #t2 ON t.id = #t2.id. A table variable must be referenced by an alias everywhere except in the FROM clause itself, because the variable name cannot be used as a column qualifier, e.g.: SELECT id FROM @foo f INNER JOIN #foo ON f.id = #foo.id.
  • Nested procedures: Temp tables created in a procedure are visible to the nested procedures it calls. Table variables are not visible to nested procedures.
  • Recompilation: Use of temp tables in a stored procedure can cause stored procedure recompilations. Table variables used in stored procedures result in fewer recompilations than temporary tables.
  • Transactions: Temp tables are involved in SQL transactions. Transactions involving table variables last only for the duration of an update on the table variable, so table variables require less locking and logging resources.
  • Cost-based plans: Temp tables are preferred when cost-based choices are required, which typically includes queries with joins, parallelism decisions, and index selection choices. Table variables are not supported in the SQL Server optimizer's cost-based reasoning model, so they should not be used when cost-based choices are required to achieve an efficient query plan.
  • IDENTITY_INSERT: Only static tables and temporary tables support the statement SET IDENTITY_INSERT ON. Table variables do not support it.
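A quick illustration of some of these differences:

    -- Temp table: can be indexed after creation, truncated, and dropped
    CREATE TABLE #orders (id INT PRIMARY KEY, amount MONEY);
    CREATE NONCLUSTERED INDEX ix_amount ON #orders (amount);
    TRUNCATE TABLE #orders;
    DROP TABLE #orders;

    -- Table variable: indexes only via constraints; no ALTER or TRUNCATE
    DECLARE @orders TABLE
    (
        id INT PRIMARY KEY,        -- system index via the constraint
        amount MONEY UNIQUE        -- another index via a UNIQUE constraint
    );
    INSERT @orders VALUES (1, 10.00);
    SELECT o.id, o.amount
    FROM @orders o;                -- alias needed to qualify the columns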

Wednesday, May 30, 2012

Windows Server Core

Server Core is an exciting new installation option available in Windows Server 2008 that enables branch offices, data centers, and other networking environments to greatly reduce the total cost of ownership (TCO) involved with deploying and managing Windows servers. The Server Core option is a minimal installation option that excludes large parts of the graphical user interface (GUI).
A Server Core installation includes only a limited number of server roles compared with a Full installation of Windows Server 2008. It also supports only a limited subset of the features available on a Full installation of Windows Server 2008. 
Benefits of Server Core
  • Greater stability: Because a Server Core installation has fewer running processes and services than a Full installation, the overall stability of Server Core is greater. 
  • Simplified management: Because there are fewer things to manage on a Server Core installation, it's easier to configure and support a Server Core installation than a Full one—once you get the hang of it.
  • Reduced maintenance: Because Server Core has fewer binaries than a Full installation, there's less to maintain. For example, fewer hot fixes and security updates need to be applied to a Server Core installation. 
  • Reduced memory and disk requirements: A Server Core installation on x86 architecture, with no roles or optional components installed and running at idle, has a memory footprint of about 180 megabytes (MB), compared to about 310 MB for a similarly equipped Full installation of the same edition. Disk space needs differ even more—a base Server Core installation needs only about 1.6 gigabytes (GB) of disk space compared to 7.6 GB for an equivalent Full installation.
  • Reduced attack surface: Because Server Core has fewer system services running on it than a Full installation does, there's less attack surface (that is, fewer possible vectors for malicious attacks on the server). This means that a Server Core installation is more secure than a similarly configured Full installation.
Read more : http://technet.microsoft.com/en-us/library/dd184075.aspx
 

Features of SQL Server 2012

This latest release of SQL Server presents new features and improvements that increase the power and efficiency of architects, developers, and administrators who design, develop, and maintain data storage systems.
  • Availability Enhancements
AlwaysOn SQL Server Failover Cluster Instances
AlwaysOn Failover Cluster Instances leverage Windows Server Failover Clustering (WSFC) functionality to provide local high availability through redundancy at the server-instance level - a failover cluster instance (FCI).

AlwaysOn Availability Groups
With AlwaysOn, users will be able to fail over multiple databases in groups instead of individually. Also, secondary copies will be readable, and can be used for database backups.
  • Manageability Enhancements
Manageability of the SQL Server 2012 Database Engine is improved by enhancements to the tools and monitoring features listed below:
    SQL Server Management Studio
    Startup Option
    Contained Databases
    Data-tier Applications
    Windows PowerShell
    BCP Utility
    sqlcmd.exe
    Database Engine Tuning Advisor
  • Programmability Enhancements
Sequences
A sequence is a user-defined object that generates a sequence of numbers.
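For example:

    CREATE SEQUENCE dbo.OrderNumbers AS INT
        START WITH 1
        INCREMENT BY 1;

    SELECT NEXT VALUE FOR dbo.OrderNumbers;  -- returns 1
    SELECT NEXT VALUE FOR dbo.OrderNumbers;  -- returns 2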

Ad-Hoc Query Paging
The ORDER BY option in the SQL SELECT statement has been enhanced in SQL Server 2012. Using a combination of OFFSET and FETCH along with ORDER BY gives you control of paging through a result set. Using this technique can really help performance by bringing back only the results you want to show to your users when they are needed.
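For example, to return the second page of ten rows (the table and column names are illustrative):

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    ORDER BY OrderDate
    OFFSET 10 ROWS
    FETCH NEXT 10 ROWS ONLY;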

Full Text Search
The Full Text Search in SQL Server 2012 has been enhanced by allowing you to search and index data stored in extended properties or metadata.
  • Scalability and Performance Enhancements
Scalability and performance enhancements in the Database Engine include:
    Columnstore Indexes
    Online Index Create, Rebuild, and Drop
    Partition Support Increased
    FILESTREAM Filegroups Can Contain Multiple Files
  • Security Enhancements
Security enhancements in the SQL Server Database Engine include provisioning during setup, new SEARCH PROPERTY LIST permissions, new user-defined server roles, and new ways of managing server and database roles.
  • Resource Governor Enhancements
The enhancements to the Resource Governor enable you to more effectively govern performance in multi-tenancy environments like private cloud. The enhancements include support for 64 resource pools, greater CPU usage control, and resource pool affinity for partitioning of physical resources and predictable resource allocation.


Monday, April 2, 2012

Undocumented sp_msforeachtable, sp_msforeachdb procedure

As database administrators or developers, sometimes we need to perform an action on all of the tables within a database or on all the databases within an instance. Microsoft SQL Server provides two undocumented stored procedures designed for iteration that allow you to process through all tables in a database, or all databases in a SQL Server instance. The first stored procedure, "sp_msforeachtable", allows you to easily run some code against every table in a single database. The other stored procedure, "sp_msforeachdb", will execute a T-SQL statement against every database associated with the current SQL Server instance.
 
For example, the following script checks the integrity of each table in the AdventureWorks database using the DBCC CHECKTABLE command. Notice that a [?] is used as a placeholder for the table name in the SQL statement.
USE AdventureWorks;
EXECUTE sp_msforeachtable 'DBCC CHECKTABLE ([?])';
 
As another example, you can use the sp_msforeachdb procedure to find all the stored procedures in all available databases on the instance which have the word 'RPT' in their definition by running the following command.
EXECUTE sp_msforeachdb 'SELECT ''?'' AS DB, SPECIFIC_NAME, OBJECT_DEFINITION(OBJECT_ID(SPECIFIC_NAME)) FROM [?].INFORMATION_SCHEMA.ROUTINES WHERE OBJECT_DEFINITION(OBJECT_ID(SPECIFIC_NAME)) LIKE ''%RPT%''';

Some level of testing and care should be taken when using undocumented code from Microsoft. Since these stored procedures are not documented, it means that Microsoft might change this code with any new release or patch without notifying customers.

Wednesday, February 29, 2012

How the .NET Runtime Resolves Type References

Let's consider the below source code:

public sealed class Program
{
    public static void Main()
    {
        System.Console.WriteLine("Hi");
    }
}
This code is compiled and built into an assembly, say Program.exe. When you run this application, the CLR loads and initializes. Then the CLR reads the assembly's CLR header, looking for the MethodDefToken that identifies the application's entry point method (Main). From the MethodDef metadata table, the offset within the file for the method's IL code is located and JIT-compiled into native code, which includes having the code verified for type safety. The native code then starts executing. Following is the IL code for the Main method. To obtain this output, run ILDasm.exe, choose the View menu's Show Bytes menu item, and then double-click the Main method in the tree view.
.method public hidebysig static void Main() cil managed
// SIG: 00 00 01
{
    .entrypoint
    // Method begins at RVA 0x2050
    // Code size 11 (0xb)
    .maxstack 8
    IL_0000: /* 72 | (70)000001 */ ldstr "Hi"
    IL_0005: /* 28 | (0A)000003 */ call void [mscorlib]System.Console::WriteLine(string)
    IL_000a: /* 2A |             */ ret
} // end of method Program::Main
When JIT-compiling this code, the CLR detects all references to types and members and loads their defining assemblies (if not already loaded). As you can see, the IL code above has a reference to System.Console.WriteLine. Specifically, the IL call instruction references metadata token 0A000003. This token identifies entry 3 in the MemberRef metadata table (table 0A). The CLR looks up this MemberRef entry and sees that one of its fields refers to an entry in a TypeRef table (the System.Console type). From the TypeRef entry, the CLR is directed to an AssemblyRef entry:
"MSCorLib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089".
At this point, the CLR knows which assembly it needs. Now the CLR must locate the assembly in order to load it.

When resolving a referenced type, the CLR can find the type in one of three places:
  • Same file: Access to a type that is in the same file is determined at compile time (sometimes referred to as early bound). The type is loaded out of the file directly, and execution continues.
  • Different file, same assembly: The runtime ensures that the file being referenced is, in fact, in the assembly's FileRef table of the current assembly's manifest. The runtime then looks in the directory where the assembly's manifest file was loaded. The file is loaded, its hash value is checked to ensure the file's integrity, the type's member is found, and execution continues.
  • Different file, different assembly: When a referenced type is in a different assembly's file, the runtime loads the file that contains the referenced assembly's manifest. If this file doesn't contain the type, the appropriate file is loaded. The type's member is found, and execution continues.
If any errors occur while resolving a type reference—file can't be found, file can't be loaded, hash mismatch, and so on—an appropriate exception is thrown.

Below figure illustrates how type binding occur:
Figure: Flowchart showing how, given IL code that refers to a method or type, the CLR uses metadata to locate the proper assembly file that defines a type 

XAML : Extensible Application Markup Language

XAML (short for Extensible Application Markup Language, and pronounced "zammel") is a markup language used to instantiate .NET objects. Although XAML is a technology that can be applied to many different problem domains, its primary role in life is to construct WPF user interfaces. In other words, XAML documents define the arrangement of panels, buttons, and controls that make up the windows in a WPF application.
It’s important to understand that WPF doesn’t require XAML. There’s no reason Visual Studio couldn’t use the Windows Forms approach and create code statements that construct WPF windows. But if it did, the window would be locked into the Visual Studio environment and available to programmers only.
The XAML standard is quite straightforward. Below are a few ground rules:
  • Every element in a XAML document maps to an instance of a .NET class. The name of the element matches the name of the class exactly.
  • As with any XML document, one element can be nested inside another. However, nesting is usually a way to express containment.
  • You can set the properties of each class through attributes. However, in some situations an attribute isn’t powerful enough to handle the job. In these cases, you’ll use nested tags with a special syntax.
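A tiny illustrative snippet (not tied to any particular project) showing these rules at work:

    <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            Title="Hello WPF">
        <!-- Window and Button each map to a .NET class; Title and Content
             set properties via attributes; nesting expresses containment:
             the Button becomes the Window's content. -->
        <Button Content="Click Me" />
    </Window>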
XAML Language:
  • Declarative object syntax
  • Used to define the static structure and configuration of an object hierarchy
  • Not specific to WPF
  • Easier to write development tools
  • Relatively easy to edit and understand
  • Expresses object hierarchies in a more compact form
  • Objects must have default constructor
  • Requires type conversion
XAML Constructs:

Architecture of WPF

Architecture in the Presentation Layer
  • "The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them."
    - From the book Software Architecture in Practice (2nd edition), (Bass, Clements, Kazman;Addison-Wesley 2003)
  • WPF has a rich and extensible architecture for designing compelling UI applications
WPF uses a multilayered architecture. At the top, your application interacts with a high-level set of services that are completely written in managed C# code. The actual work of translating .NET objects into Direct3D textures and triangles happens behind the scenes, using a lower-level unmanaged component called milcore.dll.
milcore.dll is implemented in unmanaged code because it needs tight integration with Direct3D and because it’s extremely performance-sensitive.
Figure : WPF rendering architecture
Key components:
  • PresentationFramework.dll holds the top-level WPF types, including those that represent windows, panels, and other types of controls. It also implements higher-level programming abstractions such as styles. Most of the classes you’ll use directly come from this assembly.
  • PresentationCore.dll holds base types, such as UIElement and Visual, from which all shapes and controls derive. If you don’t need the full window and control abstraction layer, you can drop down to this level and still take advantage of WPF’s rendering engine.
  • WindowsBase.dll holds even more basic ingredients that have the potential to be reused outside of WPF, such as DispatcherObject and DependencyObject, which introduces the plumbing for dependency properties.
  • milcore.dll is the core of the WPF rendering system and the foundation of the Media Integration Layer (MIL). Its composition engine translates visual elements into the triangles and textures that Direct3D expects. Although milcore.dll is considered a part of WPF, it’s also an essential system component for Windows Vista. In fact, the Desktop Window Manager (DWM) in Windows Vista uses milcore.dll to render the desktop.
  • milcore.dll is sometimes referred to as the engine for “managed graphics.” Much as the common language runtime (CLR) manages the lifetime of a .NET application, milcore.dll manages the display state. And just as the CLR saves you from worrying about releasing objects and reclaiming memory, milcore.dll saves you from thinking about invalidating and repainting a window.
  • WindowsCodecs.dll is a low-level API that provides imaging support (for example, processing, displaying, and scaling bitmaps and JPEGs).
  • Direct3D is the low-level API through which all the graphics in a WPF are rendered.
  • User32 is used to determine what program gets what real estate. As a result, it’s still involved in WPF, but it plays no part in rendering common controls.
The Class Hierarchy
Figure shows a basic overview with some of the key branches of the class hierarchy:
Figure : The fundamental classes of WPF

Windows Presentation Foundation (WPF)

The Windows Presentation Foundation (WPF) is an entirely new graphical display system for Windows. WPF is designed for .NET, influenced by modern display technologies such as HTML and Flash, and hardware-accelerated. It’s also the most radical change to hit Windows user interfaces since Windows 95.

A standard Windows application relies on two well-worn parts of the Windows operating system to create its user interface:
  • User32 provides the familiar Windows look and feel for elements such as windows, buttons, text boxes, and so on.
  • GDI/GDI+ provides drawing support for rendering shapes, text, and images at the cost of additional complexity (and often lackluster performance).

Newer frameworks simply deliver better wrappers for interacting with User32 and GDI/GDI+. They can provide improvements in efficiency, reduce complexity, and add pre-baked features so you don’t have to code them yourself; but they can’t remove the fundamental limitations of a system component that was designed more than a decade ago.

DirectX: The New Graphics Engine
Microsoft created one way around the limitations of the User32 and GDI/GDI+ libraries: DirectX. Its design mandate was speed, and so Microsoft worked closely with video card vendors to give DirectX the hardware acceleration needed for complex textures, special effects such as partial transparency, and three-dimensional graphics.
In WPF, the underlying graphics technology isn’t GDI/GDI+. Instead, it’s DirectX. Remarkably, WPF applications use DirectX no matter what type of user interface you create. As a result, even the most mundane business applications can use rich effects such as transparency and anti-aliasing. You also benefit from hardware acceleration, which simply means DirectX hands off as much work as possible to the GPU (graphics processing unit), which is the dedicated processor on the video card.

Note:
• DirectX is more efficient because it understands higher-level ingredients such as textures and gradients, which can be rendered directly by the video card. GDI/GDI+ doesn’t, so it needs to convert them to pixel-by-pixel instructions, which are rendered much more slowly by modern video cards.
• The goal of WPF is to offload as much of the work as possible on the video card so that complex graphics routines are render-bound (limited by the GPU) rather than processor-bound (limited by your computer’s CPU). That way, you keep the CPU free for other work, you make the best use of your video card, and you are able to take advantage of performance increases in newer video cards as they become available.
WPF TIERS
Video cards differ significantly. When WPF assesses a video card, it considers a number of factors, including the amount of RAM on the video card, support for pixel shaders (built-in routines that calculate per-pixel effects such as transparency), and support for vertex shaders (built-in routines that calculate values at the vertexes of a triangle, such as the shading of a 3-D object). Based on these details, it assigns a rendering tier value.
WPF recognizes three rendering tiers. They are as follows:
Rendering Tier 0. The video card will not provide any hardware acceleration. This corresponds to a DirectX version level of less than 7.0.
Rendering Tier 1. The video card can provide partial hardware acceleration. This corresponds to a DirectX version level greater than 7.0 but less than 9.0.
Rendering Tier 2. All features that can be hardware accelerated will be. This corresponds to a DirectX version level greater than or equal to 9.0.

Tuesday, January 31, 2012

Command Redirection

Redirection is a function common to most command-line interpreters, that can redirect standard streams to user-specified locations. The input or output stream location is referred to as a handle.
Definition: Redirection is the switching of a standard stream of data so that it comes from a source other than its default source or goes to some destination other than its default destination.
Figure : The standard streams for input, output, and error
Redirection operator    Description
>     Writes the command output to a file or a device, such as a printer, instead of the Command Prompt window.
<     Reads the command input from a file, instead of reading input from the keyboard.
>>    Appends the command output to the end of a file without deleting the information that is already in the file.
>&    Writes the output from one handle to the input of another handle.
<&    Reads the input from one handle and writes it to the output of another handle.
|     Reads the output from one command and writes it to the input of another command. Also known as a pipe.
The following table lists the available handles:
Handle      Numeric equivalent    Description
STDIN       0                     Keyboard input
STDOUT      1                     Output to the Command Prompt window
STDERR      2                     Error output to the Command Prompt window
UNDEFINED   3-9                   These handles are defined individually by the application and are specific to each tool.
The numbers zero through nine (that is, 0-9) represent the first 10 handles. You can use Cmd.exe to run a program and redirect any of the first 10 handles for the program. To specify which handle you want to use, type the number of the handle before the redirection operator. If you do not define a handle, the default < redirection input operator is zero (0) and the default > redirection output operator is one (1). After you type the < or > operator, you must specify where you want to read or write the data. You can specify a file name or another existing handle.
To specify redirection to existing handles, use the ampersand (&) character followed by the handle number that you want to redirect (that is, &handle#). For example, the following command redirects handle 2 (that is, STDERR) into handle 1 (that is, STDOUT): 1<&2
Duplicating handles:
    command 2> filename           Redirect any error message into a file
    command 2>> filename          Append any error message into a file
    (command) 2> filename         Redirect any CMD.exe error into a file
    command > file 2>&1           Redirect errors and output to one file
    command > file 2<&1           Redirect output and errors to one file
    command > fileA 2> fileB      Redirect output and errors to separate files
    command 2>&1 > filename       This will fail!
Redirect to NUL (hide errors):
    command 2> nul                Redirect error messages to NUL
    command >nul 2>&1             Redirect error and output to NUL
    command >filename 2> nul      Redirect output to file but suppress error
    (command) >filename 2> nul    Redirect output to file but suppress CMD.exe errors

Note, any long filenames must be surrounded in "double quotes". A CMD error is an error raised by the command processor itself rather than the program/command.

Redirection with > or 2> will overwrite any existing file.

You can also redirect to a printer with > PRN or >LPT1

To prevent the > and < characters from causing redirection, escape with a caret: ^> or ^<


Using the pipe operator (|)
The pipe operator (|) takes the output (by default, STDOUT) of one command and directs it into the input (by default, STDIN) of another command. For example, the following command sorts a directory:
dir | sort
In this example, both commands start simultaneously, but then the sort command pauses until it receives the dir command's output. The sort command uses the dir command's output as its input, and then sends its output to handle 1 (that is, STDOUT).


Examples of redirection:
   DIR >MyFileListing.txt
   
   DIR /o:n >"Another list of Files.txt"

   ECHO y| DEL *.txt

   ECHO Some text ^<html tag^> more text
   
   MEM /C >>MemLog.txt

   Date /T >>MemLog.txt

   SORT < MyTextFile.txt

   SET _output=%_missing% 2>nul

   DIR C:\ >List_of_C.txt 2>errorlog.txt
   
   FIND /i "Jones" < names.txt >logfile.txt

   DIR C:\ >List_of_C.txt & DIR D:\ >List_of_D.txt

   ECHO DIR C:\ ^> c:\logfile.txt >NewScript.cmd

   (TYPE logfile.txt >> newfile.txt) 2>nul

Wednesday, January 25, 2012

Concurrent Administrative Operations

Not all administrative tasks are allowed to run concurrently. In the table below, a black circle indicates two operations that cannot run in a database at the same time.
Figure : Grid showing tasks that can run concurrently
File shrink operations spend most processing time reallocating pages into areas retained after the shrink has completed; it then attempts to change the file size only as the last step. File shrink operations can be started while a backup is running, provided that the backup finishes before the file shrink operation attempts to change the size of the files.

Source : http://technet.microsoft.com/en-us/library/ms189315.aspx

Tuesday, January 24, 2012

Using Automatic Proxy Configuration

Automatic proxy (auto-proxy) makes system administration easier, because you can automatically configure proxy settings such as server addresses and bypass lists. To configure more advanced settings for auto-proxy, you can create a separate .js, .jvs, or .pac script file and then copy the file to a server location. Then, you can specify the server location for the script file within the Automatic Configuration settings of the browser. The auto-proxy script file is executed whenever a network request is made. Within the script, you can configure multiple proxy servers for each protocol type; then, if a proxy server connection fails, the browser automatically attempts to connect to another proxy server that you have specified.
PAC File
A proxy auto-config (PAC) file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL. The PAC file format was originally designed by Netscape in 1996 for Netscape Navigator 2.0 and is a text file that defines at least one JavaScript function, FindProxyForURL(url, host), with two arguments. By convention, the PAC file is normally named proxy.pac.
The JS function syntax: string FindProxyForURL(url, host)
    url: The full URL being accessed, or the URL of the object.
    host: The hostname extracted from the URL. This is only for convenience; it is the exact same string as between :// and the first : or / after that. The port number is not included in this parameter and can be extracted from the URL when necessary.
    return value: A string describing the configuration. The return value of the function should be a semicolon-separated list of options from the following list:
        DIRECT: Connections should be made directly, without any proxies.
        PROXY host:port: The specified proxy should be used.
        SOCKS host:port: The specified SOCKS server should be used.
A null string is the same as DIRECT. Each option will be tried in turn until one is usable.
To use it, a PAC file is published to a web server, and client user agents are instructed to use it, either by entering the URL in the proxy connection settings of the browser or through the use of the WPAD protocol. Even though most clients will process the script regardless of the MIME type returned in the HTTP request, for the sake of completeness and to maximize compatibility, the web server should be configured to declare the MIME type of this file to be either application/x-ns-proxy-autoconfig or application/x-javascript-config.
Example :
1. function FindProxyForURL(url, host)
{
     if (isPlainHostName(host))
         return "DIRECT";
     else
          return "PROXY proxy:80";
}
2. function FindProxyForURL(url, host)
{
     if (url.substring(0, 5) == "http:")
     {
          return "PROXY proxy:80";
     }
     else if (url.substring(0, 4) == "ftp:")
     {
          return "PROXY fproxy:80";
     }
     else if (url.substring(0, 7) == "gopher:")
     {
          return "PROXY gproxy";
     }
     else if (url.substring(0, 6) == "https:")
     {
          return "PROXY secproxy:8080";
     }
     else
     {
          return "DIRECT";
     }
}
Autoconfigure the Proxy Settings from a Local Copy of the PROXY.PAC File (IE or Netscape) :
To use local copy of PROXY.PAC file, copy the file to some local directory, and point to it.
1. Copy the PROXY.PAC file to the C:\WINDOWS directory, or other directory of your choice.
2. In the browser proxy settings, configure the Automatic Proxy Configuration (Netscape) or Use Automatic Configuration Script (IE) URL to:
Netscape, use: file:///c|/windows/proxy.pac
Internet Explorer, use: file://c:/windows/proxy.pac
In Netscape, click on the Reload button.
The Web Proxy Auto-Discovery Protocol (WPAD)
WPAD is not designed to find the actual proxy settings but to find the PAC script, which tells the browser which settings to use. WPAD uses several methods for finding the location of the PAC script. If a method does not provide information about the port or the path name, the client should use, as defaults, port 80 and /wpad.dat respectively. The client should not use a default host.

Tuesday, January 17, 2012

SQL Server : Protocols

When an application communicates with the Database Engine, the application programming interfaces (APIs) exposed by the protocol layer format the communication using a Microsoft-defined format called a tabular data stream (TDS) packet. The SQL Server Network Interface (SNI) protocol layer on both the server and client computers encapsulates the TDS packet inside a standard communication protocol, such as TCP/IP or Named Pipes. On the server side of the communication, the network libraries are part of the Database Engine. On the client side, the network libraries are part of the SQL Native Client. The configuration of the client and the instance of SQL Server determine which protocol is used. SQL Server can be configured to support multiple protocols simultaneously, coming from different clients. Each client connects to SQL Server with a single protocol. If the client program does not know which protocols SQL Server is listening on, you can configure the client to attempt multiple protocols sequentially. The following protocols are available:

Shared Memory The simplest protocol to use, with no configurable settings. Clients using the Shared Memory protocol can connect only to a SQL Server instance running on the same computer, so this protocol is not useful for most database activity. Clients using MDAC 2.8 or earlier cannot use the Shared Memory protocol. If such a connection is attempted, the client is switched to the Named Pipes protocol.

Named Pipes A protocol developed for local area networks (LANs). A portion of memory is used by one process to pass information to another process, so that the output of one is the input of the other. The second process can be local (on the same computer as the first) or remote (on a networked computer).

TCP/IP The most widely used protocol over the Internet. TCP/IP can communicate across interconnected networks of computers with diverse hardware architectures and operating systems. It includes standards for routing network traffic and offers advanced security features. Enabling SQL Server to use TCP/IP requires the most configuration effort, but most networked computers are already properly configured.

Virtual Interface Adapter (VIA) A protocol that works with VIA hardware. This is a specialized protocol; configuration details are available from your hardware vendor.
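From the client side, you can force a specific protocol by prefixing the server name, for example in a sqlcmd call (the server name here is hypothetical):

    REM Force TCP/IP, Named Pipes, or Shared Memory respectively
    sqlcmd -S tcp:MyServer,1433
    sqlcmd -S np:MyServer
    sqlcmd -S lpc:MyServer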