Friday, October 5, 2012

Windows: How To Find a User's Security Identifier (SID) in Windows

Reference: http://pcsupport.about.com/od/registry/ht/find-user-security-identifier.htm


How To Find a User's Security Identifier (SID) in Windows


There are many reasons why you might want to find the security identifier<http://pcsupport.about.com/od/termss/g/security-identifier.htm> (SID) for a particular user's account in Windows, but in my corner of the world, the common reason for doing so is to determine which key<http://pcsupport.about.com/od/termss/g/registy_subkey.htm> under HKEY_USERS<http://pcsupport.about.com/od/termshm/g/hkey_users.htm> in the Windows Registry<http://pcsupport.about.com/od/termsr/p/registrywindows.htm> holds that user's registry data.
Regardless of the reason for your need, matching SIDs to user names is really easy thanks to wmic, a command<http://pcsupport.about.com/od/termsc/g/commands.htm> available from the Command Prompt<http://pcsupport.about.com/od/termsc/p/command-prompt.htm> in most versions of Windows.
Follow these easy steps to display a table of user names and their corresponding SIDs:
Note: See Tip #1 for instructions on matching a user name to an SID via the Windows Registry, an alternative to the wmic command method described below. The wmic command didn't exist before Windows XP so you'll have to use the registry method in those older versions of Windows.
Difficulty: Easy
Time Required: It'll take less than a minute to find a user's SID in Windows
Here's How:
1. Open Command Prompt.
2. Once Command Prompt is open, type the following command<http://pcsupport.about.com/od/termsc/g/commands.htm> exactly as shown here, including spaces or lack thereof, and then press Enter:
wmic useraccount get name,sid
3. You should see a table, similar to the following, displayed in the Command Prompt window:
Name            SID
Administrator   S-1-5-21-1180699209-877415012-3182924384-500
Guest           S-1-5-21-1180699209-877415012-3182924384-501
HomeGroupUser$  S-1-5-21-1180699209-877415012-3182924384-1002
Tim             S-1-5-21-1180699209-877415012-3182924384-1004
UpdatusUser     S-1-5-21-1180699209-877415012-3182924384-1007
This is a list of each user account in Windows, listed by user name, followed by the account's corresponding SID.
4. Now that you're confident that a particular user name corresponds to a particular SID, you can make whatever changes you need to in the registry or do whatever else you needed this information for.
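Tip: if you only need a single account's SID, wmic can also filter on the user name; a variant of the same command (replace Tim with the account you're interested in):
wmic useraccount where name='Tim' get sid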
Tips:
1. How To Find User's SIDs in the Registry:
You can also determine a user's SID by looking through the ProfileImagePath values<http://pcsupport.about.com/od/termsv/g/registryvalue.htm> in each S-1-5-21 prefixed SID listed under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList key.
The ProfileImagePath value within each SID-named registry key lists the profile directory, which includes the user name.
For example, the ProfileImagePath value under the S-1-5-21-1180699209-877415012-3182924384-1004 key on my computer is C:\Users\Tim so I know that the SID for the user "Tim" is "S-1-5-21-1180699209-877415012-3182924384-1004".
Note: This method of matching users to SIDs will only show those users who are logged in or have logged in and switched users. To continue to use the registry method for determining other users' SIDs, you'll need to log in as each user on the system and repeat these steps. This is a big drawback so, assuming you're able, you're much better off using the wmic command method above.

Friday, September 7, 2012

SQL: Partition switching in SQL Server 2005

http://www.techrepublic.com/blog/datacenter/partition-switching-in-sql-server-2005/143

Partition switching in SQL Server 2005
By Tim Chapman<http://www.techrepublic.com/search?q=tim+chapman>
July 23, 2007, 11:21 AM PDT

Takeaway: Tim Chapman demonstrates how you can modify defined partitions in SQL Server 2005 so that you can easily and efficiently add new data.

Last week, I showed how you can use the new table and index partitioning features in SQL Server 2005 to split your table data across one or more filegroups<http://blogs.techrepublic.com.com/datacenter/?p=139>. In this article, I demonstrate how you can modify those defined partitions so that you can easily and efficiently add new data.

Partitioning

Partitioning is the separation of database rows from one large table into one or more smaller tables. There can be performance and administrative advantages to partitioning because the smaller tables are essentially used together as if they were one large table. These advantages can be even greater when you take into account the ability to easily add and delete data to and from your partitions.

Partition switching

Partition functions are used to map table field values to defined partitions. These partitions are initially defined when the function is created. However, it is sometimes necessary to change how your partitions are set up to accommodate new data, move current data, or delete data. Partition switching is the process of moving a block of data around in one or more tables.

An example
One common use for partitioning is the archiving of data, such as moving data from your OLTP database to your data warehouse for reporting purposes. This example looks at importing MP3 player product data into a SalesHistoryArchive table. For the purpose of this example, assume that a list of MP3Player product sales has been imported into the database and placed in a table named MP3Import.

This example is built upon the partitioning example I illustrated in my article on creating table partitions in SQL Server 2005<http://blogs.techrepublic.com.com/datacenter/?p=139>. The following script recreates the objects I used in that article. These objects include the partition function, partition scheme, and the SalesHistoryArchive table.

CREATE PARTITION FUNCTION [pf_Product_Partition](VARCHAR(10)) AS RANGE LEFT

FOR VALUES (N'BigScreen', N'Computer', N'PoolTable')

Go



CREATE PARTITION SCHEME [ps_Product_Scheme] AS PARTITION [pf_Product_Partition]

ALL TO ([PRIMARY])

GO



IF OBJECT_ID('SalesHistoryArchive')>0

DROP TABLE [SalesHistoryArchive];

GO

CREATE TABLE [dbo].[SalesHistoryArchive]

(

[SaleID] [int] IDENTITY(1,1),

[Product] [varchar](10) NULL,

[SaleDate] [datetime] NULL,

[SalePrice] [money] NULL

)ON [ps_Product_Scheme](Product)

GO



DECLARE @i SMALLINT

SET @i = 1

WHILE (@i <=10000)

BEGIN

INSERT INTO SalesHistoryArchive(Product, SaleDate, SalePrice)

VALUES('Computer', DATEADD(mm, @i, '3/11/1919'), DATEPART(ms, GETDATE()) + (@i + 57))

INSERT INTO SalesHistoryArchive(Product, SaleDate, SalePrice)

VALUES('BigScreen', DATEADD(mm, @i, '3/11/1927'), DATEPART(ms, GETDATE()) + (@i + 13))

INSERT INTO SalesHistoryArchive(Product, SaleDate, SalePrice)

VALUES('PoolTable', DATEADD(mm, @i, '3/11/1908'), DATEPART(ms, GETDATE()) + (@i + 29))

SET @i = @i + 1

END

GO

Now that I have data to use for the example, I can dig into the details of the new product import. The MP3Player product records that I will be importing will be placed in the SalesHistoryArchive table. This table is currently partitioned, and I want the MP3Player product to be placed on its own partition. To accomplish this, I will need to alter the partition function that I use on the SalesHistoryArchive table.

ALTER PARTITION FUNCTION [pf_Product_Partition] ()

SPLIT RANGE ('MP3Player')

Run the following query to verify that a new partition has been added for the SalesHistoryArchive table:

SELECT * From sys.partitions

WHERE OBJECT_NAME(object_id) = 'SalesHistoryArchive'
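
As a quick sanity check (a small sketch, assuming the boundary values above), the built-in $PARTITION function can tell you which partition a given Product value maps to:

SELECT $PARTITION.pf_Product_Partition('MP3Player') AS PartitionNumber
-- With RANGE LEFT and boundaries ('BigScreen','Computer','MP3Player','PoolTable'),
-- 'MP3Player' should map to partition 3, which matches the SWITCH used below.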

At this point, data can be inserted directly into the SalesHistory archive table, and any product of MP3Player will be placed in its own partition. However, data is inserted into the database from some type of external source and placed in its own individual database table. One great feature of partitioning in SQL Server 2005 is that it allows you to switch data from one table into another table very easily and quickly. It is so fast because the switching of the partition data only involves the changing of SQL Server's internal metadata, so no data is actually moved. The script below creates the MP3Import table, which I will assume has been imported into the database from an external data source.

CREATE TABLE [dbo].MP3Import

(

[SaleID] [int] IDENTITY(1,1),

[Product] [varchar](10) NULL,

[SaleDate] [datetime] NULL,

[SalePrice] [money] NULL

)ON [ps_Product_Scheme](Product)

GO

ALTER TABLE MP3Import

ADD CONSTRAINT ck_Product CHECK(Product = 'MP3Player')

GO

There are a couple of specific things to notice in the script above. First, the table structure of the MP3Import table is the same as the SalesHistoryArchive table, and the MP3Import table uses the same partitioning scheme as the SalesHistoryArchive table. Using the same partitioning scheme isn't strictly necessary for what I want to accomplish, but I think it makes things easier. Next, I am creating a check constraint on the table to ensure that only the MP3Player product is imported into the table. This ensures that when I switch the data, all of it goes into the exact partition that I want.

The next step is to load some data into the import table:

DECLARE @i SMALLINT

SET @i = 1

WHILE (@i <=10000)

BEGIN

INSERT INTO MP3Import(Product, SaleDate, SalePrice)

VALUES('MP3Player', DATEADD(mm, @i, '3/11/1919'), DATEPART(ms, GETDATE()) + (@i + 57))

SET @i = @i + 1

END

GO

I want to switch the data that is in the MP3Import table to the SalesHistoryArchive table. In previous versions of SQL Server, this would entail using an INSERT statement to add the data to the table. With SQL Server 2005, I can simply use the ALTER TABLE ... SWITCH statement to move the data pointers to point to the SalesHistoryArchive table. In the script below, I alter the MP3Import table and switch the data that resides in partition 3, where the data was initially inserted, to partition 3 on the SalesHistoryArchive table.

ALTER TABLE MP3Import

SWITCH PARTITION 3 TO SalesHistoryArchive PARTITION 3
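
To verify the switch, presumably the same sys.partitions check from earlier can be run again (the article's verification query isn't reproduced in this copy):

SELECT partition_number, rows
FROM sys.partitions
WHERE OBJECT_NAME(object_id) = 'SalesHistoryArchive'
-- partition 3 should now report the 10000 switched-in rows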

As the query above shows, the SalesHistoryArchive table now contains 10000 records in partition 3. Because the data in the MP3Import table is no longer needed, I can simply issue a DROP command on the table to remove it from the database.

DROP TABLE MP3Import

Because I have associated the data pointers with the new partition in the SalesHistoryArchive table, I can safely remove the MP3Import table.

To switch or not to switch

This article explores one scenario where it is advantageous to use partitioning to easily add data to your partitioned tables. It is worth noting that data can be removed as easily as it is added when the full partition is involved. While partitioning is a great advantage when it is suitable, it is not appropriate for all circumstances. Only through testing and analysis can you determine if partitioning, and thus partition switching, can improve your overall database scenario.

Assign to a T-SQL variable from a CASE statement

References: http://stackoverflow.com/questions/6945979/assign-to-a-t-sql-variable-from-a-case-statement


The example you've given should work. You can assign to variables from a case statement. Just pretend that the entire CASE..WHEN..THEN..ELSE..END block is a field. Here is a generic example:


declare
@string1 nvarchar(100) = null
,@string2 nvarchar(100) = null
;

select top 1
@string1 = case when 1=1 then 'yes' else 'no' end
,@string2 = case when 1=0 then 'yes' else 'no' end

print 'string1 = ' + @string1
print 'string2 = ' + @string2
Gives:
string1 = yes
string2 = no

SQL Partitioning (Horizontal & Vertical)

References: http://msdn.microsoft.com/en-us/library/ms178148(v=sql.105).aspx


Partitioning

Partitioning a database improves performance and simplifies maintenance. By splitting a large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan. Maintenance tasks, such as rebuilding indexes or backing up a table, can run more quickly.

Partitioning can be achieved without splitting tables by physically putting tables on individual disk drives. Putting a table on one physical drive and related tables on a separate drive can improve query performance because, when queries that involve joins between the tables are run, multiple disk heads read data at the same time. SQL Server filegroups can be used to specify on which disks to put the tables.
Hardware Partitioning

Hardware partitioning designs the database to take advantage of the available hardware architecture. Following are examples of hardware partitioning:

* Multiprocessors that enable multiple threads of operations, permitting many queries to run at the same time. Alternatively, a single query may be able to run faster on multiple processors by letting components of the query run at the same time. For example, each table referenced in the query can be scanned at the same time by a different thread.

* RAID (redundant array of independent disks) devices that enable data to be striped across multiple disk drives, permitting faster access to the data, because more read and write heads read data at the same time. A table striped across multiple drives can typically be scanned faster than the same table stored on a single drive. Alternatively, storing tables on separate drives from related tables can significantly improve the performance of queries joining those tables. For more information, see RAID<http://msdn.microsoft.com/en-us/library/ms184252(v=sql.105).aspx>.
Horizontal Partitioning

Horizontal partitioning divides a table into multiple tables. Each table then contains the same number of columns, but fewer rows. For example, a table that contains 1 billion rows could be partitioned horizontally into 12 tables, with each smaller table representing one month of data for a specific year. Any queries requiring data for a specific month only reference the appropriate table.

Determining how to partition the tables horizontally depends on how data is analyzed. You should partition the tables so that queries reference as few tables as possible. Otherwise, excessive UNION queries, used to merge the tables logically at query time, can affect performance. For more information about querying horizontally partitioned tables, see Scenarios for Using Views<http://msdn.microsoft.com/en-us/library/ms188250(v=sql.105).aspx>.
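
To make the UNION ALL idea concrete, here is a minimal sketch of a local partitioned view over two monthly member tables (all table and column names here are hypothetical):

CREATE TABLE SalesJan (SaleID int NOT NULL, SaleMonth tinyint NOT NULL CHECK (SaleMonth = 1), Amount money);
CREATE TABLE SalesFeb (SaleID int NOT NULL, SaleMonth tinyint NOT NULL CHECK (SaleMonth = 2), Amount money);
GO
CREATE VIEW SalesAll AS
SELECT SaleID, SaleMonth, Amount FROM SalesJan
UNION ALL
SELECT SaleID, SaleMonth, Amount FROM SalesFeb;
GO
-- Thanks to the CHECK constraints, this query only touches SalesFeb:
SELECT * FROM SalesAll WHERE SaleMonth = 2;
GO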

Partitioning data horizontally based on age and use is common. For example, a table may contain data for the last five years, but only data from the current year is regularly accessed. In this case, you may consider partitioning the data into five tables, with each table containing data from only one year.

For more information, see Partitioned Tables and Indexes<http://msdn.microsoft.com/en-us/library/ms188706(v=sql.105).aspx>.
Vertical Partitioning

Vertical partitioning divides a table into multiple tables that contain fewer columns. The two types of vertical partitioning are normalization and row splitting:

* Normalization is the standard database process of removing redundant columns from a table and putting them in secondary tables that are linked to the primary table by primary key and foreign key relationships.

* Row splitting divides the original table vertically into tables with fewer columns. Each logical row in a split table matches the same logical row in the other tables as identified by a UNIQUE KEY column that is identical in all of the partitioned tables. For example, joining the row with ID 712 from each split table re-creates the original row.

Like horizontal partitioning, vertical partitioning lets queries scan less data. This increases query performance. For example, a table that contains seven columns of which only the first four are generally referenced may benefit from splitting the last three columns into a separate table.

Vertical partitioning should be considered carefully, because analyzing data from multiple partitions requires queries that join the tables. Vertical partitioning also could affect performance if partitions are very large.
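
A minimal sketch of row splitting (hypothetical names again): the wide table is split into a frequently-read table and a rarely-read one sharing the same key, and the original logical row is re-created by joining on that key:

CREATE TABLE ProductCore (ProductID int PRIMARY KEY, Name varchar(50), Price money, Stock int);
CREATE TABLE ProductExtra (ProductID int PRIMARY KEY, LongDescription varchar(8000), ImageUrl varchar(400), Notes varchar(1000));
GO
-- Most queries need only the core columns and therefore scan less data:
SELECT ProductID, Name, Price FROM ProductCore;
-- Re-creating the full logical row requires a join on the shared key:
SELECT c.ProductID, c.Name, c.Price, c.Stock, e.LongDescription, e.ImageUrl, e.Notes
FROM ProductCore c
JOIN ProductExtra e ON e.ProductID = c.ProductID;
GO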

Wednesday, September 5, 2012

SQL: How to Truncate Log File in SQL Server 2005

http://www.codeproject.com/Articles/14400/How-to-Truncate-Log-File-in-SQL-Server-2005


How to Truncate Log File in SQL Server 2005

Introduction
SQL Server 2005 is quite different from SQL Server 2000, and truncating the log file is one of the things that works differently. In SQL Server 2000, you just shrink the log to whatever file size you like. In SQL Server 2005, sometimes I cannot shrink the log file at all.
Here I want to describe some tricks to truncate log file for a database in SQL Server 2005. The work environment is Microsoft SQL Server Management Studio.
I. Shrink the Log File Size at the Right Time
I found out this trick:
Immediately after I use the SSIS package or Import the data to the database (highlight the database->Tasks->Import data ...), or Export the data from the database (highlight the database->Tasks->Export data ...), I can shrink the log file to the desired size, for example, 1MB. That is, highlight the database->Tasks->Shrink->Files , set the file size, say, 1MB.
Then, click OK and you are done.
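The same shrink can be scripted with DBCC SHRINKFILE; a sketch, assuming the default logical log name testDev_log for the testDev database used later in this post (check sys.database_files if yours differs):

USE testDev;
GO
DBCC SHRINKFILE (testDev_log, 1);  -- target size in MB
GO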
II. Eliminate the Log File Completely
Sometimes, we just do not need the big log file. For example, I have a 40GB log file. I am sure I do not need this log file and want to get rid of it completely to free up the hard drive space. The logic is:

1. Detach the database
2. Rename the log file
3. Attach the database without the log file
4. Delete the log file
Let's say, the database name is testDev. In the SQL Server Management Studio,

1. Highlight the database -> Tasks -> Detach... -> Click OK
2. Go to log file folder -> rename the testDev_log.ldf to be like testDev_log-aa.ldf
3. Highlight Databases->Attach...-> Click Add -> add the database testDev, highlight the log file and click the 'Remove' button. This means you only attach testDev.mdf
4. After this is done, you can verify the contents of the attached database and then delete the log file
This way we can safely delete the log file and free up the space.
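
For reference, a rough T-SQL equivalent of the detach/attach steps (the .mdf path below is hypothetical; this is safe only if the database was cleanly detached):

USE master;
GO
EXEC sp_detach_db 'testDev';
GO
-- rename or move testDev_log.ldf in the file system, then:
CREATE DATABASE testDev
ON (FILENAME = 'C:\Data\testDev.mdf')  -- hypothetical path to the .mdf
FOR ATTACH_REBUILD_LOG;  -- builds a new minimal log since no .ldf is supplied
GO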

SQL: Row_Number() to delete duplicate records

http://www.databasejournal.com/features/mssql/article.php/3577481/RowNumber-function-in-SQL-Server-2005-150-Part-II.htm
Row_Number() function in SQL Server 2005 - Part II

By Muthusamy Anantha Kumar aka The MAK

In part I<http://www.databasejournal.com/features/mssql/article.php/3572301> of this article, we saw how to generate row numbers and delete a simple single duplicate row. This article walks you through how to delete multiple duplicate rows, using the new Row_Number() function, a common table expression, and an ANSI correlated subquery.

Let us assume we have the following table, Mytable, in the database MyDB. Let us create the database MyDB and MyTable by using the following script.

USE [MASTER]
GO

IF EXISTS (SELECT NAME FROM SYS.DATABASES WHERE NAME = N'MYDB')
DROP DATABASE [MYDB]
GO

CREATE DATABASE MYDB
GO

USE [MYDB]
GO

IF EXISTS (SELECT * FROM SYS.OBJECTS
           WHERE OBJECT_ID = OBJECT_ID(N'[DBO].[MYTABLE]')
           AND TYPE IN (N'U'))
DROP TABLE [DBO].[MYTABLE]
GO

CREATE TABLE MYTABLE (ID INT, FNAME VARCHAR(50), LNAME VARCHAR(50))
GO

insert into mytable select 1,'Jen','Ambelang'

insert into mytable select 11,'Jiong','Hong'

insert into mytable select 25,'Sandra','Mator'

insert into mytable select 35,'Chun','Chang'

insert into mytable select 21,'Yuki','Fukushima'

insert into mytable select 1,'Jen','Ambelang'

insert into mytable select 1,'Jen','Ambelang'

insert into mytable select 25,'Sandra','Mator'

insert into mytable select 25,'Sandra','Mator'

go

Let us query all the rows from the table MyTable using the following Select query.

SELECT * FROM MYTABLE ORDER BY ID

This query displays the following results.


ID          FNAME       LNAME
----------- ----------- -----------
1           Jen         Ambelang
1           Jen         Ambelang
1           Jen         Ambelang
11          Jiong       Hong
21          Yuki        Fukushima
25          Sandra      Mator
25          Sandra      Mator
25          Sandra      Mator
35          Chun        Chang


From the results, it is very clear that we have duplicate rows: 1, Jen, Ambelang and 25, Sandra, Mator.

Step 1

The first step in deleting duplicate rows is to generate a unique row id for the entire table. This can be done using the Row_Number() function.

Let us generate the unique numbers by executing the following query:

SELECT ROW_NUMBER() OVER (ORDER BY ID) AS ROW,
       ID, LNAME, FNAME
FROM MYTABLE

This query produces the following results.

ROW  ID          LNAME       FNAME
---- ----------- ----------- -----------
1    1           Ambelang    Jen
2    1           Ambelang    Jen
3    1           Ambelang    Jen
4    11          Hong        Jiong
5    21          Fukushima   Yuki
6    25          Mator       Sandra
7    25          Mator       Sandra
8    25          Mator       Sandra
9    35          Chang       Chun


Step 2

The second step in deleting duplicate rows is to generate unique row ids for every group.

Now, by using a correlated subquery, we can produce unique row ids for each group.

SELECT ROW,
       GROUPROW = CASE WHEN ID = ID THEN
           (SELECT COUNT(*)
            FROM (SELECT ROW_NUMBER() OVER (ORDER BY ID) AS ROW,
                         ID, LNAME, FNAME
                  FROM MYTABLE) AS A
            WHERE A.ID = B.ID AND A.ROW < B.ROW) + 1
       END,
       ID, FNAME, LNAME
FROM (SELECT ROW_NUMBER() OVER (ORDER BY ID) AS ROW,
             ID, LNAME, FNAME
      FROM MYTABLE) AS B

This query would produce the following results with a unique id for each group.

ROW  GROUPROW  ID          FNAME       LNAME
---- --------- ----------- ----------- -----------
1    1         1           Jen         Ambelang
2    2         1           Jen         Ambelang
3    3         1           Jen         Ambelang
4    1         11          Jiong       Hong
5    1         21          Yuki        Fukushima
6    1         25          Sandra      Mator
7    2         25          Sandra      Mator
8    3         25          Sandra      Mator
9    1         35          Chun        Chang


Step 3

The last step in deleting duplicate rows is to use the common table expression, as shown below.

WITH DUPLICATE (ROW, GROUPROW, ID, FNAME, LNAME)
AS
(
    SELECT ROW,
           GROUPROW = CASE WHEN ID = ID THEN
               (SELECT COUNT(*)
                FROM (SELECT ROW_NUMBER() OVER (ORDER BY ID) AS ROW,
                             ID, LNAME, FNAME
                      FROM MYTABLE) AS A
                WHERE A.ID = B.ID AND A.ROW < B.ROW) + 1
           END,
           ID, FNAME, LNAME
    FROM (SELECT ROW_NUMBER() OVER (ORDER BY ID) AS ROW,
                 ID, LNAME, FNAME
          FROM MYTABLE) AS B
)
DELETE FROM DUPLICATE WHERE GROUPROW <> 1

Let us query all the rows from the source table, Mytable, by using the following Select Query.

SELECT * FROM MYTABLE ORDER BY ID

This query displays the following results:

ID          FNAME       LNAME
----------- ----------- -----------
1           Jen         Ambelang
11          Jiong       Hong
21          Yuki        Fukushima
25          Sandra      Mator
35          Chun        Chang


From the results, it is clear that duplicates from the table, Mytable, have been removed.
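
As an aside, the same per-group numbering can be produced directly with ROW_NUMBER's PARTITION BY clause, which makes the correlated subquery unnecessary; a sketch of the equivalent delete:

WITH DUPLICATE AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY ID ORDER BY ID) AS GROUPROW
    FROM MYTABLE
)
DELETE FROM DUPLICATE WHERE GROUPROW <> 1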

Conclusion

The main intent of this article was to demonstrate the use of SQL Server 2005's new Row_Number() function and common table expressions, with the help of a correlated subquery, to delete duplicate rows.

Tuesday, September 4, 2012

SQL: Data partitioning in SQL Server 2005 - Part V (nice example)

Reference: http://www.databasejournal.com/features/mssql/article.php/3647561/Data-partitioning-in-SQL-Server-2005---Part-V.htm

Data partitioning in SQL Server 2005 - Part V


By Muthusamy Anantha Kumar aka The MAK

In Part IV<http://www.databasejournal.com/features/mssql/article.php/3647271> of this article series, we saw how to partition an existing table based on a certain range of dates.

Part V illustrates how to merge and split existing partitions.

First let us create a database with a table. Then let us insert data and then partition the table.

Step 1
Let us assume we have created the folder, C:\Data2, and the following subfolders. [Refer Fig 1.0]
C:\Data2\Primary
C:\Data2\FG1
C:\Data2\FG2
C:\Data2\FG3
C:\Data2\FG4
[Fig 1.0: the C:\Data2 folder with the Primary and FG1-FG4 subfolders]
Step 2
Let us assume that we have a database, Data Partition DB5, with five different file groups, as shown below.
use master
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'Data Partition DB5')
DROP DATABASE [Data Partition DB5]
GO
CREATE DATABASE [Data Partition DB5]
ON PRIMARY
(NAME='Data Partition DB5',
FILENAME=
'C:\Data2\Primary\Data Partition DB5.mdf',
SIZE=5,
MAXSIZE=500,
FILEGROWTH=1 ),
FILEGROUP [Data Partition DB5 FG1]
(NAME = 'Data Partition DB5 FG1',
FILENAME =
'C:\Data2\FG1\Data Partition DB5 FG1.ndf',
SIZE = 5MB,
MAXSIZE=500,
FILEGROWTH=1 ),
FILEGROUP [Data Partition DB5 FG2]
(NAME = 'Data Partition DB5 FG2',
FILENAME =
'C:\Data2\FG2\Data Partition DB5 FG2.ndf',
SIZE = 5MB,
MAXSIZE=500,
FILEGROWTH=1 ),
FILEGROUP [Data Partition DB5 FG3]
(NAME = 'Data Partition DB5 FG3',
FILENAME =
'C:\Data2\FG3\Data Partition DB5 FG3.ndf',
SIZE = 5MB,
MAXSIZE=500,
FILEGROWTH=1 ),
FILEGROUP [Data Partition DB5 FG4]
(NAME = 'Data Partition DB5 FG4',
FILENAME =
'C:\Data2\FG4\Data Partition DB5 FG4.ndf',
SIZE = 5MB,
MAXSIZE=500,
FILEGROWTH=1 )
Step 3
Let us assume that we have a table on the primary file group. You could create that table by executing the following T-SQL statement.
USE [Data Partition DB5]
go
CREATE TABLE MyTable
(ID INT NOT NULL,
Date DATETIME,
Cost money ) on [primary]
Step 4
Now let us create a unique clustered index on the table by executing the following T-SQL statement.
USE [Data Partition DB5]
go
CREATE UNIQUE CLUSTERED INDEX MyTable_IXC
ON MyTable(ID) on [PRIMARY]
Step 5
Now let us insert some data into the table, MyTable, using the following T-SQL statements.
USE [Data Partition DB5]
go
declare @count int
set @count =-25
while @count <=100
begin
insert into MyTable select @count,getdate(),100.00
set @count=@count+1
end
set @count =101
while @count <=200
begin
insert into MyTable select @count,getdate(),200.00
set @count=@count+1
end
set @count =201
while @count <=300
begin
insert into MyTable select @count,getdate(),300.00
set @count=@count+1
end
set @count =301
while @count <=400
begin
insert into MyTable select @count,getdate(),400.00
set @count=@count+1
end
set @count =401
while @count <=800
begin
insert into MyTable select @count,getdate(),500.00
set @count=@count+1
end
Let us query the table using the T-SQL statement shown below.

select * from MyTable

ID Date Cost

----------- ----------------------- ---------------------

-25 2006-11-09 15:12:01.177 100.00

-24 2006-11-09 15:12:01.207 100.00

-23 2006-11-09 15:12:01.207 100.00

-22 2006-11-09 15:12:01.223 100.00

-21 2006-11-09 15:12:01.223 100.00

-20 2006-11-09 15:12:01.223 100.00

-19 2006-11-09 15:12:01.240 100.00

.

.

.

0 2006-11-09 15:12:01.347 100.00

1 2006-11-09 15:12:01.347 100.00

2 2006-11-09 15:12:01.363 100.00

3 2006-11-09 15:12:01.363 100.00

.

.

792 2006-11-09 15:12:06.270 500.00

793 2006-11-09 15:12:06.270 500.00

794 2006-11-09 15:12:06.287 500.00

795 2006-11-09 15:12:06.287 500.00

796 2006-11-09 15:12:06.287 500.00

797 2006-11-09 15:12:06.300 500.00

798 2006-11-09 15:12:06.300 500.00

799 2006-11-09 15:12:06.317 500.00

800 2006-11-09 15:12:06.317 500.00

Now let us query the partition information of this table by executing the following query.

select * from sys.partitions where object_name(object_id)='MyTable'

You can see the result as shown below [Refer Fig 1.1]. It is obvious that all of the 826 rows are in the same partition.

[Fig 1.1: sys.partitions output - all 826 rows of MyTable in partition 1]

Step 6

Partitioning of the tables depends on the partition range defined by Partition Function. Let us assume that we are going to partition the table into four parts, onto four different file groups.

use [Data Partition DB5]

GO

CREATE PARTITION FUNCTION [Data Partition Range](int)

AS RANGE LEFT FOR VALUES (100,200,300)

When we use LEFT clause in the range, the first partition could hold values from negative infinity to 100, the second partition could hold values from 101 to 200, the third partition could hold values from 201 to 300 and the fourth partition could hold values from 301 to infinity or whatever the max range of the integer value.

When we use RIGHT clause in the range, the first partition could hold values from negative infinity to 99, the second partition could hold values from 100 to 199, the third partition could hold values from 200 to 299 and the fourth partition could hold values from 300 to Infinity or whatever the max range of the integer value.
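
A quick way to confirm this boundary behavior is the built-in $PARTITION function (a small check against the function created above):

USE [Data Partition DB5]
GO
SELECT $PARTITION.[Data Partition Range](100) AS P100,  -- returns 1 with RANGE LEFT
       $PARTITION.[Data Partition Range](101) AS P101   -- returns 2 with RANGE LEFT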

Execute the following query to see the partition information.

SELECT * FROM sys.partition_range_values

Step 7

Partition Function is not useful, unless it is associated with the proper file groups that we have created.

Let us assume that we are going to use file group [Data Partition DB5 FG1], [Data Partition DB5 FG2], [Data Partition DB5 FG3], [Data Partition DB5 FG4] for the partition table that we are going to create. This can be created as shown below.

USE [Data Partition DB5]

go

CREATE PARTITION SCHEME [Data Partition Scheme]

AS PARTITION [Data Partition Range]

TO ([Data Partition DB5 FG1], [Data Partition DB5 FG2], [Data Partition DB5 FG3],[Data Partition DB5 FG4]);

Step 8
Now, let us move the table to the new partition by using the following TSQL command.

Drop index MyTable_IXC on MyTable with (Move To [Data Partition Scheme] (ID) )

Now, let us query the partition information by using the following query.

select * from sys.partitions where object_name(object_id)='MyTable'

You can see the result as shown below [Refer Fig 1.2]. It is obvious that all of the 826 rows are spread across all four file groups, based on the partition range.

[Fig 1.2: sys.partitions output - the 826 rows spread across the four partitions]

Now we have MyTable partitioned into four different partitions. It is time to merge the first two partitions.

Merge partition

Execute the following T-SQL statement to merge the first two partitions together.

Alter partition function [Data Partition Range]()

merge range(100)

Now, let us query the partition information by using the following query.

select * from sys.partitions where object_name(object_id)='MyTable'

You will see the following results showing the merge of partitions 1 and 2. [Refer Fig 1.3]

[Fig 1.3: sys.partitions output after merging partitions 1 and 2]

Split partition

Execute the following T-SQL statement to split the last partition into two partitions.

Alter partition function [Data Partition Range]()

split range(300)

When executed you will get the following error.

Msg 7707, Level 16, State 1, Line 1

The associated partition function 'Data Partition Range' generates

more partitions than there are file groups mentioned in the scheme

'Data Partition Scheme'.

In order to avoid this, we have to add another file group to the database.

Let us execute the below T-SQL statement, which will add a new file group to the database.

Alter database [Data Partition DB5] add FILEGROUP [Data Partition DB5 FG5]

go

alter database [Data Partition DB5]

ADD FILE

(NAME = 'Data Partition DB5 FG5',

FILENAME =

'C:\Data2\FG4\Data Partition DB5 FG5.ndf',

SIZE = 5MB,

MAXSIZE=500,

FILEGROWTH=1 )

TO FILEGROUP [Data Partition DB5 FG5]

Go

Now let us try again to split the partition range by executing the following T-SQL statement.

Use [Data Partition DB5]

go

ALTER PARTITION SCHEME [Data Partition Scheme]

NEXT USED [Data Partition DB5 FG5]

GO

Alter partition function [Data Partition Range]()

split range(500)

Note: We should make SQL Server aware of which filegroup it should use for the new partition it creates when splitting. The NEXT USED clause does that.

Now, let us query the partition information by using the following query.

select * from sys.partitions where object_name(object_id)='MyTable'

You will see the following results showing the split of partition number 3. [Refer Fig 1.4]

[Fig 1.4: sys.partitions output after the split at 500]

You can see that any ID values between 300 and 500 are moved to partition 3 and any ID values more than 500 are moved to partition 4.

Conclusion

Part V of this article series has illustrated how to merge and split existing partitions.

Monday, August 27, 2012

Implementing Semaphore At Database Level (SQL Server)

Hi,

I am Bishnu Patro.
Recently, I had a requirement to limit the number of simultaneous executions of a particular stored proc (the limit should be configurable, depending upon the available memory). The reason is that if the stored proc is executed too many times simultaneously, it takes much longer to execute, since all available memory is consumed and there is no free memory left. So, we decided to limit the number of parallel executions at the stored procedure level and make that limit configurable. For example, if it is configured to run the stored proc simultaneously to a maximum of 2, then when the next request comes in for execution of the stored proc, that request will wait for one of the running executions to complete.

I have implemented this with a customized semaphore at the database level.

I have created a table, MultipleInstanceJobConfiguration, to keep track of the number of simultaneous executions of the stored proc.


CREATE TABLE [dbo].[MultipleInstanceJobConfiguration](
[JobType] [nvarchar](50) NULL,
[Total] [smallint] NULL,
[Available] [smallint] NULL,
[IsEnabled] [bit] NULL
) ON [PRIMARY]


Here, JobType is the name of the stored proc.
Total is the maximum number of simultaneous executions of the stored proc.
Available is how many execution slots are free at any given point.
IsEnabled controls whether this feature is enabled or disabled.


The following statement inserts a row for the job type SI:

INSERT INTO [dbo].[MultipleInstanceJobConfiguration] ([JobType] ,[Total] ,[Available] ,[IsEnabled]) VALUES ('SI',2,2,1)
GO


The following stored proc is used to take a lock on an object:


CREATE Proc [dbo].[AcqLock]
@Resource_Name1 as nvarchar(255),
@Lock_Mode1 as nvarchar(32),
@Lock_TimeOut1 as int
As

DECLARE @LockResult int

EXECUTE @LockResult = sp_getapplock
@Resource = @Resource_Name1,
@LockMode = @Lock_Mode1,
@LockOwner = 'Session',
@LockTimeout = @Lock_TimeOut1


while( @LockResult <> 0)
BEGIN
WAITFOR DELAY '00:00:01'
EXECUTE @LockResult = sp_getapplock
@Resource = @Resource_Name1,
@LockMode = @Lock_Mode1,
@LockOwner = 'Session',
@LockTimeout = @Lock_TimeOut1
END
GO


The following stored proc is used to release a lock on an object:


CREATE Proc [dbo].[ReleaseLock]
@Resource_Name1 as nvarchar(255)
As
EXECUTE sp_releaseapplock
@Resource = @Resource_Name1, @LockOwner = 'Session'
GO


The following stored proc is an example implementing the semaphore:


CREATE PROC [dbo].[StoredProcUsingSemaphore]
AS

--Updating Configuration Available Count On Start of stored proc
declare @total int
declare @available int
declare @isEnabled bit
declare @loadID int -- load identifier used by the author's logging table; set it from your own context
select @total = Total, @available = Available, @isEnabled = IsEnabled
from dbo.MultipleInstanceJobConfiguration with (nolock) where JobType like 'SI'

WHILE (@total > 0 and @isEnabled = 1)
BEGIN
    if (@available > 0)
    BEGIN
        Begin Try
            EXEC dbo.AcqLock 'MY_MULTIPLEINSTANCES', 'Exclusive', 0
            -- re-read the counts under the lock before taking a slot
            select @available = Available, @isEnabled = IsEnabled
            from dbo.MultipleInstanceJobConfiguration with (nolock) where JobType like 'SI'
            if (@available > 0 and @isEnabled = 1)
            BEGIN
                Update dbo.MultipleInstanceJobConfiguration
                Set Available = Available - 1
                where JobType like 'SI'

                INSERT INTO MetricsLoadProcessingLog VALUES (@loadID, 'SI_Job_Configuration', null, 1, 'Decremented Available Count', GETUTCDATE())

                Exec dbo.ReleaseLock 'MY_MULTIPLEINSTANCES'
                BREAK
            END
            -- no slot was free after re-checking: release the lock before waiting,
            -- so other sessions are not blocked during the delay
            Exec dbo.ReleaseLock 'MY_MULTIPLEINSTANCES'
        End Try
        Begin Catch
            Exec dbo.ReleaseLock 'MY_MULTIPLEINSTANCES'
        End Catch

    END
    WAITFOR DELAY '00:00:15'
    select @available = Available, @isEnabled = IsEnabled
    from dbo.MultipleInstanceJobConfiguration with (nolock) where JobType like 'SI'

END


----Your piece of code will go here.....


-----Updating Available Count on Finish
if (@total > 0 and @isEnabled = 1)
BEGIN
    EXEC dbo.AcqLock 'MY_MULTIPLEINSTANCES', 'Exclusive', 0

    Update dbo.MultipleInstanceJobConfiguration
    Set Available = Available + 1
    where JobType like 'SI' and Available < Total

    Exec dbo.ReleaseLock 'MY_MULTIPLEINSTANCES'
END

GO


Please let me know if the post was useful....

Thursday, August 16, 2012

SQL: Tips For Lightning-Fast Insert Performance On SQL Server

http://arnosoftwaredev.blogspot.in/2011/10/tips-for-lightning-fast-insert.html

Tips For Lightning-Fast Insert Performance On SQL Server
1. Increase ADO.NET BatchSize to eliminate unnecessary network roundtrips, e.g. SqlDataAdapter.UpdateBatchSize<http://msdn.microsoft.com/en-us/library/xw5z0hdd(v=VS.100).aspx> when working with DataAdapters, or SqlBulkCopy.BatchSize<http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.batchsize.aspx> when applying SqlBulkCopy.

2. Limit the number of indices on the target table to what is really essential for query performance.

3. Place indices on master table columns referenced by the target table's foreign keys.

4. Choose the target table's clustered index wisely, so that inserts won't lead to clustered index node splits. Usually an identity column (AKA "autoinc") is a good choice. If you don't have autoinc primary keys, consider introducing a new identity column just for the sake of making it your clustered index.

5. Let the client insert into a temporary heap table first (that is, a table that has no clustered index, or no index at all). Then, issue one big "insert-into-select"<http://blog.sqlauthority.com/2007/08/15/sql-server-insert-data-from-one-table-to-another-table-insert-into-select-select-into-table/> statement to push all that staging table data into the actual target table. The "insert-into-select" statement must contain an "order-by" clause which guarantees ordering by clustered index (see the sketch after this list).

6. Apply SqlBulkCopy<http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx>.

7. Decrease transaction logging by choosing the bulk-logged recovery model<http://msdn.microsoft.com/en-us/library/ms189275.aspx>, or by setting SQL Server trace flag 610<http://sqlserverplanet.com/sql-server-2008/sql-server-2008-minimally-logged-inserts/>.

8. If your business scenario allows for it, place a table lock before inserting. This will make any further locking unnecessary, and is especially a viable option on staging tables as described in (5). SqlBulkCopy also supports table locks via SqlBulkCopyOptions<http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopyoptions.aspx>.

9. Place database datafile and logfile on two physically separated devices, e.g. on two disks or two SAN LUNs configured for different spindles.

10. Prefer server-side processing (e.g. by means of "insert-into-select") to client-to-server roundtrips wherever possible.

11. This is probably the fastest insert-approach I have ever heard of (taken from this sqlbi whitepaper<http://www.sqlbi.eu/LinkClick.aspx?fileticket=svahq1Mpp9A%3D&tabid=169&mid=375>, see final paragraph): Create a new heap table just for the current insert batch, SqlBulkCopy data into that table, then create a suited clustered index on the table, and add the table as a new table partition to an existing partitioned table.

12. Check the execution plan when inserting, and make sure it does not contain anything unexpected or dispensable that might slow down your inserts, e.g. UDF calls during check constraint execution, heavyweight trigger code, referential integrity checks without index usage, or indexed view updates.
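
Here is a minimal T-SQL sketch of the staging-table pattern from tips 5 and 8 (table and column names are hypothetical; the target table is assumed to be clustered on SaleID):

-- Heap staging table: no indexes, so the initial load is cheap
CREATE TABLE StagingSales (SaleID int NOT NULL, SaleDate datetime, Amount money);
GO
-- ... bulk load StagingSales here, e.g. via SqlBulkCopy ...

-- One big server-side insert; TABLOCK applies the table lock from tip 8,
-- and ORDER BY matches the target's clustered index as tip 5 requires
INSERT INTO TargetSales WITH (TABLOCK) (SaleID, SaleDate, Amount)
SELECT SaleID, SaleDate, Amount
FROM StagingSales
ORDER BY SaleID;
GO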

Thursday, August 9, 2012

SQL Server 2005: Immediate Deadlock notifications (example)

http://weblogs.sqlteam.com/mladenp/archive/2008/05/21/SQL-Server-2005-Immediate-Deadlock-notifications.aspx


SQL Server 2005: Immediate Deadlock notifications<http://weblogs.sqlteam.com/mladenp/archive/2008/05/21/SQL-Server-2005-Immediate-Deadlock-notifications.aspx>



Deadlocks... huh??
Deadlocks can be a pain to debug since they're so rare and unpredictable. The problem lies in repeating them in your dev environment. That's why it's crucial to have as much information about them from the production environment as possible.
There are two ways to monitor deadlocks, which I'll talk about in future posts. Those are SQL Server tracing and error log checking. Unfortunately both of them suffer from the same thing: you don't know immediately when a deadlock occurs. Getting this info as soon as possible is sometimes crucial in production environments. Sure you can always set trace flag 1222 on, but this still doesn't solve the immediate notification problem.
One problem for some might be that this method is only truly useful if you limit data access to stored procedures. <joke> So all you ORM lovers stop reading since this doesn't apply to you anymore! </joke>
The other problem is that it requires a rewrite of the problematic stored procedures to support it. However, since SQL Server 2005 came out, my opinion is that every stored procedure should have the try ... catch block implemented. There's no visible performance hit from this and the benefits can be huge. One of those benefits is instant deadlock notifications.
Needed "infrastructure"
So let's see how it's done. This must be implemented in the database you wish to monitor, of course.
First we need a view that will get lock info about the deadlock that just happened. You can read why this type of query gives info we need in my previous post<http://weblogs.sqlteam.com/mladenp/archive/2008/04/29/SQL-Server-2005-Get-full-information-about-transaction-locks.aspx>.
CREATE VIEW vLocks
AS
SELECT L.request_session_id AS SPID,
DB_NAME(L.resource_database_id) AS DatabaseName,
O.Name AS LockedObjectName,
P.object_id AS LockedObjectId,
L.resource_type AS LockedResource,
L.request_mode AS LockType,
ST.text AS SqlStatementText,
ES.login_name AS LoginName,
ES.host_name AS HostName,
TST.is_user_transaction AS IsUserTransaction,
AT.name AS TransactionName
FROM sys.dm_tran_locks L
LEFT JOIN sys.partitions P ON P.hobt_id = L.resource_associated_entity_id
LEFT JOIN sys.objects O ON O.object_id = P.object_id
LEFT JOIN sys.dm_exec_sessions ES ON ES.session_id = L.request_session_id
LEFT JOIN sys.dm_tran_session_transactions TST ON ES.session_id = TST.session_id
LEFT JOIN sys.dm_tran_active_transactions AT ON TST.transaction_id = AT.transaction_id
LEFT JOIN sys.dm_exec_requests ER ON AT.transaction_id = ER.transaction_id
CROSS APPLY sys.dm_exec_sql_text(ER.sql_handle) AS ST
WHERE resource_database_id = db_id()
GO
Next we have to create our stored procedure template:
CREATE PROC <ProcedureName>
AS
BEGIN TRAN
BEGIN TRY

<SPROC TEXT GOES HERE>

COMMIT
END TRY
BEGIN CATCH
-- check transaction state
IF XACT_STATE() = -1
BEGIN
DECLARE @message xml
-- get our deadlock info FROM the VIEW
SET @message = '<TransactionLocks>' + (SELECT * FROM vLocks ORDER BY SPID FOR XML PATH('TransactionLock')) + '</TransactionLocks>'

-- issue ROLLBACK so we don't ROLLBACK mail sending
ROLLBACK

-- get our error message and number
DECLARE @ErrorNumber INT, @ErrorMessage NVARCHAR(2048)
SELECT @ErrorNumber = ERROR_NUMBER(), @ErrorMessage = ERROR_MESSAGE()

-- if it's deadlock error send mail notification
IF @ErrorNumber = 1205
BEGIN
DECLARE @MailBody NVARCHAR(max)
-- create our mail body in the xml format. you can change this to your liking.
SELECT @MailBody = '<DeadlockNotification>'
+
(SELECT 'Error number: ' + isnull(CAST(@ErrorNumber AS VARCHAR(5)), '-1') + CHAR(10) +
'Error message: ' + isnull(@ErrorMessage, ' NO error message') + CHAR(10)
FOR XML PATH('ErrorMeassage'))
+
CAST(ISNULL(@message, '') AS NVARCHAR(MAX))
+
'</DeadlockNotification>'
-- for testing purposes
-- SELECT CAST(@MailBody AS XML)

-- send an email with the defined email profile.
-- since this is async it doesn't halt execution
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'your mail profile',
@recipients = 'dba@yourCompany.com',
@subject = 'Deadlock occured notification',
@body = @MailBody;
END
END
END CATCH
GO
The main part of this stored procedure is of course the CATCH block. The first line in there is a check of the XACT_STATE()<http://msdn.microsoft.com/en-us/library/ms189797.aspx> value. This is a scalar function that reports the user transaction state. -1 means that the transaction is uncommittable and has to be rolled back. This is the state of the victim transaction in the internal deadlock killing process. Next we read from our vLocks view to get the full info (SPID, both SQL statements' text, values, etc...) about both SPIDs that created the deadlock. This is possible since our deadlock victim transaction hasn't been rolled back yet and the locks are still present. We save this data into an XML message. Next we roll back our transaction to release the locks. With the error message and its corresponding number we check if the error is 1205 - deadlock - and if it is we send our message in an email. How to configure database mail can be seen here<http://msdn.microsoft.com/en-us/library/ms175887.aspx>.
Both the view and the stored procedures template can and probably should be customized to suit your needs.
Testing the theory
Let's try it out and see how it works with a textbook deadlock example that you can find in every book or tutorial.
-- create our deadlock table with 2 simple rows
CREATE TABLE DeadlockTest ( id INT)
INSERT INTO DeadlockTest
SELECT 1 UNION ALL
SELECT 2
GO
Next create two stored procedures (spProc1 and spProc2) with our template:
For spProc1 replace <SPROC TEXT GOES HERE> in the template with:
UPDATE DeadlockTest
SET id = 12
WHERE id = 2

-- wait 5 secs TO SET up deadlock condition IN other window
WAITFOR DELAY '00:00:05'

UPDATE DeadlockTest
SET id = 11
WHERE id = 1
For spProc2 replace <SPROC TEXT GOES HERE> in the template with:
UPDATE DeadlockTest
SET id = 11
WHERE id = 1

-- wait 5 secs TO SET up deadlock condition IN other window
WAITFOR DELAY '00:00:05'

UPDATE DeadlockTest
SET id = 12
WHERE id = 2
Next open 2 query windows in SSMS:
In window 1 run put this script:
exec spProc1
In window 2 put this script:
exec spProc2
Run the script in the first window and after a second or two run the script in the second window. A deadlock will happen and a few moments after the victim transaction fails you should get the notification mail. The mail profile has to be properly configured of course.
The resulting email should contain XML with full info about the deadlock. You can view it by commenting out the msdb.dbo.sp_send_dbmail execution and uncommenting the SELECT CAST(@MailBody AS XML) line.
If you fire up the SQL Profiler, reset the table values, rerun both scripts and observe the deadlock graph event you should get a picture similar to this one:
[Deadlock graph as shown in SQL Profiler: http://weblogs.sqlteam.com/images/weblogs_sqlteam_com/mladenp/WindowsLiveWriter/SQLServer2005SendmailwhenDeadlockhappens_124F2/deadlockProfiler_6.png]
End ? ... for now... but more to come soon!
With the above technique we can get mostly the same info as with the trace flags, with the advantage of knowing about a deadlock immediately after it happens; plus, if we want, we can extend this to notify/log about every error, which I actually prefer to do.
We can see just how awesome the new async functionality like database mail, built on top of the SQL Server Service Broker<http://weblogs.sqlteam.com/mladenp/archive/2007/08/21/Service-Broker-goodies-Cross-Server-Many-to-One-One-to.aspx>, is. And instead of sending mail you can put the error in an error logging table in the same async way, which I've demonstrated here<http://www.sqlteam.com/article/centralized-asynchronous-auditing-with-service-broker>.

Thursday, July 5, 2012

SQL Server Isolation Levels By Example (Nice One)

http://www.gavindraper.co.uk/2012/02/18/sql-server-isolation-levels-by-example/


SQL Server Isolation Levels By Example
Posted on February 18, 2012<http://www.gavindraper.co.uk/2012/02/18/sql-server-isolation-levels-by-example/> by Gavin<http://www.gavindraper.co.uk/author/admin/>

Isolation levels in SQL Server control the way locking works between transactions.
SQL Server 2008 supports the following isolation levels
§ Read Uncommitted
§ Read Committed (The default)
§ Repeatable Read
§ Serializable
§ Snapshot
Before I run through each of these in detail you may want to create a new database to run the examples, run the following script on the new database to create the sample data. Note : You'll also want to drop the IsolationTests table and re-run this script before each example to reset the data.
CREATE TABLE IsolationTests
(
Id INT IDENTITY,
Col1 INT,
Col2 INT,
Col3 INT
)
INSERT INTO IsolationTests(Col1,Col2,Col3)
SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3

Also before we go any further it is important to understand these two terms....
1. Dirty Reads - This is when you read uncommitted data; there is no guarantee that the data read will ever be committed, meaning the data could well be bad.
2. Phantom Reads - This is when data that you are working with has been changed, or new matching rows have been inserted, by another transaction since you first read it. This means subsequent reads of this data in the same transaction could well be different.
Read Uncommitted
This is the lowest isolation level there is. Read uncommitted causes no shared locks to be requested which allows you to read data that is currently being modified in other transactions. It also allows other transactions to modify data that you are reading.
As you can probably imagine this can cause some unexpected results in a variety of different ways. For example, data returned by the select could be in a halfway state if an update was running in another transaction, causing some of your rows to come back with the updated values and some not to.
To see read uncommitted in action, let's run Query1 in one tab of Management Studio and then quickly run Query2 in another tab before Query1 completes.
Query1
BEGIN TRAN
UPDATE IsolationTests SET Col1 = 2
--Simulate having some intensive processing here with a wait
WAITFOR DELAY '00:00:10'
ROLLBACK

Query2
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM IsolationTests

Notice that Query2 will not wait for Query1 to finish; more importantly, Query2 returns dirty data. Remember, Query1 rolls back all its changes, but Query2 has returned the data anyway. This is because it didn't wait for the other transactions holding exclusive locks on this data; it just returned what was there at the time.
There is a syntactic shortcut for querying data using the read uncommitted isolation level by using the NOLOCK table hint. You could change the above Query2 to look like this and it would do the exact same thing.
SELECT * FROM IsolationTests WITH(NOLOCK)

Read Committed
This is the default isolation level and means selects will only return committed data. Select statements will issue shared lock requests against the data you're querying; this causes you to wait if another transaction already has an exclusive lock on that data. Once you have your shared lock, any other transactions trying to modify that data will request an exclusive lock and be made to wait until your Read Committed transaction finishes.
You can see an example of a read transaction waiting for a modify transaction to complete before returning the data by running the following Queries in separate tabs as you did with Read Uncommitted.
Query1
BEGIN TRAN
UPDATE IsolationTests SET Col1 = 2
--Simulate having some intensive processing here with a wait
WAITFOR DELAY '00:00:10'
ROLLBACK

Query2
SELECT * FROM IsolationTests

Notice how Query2 waited for the first transaction to complete before returning and also how the data returned is the data we started off with as Query1 did a rollback. The reason no isolation level was specified is because Read Committed is the default isolation level for SQL Server. If you want to check what isolation level you are running under you can run "DBCC useroptions". Remember isolation levels are Connection/Transaction specific so different queries on the same database are often run under different isolation levels.
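If you prefer a DMV over DBCC useroptions, the current session's isolation level can also be read like this (the numeric codes are SQL Server's own):
SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID
-- 1 = ReadUncommitted, 2 = ReadCommitted, 3 = RepeatableRead, 4 = Serializable, 5 = Snapshot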
Repeatable Read
This is similar to Read Committed but with the additional guarantee that if you issue the same select twice in a transaction you will get the same results both times. It does this by holding on to the shared locks it obtains on the records it reads until the end of the transaction. This means any transactions that try to modify these records are forced to wait for the read transaction to complete.
As before, run Query1 and then, while it's running, run Query2.
Query1
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK

Query2
UPDATE IsolationTests SET Col1 = -1

Notice that Query1 returns the same data for both selects even though you ran a query to modify the data before the second select ran. This is because the update query was forced to wait for Query1 to finish due to the shared locks Query1 held on the data, as you specified Repeatable Read.
If you rerun the above Queries but change Query1 to Read Committed you will notice the two selects return different data and that Query2 does not wait for Query1 to finish.
One last thing to know about Repeatable Read is that the data can change between 2 queries if more records are added. Repeatable Read guarantees records queried by a previous select will not be changed or deleted, it does not stop new records being inserted so it is still very possible to get Phantom Reads at this isolation level.
Serializable
This isolation level takes Repeatable Read and adds the guarantee that no new data will be added eradicating the chance of getting Phantom Reads. It does this by placing range locks on the queried data. This causes any other transactions trying to modify or insert data touched on by this transaction to wait until it has finished.
You know the drill by now run these queries side by side...
Query1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK

Query2
INSERT INTO IsolationTests(Col1,Col2,Col3)
VALUES (100,100,100)

You'll see that the insert in Query2 waits for Query1 to complete before it runs, eradicating the chance of a phantom read. If you change the isolation level in Query1 to Repeatable Read, you'll see the insert no longer gets blocked and the two select statements in Query1 return a different number of rows.
Snapshot
This provides the same guarantees as serializable. So what's the difference? Well, it's more in the way it works: using snapshot doesn't block other queries from inserting or updating the data touched by the snapshot transaction. Instead it creates its own little snapshot of the data being read at that time; if you then read that data again in the same transaction, it reads it from its snapshot. This means that even if another transaction has made changes you will always get the same results as you did the first time you read the data.
So on the plus side you're not blocking anyone else from modifying the data whilst you run your transaction, but... you're using extra resources on the SQL Server to store the snapshot data for each snapshot transaction, which can be quite significant if your transaction is working with a large amount of data.
To use the snapshot isolation level you need to enable it on the database by running the following command
ALTER DATABASE IsolationTests
SET ALLOW_SNAPSHOT_ISOLATION ON

If you rerun the examples from serializable but change the isolation level to snapshot you will notice that you still get the same data returned but Query2 no longer waits for Query1 to complete.
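For instance, the rerun of Query1 under snapshot would look like this (same body as before, only the isolation level changes):
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK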
Summary
You should now have a good idea of how each of the different isolation levels works. You can see that the higher the level you use, the less concurrency you offer and the more blocking you bring to the table. You should always try to use the lowest isolation level you can, which is usually Read Committed.
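If you ever need to confirm which isolation level your current session is actually running at, DBCC USEROPTIONS reports it:

DBCC USEROPTIONS
-- look for the "isolation level" row in the output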

Tuesday, July 3, 2012

SQL: How to Create Dynamic Table Names in SQL Server

http://support.microsoft.com/kb/175850

INF: How to Create Dynamic Table Names in SQL Server


-- Variable that will contain the name of the table
declare @mytable varchar(30)
-- Creates a temp table name
select @mytable ='TEMP'+ CONVERT(char(12), GETDATE(), 14)
print @mytable

-- Table cannot be created with the character ":" in it
-- The following while loop strips off the colon
declare @pos int
select @pos = charindex(':',@mytable)

while @pos > 0
begin
select @mytable = substring(@mytable,1,@pos - 1) +
substring(@mytable,@pos + 1,30-@pos )
select @pos = charindex(':',@mytable)
end
print 'Name without colon is :'
print @mytable

--create table @mytable (col1 int) -- this fails: a table name cannot be a
-- variable, which is why dynamic SQL (EXECUTE) is used below

-- Create the temporary table
execute ('create table '+ @mytable +
'(col1 int)' )

-- Insert two rows in the table
execute ('insert into ' + @mytable +
' values(1)')

execute ('insert into ' + @mytable +
' values(2)')

-- Select from the temporary table
execute ('select col1 from ' + @mytable )
-- Drop the temporary table
execute ('drop table ' + @mytable)
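As an aside, the colon-stripping while loop above can be replaced with a single call to the built-in REPLACE function; a minimal equivalent sketch:

-- Same table name, built without the while loop
declare @mytable2 varchar(30)
select @mytable2 = 'TEMP' + REPLACE(CONVERT(char(12), GETDATE(), 14), ':', '')
print @mytable2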

Friday, June 22, 2012

division in SQL results in 0

http://social.msdn.microsoft.com/Forums/en-US/transactsql/thread/483c81d4-8e6d-41fb-9a6e-cc61b42772e6/


declare @x int, @y int, @f float
set @x = 100
set @y = 2000
-- Multiplying by 1.0 forces the division to be done in decimal
-- arithmetic; @x / @y alone would be integer division and return 0
set @f = @x / (@y * 1.0)
print @x   -- 100
print @y   -- 2000
print @f   -- 0.05
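For contrast, here is a minimal sketch (variable names are illustrative) of what happens without the * 1.0: the division is performed in integer arithmetic before the assignment, so even a float target receives 0.

declare @a int, @b int, @wrong float
set @a = 100
set @b = 2000
set @wrong = @a / @b -- integer division truncates to 0 before the assignment
print @wrong         -- prints 0, not 0.05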

Friday, June 15, 2012

SQL : Apply (Cross Apply / Outer Apply)

Reference: http://technet.microsoft.com/en-us/library/ms175156(v=sql.105).aspx

Using APPLY
SQL Server 2008 R2

The APPLY operator allows you to invoke a table-valued function for each row returned by an outer table expression of a query. The table-valued function acts as the right input and the outer table expression acts as the left input. The right input is evaluated for each row from the left input and the rows produced are combined for the final output. The list of columns produced by the APPLY operator is the set of columns in the left input followed by the list of columns returned by the right input.
Note: To use APPLY, the database compatibility level must be at least 90.


There are two forms of APPLY: CROSS APPLY and OUTER APPLY. CROSS APPLY returns only rows from the outer table that produce a result set from the table-valued function. OUTER APPLY returns both rows that produce a result set, and rows that do not, with NULL values in the columns produced by the table-valued function.

As an example, consider the following tables, Employees and Departments:

--Create Employees table and insert values.
CREATE TABLE Employees
(
    empid int NOT NULL
    ,mgrid int NULL
    ,empname varchar(25) NOT NULL
    ,salary money NOT NULL
    CONSTRAINT PK_Employees PRIMARY KEY(empid)
);
GO
INSERT INTO Employees VALUES(1 , NULL, 'Nancy'   , $10000.00);
INSERT INTO Employees VALUES(2 , 1   , 'Andrew'  , $5000.00);
INSERT INTO Employees VALUES(3 , 1   , 'Janet'   , $5000.00);
INSERT INTO Employees VALUES(4 , 1   , 'Margaret', $5000.00);
INSERT INTO Employees VALUES(5 , 2   , 'Steven'  , $2500.00);
INSERT INTO Employees VALUES(6 , 2   , 'Michael' , $2500.00);
INSERT INTO Employees VALUES(7 , 3   , 'Robert'  , $2500.00);
INSERT INTO Employees VALUES(8 , 3   , 'Laura'   , $2500.00);
INSERT INTO Employees VALUES(9 , 3   , 'Ann'     , $2500.00);
INSERT INTO Employees VALUES(10, 4   , 'Ina'     , $2500.00);
INSERT INTO Employees VALUES(11, 7   , 'David'   , $2000.00);
INSERT INTO Employees VALUES(12, 7   , 'Ron'     , $2000.00);
INSERT INTO Employees VALUES(13, 7   , 'Dan'     , $2000.00);
INSERT INTO Employees VALUES(14, 11  , 'James'   , $1500.00);
GO

--Create Departments table and insert values.
CREATE TABLE Departments
(
    deptid INT NOT NULL PRIMARY KEY
    ,deptname VARCHAR(25) NOT NULL
    ,deptmgrid INT NULL REFERENCES Employees
);
GO
INSERT INTO Departments VALUES(1, 'HR', 2);
INSERT INTO Departments VALUES(2, 'Marketing', 7);
INSERT INTO Departments VALUES(3, 'Finance', 8);
INSERT INTO Departments VALUES(4, 'R&D', 9);
INSERT INTO Departments VALUES(5, 'Training', 4);
INSERT INTO Departments VALUES(6, 'Gardening', NULL);

Most departments in the Departments table have a manager ID that corresponds to an employee in the Employees table. The following table-valued function accepts an employee ID as an argument and returns that employee and all of his or her subordinates.

CREATE FUNCTION dbo.fn_getsubtree(@empid AS INT)
RETURNS @TREE TABLE
(
    empid INT NOT NULL
    ,empname VARCHAR(25) NOT NULL
    ,mgrid INT NULL
    ,lvl INT NOT NULL
)
AS
BEGIN
    WITH Employees_Subtree(empid, empname, mgrid, lvl)
    AS
    (
        -- Anchor Member (AM)
        SELECT empid, empname, mgrid, 0
        FROM Employees
        WHERE empid = @empid

        UNION ALL

        -- Recursive Member (RM)
        SELECT e.empid, e.empname, e.mgrid, es.lvl+1
        FROM Employees AS e
            JOIN Employees_Subtree AS es
                ON e.mgrid = es.empid
    )
    INSERT INTO @TREE
    SELECT * FROM Employees_Subtree;

    RETURN
END
GO

To return all of the subordinates in all levels for the manager of each department, use the following query.

SELECT D.deptid, D.deptname, D.deptmgrid
      ,ST.empid, ST.empname, ST.mgrid, ST.lvl
FROM Departments AS D
    CROSS APPLY fn_getsubtree(D.deptmgrid) AS ST;

Here is the result set.

deptid      deptname   deptmgrid   empid       empname    mgrid       lvl
----------- ---------- ----------- ----------- ---------- ----------- ---
1           HR         2           2           Andrew     1           0
1           HR         2           5           Steven     2           1
1           HR         2           6           Michael    2           1
2           Marketing  7           7           Robert     3           0
2           Marketing  7           11          David      7           1
2           Marketing  7           12          Ron        7           1
2           Marketing  7           13          Dan        7           1
2           Marketing  7           14          James      11          2
3           Finance    8           8           Laura      3           0
4           R&D        9           9           Ann        3           0
5           Training   4           4           Margaret   1           0
5           Training   4           10          Ina        4           1
Notice that each row from the Departments table is duplicated as many times as there are rows returned from fn_getsubtree for the department's manager.

Also, the Gardening department does not appear in the results. Because this department has no manager, fn_getsubtree returned an empty set for it. By using OUTER APPLY, the Gardening department will also appear in the result set, with null values in the deptmgrid field, as well as in the fields returned by fn_getsubtree.
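A minimal sketch of that OUTER APPLY variant (same column list as the CROSS APPLY query above):

SELECT D.deptid, D.deptname, D.deptmgrid
      ,ST.empid, ST.empname, ST.mgrid, ST.lvl
FROM Departments AS D
    OUTER APPLY fn_getsubtree(D.deptmgrid) AS ST;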

Thursday, May 31, 2012

SQL SERVER - Introduction to Service Broker and Sample Script


Service Broker in Microsoft SQL Server 2005 is a new technology that provides messaging and queuing functions between instances. The basic functions of sending and receiving messages form part of a "conversation." Each conversation is considered to be a complete channel of communication, a dialog between two participants.

Service Broker finds applications whether a single SQL Server instance or multiple instances are used. It supports sending messages to remote databases on different servers as well as processing messages within a single database. To send messages between instances, Service Broker uses TCP/IP.

This transactional message queuing system enables developers to build secure, reliable, and scalable applications. Developers can design applications from independent components known as "services." If an application needs the functionality of one of these services, it sends a message to that service.

Service Broker supports loosely coupled applications (programs that exchange messages independently). Its three components are: conversation components (the conversation groups, conversations, and messages); service definition components (which define the conversations); and networking and security components (which define the infrastructure used for exchanging messages between instances).

The maintenance of Service Broker is easy: because the functionality forms part of the Database Engine, it is covered by routine database administration procedures. Service Broker also provides security by preventing unauthorized access from networks and by message encryption.

Let us understand Service Broker with a simple script. The script contains the necessary comments to explain exactly what it is doing.
---------------------------- Service Broker -----------------------
-- In this exercise we will learn how to configure Service Broker
-- and send and receive messages.
-------------------------------------------------------------------

CREATE DATABASE ServiceBrokerTest
GO
USE ServiceBrokerTest
GO
-- Enable Service Broker
ALTER DATABASE ServiceBrokerTest SET ENABLE_BROKER
GO
-- Create Message Type
CREATE MESSAGE TYPE SBMessage
VALIDATION = NONE
GO
-- Create Contract
CREATE CONTRACT SBContract
(SBMessage SENT BY INITIATOR)
GO
-- Create Send Queue
CREATE QUEUE SBSendQueue
GO
-- Create Receive Queue
CREATE QUEUE SBReceiveQueue
GO
-- Create Send Service on Send Queue
CREATE SERVICE SBSendService
ON QUEUE SBSendQueue (SBContract)
GO
-- Create Receive Service on Receive Queue
CREATE SERVICE SBReceiveService
ON QUEUE SBReceiveQueue (SBContract)
GO
-- Begin Dialog using service on contract
DECLARE @SBDialog uniqueidentifier
DECLARE @Message NVARCHAR(128)
BEGIN DIALOG CONVERSATION @SBDialog
FROM SERVICE SBSendService
TO SERVICE 'SBReceiveService'
ON CONTRACT SBContract
WITH ENCRYPTION = OFF
-- Send messages on Dialog
SET @Message = N'Very First Message';
SEND ON CONVERSATION @SBDialog
MESSAGE TYPE SBMessage (@Message)
SET @Message = N'Second Message';
SEND ON CONVERSATION @SBDialog
MESSAGE TYPE SBMessage (@Message)
SET @Message = N'Third Message';
SEND ON CONVERSATION @SBDialog
MESSAGE TYPE SBMessage (@Message)
GO
-- View messages from Receive Queue
SELECT CONVERT(NVARCHAR(MAX), message_body) AS Message
FROM SBReceiveQueue
GO
-- Receive messages from Receive Queue
RECEIVE TOP(1) CONVERT(NVARCHAR(MAX), message_body) AS Message
FROM SBReceiveQueue
GO
-- Receive messages from Receive Queue
RECEIVE CONVERT(NVARCHAR(MAX), message_body) AS Message
FROM SBReceiveQueue
GO
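-- (Sketch, not part of the original script) A real application would
-- close the dialog once processing is complete; the handle can be taken
-- from sys.conversation_endpoints:
DECLARE @SBDialogToEnd uniqueidentifier
SELECT TOP(1) @SBDialogToEnd = conversation_handle FROM sys.conversation_endpoints
END CONVERSATION @SBDialogToEnd
GO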
-- Clean Up
USE master
GO
DROP DATABASE ServiceBrokerTest
GO


You can download the above script from here<http://www.pinaldave.com/sql-download/scripts/Service_Broker.zip>.

Let me know what you think of this script and how simply one can configure Service Broker.

Reference : Pinal Dave (http://blog.sqlauthority.com<http://blog.sqlauthority.com/>)
http://blog.sqlauthority.com/2009/09/21/sql-server-intorduction-to-service-broker-and-sample-script/

Thursday, May 24, 2012

Learn English

http://www.englishclub.com/


http://www.englishclub.com/pronunciation/tongue-twisters.htm

Tuesday, May 22, 2012

SQL Server 2008 - HierarchyID - Part I

http://blogs.msdn.com/b/manisblog/archive/2007/08/17/sql-server-2008-hierarchyid.aspx

SQL Server 2008 - HierarchyID - Part I

I am excited by the cool new features that SQL Server 2008 is bringing up!
SQL Server 2008 brings some commendable new features, and one of the nicest is HierarchyID. This is a new data type available in the latest July CTP of the SQL Server 2008 Developer Edition.
Organizations have struggled in the past with representing tree-like structures in databases: lots of joins and lots of complex logic go into it, whether it is an organization hierarchy or a BOM (Bill of Materials), where one finished product depends on semi-finished materials / kit items, and these kit items in turn depend on other semi-finished items or raw materials.
SQL Server 2008 solves this problem by letting us store the entire hierarchy in the HierarchyID data type. HierarchyID is a variable-length system data type used to locate the position of an element in the hierarchy; for example, Scott is the CEO, Mark and Ravi report to Scott, Ben and Laura report to Mark, and Vijay, Frank, and James report to Ravi.
Scott
|-- Mark
|   |-- Ben
|   `-- Laura
`-- Ravi
    |-- Vijay
    |-- Frank
    `-- James
This tree can expand to n nodes.
The average number of bits that are required to represent a node in a tree with n nodes depends on the average fanout (the average number of children of a node). For small fanouts (0-7), the size is about 6*log_A(n) bits, where A is the average fanout. A node in an organizational hierarchy of 100,000 people with an average fanout of 6 levels takes about 38 bits. This is rounded up to 40 bits, or 5 bytes, for storage. It supports the insertion and deletion of nodes.
The HierarchyID can be indexed in two ways:
1) Depth-first strategy: in a depth-first index, rows in a subtree are stored near each other. For example, all employees that report through a manager are stored near their manager's record.
[Diagram: depth-first index layout] <http://blogs.msdn.com/blogfiles/manisblog/WindowsLiveWriter/SQLServer2008HierarchyID_13519/DF.gif>
2) Breadth-first strategy: a breadth-first index stores the rows from each level of the hierarchy together. For example, the records of employees who directly report to the same manager are stored near each other.
[Diagram: breadth-first index layout] <http://blogs.msdn.com/blogfiles/manisblog/WindowsLiveWriter/SQLServer2008HierarchyID_13519/BF.gif>
We can change the strategy as and when we want, but the index needs to be dropped and a new index rebuilt. If the data is huge and the index is clustered, the table is converted to a heap and then indexed again.
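As a sketch of what the two strategies look like in DDL (using the EmployeeDemo table created later in this post; the index and column names here are illustrative, and the breadth-first version needs a computed level column):

-- Depth-first: index the hierarchyid itself
CREATE UNIQUE INDEX idxDepthFirst
    ON HumanResources.EmployeeDemo(OrgNode, EmployeeID);

-- Breadth-first: add a computed level column, then index (level, node)
ALTER TABLE HumanResources.EmployeeDemo
    ADD OrgLevel AS OrgNode.GetLevel();
CREATE UNIQUE INDEX idxBreadthFirst
    ON HumanResources.EmployeeDemo(OrgLevel, OrgNode, EmployeeID);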
The following methods in the SQL Server 2008 Database Engine support the HierarchyID data type:
1) GetAncestor
2) GetDescendant
3) GetLevel
4) GetRoot
5) IsDescendant
6) Parse
7) Read
8) Reparent
9) ToString
10) Write

GetAncestor()
This method is useful to find the nth ancestor of the given child node.
Syntax: child.GetAncestor ( n )
GetDescendant()
This method is very useful to get the descendant of a given node, for example to find the position for a new descendant or to enumerate existing descendants.
Syntax: parent.GetDescendant ( child1 , child2 )
This function returns one child node that is a descendant of the parent.
1. If parent is NULL, returns NULL.
2. If parent is not NULL, and both child1 and child2 are NULL, returns a child of parent.
3. If parent and child1 are not NULL, and child2 is NULL, returns a child of parent greater than child1.
4. If parent and child2 are not NULL and child1 is NULL, returns a child of parent less than child2.
5. If parent, child1, and child2 are all not NULL, returns a child of parent greater than child1 and less than child2.
6. If child1 or child2 is not NULL but is not a child of parent, an exception is raised.
7. If child1 >= child2, an exception is raised.
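A small illustration of those rules (a sketch; the printed paths reflect how GetDescendant numbers children):

DECLARE @parent hierarchyid
SELECT @parent = hierarchyid::GetRoot()
-- Rule 2: no siblings supplied, returns a first child
PRINT @parent.GetDescendant(NULL, NULL).ToString()                        -- /1/
-- Rule 3: a child after /1/
PRINT @parent.GetDescendant(CAST('/1/' AS hierarchyid), NULL).ToString()  -- /2/
-- Rule 5: a child between /1/ and /2/
PRINT @parent.GetDescendant(CAST('/1/' AS hierarchyid),
                            CAST('/2/' AS hierarchyid)).ToString()        -- /1.1/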
GetLevel()
This method is useful to find the Level of the current node.
Syntax: node.GetLevel ( )
This function will return an integer that represents the depth of this node in the current tree.
GetRoot()
This method will return the root of the hierarchy tree and this is a static method if you are using it within CLR.
Syntax: hierarchyid::GetRoot ( )
It will return the data type hierarchyID.
IsDescendant()
This method returns true/false (BIT) if the node is a descendant of the parent.
Syntax: parent.IsDescendant ( child )
Parse()
Parse converts the canonical string representation of a hierarchyid to a hierarchyid value. Parse is called implicitly when a conversion from a string type to hierarchyid occurs. Acts as the opposite of ToString(). Parse() is a static method.
Syntax: hierarchyid::Parse ( input )
Read()
Read reads binary representation of SqlHierarchyId from the passed-in BinaryReader and sets the SqlHierarchyId object to that value. Read cannot be called by using Transact-SQL. Use CAST or CONVERT instead.
Syntax: void Read( BinaryReader r )
Reparent()
This is a very useful method that helps you reparent a node, i.e., move an existing node under a new parent or any other existing parent.
Syntax: node.Reparent ( oldRoot, newRoot )
ToString()
This method is useful to get the string representation of the HierarchyID. It returns the string as an nvarchar(4000) data type.
Syntax: node.ToString ( )
Write()
Write writes out a binary representation of SqlHierarchyId to the passed-in BinaryWriter. Write cannot be called by using Transact-SQL. Use CAST or CONVERT instead.
Syntax: void Write( BinaryWriter w )
Sample Code
Create Table and Index
Use AdventureWorksLT
Go
--Schema Creation
Create Schema HumanResources
Go
--Table Creation
CREATE TABLE HumanResources.EmployeeDemo
(
OrgNode HIERARCHYID,
EmployeeID INT,
LoginID VARCHAR(100),
Title VARCHAR(200),
HireDate DATETIME
)
Go
--Index Creation
CREATE UNIQUE CLUSTERED INDEX idxEmployeeDemo
ON HumanResources.EmployeeDemo (OrgNode,EmployeeID)

Data Insertion
--Insert First Row
INSERT HumanResources.EmployeeDemo (OrgNode, EmployeeID, LoginID, Title, HireDate)
VALUES (hierarchyid::GetRoot(), 1,'adventure-works\scott', 'CEO', '3/11/05') ;
Go
--Insert Second Row
DECLARE @Manager hierarchyid
SELECT @Manager = hierarchyid::GetRoot() FROM HumanResources.EmployeeDemo;
INSERT HumanResources.EmployeeDemo (OrgNode, EmployeeID, LoginID, Title, HireDate)
VALUES (@Manager.GetDescendant(NULL,NULL), 2, 'adventure-works\Mark',
'CTO', '4/05/07')
Go
--Insert Third Row
DECLARE @Manager hierarchyid
DECLARE @FirstChild hierarchyid
SELECT @Manager = hierarchyid::GetRoot() FROM HumanResources.EmployeeDemo;
Select @FirstChild = @Manager.GetDescendant(NULL,NULL)
INSERT HumanResources.EmployeeDemo (OrgNode, EmployeeID, LoginID, Title, HireDate)
VALUES (@Manager.GetDescendant(@FirstChild,NULL), 3, 'adventure-works\ravi',
'Director Marketing', '4/08/07')
Go
--Insert the First Descendant of a Child Node
DECLARE @Manager hierarchyid
SELECT @Manager = CAST('/1/' AS hierarchyid)
INSERT HumanResources.EmployeeDemo (OrgNode, EmployeeID, LoginID, Title, HireDate)
VALUES (@Manager.GetDescendant(NULL, NULL),45,
'adventure-works\Ben','Application Developer', '6/11/07') ;
Go
--Insert the Second Descendant of a Child Node
DECLARE @Manager hierarchyid
DECLARE @FirstChild hierarchyid
SELECT @Manager = CAST('/1/' AS hierarchyid)
SELECT @FirstChild = @Manager.GetDescendant(NULL,NULL)
INSERT HumanResources.EmployeeDemo (OrgNode, EmployeeID, LoginID, Title, HireDate)
VALUES (@Manager.GetDescendant(@FirstChild, NULL),55,
'adventure-works\Laura','Trainee Developer', '6/11/07') ;
Go
--Insert the first node who is the Descendant of Director Marketing
DECLARE @Manager hierarchyid
DECLARE @FirstChild hierarchyid
SELECT @Manager = CAST('/2/' AS hierarchyid)
INSERT HumanResources.EmployeeDemo (OrgNode, EmployeeID, LoginID, Title, HireDate)
VALUES (@Manager.GetDescendant(NULL, NULL),551,
'adventure-works\frank','Trainee Sales Exec.', '12/11/07') ;
Go
--Insert the second node who is the Descendant of Director Marketing
DECLARE @Manager hierarchyid
DECLARE @FirstChild hierarchyid
SELECT @Manager = CAST('/2/' AS hierarchyid)
SELECT @FirstChild = @Manager.GetDescendant(NULL,NULL)
INSERT HumanResources.EmployeeDemo (OrgNode, EmployeeID, LoginID, Title, HireDate)
VALUES (@Manager.GetDescendant(@FirstChild, NULL),531,
'adventure-works\vijay','Manager Industrial Sales', '12/09/06') ;
Go
--Insert the third node who is the Descendant of Director Marketing
--in between 2 existing descendants
DECLARE @Manager hierarchyid
DECLARE @FirstChild hierarchyid
DECLARE @SecondChild hierarchyid
SELECT @Manager = CAST('/2/' AS hierarchyid)
SELECT @FirstChild = @Manager.GetDescendant(NULL,NULL)
SELECT @SecondChild = @Manager.GetDescendant(@FirstChild,NULL)
INSERT HumanResources.EmployeeDemo (OrgNode, EmployeeID, LoginID, Title, HireDate)
VALUES (@Manager.GetDescendant(@FirstChild, @SecondChild),543,
'adventure-works\james','Manager Consumer Sales', '12/04/06') ;

Procedure to insert Employee Record
--Use Serializable Transaction
CREATE PROCEDURE AddEmployee(@ManagerID hierarchyid, @EmpID int,
@LogID varchar(100), @JobTitle as varchar(200), @JoiningDate datetime)
AS
BEGIN
DECLARE @LastChild hierarchyid
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
-- Find the manager's last child, then append the new employee after it
SELECT @LastChild = MAX(OrgNode) FROM HumanResources.EmployeeDemo
WHERE OrgNode.GetAncestor(1) = @ManagerID
INSERT HumanResources.EmployeeDemo (OrgNode, EmployeeID, LoginID, Title, HireDate)
VALUES(@ManagerID.GetDescendant(@LastChild, NULL), @EmpID, @LogID, @JobTitle, @JoiningDate)
COMMIT
END ;
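A hypothetical call to the procedure (the parameter values are illustrative only):

DECLARE @mgr hierarchyid
SELECT @mgr = CAST('/2/' AS hierarchyid) -- Director Marketing from the rows above
EXEC AddEmployee @mgr, 600, 'adventure-works\anna', 'Sales Trainee', '1/15/08'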

Hope you like this article; I will write more about HierarchyID very soon. If you like the article or have any doubts, drop me a post.
I am attaching the sample SQL Script along so that you can test it in your test environment.

 