Monday, August 27, 2012

Implementing Semaphore At Database Level (SQL Server)

Hi,

I am Bishnu Patro.
Recently, I had a requirement to limit how many instances of a particular stored proc could execute in parallel, with the limit configurable depending upon the available memory. The reason: if the stored proc was executed too many times simultaneously, all available memory got consumed and every execution slowed down. So we decided to cap the number of parallel executions at the stored procedure level and make that cap configurable. For example, if it is configured to run the stored proc simultaneously to a maximum of 2, then when a third request comes in, that request waits for one of the two running executions to complete.

I have implemented this with a custom semaphore at the database level.

I have created a table, MultipleInstanceJobConfiguration, to keep track of the number of simultaneous executions of the stored proc.


CREATE TABLE [dbo].[MultipleInstanceJobConfiguration](
[JobType] [nvarchar](50) NULL,
[Total] [smallint] NULL,
[Available] [smallint] NULL,
[IsEnabled] [bit] NULL
) ON [PRIMARY]


Here, JobType is the name of the stored proc.
Total is the maximum number of simultaneous executions allowed.
Available is how many execution slots are free at any point in time.
IsEnabled controls whether this feature is enabled or disabled.


The following statement inserts the configuration row for job type 'SI', allowing two simultaneous executions:

INSERT INTO [dbo].[MultipleInstanceJobConfiguration] ([JobType] ,[Total] ,[Available] ,[IsEnabled]) VALUES ('SI',2,2,1)
GO
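The limit can also be changed at runtime. A sketch (take the same applock first if executions may be in flight):

UPDATE [dbo].[MultipleInstanceJobConfiguration]
SET Available = Available + (4 - Total), Total = 4
WHERE JobType = 'SI'
GO

Adjusting Available by the same delta as Total keeps the number of slots currently in use (Total - Available) unchanged.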


The following stored proc is used to take a lock on a named resource via sp_getapplock:


CREATE Proc [dbo].[AcqLock]
@Resource_Name1 as nvarchar(255),
@Lock_Mode1 as nvarchar(32),
@Lock_TimeOut1 as int
As

DECLARE @LockResult int

EXECUTE @LockResult = sp_getapplock
	@Resource = @Resource_Name1,
	@LockMode = @Lock_Mode1,
	@LockOwner = 'Session',
	@LockTimeout = @Lock_TimeOut1

-- sp_getapplock returns 0 (granted) or 1 (granted after waiting);
-- negative values mean timeout, cancellation, deadlock victim or error, so retry
while (@LockResult < 0)
BEGIN
	WAITFOR DELAY '00:00:01'
	EXECUTE @LockResult = sp_getapplock
		@Resource = @Resource_Name1,
		@LockMode = @Lock_Mode1,
		@LockOwner = 'Session',
		@LockTimeout = @Lock_TimeOut1
END
GO


The following stored proc is used to release a lock on an object:


CREATE Proc [dbo].[ReleseLock]
@Resource_Name1 as nvarchar(255)
As
EXECUTE sp_releaseapplock
@Resource = @Resource_Name1, @LockOwner = 'Session'
GO


The following stored proc is an example implementing the semaphore:


CREATE PROC [dbo].[StoredProcUsingSemaphore]
AS

-- Acquire a slot: decrement the Available count on start of the stored proc
declare @total int
declare @available int
declare @isEnabled bit

select @total = Total, @available = Available, @isEnabled = IsEnabled
from dbo.MultipleInstanceJobConfiguration with (nolock)
where JobType = 'SI'

WHILE (@total > 0 and @isEnabled = 1)
BEGIN
	if (@available > 0)
	BEGIN
		Begin Try
			EXEC dbo.AcqLock 'MY_MULTIPLEINSTANCES', 'Exclusive', 0

			-- Re-check under the lock: another session may have taken the last slot
			select @available = Available, @isEnabled = IsEnabled
			from dbo.MultipleInstanceJobConfiguration with (nolock)
			where JobType = 'SI'

			if (@available > 0 and @isEnabled = 1)
			BEGIN
				Update dbo.MultipleInstanceJobConfiguration
				Set Available = Available - 1
				where JobType = 'SI'

				-- Optional: write a row to your own audit/log table here

				Exec dbo.ReleseLock 'MY_MULTIPLEINSTANCES'
				BREAK
			END

			-- No free slot: release the lock before waiting so others are not blocked
			Exec dbo.ReleseLock 'MY_MULTIPLEINSTANCES'
		End Try
		Begin Catch
			Exec dbo.ReleseLock 'MY_MULTIPLEINSTANCES'
		End Catch
	END

	WAITFOR DELAY '00:00:15'

	select @available = Available, @isEnabled = IsEnabled
	from dbo.MultipleInstanceJobConfiguration with (nolock)
	where JobType = 'SI'
END


----Your piece of code will go here.....


-----Release the slot: increment the Available count on finish
if (@total > 0 and @isEnabled = 1)
BEGIN
	EXEC dbo.AcqLock 'MY_MULTIPLEINSTANCES', 'Exclusive', 0

	Update dbo.MultipleInstanceJobConfiguration
	Set Available = Available + 1
	where JobType = 'SI' and Available < Total

	Exec dbo.ReleseLock 'MY_MULTIPLEINSTANCES'
END

GO
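To watch the semaphore work, start StoredProcUsingSemaphore in two SSMS windows (with something long-running in place of the placeholder), start a third execution, and query the configuration row from a fourth window:

SELECT JobType, Total, Available
FROM dbo.MultipleInstanceJobConfiguration
WHERE JobType = 'SI'

Available drops to 0 while the first two executions run, the third waits in the 15-second polling loop, and Available climbs back to 2 once everything finishes.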


Please let me know if the post was useful....

Thursday, August 16, 2012

SQL: Tips For Lightning-Fast Insert Performance On SQL Server

http://arnosoftwaredev.blogspot.in/2011/10/tips-for-lightning-fast-insert.html

Tips For Lightning-Fast Insert Performance On SQL Server
1. Increase ADO.NET BatchSize to eliminate unnecessary network roundtrips, e.g. SqlDataAdapter.UpdateBatchSize<http://msdn.microsoft.com/en-us/library/xw5z0hdd(v=VS.100).aspx> when working with DataAdapters, or SqlBulkCopy.BatchSize<http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.batchsize.aspx> when applying SqlBulkCopy.

2. Limit the number of indices on the target table to what is really essential for query performance.

3. Place indices on master table columns referenced by the target table's foreign keys.

4. Choose the target table's clustered index wisely, so that inserts won't lead to clustered index node splits. Usually an identity column (AKA "autoinc") is a good choice. If you don't have autoinc primary keys, consider introducing a new identity column just for the sake of making it your clustered index.

5. Let the client insert into a temporary heap table first (that is, a table that has no clustered index, and ideally no indexes at all). Then issue one big "insert-into-select"<http://blog.sqlauthority.com/2007/08/15/sql-server-insert-data-from-one-table-to-another-table-insert-into-select-select-into-table/> statement to push all that staging table data into the actual target table. The "insert-into-select" statement must contain an "order-by" clause which guarantees ordering by the clustered index.
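Point (5) might look like this in practice; the table and column names here are made up for illustration, and dbo.SalesTarget is assumed to have a clustered index on SaleId:

-- staging heap: no clustered index (ideally no indexes at all)
CREATE TABLE dbo.SalesStaging (SaleId int, SaleDate datetime, Amount money)
GO

-- clients insert into dbo.SalesStaging, then one big ordered push:
INSERT INTO dbo.SalesTarget (SaleId, SaleDate, Amount)
SELECT SaleId, SaleDate, Amount
FROM dbo.SalesStaging
ORDER BY SaleId -- matches the clustered index of dbo.SalesTarget

TRUNCATE TABLE dbo.SalesStaging
GO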

6. Apply SqlBulkCopy<http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx>.

7. Decrease transaction logging by choosing the bulk-logged recovery model<http://msdn.microsoft.com/en-us/library/ms189275.aspx>, or by setting SQL Server trace flag 610<http://sqlserverplanet.com/sql-server-2008/sql-server-2008-minimally-logged-inserts/>.
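Switching the recovery model from (7) is a single statement; the database name is a placeholder:

ALTER DATABASE [YourDatabase] SET RECOVERY BULK_LOGGED
GO

If point-in-time recovery matters, switch back to FULL after the load and take a log backup.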

8. If your business scenario allows for it, place a table lock before inserting. This will make any further locking unnecessary, and is especially a viable option on staging tables as described in (5). SqlBulkCopy also supports table locks via SqlBulkCopyOptions<http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopyoptions.aspx>.
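For (8), the table lock can be requested directly in the insert statement with a hint; the names are again illustrative:

INSERT INTO dbo.SalesTarget WITH (TABLOCK)
SELECT SaleId, SaleDate, Amount
FROM dbo.SalesStaging

On SQL Server 2008 and later, the TABLOCK hint is also one of the prerequisites for getting minimally logged "insert-into-select" under the bulk-logged recovery model from (7).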

9. Place database datafile and logfile on two physically separated devices, e.g. on two disks or two SAN LUNs configured for different spindles.

10. Prefer server-side processing (e.g. by means of "insert-into-select") to client-to-server roundtrips wherever possible.

11. This is probably the fastest insert-approach I have ever heard of (taken from this sqlbi whitepaper<http://www.sqlbi.eu/LinkClick.aspx?fileticket=svahq1Mpp9A%3D&tabid=169&mid=375>, see final paragraph): Create a new heap table just for the current insert batch, SqlBulk-Copy data into that table, then create a suited clustered index on the table, and add the table as a new table partition to an existing partitioned table.

12. Check the execution plan when inserting, and make sure it does not contain anything unexpected or dispensable that might slow down your inserts, e.g. UDF calls during check constraint execution, heavyweight trigger code, referential integrity checks without index usage, or indexed view updates.

Thursday, August 9, 2012

SQL Server 2005: Immediate Deadlock notifications (example)

http://weblogs.sqlteam.com/mladenp/archive/2008/05/21/SQL-Server-2005-Immediate-Deadlock-notifications.aspx


SQL Server 2005: Immediate Deadlock notifications<http://weblogs.sqlteam.com/mladenp/archive/2008/05/21/SQL-Server-2005-Immediate-Deadlock-notifications.aspx>



Deadlocks... huh??
Deadlocks can be a pain to debug since they're so rare and unpredictable. The problem lies in repeating them in your dev environment. That's why it's crucial to have as much information about them from the production environment as possible.
There are two ways to monitor deadlocks, which I'll talk about in future posts: SQL Server tracing and error log checking. Unfortunately both of them suffer from the same thing: you don't know immediately when a deadlock occurs. Getting this info as soon as possible is sometimes crucial in production environments. Sure, you can always turn trace flag 1222 on, but this still doesn't solve the immediate notification problem.
One problem for some might be that this method is only truly useful if you limit data access to stored procedures. <joke> So all you ORM lovers stop reading since this doesn't apply to you anymore! </joke>
The other problem is that it requires a rewrite of the problematic stored procedures to support it. However, since SQL Server 2005 came out, my opinion is that every stored procedure should have a try ... catch block implemented. There's no visible performance hit from this and the benefits can be huge. One of those benefits is instant deadlock notification.
Needed "infrastructure"
So let's see how it's done. This must be implemented in the database you wish to monitor, of course.
First we need a view that will get lock info about the deadlock that just happened. You can read why this type of query gives info we need in my previous post<http://weblogs.sqlteam.com/mladenp/archive/2008/04/29/SQL-Server-2005-Get-full-information-about-transaction-locks.aspx>.
CREATE VIEW vLocks
AS
SELECT L.request_session_id AS SPID,
DB_NAME(L.resource_database_id) AS DatabaseName,
O.Name AS LockedObjectName,
P.object_id AS LockedObjectId,
L.resource_type AS LockedResource,
L.request_mode AS LockType,
ST.text AS SqlStatementText,
ES.login_name AS LoginName,
ES.host_name AS HostName,
TST.is_user_transaction AS IsUserTransaction,
AT.name AS TransactionName
FROM sys.dm_tran_locks L
LEFT JOIN sys.partitions P ON P.hobt_id = L.resource_associated_entity_id
LEFT JOIN sys.objects O ON O.object_id = P.object_id
LEFT JOIN sys.dm_exec_sessions ES ON ES.session_id = L.request_session_id
LEFT JOIN sys.dm_tran_session_transactions TST ON ES.session_id = TST.session_id
LEFT JOIN sys.dm_tran_active_transactions AT ON TST.transaction_id = AT.transaction_id
LEFT JOIN sys.dm_exec_requests ER ON AT.transaction_id = ER.transaction_id
CROSS APPLY sys.dm_exec_sql_text(ER.sql_handle) AS ST
WHERE resource_database_id = db_id()
GO
Next we have to create our stored procedure template:
CREATE PROC <ProcedureName>
AS
BEGIN TRAN
BEGIN TRY

<SPROC TEXT GOES HERE>

COMMIT
END TRY
BEGIN CATCH
-- check transaction state
IF XACT_STATE() = -1
BEGIN
DECLARE @message xml
-- get our deadlock info FROM the VIEW
SET @message = '<TransactionLocks>' + (SELECT * FROM vLocks ORDER BY SPID FOR XML PATH('TransactionLock')) + '</TransactionLocks>'

-- issue ROLLBACK so we don't ROLLBACK mail sending
ROLLBACK

-- get our error message and number
DECLARE @ErrorNumber INT, @ErrorMessage NVARCHAR(2048)
SELECT @ErrorNumber = ERROR_NUMBER(), @ErrorMessage = ERROR_MESSAGE()

-- if it's deadlock error send mail notification
IF @ErrorNumber = 1205
BEGIN
DECLARE @MailBody NVARCHAR(max)
-- create out mail body in the xml format. you can change this to your liking.
SELECT @MailBody = '<DeadlockNotification>'
+
(SELECT 'Error number: ' + isnull(CAST(@ErrorNumber AS VARCHAR(5)), '-1') + CHAR(10) +
'Error message: ' + isnull(@ErrorMessage, ' NO error message') + CHAR(10)
FOR XML PATH('ErrorMessage'))
+
CAST(ISNULL(@message, '') AS NVARCHAR(MAX))
+
'</DeadlockNotification>'
-- for testing purposes
-- SELECT CAST(@MailBody AS XML)

-- send an email with the defined email profile.
-- since this is async it doesn't halt execution
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'your mail profile',
@recipients = 'dba@yourCompany.com',
@subject = 'Deadlock occurred notification',
@body = @MailBody;
END
END
END CATCH
GO
The main part of this stored procedure is of course the CATCH block. The first line in there is a check of the XACT_STATE()<http://msdn.microsoft.com/en-us/library/ms189797.aspx> value. This is a scalar function that reports the user transaction state. -1 means that the transaction is uncommittable and has to be rolled back. This is the state of the victim transaction in the internal deadlock-killing process. Next we read from our vLocks view to get the full info (SPID, both SQL statements' text, values, etc...) about both SPIDs that created the deadlock. This is possible since our deadlock victim transaction hasn't been rolled back yet and the locks are still present. We save this data into an XML message. Next we roll back our transaction to release the locks. With the error message and its corresponding number we check if the error is 1205 - deadlock - and if it is, we send our message in an email. How to configure database mail can be seen here<http://msdn.microsoft.com/en-us/library/ms175887.aspx>.
Both the view and the stored procedures template can and probably should be customized to suit your needs.
Testing the theory
Let's try it out and see how it works with a textbook deadlock example that you can find in every book or tutorial.
-- create our deadlock table with 2 simple rows
CREATE TABLE DeadlockTest ( id INT)
INSERT INTO DeadlockTest
SELECT 1 UNION ALL
SELECT 2
GO
Next create two stored procedures (spProc1 and spProc2) with our template:
For spProc1 replace <SPROC TEXT GOES HERE> in the template with:
UPDATE DeadlockTest
SET id = 12
WHERE id = 2

-- wait 5 secs TO SET up deadlock condition IN other window
WAITFOR DELAY '00:00:05'

UPDATE DeadlockTest
SET id = 11
WHERE id = 1
For spProc2 replace <SPROC TEXT GOES HERE> in the template with:
UPDATE DeadlockTest
SET id = 11
WHERE id = 1

-- wait 5 secs TO SET up deadlock condition IN other window
WAITFOR DELAY '00:00:05'

UPDATE DeadlockTest
SET id = 12
WHERE id = 2
Next open 2 query windows in SSMS:
In window 1 put this script:
exec spProc1
In window 2 put this script:
exec spProc2
Run the script in the first window and after a second or two run the script in the second window. A deadlock will happen and a few moments after the victim transaction fails you should get the notification mail. Mail profile has to be properly configured of course.
The resulting email should contain an XML with full info about the deadlock. You can view it by commenting msdb.dbo.sp_send_dbmail execution and uncommenting the SELECT CAST(@MailBody AS XML) line.
If you fire up the SQL Profiler, reset the table values, rerun both scripts and observe the deadlock graph event you should get a picture similar to this one:
[Deadlock graph captured in SQL Profiler]<http://weblogs.sqlteam.com/images/weblogs_sqlteam_com/mladenp/WindowsLiveWriter/SQLServer2005SendmailwhenDeadlockhappens_124F2/deadlockProfiler_6.png>
The end? ... for now... but more to come soon!
With the above technique we can get mostly the same info as with the trace flags, with the advantage of knowing about a deadlock immediately after it happens; plus, if we want, we can extend this to notify/log about every error, which I actually prefer to do.
We can see just how awesome the new async functionality like database mail, built on top of the SQL Server Service Broker<http://weblogs.sqlteam.com/mladenp/archive/2007/08/21/Service-Broker-goodies-Cross-Server-Many-to-One-One-to.aspx>, is. And instead of sending mail you can put the error in an error logging table in the same async way, which I've demonstrated here<http://www.sqlteam.com/article/centralized-asynchronous-auditing-with-service-broker>.

 