SQL – Handle locks in a concurrent environment

Question:

I am working on a system with 15 concurrent processes that perform various operations against a SQL Server database; the version can be anything from 2005 to 2014. Each process works only on its own data in several tables: no process queries, deletes, updates, or inserts information belonging to another process. Three tables are involved, R, E and P, and the main process runs in a stored procedure inside a transaction. The operations it performs, in sequence, are:

BEGIN TRANSACTION (READ UNCOMMITTED)

  1. INSERT hundreds of rows into R
  2. UPDATE P for this process id (FK), setting the maximum date found in R
  3. DELETE from E for this process id (FK), filtering by dates
  4. INSERT INTO E a SELECT from R for this process id (FK), filtering by dates

COMMIT
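In outline, the stored procedure does roughly this; the column names (ProcessId, EventDate, Payload), the #IncomingRows source and the variables are placeholders rather than my real schema:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

DECLARE @ProcessId INT, @DateFrom DATETIME, @DateTo DATETIME;
-- @ProcessId, @DateFrom and @DateTo arrive as parameters of the procedure

BEGIN TRANSACTION;

    -- 1. Insert the incoming rows for this process into R
    INSERT INTO R (ProcessId, EventDate, Payload)
    SELECT ProcessId, EventDate, Payload
    FROM   #IncomingRows;            -- staging source, placeholder name

    -- 2. Record in P the maximum date loaded into R for this process
    UPDATE P
    SET    LastLoadedDate = (SELECT MAX(EventDate)
                             FROM   R
                             WHERE  ProcessId = @ProcessId)
    WHERE  ProcessId = @ProcessId;

    -- 3. Remove the affected date range for this process from E
    DELETE FROM E
    WHERE  ProcessId = @ProcessId
      AND  EventDate BETWEEN @DateFrom AND @DateTo;

    -- 4. Repopulate E from R for the same process and date range
    INSERT INTO E (ProcessId, EventDate, Payload)
    SELECT ProcessId, EventDate, Payload
    FROM   R
    WHERE  ProcessId = @ProcessId
      AND  EventDate BETWEEN @DateFrom AND @DateTo;

COMMIT TRANSACTION;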

Every so often deadlocks appear. What I have done is create a CLUSTERED index on processID and the date field in table R and in table E. I have also created NONCLUSTERED indexes on both tables including processID, the date, and a few other fields.
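For reference, the indexes I created look roughly like this, using the same placeholder column names as above (the real nonclustered indexes include a few more columns):

CREATE CLUSTERED INDEX IX_R_ProcessId_EventDate
    ON R (ProcessId, EventDate);

CREATE CLUSTERED INDEX IX_E_ProcessId_EventDate
    ON E (ProcessId, EventDate);

CREATE NONCLUSTERED INDEX IX_R_ProcessId_EventDate_Covering
    ON R (ProcessId, EventDate)
    INCLUDE (Payload);

CREATE NONCLUSTERED INDEX IX_E_ProcessId_EventDate_Covering
    ON E (ProcessId, EventDate)
    INCLUDE (Payload);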

I have set READ UNCOMMITTED because dirty reads are not a problem for me, since the processes do not share information, and I have disabled lock escalation. Performance seems acceptable, but I would like to try to improve it. When a deadlock occurs, what I do is retry the transaction, and in most cases it then executes fine. My understanding is that memory consumption may increase with LOCK_ESCALATION set to DISABLE. I could also disable row and page locks on the indexes, but I don't know whether that would be appropriate. How could I improve this environment to handle the existing concurrency? What other mechanisms could I add? Thanks.
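In case it helps, this is roughly what those settings and the retry look like. The index and procedure names here are placeholders; LOCK_ESCALATION needs SQL Server 2008 or later, and THROW needs 2012 or later (use RAISERROR on older versions):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

ALTER TABLE R SET (LOCK_ESCALATION = DISABLE);
ALTER TABLE E SET (LOCK_ESCALATION = DISABLE);

-- The option I am unsure about: turning off row and page locks on an index
-- forces table-level locks on it
ALTER INDEX IX_E_ProcessId_EventDate ON E
    SET (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF);

-- Retry when the transaction is chosen as a deadlock victim (error 1205)
DECLARE @Attempts INT;
SET @Attempts = 0;
WHILE @Attempts < 3
BEGIN
    BEGIN TRY
        EXEC dbo.usp_LoadProcessData @ProcessId = 7;   -- placeholder procedure
        BREAK;                                         -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205
        BEGIN
            SET @Attempts = @Attempts + 1;
            WAITFOR DELAY '00:00:01';                  -- short back-off before retrying
        END
        ELSE
        BEGIN
            THROW;                                     -- any other error is re-raised
        END
    END CATCH
END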

Answer:

Locking problems occur when multiple processes compete to update the same records. This is not an issue if the processes are fast enough, because all the work is queued and processed as it comes in. I don't fully understand what your process does, but I usually tackle this in two ways:

  • Separate tables for each process: if you can, the ideal is for each process to use its own tables and only in the last step (the insert) touch the final table that is common to all of them (see the sketch after this list).
  • Rethink the process: verify that all the actions it performs are necessary and justified. Consider whether it could be done another way, identify which steps consume the most execution time, and try to give them a different approach.
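To illustrate the first point, here is a minimal sketch assuming hypothetical names (R_Staging_07 as the per-process table, Payload as the data column, example dates): each process does its heavy work on its own table, and only the final step touches the shared table E in one short transaction.

CREATE TABLE R_Staging_07 (          -- one staging table per process (here, process 7)
    ProcessId INT          NOT NULL,
    EventDate DATETIME     NOT NULL,
    Payload   VARCHAR(200) NULL
);

-- Heavy intermediate work happens only on the process's own table
INSERT INTO R_Staging_07 (ProcessId, EventDate, Payload)
SELECT ProcessId, EventDate, Payload
FROM   R
WHERE  ProcessId = 7;

-- Only the last step touches the shared table, in one short transaction
DECLARE @DateFrom DATETIME, @DateTo DATETIME;
SET @DateFrom = '20240101';          -- example dates
SET @DateTo   = '20240131';

BEGIN TRANSACTION;

    DELETE FROM E
    WHERE  ProcessId = 7
      AND  EventDate BETWEEN @DateFrom AND @DateTo;

    INSERT INTO E (ProcessId, EventDate, Payload)
    SELECT ProcessId, EventDate, Payload
    FROM   R_Staging_07
    WHERE  EventDate BETWEEN @DateFrom AND @DateTo;

COMMIT TRANSACTION;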

As a general rule, you can also review the indexes you have on the tables. Indexes improve query performance but penalize inserts, updates, and deletes. Verify that you have only the strictly necessary indexes, remove the ones you don't need, and, if you can, try to work on smaller blocks of rows. Here is a small script showing a basic example of this batching approach:

DECLARE @RowCount INT
SET @RowCount = 1
WHILE (@RowCount > 0)
BEGIN
    -- Delete in batches of 500 rows to keep each transaction short
    DELETE FROM [XXXXXX] WHERE Pky IN (
        SELECT TOP 500 [XXXXXX].Pky
        FROM [XXXXXX] WITH (NOLOCK)
        INNER JOIN [YYYYYYY] WITH (NOLOCK) ON [XXXXXX].AAAAA = [YYYYYYY].BBBBB
    )
    SET @RowCount = @@ROWCOUNT
    -- Pause between batches so other processes can take their locks
    WAITFOR DELAY '00:00:05'
END

In any case, I refer you back to my two suggestions at the beginning. First verify that the process is well designed and that there is no better way to do it; if there isn't, review the indexes; and finally, try to break the work into shorter operations using scripts like the one above as an example.
