
How to: Identify SQL Server Transaction Log Usage by Table


The transaction log of a SQL Server database is often misunderstood.  In short, the transaction log records every change made to the database.  Yes, that's right: every change.  As such, depending on the changes being made, the recovery model, and certain other factors that can prevent log truncation, transaction logs can grow, sometimes out of control.

A SQL Server transaction log that has runaway growth can have several negative side-effects including:

  1. Consuming all of the free space on the disk
  2. Overloading the I/O subsystem
  3. Causing Transaction Log Shipping to fall out of sync

So, how can one determine the cause of the rapid growth of the transaction log?  Fortunately, there is a function named sys.fn_dblog.  This function is officially undocumented; however, it is very straightforward to use.  It takes two input parameters: the starting LSN (Log Sequence Number) and the ending LSN.  If NULL is passed for either parameter, that parameter is effectively ignored.  sys.fn_dblog returns a table with 129 columns (in SQL Server 2012).  While there is enough information in these columns to fill a book, we are going to focus our attention on only three of them:

  1. AllocUnitName
  2. Operation
  3. Log Record Length

The combination of the above columns will give us the information we need to identify the cause of transaction log growth.
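Before diving in, it helps to see the raw shape of what sys.fn_dblog returns.  Here is a minimal sketch you can run in any database; passing NULL for both LSN parameters returns the entire active portion of the log, so TOP keeps the output manageable:

SELECT TOP (10)
    [Current LSN],
    AllocUnitName,
    Operation,
    [Log Record Length]
FROM
    sys.fn_dblog(NULL, NULL)
GO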

Environment Setup

use master
GO
 
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'DeleteMe')
BEGIN
    ALTER DATABASE DeleteMe SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE DeleteMe
END
GO
 
CREATE DATABASE DeleteMe
GO
 
ALTER DATABASE DeleteMe SET RECOVERY SIMPLE
ALTER DATABASE DeleteMe MODIFY FILE (NAME = N'DeleteMe_log', SIZE = 10240KB)
GO
 
USE DeleteMe
GO
 
CREATE TABLE dbo.TestTable
(
    TestTableId INT NOT NULL IDENTITY(1, 1),
    Column01 INT NOT NULL,
    Column02 NVARCHAR(50) NOT NULL,
    CONSTRAINT PK_TestTable PRIMARY KEY CLUSTERED
    (
        TestTableId ASC
    )
)
GO

 

The code above simply:

  1. Creates the database
  2. Sets the recovery model to simple
  3. Sets the transaction log size to 10 MB
  4. Creates a table named TestTable

Now that we have an environment to work in, let’s clear out the transaction log.  Since the recovery model is simple (formerly known as “TRUNCATE LOG ON CHECKPOINT”) issuing a simple CHECKPOINT command will clear the entries out of the transaction log.

use DeleteMe
GO
 
CHECKPOINT
GO
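After the checkpoint, you can confirm the log is essentially empty by counting the records sys.fn_dblog returns; in my experience a freshly checkpointed database in the SIMPLE recovery model shows only a handful of rows (the checkpoint's own log records):

use DeleteMe
GO
 
--A freshly checkpointed SIMPLE-recovery log contains only a few records
SELECT COUNT(*) AS LogRecordCount
FROM sys.fn_dblog(NULL, NULL)
GO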

INSERT Operations

Next, let’s insert a few rows of data to work with.

use DeleteMe
GO
 
INSERT INTO
    dbo.TestTable
    (Column01, Column02)
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three'),
    (4, 'Four'),
    (5, 'Five')
GO

 

Now that we’ve just performed an insert, there should be records written to the transaction log.  We can check with the following query:

use DeleteMe
GO
 
WITH CTE AS
(
    SELECT
        AllocUnitName,
        Operation,
        SUM(CONVERT(BIGINT, [Log Record Length])) AS TotalTranLogBytes,
        SUM(CONVERT(BIGINT, [Log Record Length])) * 100 /
            --Windowed SUM over the grouped SUM yields the grand total;
            --the MONEY conversion forces decimal (non-integer) division
            SUM(CONVERT(MONEY, SUM(CONVERT(BIGINT, [Log Record Length]))))
            OVER() AS PercentOfLog
    FROM
        sys.fn_dblog(NULL,NULL)
    GROUP BY
        AllocUnitName,
        Operation
)
 
SELECT
    AllocUnitName,
    Operation,
    TotalTranLogBytes,
    PercentOfLog
FROM
    CTE
WHERE
    PercentOfLog >= 0
ORDER BY
    TotalTranLogBytes DESC
GO

 

On my system, when I execute the above query, I get the following:

[Screenshot: sys.fn_dblog results after the INSERT]

You can see right at the top of the list that the LOP_INSERT_ROWS operation against the clustered index of dbo.TestTable accounts for 21.524% of my total transaction log consumption.

UPDATE Operations

Let’s try an update next:

use DeleteMe
GO
 
UPDATE
    dbo.TestTable
SET
    Column01 = 1,
    Column02 = 'One'
GO

 

Since I omitted the WHERE clause, we updated all 5 rows in the table.  Checking with sys.fn_dblog again, we get the following result:

[Screenshot: sys.fn_dblog results after the UPDATE]

Again, at the top, we can see the LOP_MODIFY_ROW operation which represents the UPDATE operation we just performed.

TRUNCATE vs. DELETE

How about TRUNCATE vs. DELETE?  Let’s check out TRUNCATE first:

use DeleteMe
GO
 
--Clear the table
TRUNCATE TABLE dbo.TestTable
 
--Insert 5 rows
INSERT INTO
    dbo.TestTable
    (Column01, Column02)
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three'),
    (4, 'Four'),
    (5, 'Five')
GO
 
--Issue a CHECKPOINT to clear the log
CHECKPOINT
GO
 
/*
At this point, there are 5 rows in the table
and the transaction log is clear.
*/
 
--Truncate the table
TRUNCATE TABLE dbo.TestTable
GO

 

We have cleared the table, inserted a fresh set of rows, cleared the transaction log using a CHECKPOINT and then issued the TRUNCATE TABLE command.  Here is our result:

[Screenshot: sys.fn_dblog results after the TRUNCATE]

The sum of TotalTranLogBytes comes out to 2,402.  As you can see, some system tables are updated to record the now missing data, but only 196 bytes were logged against the actual clustered index of dbo.TestTable.

Let’s check DELETE next.

use DeleteMe
GO
 
--Clear the table
TRUNCATE TABLE dbo.TestTable
 
--Insert 5 rows
INSERT INTO
    dbo.TestTable
    (Column01, Column02)
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three'),
    (4, 'Four'),
    (5, 'Five')
GO
 
--Issue a CHECKPOINT to clear the log
CHECKPOINT
GO
 
/*
At this point, there are 5 rows in the table
and the transaction log is clear.
*/
 
--DELETE from the table
DELETE dbo.TestTable
GO

 

We’ve just performed the same set of steps, only replacing the last TRUNCATE with a DELETE.  Checking the transaction log we see the following:

[Screenshot: sys.fn_dblog results after the DELETE]

The TotalTranLogBytes here adds up to only 1,224!  How can that be?  DELETE operations are supposed to consume more transaction log space than TRUNCATE.

Let’s check it again:

[Screenshot: the same query re-run a few moments later]

Whoa!  We didn't execute any further commands, but our TotalTranLogBytes just jumped to 1,612.  How can that be?  The reason is how SQL Server handles DELETE operations.  When a record is deleted, it is simply marked for deletion and becomes a "ghost record".  A separate background task, the ghost cleanup process (think garbage collection), comes along later and physically removes the records, generating additional log records as it does.
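You can watch the ghost records yourself with the documented DMF sys.dm_db_index_physical_stats; its ghost_record_count column is populated only in the SAMPLED or DETAILED modes.  A quick sketch, assuming the DeleteMe database from above:

use DeleteMe
GO
 
--ghost_record_count is NULL in the default LIMITED mode
SELECT
    index_id,
    record_count,
    ghost_record_count
FROM
    sys.dm_db_index_physical_stats
    (DB_ID(N'DeleteMe'), OBJECT_ID(N'dbo.TestTable'), NULL, NULL, 'DETAILED')
GO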

So, at this small row count, the TRUNCATE statement actually consumed slightly more transaction log space than the DELETE.  Let's check again with a larger data set.

use DeleteMe
GO
 
--Clear the table
TRUNCATE TABLE dbo.TestTable
GO
 
--Insert 5 rows per batch, 1,000 batches (5,000 rows total)
INSERT INTO
    dbo.TestTable
    (Column01, Column02)
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three'),
    (4, 'Four'),
    (5, 'Five')
GO 1000
 
--Issue a CHECKPOINT to clear the log
CHECKPOINT
GO
 
/*
At this point, there are 5,000 rows in the table
and the transaction log is clear.
*/
 
--TRUNCATE the table
TRUNCATE TABLE dbo.TestTable
GO

 

I've modified the script slightly to insert 5,000 rows instead of just 5 (the GO 1000 repeats the INSERT batch 1,000 times).  When we check the transaction log again, this is what we find:

[Screenshot: sys.fn_dblog results after truncating 5,000 rows]

A total of 4,438 bytes have been consumed truncating this table of 5,000 records.

Let’s try the same thing with DELETE:

use DeleteMe
GO
 
--Clear the table
TRUNCATE TABLE dbo.TestTable
GO
 
--Insert 5 rows per batch, 1,000 batches (5,000 rows total)
INSERT INTO
    dbo.TestTable
    (Column01, Column02)
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three'),
    (4, 'Four'),
    (5, 'Five')
GO 1000
 
--Issue a CHECKPOINT to clear the log
CHECKPOINT
GO
 
/*
At this point, there are 5,000 rows in the table
and the transaction log is clear.
*/
 
--DELETE from the table
DELETE dbo.TestTable
GO

 

This is what we get:

[Screenshot: sys.fn_dblog results after deleting 5,000 rows]

The DELETE operation took a total of 968,604 bytes, more than 218 times the space consumed by the TRUNCATE operation.  This, of course, also translates into additional disk I/O as well as time.

Production Usage

I would like to point out that sys.fn_dblog is an undocumented function and, as such, is subject to change without notice from Microsoft.  Additionally, this function can be resource intensive, so if you use this code in production, please do so with care.
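If all you need is a quick read on overall log consumption, the documented DBCC SQLPERF(LOGSPACE) command is a far cheaper first step.  It reports the log size and the percentage in use for every database on the instance, which can tell you whether a deeper sys.fn_dblog investigation is warranted:

--Lightweight, documented check of log size and percent used per database
DBCC SQLPERF(LOGSPACE)
GO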

Until next time.


