
June 29, 2024

Recommendations for optimising TempDB performance

When working with high-load systems, it is essential to address the optimisation of all server processes. One potential cause of performance degradation can be the suboptimal configuration of TempDB, a system database in Microsoft SQL Server. In this article, we will discuss our experience with TempDB and the recommendations we have developed through our work.

Database description

TempDB, like any other database, consists of a primary data file, secondary data files, and a transaction log.

  • The primary data file stores the information needed to initialise the database and is located in the tempdb.mdf file.
  • Secondary data files contain data not stored in the primary file; they are used to spread data across multiple files and balance the load. These files are named tempdb_mssql_#.ndf.
  • The transaction log records the operations performed and is stored in the templog.ldf file.

The TempDB database is used for:

  • Temporary objects explicitly created by the user, such as temporary tables, indexes, table variables, and cursors (a short example follows this list);
  • Executing queries that use GROUP BY, ORDER BY, UNION, or DISTINCT, whose sort and aggregation work can spill to TempDB;
  • Storing row versions when row versioning is used (for example, under snapshot isolation).
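
As a brief illustration, the following T-SQL batch creates temporary objects of the kinds listed above; the table and column names are arbitrary examples:

    -- A temporary table is created in TempDB and dropped automatically when the session ends
    CREATE TABLE #orders_stage (order_id int PRIMARY KEY, amount decimal(10, 2));
    INSERT INTO #orders_stage (order_id, amount) VALUES (1, 100.00), (2, 250.50);

    -- A table variable is also backed by TempDB
    DECLARE @totals TABLE (customer_id int, total decimal(10, 2));
    INSERT INTO @totals (customer_id, total) SELECT 1, SUM(amount) FROM #orders_stage;

    -- DISTINCT and ORDER BY can spill their sort work to TempDB on large inputs
    SELECT DISTINCT amount FROM #orders_stage ORDER BY amount;

    DROP TABLE #orders_stage;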

Problem detection

Issues with TempDB performance can be identified using Performance Monitor (perfmon) and the counters Average Disk sec/Write and Avg. Disk Write Queue Length. As a rule of thumb, the average write time should not exceed 25 ms and the write queue length should not exceed 2.
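
Write latency for TempDB files can also be estimated from inside SQL Server with the sys.dm_io_virtual_file_stats DMV. The query below is a sketch that computes the average write time per file accumulated since the last instance restart:

    -- Average write latency (ms) per TempDB file since the last instance restart
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.num_of_writes,
           CASE WHEN vfs.num_of_writes = 0 THEN 0
                ELSE vfs.io_stall_write_ms / vfs.num_of_writes
           END AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id = vfs.file_id;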

Figure 1 shows an example of these counters’ data for one of our clients. The data covers a week. The X-axis represents time from 1:30 on 2024/02/05 to 16:50 on 2024/02/12, while the Y-axis represents values from 0 to 100. The Average Disk sec/Write counter is shown in blue, and the Avg. Disk Write Queue Length counter is shown in red.

Figure 1. Counter graph

The recommended value for the average write time (25 ms) is shown as a blue dashed line, and the recommended value for the average queue length (2) is shown in red.

The peak values for average disk write time, depicted in blue, exceed the optimal value every day at 5:30 AM. The queue length also surpasses the recommended values, often by several times, as indicated by the red lines.

These issues suggest an improper configuration of TempDB. Below, we will provide the recommendations that were implemented for this server.

Optimisation recommendations

A properly configured database operates with minimal delays. To achieve this, read and write operations on the disks hosting the database need to be as fast as possible.

The first recommendation is to choose the most efficient disk. It is not advisable to place TempDB files on slow HDDs; it is better to opt for faster SSDs. Read and write speeds on SSDs can be several times higher than those on HDDs.

Pay attention to the number of data files in the database. For servers with eight or fewer logical processors, it is recommended to create as many files as there are processors. If there are more processors, create eight files and monitor read and write contention, adding four files at a time until the contention is minimal. Reduced contention shows up as a decrease in PAGELATCH_UP wait times.
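
For reference, the number of logical processors can be read from sys.dm_os_sys_info, and data files are added with ALTER DATABASE. In the sketch below the logical file name, path, and sizes are placeholders to be adapted to the specific server:

    -- Number of logical processors visible to the instance
    SELECT cpu_count FROM sys.dm_os_sys_info;

    -- Add one more TempDB data file (logical name, path, and sizes are placeholders)
    ALTER DATABASE tempdb
    ADD FILE (
        NAME = 'tempdev_9',
        FILENAME = 'T:\TempDB\tempdb_mssql_9.ndf',
        SIZE = 8GB,
        FILEGROWTH = 512MB
    );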

Additionally, it is important to ensure an even distribution of files across the disks. Each disk should have approximately the same number of files. This will reduce the average read/write time by distributing operations across multiple disks.

Along with the number of files, their size must be chosen correctly. It is recommended to pick a size large enough to avoid frequent auto-growth of TempDB data files when space for temporary objects runs out. Each growth operation allocates additional disk space and increases the average write time.
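
A minimal sketch of such a change, assuming the default logical name tempdev for the primary data file; the sizes are illustrative only:

    -- Increase the initial size (the new size must be larger than the current one)
    ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', SIZE = 16GB);

    -- Set a fixed growth increment instead of a small or percentage-based one
    ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', FILEGROWTH = 1GB);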

To monitor changes in the size of the TempDB database, you can use the Data File(s) Size (KB) counter in Performance Monitor. This counter allows you to analyse the space occupied by the database in real time.
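
The current file sizes and growth settings can also be queried directly from the database, for example:

    -- Current size and growth settings of TempDB files
    SELECT name,
           physical_name,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' %'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM tempdb.sys.database_files;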

Figure 2 shows an example of this counter's readings at the moment the TempDB size increases. The bottom part of the screen shows the counter's name, the database instance from which the data is read, the server, and other details. The graph shows a sharp increase in the database size. If such spikes occur frequently, consider increasing the initial size of the TempDB data files and their auto-growth increment.

Figure 2. Data file size counter for the TempDB instance

Starting with SQL Server 2019, database metadata can be kept in memory as Memory-Optimized TempDB Metadata. It is recommended to enable this feature to speed up operations involving TempDB; additional benefits include improved parallelism and reduced metadata contention. To move the metadata into memory, use the following script:
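
    -- Enable Memory-Optimized TempDB Metadata (takes effect after the instance restarts)
    ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

    -- Verify the setting after the restart
    SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS is_memory_optimized;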

After executing the command, restart the SQL Server instance for the change to take effect.

Case study

We optimised TempDB on the client's server whose monitoring data was shown earlier in this article. The server has 72 logical processors, 3 TB of RAM, and 13 SSDs. It has 4 NUMA nodes with 18 logical processors each, so the optimal number of TempDB files is 18. To reduce write queues, we distributed the database files across the SSDs and moved the metadata into memory. As a result, the daily data warehouse load now completes about 20% faster.

For more information, please get in touch at info@lasmart.biz.