Tuesday, November 17, 2009

Hourly Execution Statistics from the dm_exec_query_stats DMV

The sys.dm_exec_query_stats dynamic management view (DMV) provides copious amounts of data about how often queries are executed and how many resources they consume.  One such value, execution_count, indicates how many times the stored procedure or query was executed.  Unfortunately (and probably unavoidably), this value is cumulative.  In other words, it counts all executions of a stored procedure or SQL statement since its plan was cached, which in practice means since the server was last (re)started or since certain commands (e.g. DBCC FREEPROCCACHE) were executed.
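To see the cumulative nature of these counters for yourself, a quick ad-hoc query like the sketch below lists the most-executed cached plans along with creation_time, the point from which each plan's counters have been accumulating (all column names here are real DMV columns):

```sql
-- Cumulative counters: these values only ever increase until the plan
-- leaves the cache (restart, DBCC FREEPROCCACHE, memory pressure, etc.)
SELECT TOP 10
       qs.execution_count
      ,qs.total_worker_time          -- CPU time, accumulated since creation_time
      ,qs.creation_time              -- when this plan was cached
      ,st.text                       -- the statement the counters belong to
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.execution_count DESC;
```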

There are many times when what is needed is an hourly breakdown of executions or resources used throughout the day.  There are monitoring tools that can provide this, but here is a simple way to get such a breakdown for a particular stored procedure, in this case one used by ASP Session State.

The basic idea is to schedule an hourly recurring SQL Server Agent job to gather the output from the dm_exec_query_stats DMV and store it in a table.  Then another query can do some math and format the results for display.

This script can be used to create a table to hold the output from dm_exec_query_stats at a given moment.

CREATE TABLE [dbo].[StatisticsSProcRaw](
  [id] [int] IDENTITY(1,1) NOT NULL,
  [object_name] [nvarchar](50) NOT NULL,
  [date] [datetime] NOT NULL,
  [execution_count] [bigint] NULL,
  [time_blocked] [bigint] NULL,
  [avg_cpu_cycles] [bigint] NULL,
  [total_logical_reads] [bigint] NULL,
  [total_logical_writes] [bigint] NULL,
  [total_logical_io] [bigint] NULL,
 CONSTRAINT [PK_Statistics_Raw] PRIMARY KEY NONCLUSTERED
(
  [id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]



This script can be placed in an hourly recurring SQL Server Agent job to take a sampling of the values from dm_exec_query_stats and insert it into the table.

insert into StatisticsSProcRaw
  ([object_name]
  ,date
  ,execution_count
  ,time_blocked
  ,avg_cpu_cycles
  ,total_logical_reads
  ,total_logical_writes
  ,total_logical_io
  )
select
   object_name(st.objectid, st.dbid) as [object_name]
  ,GetDate()
  ,qs.execution_count
  ,qs.total_elapsed_time - qs.total_worker_time as time_blocked  -- elapsed minus CPU, in microseconds
  ,qs.total_worker_time / qs.execution_count as avg_cpu_cycles   -- average worker (CPU) time per execution
  ,qs.total_logical_reads
  ,qs.total_logical_writes
  ,qs.total_logical_reads + qs.total_logical_writes as total_logical_io
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
where st.dbid is not null
  and DB_Name(st.dbid) = 'ASPState'
  and object_name(st.objectid, st.dbid) = 'TempGetStateItemExclusive3'
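To schedule the script above, the Agent job can be created in SSMS or with the msdb stored procedures.  The sketch below shows the general shape using an hourly schedule; the job and schedule names are placeholders, and the @command body should be the full insert script above:

```sql
USE msdb;
GO
-- Placeholder names: adjust 'CollectQueryStats' and 'Hourly' to taste.
EXEC dbo.sp_add_job @job_name = N'CollectQueryStats';
EXEC dbo.sp_add_jobstep
     @job_name = N'CollectQueryStats'
    ,@step_name = N'Snapshot dm_exec_query_stats'
    ,@subsystem = N'TSQL'
    ,@database_name = N'pt'    -- the database holding StatisticsSProcRaw
    ,@command = N'/* paste the insert script from above here */';
EXEC dbo.sp_add_schedule
     @schedule_name = N'Hourly'
    ,@freq_type = 4            -- daily
    ,@freq_interval = 1        -- every day
    ,@freq_subday_type = 8     -- subday unit: hours
    ,@freq_subday_interval = 1;-- every 1 hour
EXEC dbo.sp_attach_schedule @job_name = N'CollectQueryStats', @schedule_name = N'Hourly';
EXEC dbo.sp_add_jobserver @job_name = N'CollectQueryStats';
GO
```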


This script does the math to figure out how much the values have changed between samples and formats the results.  It saves one from running the DMV every hour and pasting the results into Excel to compute the deltas, since the DMV numbers only ever go up.


SELECT TOP 100
       sr.[object_name]
      ,convert(varchar,sr1.[date],108) as start_time
      ,convert(varchar,sr.[date], 108) as end_time
      ,sr.[execution_count] - sr1.[execution_count] as [execution_count]
      ,sr.[time_blocked] - sr1.[time_blocked] as [time_blocked]
      ,sr.[avg_cpu_cycles]
      ,sr.[total_logical_reads] - sr1.[total_logical_reads] as [logical_reads]
      ,sr.[total_logical_writes] - sr1.[total_logical_writes] as [logical_writes]
      ,sr.[total_logical_io] - sr1.[total_logical_io] as [logical_io]
FROM [pt].[dbo].[StatisticsSProcRaw] sr
JOIN
(
SELECT sr2.[id]
      ,sr2.[object_name]
      ,sr2.[date]
      ,sr2.[execution_count]
      ,sr2.[time_blocked]
      ,sr2.[avg_cpu_cycles]
      ,sr2.[total_logical_reads]
      ,sr2.[total_logical_writes]
      ,sr2.[total_logical_io]
FROM [pt].[dbo].[StatisticsSProcRaw] sr2
) as sr1 on sr1.id = (select max(id) from [pt].[dbo].[StatisticsSProcRaw] where id < sr.id and [object_name] = sr.[object_name])
--) as sr1 on sr1.id = (sr.id - 1)  --faster, but fails for non-sequential ids
ORDER BY sr.id desc

This script subtracts from each set of values (the sr alias) the values on the previous record (the sr1 alias).  It does this with a self-join on the same table.  With a small enough table, this is actually efficient.  As the size of the table grows, this SQL will need to be re-evaluated, the table truncated, or a more complicated aggregation scheme devised.
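One way to get the best of both join variants above is a sketch like the following: number the rows per procedure with ROW_NUMBER() (available since SQL Server 2005) so the "previous row" join works even when the identity values have gaps, without the correlated MAX(id) lookup.  The shortened column list here is illustrative; the remaining delta columns follow the same pattern:

```sql
-- Delta query using ROW_NUMBER(): tolerates gaps in [id] and avoids
-- the correlated subquery in the join condition.
WITH numbered AS
(
    SELECT *
          ,ROW_NUMBER() OVER (PARTITION BY [object_name]
                              ORDER BY [date]) AS rn
    FROM [pt].[dbo].[StatisticsSProcRaw]
)
SELECT  cur.[object_name]
       ,CONVERT(varchar, prev.[date], 108) AS start_time
       ,CONVERT(varchar, cur.[date], 108)  AS end_time
       ,cur.execution_count  - prev.execution_count  AS execution_count
       ,cur.total_logical_io - prev.total_logical_io AS logical_io
FROM numbered cur
JOIN numbered prev
  ON prev.[object_name] = cur.[object_name]
 AND prev.rn = cur.rn - 1           -- the immediately preceding sample
ORDER BY cur.[date] DESC;
```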

Here is some sample output:


With this, I can see the execution_count slowly winding down after the dinner hour.
