Thursday, March 29, 2012

Full Table Scans/sec - How to Resolve

Hi All,

When we did performance testing (PT) for our application, Full Scans/sec was really high, between 75 and 150. During stress testing it went above 500.

Please help me identify what is causing so many full-table scans.

I tried the following, but I'm not sure whether the approach is correct:

Step 1. Create a performance log counter for SQLServer:Access Methods\Full Scans/sec.
(Results:)
Time \SQLServer:Access Methods\Full Scans/sec
07/19/2006 04:56:03 PM 13.518823923
07/19/2006 04:56:13 PM 3.600423705
07/19/2006 04:56:23 PM 30
07/19/2006 04:56:33 PM 10.099895061
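As an alternative to Performance Monitor, the same counter can be read from inside SQL Server. The sketch below is an assumption-laden illustration, not from the original post: Full Scans/sec is exposed in sys.dm_os_performance_counters as a cumulative value, so you sample it twice and divide the delta by the interval.

```sql
-- Sketch (not part of the original steps): derive the Full Scans/sec rate
-- from the cumulative counter in sys.dm_os_performance_counters by sampling
-- it twice, 10 seconds apart.
DECLARE @v1 bigint, @v2 bigint;

SELECT @v1 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Full Scans/sec%'
  AND object_name LIKE '%Access Methods%';

WAITFOR DELAY '00:00:10';

SELECT @v2 = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Full Scans/sec%'
  AND object_name LIKE '%Access Methods%';

-- Cumulative delta divided by the sampling interval gives the rate.
SELECT (@v2 - @v1) / 10.0 AS [Full Scans per second];
```

This avoids having to line up Performance Monitor timestamps with Profiler output by hand.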

Step 2. Create a trace (using SQL Profiler) and start both the performance log and the Profiler trace at the same time.

Step 3. Pick a time duration from the performance log results and, using Profiler, identify the queries that executed during that time frame.

Step 4. From the Profiler results, identify the queries that executed and evaluate their query plans.
Here I'm not clear which queries to pick, because there are different event classes during this time, such as RPC:Completed, SQL:BatchStarting, and SQL:BatchCompleted. I analyzed the "TextData" column only for RPC:Completed events and counted the Table Scans/Clustered Index Scans in each query plan.

Full Scans/sec isn't always an indicator of a problem needing resolution. For instance, if I have a read-only lookup table with 5 rows, chances are I won't pay much attention to indexing it. If it's used a lot, you'll likely see a table scan every time. Is that a problem? Maybe, maybe not. Only if reading 5 rows (probably a single IO) from that table turns out to be an actual application bottleneck would fixing it take higher priority.
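To see this for yourself, a quick hypothetical demo (table name and rows are made up for illustration): a 5-row heap fits on one data page, so even though every query against it is a "full scan", SET STATISTICS IO shows it costs about one logical read.

```sql
-- Hypothetical 5-row lookup table, created only to illustrate that a
-- full scan of a tiny table is roughly a single page read.
CREATE TABLE dbo.StatusLookup (StatusId int, StatusName varchar(50));
INSERT INTO dbo.StatusLookup VALUES (1, 'New');
INSERT INTO dbo.StatusLookup VALUES (2, 'Open');
INSERT INTO dbo.StatusLookup VALUES (3, 'Pending');
INSERT INTO dbo.StatusLookup VALUES (4, 'Closed');
INSERT INTO dbo.StatusLookup VALUES (5, 'Archived');

SET STATISTICS IO ON;
-- With no index, this query is a table scan, but the whole table sits on
-- one data page, so the messages tab reports only a handful of logical reads.
SELECT * FROM dbo.StatusLookup WHERE StatusName = 'Open';
SET STATISTICS IO OFF;
```

Each such scan still increments Full Scans/sec, which is why the raw counter value by itself can exaggerate the problem.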

So, make sure that you're really looking at a problem that needs to be solved. Regarding your question, it sounds like you're on the right track: find the plan events, then locate the ones that have Table Scans in the operator tree.
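On SQL Server 2005 and later, an alternative to eyeballing Profiler plan events is to search the plan cache directly. The query below is a sketch along those lines (the TOP count and ordering are arbitrary choices): it uses XQuery over the Showplan XML to keep only cached statements whose plans contain a Table Scan or Clustered Index Scan operator.

```sql
-- Sketch: find cached statements whose plans contain a Table Scan or
-- Clustered Index Scan operator, ordered by total logical reads.
WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP 20
    qs.execution_count,
    qs.total_logical_reads,
    SUBSTRING(st.text, 1, 200) AS statement_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
-- exist() returns 1 when the plan XML contains a matching scan operator.
WHERE qp.query_plan.exist
      ('//RelOp[@PhysicalOp="Table Scan"
                or @PhysicalOp="Clustered Index Scan"]') = 1
ORDER BY qs.total_logical_reads DESC;
```

Cross-referencing the statements this returns against the time windows from the performance log should point at the queries driving the counter.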

Thanks,
--R
