Stored procedure performance is really slow - sql-server

I have a SQL Server stored procedure containing a large batch of queries that used to run quickly on an old server, but is now very slow on a newer server. I am trying to understand what is going wrong.
It is still in the middle of running a very long set of instructions, and I have run some queries to find out where the bottlenecks are.
It is coming up with the following stats:
I ran a different query and it provided similar results:
But what does all this mean? Any idea what I do next?

Starting with SQL Server 2014, the query optimizer uses a new cardinality estimator. It might be the reason why your SP performs badly.
Try executing the SP after setting the database compatibility level to 110 (the SQL Server 2012 level), which makes the optimizer fall back to the legacy cardinality estimator.
Alternatively, you can change your SP by adding the trace flag hint OPTION (QUERYTRACEON 9481), which forces the legacy estimator for that statement only.
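Both workarounds can be sketched as follows (YourDb and the SELECT statement are placeholders, not from the question):

```sql
-- Option 1: revert the whole database to SQL Server 2012 optimizer behavior,
-- which includes the legacy cardinality estimator.
ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 110;

-- Option 2: force the legacy cardinality estimator for a single statement
-- inside the stored procedure (requires appropriate permissions).
SELECT o.OrderId, o.OrderDate
FROM dbo.Orders AS o
WHERE o.OrderDate >= '2014-01-01'
OPTION (QUERYTRACEON 9481);
```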


A T-SQL query executes in 15s on sql 2005, but hangs in SSRS (no changes)?

When I execute the T-SQL query directly, it completes in 15 seconds on SQL Server 2005.
SSRS was working fine until yesterday; now I had to kill the report after 30 minutes.
I made no changes to anything in SSRS.
Any ideas? Where do I start looking?
Start your query in SSRS, then look at the Activity Monitor in Management Studio. See whether the query is currently blocked and, if so, what it is blocked on.
Alternatively, you can use sys.dm_exec_requests and check the same thing without the user interface getting in the way. Look at the session executing the query from SSRS and check its blocking_session_id, wait_type, wait_time and wait_resource columns. If you find that the query is blocked, then SSRS is probably not at fault and something in your environment is blocking the query's execution. If, on the other hand, the query is making progress (the wait_resource changes), then it is simply executing slowly and it's time to check its execution plan.
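A minimal sketch of that DMV check (the session_id value 53 is a placeholder for the SSRS session you identify):

```sql
-- Inspect the wait state of the session running the report query.
SELECT r.session_id,
       r.status,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.wait_resource,
       r.cpu_time,
       r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id = 53;  -- placeholder: the id of the SSRS session
```

If blocking_session_id is non-zero, chase that session; if wait_resource keeps changing between runs of this query, the request is progressing and the plan itself is the suspect.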
Have you tried making the query a stored procedure to see if that helps? This way execution plans are cached.
Updated: You could also make the query a view to achieve the same effect.
Also, SQL Profiler can help you determine what is being executed. This will let you see whether the SQL itself is the cause of the issue, or whether the time is spent in Reporting Services rendering the report (i.e. not in fetching the data).
There are a number of connection-specific things that can vastly change performance - for example the SET options that are active.
In particular, some of these can play havoc if you have a computed+persisted (and possibly indexed) column. If the settings are a match for how the column was created, it can use the stored value; otherwise, it has to recalculate it per row. This is especially expensive if the column is a promoted column from xml.
Does any of that apply?
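As an illustration of that SET-option pitfall (table and column names are hypothetical, and the XML case would additionally require wrapping the extraction in a UDF):

```sql
-- Hypothetical table with a persisted computed column.
CREATE TABLE dbo.Orders
(
    OrderId   int IDENTITY PRIMARY KEY,
    Quantity  int NOT NULL,
    UnitPrice money NOT NULL,
    Total     AS Quantity * UnitPrice PERSISTED
);

-- The persisted value (and any index on it) is only usable when the
-- session's SET options match the required settings; with, say,
-- ARITHABORT OFF, SQL Server must recompute the column per row.
SET ARITHABORT ON;  -- one of the required settings
SELECT OrderId, Total FROM dbo.Orders WHERE Total > 100;
```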
Are you sure the problem is your query? There could be SQL Server problems. Don't forget about the ReportServer and ReportServerTempDB databases. Maybe they need some maintenance.
The first port of call for any performance problem like this is to get an execution plan. You can either get this by running a SQL Profiler trace with the Showplan XML event, or, if that isn't possible (you probably shouldn't do this on loaded production servers), you can extract the cached execution plan that's being used from the DMVs.
Getting the plan from a trace is preferable, however, as that plan will include statistics about how long the different nodes took to execute. (The trace won't cripple your server or anything, but it will have some performance impact.)
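Extracting the cached plan from the DMVs can be sketched like this (the procedure name in the filter is a placeholder):

```sql
-- Pull the cached XML plan for statements matching a given name.
SELECT st.text,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%YourProcName%';  -- placeholder name
```

Clicking the query_plan XML in Management Studio opens it as a graphical plan; note this is the cached (estimated) plan, without the per-node runtime statistics a trace would give you.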

How do I log the frequency and last-used time for a stored procedure?

I want to know how often a set of stored procedures run, and the last time they were used.
I am thinking of adding calls to the top of every stored procedure in the database to insert/update a table, with the following schema:
SprocName    ExecCount  LastExec
GetCompany   434        2009-03-02
ExportDist   2          2008-01-05
Obviously, adding code to every sproc isn't exactly productive.
Is there a built in feature of SQL Server 2005 that can help?
Or is there a better way?
There's an MSDN blog post here that talks about various options. For SQL 2005, this boils down to:
- Run a server-side trace (be aware that this is not a lightweight option), or
- Change your stored procedures to include logging of each execution.
You may want to check out the contents of the table sys.dm_exec_query_stats.
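A sketch of what that DMV exposes for stored procedures (the columns are real; the query shape is illustrative, and OBJECT_NAME with a dbid argument needs SQL 2005 SP2 or later):

```sql
-- Rough execution counts and last-run times from the plan cache.
-- Caveat: this only covers plans still in the cache, so counts reset
-- on a restart or when a plan is evicted.
SELECT OBJECT_NAME(st.objectid, st.dbid) AS sproc_name,
       qs.execution_count,
       qs.last_execution_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.objectid IS NOT NULL
ORDER BY qs.execution_count DESC;
```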
Here's a blog post for something similar:
I found a thread on a forum that talks about using sys.syscacheobjects to find out?!?
I'm not answering with much experience, but I think that the other options shown here, like tracing, would hurt performance, while adding a line at the top of each proc would be very lightweight at execution time (even though it is a lot of work up front, depending on how many procs you have).
I would build one new SP that does the logging, takes the name of the calling SP as a parameter, and contains the logic for whether to insert or update. That way you only add one line to each of your other SPs and pass their name as a parameter.
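A minimal sketch of that logging procedure (table and procedure names are hypothetical; the table matches the schema from the question):

```sql
-- Hypothetical usage-tracking table.
CREATE TABLE dbo.SprocUsage
(
    SprocName sysname PRIMARY KEY,
    ExecCount int NOT NULL,
    LastExec  datetime NOT NULL
);
GO

CREATE PROCEDURE dbo.LogSprocExecution
    @SprocName sysname
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.SprocUsage
    SET ExecCount = ExecCount + 1,
        LastExec  = GETDATE()
    WHERE SprocName = @SprocName;

    -- First execution of this proc: no row yet, so insert one.
    IF @@ROWCOUNT = 0
        INSERT INTO dbo.SprocUsage (SprocName, ExecCount, LastExec)
        VALUES (@SprocName, 1, GETDATE());
END
GO

-- Then, at the top of each stored procedure:
-- EXEC dbo.LogSprocExecution @SprocName = 'GetCompany';
```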
The most accurate method to achieve your objective is to use a custom logging solution that is built into your stored procedures.
You can use the SQL Server dynamic management views (DMVs), as others have alluded to, to get a rough idea of the queries/stored procedures that are being executed on your server; however, the actual purpose of these DMVs is to provide insight for performance tuning, not to provide an audit trail.
For example: How to identify the most costly SQL Server queries using DMVs
The data provided by the DMVs in question (sys.dm_exec_query_stats etc.) only covers the query plans currently stored in the SQL Server plan cache, and so can only provide a limited perspective on server activity.
SQL Server Books Online: sys.dm_exec_query_stats

Execution Plan for a Currently Running SQL Statement in SQL Server 2000

Is there any way for a DBA to peek in on the execution plan of a long-running query in SQL Server 2000? I know how to get the SQL being run using fn_get_sql(). And yes, theoretically, if I open a new connection and set the environment flags the same, it should generate the same plan for the SQL. However, I'm in a data warehouse environment, and this query has run for 12 hours with a data load in between, so there's no guarantee that the new plan would match the old plan. I just want to know exactly what the server is doing.
And no, I'm certainly not going to kill the currently running statement unless I can see the plan and know for certain that I can do better with index and join hints.
I feel so close, but I still think it can be done. It can definitely be done in 2K5 and later. If you look at the syscacheobjects virtual table, there are object ids for every cached plan. You can call sp_OA* methods on these ids, but without knowledge of the object model (which is proprietary), I can't get anywhere.
I don't think you can do such a thing; the request for a plan needs to be submitted to the server with the original query.
You could load up the query and get the estimated execution plan.
No, you cannot. The best you can do is run DBCC INPUTBUFFER on the query's process and see what the last statement executed was. You can then run this in Query Analyzer and get an execution plan.
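A sketch of that approach on SQL Server 2000 (YourDb and the spid 52 are placeholders; get the real spid from the first query):

```sql
-- Find the long-running spid.
SELECT spid, status, blocked, waittype, cmd
FROM master.dbo.sysprocesses
WHERE dbid = DB_ID('YourDb');  -- placeholder database name

-- Retrieve the last batch that spid submitted, then paste the
-- statement into Query Analyzer to get an (estimated) plan.
DBCC INPUTBUFFER (52);  -- placeholder spid from the query above
```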
Run profiler, and expand the "performance" events node.
Choose one of the SHOWPLAN options.
Hopefully, you will be able to trap the end of execution.
I know you can log query plans, but I don't know if it works in this case.
I don't have SQL 2k profiler, only 2k5, to test something or see the options.

Copy Stored Proc Execution Plan to Another Database

Using SQL Server 2008 R2.
We've got a stored procedure that has been intermittently running very long. I'd like to test a theory that parameter sniffing is causing the query engine to choose a bad plan.
How can I copy the query's execution plans from one database to another (test) database?
I'm fully aware that this may not be parameter sniffing issues. However, I'd like to go through the motions of creating a test plan and using it, if at all possible. Therefore please do not ask me to post code and/or table schema, since this is irrelevant at this time.
Plans are not portable; they bind to object IDs. You can use plan guides, but they are strictly tied to the database. What you have to do is test on a restored backup of the same database. On a restored backup you can use a plan guide. But for relevance, the physical characteristics of the machines should be similar (CPUs, RAM, disks).
Normally, though, one does not need to resort to such shenanigans as copying plans. Looking at the actual execution plans, all the answers are right there.
Have you tried using the OPTIMIZE FOR hint? With it you can tune your procedure more easily, and without the risk that a plan copied from another database would be inappropriate due to differences between those databases (if copying the plan were even possible).
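A sketch of that hint (procedure, table, and the value 42 are all hypothetical):

```sql
-- Compile the plan for a representative parameter value instead of
-- whatever value happens to be sniffed on first execution.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId int
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR (@CustomerId = 42));  -- hypothetical typical value
END
```

On SQL Server 2008 R2 you can also write OPTION (OPTIMIZE FOR UNKNOWN) to have the optimizer use average-density statistics rather than any specific value.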

Getting around cached plans in SQL Server 2005

Can someone please explain why this works? Here is the scenario: I have a stored proc and it starts to run slowly. Then I pick a parameter, declare a local variable to hold its value, and use the declared variable in the proc instead of the parameter. The proc then speeds up considerably.
I think it has something to do with the cached plan and statistics, but I am not sure. Is it that statistics get out of date as the database grows and changes, so that the cached plan is optimized for a past state of the data that differs from its present state?
What you describe is commonly referred to as parameter sniffing, and it seems to be a SQL Server only issue - never had it on Oracle IME, nor am I aware of the issue on MySQL. The link I gave breaks down the issue well.
Mind that the statistics used by the optimizer aren't synced with data changes - you might need to run UPDATE STATISTICS occasionally, too.
When you change DDL, the stored procedure's execution plan is removed from the cache, but as OMG Ponies has said, the optimizer does not track data changes.
One way to get around the issue is to use the WITH RECOMPILE option, so the procedure is compiled every time you run it. Another possible solution is to run sp_recompile periodically, which marks the stored procedure for recompilation.
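Both workarounds, plus the local-variable trick from the question, can be sketched like this (all names are hypothetical):

```sql
-- 1) Recompile on every execution: pays compile cost each call,
--    but never reuses a plan built for an unrepresentative parameter.
CREATE PROCEDURE dbo.GetOrders
    @CustomerId int
WITH RECOMPILE
AS
    SELECT OrderId FROM dbo.Orders WHERE CustomerId = @CustomerId;
GO

-- 2) Mark an existing procedure for recompilation on its next run.
EXEC sp_recompile 'dbo.GetOrders';
GO

-- 3) The local-variable trick from the question: the optimizer cannot
--    sniff @Local at compile time, so it builds the plan from average
--    density statistics instead of one specific value.
CREATE PROCEDURE dbo.GetOrders2
    @CustomerId int
AS
BEGIN
    DECLARE @Local int;
    SET @Local = @CustomerId;
    SELECT OrderId FROM dbo.Orders WHERE CustomerId = @Local;
END
```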