Moving On Up

Hi there, I’ve moved my blog to a new site: stevenormrod.com

Please take a moment to update your bookmarks and then head on over.

Thanks for all the hits!

– Steven

ThankSQLgiving

The past twelve months have been fantastic for me, both personally and professionally.  Heck, I even finished remodeling my house.  I embraced each positive experience and fed it into the next one down the line.  I would never have had such a good year if it were not for the wonderful people I met along the way.

Thanksgiving is as apropos as can be.

PASS Summit

Last year, I was fortunate enough to go to the PASS Summit.  PASS, as it is colloquially known, is a worldwide conference of SQL Professionals.  It is jam-packed with speakers, technical sessions, networking, parties, and hot tubs.  Although, not necessarily all at the same time.

I attended tons of great sessions, many being given by the same bloggers I had been reading; one of whom was Trevor Barkhouse (blog | twitter).  On the meet and greet day, I met Wes Brown (blog | twitter), who runs the local CACTUSS group in Austin, and he mentioned the SQL Saturday conferences.

I left Seattle absolutely loving the city, ready to move there, and recharged about my career with SQL Server.  I have been working with technology for over fifteen years, but I have never experienced the camaraderie that I felt with the SQL community.

SQL Saturday

After learning about SQL Saturday, I started looking up when and where the next one was going to be.  SQL Saturday is an almost free mini-PASS conference put on by your fellow DBAs in the field.  It is on a rotating schedule that moves from town to town, and even to other countries.

The first SQL Saturday I attended was in Houston, where I saw a session listed by Trevor Barkhouse.  I recognized his name from PASS, so I attended his session, asked a bunch of weird questions, and won a book at the end: SQL Server 2008 Internals and Troubleshooting, which I proceeded to read furiously…

After that, I attended the Dallas SQL Saturday and met up with Trevor, and others, again.  The SQL community was really beginning to feel like a family.

SQLskills

To say that it was a blessing to attend the SQLskills classes this year would be an understatement.  Paul Randal (blog | twitter), Kimberly Tripp (blog | twitter), Bob Beauchemin (blog | twitter),  and Jonathan Kehayias (blog | twitter) offer, hands down, the best training I have ever had.  Period.  Forget about SQL training, technical training, or any other training.  I have never been so challenged, felt so dumb, and felt so smart; all at the same time.

As a special treat, I was able to attend a class where Brent Ozar (blog | twitter) was an additional instructor.  His is one of the first blogs I started reading when I started out as a DBA.

If you can only do one thing as a DBA next year, attend a SQLskills class.  You will come away with a sore brain, a list of projects to implement in your shop, and a ton of great scripts and demos to review later.

Grandma’s 99th Birthday & Summer Mummers

This summer, my grandmother turned ninety-nine.  So, my cousin organized a secret family reunion to surprise her.  This was a fantastic event and we had people come out of the woodwork for this.  It was great to see her and other family members that I haven’t seen recently.  It amazes me how lucid she still is; I hope some of her genes rub off on me.

While in Midland, I also got to check out Summer Mummers.  Summer Mummers is a gin-u-wine Wild West Vaudeville Show.  It is an incredible experience and well worth the drive through West Texas.  In fact, people travel from other countries every year to see it.

If you love theatre, do yourself a favor and go see this show at least once in your life.  Once you do, you will wonder how you ever lived before.

SQL Saturday Austin

With some encouragement from Wes Brown (blog | twitter) and Trevor Barkhouse (blog | twitter), I went for broke and submitted a session for SQL Saturday #97 in Austin.  I did a trial run of my presentation at the local CACTUSS user group, and even though I went way over time, I got plenty of good feedback.  So I edited and cut my way down to 60 minutes, re-worked my demos, and went for it.

SQL Saturday Austin was like a class reunion for me.  I saw several of my friends from SQLskills class: Trevor Barkhouse (blog | twitter), Vicky Harp (twitter), and Rudy Rodarte (blog | twitter | youtube).  All of us as presenters!

By taking the leap of faith and becoming a speaker, I met tons of cool folks and made some good, new friends.  Doing this really charged my batteries and primed me for the next great chapter in my career development.

Employment

November brought on another change; after much consideration, I have decided to move on from Whole Foods Market.  During the past four and a half years, I have had a wonderful tenure.  I have made some great friends, worked on some great projects, and gained tons of valuable experience.  I feel very fortunate for having worked there, and am grateful for all the opportunities they have afforded me.  Between data center migrations, multi-instance clustering and consolidation, and some interesting disaster recovery situations, I feel I could not have had a better training ground as a Database Administrator.  And the location’s not bad, either.  It’s hard to beat 6th & Lamar; downtown Austin at its finest…

In a couple of weeks, I will begin the next chapter of my career in the healthcare industry with a global provider of orthotics and prosthetics.  I am excited to be working with an organization that does so much to improve the lives of people all around the world.  I will continue my work with clustering and Denali.  While I will no longer be downtown, the Domain is a pretty cool location, as well.  I’m sure I will find some cool places to explore…

SQLskills Mentoring

Last week, I pretty much won the lottery.  Around mid-week, Jonathan Kehayias (blog | twitter) reached out to tell me about a new mentoring program that SQLskills is starting, and oh by the way, would I be interested?  They say there is no such thing as a dumb question, but I’m not so sure…  :-)

Needless to say, I was interested; and floored, honored, humbled, etc.  I am truly touched that Jonathan and Paul reached out to me and are willing to take the time to guide me along my personal and professional goals for the coming year.  My hope is that I will be able to pay this forward to someone else down the line.

So my next mission is to sort out what exactly are my goals for two thousand and twelve.  I guess I’m not gonna worry about that Mayan Calendar thing any more.  But I do think December 20th, 2012 will be a good day to buy some stocks…  :-)

Everything’s Coming up Milhouse

When I look back upon this great year, it amazes me how things have developed.  If you don’t think your life is going as well as you would like, you DO have the power to change things.  It takes a lot of work and is not easy, but you can create your own luck and make the life that you would like to have.  You will not always get everything you want, but as Wayne Gretzky said, ‘You miss one hundred percent of the shots you don’t take.’

Just go for it.

I do it for the schwag...

This weekend, I rounded out the SQL Saturday Texas Trifecta: Houston, Dallas, and Austin.

Earlier this year, I attended the SQL Saturdays in Houston and Dallas.  These were both fantastic events; I gained some excellent technical training and I made some great connections with other DBAs in the field.

When it was announced that Austin was going to host a SQL Saturday, I immediately jumped at the chance to present.  With this move, I went from being a spectator to a spectacle.

SQL Server Amnesia

I gave my presentation on SQL Server memory usage and how to troubleshoot problems that arise from it.  I took my attendees through the basics of using Perfmon and DMVs to peek under the hood and see what SQL Server and Windows are doing with all the memory.  Thankfully, all my demos worked as planned.  When I did this presentation at the CACTUSS User Group, my laptop locked up, and I had to pop the battery out.  How’s that for memory pressure?
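If you want a quick taste of the DMV side of the talk, here is a minimal sketch (not one of my actual demos) that peeks at memory from both sides of the fence, using sys.dm_os_sys_memory and sys.dm_os_process_memory (available in SQL Server 2008 and later):


-- what does Windows think is going on with memory?
select total_physical_memory_kb / 1024 as 'Total Physical (MB)',
	available_physical_memory_kb / 1024 as 'Available Physical (MB)',
	system_memory_state_desc as 'Memory State'
from sys.dm_os_sys_memory;
go

-- and how much of it is SQL Server actually holding?
select physical_memory_in_use_kb / 1024 as 'In Use (MB)',
	memory_utilization_percentage as 'Working Set %'
from sys.dm_os_process_memory;
go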

No one threw tomatoes at me, and several people asked me some extended questions afterwards. So, I guess that means I did alright.  If you did see my session, please take a moment to give me some feedback.  If you would like the slides or sample code, they are posted on the SQL Saturday #97 website.

I was a little nervous making the jump from presentations at the CACTUSS User Group to a SQL Saturday.  In the end, it was a good decision and I am excited that I took the plunge.  I see this as an opportunity to step up my game and try to give a little back to the community that has helped me hone my DBA skills.

The Sessions

After presenting my session, I checked out some of the other great speakers.  Russell Johnson (blog | twitter) gave a nice overview of the various options that are available for HA and DR in our ever-futile pursuit of a No-Downtime Architecture.

Then it was time for lunch, so I stowed my laptop, grabbed lunch, and chatted with some of the other attendees.  Lunch is also a great time to check out the vendor and sponsor booths.

After lunch, I checked out Jim Murphy’s (blog | twitter) demo of Denali’s Always On Architecture.  He took us through the basics of how it works, and then showed off a demo of Always On in action.

Next, I attended Joe Celko’s dive into Advanced Table Constraints.  He took us through some of the more obscure options for constraints, and discussed how these can assist the query optimizer.

To finish out the day, I learned about the potpourri of management TLAs that are available: CMS, PBM, EPM, MDW, UCP, and DAC.  Nancy Hidy-Wilson (blog) took us through the basics of each of these tools and showed us how they all fit together to help you manage your enterprise.

Vendors and Sponsors

In the foyer, there were booths from several vendors: Idera, Quest, Fusion-IO, Confio, and Texas Memory Systems.  Included with your registration were several raffle tickets that you could use at the vendor tables to try to win some cool prizes: Kindles, iPads, Gift Cards, Software Licences, and more. Several of these vendors had free versions of their tools available to try.

I really have to commend Idera here, as they went above and beyond in their sponsorship of the event: running the registration, manning the info booth, and giving away some great prizes.

Speakers Dinner and After Party

The Speakers Dinner was a nice event where we got to meet the other speakers, finalize any last minute changes to the event, and blow off a little steam before the big event.  Wes Brown & Co. (blog | twitter) treated us to some local grub at Iron Works BBQ.  I really enjoyed chatting with some of the more experienced speakers and getting some tips.  It was also cool meeting a few of the other first timers and seeing I wasn’t the only nervous one in the bunch.

We had the After Party at the same location, and this was a good opportunity to reflect on the day’s events and compare notes with speakers and attendees alike.  I met some really cool people and I’m looking forward to seeing my new friends again.

Conclusion

If you have never attended a SQL Saturday, I encourage you to check the schedule and see when we’ll be coming to your town.  When I look back from January, when I attended my first SQL Saturday in Houston, to ten months later, presenting at my first one in Austin, it amazes me what a good year 2011 has been for me.

From learning about some cool tools, to picking up some good technical knowledge, and most importantly, making some new friends, I have been very fortunate indeed.  The best part: I finally got a coveted SQL Saturday shirt to call my own.

CACTUSS June 2011

Thanks for the Memory

Last night, I had the good fortune of being selected to present at the local SQL Server User Group, CACTUSS.

This was my first presentation with the user group, and my first one on SQL Server, as well.  I have given technical presentations before, but it has been a couple of years.  So I was a little rusty when it came to pacing and how much content to try to include.

SQL Server Memory

My topic for the night was how SQL Server uses memory.  As a back story, I spoke about some of my experiences supporting and troubleshooting SQL Server over the years.  From there I rolled it up into some techniques and queries to spot problems.
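To give a flavor of the kinds of queries I covered, here is a simplified sketch (not one of my actual demos) that shows how much of the Buffer Pool each database is holding.  Each row in sys.dm_os_buffer_descriptors represents one 8-KB page currently in memory:


-- buffer pool usage by database (each buffer is one 8-KB page)
select db_name(database_id) as 'Database',
	count(*) * 8 / 1024 as 'Buffer Pool (MB)'
from sys.dm_os_buffer_descriptors
group by database_id
order by count(*) desc;
go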

Bit Off More Than I Could Chew

One comment I received, and I agree with, is that while I had plenty of good information, perhaps I could break it up into multiple presentations.  Even after removing several slides and demos, I still went over time.

The other thing I realize is that instead of saving all the demos for the end, it may have been more effective to intersperse the demos within the presentation.  Present Topic 1, Demo Topic 1; Present Topic 2, Demo Topic 2, etc.

My audience was most gracious and stayed the entire time.  The best compliment for me was that several people were taking notes on what I was showing them.  Nice.  :-)

Disaster Strikes

As much as I practiced my presentation and related demos, I was not able to keep Mister Murphy at bay.  When I first adjusted my Buffer Pool to introduce some Memory Pressure, I was no longer able to connect and query the database.  I had to laugh and tell the audience that I had just created some true Memory Pressure.
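In case you want to recreate that kind of pressure on a DEV box of your own, lowering ‘max server memory’ is the usual lever.  A minimal sketch; the 256 MB floor is just an example value, so pick one that is safe for your machine:


-- DEV only: shrink the buffer pool to create real memory pressure
exec sp_configure 'show advanced options', 1;
go
reconfigure;
go

-- 256 MB is an example floor, not a recommendation
exec sp_configure 'max server memory (MB)', 256;
go
reconfigure;
go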

Slide Deck

If you are interested, here are the slides from my presentation, SQL Server Amnesia.  It doesn’t look like the free, hosted version of WordPress will let me upload a .ZIP or .SQL with all of my demo scripts.  My apologies, I’m still new at blogging.  I may publish the code later as a separate post.

Feedback

If you were there and would like to leave me some feedback, feel free to leave a comment.  Alternatively, you can use SpeakerRate to give me some feedback.

Club News

Before the presentation, Wes Brown (blog | twitter) gave us an update about the South Austin SQL Users Group and SQL Saturday Austin.  It looks like our space for the South Austin Group fell through for this month, so now we are hoping to try again for next month.

Likewise, the struggle for a SQL Saturday Austin got another twist.  It turns out the date we were shooting for has several other regions hosting a SQL Saturday at the same time.  So now, the official rumour date is December 3rd.  Cross your fingers…

After working hard all summer and saving your pennies you finally have enough money to buy a Lamborghini.  So you pedal downtown to the dealership and trade your jar of coins for a shiny new Fasterosa.  While driving home, you start to notice some bondo and shellac flaking off your pimp ride.  By the time you pull back into the driveway, you realize what you got was a Pinto.  Somebody done got swindled, and it was you!

CPU Throttling

CPU Throttling can occur when your Operating System and BIOS are set to Power Savings mode.  On certain types of computers, DEV boxes and desktops for example, this may be OK.  However, this is not something that you typically want on a Production Database Server.

CPUID

The best way to verify if you have CPU Throttling is to use the CPUID utility.  CPUID is a free utility that will give you correct information about your CPU speeds, cache, and memory.  It will even tell you which memory slots are in use and the specifics for that particular stick of RAM.  You can also use it to determine if you have Hyper-threading enabled.

Performant Green

For many organizations, being green is part of their mission statement.  Perhaps it is even part of your own personal philosophy.  However, as a DBA you must consider the cost / benefit analysis of Power Savings versus Performance.

Are these DEV servers, or do they run an application critical to your business?  Would slow performance cause safety problems for anyone who depends upon your data?

Licensing Costs

Another important factor is licensing and hardware costs.  SQL Server Enterprise Edition costs about $30,000 per processor.  How much do your servers and racks cost?

Add up all of your servers and CPUs and do a little math.  For example, say you have four servers with two CPUs each in your onsite data center, and another four servers in your offsite data center.  That makes for a total of 16 CPUs, with a licensing cost of approximately $480,000.  Let’s call this M, for money.

CPU Throttling

Next, figure out just how much throttling is going on.  Launch CPUID and look at the CPU tab.  First, look for the section named Specification; this is what you paid your hardware vendor for.  Let’s call this C, for CPU speed.  Next, look for the section named Core Speed; this is what your systems guys are giving you.  Let’s call this T for throttling.  Once you have those two numbers, it is time for some tricky division.

OK, let’s find out what percentage of your capacity you are actually using…

True Speed = T / C * 100

Looking at my example numbers, I would have:

True Speed = 1600 / 2800 * 100

True Speed = 0.5714 * 100

True Speed = 57%

Now, let’s use that number to find out how much money you are throwing out the window…

Wasted Money = (1 – T / C)  * M

Looking at my example numbers, I would have:

Wasted Money = (1 – 1600 / 2800) * 480000

Wasted Money = (1 – 0.5714) * 480000

Wasted Money = 0.4286 * 480000

Wasted Money = $205,714.29

Since we are true geeks, let’s use SQL Server to do the calculations for us.


-- CPU Throttling Analysis

declare @CPUSpeed as numeric;
declare @ThrottlingSpeed as numeric;

set @CPUSpeed = 2800;			-- CPUID Specification
set @ThrottlingSpeed = 1600;	-- CPUID Core Speed

-- calculate how much throttling is going on
select cast(cast(@ThrottlingSpeed / @CPUSpeed * 100 as numeric(5,2)) as varchar) + '%'
	as 'What your cruise control is set to...';

-- now, let's factor in your licensing costs...

declare @LicensingCost as numeric;

-- add up your total licensing costs
-- http://www.microsoft.com/sqlserver/2008/en/us/pricing.aspx
set @LicensingCost = 480000;

-- figure out how much money you are wasting
select '$' + cast(cast((1 - @ThrottlingSpeed / @CPUSpeed) * @LicensingCost as numeric(10,2)) as varchar)
	as 'How much bread you''re wasting...'

Results

CPU Throttling Analysis

Business Cost

How much does it cost your business when critical processes take longer to complete?  Will you lose sales?  Will patients receive the care they need?

Are there costs associated with certain processes not being able to finish in a short amount of time?

If a customer has to wait on your application longer than the competition, then you may lose that customer.

Only you can answer these questions; they will be different for every business.

This one is a little harder to quantify.  After you have gotten management’s attention with the money figures from above, use this angle to tie everything back to the business.  Say it with me: you are not a DBA, you are a Business Partner.

If you can create a simple spreadsheet or PowerPoint presentation which illustrates this, then you will have a much easier time of convincing management of your proposal.

CPU Utilization versus CPU Speed

One misconception is that if you do not have high CPU utilization, then you do not need to worry about CPU throttling.  If you look at my example, you will see that we are not maxing out the CPU, so why worry about it?

CPU Utilization

CPU Utilization and CPU Throttling are two different ideas and have different effects upon the system.  CPU Utilization is telling us how many of the CPU cycles we are using.

Consider a highway.  Let’s say we have a six-lane highway, but only three lanes have traffic in them.  This is 50% utilization.

Next, let’s consider the Speed Limit.  If the same six-lane highway has a speed limit of 35 MPH, then it will take us a while to get out of Dodge.  Remember the Lamborghini you bought?  If the engine has a governor that keeps you legal, then you may as well have bought a Pinto.  That is throttling.

So, even though your utilization is low, i.e. lots of lanes are open, you may have the cruise control set to half of what the car is capable of.  You have lots of cycles available for processing; they’re just running at half speed.
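To put numbers to the analogy, here is a back-of-the-napkin sketch using the example figures from above; utilization times the throttle ratio tells you how much of the hardware you paid for is actually doing work:


-- effective use of the hardware: utilization x throttle ratio
declare @Utilization numeric(5,2);
declare @TrueSpeed numeric(5,2);

set @Utilization = 50;		-- percent of cycles in use (lanes with traffic)
set @TrueSpeed = 57.14;		-- core speed / specification speed (the speed limit)

select cast(@Utilization * @TrueSpeed / 100 as numeric(5,2))
	as 'Effective use of what you paid for (%)';
go

Three open lanes at half speed moves less than a third of what the highway was built for.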

Tricking Out the Camaro

The fix for this is pretty easy, but it takes two steps.  First, you will need to check your BIOS settings and see what your Power Management settings are currently set to.  Each manufacturer is different, so my screenshot may not reflect the hardware you have in your shop.  However, it should be fairly similar.  The bad option is highlighted in red, while the good option is highlighted in green.

BIOS Power Savings The Bad with the Good

Next, in the Control Panel, select Power Options and make sure High Performance is selected.

Power Savings Turned Off

Trust, but Verify

Once you have taken care of these two steps, run CPUID again and verify that you are getting all the CPU that you have paid for.  Have fun, and enjoy that Lamborghini; you’ve earned it!

Memorial Day 2011

Today is Memorial Day.  The purpose of this holiday is to commemorate the soldiers who have lost their lives while serving their country.  Please take a moment to consider and be grateful for the things you have in your life, and have a little thought in your heart for those who have been lost.

I would like to thank the Marine Corps for helping to fund my way through college and giving me the drive to achieve the things I have in my life.

Memorial Day is also used to mark the start of the Summer vacation season.  So it’s time to get the Family Truckster headed to the beach.  I hope everyone is enjoying the day off with family and friends, and of course, digging into some good BBQ!

SQL Saturday #63 Dallas

This was my second SQL Saturday after attending the one in Houston earlier in the year.  I was curious to see if this one would be as awesome as my first.  Now, I am excited about the prospect of a Texas Trifecta in Austin later in the year.

SQL Saturday is an almost free, all day event dedicated to all things SQL Server.  I say almost-free since you will still have travel costs: hotel, gas, rental car, meals, etc.  However, I think this is the best bang for the buck that you will find anywhere.

Location

Dallas’ SQL Saturday was held on April 2nd at the Region 10 Education Service Center.  All of the conference rooms were very nice and had adequate audiovisual equipment.  However, in between the sessions, the hallways and common areas were a little cramped.  I think it would have been better if a few more conference rooms were available.  This way, the vendor and networking areas could have been hosted there, thus alleviating some of the congestion.

Networking

Friday, before the event, Idera hosted a casual meet and greet at Wizard’s Sports Cafe.  This was an excellent opportunity to meet your fellow attendees, have a cold beer, and display your lack of pool skills.

SQL Saturday itself is a great way to meet other DBAs from across the state and the country.  You can talk shop, trade horror stories, and even get feedback on your resume.

Saturday, the official networking event was held at Dave and Busters.  There was free food to munch on, other attendees to trade notes with, and presenters to pepper with more questions.

Sessions

There were a total of forty-two sessions covering seven different tracks, so there was a wide variety of topics to choose from.  Some sessions were of the same quality you would find at the PASS Summit, while others were from beginning speakers looking to polish their craft.  I think it is wonderful that there is a hospitable venue for the aspiring speaker.

Highlights

I began my morning with a deep dive on Solid State Storage with Wes Brown (blog | twitter).  He took us through the differences amongst the various vendors.  Additionally, we learned about some of the gotchas that come along with flash storage, and what the difference between enterprise and consumer grade was.

Next, I took in Kevin Kline’s (blog | twitter) presentation on SQL Internals and Architecture.  This was a lively show and the room was literally standing room only.  Afterwards, Kevin was very gracious and took the time for some one-on-one questions from me.

For the afternoon, I checked out Kevin Boles’ (blog | twitter) session on Parallel Query Execution.  He gave us all the basics and showed us what all the options available with sp_configure mean.  In addition to sharing his technical skills with us, he also displayed his sense of humor while dealing with a heckler.

To close out the day, I saw Suresh Kandoth (blog | twitter) give a lesson on Database Availability and Recovery.  This is an area that we all have to deal with from time to time.  Having some knowledge of the fundamentals will help make things easier when the time comes.

Deep Ellum Arts Festival

As luck would have it, the Deep Ellum Arts Festival was being held on the same weekend as SQL Saturday.  Before checking into the hotel, I decided to explore a little bit.  It was held in Deep Ellum and several streets were blocked off to traffic and pedestrianized.  There were multiple stages and several local bands were playing a variety of styles.  Additionally, there were local artists and vendors with booths set up to showcase their wares.  While the event itself was free, you did have to pay for the concessions.  Wandering the streets, listening to some local bands, drinking a cold brew…what a great way to get ready for SQL Saturday.

Conclusion

If you have never been to a SQL Saturday, then you are missing out.  It is a great way to get some nice training, make some new friends, and re-energize yourself about SQL Server.

Heaps of Heaps

Recently, I was looking into a stored procedure that was running poorly.  As part of my analysis, I examined the underlying tables and their indexes.  By using Kimberly Tripp’s (blog | twitter) rewrite of  sp_help, I was able to quickly determine that one of the tables was a heap.

Heaps

A Heap is a table without a Clustered Index.  This is bad for a few reasons:

  • When you update a Heap, you can get forwarded records, which add extra I/O to every lookup that has to chase the pointer (you can check for these with the query below)
  • Every Nonclustered Index on a Heap must store an 8-byte RID (Row Identifier) for each row in order to locate it
  • Both of these can quietly compromise query performance as the table grows
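You can check the first problem for yourself: run sys.dm_db_index_physical_stats in DETAILED mode and look at the forwarded_record_count column.  A sketch; the table name is a placeholder:


-- count forwarded records in a suspect heap (table name is a placeholder)
select index_type_desc, page_count, forwarded_record_count
from sys.dm_db_index_physical_stats(
	db_id(), object_id('dbo.YourHeapTable'), null, null, 'DETAILED')
where index_type_desc = 'HEAP';
go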

How Many Heaps in a Pile

Like most things in nature, when there is one, there is another, and another, and another…

Knowing this, I set about to discover how many other tables in this database were heaps.

First, let’s look at our overall count of heaps in relation to the total table count.


-- percentage of heaps
declare @heapCount numeric;
declare @tableCount numeric;

select @heapCount = count(t.object_id)
	from sys.tables as t
	join sys.indexes as i
	on t.object_id = i.object_id
	where i.type_desc = 'HEAP';

select @tableCount = COUNT(*) from sys.tables;

select @heapCount as 'Heap Count',
	@tableCount as 'Table Count',
	CAST((@heapCount / @tableCount * 100) as numeric(5, 2)) as 'Percentage of Heaps';
go

Next, let’s enumerate all the tables that are heaps.


-- list all the heaps in the currently selected database
select OBJECT_NAME(t.object_id) as 'Table Name', i.type_desc
from sys.tables as t
join sys.indexes as i
on t.object_id = i.object_id
where i.type_desc = 'HEAP'
order by OBJECT_NAME(t.object_id);
go

Time to Make the Clusters

What I found was startling to say the least.  On this particular server, all three databases contain quite a few heaps.

Obviously, this is going to be where I start my tuning efforts.  After seeing how many heaps existed on a strange database, I think it would be wise to make this a standard check when first looking at any unfamiliar database.
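The remedy itself is a one-liner per table; the hard part is choosing a good clustering key (narrow, unique, static, and ever-increasing is the classic advice).  A sketch, with hypothetical names:


-- turn a heap into a clustered table (names are hypothetical)
create clustered index CIX_YourHeapTable_ID
	on dbo.YourHeapTable (YourHeapTableID);
go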

Clearing the Plan Cache

While at SQL Saturday #63 in Dallas, I got some questions about clearing the plan cache.  This evolved into an email discussion, which resulted in me writing up some example scripts.  Since I received some good feedback, I thought this would be a good opportunity to turn them into a blog post.

Warning: These scripts should almost never be run on a production system.  These are meant for debugging and testing on DEV.  If you do run these on PROD, be sure to understand the consequences.

I could see running these on PROD in a few limited cases.  For example, if you have a query that is stuck with a bad plan in cache, you can use these scripts to remove that individual plan.  Another possibility would be to clear out the adhoc plans, perhaps just after you have enabled the optimize for adhoc workloads setting.

Examining the Plan Cache

The Plan Cache is where SQL Server stores the Execution Plans for any queries or stored procedures it has run.  The purpose of this is to enable SQL Server to reuse Execution Plans for each subsequent run of the query or stored procedure.  This allows SQL Server to scale up as more requests begin to pile on.

To examine the Plan Cache you can start by querying the sys.dm_exec_cached_plans DMV.


-- look at the plan cache, list all plans
select * from sys.dm_exec_cached_plans;
go

This is not very useful by itself.  In order to make this query a bit more useful, we can link in either the Query Text or the Query Plan by using two other DMVs: sys.dm_exec_sql_text and sys.dm_exec_query_plan respectively.


-- link in the sql text
select decp.*, dest.text
from sys.dm_exec_cached_plans decp
cross apply sys.dm_exec_sql_text(decp.plan_handle) dest;
go

-- link in the query plan
select decp.*, deqp.query_plan
from sys.dm_exec_cached_plans decp
cross apply sys.dm_exec_query_plan(decp.plan_handle) deqp;
go

And if you wish to see both the Query Text and the Query Plan together, you can link multiple DMVs in the same query.


-- link in both
select decp.*, dest.text, deqp.query_plan
from sys.dm_exec_cached_plans decp
cross apply sys.dm_exec_sql_text(decp.plan_handle) dest
cross apply sys.dm_exec_query_plan(decp.plan_handle) deqp;
go

Clearing the Plan Cache

As I work through clearing the Plan Cache, I’m going to start with the broadest example, and then get more granular as we go along.  If you need to clear the entire Plan Cache, you can do so with a DBCC FREEPROCCACHE command.

I feel the need to issue another warning here, and reiterate that this is almost always a bad idea on a production system.


-- clears everything in the plan cache
dbcc freeproccache;
go

Clearing by Database

If clearing the entire Plan Cache seems a bit excessive, as well it should, then you can get a wee bit more focused by only clearing the plans for a particular database.  This can be achieved by using the DBCC FLUSHPROCINDB command.  This is an undocumented command in SQL 2008, however, you can read about it under the SQL 2005 documentation.

As with clearing the entire cache, this is probably not something you want to do with a production system unless you have a good reason.  You may have one database that is a special case, which needs to be tamed.


-- clear the plan cache for a particular database
declare @v_DBID tinyint;

select @v_DBID = dbid from sys.sysdatabases where name = 'AdventureWorks2008R2';

dbcc flushprocindb (@v_DBID);
go

Clearing by Object Type

Next, let’s look at clearing the Plan Cache for a particular type of object.  By using DBCC FREESYSTEMCACHE we can choose to clear out the Adhoc Plans, Stored Procedure Plans, Views, or Extended Stored Procedures.

Most likely, the ones you will be interested in are Adhoc Plans and Stored Procedures.


-- clear plans for a particular type of object

-- clear sql plans: adhoc sql, prepared statements
dbcc freesystemcache('SQL Plans');
go

-- clear object plans: stored procedures, functions, triggers
dbcc freesystemcache('Object Plans');
go

-- clear bound trees: views, constraints, defaults
dbcc freesystemcache('Bound Trees');
go

-- clear extended stored procedures
dbcc freesystemcache('Extended Stored Procedures');
go

Like a broken record, I gotta remind you that it is a bad idea to do this on a production system.

Optimize for Adhoc Workloads

The primary reason you may wish to clear out the Adhoc Plans from your cache is if you have just enabled the SQL Server setting ‘optimize for ad hoc workloads’.  This setting tells SQL Server not to cache the Execution Plans for Adhoc queries until they have been run more than once.  This helps keep Plan Cache Bloat under control.

You can enable this setting by using sp_configure.


-- show advanced options - enable all options to be seen / set
use [master]
go
exec sp_configure 'show advanced options', 1
go
reconfigure
go
reconfigure with override
go

-- optimize for ad hoc workloads
sp_configure 'optimize for ad hoc workloads', 1
go
reconfigure
go
reconfigure with override
go

-- verify change
sp_configure 'optimize for ad hoc workloads'
go

Once you have enabled this setting, the Single-Use Adhoc Plans will remain in cache until either the SQL Service has been restarted, or you have explicitly cleared the Plan Cache of Adhoc Plans by using the code example above.
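Before and after you flip the setting, it is worth measuring how bad the bloat actually is.  Here is a quick sketch that counts the Single-Use Adhoc Plans and how much cache they are holding:


-- how many single-use adhoc plans are bloating the cache?
select count(*) as 'Single-Use Adhoc Plans',
	sum(cast(size_in_bytes as bigint)) / 1024 / 1024 as 'Cache Used (MB)'
from sys.dm_exec_cached_plans
where objtype = 'Adhoc'
	and usecounts = 1;
go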

Resource Governor

If you are using Resource Governor, then you have the option of clearing the Plan Cache for a particular Resource Pool.  Resource Governor allows you to create various Resource Pools and then designate a certain percentage of CPU or Memory to that pool.

In order to get a list of the Resource Pools that are defined on your system, you can query the sys.dm_resource_governor_resource_pools DMV.


-- list the resource governor pools
select distinct name as 'Resource Governor Pools'
from sys.dm_resource_governor_resource_pools;
go

Once you have identified the Resource Pool that needs to have its associated Cached Plans cleared, you can clear the plans by using the DBCC FREEPROCCACHE command.


-- clear the default resource governor pool
dbcc freeproccache ('default');
go

-- clear the internal resource governor pool
dbcc freeproccache ('internal');
go

-- clear a user-defined resource governor pool
dbcc freeproccache ('YOURRESOURCEGOVERNORPOOL');
go

Again, please do not do this on a production system unless you are certain this is what you need.

Clearing a Single Plan

If you have a particular query which is causing you problems, for example, it got compiled with a bad plan, then, one option is to remove the offending plan from the cache, which will cause a recompile.  There is no guarantee, however, that the new plan will be any better than the old plan.

You may wish to capture the existing plan so that you can compare it with the new plan.  This way you can explore what the differences are, and focus your tuning efforts there.

To find the query plan for a particular query we will need a search string; the SQL from your query.  We will place your query inside of mine, so we can search the Plan Cache for it.  It is not necessary to use the entire text of the query; that would be a bit cumbersome.  All we need is enough to make it unique, or unique enough that we can find it in the results.


-- or it may be even better to search by the sql text
-- so that you only see the ones that interest you
-- also link in the query plan so that you can analyze it
select decp.usecounts, decp.size_in_bytes, dest.text, decp.plan_handle, deqp.query_plan
from sys.dm_exec_cached_plans decp
cross apply sys.dm_exec_sql_text(decp.plan_handle) dest
cross apply sys.dm_exec_query_plan(decp.plan_handle) deqp
where dest.text like '%PLACE YOUR SQL CODE SNIPPET HERE%';
go

Once you have identified the offending plan, it is a simple matter to remove it from the cache.  Use the plan_handle with the DBCC FREEPROCCACHE command to clear it from the Plan Cache.


-- once you find the plan you wish to prune
-- place the plan_handle in the following statement
dbcc freeproccache (0x0600010010450926B8E00C07000000000000000000000000);
go

So this is the option where I can most readily see a valid case for doing it on Production.  All the same, be careful and understand what you are doing.  It is important to clear only the Query Plan that is needed, and no more.

Conclusion

It is important to keep tabs on what is going on in your Plan Cache.  Being able to clear out all or part of your Plan Cache can be a powerful debugging technique.  Please bear in mind that this is not something that should be done on a Production server without careful consideration of the side effects.

However, playing with this in DEV can help you understand more of what is happening under the covers of SQL Server.

Back to the Drawing Board

Like many DBAs, I have a variety of scripts I use to check on the health of my servers.  Some are borrowed, others stolen, and a few I have written myself.  A common theme amongst them is the need to filter either user or system processes depending on the situation.

Filtering User Processes

Typically I have done this by adding a where clause which filters the results on session_id.


-- incorrect method to filter for user sessions
select *
from sys.dm_exec_sessions
where session_id > 50

-- incorrect method to filter for user requests
select *
from sys.dm_exec_requests
where session_id > 50

You see, I have always been taught that system processes will always have a session_id of one through fifty, while user processes will always be greater than fifty.

Recently, I had the opportunity to work with a Microsoft engineer while troubleshooting some systems.  As we were running through my scripts he mentioned that this was no longer the case; system SPIDs can exist above fifty.  When I asked him how I should go about filtering user and system processes, he said he did not know of a way.

System Processes Have SPIDs Above 50

I was crestfallen to say the least.  Here I was, using an outdated technique and thinking I was cool. However, I was sure there must be a way to achieve the results I was looking for.

As luck would have it, the following week I was taking a SQLskills class with Paul Randal (blog | twitter).  During one of the breaks, I asked him about this and if he knew of a new method for filtering user and system processes.  True to form, he did some research, reached out to one of his contacts at Microsoft, and had an answer for me within the hour.

New way?  What’s this about a new way?

Apparently, this behavior has existed since SQL 2005 but is not widely known.  The correct way to filter user processes is to query sys.dm_exec_sessions and filter on the is_user_process column.


-- list user processes
select *
from sys.dm_exec_sessions
where is_user_process = 1

-- list system processes
select *
from sys.dm_exec_sessions
where is_user_process = 0

So, in order to leverage this in other DMVs, you will need to use a join clause.  My updated code looks like this…


-- correct method to filter user requests
select der.*
from sys.dm_exec_requests der
	join sys.dm_exec_sessions des
	on der.session_id = des.session_id
where des.is_user_process = 1

An Exercise for the Writer

It always amazes me how long you can use a product and still stumble upon ‘new’ features at any moment.  So now, I will be reviewing my existing scripts and making updates where appropriate.
