This is a question that came up yesterday in our Disaster Recovery class so I’m typing it up in between attending sessions at Microsoft Day here at the conference. It’s an interesting experience watching all the Microsoft speakers walking around in the distinctive blue shirts and no longer having to wear one myself.
The question is the following – why does DBCC CHECKDB terminate with an out-of-space error in SQL Server 2005? A customer had a 500GB database spread over 17 LUNs on a SAN, with each LUN having only 5GB free. A heavy concurrent workload runs while DBCC CHECKDB is running, and very often CHECKDB doesn't complete but instead stops with an error. What's going on?
The reason for this is the way that DBCC CHECKDB gets a transactionally consistent view of the database. In a nutshell, it creates an internal database snapshot of the database and then checks the database snapshot (you can read more about this in my previous post detailing all the steps of DBCC CHECKDB). A database snapshot needs to have one snapshot file for each file in the source database. In the case of DBCC CHECKDB, the snapshot files are created in the same place as the existing database files – i.e. stored on the same disk volume – and you have no control over this. This means any changes to the database while DBCC CHECKDB is running will cause these snapshot files to grow.
If there's a significant concurrent workload while DBCC CHECKDB is running, then these snapshot files can get very big very quickly. In the case described in the question, the workload caused one of the snapshot files to take up all the available space on its LUN, and the LUN then ran out of space. When this happens the snapshot is no longer valid and so DBCC CHECKDB has to stop.
There are two solutions to this. The obvious first one is to run DBCC CHECKDB in a period with low concurrent workload. That’s not feasible for many 24×7 shops so the better solution is to create your own database snapshot (so you can control the placement of it) and then run DBCC CHECKDB on that. This is no different than having DBCC CHECKDB create its own snapshot.
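For example, here's a minimal sketch of that approach (the database name MyDB, the logical file name MyDB_Data, and the X:\Snapshots path are all placeholders, and the NAME values must match the logical names of the source database's data files):

-- Create the snapshot on a volume you choose, instead of letting
-- DBCC CHECKDB place its hidden snapshot files next to the data files.
CREATE DATABASE MyDB_CheckSnapshot
ON ( NAME = MyDB_Data, FILENAME = 'X:\Snapshots\MyDB_Data.ss' )
AS SNAPSHOT OF MyDB;
GO

-- Run the consistency checks against the snapshot.
DBCC CHECKDB (MyDB_CheckSnapshot) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO

-- Drop the snapshot once the checks complete.
DROP DATABASE MyDB_CheckSnapshot;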
There's another reason that DBCC CHECKDB may stop with an out-of-space error. DBCC CHECKDB has to validate the contents of the whole database. As such it needs to store info about things it has seen at one point during database scanning so that it can match that info against things it sees at later points during database scanning. These bits of info are stored in an in-memory worktable. In many cases though, the amount of info being stored exceeds the available memory and the worktable needs to spill over to tempdb. (For example, when running DBCC CHECKDB against a 1TB database on a machine with 16GB of memory, the amount of intermediate info that DBCC CHECKDB needs to store will likely exceed the memory available to SQL Server.) So – if tempdb isn't big enough to store the worktable, it has to grow. If it can't grow, then DBCC CHECKDB will fail because it needs the worktable to operate.
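As a quick sanity check (just a sketch using the standard catalog view, with sizes converted from 8KB pages to MB), you can see how big tempdb's data files currently are and whether they're allowed to grow:

-- Current size and growth settings for tempdb's data files.
SELECT name,
       CAST(size AS bigint) * 8 / 1024 AS current_size_mb,
       CASE max_size
            WHEN -1 THEN 'unlimited'
            ELSE CAST(CAST(max_size AS bigint) * 8 / 1024 AS varchar(20)) + ' MB'
       END AS max_size,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(20)) + ' %'
            ELSE CAST(CAST(growth AS bigint) * 8 / 1024 AS varchar(20)) + ' MB'
       END AS growth_increment
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';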
You can check in advance how much (estimated) space will be needed by DBCC CHECKDB by running it with the WITH ESTIMATEONLY option (broken in 2008 R2, fixed in 2012 SP2 and 2014) and then making sure that tempdb is sized to accommodate both the DBCC CHECKDB requirements AND the regular tempdb space requirements. See Capacity Planning for tempdb in Books Online for more info.
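For example (MyDB is a placeholder for your database name):

-- Prints the estimated tempdb space (in KB) that the checks will need;
-- compare it against the space available in tempdb.
DBCC CHECKDB (MyDB) WITH ESTIMATEONLY;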
18 thoughts on “CHECKDB From Every Angle: Why would CHECKDB run out of space?”
Odd – I ran ESTIMATEONLY on my 80 GB database and it estimated around 29 MB, yet I have 60 GB allocated to my tempdb and ran out of space. I've also got 15 GB of RAM, so I would think that would be more than enough. Any ideas?
It’s broken in recent releases – see comment at the end of https://www.sqlskills.com/blogs/paul/how-does-dbcc-checkdb-with-estimateonly-work/
I ran DBCC CHECKDB WITH ESTIMATEONLY for a 225 GB database and got a very low space estimate of 250KB. I am using SQL Server 2012 SP1, which your post said is working in the 2012 and 2014 versions, but CHECKDB is still failing with a tempdb-full issue and there isn't much workload on the SQL Server. Can you suggest anything?
It’s not working until 2012 SP2. You need to create a larger tempdb database – the DBCC CHECKDB bug in estimation has nothing to do with how much actual tempdb space you need.
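For example, a minimal sketch of pre-sizing tempdb's primary data file (tempdev is the default logical name but may differ on your instance, and 60GB is just an illustrative target):

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 60GB);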
DB size 24GB
DBCC CHECKDB (MyDB) WITH ESTIMATEONLY give this:
DBCC results for ‘MyDB’.
Estimated TEMPDB space (in KB) needed for CHECKDB on database MyDB = 15528.
CHECKDB found 0 allocation errors and 0 consistency errors in database ‘ePO_SRV003801’.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Yet when I run DBCC CHECKDB ([MyDB]) WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY
it pushes my TempDB up to around 27GB. I tested this on a clean instance (SQL 2012 SP2 Ent) where nothing else is happening. I even restarted the instance and set TempDB to 2GB before restart.
Tested twice with the same result.
Is there any other reason why I need so much space in TempDB for this?
Yes, when they fixed it after breaking it for 2008 R2, they must have made the estimation algorithm not as conservative as it used to be, and your database has a lot of structures in it, possibly with one very large table with lots of rows? CHECKDB needs what it needs, and there’s no way around it. Nothing else will be taking up space in tempdb.
Does the estimated size returned by …WITH ESTIMATEONLY apply to situations where there are multiple tempdb data files for the instance on which the database in question resides? (i.e. does the combined total space of tempdb have to be >= the result of …WITH ESTIMATEONLY, or does overflow into tempdb not work this way?) I'm running into a space restriction in SQL Server 2014 with tempdb as described above.
The estimation calculation is orthogonal to how you have tempdb configured – it just needs that much space in tempdb.
I have a query.
I am running DBCC CHECKTABLE against one of my tables (size 30 GB) but it ends with a failure saying tempdb (40 GB) is out of space. Before running CHECKTABLE I checked the size of tempdb and it was only in the MBs, but as soon as I run CHECKTABLE, tempdb gets filled up. Please advise what the cause of this issue could be.
I have checked active transactions using sys.sysprocesses; SQL Server is idle when we do this operation.
Likely you’ve hit a bug if it’s needing that much space – make sure you’re on the latest build of your version. If you are, it may just be a complex schema in the table that requires a lot of space for all the facts about it.
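For reference, you can confirm the exact build you're on with:

SELECT SERVERPROPERTY('ProductVersion') AS product_version, -- build number
       SERVERPROPERTY('ProductLevel') AS product_level;     -- e.g. RTM, SP1, SP2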
Hi,
By checking the internals of the database, I could see the database compatibility level is set to 90, but the database is on SQL Server 2012.
Hi Paul
First of all, thanks a lot for sharing your knowledge. Your blog has been (and still is) a very useful and reliable site for enhancing our SQL Server skills.
Now, here is my question.
Since “A database snapshot needs to have one snapshot file for each file in the source database. In the case of DBCC CHECKDB, the snapshot files are created in the same place as the existing database files – i.e. stored on the same disk volume – and you have no control over this.”
Without a significant concurrent workload while DBCC CHECKDB is running, would it be better to:
a) Have one data file of 5TB on one drive (let's say K:\)? or
b) Have 5 data files of 1TB on one drive (let's say K:\)? or
c) Have 5 data files of 1TB and place each file on its own drive (let's say K:\, H:\, N:\, M:\ and O:\)?
Regards,
I don’t understand your question. For the database snapshot that DBCC CHECKDB creates, you have no control over where the snapshot files are created. If you’re going to create your own database snapshot and then run DBCC CHECKDB against it, you must have one snapshot file per database data file, and it’s more of a space consideration rather than a performance consideration.
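To illustrate (the database and file names here are hypothetical), a snapshot of a database with multiple data files just needs one snapshot file per data file, and the snapshot files can all be placed wherever there's enough space:

-- sys.database_files in the source database lists the logical data file
-- names to use for the NAME values below.
CREATE DATABASE MyBigDB_Snapshot
ON ( NAME = MyBigDB_Data1, FILENAME = 'S:\Snapshots\MyBigDB_Data1.ss' ),
   ( NAME = MyBigDB_Data2, FILENAME = 'S:\Snapshots\MyBigDB_Data2.ss' ),
   ( NAME = MyBigDB_Data3, FILENAME = 'S:\Snapshots\MyBigDB_Data3.ss' )
AS SNAPSHOT OF MyBigDB;

DBCC CHECKDB (MyBigDB_Snapshot) WITH NO_INFOMSGS, ALL_ERRORMSGS;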
Hi Paul
Sorry for the confusion, but you've already answered my question.
If we don't have any space constraints, then it won't matter where and how I deploy those 5TB.
Thanks
My database size is 77GB, but DBCC CHECKDB WITH ESTIMATEONLY estimates 182GB of tempdb space. How can that be?
It’s a bug they’ve introduced – at some point they’ll fix it.
Hi Paul
How can I estimate the space requirement for the snapshot files, since they are created in the same place as the existing database files, on the same volume?
You can’t – it depends on the uncommitted transactions at the time the snapshot is created.