Before I start, I want to make it clear that you can only hit this bug if you ALREADY have corruption, that it’s quite rare, and that there is a workaround.
I’ve noticed a few more people in the forums having DBCC CHECKDB fail with this particular error in the last month:
Msg 8967, Level 16, State 216, Line 1
An internal error occurred in DBCC which prevented further processing. Please contact Product Support.
Whenever DBCC CHECKDB is using a database snapshot, it must check that each page it reads through the snapshot does not have an LSN (Log Sequence Number) higher than the LSN at the point the snapshot was created. If it did, that would mean the page was modified AFTER the snapshot was created, and hence DBCC CHECKDB would be working from an inconsistent view of the database. If this case is discovered, DBCC CHECKDB stops immediately. When I rewrote DBCC CHECKDB for SQL Server 2005, I changed a bunch of code assertions into separate states of the 8967 error, so that DBCC CHECKDB would fail gracefully if some condition occurred that indicates a bug or something that should never happen. State 216 is for the bad-LSN condition I've just described.
I used to think it was caused by a race condition in the NTFS code that implements sparse files, which are used by the hidden database snapshot that DBCC CHECKDB creates by default. However, I've come to learn that this is a bug in DBCC CHECKDB (not one of mine, I should say :-)) that causes this behavior under certain circumstances when corruption is present. The bug is that if a corrupt page fails auditing inside DBCC CHECKDB, the LSN check is still performed. If the corruption affects the LSN stamped in the page header, the 8967 error can be triggered.

I've seen this a handful of times in the last few weeks – hence the need for a blog post. I've discussed this with the dev team and hopefully the fix will make it into the next Service Packs for 2005 and 2008 (it's too late to fix such a rare problem in such a critical component at this stage of 2008 development). They're going to put a KB article together too – but in the meantime, I wanted to get this on the Internet so Google/Live Search pick it up.
[Edit: June 24 2010 – this still hasn't been fixed AFAIK – see KB 960791, which just discusses the workaround.]
Now let’s repro the problem. Starting with a simple database and table, I’ll find the first page so I can corrupt it.
CREATE DATABASE [TestCheckdbBug];
GO
USE [TestCheckdbBug];
GO
CREATE TABLE [test] ([c1] INT, [c2] CHAR (5000));
INSERT INTO [test] VALUES (1, 'a');
GO
EXEC sp_AllocationMetadata 'test';
GO
Object Name Index ID Alloc Unit ID     Alloc Unit Type First Page Root Page First IAM Page
----------- -------- ----------------- --------------- ---------- --------- --------------
test        0        72057594042318848 IN_ROW_DATA     (1:143)    (0:0)     (1:152)
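(sp_AllocationMetadata is a helper script rather than a built-in procedure; if you don't have it, the undocumented DBCC IND command returns similar page-allocation information. A quick equivalent:)

-- -1 = all indexes; PageType 1 rows are data pages, PageType 10 is the IAM
DBCC IND (N'TestCheckdbBug', N'test', -1);
GO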
Now I’ve corrupted the page type on page (1:143) to be 255 (an invalid page type), which will guarantee the page fails the audit inside CHECKDB.
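(If you want to reproduce this on a disposable test instance and don't fancy a hex editor, one option is the undocumented, completely unsupported DBCC WRITEPAGE command. Treat this as a sketch only: the offset of the m_type field is an assumption you should verify with DBCC PAGE first, and you should never run anything like this against a database you care about.)

-- WARNING: undocumented and unsupported; throwaway test databases only
-- m_type is assumed to be the second byte of the page header (offset 1)
ALTER DATABASE [TestCheckdbBug] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
DBCC WRITEPAGE (N'TestCheckdbBug', 1, 143, 1, 1, 0xFF); -- file 1, page 143, offset 1, length 1, value 0xFF
GO
ALTER DATABASE [TestCheckdbBug] SET MULTI_USER;
GO

Because this (default) form goes through the buffer pool, the page checksum should be recalculated when the page is written out, so DBCC CHECKDB should hit the page-type audit failure rather than a checksum error.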
DBCC CHECKDB ('TestCheckdbBug') WITH ALL_ERRORMSGS, NO_INFOMSGS;
GO
Msg 8928, Level 16, State 1, Line 1
Object ID 2073058421, index ID 0, partition ID 72057594038321152, alloc unit ID 72057594042318848 (type In-row data): Page (1:143) could not be processed. See other errors for details.
Msg 8939, Level 16, State 6, Line 1
Table error: Object ID 2073058421, index ID 0, partition ID 72057594038321152, alloc unit ID 72057594042318848 (type In-row data), page (1:143). Test ((m_type >= DATA_PAGE && m_type <= UNDOFILE_HEADER_PAGE) || (m_type == UNKNOWN_PAGE && level == BASIC_HEADER)) failed. Values are 255 and 255.
CHECKDB found 0 allocation errors and 2 consistency errors in table 'test' (object ID 2073058421).
CHECKDB found 0 allocation errors and 2 consistency errors in database 'TestCheckdbBug'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (TestCheckdbBug).
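Before corrupting the LSN, you can take a look at it: the undocumented DBCC PAGE command dumps the page header, and the m_lsn field there is the value DBCC CHECKDB compares against the snapshot's creation LSN. Trace flag 3604 just routes the output to the client instead of the error log.

DBCC TRACEON (3604);
GO
DBCC PAGE (N'TestCheckdbBug', 1, 143, 0); -- print option 0 = page header only
GO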
Now I’m going to corrupt the LSN on that page such that it’s guaranteed to be higher than the creation LSN of the database snapshot (basically by filling the first part of the page header LSN field with 0xFF).
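(Again, purely as an illustration for a throwaway test database, and again unsupported: the same DBCC WRITEPAGE approach could be used. The position of m_lsn in the page header is an assumption here, so verify it against the DBCC PAGE output above before trying anything like this.)

-- m_lsn is assumed to start at offset 40 (0x28) in the page header; its first
-- four bytes are assumed to be the high-order part of the LSN, so stamping them
-- with 0xFF makes the LSN far higher than anything the log could contain
DBCC WRITEPAGE (N'TestCheckdbBug', 1, 143, 40, 4, 0xFFFFFFFF);
GO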
DBCC CHECKDB ('TestCheckdbBug') WITH ALL_ERRORMSGS, NO_INFOMSGS;
GO
Msg 8967, Level 16, State 216, Line 1
An internal error occurred in DBCC that prevented further processing. Contact Customer Support Services.
Msg 8921, Level 16, State 1, Line 1
Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors.
Bingo! And in the error log, there’s some diagnostic information so we can tell which page caused the problem:
2008-05-22 14:55:01.95 spid53 DBCC encountered a page with an LSN greater than the current end of log LSN (31:0:1) for its internal database snapshot. Could not read page (1:143), database 'TestCheckdbBug' (database ID 15), LSN = (-1:65535:18), type = 255, isInSparseFile = 0. Please re-run this DBCC command.
2008-05-22 14:55:01.95 spid53 DBCC CHECKDB (TestCheckdbBug) WITH all_errormsgs, no_infomsgs executed by ROADRUNNERPR\paul terminated abnormally due to error state 1. Elapsed time: 0 hours 0 minutes 0 seconds.
Note the page ID tells us the bad page and the LSN reflects the corruption that I caused. If the page ID field of the header was corrupt, it wouldn’t be possible to tell from these diagnostics which page is corrupt. However, all is not lost. This bug means that under these circumstances the default online behavior of DBCC CHECKDB can’t run. The workaround is to use the WITH TABLOCK option, which does offline checking and doesn’t need the snapshot – but the trade-off is that an exclusive database lock is required for a short time and then shared table locks for all tables in the database (this is why online is the default). Running this option on my corrupt database gives:
DBCC CHECKDB ('TestCheckdbBug') WITH ALL_ERRORMSGS, NO_INFOMSGS, TABLOCK;
GO
Msg 8928, Level 16, State 1, Line 1
Object ID 2073058421, index ID 0, partition ID 72057594038321152, alloc unit ID 72057594042318848 (type In-row data): Page (1:143) could not be processed. See other errors for details.
Msg 8939, Level 16, State 6, Line 1
Table error: Object ID 2073058421, index ID 0, partition ID 72057594038321152, alloc unit ID 72057594042318848 (type In-row data), page (1:143). Test ((m_type >= DATA_PAGE && m_type <= UNDOFILE_HEADER_PAGE) || (m_type == UNKNOWN_PAGE && level == BASIC_HEADER)) failed. Values are 255 and 255.
CHECKDB found 0 allocation errors and 2 consistency errors in table 'test' (object ID 2073058421).
CHECKDB found 0 allocation errors and 2 consistency errors in database 'TestCheckdbBug'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (TestCheckdbBug).
These are the exact same results we had before I corrupted the LSN field (this is expected, as there is no check of a page's LSN field EXCEPT when running from a database snapshot). Now we can proceed to restore/repair as appropriate.
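For example, if suitable backups exist, a single-page restore is usually the least intrusive option (online in Enterprise Edition, offline otherwise). A sketch only, with placeholder backup paths:

-- restore just the damaged page from the last full backup (path is a placeholder)
RESTORE DATABASE [TestCheckdbBug]
    PAGE = '1:143'
    FROM DISK = N'C:\Backups\TestCheckdbBug_full.bak'
    WITH NORECOVERY;
GO
-- then restore any subsequent log backups, take and restore a final tail-of-the-log
-- backup, and finish with:
RESTORE DATABASE [TestCheckdbBug] WITH RECOVERY;
GO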
So – a scary little bug that has caused some people headaches, but I want to stress again that this can only happen if the database is ALREADY corrupt, and that it's quite rare. I hope this helps some of you who pick this up from search engines in the future.
6 thoughts on “CHECKDB bug that people are hitting – Msg 8967, Level 16, State 216”
Paul,
Thanks for the follow up on this problem. Good to know it wasn’t your code :-)
Many thanks – saved me lots of time and headache.
Dear Sir,
An excellent analysis and well put together article, much appreciated.
I have also run into this error in the following case:
1) SQL 2005 install on VMWare ESX server
2) two databases restored from same BAK file
3) db1 has all mdf & ldf files in single directory on hard drive on VMWare image
4) db2 has all mdf & ldf files in single directory on external hard drive using USB connection
5) DBCC CHECKDB on db1 returns expected output
6) DBCC CHECKDB on db2 returns Msg 8967, Level 16, State 216, Line 1…..
A couple of iterations with different BAK files yield the same result. Any thoughts, particularly with reference to VMWare? The hardware (external HD & USB cable) runs fine for all other applications used to date.
Hi SparkMeister,
Do you always get that problem storing the database on the USB drive? If so, the USB or VMWare I/O layer to the USB is corrupting the database… try running CHECKDB using WITH TABLOCK and it’ll tell you where.
What happens if you restore onto the USB through an instance not installed on VMWare?
Thanks
Running a FULL backup of a 60 GB database over a slow link between sites (it takes over 4 hours to complete) will probably result in some corruption, especially if the DB has image LOBs.
The initial error during a test restore of the 5-hour backup said:
Damage to the backup set was detected.
RESTORE WITH CONTINUE_AFTER_ERROR was successful but some damage was encountered. Inconsistencies in the database are possible.
The database was left in a SUSPECT state and as "RESTORING", because I used NORECOVERY.
I removed the suspect bit and finished off the RESTORE with the differential backup taken after the full backup. That worked, but a DBCC CHECKDB told me:
Msg 8967, Level 16, State 216, Line 1
An internal error occurred in DBCC that prevented further processing. Contact Customer Support Services.
DBCC results for ‘thisDB60Gb’.
Msg 8921, Level 16, State 1, Line 1
Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors.
CHECKDB found 0 allocation errors and 0 consistency errors in database ‘thisDB60Gb’.
In the SQL error log (not in the interactive output!) we find the telltale message:
DBCC encountered a page with an LSN greater than the current end of log LSN (6105:0:1) for its internal database snapshot. Could not read page (1:6103693), database ‘thisDB60Gb’ (database ID 18), LSN = (7310:17986:19), type = 3, isInSparseFile = 0. Please re-run this DBCC command.
alter database thisDB60GB set single_user with rollback immediate
-- that's because even WITH TABLOCK below did not work ...
DBCC CHECKDB ('FSIDArchive') WITH TABLOCK, ALL_ERRORMSGS, NO_INFOMSGS;
Now DBCC worked … found these 50 problems …
Object ID 2105058535, index ID 0, partition ID 72057594038452224, alloc unit ID 72057594043432960 (type In-row data): Errors found in off-row data with ID 59509243904 owned by data record identified by RID = (1:2008990:43)
Msg 8929, Level 16, State 1, Line 1
and
Table error: Object ID 2105058535, index ID 0, partition ID 72057594038452224, alloc unit ID 72057594043498496 (type LOB data). The off-row data node at page (1:2024489), slot 0, text ID 59509243904 does not match its reference from page (1:6103693), slot 1.
and finally
CHECKDB found 0 allocation errors and 50 consistency errors in table ‘zzzz’ (object ID 2105058535).
CHECKDB found 0 allocation errors and 50 consistency errors in database ‘thisDB60GB’.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB
———–
which I did with these effects:
Repair has invalidated the differential bitmap for database FSIDArchive. The differential backup chain is broken. You must perform a full database backup before you can perform a differential backup.
Repair: Deleted record for object ID 2105058535, index ID 0, partition ID 72057594038452224, alloc unit ID 72057594043432960 (type In-row data), on page (1:2008990), slot 43. Indexes will be rebuilt.
Repair: Deleted off-row data column with ID 59509243904, for object ID 2105058535, index ID 0, partition ID 72057594038452224, alloc unit ID 72057594043432960 (type In-row data) on page (1:2008990), slot 43.
etc… and a lot of repairs to LOBs..
Msg 8965, Level 16, State 1, Line 1
Table error: Object ID 2105058535, index ID 0, partition ID 72057594038452224, alloc unit ID 72057594043498496 (type LOB data). The off-row data node at page (1:6103743), slot 0, text ID 89164808192 is referenced by page (1:6103740), slot 0, but was not seen in the scan.
"The error has been repaired." !! That appeared quite a few times ...
then finally
CHECKDB found 0 allocation errors and 50 consistency errors in table ‘zzzz’ (object ID 2105058535).
which it then "fixed"
and
CHECKDB fixed 0 allocation errors and 50 consistency errors in database ‘thisDB60GB’.
———
Don't know how much I can trust these repairs though... but that is another story.
This seems to be a case where the log was actually corrupt, rather than hitting the bug in question. In that situation, the behavior is exactly as expected. As to trusting the repairs, I can only say that I wrote the repair code for 2005, and no bug has been found in it yet :-)
Thanks for the story!