(New for 2020: we’ve published a range of SQL Server interview candidate screening assessments with our partner Kandio, so you can avoid hiring an ‘expert’ who ends up causing problems. Check them out here.)
(Check out my Pluralsight online training course: SQL Server: Logging, Recovery, and the Transaction Log.)
I’ve blogged a bunch about using the undocumented fn_dblog function I helped write (and I’ve got a lot more to come :-) but here’s one I haven’t mentioned on my blog before: fn_dump_dblog (although I have talked about it at conferences last year).
Here’s a scenario: someone dropped a table and you want to find out when it happened and maybe who did it. The default trace has also wrapped so you can’t get the DDL trace from there anymore.
If the log records for the DROP haven’t yet been cleared from the active portion of the log, you can use fn_dblog to search through the log for the information you need. You might even be able to look in the inactive portion of the log by using trace flag 2536, which instructs the log reader to ignore the log truncation point and dump all possible log records from the log.
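As a rough sketch of what that could look like (this assumes you’re comfortable turning trace flags on and off on the server, and the DROPOBJ filter is just illustrative):

-- Let the log reader go past the log truncation point
DBCC TRACEON (2536);
GO

SELECT [Current LSN], [Operation], [Transaction ID], [Transaction Name]
FROM fn_dblog (NULL, NULL)
WHERE [Transaction Name] LIKE '%DROPOBJ%';
GO

DBCC TRACEOFF (2536);
GO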
But what do you do if the pertinent log records just don’t exist in the log anymore? They’re only in your log backups. You could tediously inch your way through restoring the log backups a few seconds at a time until you find the point at which the DROP took place, and then restore to just before that point so you can get the data back.
Or you could save a whole ton of time and use fn_dump_dblog which allows you to dump and search log records from a log backup file, without having to restore the database!
Edit 8/15/13: Beware – Jonathan just found out from a customer system that uses this extensively that every time fn_dump_dblog is called, it creates a new hidden SQLOS scheduler and up to three threads, which will not go away (and will not be reused) until a server restart. It’s a bug that the SQL team is going to fix now we’ve alerted them to it. Use with caution.
Edit 5/15/15: It’s fixed in SQL Server 2012 SP2+ and SQL Server 2014. The fix won’t be backported any earlier.
Finding a DROP in the log
Here’s an example: I’m going to create a couple of tables, populate them, take full and log backups, then drop one of the tables.
USE [master];
GO

CREATE DATABASE [FNDBLogTest];
GO

USE [FNDBLogTest];
GO

SET NOCOUNT ON;
GO

-- Create tables to play with
CREATE TABLE [ProdTable] (
    [c1] INT IDENTITY,
    [c2] DATETIME DEFAULT GETDATE (),
    [c3] CHAR (25) DEFAULT 'a');

CREATE TABLE [ProdTable2] (
    [c1] INT IDENTITY,
    [c2] DATETIME DEFAULT GETDATE (),
    [c3] CHAR (25) DEFAULT 'a');
GO

INSERT INTO [ProdTable] DEFAULT VALUES;
GO 1000

-- Take initial backups
BACKUP DATABASE [FNDBLogTest]
TO DISK = N'D:\SQLskills\FNDBLogTest_Full.bak'
WITH INIT;
GO

BACKUP LOG [FNDBLogTest]
TO DISK = N'D:\SQLskills\FNDBLogTest_Log1.bak'
WITH INIT;
GO

INSERT INTO [ProdTable2] DEFAULT VALUES;
GO 1000
Now I’ll drop the table and add some more log records:
DROP TABLE [ProdTable];
GO

INSERT INTO [ProdTable2] DEFAULT VALUES;
GO 1000
Now how can I find the point at which the table was dropped?
SELECT
    [Current LSN],
    [Operation],
    [Context],
    [Transaction ID],
    [Description]
FROM
    fn_dblog (NULL, NULL),
    (SELECT [Transaction ID] AS [tid]
     FROM fn_dblog (NULL, NULL)
     WHERE [Transaction Name] LIKE '%DROPOBJ%') [fd]
WHERE [Transaction ID] = [fd].[tid];
GO
Current LSN             Operation        Context            Transaction ID  Description
----------------------  ---------------  -----------------  --------------  --------------------------------
0000009d:0000021e:0001  LOP_BEGIN_XACT   LCX_NULL           0000:00001ff7   DROPOBJ; <snip>
0000009d:0000021e:0002  LOP_LOCK_XACT    LCX_NULL           0000:00001ff7
0000009d:0000021e:0003  LOP_LOCK_XACT    LCX_NULL           0000:00001ff7
0000009d:0000021e:0008  LOP_MODIFY_ROW   LCX_IAM            0000:00001ff7
0000009d:0000021e:0009  LOP_MODIFY_ROW   LCX_PFS            0000:00001ff7   Deallocated 0001:0000009b
0000009d:0000021e:000a  LOP_MODIFY_ROW   LCX_IAM            0000:00001ff7
0000009d:0000021e:000b  LOP_MODIFY_ROW   LCX_PFS            0000:00001ff7   Deallocated 0001:0000009c
0000009d:0000021e:000c  LOP_MODIFY_ROW   LCX_IAM            0000:00001ff7
0000009d:0000021e:000d  LOP_MODIFY_ROW   LCX_PFS            0000:00001ff7   Deallocated 0001:0000009d
0000009d:0000021e:000e  LOP_MODIFY_ROW   LCX_IAM            0000:00001ff7
0000009d:0000021e:000f  LOP_MODIFY_ROW   LCX_PFS            0000:00001ff7   Deallocated 0001:0000009e
0000009d:0000021e:0010  LOP_MODIFY_ROW   LCX_IAM            0000:00001ff7
0000009d:0000021e:0011  LOP_MODIFY_ROW   LCX_PFS            0000:00001ff7   Deallocated 0001:0000009f
0000009d:0000021e:0012  LOP_MODIFY_ROW   LCX_PFS            0000:00001ff7   Deallocated 0001:0000009a
0000009d:0000021e:0013  LOP_HOBT_DDL     LCX_NULL           0000:00001ff7   Action 3 on HoBt 0xd:100 <snip>
0000009d:0000021e:0014  LOP_DELETE_ROWS  LCX_MARK_AS_GHOST  0000:00001ff7
0000009d:0000021e:0032  LOP_LOCK_XACT    LCX_NULL           0000:00001ff7
...snip...
Cool eh?
Now I’ll take another log backup, which clears the log and contains the log records I just looked at.
BACKUP LOG [FNDBLogTest]
TO DISK = N'D:\SQLskills\FNDBLogTest_Log2.bak'
WITH INIT;
GO
Who Did the DROP?
If you want to figure out who ran the DROP command, look at the Transaction SID field for the DROP transaction’s LOP_BEGIN_XACT log record and then pass that value into the SUSER_SNAME () function. Simple!
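As a rough sketch (the LIKE filter is just one way to pick out the DROP transaction’s begin record):

SELECT SUSER_SNAME ([Transaction SID]) AS [Who]
FROM fn_dblog (NULL, NULL)
WHERE [Transaction Name] LIKE '%DROPOBJ%'
    AND [Operation] = 'LOP_BEGIN_XACT';
GO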
Restoring using STOPBEFOREMARK
The LSN of the DROP transaction’s LOP_BEGIN_XACT log record is the point I need to restore to just before.
To do that you can just plug in the LSN to the STOPBEFOREMARK option for RESTORE. The option is documented but the format is not – how helpful!!
The LSN needs to be specified with ‘0x’ in front of it (inside the ‘lsn:’ marker string), and the rest of the format is exactly as returned by fn_dblog.
The restore sequence to restore to just before the DROP is therefore:
RESTORE DATABASE [FNDBLogTest2]
FROM DISK = N'D:\SQLskills\FNDBLogTest_Full.bak'
WITH
    MOVE N'FNDBLogTest' TO N'C:\SQLskills\FNDBLogTest2.mdf',
    MOVE N'FNDBLogTest_log' TO N'C:\SQLskills\FNDBLogTest2_log.ldf',
    REPLACE, NORECOVERY;
GO

RESTORE LOG [FNDBLogTest2]
FROM DISK = N'D:\SQLskills\FNDBLogTest_Log1.bak'
WITH NORECOVERY;
GO

RESTORE LOG [FNDBLogTest2]
FROM DISK = N'D:\SQLskills\FNDBLogTest_Log2.bak'
WITH
    STOPBEFOREMARK = 'lsn:0x0000009d:0000021e:0001',
    NORECOVERY;
GO

RESTORE DATABASE [FNDBLogTest2] WITH RECOVERY;
GO
And the table is there again, right before the point it was dropped. You can see where I used the constructed LSN string in the final log restore.
Using fn_dump_dblog
So what if the log records are no longer in the log? I can use the fn_dump_dblog function.
Edit 8/15/13: Beware – Jonathan just found out from a customer system that uses this extensively that every time fn_dump_dblog is called, it creates a new hidden SQLOS scheduler and up to three threads, which will not go away (and will not be reused) until a server restart. It’s a bug that the SQL team is going to fix now we’ve alerted them to it. Use with caution.
Edit 5/15/15: It’s fixed in SQL Server 2012 SP2+ and SQL Server 2014. The fix won’t be backported any earlier.
For instance, here is how I can use it to look in the FNDBLogTest_Log2.bak backup:
SELECT COUNT (*) FROM fn_dump_dblog (
    NULL, NULL, N'DISK', 1, N'D:\SQLskills\FNDBLogTest_Log2.bak',
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT);
GO
I have to specify all the DEFAULT parameters (63 of them!) or it won’t work. The other parameters are:
- Starting LSN (usually just NULL)
- Ending LSN (again, usually just NULL)
- Type of file (DISK or TAPE)
- Backup number within the backup file (for multi-backup media sets)
- File name
So I could do the same query as I did above:
SELECT
    [Current LSN],
    [Operation],
    [Context],
    [Transaction ID],
    [Description]
FROM
    fn_dump_dblog (
        NULL, NULL, N'DISK', 1, N'D:\SQLskills\FNDBLogTest_Log2.bak',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT),
    (SELECT [Transaction ID] AS [tid]
     FROM fn_dump_dblog (
        NULL, NULL, N'DISK', 1, N'D:\SQLskills\FNDBLogTest_Log2.bak',
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)
     WHERE [Transaction Name] LIKE '%DROPOBJ%') [fd]
WHERE [Transaction ID] = [fd].[tid];
GO
Which works perfectly, but takes much longer to run.
So maybe you’re wondering what all the other parameters to fn_dump_dblog are for? They are for specifying the media families of a media set that has more than one media family.
Here’s an example using a log backup striped across two files:
BACKUP LOG [FNDBLogTest]
TO
    DISK = N'D:\SQLskills\FNDBLogTest_Log3_1.bak',
    DISK = N'D:\SQLskills\FNDBLogTest_Log3_2.bak'
WITH INIT;
GO
If I try to use fn_dump_dblog and only specify a single file, I get an error:
SELECT COUNT (*) FROM fn_dump_dblog (
    NULL, NULL, N'DISK', 1, N'D:\SQLskills\FNDBLogTest_Log3_1.bak',
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT);
GO
Msg 3132, Level 16, State 1, Line 1
The media set has 2 media families but only 1 are provided. All members must be provided.
So I have to specify both media families:
SELECT COUNT (*) FROM fn_dump_dblog (
    NULL, NULL, N'DISK', 1,
    N'D:\SQLskills\FNDBLogTest_Log3_1.bak',
    N'D:\SQLskills\FNDBLogTest_Log3_2.bak',
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
    DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT);
GO
Summary
So there you go – some more powerful tools to add to your disaster recovery arsenal.
Enjoy!
94 thoughts on “Using fn_dblog, fn_dump_dblog, and restoring with STOPBEFOREMARK to an LSN”
Great info Paul, as always. You make DBAs superhuman by providing them with magic tricks :)
Regards
Thank you. You continue to wow me with lots of secret undocumented functions like these. The fn_dump_dblog is new to me. Thanks again.
Great post, this will be a lot of fun to play with!
Thank you, thank you, thank you! Not only for the two excellent functions but for the translation of the LSN to the weird decimal string format that restore is looking for. I can recall having to hunt and peck my way to the correct format more than once.
Hi Paul,
Just for clarification, in the example you do a full backup and then log backup to D:\SQLskills\FNDBLogTest_Log1.bak. In the next code you do a full restore and then a log restore from D:\SQLskills\FNDBLogTest_Log1.bak and then from D:\SQLskills\FNDBLogTest_Log2.bak. Where in the code did you do another log backup to D:\SQLskills\FNDBLogTest_Log2.bak?
It is late at night for me. Am I missing something?
Thanks for the great articles!
You’re right – I missed copying that from my script – it’s in there now at the end of the first section. Thanks!
Absolutely fantastic information. Thank you very much for sharing your knowledge.
Great info Paul. Thanks for this Great Article..
Thanks a ton.I always find your Articles very informative.
I’ve been researching an incident that took place just after Christmas, with no luck. All Google had to tell me was to use this tool or that tool. I could find nothing that would work with a SQL log backup until today. After reading this, I had the SID of who made the change in minutes, and a user to go with that SID shortly thereafter. Thanks for the truly useful info.
Hi Paul,
This is very useful. I was testing with fn_dblog() and DBCC LOG() for one of my blog posts, but didn’t know about fn_dump_dblog().
thanks!
Awesome! Just wonderful, a well-written & useful post.
Thanks a lot. I’ll be reading…..
I’m trying to use this and I get this error:
Msg 615, Level 21, State 1, Line 4
Could not find database ID 0, name '0'. The database may be offline. Wait a few minutes and try again.
I’ve cut and pasted your code and just changed the filename to point at one of my files. The database id isn’t 0, it isn’t offline.
If I run fn_dump_dblog with all NULLs it will return the current transaction log info, but I need to see ones that are older.
Thanks for any help.
Can you post the command you’re using? And the output of select [state_desc] from sys.databases where [name] = 'yourdb'
What a difference a long weekend makes. I slept, the server slept, whatever… I now run my fn_dump_dblog query and I get data back for prior transaction logs.
I’ll give you credit for your aura doing the trick.
Thanks
Dear Brian;
I also face this error, may I know what have you done to solve this issue?
Best Regards
Hamid J. Fard
Hi,
I know this is old, but I am facing the same issue :(
It works fine with no file specified, but when the file is specified, no luck. All the DBs in this instance are [state_desc] = ONLINE.
The .trn file does exist, we have verified that. Also the MSSQL Agent service account has full rights on that folder. Not sure what else to check? What are we missing? Changing to different .trn files and different values of @Start also gives the same error (even with @Start = NULL).
The command looks like this:
declare
@P1 varchar(25)='0x0045bf1f:0005e513:0062 ',@P2 varchar(23)='0045bf1f:0005e513:0062 '
select /* top 50000 */
[Current LSN],
[operation],
[Context],
[Transaction ID],
[Transaction Name],
[Begin Time],
[End Time],
[Flag Bits],
[PartitionID],
[Page ID],
[Slot ID],
[RowLog Contents 0],
[Log Record],
0x00
from sys.fn_dump_dblog (@P1, NULL,NULL, 1,’W:\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\MSSQL\Log Backups\CS_20180215100001.trn’,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)
What exact problem are you having?
Hello Paul,
Thank you for your post about the fn_dump_dblog it was very interesting. This post makes me want to look through what other cool undocumented functions exist. I do have a question for you though. Do you know if this would work on all flavours of SQL? (Enterprise, Standard, Express, etc?)
Thank you for your time,
Kyle
Yes it does.
You could also use STOPBEFOREMARK = 'lsn:0xLSN', like:
RESTORE LOG [FNDBLogTest2]
FROM
    DISK = N'D:\SQLskills\FNDBLogTest_Log2.bak'
WITH
    STOPBEFOREMARK = 'lsn:0x0000009d:0000021e:0001',
    NORECOVERY;
GO
I would never imagine that I could learn this kind of topic by myself unless DBA’s come across your articles.
Your articles are so special and not routine like other SQL websites. You are more practical.
Best of all, you are ON to the point.
Quick question.. you stated:
“Beware – Jonathan just found out from a customer system that uses this extensively that every time fn_dump_dblog is called, it creates a new hidden SQLOS scheduler and up to three threads, which will never go away and never be reused. It’s a bug that the SQL team is going to fix now we’ve alerted them to it. Use with caution.”
When you say never goes away, do you mean that literally? It will even survive a reboot?
Thanks!
No – it will be cleared on a reboot. I’ll edit to make it clearer. Cheers
Hi Paul,
Do you know if the SQL team already fix the bug?
Yes – the KB article says there’s a hotfix. [Edit: oops, wrong thread – there’s no fix for this that I know of.]
Hi Paul.
Great article as always, and it has proven to be very useful to us, thank you. I do have one question though, with regards to viewing the transaction log and/or the backup files. Is it possible using these techniques to find out at any point in time in the past who or what had an open transaction and what they were running that may have caused a chain of blocking processes that essentially then caused a chain of timeouts on a server?
Regards
Dave
No – there’s no practical way to do that.
hi paul,
thanks for your blog, and I have a question: how can I see the logs when they were inactive and had not been backed up?
You’ll need to loop through the backupset table in msdb to get the history of which backups have been performed. Search around on Google and you’ll find scripts to do that.
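Something along these lines is a reasonable starting point (a sketch using msdb’s standard backup history tables; the database name is from the example above):

SELECT
    [bs].[database_name],
    [bs].[type],                    -- D = full, I = differential, L = log
    [bs].[backup_start_date],
    [bmf].[physical_device_name]
FROM msdb.dbo.backupset AS [bs]
JOIN msdb.dbo.backupmediafamily AS [bmf]
    ON [bs].[media_set_id] = [bmf].[media_set_id]
WHERE [bs].[database_name] = N'FNDBLogTest'
ORDER BY [bs].[backup_start_date];
GO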
Great article Paul, The info provided here gives not only about the 2 hidden functions but the log internals too. Thanks for sharing the info.
Hi Paul – Is there any drawback to using the fn_dblog function on a production system? Is it recommended?
The only drawback is that log truncation is prevented while it’s running.
Paul – is it possible to get the contents of the .trn (tlog backup) files participating in the Log Shipping configuration?
Yes, they’re log backups the same as any other – nothing special about them. fn_dump_dblog will work just fine.
Sadly, this is still present in SQL 2014 RTM with CU 6 (Build 12.0.2480.0)
Msg 4335, Level 16, State 2, Line 1
The specified STOPAT time is too early. All or part of the database is already rolled forward beyond that point.
Msg 3013, Level 16, State 1, Line 1
RESTORE LOG is terminating abnormally.
Do you have a question?
My prodtable is restored but ProdTable2 records are not restored in FNDBLogTest2
SELECT COUNT(*)
FROM [FNDBLogTest2].[dbo].[ProdTable]
SELECT COUNT(*)
FROM [FNDBLogTest2].[dbo].[ProdTable2]
——————
ProdTable2_Count
0
ProdTable_Count
1000
You’re doing something wrong. I just followed the exact steps in the blog post, using that code, in a newly installed SQL Server and it works perfectly. You’re likely not converting the LSN correctly and restoring to a point that’s too early.
Thank you Paul for your instant reply. I was using the LSN conversion but that didn’t work for me. I have used STOPBEFOREMARK = 'lsn:0x0000001a:00000263:0001' and this is working now.
But using STOPBEFOREMARK only recovers the first 1000 records. How can I recover the records which were inserted after the execution of the DROP TABLE?
Now after recovery table records are
SELECT COUNT(*)
FROM [FNDBLogTest2].[dbo].[ProdTable]
SELECT COUNT(*)
FROM [FNDBLogTest2].[dbo].[ProdTable2]
——————
ProdTable2_Count
1000
ProdTable_Count
1000
You’d have to restore forward in time…
can u plz guide how ?
Follow the steps in the blog post to get the correct LSN to use – I can’t give you any simpler guidance.
Greetings Paul,
I have two questions regarding the hidden SQLOS scheduler(s) created by the use of this function. On a server with multiple instances, will restarting the instance in question release the scheduler / thread(s) or is an actual restart of the entire server required? According to MSDN, I can identify hidden schedulers using sys.dm_os_schedulers WHERE scheduler_id>=1048576. Is there a way to determine which schedulers correspond to fn_dump_dblog using the results returned by querying sys.dm_os_schedulers?
Yes, restart releases the schedulers. You should be able to compare before and after to tell which are the extra ones created. Hidden schedulers are all listed as HIDDEN_ONLINE, regular ones are listed as VISIBLE_ONLINE. But why do you care?
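A minimal before-and-after check could look like this (run it before and after calling fn_dump_dblog and diff the results):

-- Hidden schedulers have scheduler_id >= 1048576
SELECT [scheduler_id], [status], [current_tasks_count], [current_workers_count]
FROM sys.dm_os_schedulers
WHERE [scheduler_id] >= 1048576;
GO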
Thanks for the info pertaining to the question of reboot versus restart.
I care which schedulers correspond to fn_dump_dblog because I’d like to query (via Registered Servers) which servers / instances might have lingering schedulers.
Hi Paul,
Just tested it. I find it very useful and I have watched some of your Pluralsight courses and was amazing learning experience.
Thanks and appreciate your efforts.
Any chance fn_dump_dblog can be used for multiple log backup files that aren’t striped? The last known time the user can guarantee me that the data was good was about a week ago; We use MS DPM to backup the SQL Server & it syncs every 15mins therefore backs up the transaction log as well, so I’m looking at about 800 separate log files.
We’re in a lurch staff-wise after someone leaving & I’m just trying to help these folks out as best I can, I’m definitely not a DBA. So I apologize if this is elementary to the point of insulting.
The DB (& log backups) are restoring right now, I’m looking through the file names & I haven’t deciphered the naming scheme yet.
No chance there’s a straightforward way to concatenate a bunch of log backups such as this, eh?
Any help would be greatly appreciated.
Thanks!
No – if they’re not part of the same backup stripe set, you have to run fn_dump_dblog on them individually.
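If you want to automate that, something like the following sketch could drive it from the backup history in msdb (the database name and the DROPOBJ filter are just illustrative; the 63 trailing DEFAULTs are generated with REPLICATE):

DECLARE @file nvarchar(260), @sql nvarchar(max);

DECLARE [backups] CURSOR FOR
    SELECT [bmf].[physical_device_name]
    FROM msdb.dbo.backupset AS [bs]
    JOIN msdb.dbo.backupmediafamily AS [bmf]
        ON [bs].[media_set_id] = [bmf].[media_set_id]
    WHERE [bs].[database_name] = N'FNDBLogTest'
        AND [bs].[type] = 'L'
    ORDER BY [bs].[backup_start_date];

OPEN [backups];
FETCH NEXT FROM [backups] INTO @file;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build the 68-parameter fn_dump_dblog call for this log backup file
    SET @sql = N'SELECT N''' + @file + N''' AS [Backup File], [Current LSN], [Operation], [Transaction ID] '
        + N'FROM fn_dump_dblog (NULL, NULL, N''DISK'', 1, N''' + @file + N''', '
        + REPLICATE (N'DEFAULT, ', 62) + N'DEFAULT) '
        + N'WHERE [Transaction Name] LIKE ''%DROPOBJ%'';';
    EXEC (@sql);

    FETCH NEXT FROM [backups] INTO @file;
END;

CLOSE [backups];
DEALLOCATE [backups];
GO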
Just a quick comment: on recent versions of SQL Server you’ll want to use trace flag 2537 rather than 2536.
Hi Paul,
Nice blog, but can I also see what object is dropped?
To give some context: I currently work in a 65-person development team and we keep finding one specific object dropped, but no one seems to be doing it. Because I see multiple DROPOBJ actions in the log, I would like to be able to filter on the name of the object. Is this possible?
You only get the object ID, IIRC, so you’d need to have a prior backup of the database to match object IDs against names.
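As a sketch, once you have a restored copy from before the drop, the final lookup is just (the database name and object ID here are hypothetical):

SELECT [name]
FROM [FNDBLogTest_PreDrop].sys.objects   -- a restored copy from before the drop
WHERE [object_id] = 1047192468;          -- object ID pulled from the log records
GO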
Paul can you please clarify your edit: *5/15/15: It’s fixed in SQL Server 2012 SP2+ and SQL Server 2014. The fix won’t be backported any earlier.*
Are you saying that the fn_dump_dblog hidden scheduler/thread was fixed in those versions? Because even on those versions (and later) running that still appears to generate an extra offline scheduler or two and an additional few threads – according to the DMVs (though subsequent runs don’t appear to create extra ones).
Yes, once the extra hidden scheduler/threads are created, additional ones won’t be created each time fn_dump_dblog is run.
Hello Randal,
I have just watched your Pluralsight course about transaction logs and read this article.
Is it possible to use the fn_dblog function for distributed transactions too? I mean, do both SQL Server instances involved in a distributed transaction have the same LSN? If not, is there any unique ID or something that is shared in a distributed transaction? I want to compare the data on the SQL instances by that unique ID.
Yes – you look for LOP_PREP_XACT log records and match up the [Prepare Time] fields. See https://blogs.msdn.microsoft.com/ialonso/2015/04/10/how-to-identify-whether-a-database-has-participated-in-a-cross-database-transaction-or-in-a-distributed-transaction/ for more info on making sure the LOP_PREP_XACT are for distributed rather than cross-database transactions.
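A sketch of pulling those records out of the active log on each instance (the same filter works against fn_dump_dblog for a log backup):

SELECT [Current LSN], [Transaction ID], [Prepare Time]
FROM fn_dblog (NULL, NULL)
WHERE [Operation] = 'LOP_PREP_XACT';
GO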
Hi Paul, I was running the function trying to mimic that client in your post. I ended up with over 1100 entries when running SELECT COUNT(*) FROM sys.dm_os_threads. This goes way beyond 512 (max_workers_count). How is that possible? Isn’t 512 a hard limit in the number of threads available based on my processor architecture?
There was a bug (should be fixed in most recent builds) that using fn_dump_dblog leaks a scheduler and a thread – see the text in the blog post for details.
Ah ok, I should’ve read it more thoroughly. Could the extra threads (and schedulers) produced potentially lead to thread starvation on the SQL Server?
Potentially, yes.
Ok thanks – I think I have noticed a side effect on memory requirements too, with the 2MB per thread that needs to get allocated.
Hi Paul,
Is there any way to get the [Current LSN] column from within a transaction? I don’t want to filter fn_dblog by XACT ID, because to do that I first have to run "select transaction_id from sys.dm_tran_current_transaction" and then filter fn_dblog by that value. The problem is that this transaction_id is not unique – every time the SQL Server service is restarted, transaction_id is reset to its initial value, so after a while filtering fn_dblog by XACT ID may return two rows for different transactions. My purpose is to get the [Current LSN] that is related to that transaction. How can I get it?
I could filter fn_dblog by the [Transaction ID] column, but I could not find a way to get that value from within a transaction either. CDC has this value in its tracking tables. How do they do it :)
They’re using an internal value called XDES ID, which is the same as the [Transaction ID] column. You can’t get it.
Ok Paul, thanks.
Hi Paul,
Got a quick question.
Is there a way we can retrieve TSQL statements (Like inserts) from fn_dblog/fn_dump_dblog.
For example: I write an insert query and send it to SQL Server and it fails due to a constraint violation (say, NULLs being sent to a NOT NULL column). Now, is it possible to retrieve such failed inserts somehow from the transaction log files using these functions?
Not as such – but you could figure out how to reconstruct a simple statement that does the equivalent (it’s extremely difficult to do). There are some commercial programs that try to do it – Apex SQL Log, for instance.
Fantastic post, I found this very useful for me.
I started to write a question but I think I found my answer…but I don’t like it so maybe you have something better.
I want to find the OBJECT of a DELETE in a historical Transaction Log.
I SELECT * FROM fn_dump_dblog(…) WHERE [Transaction Name] = 'DELETE'. Lots of rows but no object_id I can see.
I notice that there are rows with the same [Transaction ID] for each delete with Operation = “LOP_DELETE_ROWS”. These rows have free text in the [LOCK INFORMATION] column like this:
"HoBt 72057595186511872:ACQUIRE_LOCK_IX OBJECT: 0:1047192468:0 ;ACQUIRE_LOCK_IX PAGE: 0:1:49457945 ;ACQUIRE_LOCK_X KEY: 0:72057595186511872 (33100b20f8cc)"
I see that the “OBJECT 0:1047192468:0” has my object_id: 1047192468.
So I parsed out the object_id and found it in the sys.objects table (provided, as you mentioned, that the object has not been dropped or recreated since the delete).
Is there a better way?
Yes – use the Allocation Unit ID and then do the metadata look-ups. I show how to do that in the code here: https://www.sqlskills.com/blogs/paul/tracking-page-splits-using-the-transaction-log/
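As a sketch of that metadata look-up (the allocation unit ID is a hypothetical value taken from the log record’s AllocUnitId column):

SELECT
    OBJECT_NAME ([p].[object_id]) AS [ObjectName],
    [i].[name] AS [IndexName]
FROM sys.allocation_units AS [au]
JOIN sys.partitions AS [p]
    ON [au].[container_id] = [p].[partition_id]
JOIN sys.indexes AS [i]
    ON [i].[object_id] = [p].[object_id]
    AND [i].[index_id] = [p].[index_id]
WHERE [au].[allocation_unit_id] = 72057595186511872;
GO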
Is there any way in hell that you can use this to find out when a row was updated in a table if a rowversion column doesn’t exist, change tracking and CDC aren’t enabled, and it probably happened a couple of months ago…
There are no hidden columns? The DMVs are useless (as is the database design) since the server has been restarted many times since the change.
#missionimpossible #hopeless
Best regards,
Johan
PS. We killed the guy who built this database. May he rest in pieces.
If you have a complete series of log backups, and the row hasn’t moved to a different page, you can look through all the log backups looking for LOP_MODIFY_* log records that touch that page, and then narrow it down from there.
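A sketch of the filter against the active log (the same WHERE clause works against fn_dump_dblog for each log backup; the page ID value is hypothetical and uses the fileID:pageID hex format that fn_dblog returns):

SELECT [Current LSN], [Operation], [Transaction ID]
FROM fn_dblog (NULL, NULL)
WHERE [Operation] LIKE 'LOP_MODIFY%'
    AND [Page ID] = '0001:000000e7';
GO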
Hi Paul
Can you please help me.
https://stackoverflow.com/questions/45287643/how-to-get-back-lost-data-from-a-dropped-table-whose-backup-was-not-present
Nope – nothing you can do if you don’t have backups. When a table is dropped, the data isn’t logged at all.
Hi Paul,
Thanks for the great artical.
I am using the fn_dblog() function to read the logs and find out the DML operations, but the problem is that I am only able to read the logs up to the last checkpoint. I would like to read the logs from before the checkpoint as well. We are not creating transaction log backups here, so I can’t use the fn_dump_dblog() function.
Is it possible to read the logs by ignoring the checkpoint, without taking a log backup?
Regards,
Vijay
You can try trace flag 2537, which allows looking at inactive VLFs, but once the log has been overwritten, you can’t access it, obviously.
I’ve written a script to walk through all my log backups for a particular day to look up any row deletions. I’m using xp_dirtree to gather the path and filenames into a temp table that has an id on it so I can loop over that table with the fn_dump_dblog function. I am getting a failure message that appears to be appending C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\ to my real path which is G:\backup\… If I select out the @path variable, it gives me just the correct path and file name, so where is this C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\ coming from? Following is my code:
DECLARE
@BasePath varchar(1000)
,@Path varchar(1000)
,@FullPath varchar(2000)
,@Id int;
SET @BasePath = 'G:\backup\cmhtest\default\20180708051636-20180807051636';
IF OBJECT_ID('tempdb..#DirectoryTree') IS NOT NULL
DROP TABLE #DirectoryTree;
CREATE TABLE #DirectoryTree (
id int IDENTITY(1,1)
,fullpath varchar(2000)
,subdirectory nvarchar(512)
,depth int
,isfile bit);
ALTER TABLE #DirectoryTree
ADD CONSTRAINT PK_DirectoryTree PRIMARY KEY CLUSTERED (id);
INSERT #DirectoryTree (subdirectory,depth,isfile)
EXEC master.sys.xp_dirtree @BasePath,1,1;
UPDATE #DirectoryTree SET fullpath = @BasePath;
select * from #DirectoryTree;
GO
DELETE FROM #DirectoryTree where SubDirectory NOT LIKE ('trans_LSLMDB%');
IF OBJECT_ID('tempdb..#results') IS NOT NULL
DROP TABLE #results;
Create table #results (
path nvarchar(500),
[current lsn] numeric(25,0) ,
[transaction id] bigint,
[operation] nvarchar(25),
[transaction name] nvarchar(32),
[context] nvarchar(100),
[allocunitname] nvarchar(200),
[page id] numeric (25,0),
[slot id] tinyint,
[begin time] datetime,
[end time] datetime,
[number of locks] int,
[lock Information] nvarchar(500)
)
declare @id int = (select min(id) from #DirectoryTree);
declare @path nvarchar(2000) = (select '''' + fullpath + '\' + subdirectory + '''' from #DirectoryTree where @id=id);
select @path
WHILE @id@id);
END
GO
I figured out where that was coming from. The database server’s default backup location had been left as the C drive, whereas the actual backup job puts the backups on the G drive. I changed that setting on the server and tweaked a few other things and have it working now. Thank you for taking a look at this.
Hi Paul, I’m currently revisiting your ‘Advanced Corruption Recovery Techniques’ Pluralsight course and in the ‘Advanced Restore Techniques’ section, when you restore with STOPBEFOREMARK, you are converting the LSN. However I then remembered this article and was curious as to why you are doing that conversion when using the LSN lifted straight from fn_dblog() and prefixed with 0x works here.
Regards,
Gordon.
P.S Looking forward to seeing the SQLskills guys at sqlbits in March, although I think I’ll be doing Erin’s ‘Query Store’ session ;-)
Because when I recorded the course and originally wrote the article, I didn’t know about that option. The article’s been updated since then.
Heh… lordy. I kept looking for the decimal conversion in your article based on another article I just happened to run across. It cited your article above and I was going nuts trying to find the decimal (BIGINT, really) conversion that was being talked about. You apparently made the change after that other fellow wrote his article.
BTW, I agree with the others… this is the most comprehensive and demonstrative article I’ve ever seen on the subject of PiT restores to the mark. Thanks for the time you took to put it and the code together, Paul.
If the database is in SIMPLE recovery model, Is it possible to read full/differential backups to find out the details about who issued the DROP command against a table ?
Current log file did not return anything.
Thanks,
Krishna
Usually no – as there’s only enough log in a data backup to make the database crash recovered after restore. If you’re lucky and the drop is in that portion of the log, then yes.
Are we able to get the transaction information from a TDE enabled database?
Shouldn’t make any difference.
Hi Paul, after querying sys.fn_dump_dblog (on SQL Server 2016 Enterprise SP2 + CU12) and experiencing a drop in the client (SSMS v18.8) to server connection whilst the query was running, the session remained open on the server (for several days!) in a suspended state and with a wait type of ‘DUMP_LOG_COORDINATOR’. It was not possible to kill the session and SQL Server had to be restarted.
It has been possible to reproduce the behaviour in a test environment by simply cancelling the query at the point when the results are starting to be displayed by SSMS. The query has a filter on [Transaction ID], if this filter is removed then it doesn’t appear to be possible to reproduce the behaviour.
I have searched the web exhaustively but found no mention of others encountering such an issue.
I was wondering if this is something that you’ve encountered and if so whether you identified any factors which led to this behaviour?
Thanks
Chris
Hi Chris – there was a similar bug in earlier versions, but it was fixed a long time ago. So it’s a new bug but unsure whether it’s in SSMS or SQL Server. If you have a Premier agreement, you could raise it with them, and if not, shoot me an email with the details and I’ll mess around with it and tell the dev team. Thanks
Thanks Paul, your kind offer is appreciated.
I will recommend internally that this be raised with Microsoft in the first instance as the Production customer database whose specific transaction log backup file we are experiencing the problem with is configured with TDE. As you can imagine there are a couple of hurdles there with regards to clearance before even getting to the point of starting to investigate the issue externally.
In the meantime I’ll attempt to create a shareable reproduction of the issue.
If anything is found then I’ll reply to these comments with a summary.
Thanks again for your time, Chris.
Hello Paul,
When executing the fn_dblog function, the returned information includes a RowFlags field. What is the meaning of this field? What values are available? Thank you very much.
There’s no public documentation on those values – thanks
thanks.
May one safely assume that fn_dblog() returns entries in the order in which they were recorded in the log? It would be great if it did, to save CPU cycles spent ordering them in the query.
Yes it does.