There’s a cool mechanism in the buffer pool called disfavoring, which sometimes kicks in to prevent pages from other databases being flushed from the buffer pool when a large operation occurs. When a buffer is no longer required, it can be marked as the least recently used of all the buffers in the buffer pool, meaning it will be used next when the buffer pool needs a new buffer to read a page from disk. This means that some operations reuse a small number of buffers repeatedly instead of filling the buffer pool with their pages.
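If you want to watch which databases’ pages are occupying the buffer pool while experimenting with this, here’s a sketch of a simple aggregation over sys.dm_os_buffer_descriptors (each buffer holds one 8KB data file page):

```sql
-- Rough per-database breakdown of buffer pool usage (one buffer = one 8KB page).
-- Run it before and after a large operation to see which database's pages were displaced.
SELECT
    DB_NAME (database_id) AS [Database],
    COUNT (*) AS [Buffers],
    COUNT (*) * 8 / 1024 AS [BufferPoolMB]
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY [BufferPoolMB] DESC;
```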
The places that disfavor buffers when they’re done with them include:
- During redo of log records, if a page already has a log record’s effects on it and so doesn’t need to be changed
- During bulk load into a heap
- Large table scans, where the table is more than 10% of the buffer pool size, which disfavor pages instead of forcing pages from other databases to be flushed from memory
And backups don’t use the buffer pool at all, so they’re not going to cause buffer pool flushing either.
You can watch disfavoring happening for table scans using the leaf_page_disfavored Extended Event.
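For example, here’s a minimal sketch of an Extended Events session that captures that event to a ring buffer; the session name and the actions are just illustrative choices, so adjust to taste:

```sql
-- A minimal sketch: capture leaf_page_disfavored events to a ring buffer.
-- The session name and added actions are arbitrary choices for illustration.
CREATE EVENT SESSION [DisfavoredPages] ON SERVER
ADD EVENT sqlserver.leaf_page_disfavored
    (ACTION (sqlserver.database_id, sqlserver.sql_text))
ADD TARGET package0.ring_buffer;
GO

ALTER EVENT SESSION [DisfavoredPages] ON SERVER STATE = START;
GO
```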
DBCC CHECKDB can cause interesting buffer pool behavior, as it doesn’t quite use this method of disfavoring. It does disfavor the pages it reads, but it doesn’t reuse those disfavored pages while the operation is still executing. For instance, let’s say you have SQL Server set to use a maximum of 5GB of memory, and have five databases that are 1.5GB each. If you cause three of the databases to be fully read into memory, the remaining buffer pool space is only a few hundred MB. If you then run DBCC CHECKDB on the fourth database using WITH TABLOCK, it will read all the pages from that database into memory and push pages from one of the other databases out. Those pages from the fourth database are all marked as disfavored though, so if you then run DBCC CHECKDB on the fifth database using WITH TABLOCK, it will push out all the pages from the prior DBCC CHECKDB and use that space.
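If you want to try reproducing this, here’s a rough sketch of the scenario; the database names DB1 through DB5 are hypothetical placeholders, and 5GB is just the max server memory figure from the example:

```sql
-- Cap the buffer pool at roughly 5GB for the experiment.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'max server memory (MB)', 5120;
RECONFIGURE;
GO

-- Fill the buffer pool with pages from three of the (hypothetical) 1.5GB databases,
-- e.g. by scanning their large tables, then check the per-database breakdown:
SELECT DB_NAME (database_id) AS [Database], COUNT (*) * 8 / 1024 AS [BufferPoolMB]
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id;
GO

-- Consistency check the fourth database without a snapshot: its pages are read
-- into the buffer pool (pushing other databases' pages out) but marked disfavored.
DBCC CHECKDB (N'DB4') WITH TABLOCK;
GO

-- A second check reuses the disfavored space left behind by the first one.
DBCC CHECKDB (N'DB5') WITH TABLOCK;
GO
```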
Furthermore, if you allow DBCC CHECKDB to create a database snapshot, it will read all the pages in the context of the snapshot and then drop the snapshot. This can cause more of the rest of the buffer pool to be pushed out, potentially even pages from the database being consistency-checked!
And Resource Governor doesn’t help here at all, as that just controls the query execution memory grant, not the amount of buffer pool space used when reading pages.
11 thoughts on “Buffer pool disfavoring”
I’m pretty sure it was originally put in specifically for the large scan case (15-16 year old design discussions now attempting to be retrieved from my brain’s archival storage), so it will be rather interesting if you can’t reproduce it. There has to be some mechanism for keeping scans from polluting the buffer pool.
Now there’s a blast from the past – good to hear from you Hal! Indeed – I can see the disfavoring happening but I don’t see the expected behavior in the contents of the buffer pool. I’m sure there’s something I’m missing as I’ve repro’d the behavior before when I was back on the team, but I can’t remember exactly what I did. I have a PFE buddy checking it out (if only I had code access still!) and I’ll repro as soon as I figure it out. I have a simple parallel clustered index scan in 2012 SP1 that will happily flush the bpool every time. Cheers.
This could be useful in an OLTP environment that has one-off reports that run occasionally. I’m guessing there’s no way to manually force disfavoring?
No way to force it, but it should happen naturally in the right circumstances. I’ll blog as soon as I figure it out.
Hi Paul,
On the one hand DBCC CHECKDB clearly disfavors the buffer pool, but on the other the page life expectancy counter drops almost to zero when CHECKDB is run against a large database. Shouldn’t the counter be barely affected in the case of BP disfavoring?
Thanks,
You would think so, but page life expectancy is a measure of pressure on the free list in the buffer pool, so it responds to the immediate pressure, and it also has to take into account the pages being read into memory during creation of the database snapshot.
Hi Paul,
Thank you for sharing your great work!
As mentioned in this article – “Large table scans that are more than 10% of the buffer pool size will disfavor pages instead of forcing pages from other databases to be flushed from memory.” Is there any way (or hint) to stop disfavoring?
I am asking because one of my client’s OLTP servers is used for reporting purposes (fetching 100+ GB of data) during non-business hours. At that point in time there is hardly any resource-intensive operation (not even a maintenance job) running against the other, quite small, databases on the instance, so we want the buffer pool to be fully utilized and deliver better performance.
Please correct me if I misread something.
Thank you for your time :)
Nope – no way to alter its behavior.
You had me at ‘that that’ :)
There’s cool mechanism that that
Hi Paul,
Question for you about “PS It can look like DBCC CHECKDB isn’t using any pages in the buffer pool but it does, via its hidden snapshot, but then it removes them all once the hidden snapshot is removed.”…
I was researching a drop in buffer pool usage and PLE right after my DBCC CHECKDBs. You explained the drop in PLE above, but it’s the removal of pages from the buffer pool that concerns me. I’ve been testing this on a test server with 8GB RAM allocated to SQL. I ran queries in three databases to fill the buffer pool fairly equally between the three databases, as verified by looking at sys.dm_os_buffer_descriptors. One of the databases had 1.5GB of the buffer pool. When I ran a DBCC CHECKDB on this database, it pushed out pages from the other two databases and used 6GB of the buffer pool; then, when the DBCC completed, buffer pool usage for this database went down to 59MB.
Is what I’m seeing that the original 1.5GB of buffer pool associated with this database is being replaced by pages from the snapshot, and then when the snapshot is removed all those pages are removed too, losing what was originally in the buffer pool?
I then tried using Resource Governor to see if I could limit the memory used by the DBCC CHECKDB, hoping that by giving it a much smaller chunk of memory, SQL would leave the rest of the buffer pool intact with pages from this and other databases. I verified that my session running the CHECKDB was included in the resource pool, and the granted memory was indeed much lower, but the exact same thing happened with the CHECKDB – my test database went from using 1.5GB to 6GB of buffer pool, then dropped to <100MB when it was done.
When was page disfavoring added to SQL Server? My test server is SQL 2008 R2 and I’ve seen the same phenomenon happen in SQL 2012. Shouldn’t Resource Governor have had an effect on buffer pool usage by CHECKDB?
Thanks!
Hi Mike – what you’re seeing is that DBCC CHECKDB disfavors the pages that it uses, but those disfavored pages won’t be reused until after the DBCC CHECKDB operation finishes, so a DBCC CHECKDB can still cause pages from another database to be pushed out of the buffer pool. I’ve added a paragraph that explains.