I work for a company that has a pretty large SQL Server database, about 4 terabytes in size. We had been using Ola Hallengren’s index maintenance script for a while, but eventually the script was taking over 24 hours to run, and that became a problem. That’s not to say there is anything wrong with Ola’s script; it really is an excellent solution. However, it only defragments indexes serially, meaning one at a time. The same is true for SQL Server maintenance plans. That works fine on smaller databases, but it no longer works well when you have a huge database.
To solve this, I created multiple jobs using his script and tried to split the indexes between the jobs so they were somewhat balanced. That became a big hassle to maintain. Every once in a while I would have to go in and tweak it, and some days it would work fine while other days it would not. So, I set out to see if anyone had created a solution to defragment indexes in parallel, meaning more than one at a time. I could not find one on the web, so I built my own.
My solution creates a SQL Agent controller job and multiple worker jobs. The controller job builds a queue of indexes to be defragmented based on runtime parameters passed to it. Once the queue is created, it spawns the worker jobs, which take indexes off the queue until the queue is depleted or the termination window has expired.
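To make the mechanics concrete, here is a minimal sketch of that queue pattern, not the actual code from the solution: the object names (dbo.IndexQueue, PickedUpAt, and so on) are made up for illustration, and the real worker jobs do more (logging, time-window checks, and choosing between rebuild and reorganize). The key idea is that each worker claims one row at a time with UPDLOCK and READPAST so the workers never block each other or grab the same index.

```sql
-- Queue table the controller fills with the indexes to defragment
-- (illustrative schema only).
CREATE TABLE dbo.IndexQueue
(
    QueueID          int IDENTITY(1,1) PRIMARY KEY,
    SchemaName       sysname NOT NULL,
    TableName        sysname NOT NULL,
    IndexName        sysname NOT NULL,
    FragmentationPct decimal(5,2) NOT NULL,
    PickedUpAt       datetime2 NULL,
    CompletedAt      datetime2 NULL
);
GO

-- Inside each worker job: claim the next unclaimed index and defragment it.
-- UPDLOCK + READPAST lets several workers pull from the same queue without
-- blocking each other or claiming the same row twice.
DECLARE @claimed TABLE (QueueID int, SchemaName sysname, TableName sysname, IndexName sysname);
DECLARE @sql nvarchar(max);

WITH nextIndex AS
(
    SELECT TOP (1) *
    FROM   dbo.IndexQueue WITH (UPDLOCK, READPAST)
    WHERE  PickedUpAt IS NULL
    ORDER  BY QueueID
)
UPDATE nextIndex
SET    PickedUpAt = SYSDATETIME()
OUTPUT inserted.QueueID, inserted.SchemaName, inserted.TableName, inserted.IndexName
INTO   @claimed;

SELECT @sql = N'ALTER INDEX ' + QUOTENAME(IndexName)
            + N' ON ' + QUOTENAME(SchemaName) + N'.' + QUOTENAME(TableName)
            + N' REBUILD;'
FROM   @claimed;

IF @sql IS NOT NULL
BEGIN
    EXEC sys.sp_executesql @sql;

    UPDATE q
    SET    CompletedAt = SYSDATETIME()
    FROM   dbo.IndexQueue AS q
           JOIN @claimed AS c ON c.QueueID = q.QueueID;
END
GO
```

In a real worker, that claim-and-rebuild step just runs in a loop until the queue is empty or the maintenance window closes, which is essentially what the solution’s worker jobs do.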
The table below shows the performance gains we were able to get with our index defragmentation. Using one job with Ola’s script took over 24 hours. When we broke it out into four jobs, it still took 6-8 hours. With my script and six worker jobs, the time went down to 3 hours for almost all of the indexes. There are two really big indexes that individually take about 7 and 4 hours because the data in those tables changes a lot daily. Finally, I realized that most of the indexes were set to a fill factor of 100%, which meant I constantly had high fragmentation on those tables. So, I rebuilt all the indexes at an 85% fill factor, and the defragmentation time went down to about an hour per day, with those two indexes still being the outliers. I feel like that is a pretty good result.
| Script | Jobs | Duration |
|---|---|---|
| Ola Hallengren’s script | 1 | Over 24 hours |
| Ola Hallengren’s script | 4 | 6-8 hours |
| My script | 6 | 3 hours, with 1 index at 7 hours and 1 at 4 hours |
| My script with fill factor at 85% | 6 | Monday: 1 hour (30 minutes of that is to build the queue); other days: 20 minutes (15 minutes of that is to build the queue) |
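If you want to try the same fill factor change on your own indexes, a one-off rebuild like the following is all it takes; the table and index names here are placeholders, and 85% is simply the value that worked for our workload, not a universal recommendation.

```sql
-- Rebuild a single index with a lower fill factor (placeholder object names).
-- FILLFACTOR = 85 leaves roughly 15% free space on each leaf page, which cuts
-- down on page splits (and therefore fragmentation) at the cost of a larger index.
ALTER INDEX IX_Orders_CustomerID
    ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 85, ONLINE = ON);  -- ONLINE = ON requires an edition that supports online index rebuilds
```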
The maintenance solution is released under the MIT License, so feel free to use it as you wish; just please let me know if you have found it helpful. You can click the button below to download the solution. The second link is to the source code repository.
https://bitbucket.org/davinstuder/index-maintenance/
Release Notes
1.3.1 – Added a check to see if the index can be defragmented online or not. Added start and end logging to the controller stored procedure. Fixed an issue with error logging not saving the error details.
1.3.0 – Worker jobs are no longer statically created. They are spawned at runtime dynamically by the controller job. Cleaned up some SQL to standardize on formatting.
1.2.0 – Added history cleanup
1.0.2 – Removed references to deprecated master.dbo.UpdateIndexesQueueHistory table
1.0.1 – Edited query to find highest compatibility level for server
1.0.0 – Initial release
I’m looking to do something similar, but with just two jobs: one job for one particular table with a PK that takes 6+ hours, and another job for all the other tables. Was there any noticeable performance impact from running parallel index maintenance? My customer is a 24/7 operation, although there is a decrease in overnight workload.
TIA
My index maintenance job keeps a history of the defragmentation time. The first time, it will just order the indexes by fragmentation level, but subsequent runs take into account the historical time it took to defragment them, placing the longest-running indexes at the top of the queue. So, if you run it with two worker jobs, the longest-running table will get grabbed by the first job and the other tables will get picked up by the second job. We did not notice huge performance impacts, but our servers are pretty beefy and we kept the number of worker jobs fairly low too. If you have an index that takes 6+ hours, I would also look into your fill factor. Is a lot of data in that table being written on a regular basis with a fill factor of 100%? If so, you might get better performance if you change the fill factor to something like 80%. It will increase the size of the index, but you will also have fewer page splits, which should bring down maintenance time.
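If you want to check whether that index is a candidate, a query along these lines will show the current fill factor and fragmentation for a table’s indexes; dbo.YourBigTable is a placeholder, and note that sys.indexes reports a fill_factor of 0 when the index is using the server default of 100%.

```sql
-- Current fill factor and fragmentation for the indexes on one table
-- (dbo.YourBigTable is a placeholder; fill_factor = 0 means the default, i.e. 100%).
SELECT  i.name AS index_name,
        i.fill_factor,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.YourBigTable'),
                                       NULL, NULL, 'LIMITED') AS ps
        JOIN sys.indexes AS i
          ON i.object_id = ps.object_id
         AND i.index_id  = ps.index_id
WHERE   ps.index_id > 0  -- skip the heap, if there is one
ORDER BY ps.avg_fragmentation_in_percent DESC;
```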
Is it possible to reindex based on the scheme?
At the moment, no, but I could look into it. Please raise an issue using the link below.
https://bitbucket.org/davinstuder/index-maintenance/issues
Hi Davin,
The zip file download link refers to an older version of the solution. Please update it.
Thanks.
Thanks for the heads up. I’ve updated the linked .zip file.