Right up front, we’re talking SQL Agent job failure. I have not failed at my job. At least not yet.
After 10 successful days in dev and 2 in production, I had my first failure with IndexOptimize. It's good to fail and learn your limits; I just wish the failure had been in dev rather than production, but you don't always get to pick your failures. 502 indexes into the job, I hit a deadlock.
Transaction (Process ID 81) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I pulled up the error from CommandLog and noted the index it left off on. What surprised me a bit was the timing. The IndexOptimize job had run much longer than I had previously experienced and had moved outside of my low-activity window. I ran the query below and found that three index REORGANIZEs were taking up nearly the entire run time of the job. They were also all on one table.
SELECT ObjectName
     , IndexName
     , Command
     , DATEDIFF(mi, StartTime, EndTime) AS 'mi'
FROM dbo.CommandLog
WHERE StartTime >= '2015-03-10 17:00:00'
ORDER BY DATEDIFF(mi, StartTime, EndTime) DESC
I'm getting the feeling that I will need to stop treating my 371 thousand indexes equally. I'd like to have separate jobs for indexes with a large number of pages, but I don't think that is possible in Ola's script. The script skips indexes with a page count below what @PageCountLevel is set to; I'd like an upper and a lower page count parameter, and maybe I'll make that suggestion. For now I'll be separating out the problem table using the @Indexes parameter. From Ola's documentation:
ALL_INDEXES, -Db1.Schema1.Tbl1 : All indexes, except indexes on the object Schema1.Tbl1 in the database Db1
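A split along those lines could be sketched as two Agent job steps: one that excludes the hot table, and one that handles it alone in its own window. The database, schema, and table names below are placeholders, and the parameter values are illustrative examples, not recommendations.

```sql
-- Step 1: optimize everything except the problem table
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @Indexes = 'ALL_INDEXES, -Db1.Schema1.Tbl1',  -- placeholder object name
    @LogToTable = 'Y';                            -- keep writing to CommandLog

-- Step 2: the problem table only, scheduled for its own low-activity window
EXECUTE dbo.IndexOptimize
    @Databases = 'Db1',                           -- placeholder database name
    @Indexes = 'Db1.Schema1.Tbl1',
    @TimeLimit = 3600,                            -- don't start new commands after an hour
    @LogToTable = 'Y';
```

The @TimeLimit parameter is one way to keep the second job from spilling past the maintenance window the way the original run did, since IndexOptimize won't start new commands once the limit is reached.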