Query SQL backup history

The Job Activity Monitor and the View History option in SQL Server Management Studio are just fine, but sometimes you want to run a query to look at all the successful backups you’ve run.

Don’t be dreary, get to the query.

SELECT s.database_name,
CAST(CAST(s.backup_size / 1000000 AS INT) AS VARCHAR(14)) + ' ' + 'MB' AS bkSize,
CAST(DATEDIFF(second, s.backup_start_date,
s.backup_finish_date) AS VARCHAR(4)) + ' ' + 'Seconds' AS TimeTaken,
s.backup_start_date,
s.backup_finish_date,
CAST(s.first_lsn AS VARCHAR(50)) AS first_lsn,
CAST(s.last_lsn AS VARCHAR(50)) AS last_lsn,
CASE s.[type]
WHEN 'D' THEN 'Full'
WHEN 'I' THEN 'Differential'
WHEN 'L' THEN 'Transaction Log'
END AS BackupType,
m.physical_device_name
FROM msdb.dbo.backupset s
INNER JOIN msdb.dbo.backupmediafamily m ON s.media_set_id = m.media_set_id
WHERE s.database_name = 'ImportantDB' /*Remove this line for all the databases*/
AND s.backup_start_date > DATEADD(d, -1, GETDATE()) /*How far do you want to go back? This returns 1 day of data*/
ORDER BY s.backup_start_date DESC, s.backup_finish_date;

We have two tables here, backupset and backupmediafamily. The backupset table does most of the heavy lifting, giving you the pertinent info like the database name, how big the backup is, how long it took, when the backup started, what kind of backup it is and then some.

msdb.dbo.backupset: “Contains a row for each backup set. A backup set contains the backup from a single, successful backup operation. RESTORE, RESTORE FILELISTONLY, RESTORE HEADERONLY, and RESTORE VERIFYONLY statements operate on a single backup set within the media set on the specified backup device or devices.”

And we only bring in backupmediafamily for the physical_device_name field, a.k.a. the file path and file name of your backup.

msdb.dbo.backupmediafamily: “Contains one row for each media family. If a media family resides in a mirrored media set, the family has a separate row for each mirror in the media set.”

You want to see some results? Go run this script you pulled off the internet on your nearest production server. What could possibly go wrong? But really, if you are a good and curious DBA/DBA in training/DBA-curious dev/etc., start with just looking at the msdb.dbo.backupset table.
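If you’d rather start read-only, a sketch like this just peeks at the raw rows (it only reads msdb, so it is safe to run anywhere):

```sql
-- Read-only look at the most recent backup history rows in msdb
SELECT TOP (20)
    database_name,
    [type],              -- D = Full, I = Differential, L = Log
    backup_start_date,
    backup_finish_date,
    backup_size
FROM msdb.dbo.backupset
ORDER BY backup_start_date DESC;
```

Once the raw columns make sense, the bigger query above is just formatting on top of this.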


SQL Server 2014, get off my lawn

Do DBAs fear change? I sure hope not, because it’s coming. Be it DevOps or cloud services or your deep love for SQL Server 2000: platforms, services and your way of life will shift. I like installing new versions of SQL Server and checking out the changes and new features. I can’t say I always embrace the change, and SQL Server 2014 is a mix of the comfortable familiar and the new (and maybe not for me yet) frontier. SQL Server 2014 has its feet on the edge of my lawn and I’m trying to figure out if I should start yelling at it yet.

Last week I had my first opportunity to install SQL Server 2014 on a new-to-me OS, Windows Server 2012. The SQL installation process was very familiar and changed little from 2012. It still started at the Installation Center and offered a near-identical set of options on the Features page, the difference being the missing SQL Server Data Tools. In 2012 you had the option of installing the SQL Server development environment, including what was once called BIDS, Business Intelligence Development Studio. The 2014 install doesn’t have this option, as it has been split out and must be downloaded separately as Microsoft SQL Server Data Tools – Business Intelligence for Visual Studio 2013.

Miss you? I didn’t even realize you were gone!

The rest of the installation was very familiar with no surprises. There was this nice reminder on setting permissions for the default backup directory, just in case you forgot to do this. I don’t recall seeing this in 2012.

An actual permissions check would be better than a blanket warning.

Backups in 2014 have some new enhancements for your backup location, as well as the option of letting SQL Server manage your backups. Taking some screenshots from the good old maintenance plans, you can see on the Destination tab that along with the Folder (local or network) you now have the option of an Azure storage container.

Where are your backups going to go?

Managing backups can be anywhere from a pretty trivial to an overwhelming job, depending on scale. My experiences with backups have been more on the side of trivial. Not to make light of the importance of backups, it just hasn’t taken much of my time. Those with a more difficult job of managing backups may be interested in turning over responsibility to SQL Server. With the new SQL Server Managed Backup to Windows Azure, SQL Server manages and automates the backups to Azure Blob storage. With this setup the DBA is only required to provide the retention period settings and the storage location, with SQL Server determining the frequency based on the workload. I can see the appeal of turning this over to SQL Server to manage.
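If you do want to hand control over, my understanding is that in SQL Server 2014 this is configured per database with the smart_admin stored procedures in msdb. A sketch, where the storage URL, credential name and retention period are all placeholder values, not recommendations:

```sql
-- Sketch: enable SQL Server Managed Backup to Windows Azure for one database
-- (the storage URL, credential name and retention days below are placeholders)
USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
    @database_name   = 'ImportantDB',
    @enable_backup   = 1,
    @storage_url     = 'https://myaccount.blob.core.windows.net/',
    @retention_days  = 30,
    @credential_name = 'MyAzureCredential';
```

The credential has to exist first (CREATE CREDENTIAL with your storage account name and access key) before SQL Server can write to the container.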

The criteria used to determine backup frequency is very interesting.

Full Database Backup: SQL Server Managed Backup to Windows Azure agent schedules a full database backup if any of the following is true.

  • A database is SQL Server Managed Backup to Windows Azure enabled for the first time, or when SQL Server Managed Backup to Windows Azure is enabled with default settings at the instance level.
  • The log growth since the last full database backup is equal to or larger than 1 GB.
  • The maximum time interval of one week has passed since the last full database backup. TP – 1 week! None of my databases have a full backup that infrequently, especially when differentials aren’t supported. See below for support.
  • The log chain is broken. SQL Server Managed Backup to Windows Azure periodically checks to see whether the log chain is intact by comparing the first and last LSNs of the backup files. If there is a break in the log chain for any reason, SQL Server Managed Backup to Windows Azure schedules a full database backup. The most common reason for log chain breaks is probably a backup command issued using Transact-SQL or
    through the Backup task in SQL Server Management Studio. Other common scenarios include accidental deletion of the backup log files, or accidental overwrites of backups. TP – This is a pretty sweet feature. I can see other people having cases where someone (maybe a developer, just maybe) with too many permissions takes a backup. That’s never happened to me, just saying.

Transaction Log Backup: SQL Server Managed Backup to Windows Azure schedules a log backup if any of the following is true:

  • There is no log backup history that can be found. This is usually true when SQL Server Managed Backup to Windows Azure is enabled for the first time. TP – Starting things off right, I like that.
  • The transaction log space used is 5 MB or larger. TP – How many changes were made in your database before 5 MB were written to the log? Is it OK to lose those?
  • The maximum time interval of 2 hours since the last log backup is reached. TP – Longer interval than most of my t-log backups, but at least it isn’t a week.
  • Any time the transaction log backup is lagging behind a full database backup. The goal is to keep the log chain ahead of full backup. TP – We all need goals.


Speaking of support, here is what SQL Server Managed Backup to Windows Azure does and does not cover:

  • SQL Server Managed Backup to Windows Azure agent supports database backups only: Full and Log backups. File backup automation is not supported.
  • System Databases are not supported. TP – SQL Server can’t manage everything, I guess you still have a job.
  • Windows Azure Blob Storage service is the only supported backup storage option. Backups to disk or tape are not supported. TP – I guess that’s why they call it SQL Server Managed Backup to Windows Azure. Soon to be called SSMBWA, that rolls right off the tongue.

This is a change I’m not quite ready to embrace. I like control of my backups. I want to determine the frequency based on my discussions with the data owners, not on SQL Server’s workload analysis. This feature isn’t allowed on my lawn.

You know who doesn’t fear change, but actively seeks it out and promotes it? Mercy Corps, be the change and help in needed places like Nepal.

Are you sure you are backing everything up?

Your databases are backed up with full, diff and t-log jobs using slick scripts from some of the several smart and helpful SQL superheroes flying around the internet. They are also copied over to tape or other storage and stored offsite, ready for the next DR drill or real-life disaster. Your computer (work or home, maybe that is the same thing) has automated backup software to keep all your cat pictures safe. You’re a pro, you’ve got this.

For all of us using #sqlnewblogger, are we backing up our newest work? I’m hosted at WordPress and am relying on them to keep my data safe, available and recoverable. I’m not backing this site up and would be disappointed if it disappeared someday, though I’m probably the only one who would notice. There are several WordPress plugins that handle backups, but I have not yet spent the time and energy to set those up. I do write all my posts outside of WP before I publish, so I have a half-baked backup. It’s better than none at all, but not great. Kinda like unbaked cookie dough. It tastes delicious, but might give you salmonella. Are you taking that risk? I am, with both the lack of backups and the cookie dough eating.

This isn’t my only site. I have another site with a MySQL backend. I’ve put a lot of work into the data design, entry and maintenance, not so much work on the web design part yet. I do ad-hoc backups of the site and the database. Losing some of my SQL writings would be unfortunate. Losing my data on beer drinking would be horrific.

Why yes I can do table level restores

Table level restores are available in SQL Server 2008 R2 Standard…under the right circumstances.

Are you under these circumstances?

For better or for worse you probably aren’t under those circumstances, so sorry or good for you, take your pick. MS Dynamics NAV running on SQL Server has a feature to restore a subset of tables without having to restore the entire database. This is especially great when your database has 160 thousand plus tables and you only need to restore some of them. My implementation of Dynamics NAV has the database split up into companies, and each company has 1,600+ tables. End users come to me fairly often with requests to recover data in a company or refresh a testing environment with the latest information from production. I could restore production to a new location and script out the 1,600 tables, but frankly that sounds like a pain. Luckily for me, Dynamics NAV has the option to backup and restore by company. I can delete and restore those 1,600+ tables with a few mouse clicks and in a fraction of the time a full restore would take.

While connected to your backup target with the Classic Client, select Tools > Backup. Select the Company you want to back up.

Are you absolutely sure? Cause it is going to ask you.

Connect to your restore target, select File > Company > Delete. Click Yes an appropriate number of times when you are really, really sure this is in fact the company you want to delete.

Are you absolutely sure? Cause it is going to ask you.
Are you absolutely sure? Cause it is going to ask you.

Click again on Tools > Restore. Browse to your backup and so on and so forth. Much faster than the full restore.

My second method of table level restores takes a little more of a custom setup. Sometimes, when you are in a rush to get an application into production, the application periodically and undesirably deletes all of the data in a table. You’ve all had this problem, right? I didn’t feel like restoring the whole database just to get back a few hundred rows. I also didn’t have the budget/authority/etc. to get a sweet 3rd party backup solution. So here comes the never-before-seen combination of an INSERT script and SQL Agent.

Bill, “an INSERT script?” Ted, “and SQL Agent?” B&T, “Whoa”

In a utility database, create a table with all the fields from your at-risk table, plus (in my case) a company identifier, and a datestamp column with a GETDATE() default. This is where right-clicking the table in SSMS and selecting Script Table as > CREATE To comes in really handy. Name it something catchy like dbo.WhyInTheHellAmIConstantlyLossingDataFromThisStupidTable, or if you’re not that creative just use a similar name to your problem table(s).
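As a sketch, the snapshot table might look something like this; the Company column type is an assumption, and the real column list should come from scripting your actual problem table:

```sql
-- Sketch of the snapshot table; Company's type/size is an assumption,
-- and the middle of the column list comes from Script Table as > CREATE To
CREATE TABLE [SQLAdministration].[dbo].[WhyInTheHellAmIConstantlyLossingDataFromThisStupidTable]
(
    Company   VARCHAR(30) NOT NULL,            -- company or other customer identifier
    -- ...all the other fields from your problem table go here...
    DateStamp DATETIME NOT NULL DEFAULT GETDATE()  -- when this snapshot row was captured
);
```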

INSERT INTO [SQLAdministration].[dbo].[WhyInTheHellAmIConstantlyLossingDataFromThisStupidTable]
(company or other customer field
, all other fields from your problem table)
SELECT 'Company$ProblemTable' AS Company
, all of the fields in this table
FROM [ProductionDatabase].[dbo].[Company$ProblemTable]

Then set up a SQL Agent job to load the full contents of that table from production every X time units. Mine runs every 10 minutes. Don’t forget to include a trim step in your job; you don’t want these rows piling up forever. Or maybe you do. I don’t know. Maybe you like to impress others with the size of your tables.
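The trim step can be as simple as this sketch; the 7-day retention window is my assumption, so keep enough history to cover how long a deletion might go unnoticed in your shop:

```sql
-- Trim step: delete snapshot rows older than the retention window (7 days here)
DELETE FROM [SQLAdministration].[dbo].[WhyInTheHellAmIConstantlyLossingDataFromThisStupidTable]
WHERE DateStamp < DATEADD(d, -7, GETDATE());
```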

Now when users come running to you like their data is on fire (remember, deleted data is not the same as burning data, trust me on this one) you can smoothly tell them that you can get their data back and save the day like the DBA superhero you are. That cape is a good look for you. Tights? Maybe, maybe not.
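The save-the-day part is a sketch along these lines: find the last snapshot taken before the data disappeared, then insert it back. The field lists are placeholders just like in the capture script, and the timestamp is made up for illustration:

```sql
-- Step 1: see which snapshot times you have to choose from
SELECT DISTINCT DateStamp
FROM [SQLAdministration].[dbo].[WhyInTheHellAmIConstantlyLossingDataFromThisStupidTable]
WHERE Company = 'Company$ProblemTable'
ORDER BY DateStamp DESC;

-- Step 2: insert the chosen snapshot back into the production table
INSERT INTO [ProductionDatabase].[dbo].[Company$ProblemTable]
(all of the fields from your problem table)
SELECT all of the matching fields from the snapshot table
FROM [SQLAdministration].[dbo].[WhyInTheHellAmIConstantlyLossingDataFromThisStupidTable]
WHERE Company = 'Company$ProblemTable'
AND DateStamp = '2014-05-27 10:10:00' /* the snapshot you chose in step 1 */
```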


Thanks again Ed Leighton-Dick (T|B)