Prettify (or not) your SQL alerts and reports

HTML formatted table for your SQL alerts or reports
Alerts and reports in plain text are the easiest to set up, fastest to render and smallest to send, so why should you bother with HTML formatting? Believe it or not, your time and server/network load are not the top concern (or any concern) for most, if not all, of your users. They want emails that show up looking nice. One look at this example will show I didn't get my degree in design and have no sense of style, but the structure is there for you to choose your own color palette to please your users.

20150616-EmailTable
When emails look like this, who can ignore them?

I scraped some data from Twitter to load a test table and did a quick sum to see who is the top tweeter in an arbitrary span of time.

If your instance is set up to run msdb.dbo.sp_send_dbmail you should be able to run the script for an example.

CREATE TABLE #testData (rowNumber INT IDENTITY(1,1), person VARCHAR(30), tweets VARCHAR(140))

INSERT INTO #testData (person, tweets) VALUES ('Jes Borland @grrl_geek','45 minute run in 65℉ wiith 100% humidity. Had to work for that one.')
INSERT INTO #testData (person, tweets) VALUES ('Jes Borland @grrl_geek','Awwww yeah, finally got the domain name I wanted for a running blog.')
INSERT INTO #testData (person, tweets) VALUES ('Jes Borland @grrl_geek','I have a script for that.')
INSERT INTO #testData (person, tweets) VALUES ('Jes Borland @grrl_geek','I say that a lot these days. It''s nice to be able to.')
INSERT INTO #testData (person, tweets) VALUES ('Jes Borland @grrl_geek','If Saturday is for doing nothing useful, I''ve succeeded.')
INSERT INTO #testData (person, tweets) VALUES ('Jes Borland @grrl_geek','I''ll always love string cheese.')
INSERT INTO #testData (person, tweets) VALUES ('Jes Borland @grrl_geek','Ready for lunch! (It''s 9:39 am. Ooops.)')
INSERT INTO #testData (person, tweets) VALUES ('Jes Borland @grrl_geek','Scheduled carpet cleaning. I''m officially done with all adult duties for the day.')
INSERT INTO #testData (person, tweets) VALUES ('Jes Borland @grrl_geek','TO THE CLOUD!')
INSERT INTO #testData (person, tweets) VALUES ('Jes Borland @grrl_geek','To the driving range!')
INSERT INTO #testData (person, tweets) VALUES ('Jes Borland @grrl_geek','When I stand outside I sweat. We have 3 cubic feet of river rock to move. Today is going to suck.')
INSERT INTO #testData (person, tweets) VALUES ('Grant Fritchey @Gfritchey','Checking out, then off to #sqlcruise')
INSERT INTO #testData (person, tweets) VALUES ('Grant Fritchey @Gfritchey','Got my feet into the Med. I''m happy. #SQlcruise')
INSERT INTO #testData (person, tweets) VALUES ('Grant Fritchey @Gfritchey','I see @AaronBertrand and @sqlrnnr #SQLcruise')
INSERT INTO #testData (person, tweets) VALUES ('Grant Fritchey @Gfritchey','I''m at Plaça de Sant Felip Neri in Barcelona https://www.swarmapp.com/c/7hWHjnWOAt8 ')
INSERT INTO #testData (person, tweets) VALUES ('Grant Fritchey @Gfritchey','I''m at Xaloc in Barcelona, Catalonia https://www.swarmapp.com/c/h4856sB1K8O ')
INSERT INTO #testData (person, tweets) VALUES ('Grant Fritchey @Gfritchey','Interesting reading: Haunting chalkboard drawings, frozen in time for 100 years, discovered in Oklahoma school http://buff.ly/1f4R5cs ')
INSERT INTO #testData (person, tweets) VALUES ('Grant Fritchey @Gfritchey','Interesting reading: It’s The Future http://buff.ly/1Qr1aRI ')
INSERT INTO #testData (person, tweets) VALUES ('Grant Fritchey @Gfritchey','Interesting reading: The Curious Case of Mencius Moldbug http://buff.ly/1cNmRJn ')
INSERT INTO #testData (person, tweets) VALUES ('Grant Fritchey @Gfritchey','Still in port, but I finished my first presentation. #sqlcruise is off and running.')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','My connection is now leaving earlier than originally scheduled.')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','I''m so confused. ')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','I wish I had some UA statuse friends who could call and see if I was rebooked yet or not')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','Now another 1.5 hour delay #karenstravelstories')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','Not sure this math works out for YYZ > ORD > MSP http://ift.tt/1MEZWMH ')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','Stayed on tarmac for an hours. Now back to gate to get more fuel. http://ift.tt/1HKjsId ')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','Dear @united I sent you a DM love note.')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','So we took since tour of the Tarmac, now headed back to gate to get more fuel. #karenstrav… http://ift.tt/1C8SrrH ')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','Now a ground stop at ORD. so no take off time now. # karenstravelstories. Connections at risk.')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','Both of these made me laugh. #dataquality ')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','Wow. Now United is serving Prosecco on domestic flights. Better than the Na in breakfast salad.')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','The crew doesn''t like being handed a plane full of kicked in the junk passengers.')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','Oh. So I turns out the crew was still at the gate when the GA made her announcement. They are not happy either')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','I feel like an idiot with a Mac every time I go to touch the IFE on a UA flight. #ItsNotTouch')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','I would have volunteered. But only other option has no F.')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','Nearly all the pax are saying "oh. It''s our fault?" I think she made a lot of non-cooperative pax here. #karenstravelstories')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','Due to the fact we aren''t getting any volunteers to take another flight, we are going to delay the flight. Really @united? Nasty tone')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','I''m posting odds that he elbows past me at boarding time.')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','He also has a white canvas tote, à la LLBean.')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','I ant to ask if he works for JCREW')
INSERT INTO #testData (person, tweets) VALUES ('Karen Lopez @datachick ','I''m judging the guy next to me by how he is dressed. I think he must be a real jerk. ')

DECLARE @bodyMsg nvarchar(max)
DECLARE @subject nvarchar(max)
DECLARE @tableHTML nvarchar(max)
DECLARE @Table NVARCHAR(MAX) = N''

SET @subject = 'Twitter Status Report' --Subject line of the email

--Load @Table (which isn't really a table) with your data
SELECT @Table = @Table + '<tr style="background-color:#A3E0FF;">' +
'<td>' + person + '</td>' +
'<td>' + CAST(COUNT(tweets) AS VARCHAR(5)) + '</td></tr>'
FROM #testData
GROUP BY person
ORDER BY COUNT(tweets) DESC

SET @tableHTML =
N'<H3><font color="Red">Who''s the top tweeter?</font></H3>' + --Title above the table
N'<table border="1" align="left" cellpadding="2" cellspacing="0" style="color:purple; font-family:helvetica; text-align:center;" >' + --Table formatting
N'<tr style ="font-size: 14px; font-weight: normal; background: #b9c9fe; text-align:left;"> ' + --Row level formatting
N'<th>Twitter celebrity</th><th>Number of Tweets</th>' + --Row titles
N'</tr>' + @Table + N'</table>' --Row data

EXEC msdb.dbo.sp_send_dbmail @recipients='dba@importantcompany.com',
@subject = @subject,
@body = @tableHTML,
@body_format = 'HTML' ;

DROP TABLE #testData
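
One easy upgrade: alternating row colors instead of a single blue stripe. Here's a sketch of a drop-in replacement for the @Table query above (my own variation, not part of the original script; it leans on the same assign-a-variable-in-a-SELECT trick, so the usual caveats about relying on ORDER BY there apply):

;WITH tweetCounts AS (
SELECT person,
COUNT(tweets) AS tweetCount,
ROW_NUMBER() OVER (ORDER BY COUNT(tweets) DESC) AS rowNum
FROM #testData
GROUP BY person
)
SELECT @Table = @Table +
'<tr style="background-color:' +
CASE WHEN rowNum % 2 = 1 THEN '#A3E0FF' ELSE '#FFFFFF' END + ';">' + --Stripe odd rows blue
'<td>' + person + '</td>' +
'<td>' + CAST(tweetCount AS VARCHAR(5)) + '</td></tr>'
FROM tweetCounts
ORDER BY rowNum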

Business Intelligence Is Bull Shit

So many infographics, charts and reports are a waste of bytes, or worse, a waste of the paper they are printed on. At least manure can be used as fertilizer. Bad BI is useless, wasteful and potentially dangerous. So much gets produced without well-reasoned thought and without involving the people in the business this “intelligence” is produced for.

Titans of industry met last Friday night: 3 engineers and 1 IT geek at a table of tacos, margaritas and beer. What started off as friendly shop talk soon escalated into rants about the uselessness of various BI outputs. One described a hallway, walls lined with plastic sleeves filled with reports that no one read. Another talked about a business, its physical infrastructure crumbling, that poured countless hours into producing acronym-filled graphs. I offered my own story.

Just out of college, MBA freshly embossed, with a love for statistical analysis and a brand new job I was eager to impress at, I worked to create a gargantuan monthly reporting package. It was Excel based. It was overly complicated and cumbersome. It contained graphs and charts based on incomplete data and did not answer any questions asked by those who really knew the company’s business. My boss was very excited and supportive. He thought this was vital information that had been badly needed. I’m certain he never finished reading the first month’s report and stopped looking at them altogether after a while. I soldiered on, determined to prove my worth. I was not providing value to the company, but I did learn a valuable lesson. Business intelligence, or whatever name your analysis goes by, is only as useful as the questions it can answer and the decisions you can make based on it.

BI, big data, analytics, etc. are amazing, exciting and hot-selling items right now. Like all hot-selling and popular items, it’s buyer beware. Before you churn out that next bit of analysis you think the business needs to have, ask yourself some questions.

  • What decision have you ever made off of a heat map?
  • What strategic objective have you achieved because of a word cloud?
  • Who in the business has asked for that report and what question/problem is it solving?

Producing unneeded or useless reporting wastes not only your time but also that of everyone else in the organization who comes across it. And wasted time is probably the least of your worries. If the data you are reporting on isn’t good in the first place, you might be damning a generation to a lifetime of spinach. Phil Factor can make that point much better than I.

I am a giant hypocrite in this area. My own site, http://www.beerintelligenceproject.com, is dedicated to creating, storing and reporting on data for no other purpose than my personal enjoyment. There’s a heat map based on counts of breweries and beers and a top 10 breweries by beer chart. I don’t have a word cloud, but if I did there would be a giant IPA in the middle. This BI is crap, but nobody is paying me to produce it.

SQL Server 2014 get off my lawn

Do DBAs fear change? I sure hope not, because it’s coming. Be it DevOps or cloud services or your deep love for SQL Server 2000: platforms, services and your way of life will shift. I like installing new versions of SQL Server and checking out the changes and new features. I can’t say I always embrace the change, and SQL Server 2014 is a mix of the comfortably familiar and the new (and maybe not for me yet) frontier. SQL Server 2014 has its feet on the edge of my lawn and I’m trying to figure out if I should start yelling at it yet.

Last week I had my first opportunity to install SQL Server 2014 on a new-to-me OS, Windows Server 2012. The SQL installation process was very familiar and changed little from 2012. It still started at the Installation Center and offered a near-identical set of options on the Features page, the difference being the missing SQL Server Data Tools. In 2012 you had the option of installing the SQL Server development environment, including what was once called BIDS, Business Intelligence Development Studio. The 2014 install doesn’t have this option; it has been split out and must be downloaded separately as Microsoft SQL Server Data Tools – Business Intelligence for Visual Studio 2013.

20150526-InstallOptions
Miss you? I didn’t even realize you were gone!

The rest of the installation was very familiar, with no surprises. There was this nice reminder about setting permissions on the default backup directory, just in case you forgot. I don’t recall seeing this in 2012.

20150526-Warning
An actual permissions check would be better than a blanket warning.

Backups in 2014 have some new enhancements around where your backups go and letting SQL Server manage them for you. Taking some screenshots from the good old maintenance plans, you can see on the Destination tab that along with a folder (local or network) you now have the option of an Azure storage container.

20150526-BackupOptions
Where are your backups going to go?

Managing backups can be anywhere from trivial to overwhelming, depending on scale. My experience with backups has been more on the trivial side. Not to make light of their importance, they just haven’t taken much of my time. Those with a more difficult backup workload may be interested in turning responsibility over to SQL Server. With the new SQL Server Managed Backup to Windows Azure, SQL Server manages and automates backups to Azure Blob storage. With this setup the DBA is only required to provide the retention period and the storage location, with SQL Server determining the frequency based on the workload. I can see the appeal of turning this over to SQL Server to manage.
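
Enabling it for one database looks roughly like this; a sketch from my reading of the 2014 documentation, where the database name and the credential (which you create beforehand with your Azure storage account details) are placeholders.

USE msdb;
GO
EXEC smart_admin.sp_set_db_backup
@database_name = N'YourDatabase', --Placeholder name
@enable_backup = 1,
@retention_days = 30, --1 to 30 days
@credential_name = N'YourAzureCredential', --Placeholder, must already exist
@encryption_algorithm = 'NO_ENCRYPTION';
GO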

The criteria used to determine backup frequency is very interesting.

Full Database Backup: The SQL Server Managed Backup to Windows Azure agent schedules a full database backup if any of the following is true.

  • A database is SQL Server Managed Backup to Windows Azure enabled for the first time, or when SQL Server Managed Backup to Windows Azure is enabled with default settings at the instance level.
  • The log growth since last full database backup is equal to or larger than 1 GB.
  • The maximum time interval of one week has passed since the last full database backup. TP – 1 week! None of my databases have a full backup that infrequently, especially when differentials aren’t supported. See below for support.
  • The log chain is broken. SQL Server Managed Backup to Windows Azure periodically checks whether the log chain is intact by comparing the first and last LSNs of the backup files. If there is a break in the log chain for any reason, it schedules a full database backup. The most common reason for log chain breaks is probably a backup command issued using Transact-SQL or through the Backup task in SQL Server Management Studio. Other common scenarios include accidental deletion of backup log files, or accidental overwrites of backups. TP – This is a pretty sweet feature. I can see other people having cases where someone (maybe a developer, just maybe) with too many permissions takes a backup. That’s never happened to me, just saying.

Transaction Log Backup: SQL Server Managed Backup to Windows Azure schedules a log backup if any of the following is true:

  • There is no log backup history that can be found. This is usually true when SQL Server Managed Backup to Windows Azure is enabled for the first time. TP – Starting things off right, I like that.
  • The transaction log space used is 5 MB or larger. TP – How many changes were made in your database before 5 MB were written to the log? Is it OK to lose those?
  • The maximum time interval of 2 hours since the last log backup is reached. TP – Longer interval than most of my t-log backups, but at least it isn’t a week.
  • Any time the transaction log backup is lagging behind a full database backup. The goal is to keep the log chain ahead of full backup. TP – We all need goals.

Limitations:

  • SQL Server Managed Backup to Windows Azure agent supports database backups only: Full and Log Backups. File backup automation is not supported.
  • System Databases are not supported. TP – SQL Server can’t manage everything, I guess you still have a job.
  • Windows Azure Blob Storage service is the only supported backup storage option. Backups to disk or tape are not supported. TP – I guess that’s why they call it SQL Server Managed Backup to Windows Azure. Soon to be called SSMBWA, that rolls right off the tongue.

This is a change I’m not quite ready to embrace. I like control of my backups. I want to determine the frequency based on my discussions with the data owners, not on SQL Server’s workload analysis. This feature isn’t allowed on my lawn.

You know who doesn’t fear change, but actively seeks it out and promotes it? Mercy Corps. Be the change and help in places of need like Nepal.

Are you sure you are backing everything up?

Your databases are backed up with full, diff and t-log jobs using slick scripts from some of the several smart and helpful SQL superheroes flying around the internet. They are also copied to tape or other storage and stored offsite, ready for the next DR drill or real-life disaster. Your computer (work or home, maybe that is the same thing) has automated backup software to keep all your cat pictures safe. You’re a pro, you’ve got this.

For all of us using #sqlnewblogger, are we backing up our newest work? I’m hosted at WordPress and am relying on them to keep my data safe, available and recoverable. I’m not backing this site up and would be disappointed if it disappeared someday, though I’m probably the only one who would notice. There are several WordPress plugins that handle backups, but I have not yet spent the time and energy to set them up. I do write all my posts outside of WP before I publish, so I have a half-baked backup. It’s better than none at all, but not great. Kinda like unbaked cookie dough: it tastes delicious, but might give you salmonella. Are you taking that risk? I am, with both the lack of backups and the cookie dough eating.

This isn’t my only site. I have another site with a MySQL backend. I’ve put a lot of work into the data design, entry and maintenance, not so much into the web design part yet. I do ad-hoc backups of the site and the database. Losing some of my SQL writings would be unfortunate. Losing my data on beer drinking would be horrific.

Red Gate would get my money, if I had any

This is how software companies should respond when a potential customer finds a problem with their product. I recently tested Red Gate’s SQL Prompt and found some problems with it in my environment. The TL;DR is that it wasn’t auto-completing and I was getting timeout errors when trying to use it with a database of 161 thousand tables and 3.3 million columns. After my testing and uninstalling the demo, I left my comments and the URL of my post detailing my experience in the uninstall survey. Not everyone likes filling out surveys, but they can provide valuable feedback and possibly effect change. Not long after I submitted it I received this email.

Hi Tim,

I’m one of the developers of Prompt and I just saw your uninstall feedback about running out of memory with a huge database.

Unfortunately at the moment I don’t have any solutions for you (a work around is to exclude the database from being loaded which will stop the exception but won’t get you suggestions). We do have plans in the future for solving this by optimising how and where we store our model of the database.

I can let you know when we have a new build with those optimisations in place if you’d be willing to try it?

Thanks,
Aaron.

They are listening and working to improve their product. Yeah, I’ll take some time to test out a new version. Will I end up buying it? I’m not so sure about that. Budgets can be tight in any organization, but working at a non-profit has caused me to scrutinize every purchase extra carefully. I’ve worked without SQL Prompt for a long time and can continue to do my job without it. Would I like to have it (assuming a new version that works for me)? Of course. Will I be able to justify the purchase and ROI? Hopefully.

Speaking of working for a non-profit: Mercy Corps is doing great work to help survivors in Nepal, please consider a donation.

20150505_MC

Going for another month of #SQLNewBlogger

Corruption – The Denmark of SQL Instances

20150428-countrycorruption
Red = corruption, like your DBCC results

Those who follow government corruption indexes are used to seeing Denmark at the top. The places I work with these days, not so much. Somalia, I’m looking at you, buddy. Along with being one of the happiest countries, Denmark is also the least corrupt. In my decade and a half as a DBA and developer, all of my SQL instances have been Danish, literally as well as figuratively. Yes, some of them were actually on servers in Denmark, but I’m talking in terms of corruption. I’ve never had a case of corruption, except for purposely caused training examples. Have I been lucky? Yes. Will I hit corruption someday? Maybe. Am I worried about it and protecting against it? Oh hell yes.

20150428-sb
Backups!

First off, backups, or as Steve Ballmer might say, while sweating profusely, “Backups! Backups! Backups! Backups! Backups! Backups!…” Without good backups, corruption will make you hate being a DBA. So, now that you have your backups (fulls and diffs and transaction logs, oh my) taken care of, how do you spot corruption? Alerts. What makes my instances, hardware and applications so amazeballs that they have the incorruptibility of a saint? I have no idea, but I check, recheck and alert the hell out of them. First step is a nightly DBCC CHECKDB. This of course is a standard across every SQL instance ever set up. Nobody has ever run a SQL instance without implementing this. Right? Good.

Next are my alerts. The alert CREATE script below assumes an Operator named DBA and Database Mail are set up.
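
If that operator doesn’t exist yet, creating one is quick. A sketch, with a placeholder email address:

USE [msdb]
GO
EXEC msdb.dbo.sp_add_operator @name=N'DBA',
@enabled=1,
@email_address=N'dba@importantcompany.com'; --Placeholder address
GO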

USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 016',
@message_id=0,
@severity=16,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 016', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 017',
@message_id=0,
@severity=17,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 017', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 018',
@message_id=0,
@severity=18,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 018', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 019',
@message_id=0,
@severity=19,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 019', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 020',
@message_id=0,
@severity=20,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 020', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 021',
@message_id=0,
@severity=21,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 021', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 022',
@message_id=0,
@severity=22,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 022', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 023',
@message_id=0,
@severity=23,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 023', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 024',
@message_id=0,
@severity=24,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 024', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 025',
@message_id=0,
@severity=25,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 025', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Error Number 823',
@message_id=823,
@severity=0,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 823', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Error Number 824',
@message_id=824,
@severity=0,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 824', @operator_name=N'DBA', @notification_method = 1;
GO
EXEC msdb.dbo.sp_add_alert @name=N'Error Number 825',
@message_id=825,
@severity=0,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO
EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 825', @operator_name=N'DBA', @notification_method = 1;
GO
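
If maintaining ten nearly identical blocks bugs you, the severity alerts can also be generated in a loop. This sketch creates the same Severity 016 through 025 alerts and notifications (the all-zeros @job_id above is optional, so it’s dropped here):

USE [msdb]
GO
DECLARE @severity INT = 16;
DECLARE @name SYSNAME;
WHILE @severity <= 25
BEGIN
SET @name = N'Severity 0' + CAST(@severity AS NVARCHAR(2)); --'Severity 016' through 'Severity 025'
EXEC msdb.dbo.sp_add_alert @name=@name,
@message_id=0,
@severity=@severity,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1;
EXEC msdb.dbo.sp_add_notification @alert_name=@name, @operator_name=N'DBA', @notification_method = 1;
SET @severity += 1;
END
GO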

Errors 823, 824 and 825 are raised when Windows API calls (ReadFile, WriteFile, ReadFileScatter, WriteFileGather) performing I/O operations fail. These may be triggered by hardware failure and could indicate a need to increase your hardware budget.

Severities 16 through 25 cover a variety of issues, including corruption. In terms of corruption, let’s look at 22, 23 and 24. This is a terse version of the MSDN entries.
22: Indicates that the table or index specified in the message has been damaged by a software or hardware problem.
23: Indicates that the integrity of the entire database is in question because of a hardware or software problem.
24: Indicates a media failure. The system administrator may have to restore the database. You may also have to call your hardware vendor.

These errors rarely occur, but when/if they do, run DBCC CHECKDB to find which objects are damaged. There are several methods of getting your database back to a consistent state, ranging from easy and good (restoring from backups) to less than optimal (DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS) to methods that border on black magic (DBCC WRITEPAGE). All of these require practice.
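
For reference, the less-than-optimal route looks like this. A sketch with a hypothetical database name; REPAIR_ALLOW_DATA_LOSS requires single-user mode and can do exactly what its name says, so exhaust your backups first.

ALTER DATABASE [CorruptDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; --Hypothetical database
DBCC CHECKDB (N'CorruptDB', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [CorruptDB] SET MULTI_USER;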

Most people learn best by doing. Reading a blog won’t prevent or fix corruption, but practicing on a corrupted database will give you the skillz so when your time comes you can be the DBA in shining armor, saving the day, rescuing the end users and protecting the organization with trumpets blaring and confetti flying. Or it could just mean keeping your job. Steve Stedman has launched a good series of corruption challenges. He corrupts the database, gives you some details and tells you to fix it. Kinda like your boss (telling you to fix it), but more helpful (giving you details). It’s a timed challenge with a scoreboard for those who like to keep track.

These challenges give you a chance to practice fixing corruption as well as to test your alerts. With the week #2 challenge restored, you can run a query against the dbo.Result table and have this nifty message appear via email or text.

Msg 824, Level 24, State 2, Line 1
SQL Server detected a logical consistency-based I/O error: invalid protection option. It occurred during a read of page (1:244) in database ID 13 at offset 0x000000001e8000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\CorruptionChallenge2.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

Challenge #3 set off alerts with severity 16 and 20 when trying to use an orphaned log file.

SQL Server Alert System: 'Severity 016'
DESCRIPTION: Cannot create file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\CorruptionChallenge3.mdf' because it already exists. Change the file path or the file name, and retry the operation.
SQL Server Alert System: 'Severity 020'
DESCRIPTION: During upgrade, database raised exception 945, severity 14, state 2, address 0000000001385BE1. Use the exception number to determine the cause.

Hopefully my databases continue to reside happily uncorrupted, but I’ve probably jinxed myself with this post and some of those alerts will be firing off soon. Crap.

#SQLNewBlogger
Thanks again Ed Leighton-Dick (T|B)

Sir-Auto-Completes-A-Lot a.k.a. how to break IntelliSense, SQL Prompt and SQL Complete

I like big schemas and can not lie
You other DBAs can’t deny
That when studio opens up with a funky ass database
and a million objects in your face
It gets hung, your coding gets rough
’cause you notice your base was stuffed
Deep in the schema it’s searchin
I’m stuck and IntelliSense is lurchin
Oh baby, I wanna finish this query
But this lag is makin’ me weary
My co-workers tried to warn me
Cause this SQL rhyme got way too corny

So coders (Yeah!) Coders (Yeah!)
Can you code better than Hodor (Hell yeah!)
Then fire it up. Type it out.
Even SAN admins got to shout
DBA got back…….
to typing out queries, because auto complete tools aren’t really working out.

I have a love/hate relationship with some of my databases. I love hating them as they are great fodder for testing/breaking tools/procedures, blog posts and, most importantly, Sir Mix-A-Lot parodies. I’ve put up with IntelliSense stalling and even locking up SSMS when I’m writing a query against one of the 162 thousand tables containing 3.3 million columns. I know how to turn it off, but I don’t have the human RAM to hold the names of everything.

20150421-TandC
This wouldn’t fit in a SMALLINT, but maybe it was really important to save those 2 bytes and not go INT.

IntelliSense is far from perfect for me here, so I tried out a few other options: Red Gate’s SQL Prompt and ApexSQL Complete. Both struggled in my setup and both had some very cool features. This is a hot take and not a review/recommendation; your mileage may vary. I haven’t reached out to support for either product yet or fully explored all of their features.

SQL Prompt
Downloading a trial version and installing was a snap. It shows up as a toolbar and menu item. When you connect to a database, SQL Prompt loads objects, or at least it tries to.

20150421-Loading

This is where I hit my first problem. The loading worked great on databases with a small number of objects, but once I switched to a large db I ran into two errors: OutOfMemoryException and a timeout. The timeout period is around 1 minute and I haven’t found a way to adjust it.

20150421-PromptErrors

After the errors, prompting was limited to the database level and it appeared that none of the tables had been loaded. I know the names of my databases, and SSMS already has a cute little drop-down to show them. I don’t think I need help at this level.

20150421-Tables
I need help on a deeper level

Promptless Prompt aside, I like the Tab History, formatting and wildcard expansion. Expanding out my SELECT * when the table has over 200 columns is pretty great, especially when those columns often use spaces or odd characters. I kid you not, I have a column that has both an & and a ? IN THE NAME!!!

ApexSQL Complete
Complete had the same issues with loading a large number of objects. SSMS returned a busy message and objects weren’t loaded. Once I started to write out my query, Complete threw the error, “An error occurred while executing batch. Error message is: Exception of type ‘System.OutOfMemoryException’ was thrown.”

20150421-Apex
Why load table names when you can load random code snippets

I uninstalled Apex without getting too deep into its features, but I did like auto aliasing. On a small db, Apex would automatically tack an alias onto your table. I once had a coworker who would alias their tables in alphabetical order: the first table would be “a”, the second “b” and so on and so forth. Apex would have been a vast improvement there.

#SQLNewBlogger
Thanks again Ed Leighton-Dick (T|B)

Why yes I can do table level restores

Table level restores are available in SQL Server 2008 R2 Standard…under the right circumstances.

20150414-Circumstances
Are you under these circumstances?

For better or for worse, you probably aren’t under those circumstances, so sorry or good for you, take your pick. MS Dynamics NAV running on SQL Server has a feature to restore a subset of tables without having to restore the entire database. This is especially great when your database has 160 thousand plus tables and you only need to restore some of them. My implementation of Dynamics NAV has the database split up into companies, and each company has 1,600+ tables. End users come to me fairly often with requests to recover data in a company or refresh a testing environment with the latest information from production. I could restore production to a new location and script out the 1,600 tables, but frankly that sounds like a pain. Luckily for me, Dynamics NAV has the option to back up and restore by company. I can delete and restore those 1,600+ tables with a few mouse clicks and in a fraction of the time a full restore would take.

While connected to your backup target with the Classic Client, select Tools > Backup. Select the Company you want to back up.

20150414-CompanyBackup

Connect to your restore target, select File > Company > Delete. Click Yes an appropriate number of times when you are really, really sure this is in fact the company you want to delete.

20150414-CompanyDelete
Are you absolutely sure? Cause it is going to ask you.

Click again on Tools > Restore. Browse to your backup and so on and so forth. Much faster than the full restore.

My second method of table level restores takes a little more custom setup. Sometimes, when you are in a rush to get an application into production, the application periodically and undesirably deletes all of the data in a table. You’ve all had this problem, right? I didn’t feel like restoring the whole database just to get back a few hundred rows. I also didn’t have the budget/authority/etc. to get a sweet 3rd party backup solution. So here comes the never-before-seen combination of an INSERT script and SQL Agent.

20150414-BandT2
Bill, “an INSERT script?” Ted, “and SQL Agent?” B&T, “Whoa”

In a utility database, create a table with all the fields of your at-risk table, plus (in my case) a company identifier and a datestamp column with a GETDATE() default. This is where right-clicking the table in SSMS and selecting Script Table as > CREATE To comes in really handy. Name it something catchy like dbo.WhyInTheHellAmIConstantlyLosingDataFromThisStupidTable, or if you’re not that creative just use a name similar to your problem table(s). In the script below, Field1 and Field2 are stand-ins for your actual columns.


INSERT INTO [SQLAdministration].[dbo].[WhyInTheHellAmIConstantlyLosingDataFromThisStupidTable]
(Company --Or another customer identifier
, Field1, Field2) --Stand-ins for all other fields from your problem table
SELECT 'Company$ProblemTable' AS Company
, Field1, Field2 --All of the fields in this table
FROM [ProductionDatabase].[dbo].[Company$ProblemTable]

Then set up a job to load the full contents of that table from production every X time units. Mine runs every 10 minutes. Don’t forget to include a trim step in your job, something like the sketch below. You don’t want these rows piling up forever. Or maybe you do. I don’t know. Maybe you like to impress others with the size of your tables.
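
The trim step can be as simple as this sketch, where dateStamp stands in for whatever you named the GETDATE()-defaulted column:

DELETE FROM [SQLAdministration].[dbo].[WhyInTheHellAmIConstantlyLosingDataFromThisStupidTable]
WHERE dateStamp < DATEADD(DAY, -7, GETDATE()) --Keep a week of copies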

Now when users come running to you like their data is on fire (remember, deleted data is not the same as burning data, trust me on this one) you can smoothly tell them that you can get their data back and save the day like the DBA superhero you are. That cape is a good look for you. Tights? Maybe, maybe not.

#SQLNewBlogger

Thanks again Ed Leighton-Dick (T|B)

Maintenance on a few…hundred thousand…indexes

Some run big data demos on laptops. Some technologists say big data means data sets so large or complex that traditional methods and practices are no longer adequate. My databases fall very short of big, but when I start looking at the number of objects I venture well above average and maybe even into biggish. I don’t have published metrics to back this up, but in my decade-plus time with SQL Server I haven’t seen anything like what I deal with these days. I support a MS Dynamics NAV database with a mind-boggling number of objects. Maybe my mind boggles a bit easier than others, but when I see 162,525 tables and 371,629 indexes my brain spins a bit. The copious number of objects comes from a setup where each company in the financial system has its own set of tables, nearly 1,600 tables per company. Each country we work in and each legal entity has its own company, and more than 80 of those later (and growing…) BAM, we’re in the hundreds of thousands of objects.

So what do you do with hundreds of thousands of indexes? DEFRAG THEM! Of course. Indexes are bound to be fragmented because data must be added and deleted and modified in those tables. Right? What’s the point of having 162 thousand tables if you aren’t constantly filling them with mission critical data? Time to throw a standard maintenance plan rebuild task at it. What could possibly go wrong with trying to rebuild 371 thousand indexes on a regular basis?

I don’t know, I’ve never tried that.

I skipped ahead and ran my own script against it, which loaded the results of sys.dm_db_index_physical_stats into a temp table, declared a cursor (yikes!), reorganized when avg_fragmentation_in_percent was between 10 and 30 and rebuilt when it was over 30 (the shape of it is sketched below). It ran OK, it was doing its job and it ran for around 45 minutes on a weekly basis. That was working and nobody was complaining, at least not about that part. Why mess with things?

Because. Reasons.
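
For the curious, the shape of that old script was roughly the following. This is a simplified reconstruction, not the original; it cursors straight over the DMV results instead of using a temp table.

DECLARE @schemaName SYSNAME, @tableName SYSNAME, @indexName SYSNAME
DECLARE @frag FLOAT, @sql NVARCHAR(MAX)

DECLARE indexCursor CURSOR FOR
SELECT s.name, t.name, i.name, ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.tables t ON t.object_id = ps.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE ps.avg_fragmentation_in_percent >= 10
AND i.name IS NOT NULL --Skip heaps

OPEN indexCursor
FETCH NEXT FROM indexCursor INTO @schemaName, @tableName, @indexName, @frag
WHILE @@FETCH_STATUS = 0
BEGIN
--REORGANIZE between 10 and 30 percent fragmentation, REBUILD over 30
SET @sql = N'ALTER INDEX ' + QUOTENAME(@indexName) + N' ON ' + QUOTENAME(@schemaName) + N'.' + QUOTENAME(@tableName) +
CASE WHEN @frag > 30 THEN N' REBUILD' ELSE N' REORGANIZE' END
EXEC sp_executesql @sql
FETCH NEXT FROM indexCursor INTO @schemaName, @tableName, @indexName, @frag
END
CLOSE indexCursor
DEALLOCATE indexCursor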

Ola
My first go-around with Ola Hallengren’s scripts did not go so well, but that was a while ago and I failed to take notes. I’m sure something very pressing was going on that prevented me. Yes, very pressing. This guy doesn’t get praise and accolades out the wazoo (or whatever the Swedish word for wazoo is) if he isn’t doing amazingly awesome things. I must have not tried hard enough. MOAR OLA! I gave Ola’s IndexOptimize script another shot. I’ve written a few things about my implementation of Ola’s scripts, but the TL;DR is that they are working. In the beginning this script worked on 1,600 indexes, a few less than the 371 thousand available. Now it floats between 400 and 800, usually taking less than 10 minutes. It’s not perfect yet; there are times when the job runs over an hour, and I am ignoring an elephant in the room.
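
For reference, an IndexOptimize call with thresholds matching my old script looks something like this (parameter names are from Ola’s documentation; this is a sketch, not my exact job):

EXECUTE dbo.IndexOptimize
@Databases = 'USER_DATABASES',
@FragmentationLow = NULL, --Under 10 percent: leave it alone
@FragmentationMedium = 'INDEX_REORGANIZE',
@FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationLevel1 = 10,
@FragmentationLevel2 = 30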

Minion
With IndexOptimize running in production I can look into trying new things in development. Minion Reindex by Midnight SQL was introduced as an easy to install, easy to run and very customizable index maintenance tool kit. Setup is a snap and there are many ways to tweak and control how it runs. I wanted to see how it ran out of the box. Out of the .zip is more accurate, but doesn’t have the same consumerist snap to it.

[Minion].[IndexMaintMaster]
@IndexOption = 'All', -- Valid options: All, ONLINE, OFFLINE
@ReorgMode = 'REORG', -- Valid options: All, Reorg, Rebuild
@RunPrepped = 0, -- Valid options: 1, 0. If you've collected index frag stats ahead of time by running with @PrepOnly = 1, then you can use this option.
@PrepOnly = 0, -- Valid options: 1, 0. Allows you to take frag stats earlier in the day and save them for use at a later time.
@StmtOnly = 0, -- Print the reindex stmts w/o modifying the SP. This will NOT run any reindex stmts, only PRINT them.
@Include = NULL, -- Only do DBs listed here. Commas are used. ex: @Include = 'master, model, msdb'
@Exclude = NULL, -- Do all DBs except the ones listed here. Commas are used. ex: @Exclude = 'master, model, msdb'
@LogProgress = 0 -- Allows Live Insight through Log tables.

Minion has a few indexes to chew through. This is running on the dev box, which has a slightly larger database than production.

20150403_Minion_WhoIsActive

sp_whoisactive shows a run time of over 2 days and 36 billion reads. Looking at the Status field of the Minion.IndexMaintLogDetails table at the same time shows me that it is almost complete: 383,484 of 386,105.

20150403_Minion_SQLAgent

Minion has run a few times with varied success. Two runs completed, lasting nearly 2 and a half days each, then a failure on the 3rd run. I haven’t dug in to figure out what Parameter 3 is, but apparently it was incorrect. I can appreciate the extra level of detail and control here, but I appreciate even more a job that finishes successfully and in less time.

What else do you do with indexes besides maintenance? Make them better! How? With sp_BlitzIndex, of course. It takes a while to run, but it gave me nearly 300 thousand easy to implement suggestions on how I can improve the indexes I have to deal with.

Finding (count)
Feature-Phobic Indexes: No indexes use includes (1)
Feature-Phobic Indexes: Potential filtered index (based on column name) (2,544)
Index Hoarder: Addicted to nulls (1,272)
Index Hoarder: Addicted to strings (12,088)
Index Hoarder: Borderline: Wide indexes (7 or more columns) (37,948)
Index Hoarder: Many NC indexes on a single table (6,996)
Index Hoarder: More than 5 percent NC indexes are unused (1)
Index Hoarder: Unused NC index (1,573)
Index Hoarder: Wide clustered index (> 3 columns OR > 16 bytes) (129,348)
Index Hoarder: Wide tables: 35+ cols or > 2000 non-LOB bytes (23,957)
Multiple Index Personalities: Borderline duplicate keys (76,852)
Self Loathing Indexes: Active heap (1)
Workaholics: Scan-a-lots (index_usage_stats) (5)
Workaholics: Top recent accesses (index_op_stats) (5)
Total (292,591)

Unfortunately this is a 3rd party database and I can’t make any schema changes. Maybe that’s not so unfortunate, I’m not sure I’d want to be held responsible for the creation of this monstrosity.

#SQLNewBlogger
Thanks to Ed Leighton-Dick and others for getting this going.