T-SQL Tuesday #75–Running with Power BI


This month we have Jorge Segarra (B|T) hosting the T-SQL Tuesday with the topic of Power BI.



Hey data pros, people don’t want to look at your data in tables or result panes. They want charts, graphs and pretty pictures, right? To do that you need Power BI. Just look at this amazing stuff you can do with it. Seriously, go over to Jason Thomas’ site and check it out.


Now the danger of any BI is that data is distilled, and as anyone with a pot still can tell you, it doesn’t always come out so great, and even when it does, it might not be the best stuff for you. An older friend of mine once told me a story about his military service in WWII. Out of boredom, they graphed and acronymized their soda pop consumption. One day their superior was giving a tour of the office and showed off said chart, and everyone nodded in appreciation and utter ignorance. This could easily happen in any business with a creative/bored developer and just a bit of time. For example, here I took a bunch of meaningless data, loaded it into Power BI and spit out some charts that would make an exec swoon.


Here’s the pointy and clicky version: https://app.powerbi.com/view?r=eyJrIjoiMjJjYmY5N2ItMmRhYS00ZjRmLTk1MjMtZWRiMjEyMGJiMjk5IiwidCI6IjA4ZGU4NjI4LTU5M2UtNDNmNS05Y2ViLWYzZTVkZDdhZTJjYiIsImMiOjF9

Now I’m sure there are some very valid use cases for BI and implementations of Power BI. Just make sure you know what you’re sipping. Just because you can visualize data doesn’t always mean you should. The most important aspect of BI is that it solves a business question/problem, you know the B part of BI.

T-SQL Tuesday #74 – Shredding nested XML


It’s T-SQL Tuesday, the blog party founded by Adam Machanic (blog|@AdamMachanic) and this month hosted by Robert Davis (blog|@SQLSoldier). The topic: Be the Change.


Start of the New Year is a great time to Be the Change. You could do that by donating to Mercy Corps (full disclosure: my employer) with their familiar tag line Be the change.


You could also do it by digging into data changes: how it changes, where it changes and why it changes. One of my more interesting data changes involves an XML shred. It starts with a download from an exchange rate API, saving the XML file, shredding it into a tabular format and uploading it to a table. The part I find most interesting is the XML shred.

The XML below has data nested at different levels, which requires the nodes method to join them together. The nodes method accepts an XQuery expression and returns a rowset. That rowset can then be used with CROSS APPLY to effectively link your way down.

nodes (XQuery) as Table(Column)
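As a minimal sketch of that pattern (toy XML, not the actual API feed; names chosen to match the query below), nodes plus CROSS APPLY looks like this:

```sql
DECLARE @x XML = '<response><quotes><quote><currency>KZT</currency></quote><quote><currency>BRL</currency></quote></quotes></response>'

/* Each CROSS APPLY drills one level deeper; quoteT(quoteC) is a rowset with one row per quote element */
SELECT quoteT.quoteC.value('(currency/text())[1]','VARCHAR(3)') AS exchangeRate
FROM @x.nodes('/response/quotes') AS quotesT(quotesC)
CROSS APPLY quotesT.quotesC.nodes('quote') AS quoteT(quoteC)
```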

The tabular format I need requires data from 3 different levels of this XML gob and I need to wade through 5 “tables” to get there.

So I start with this….
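(The original post showed the source XML as a screenshot. A trimmed, hand-made sketch of the shape the shredding query expects, with element names taken from the query and values taken from the result table, would be roughly:)

```sql
DECLARE @xml XML = '
<response>
  <base_currency>USD</base_currency>
  <meta>
    <effective_params>
      <date>2016-01-09</date>
    </effective_params>
  </meta>
  <quotes>
    <quote>
      <currency>KZT</currency>
      <date>2016-01-09T19:00:00+0000</date>
      <ask>353.06</ask>
      <bid>347.57</bid>
    </quote>
    <!-- ...one quote element per currency... -->
  </quotes>
</response>'
```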


And want to get here…

baseRate  exchangeRate  oDate                     cDate                 ask      bid      requestDate
USD       KZT           2016-01-09T19:00:00+0000  1/9/2016 7:00:00 PM   353.06   347.57   1/9/2016
USD       BRL           2016-01-09T19:00:00+0000  1/9/2016 7:00:00 PM   4.02911  4.02305  1/9/2016
USD       AFN           2016-01-09T12:00:00+0000  1/9/2016 12:00:00 PM  68.79    68.59    1/9/2016

Tenge, Real and Afghani. Oh my!

So I have to do this…

SELECT
  mainT.mainC.value('(base_currency/text())[1]','VARCHAR(3)') AS baseRate
, quoteT.quoteC.value('(currency/text())[1]','VARCHAR(3)') as exchangeRate
, quoteT.quoteC.value('(date/text())[1]','VARCHAR(40)') as oDate
, CONVERT(DATETIME,LEFT(REPLACE(REPLACE(quoteT.quoteC.value('(date/text())[1]','VARCHAR(40)') ,'T',' '),'+','.'),23),120) as cDate
, quoteT.quoteC.value('(ask/text())[1]','FLOAT') AS ask
, quoteT.quoteC.value('(bid/text())[1]','FLOAT') AS bid
, epT.epC.value('(date/text())[1]','VARCHAR(40)') AS requestDate
FROM @xml.nodes('/response') mainT(mainC) 
    /*mainT = main table, mainC = main column*/
CROSS APPLY mainT.mainC.nodes('meta') as metaT(metaC)
CROSS APPLY metaT.metaC.nodes('effective_params') as epT(epC) 
    /*ep = effective_params*/
CROSS APPLY mainT.mainC.nodes('quotes') as quotesT(quotesC)
CROSS APPLY quotesT.quotesC.nodes('quote') as quoteT(quoteC)

In the beginning of the FROM line I grab the “response” node from @xml (a variable that I loaded the XML file into) and return the main table/column, referred to as mainT(mainC). In the next line I join to that rowset via a CROSS APPLY to the “meta” table, and so on and so forth through “effective_params”, “quotes” and “quote”. Great naming there; watch for the “s”.

You can see back in the SELECT section I used the mainT.mainC.value to pull the base_currency column. Each rowset returned by the nodes function can be used in the SELECT.

There is also this little bit of fun to change a datetime stored as a string with a “T” in it, “2016-01-11T12:30:04+0000” to a more respectable format that will insert nicely.

CONVERT(DATETIME,LEFT(REPLACE(REPLACE(quoteT.quoteC.value('(date/text())[1]','VARCHAR(40)') ,'T',' '),'+','.'),23),120)
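Taken one function at a time, the string surgery looks like this (sample value from the feed above):

```sql
DECLARE @raw VARCHAR(40) = '2016-01-11T12:30:04+0000'

SELECT REPLACE(@raw,'T',' ') AS step1                                       /* '2016-01-11 12:30:04+0000' */
, REPLACE(REPLACE(@raw,'T',' '),'+','.') AS step2                           /* '2016-01-11 12:30:04.0000' */
, LEFT(REPLACE(REPLACE(@raw,'T',' '),'+','.'),23) AS step3                  /* '2016-01-11 12:30:04.000' */
, CONVERT(DATETIME,LEFT(REPLACE(REPLACE(@raw,'T',' '),'+','.'),23),120) AS cDate /* a proper DATETIME */
```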

Happy New Year everyone and I hope everything changes for the best.

Terrific Trifecta of T-SQL on Tuesday

Much Monitoring in the Month of May


Sometimes tooling can be easy, helpful and cheap. You usually only get to pick 2 legs of that triangle. So instead of a triangle you get a less than sign, as in less than awesome. This quick bit of code gets all legs of the Triforce.

I’m better at T-SQL than I ever was at Legend of Zelda

First leg – Easy.
One create table script and one Agent job and you are ready to go with this. The table we will store the results in is a mirror of sp_Who2, plus an insertdate column with the default of GETDATE().

CREATE TABLE [dbo].[Who2](
[spid] [int] NULL,
[status] [varchar](50) NULL,
[loginname] [varchar](100) NULL,
[hostname] [varchar](100) NULL,
[blkBy] [varchar](50) NULL,
[dbname] [varchar](50) NULL,
[cmd] [varchar](200) NULL,
[CPUTime] [int] NULL,
[DiskIO] [int] NULL,
[LastBatch] [varchar](50) NULL,
[ProgramName] [varchar](200) NULL,
[spid2] [int] NULL,
[request_id] [int] NULL,
[insertdate] [datetime] NOT NULL
)

ALTER TABLE [dbo].[Who2] ADD CONSTRAINT [DF_Who2_insertdate] DEFAULT (getdate()) FOR [insertdate]

Pretty simple and easy there. Set this up in a utility/admin database and you’re off to the next step, the Agent job. The INSERT script below puts the results of sp_Who2 into your new table. All you need to do is determine the timing. I have it setup for every 10 minutes.

INSERT INTO Who2(spid,[Status],[LoginName],hostname,blkBy,dbname,[cmd],CPUTime,DiskIO,LastBatch,Programname,spid2,request_id) EXEC sp_who2
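If you’d rather script the job than click through the Agent UI, something along these lines sets up the 10-minute collection. The job, step and schedule names here are made up, and @database_name should point at your utility/admin database:

```sql
USE msdb;
EXEC dbo.sp_add_job @job_name = N'Collect sp_who2';
EXEC dbo.sp_add_jobstep @job_name = N'Collect sp_who2',
    @step_name = N'Insert snapshot',
    @subsystem = N'TSQL',
    @database_name = N'AdminDB', /*Your utility/admin database*/
    @command = N'INSERT INTO Who2(spid,[Status],[LoginName],hostname,blkBy,dbname,[cmd],CPUTime,DiskIO,LastBatch,Programname,spid2,request_id) EXEC sp_who2';
EXEC dbo.sp_add_jobschedule @job_name = N'Collect sp_who2',
    @name = N'Every 10 minutes',
    @freq_type = 4,            /*Daily*/
    @freq_interval = 1,
    @freq_subday_type = 4,     /*Repeat in units of minutes*/
    @freq_subday_interval = 10;
EXEC dbo.sp_add_jobserver @job_name = N'Collect sp_who2';
```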

Second leg – Helpful
Not all homegrown or off the shelf apps have ways of measuring usage. This is how this came about for me. One application had a hard limit on number of concurrent users and we wanted warnings when we were close and be able to look at it over time.

DECLARE @maxLicensedUsers AS INT = 10 /*What's your app limit? I hope it is more than 10, but who am I to judge.*/
DECLARE @currentLicensedUsers AS INT
DECLARE @emailMessage AS VARCHAR(100)

SET @currentLicensedUsers = (
SELECT COUNT(spid)
FROM Who2
WHERE ProgramName IN ('') /*Add your app name(s) here*/
AND DBName = '' /*Add your db name here*/
AND insertdate = (SELECT MAX(insertdate) FROM Who2)
)

IF @currentLicensedUsers >= @maxLicensedUsers
BEGIN
    SET @emailMessage = 'Your app is super popular and has ' + CAST(@currentLicensedUsers AS VARCHAR(4)) + ' sessions.'
    EXEC msdb.dbo.sp_send_dbmail @profile_name='', /*Go set up a db mail profile if you don't already have that*/
    @recipients='', /*Someone important should get these emails. DBAs are important, maybe one of those people.*/
    @subject='Super popular app license warning',
    @body= @emailMessage
END

This app also has an issue with users opening up duplicate sessions. That doesn’t help with the hard limit on concurrent users. An alert on duplicate sessions is also easy, helpful and cheap.
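A sketch of that duplicate-session check against the same Who2 table (the filter values are placeholders, as above):

```sql
SELECT loginname, COUNT(spid) AS sessionCount
FROM dbo.Who2
WHERE dbname = '' /*Add your db name here*/
AND ProgramName IN ('') /*Add your app name(s) here*/
AND insertdate = (SELECT MAX(insertdate) FROM dbo.Who2) /*Latest snapshot only*/
GROUP BY loginname
HAVING COUNT(spid) > 1 /*Logins holding more than one session at once*/
```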

Working for a global agency means you never sleep or maybe that is your app never sleeps. Logging this data regularly helps point out when the application is most used. This query gives me average and max session counts by hour for the last month. This helps plan maintenance windows, which are generally scheduled for Friday evenings. Ah, the exciting life of a DBA.

SELECT hr
, ROUND(AVG(SessionCount),0) as S_AveCount
, MAX(SessionCount) AS S_MaxCount
FROM (
SELECT
DATEPART(YEAR,insertdate) AS yr,DATEPART(MONTH,insertdate) AS mn,DATEPART(DAY,insertdate) AS dy,DATEPART(HH,insertdate) as hr,
DATEPART(minute,insertdate) as 'minute'
, COUNT(DISTINCT loginname) as loginCount
, COUNT(DISTINCT loginname + ' ' + ProgramName) as programCount
, COUNT(spid) as SessionCount
FROM dbo.Who2
WHERE dbname = '' /*Add your db name here*/
AND ProgramName IN ('') /*Add your app name(s) here*/
AND insertdate > DATEADD(M,-1,GETDATE()) --Get data for previous month
AND DATEPART(WEEKDAY,insertdate) <> 7 --Saturday is a low activity day in most countries so we're filtering that out
GROUP BY DATEPART(YEAR,insertdate),DATEPART(MONTH,insertdate),DATEPART(DAY,insertdate),DATEPART(HH,insertdate),DATEPART(minute,insertdate)) AS a
GROUP BY hr
ORDER BY hr

Third leg – Cheap
This is super cheap. You found some code on the internets, you have a SQL Server (presumably) and there isn’t anything to license. I’m sure you could find a way to send me money if you really wanted to, but there are better things to do with your cash. Like helping out in Nepal.

Perfection is not one of the legs
This solution hits easy, helpful and cheap for me, but it is not perfect. It is a snapshot of activity every 10 minutes. It is not real time or complete monitoring. Are there users logging in and out between the 10 minute samples? Could be, but I’m not capturing that. This works as long as I and the business owners are clear about what is being measured and what is possibly being missed. Will this satisfy auditors who need to verify system access? I doubt it, but you would need to ask an auditor about that.

TSQL Tuesday


It is TSQL Tuesday and I’m giving it my first shot. The topic is performance tuning and is graciously hosted by Russ Thomas (@SQLJudo).

What is my “go BIG or flame out” trick. It’s none other than NOLOCK. It’s flames on sports cars. It’s the turbo button. It is the SQL developer fairy dust and unicorns. In somewhat recent pop culture it’s Tiger blood and Adonis DNA. It’s the Charlie Sheen of TSQL performance tuning.


It sounds great and is easy to implement. Who wouldn’t want to use that? Sure, it’s a little risky, but what’s development without a bit of danger? Isn’t that why we got into the DBA business? A craving for adrenaline-fueled risk.


NOLOCK is the lovely table hint that doesn’t put shared locks on your tables. Great! It doesn’t stop other transactions from modifying data. Perfect! It increases concurrency. What could be better! Like other vices, it can get addictive.
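For anyone who hasn’t seen it in the wild, the hint goes on like this (table and column names invented for the example):

```sql
/* Reads without taking shared locks: fast and non-blocking,
   but may return dirty, uncommitted or soon-to-be-rolled-back rows */
SELECT OrderID, Total
FROM dbo.Orders WITH (NOLOCK)
```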


Also, like other vices there are down sides. Without those locks, users could be reading data that is in the middle of being updated. Can the application handle that? Will they be reading incomplete data? Is that data being deleted and never should have existed in the first place? That’s NOLOCK for you. We don’t know!


But who am I to tell you not to use NOLOCK? These people are all far smarter and more qualified than I.

All I can do is end with a slightly altered Charlie Sheen quote, “I’m on a drug. It’s called NOLOCK. It’s not available because if you try it you will die. Your face will melt off and your children will weep over your exploded body.”