SQLSaturday #47 Postmortem



exhaustion:

  1. extreme tiredness; fatigue
  2. the condition of being used up; consumption: exhaustion of the earth’s resources
  3. the act of exhausting or the state of being exhausted


Exhaustion is the first thing that comes to mind. Wow is a close second! To put it into perspective: imagine you have been tasked with instructing Amy Winehouse about sobriety… Yeah, it’s gonna require a lot of time and energy! This was my first event, so without a shadow of a doubt I was on pins and needles up until the end of the after party. Like anyone else you want your event to be successful from start to finish. Essentially you want things to go according to plan, in short… perfect. But we all know Murphy’s law, so there is no denying that things will not always go according to plan… don’t let it bother you.

However I do feel that being experienced does help you roll with the punches much like an agile process. You are able to adjust as things come your way. That is exactly what Chris Coneybeer (blog | @coneybeer) brought to the table… experience among several other things! So I commend Chris on doing an exceptional job on adjusting to everything that presented itself.


I want to say thank you for helping SQLSaturday #47 come together. It’s not easy and our time is valuable so thank you for taking time out of your schedule to help with the preparations of the community event. There is nothing like supporting our community.

So thank you: (in no particular order)

  • Anthony Williams (blog | twitter)
  • Qui Ly (blog | @QLy27)
  • Steve Cable (LinkedIn | twitter)


Because of your willingness to share the wealth of knowledge, SQLSaturday #47 was able to offer five tracks of information-packed sessions. The fact that our presenters actually participated in the event on their own dime speaks volumes about their commitment to the community. That’s right! The flight, hotel, car rental and other incurred expenses were out of their own pocket. So thank you for your generosity and thank you for being you!

So thank you: (in no particular order)

  • Dale Cunningham (blog | @fly_scuba)
      Using Confio Ignite 8 for Production
  • Jonathan Atkins (blog | @JonAtkins57)
      TSQL Fun – Avoiding cursors
  • Amy Lewis (blog | @amy_lewisAZ)
      First Flight: Taking off with Analysis Services
      First Flight: Earn your ETL wings with SSIS
  • Janis Griffin (blog | @DoBoutAnything)
      Performance Management – 2008 MDW – How & Why
      Tuna Helper – Proven Process for Tuning SQL
  • William E. Pearson III (blog | @Bill_Pearson)
      Entity Strategies: Structuring Your Consultancy
      Getting Started with MDX
  • Robert Miller (blog | @RobertMiller)
      Mirroring 101
      Manage and Monitor Your Mirror
  • Colin Smith (blog | @smithco32)
      Powershell for the DBA
  • Michael Collins (blog | @mfcollins3)
      Database Development using Visual Studio
      Introduction to Microsoft SQL Azure
  • Ira Whiteside (blog | @irawhiteside)
      Creating a Metadata Mart w/ SSIS – Data Governance
      (SSIS, TSQL and MDS) – Record Linkage(Fuzzy Matchi
  • Bill Ramos (blog | @billramo)
      SQL Server Data Collection Strategies
      The Making of the SQL Server Denali Always On Dash
  • Meredith Ryan-Smith (blog | @coffegrl)
      Work/Life Balance? Just A Myth?
      Backup Basics – Knowing your Options and When to
  • Denny Cherry (blog | @mrdenny)
      SQL Server Clustering 101
      Back To Basics; Getting Back To The Basics of SQL
      Where should I be encrypting my data?
  • Orion Gebremedhin (blog | @OrionGM)
      SSRS-Subscriptions & Render Device Settings
      SSAS Partitioning and Usage Based Optimization
  • John Racer (blog | @speedracer)
      Where Are My Reports? Managing Reporting Services
      Empowering End Users with Report Models
  • Zach Mattson (blog | @WIDBA)
      Powershell Eye for the Monitoring Guy
  • Denise McInerney (blog | @denisemc06)
      DBA as Protector of the Data: Notes from the Field
  • Tara Kizer (blog | @TaraKizer)
      Performance Tuning with Traces


Without your support SQLSaturday #47 would have never happened. Your contributions help more than you know. You provided valuable information about the many services and tools available to help us do our job better, easier and faster. Thank you for your generosity and continued support for the SQL Community!

So thank you: (in no particular order)



Thanks to the generous staff at Chandler-Gilbert Community College for letting us use their awesome campus to host the event. It was a beautiful venue and we look forward to continuing our partnership for events to come!


So to the volunteers, presenters, sponsors, CGCC and to all who attended… THANK YOU! We couldn’t have done it without you and I look forward to seeing and meeting you at the next one!!!


Needing a change… Modify your Filename

A friend of mine told me that he felt a need for a change. Without question I said cool. Change is always inviting as it keeps things interesting. Little did I know he was referring to changing his database file names. So he issued a simple ALTER DATABASE MODIFY FILE statement and specified a new name for each of his database files.

ALTER DATABASE [AdventureWorks]
MODIFY FILE (NAME = AdventureWorks_data, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\AdventureWorks_data.mdf')

ALTER DATABASE [AdventureWorks]
MODIFY FILE (NAME = AdventureWorks_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\AdventureWorks_log.ldf')

and the results were:

The file "AdventureWorks_data" has been modified in the system catalog. The new path will be used the next time the database is started.
The file "AdventureWorks_Log" has been modified in the system catalog. The new path will be used the next time the database is started.

He took this message as a cue to restart the SQL Server Engine Service, which he did; however, that did not resolve the underlying issue. After refreshing the databases he attempted to expand the db and received a nice little error indicating that the database was not accessible. That is when I received the call.

Having experienced this myself I remembered that:

1. The database needed to be taken offline

2. The physical files needed to be renamed accordingly

3. The database needed to be brought back online
Once these steps were completed he was back in business. On a side note, this scenario changed the file names, but you can also change the actual file path location for each of the files using the same syntax. If you want to update the logical names for both the data and log files, below is the syntax for doing so.

ALTER DATABASE [AdventureWorks] MODIFY FILE (NAME=N'AdWks', NEWNAME=N'AdventureWorks_data')
ALTER DATABASE [AdventureWorks] MODIFY FILE (NAME=N'AdWks_log', NEWNAME=N'AdventureWorks_log')
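Putting it all together, the offline/rename/online cycle from the steps above would look something like this (a sketch assuming the AdventureWorks database from the example; the physical rename in step 2 happens in the file system, outside of SQL Server):

```sql
/* Step 1: take the database offline (ROLLBACK IMMEDIATE kicks out open connections) */
ALTER DATABASE [AdventureWorks] SET OFFLINE WITH ROLLBACK IMMEDIATE

/* Step 2: rename the physical .mdf/.ldf files in Windows so they match
   the names supplied in the MODIFY FILE statements */

/* Step 3: bring the database back online */
ALTER DATABASE [AdventureWorks] SET ONLINE
```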

Get row count from all tables

I was reviewing some of my older scripts and came across one in particular that stood out. I wrote it quite a while ago and, looking back, it shows where I came from in terms of logic. To give a little background, my script was written to grab the row counts from all tables within a given database. The problem… it created a temp table, loaded it by querying INFORMATION_SCHEMA.TABLES, then looped through each table name, loading each into a variable and passing it to an update statement that updated the temp table. As you can see this was a very inefficient way to go about it.

I was reading up on catalog views and it seemed that I could query sys.sysobjects and join sys.sysindexes. So I figured I would get this verified. I posed the question and the following is the response I received.

Yep. sys.sysindexes is deprecated in Denali. Information here: msdn.microsoft.com

So here is my script. I originally left out the index_id filter (the last line) and when I did I saw a lot of duplication in the result set. So I surfed around and found a post titled, “Display Row count for all tables” by Shyam Skj that used the indid field from sysindexes to filter the records. The closest thing I found was the index_id from sys.partitions and, lo and behold, using similar syntax the results returned nicely.

/* Check if the object already exists */
IF OBJECT_ID('usp_GetTableRowCounts') IS NOT NULL
	/* If it does, drop it */
	DROP PROCEDURE usp_GetTableRowCounts
GO
/* Create procedure */
CREATE PROCEDURE usp_GetTableRowCounts
AS
SELECT o.name 'TableName', p.rows 'Rows'
FROM sys.objects o
JOIN sys.partitions p ON o.object_id = p.object_id
WHERE (o.type = 'U')
AND (p.index_id IN (0,1))
GO
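A quick way to smoke-test the procedure once it is created:

```sql
/* Returns one row per user table; the index_id filter (0 = heap, 1 = clustered index)
   prevents the duplicate rows that nonclustered indexes would otherwise introduce */
EXEC usp_GetTableRowCounts
```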

AdventureWorks Results

T-SQL Tuesday: Lucky 7-Resolutions

Official TSQL2sday Logo


Every year I make a resolution list just like anyone else, and often I am able to achieve some items while others are pushed aside for reasons beyond my control. This year however is slightly different now that I have completed my Bachelor of Science degree in Computer Information Systems with a focus on Database Management. This alone makes some of my resolutions much more attainable. With the added time that I have I can finally focus on certification studies among other things that pertain to personal and professional development. If I am able to juggle my time wisely I can achieve the ultimate yin-yang of personal time management.

So to get down to business here is my list of things I want to accomplish in 2011:

1. Health

No surprise here: in order to achieve my goals I need to be healthy. I dropped about 30 lbs during 2010 and have about 20 lbs to go to reach my ideal weight, which is 190. So before 2011 comes to a close I should be at or around 190 lbs, and I am pretty confident I will meet that goal before mid year.

2. Certifications

I currently hold three certifications, but it’s time to buckle down and concentrate on SQL Server-centric certifications. Before the end of the first quarter I want to complete the MCITP Database Administrator certification for SQL Server 2008, which means I started studying for exam 70-432 at the beginning of this year. To make things more realistic I am going to schedule the test by this Friday (to be taken in the near future), which will add the much-needed pressure, thus making accountability that much more important.

3. ETL Stizzuf

There is just something about Business Intelligence that tickles my fancy. It’s hard to describe but I love reports, especially those I don’t have to hand write. 2011 will be the year I become a BI bad ass, so to speak. To paint a picture, I want to strive to become the west coast version of Brian Knight, but on a smaller scale, all while wearing Adidas kicks or sandals with socks on because I feel strange without socks… all other attire is optional. At one point or another within the year I want to actually spend a night in a warehouse while developing a data warehouse. I think that would be peachy! Icing on the cake if you will.

4. Get more involved

I tested the waters last year and started to participate where I could. To my surprise I found that people are actually willing to lend a helping hand and share their wealth of knowledge with one another. When I was handed the keys to manage the organization’s data I really didn’t know where to begin or who to consult in the event I ran into a brick wall. I picked up some books, started browsing blogs and joined 26 social networking sites. Then after learning more about social networking I found that twitter and facebook seem to have the highest concentration of SQL socialites. I can’t say they accepted me, but I can say they have made an impact on my life professionally and personally. I won’t get into details but know that I am grateful.

5. MVP or runner up

This might be a long shot given the pool of MVPs out there. So the stakes are high and quite frankly I have no idea how to toss my hat into the ring. This is something I have only read about lightly and never really asked someone how they achieved MVP status. But nonetheless I need goals to keep me aiming high and moving in the right direction.

6. Organize a SQLSaturday

This might be cheating, along with a bit of self-promotion, since we are slated to launch #sqlsat47 this February 19th, 2011 at Chandler-Gilbert Community College. This ties into Item 4 about getting more involved. I am a newbie in terms of event coordinating, but I plan, along with my co-organizer @coneybeer, to deliver a successful SQLSaturday. There is still time to register for the event… I hope to see you there!!!!

7. Participate in the TSQL2sDay blog party

I have seen many of these pop up over the last year and I find myself very intrigued. I did some research about #TSQL2sDay (which was founded by Adam Machanic): anyone can become a host provided they participate in at least two #TSQL2sDay events (which are held monthly), keep an active blog, and follow up with an email to Adam Machanic. So with the list I have this year and the participation that I plan on doing, I should be ready come next year to host a #TSQL2sDay event. Check out the

Think and review before diving in head first

On a recent project assignment I was tasked to comb through a client database and target a specific field that resided within a single table. This field was powered by a free-form text field on a .NET web application. Instead of separating the required bits of data into their own fields, everything was lumped into one NVARCHAR field. Definitely not an ideal situation, but nonetheless the task needed to be done. I spoke with our Business Analyst and obtained the requirements.

So I fired up SSMS 2008R2 and started to formulate the query. That was my first mistake! I regressed back to my rookie days without really thinking it through, and instead of really researching in-depth I started to ask for assistance (twitter hash tag: #sqlhelp to the rescue). Don’t get me wrong… asking for help isn’t terrible, it’s just that thinking it through first really makes a significant difference. To make a long story short, let me tell you what steps I took to make this move along smoother.

Step 1: Always review the data first

If I would have carefully inspected the data first I would have noticed the different ways the end users were entering the data. Then I could have easily constructed special conditions to target those specific instances. For example here is a short list of what the data kind of looked like:

  • Select on plan/Spec 283
  • Side Menu/Spec 118 Sht 30
  • Side Menu/Spec 474 Sht C1
  • Select on plan/Spec 283 Sheet 17
  • Select on Plan/Spec 278 Sht 44 Dtl A
  • Select on Plan/Spec 389 Sht 4 Bypass Piping
  • Select on Plan/Spec 408.1 Sht M-2
  • Site Piping Menu/Spec 408.1 Sht C-1 Plan
  • Surge Tank Menu/Spec 408.1 Sht M-4 Dtl B
  • Select on plan/Sheet 8 Detail C
  • Select from Side Menu/Sheet 7 Detail C
  • Menu from Discharge Relief Valve/Sheet 8 Detail F
  • Menu for Valve/Sheet 8 Detail A
  • Select on plan/Sheet 12 Detail C
  • Menu for Pressure Transmitter Cabinet/Sheet 20 Detail B
  • Menu for Suction Surge Tank 2/Sheet 15 Detail V

My task was to find each instance of SPEC, SHEET and SHT and grab the numbers or characters that followed. As you can see there is only one instance of each keyword per row, which made it a bit easier.

Step 2: Identify the patterns

I took each keyword and began analyzing the data carefully:

For SPEC, the three characters that followed were always numeric (e.g. Spec 283), and there was a small handful that had three digits followed by a period and another digit (e.g. Spec 408.1), so my options were to grab the following four or six characters to the right.

For SHEET it was pretty easy. Trailing SHEET was always a numeric value, the only difference being a single digit (e.g. Sheet 2) or double digits (e.g. Sheet 12). So that means I can grab the following three characters to the right and trim both sides to satisfy this requirement.

The SHT on the other hand was a bit more difficult since it varied the most. It either had a trailing single numeric digit (e.g. Sht 2), double numeric digits (e.g. Sht 13) or a mix of alphanumeric characters (e.g. Sht M-1, Sht P8 or Sht P20).

Now that I can clearly see what patterns to look for I can begin to construct my query. Nothing like a little function action using CHARINDEX mixed with a little dose of PATINDEX to assist in situations like this. Of course there are many ways to approach this, but this worked for me and satisfied the requirements.
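The idea can be sketched like this (not the actual production query; just an illustration of CHARINDEX and PATINDEX against one of the sample values above, with a made-up variable name):

```sql
DECLARE @field NVARCHAR(100) = N'Select on Plan/Spec 408.1 Sht M-2'  -- sample value from the list above

SELECT
	/* SPEC: grab up to six characters after the keyword (covers both 283 and 408.1) */
	CASE WHEN CHARINDEX('Spec ', @field) > 0
		 THEN RTRIM(SUBSTRING(@field, CHARINDEX('Spec ', @field) + 5, 6))
	END AS SpecValue,   -- '408.1' for this sample
	/* SHT: PATINDEX handles the alphanumeric variations (M-1, P8, P20, ...) */
	CASE WHEN PATINDEX('%Sht [A-Za-z0-9]%', @field) > 0
		 THEN LTRIM(SUBSTRING(@field, PATINDEX('%Sht [A-Za-z0-9]%', @field) + 4, 4))
	END AS ShtValue     -- 'M-2' for this sample
```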

Step 3: Approach each requirement one at a time

This permitted me to concentrate on a specific goal until I was able to correctly return the desired output. Naturally I felt that attacking the easiest ones first would be best, but after reading the book “Eat That Frog!” I learned to eat the biggest and ugliest frog first, as it provides the greatest sense of accomplishment. So I started swinging at SHT. Once that was crystal clear I moved on to SPEC and SHEET. I must mention that a fellow SQL buff by the name of Aaron Bertrand [ Twitter | Blog ] actually provided a very cool script that tackled the SPEC and covered all the bases. So cheers to him!

Step 4: Piece everything together and test again

Once everything is working accordingly piece everything together and test it again. I cannot emphasize this enough. Q/A is necessary and must be done until everything comes out the way we need it to. Data is critical to any organization and we as data professionals know it better than anyone else. So test, test and retest. Everything worked out as needed and the results are positive.

Off the subject
If I didn’t mention this before #sqlhelp is extremely useful and there are many people that are extremely talented that are willing to lend you a helping hand. It’s a community thing and I love it! If twitter ain’t your thing that’s fine no biggie; however, if you are looking into it… do it! You won’t be sorry.

Dropping a subscriber database

Usually I can simply run an ALTER statement against the database and SET SINGLE_USER WITH NO_WAIT like so:

ALTER DATABASE myDBName SET SINGLE_USER WITH NO_WAIT

but I ran into a problem with active processes that prevented me from switching to SINGLE_USER mode. So I executed the sp_who2 stored procedure, identified the spids responsible and killed them!

EXEC sp_Who2

I essentially ran the KILL command manually for every spid associated with the specific database. As you can see it was very inefficient and a great waste of time.

Now imagine if I had to do this with 200 subscribers… not a fun time!

Then I ran sp_who2 again to make sure they were dead.

EXEC sp_Who2

This worked for me at the time, but in the future I don’t want to deal with the hassle of running through all the steps involved. I turned to BOL and then remembered I could easily query sysprocesses and mock up a script, but I didn’t really have the time to run through the whole Q/A process. So I searched the interwebs and stumbled upon the blog post [Script to drop all connections to a Database] by Tijo. It was exactly what I was looking for. So I borrowed it and merged it with some additional items specific to our cleanup steps. In a nutshell, here is the final outcome.

DECLARE @spid INT
DECLARE @db SYSNAME, @cmd2 VARCHAR(200), @cmd3 VARCHAR(200)
SET @db = 'myDBName'

/* Grab the first spid connected to the target database */
SELECT @spid = MIN(spid)
FROM sys.sysprocesses
WHERE dbid = DB_ID(@db)

/* Kill each connection until none remain */
WHILE @spid IS NOT NULL
BEGIN
	EXEC ('KILL ' + CAST(@spid AS VARCHAR(10)))
	SELECT @spid = MIN(spid)
	FROM sys.sysprocesses
	WHERE dbid = DB_ID(@db)
END

/* Drop the database, then the merge agent login if it exists */
SET @cmd2 = 'DROP DATABASE ' + @db
EXEC (@cmd2)

IF EXISTS (SELECT * FROM sys.syslogins WHERE name = HOST_NAME() + '\MergeAgent')
BEGIN
	SET @cmd3 = 'DROP LOGIN [' + HOST_NAME() + '\MergeAgent]'
	EXEC (@cmd3)
END

Also thinking about the future I combined this sql script with a batch file using SQLCMD and provided it to the onsite support staff so that they can handle the reinitialization for subscriptions that fall outside of their threshold.

@echo off
color 17
Title Device Reinit Clean Up . . .

:: =========================
SET srv=(local)\Instance
:: =========================

SQLCMD -S %srv% -E -i "Device.ReinitCleanUp.sql"

Both files need to reside in the same directory in order to work; however, if you decide to call the sql script from a network share, be sure to make the necessary adjustments. The reason is that we don’t deploy Management Studio on the subscribers and some of the onsite technical support staff are unfamiliar with SQL Server. So I would rather not take any chances, which is why I am going to add an additional condition.

IF HOST_NAME() = 'servername'
BEGIN
	PRINT '**** WARNING ****'
	PRINT 'You attempted to apply the cleanup script against the Production Server!'
	PRINT 'Terminating process!'
END
ELSE
BEGIN
	DECLARE @spid INT
	DECLARE @db SYSNAME, @cmd2 VARCHAR(200), @cmd3 VARCHAR(200)
	SET @db = 'myDBName'

	SELECT @spid = MIN(spid)
	FROM sys.sysprocesses
	WHERE dbid = DB_ID(@db)

	WHILE @spid IS NOT NULL
	BEGIN
		EXEC ('KILL ' + CAST(@spid AS VARCHAR(10)))
		SELECT @spid = MIN(spid)
		FROM sys.sysprocesses
		WHERE dbid = DB_ID(@db)
	END

	SET @cmd2 = 'DROP DATABASE ' + @db
	EXEC (@cmd2)

	IF EXISTS (SELECT * FROM sys.syslogins WHERE name = HOST_NAME() + '\MergeAgent')
	BEGIN
		SET @cmd3 = 'DROP LOGIN [' + HOST_NAME() + '\MergeAgent]'
		EXEC (@cmd3)
	END
END

Calculate Subscription Expiration Threshold

While I was piecing together a few queries for a report intended to deliver information about subscribers nearing the expiration threshold for replication, I found myself needing to calculate warning threshold levels. In one case I found that not only did I need to calculate the threshold warning criteria, but I also needed to round the outcome to the nearest whole integer.

As you probably know, the default retention period for replication is 14 days; however, in Replication Monitor you can set up warnings to indicate when a subscriber is nearing the threshold. Typically this is 80%, so in short I am going to take the retention value, multiply it by .8 and round to the nearest whole integer. For this example I really only need to query data from one table [MSreplication_monitordata], which resides within the distribution database. If you have replication set up then you will have a distribution database; if you don’t, the distribution database will not be available. Also note this example is based on a single publication, so if you have multiple publications you will see more than one entry per subscriber per publication.

Step 1: Simple Query

I just want the retention period…

SELECT retention 
FROM [Distribution]..[MSreplication_monitordata] 

Now we know what the retention period is set to and that is 14 days. Now I need to multiply the retention value by .8 and the result will be 11.2. So 80% of 14 = 11.2.

Step 2: Simple Calculation

SELECT retention
      ,(retention * .8) 
FROM [Distribution]..[MSreplication_monitordata]

Side Note:

Keep in mind, if you want to get the threshold value of the metric being monitored from the database, JOIN [Distribution]..[MSpublicationthresholds] t ON m.publication_id = t.publication_id and pull t.[value], which will return 80 (the default). If it was changed then of course the value will differ. Just note that this is a sql_variant data type, so you will need to CAST or CONVERT it to perform the calculation.

SELECT	m.retention 
		-- These steps are just for illustration
		,(CAST(t.[value] AS DECIMAL(10,2))/100) '1'
		,(m.[retention] * (CAST(t.[value] AS DECIMAL(10,2))/100)) '2'
		,CAST((m.[retention] * (CAST(t.[value] AS DECIMAL(10,2))/100)) AS INT) '3'
		-- The following is faster and easier
		,(m.retention * CAST(t.[value] AS INT)/100) '4'
FROM [Distribution]..[MSreplication_monitordata] m
JOIN [Distribution]..[MSpublicationthresholds] t ON m.publication_id = t.publication_id

What this means in terms of Replication Monitor is that any subscriber that has not synchronized for 11 days, with the retention set to 14 days, will have a subscription status of “Expiring Soon/Expired“. If the retention was set to 30 days, which is also common, and the threshold set to the default of 80%, then subscribers would show the same status when they reach 24 days without synchronizing.

Step 3: Round the Results

11.2 is a good start, but I need a whole number so I will wrap line 2 from (Step 2) inside a ROUND function and supply 0 for the length or precision as it needs one argument.

SELECT retention
      ,ROUND((retention * .8) ,0)
FROM [Distribution]..[MSreplication_monitordata]

Step 4: Remove the Decimal

The above statement returns a value of 11.0 which is rounded, but now I need to get rid of the decimal place so that 11 is the only thing returned. Again I will take line 2 this time from (Step 3) and wrap it in a CAST function and CAST it as INT.

SELECT retention
      ,CAST(ROUND((retention * .8) ,0) AS INT)
FROM [Distribution]..[MSreplication_monitordata]

Or you could have saved yourself a few steps and done this (note that casting straight to INT truncates the decimal rather than rounding, which for this threshold works just as well)…

SELECT retention 
      ,CAST((retention * .8) AS INT) 
FROM [Distribution]..[MSreplication_monitordata] 

Final Results

When I apply the rounded value to my query as part of the WHERE clause only the records that fall within the expiration threshold (retention * .8) will be returned. Meaning subscriptions that are close to expiration and subscriptions that are expired.

The results are exactly what I wanted. Now I just need to take it a bit further… think email delivery!

I have wrapped this into a stored procedure and added some logic that will check for expiring records; if records are found it ties the results into a nice HTML-formatted message delivered by Database Mail. Then I set up a SQL job which runs the stored procedure daily, including weekends, and fires off an email only when records are found. Technically I could fire off a report subscription, which can be formatted very nicely, and just start the job using exec sp_start_job @job_name = ‘Report Subscription ID goes here’ whenever records are detected.
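That report-subscription alternative could be sketched like this (a hypothetical example: the job name is the placeholder from the text, and the EXISTS check is a simplified version of the expiration logic used in the stored procedure later in this post):

```sql
/* Sketch, not production code: start the report subscription job only when
   at least one subscriber has crossed the 80% warning threshold */
IF EXISTS (
	SELECT 1
	FROM [Distribution]..[MSmerge_sessions] s
	JOIN [Distribution]..[MSreplication_monitordata] m ON s.agent_id = m.agent_id
	WHERE DATEDIFF(d, s.[start_time], GETDATE()) >= CAST(m.[retention] * .8 AS INT)
)
	EXEC msdb.dbo.sp_start_job @job_name = N'Report Subscription ID goes here'
```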

For more information about using ROUND please visit: http://msdn.microsoft.com/en-us/library/ms175003.aspx

Download the code


/*
**	Author	:	Samson J. Loo (justsamson.com | @just_samson)
**	Created	:	9/26/2010
**	Intent	:	Notification of subscriptions that are nearing
**				the expiration threshold
**	Notes	:	Requires db mail
**	Version	:	0.01
*/

IF OBJECT_ID('sp_RP_GetExpiringSubscribers') IS NOT NULL
	DROP PROCEDURE sp_RP_GetExpiringSubscribers
GO
CREATE PROCEDURE [dbo].[sp_RP_GetExpiringSubscribers]
AS
DECLARE @cnt INT, @tableHTML NVARCHAR(MAX)

SET @cnt = 0
SELECT	@cnt = COUNT(a.[subscriber_name])
FROM [Distribution]..[MSmerge_sessions] s
	JOIN [Distribution]..[MSmerge_agents] a ON s.agent_id = a.id
	JOIN [Distribution]..[MSreplication_monitordata] m ON s.agent_id = m.agent_id
	JOIN [Distribution]..[MSpublicationthresholds] t ON m.publication_id = t.publication_id
WHERE s.end_time IN (
	SELECT TOP 1 s1.end_time 
	FROM [Distribution]..[MSmerge_sessions] s1
	WHERE s.agent_id = s1.agent_id
	ORDER BY s1.end_time DESC
) AND DATEDIFF(d,s.[start_time],getdate()) >= CAST((CAST(t.[value] AS DECIMAL)/100)* m.[retention] AS INT)
AND t.[isenabled] = 1
GROUP BY s.[start_time]

IF @cnt > 0
BEGIN
		SET @tableHTML =
			N'<H1><font color="#FF0000">Expiring Subscription Report</font></H1>' +
			N'<table border="0" cellspacing="2" cellpadding="2">' +
			N'<tr><th bgcolor="#BDBDBD">Subscriber</th>' +
						N'<th bgcolor="#BDBDBD">Status</th>
							<th bgcolor="#BDBDBD">Delivery Rate</th>
							<th bgcolor="#BDBDBD">Last Sync</th>' +
						N'<th bgcolor="#BDBDBD">Duration</th>
							<th bgcolor="#BDBDBD">Conn Type</th>
							<th bgcolor="#BDBDBD">Result</th>
							<th bgcolor="#BDBDBD">Days Behind</th>
						  <th bgcolor="#BDBDBD">Subscriber Status</th></tr>' +
			CAST ( (
		SELECT
			td = CASE
					WHEN CHARINDEX('\',a.[subscriber_name]) > 0 THEN LEFT(a.[subscriber_name],CHARINDEX('\',a.[subscriber_name])-1)
					ELSE a.[subscriber_name]
				END, ''
			,td = CASE
				WHEN s.[runstatus] = 3 THEN 'Synchronizing'
				WHEN s.[runstatus] = 5 THEN 'Retrying failed command'
				ELSE 'Not Synchronizing'
				END, ''
			,td = CAST(s.[delivery_rate] AS VARCHAR) + ' rows/sec', ''
			,td = s.[start_time], ''
			,td = CAST((s.[duration]/86400) AS VARCHAR)
			+ '.' + CAST(REPLACE(STR(((s.[duration]/3600) - ((s.[duration]/86400) * 24)),2),SPACE(1),0) AS VARCHAR)
			+ ':' + CAST(REPLACE(STR((s.[duration] % 3600/60),2),SPACE(1),0) AS VARCHAR)
			+ ':' + CAST(REPLACE(STR((s.[duration] % 60),2),SPACE(1),0) AS VARCHAR), ''
			,td = CASE
				WHEN s.[connection_type] = 1 THEN 'LAN'
				WHEN s.[connection_type] = 2 THEN 'Dialup'
				WHEN s.[connection_type] = 3 THEN 'Web Sync'
				END, ''
			,td = CASE
				WHEN s.[runstatus] = 1 THEN 'Start'
				WHEN s.[runstatus] = 2 THEN 'Succeed'
				WHEN s.[runstatus] = 3 THEN 'In Progress'
				WHEN s.[runstatus] = 4 THEN 'Idle'
				WHEN s.[runstatus] = 5 THEN 'Retry'
				WHEN s.[runstatus] = 6 THEN 'Error'
				END, ''
			,td = DATEDIFF(d,s.[start_time],getdate()), ''
			,td = CASE
				WHEN (DATEDIFF(d,s.[start_time],getdate()) < CAST((CAST(t.[value] AS DECIMAL)/100)* m.[retention] AS INT)) THEN 'Good'
				WHEN (DATEDIFF(d,s.[start_time],getdate()) <= m.[retention]) THEN 'Expiring Soon'
				WHEN (DATEDIFF(d,s.[start_time],getdate()) > m.[retention]) THEN 'Expired'
				END
		FROM [Distribution]..[MSmerge_sessions] s
			JOIN [Distribution]..[MSmerge_agents] a ON s.agent_id = a.id
			JOIN [Distribution]..[MSreplication_monitordata] m ON a.id = m.agent_id
			JOIN [Distribution]..[MSpublicationthresholds] t ON m.publication_id = t.publication_id
		WHERE s.end_time IN (
			SELECT TOP 1 s1.end_time
			FROM [Distribution]..[MSmerge_sessions] s1
			WHERE s.agent_id = s1.agent_id
			ORDER BY s1.end_time DESC
		) AND DATEDIFF(d,s.[start_time],getdate()) >= CAST((CAST(t.[value] AS DECIMAL)/100)* m.[retention] AS INT)
		AND t.[isenabled] = 1
		ORDER BY s.[start_time]
			FOR XML PATH('tr'), TYPE
			) AS NVARCHAR(MAX)) +
			N'</table>' ;

		EXEC msdb.dbo.sp_send_dbmail
			@profile_name = 'WorkingNotifier',
			@subject = 'Expiring Subscription Report',
			@body = @tableHTML,
			@body_format = 'HTML' ;
END
ELSE
	PRINT 'No Records Found!'