Archive

Archive for June, 2010

Processing Command Line Arguments in an Offline ClickOnce Application

June 23, 2010 11 comments

Several years ago when ClickOnce first emerged I thought it was a great idea.  Unfortunately at the time, I found it so difficult and tedious that I didn’t play with it for long.  My buddy Stuart was at my office recently and I was lamenting some distribution issues I was having with one of our WPF apps. 

One of our deployed apps changes frequently, and I don’t send out updates as often as I should because our users’ environments typically limit their install rights (not to mention many of them don’t even have IT staff).  This means they rarely have the latest and greatest because it can be a serious ordeal to deploy the changes.  Stuart brought up ClickOnce, and the discussion quickly came to the question “what happens when the user doesn’t have administrator rights?”  Naturally, we had to put this to the test!

What I love about ClickOnce

I love how easy it is to set up and use – We spent about an hour playing with ClickOnce and I was amazed to find how far the tooling has come since I last checked it out.  The setup is pretty straightforward, so I’m not going to cover it right now.  If you specifically want me to write a post or do a video about it, respond in the comments below.

I love the control that I have – The options are pretty sophisticated: you can make an application available online only or online and offline.  When you make one available offline it creates a Start Menu option for you.  You can select Prerequisites, like the .NET Framework version, and where to download them if they aren’t present at install time. NOTE: Some caveats may apply to that, like needing administrator rights to install the Framework.

I love Automatic Updates – You can configure the application to automatically check for updates, the exact feature I needed.  You can configure when to check for updates, how often, etc.  You can even configure it NOT to check for updates.  I set mine to check for updates every time the application starts, which will prompt the user to install the new version before the software executes.

I love that users without administrator rights can install apps – This is a huge win for us.  We did a little testing and found that a user without Administrator rights can install these applications.  I presume this is because they operate in some kind of a sandbox, but I don’t actually know.  Pssst: if this is a bug, please don’t tell Microsoft – I love this feature!

Again, I was struck by how easy all this was to set up and configure, and once again I’m beating myself up for all the time I spent NOT using a supplied feature.

What I don’t love about ClickOnce

So far I have discovered two things I don’t like about ClickOnce: the first is that while I can easily create a desktop shortcut, I don’t seem to have any control over the Icon that gets displayed.  Even more strange is that the icon that is being displayed is a Blend icon of some sort.  It seems to me that this would be a pretty common requirement, so I’m surprised there isn’t a readily apparent way to assign an icon.

But far worse than that is the fact that conventional wisdom says that ClickOnce applications can’t handle command line arguments.  This was a deal breaker for me: virtually all of our desktop apps need to be executed from AS/400 sessions.  These sessions pass arguments into the applications via the command line, and losing that capability would negate most of the value of the software.  While I’m sure I’ll figure out the Icon issue, the command line arguments problem needed addressing immediately.

Command Line Arguments

So it is not entirely true that ClickOnce applications can’t handle command line arguments; until .NET 3.5 SP1, though, they could only handle them as query string parameters.  This underscores the fact that ClickOnce is a heavily network dependent technology.  You may get an application that installs on your machine with a Desktop Shortcut, but it is still a network deployed application, and as such it relies on URI schemes.  For me, this wasn’t going to help much: what I was after was the ability to pass in arguments in offline mode using a more traditional approach, like C:\> MyApp.exe arg1 arg2 arg3
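For the online case, the query string approach looks roughly like this (a sketch only, assuming the deployment manifest is configured to allow URL parameters; the class and method names are mine, and `HttpUtility` requires a reference to System.Web.dll):

```csharp
using System;
using System.Collections.Specialized;
using System.Deployment.Application;
using System.Web; // reference System.Web.dll for HttpUtility

static class OnlineArgs
{
    // Sketch: reading query string arguments in an *online* ClickOnce app,
    // e.g. http://server/MyApp.application?arg1=foo&arg2=bar
    public static NameValueCollection Get()
    {
        if (!ApplicationDeployment.IsNetworkDeployed)
            return new NameValueCollection();

        Uri uri = ApplicationDeployment.CurrentDeployment.ActivationUri;
        if (uri == null) // null unless the manifest enables URL parameters
            return new NameValueCollection();

        return HttpUtility.ParseQueryString(uri.Query);
    }
}
```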

I want to get to the nitty gritty of the blog post, so I won’t go into all the details, but here are a few things you need to know:

  • You have no idea where the app is actually installed.  If you do manage to find it, it will have a user-unfriendly name
  • The Shortcut name is the same as the “Product Name” field in the Publish Options Description in the ClickOnce configuration
  • The Shortcut on the Desktop has a special extension: .appref-ms
  • The easiest way to execute the application from the command line is like so: C:\> %userprofile%\Desktop\AppName.appref-ms
  • If your “Product Name” has spaces, you will need to wrap that in double quotes: C:\> “%userprofile%\Desktop\My App Name.appref-ms”

What would be perfect is if I could just append the Command Line Arguments to the end of that call, so it would look like this: C:\> “%userprofile%\Desktop\My App Name.appref-ms” arg1 arg2 arg3

Try it out, though, and you’ll quickly find that this does nothing: the standard args string array is empty.

Not A New Problem

Naturally I hit the Interwebs in search of a solution, I mean, it has to be out there, right?  I was quickly discouraged though to find hundreds of references all saying the same thing: you cannot pass command line arguments to an offline ClickOnce application.

Go ahead, go search for yourself, I’ll wait.

See what I mean?  The question has been asked a million times all with a resounding NO as the answer.  I was about to give up when I spotted something that gave me hope.  An article on MSDN entitled How to: Retrieve Query String Information in an Online ClickOnce Application has a note block with the following text:

Starting in .NET Framework 3.5 SP1, it is possible to pass command-line arguments to an offline ClickOnce application. If you want to supply arguments to the application, you can pass in parameters to the shortcut file with the .APPREF-MS extension.

That sounds like exactly what I want!  Problem is, I’d already tried that and it didn’t work.  And naturally, there is no related article on MSDN telling me how to do it, just that it can be done.  Still, it was enough to give me a little hope!

Finally, I hit the mother lode: an article by RobinDotNet explaining How to pass arguments to an offline ClickOnce application.  Robin’s blog is all about ClickOnce, and interestingly enough I found plenty of earlier posts and forum entries by Robin stating this couldn’t be done, even as late as January 2010, long after .NET 3.5 SP1 was released.  I’m not criticizing, but I am pointing out that even to those “in the know” it would appear this is a non-documented feature.

A Simple Solution

So I read through RobinDotNet’s post and it seemed like a lot of stinking work, and I try to avoid that as much as possible!  She explains a bunch of stuff I didn’t need, like how to locate the shortcut using SpecialFolders, and how to programmatically execute the application.  She even discusses how to create and pass the arguments.  Good stuff, but overkill for me.

There is one key line of code I found in her post that was exactly what I needed:

//Get the ActivationArguments from the SetupInformation property of the domain.
string[] activationData =
  AppDomain.CurrentDomain.SetupInformation.ActivationArguments.ActivationData;

What I found is that this will return the arguments provided on the command line.  Well, sort of …

From the code above you can see that ActivationData is a string[].  At first, I assumed I would be able to simply replace the old e.Args with this value, but I found that only the first argument is available in ActivationData.  I don’t know why this is, and it doesn’t make sense, but all my testing proved this out.

To solve this, I changed the way the arguments are passed in by making them comma delimited.  It doesn’t seem to matter what delimiter you use, as long as it isn’t a space.  Then, I parse ActivationData[0] using Split.  In the code sample below, I am also checking to see if this is a ClickOnce application and checking for NULL, the default value of ActivationData if no arguments are passed.

string[] args = null;
if (ApplicationDeployment.IsNetworkDeployed)
{
    // ClickOnce: arguments arrive through ActivationData, not e.Args
    var inputArgs = AppDomain.CurrentDomain.SetupInformation.ActivationArguments.ActivationData;
    if (inputArgs != null && inputArgs.Length > 0)
    {
        // only ActivationData[0] is populated, so split the single
        // comma-delimited string back into individual arguments
        args = inputArgs[0].Split(',');
    }
}
else
{
    // traditional execution (e.g. MSI install): use the normal startup args
    args = e.Args;
}

I assume if I wanted I could still deploy this with a traditional MSI, so doing it this way supports compatibility with a traditional command line execution. In this case, I am parsing it to build another string[] because I have existing code that works with Command Line Arguments.  This example is executing inside my WPF application’s App.xaml.cs file’s Application_Startup event handler.
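For reference, a minimal sketch of how that handler is wired up (the XAML attribute is the standard WPF mechanism; `GetClickOnceOrCommandLineArgs` and `ProcessArguments` are hypothetical helper names wrapping the logic shown above, not from the original project):

```csharp
// App.xaml declares Startup="Application_Startup" on the Application element;
// the handler below then runs before any window is shown.
private void Application_Startup(object sender, StartupEventArgs e)
{
    // hypothetical helper containing the ClickOnce/e.Args logic shown above
    string[] args = GetClickOnceOrCommandLineArgs(e);

    // hand off to whatever argument handling the app already has (hypothetical)
    // ProcessArguments(args);
}
```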

Getting it to work in .NET 3.5 SP1

In order to get the code above to compile, you will need to add a reference to System.Deployment.Application, which is in System.Deployment.dll.  If you are coding against .NET 4.0, this DLL is available in the “Add Reference -> .NET” dialog.  When I tried to add the reference to an existing .NET 3.5 application, the DLL was not available in the list.  I thought this was a little strange, since the documentation claims it is supported as far back as 2.0.

I did a little digging and found the DLL location.  I was then able to Browse for the DLL and add a reference to it manually and everything seems to work:

C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Deployment.dll

The Results

The results are just what the doctor ordered: I can now call my application from the command line and pass it arguments like so: C:\> “%userprofile%\Desktop\My App Name.appref-ms” arg1,arg2,arg3

It’s not perfect, but it is easy.  The biggest change is sending all the arguments as a single string, but that is easily handled.  Now if I can just figure out that Icon …

Categories: .NET

Free Blend Event: Blend-O-Rama!

June 15, 2010 4 comments

I’m happy to announce an event that I have been working on with Kevin Griffin, president of the Hampton Roads .NET User Group (HRNUG).

One of the frustrating things for me as a Blend Speaker and self-proclaimed “Blend Evangelist” is that at most Code Camps or User Groups I just don’t feel like I have enough time to really dig in and show as much Blend goodness as I would like.  And my presentations always run long because I’m just so excited and I hate to leave anything out!

Last year I was lamenting the situation to Kevin, and he joked around and said we should have a “Joel Camp” so I could talk about Blend all day long.  We’ve had a lot of laughs about that ever since, but as time went on we got a little more serious about it.  After one Code Camp, where I had submitted 5 separate Blend presentations, the idea of a “Blend Camp” became something we were seriously discussing.

That was last fall, and no one’s schedule was allowing any such event to go forward.  Intrigued by the idea, I also began thinking about putting a site together dedicated to hosting Blend Tutorial Videos.  I even purchased a domain name and some hosting access, but time and laziness being what they are, I never got very far in the project.

Meanwhile, Back at the Ranch…

So between all the Code Camps and other things going on in the Mid-Atlantic, the idea of an all day event just kept losing ground.  Fortunately, Kevin never quite forgot the idea and he came up with a viable alternative: instead of one long day, what if we did a week of lunch and learn webcasts? That’s great! I can do that! 

And so I am proud to announce the first ever Blend-O-Rama! Kevin and HRNUG are going to host the event on LiveMeeting, and I will be giving the presentations.  We are going to do one presentation each day, Monday through Friday, July 19th – 23rd,  from 11:30 to 1pm EST.  Each day will cover a different topic:

July 19th – An Introduction to Expression Blend
Microsoft Expression Blend is the premier GUI editor for WPF and Silverlight applications.  In this presentation you will receive an introduction to the Blend UI and how to use it to quickly and easily build an application interface.  Topics will include Configuration, Layout Controls, how to leverage Blend with Visual Studio, and more.

July 20th – Data Binding in Expression Blend
Data Binding is one of the key features of WPF and Silverlight.  This presentation demonstrates and explains the Data Binding tools and features of Blend.  Includes discussions of Data Context, Value Converters, Element Binding, Sample Data and more.  With these tools you really can bind anything to anything.

July 21st – Templating in Expression Blend
An introduction to using Microsoft Expression Blend to leverage Templating in WPF and Silverlight applications. Templates provide a powerful way to enact GUI changes in the application presentation layer without requiring code behind manipulation. The power of Templating provides unlimited potential for enhancing your GUI.

July 22nd – Animating Business in Blend
Discusses how to use animation to enhance line of business applications and demonstrates how to create and employ them in Blend. Topics will include Animations, Triggers, Events, and the Visual State Manager.

July 23rd – Advanced Topics in Blend
Discusses how to take Blend to the next level. Topics will include Behaviors, Graphic manipulations, Audio and Video, Theming, and more.

Registration is now Open

The most awesome news of all is that, thanks to HRNUG, this event is totally FREE!  Registration for the event is now open.  You can also click on the banner in the sidebar of this page, or watch Kevin’s website or HRNUG’s website for more details.  Please pass the link around, tweet about it, etc., everyone is welcome!  And through the magic of LiveMeeting we will be fielding questions from you, the audience!

Drum Roll Please …

I’m probably going to regret this later, but remember that Video Tutorial website I mentioned above? Today I am officially announcing that I will launch the website during the event.  Who knows, it worked for getting my certification: maybe setting a date will be the motivation I need to finally get the website going!

Categories: .NET

Stored Procedure vs Parameterized Query

June 10, 2010 3 comments

I want to start by saying I am not a database expert or a DBA by any stretch of the imagination.  I am, though, an old AS/400 RPG developer.  I believe that by necessity AS/400 developers tend to be more familiar with the native database functions and maintenance than their .NET counterparts.  Even so, there are a great many capabilities that go unused by the day to day coder.

On the other hand, .NET developers tend to be more familiar with things like SQL and Stored Procedures.  While these tools are second nature to most .NET developers, they are far less common in RPG applications because RPG has direct database access.  These are merely observations: each access technique has its strengths and weaknesses, a discussion of which is not the purpose of this article: I’ve probably rambled on about it too long already.

My day to day experience is typically making these two worlds peacefully coexist.  The task this week is another conversion project, but this time from AS/400 tables to AS/400 tables.  An argument could be made that I should have done the conversion in RPG using native database access, but in this particular case, I have some string manipulation that is far easier to handle in .NET than RPG.

The Project

The project is simple enough: in our legacy application we are making several significant changes: normalizing data, expanding configuration options, and extracting embedded data into a supporting table.  The normalizing part requires that we combine data from two tables into a single repository.  At the same time, those values contain embedded information that needs to be extracted and stored in a separate table. 

The configuration of that embedded data has been expanded.  It used to be fixed but is now much more flexible, allowing the client to configure the description, rules, and presentation of the data.  As a result, each embedded item also produces a series of table entries in a related table: the number of entries and rules are not known at design time.

This may not be totally clear, but the result is the important part here.  In a small client data set, what used to be 3,458 records in one table and 3,013 in the 2nd table is now 6,471 records in a single table.  Those 6,471 also create 38,862 records in the extracted data table.  There is also a 3rd table with 6,471 rows that acts as a cross reference between this table and the legacy system and adds some additional information.  So all in all, we’ve gone from 6,471 records in two tables, poorly normalized, to three tables with a total of 51,804 records.

The Problem of Performance

Obviously these are not big tables, but performing the table inserts over the wire from a .NET application incurs a big performance hit.  I’m not going to point fingers, because I honestly don’t know where to point: it could be a problem with the SQL engine on the iSeries, or with the ADO.NET Managed Provider (our version is V5R4, which is still based on .NET 1.1).  It’s unlikely that it is the network or my PC, but I won’t rule anything out for certain.

I wrote previously about using Stored Procedures on the iSeries to wring out the last bit of performance.  Overall, we’ve been very satisfied, and performance has been excellent.  I wrote a few Stored Procedures to handle the inserts to these three tables and used them from my .NET application to perform the inserts.  Quite unexpectedly, when I tested this small data set the performance was terrible.  It took 57 minutes to perform this task, and this was our smallest data set.  Our largest is more than 10 times the size, so it could potentially run 10-12 hours.

At first, I was tempted to leave it alone: this is not production code, but rather a utility that will be run once for each of our clients and then discarded.  At least, that’s what I thought.  I later realized that each client may have multiple data sets, so this may be run as many as 10 times per client.  Then on top of that, I realized that my staff has to run this for all the clients.  At that point, performance for this “one time” application suddenly became a huge problem.

Things aren’t always what they seem…

I was floored at these initial results.  The reason we started using Stored Procedures is because they are supposed to provide the highest degree of performance.  I refactored the logic twice trying to speed it up but to no avail.  I assumed I had done something wrong in the procedure, so I hit the books, but I didn’t find anything. There’s not a lot to a procedure that just maps incoming parameters to an Insert statement. 

There is one thing in the procedure, though, that I believe was the culprit.  The way our legacy database is structured, you never know until run time what library (database) and file (table) you need to execute the function against.  This means that the variables indicating those have to be passed in to the Stored Procedure … every … time. 

This has not been a problem when pulling data, but with Insert statements coming at this frequency and volume, I believe it was really hampering performance: the system had to determine what access path to use to the database on every single request.  I believe that if the Stored Procedure had specified the library (database) and file (table) names then performance would not have been an issue.  The purpose of using Stored Procedures in the first place had been completely negated.  I might as well have been using dynamic strings in my code.

The Solution

Fortunately, I came up with a solution I can live with: I turned to an old friend, the Parameterized Query.  As I got to thinking about the problem, I realized that what I needed was to create the command object once and reuse it.  I used Parameterized Queries frequently before turning to Stored Procedures, and they suit this particular problem because I only need to set the library (database) and file (table) names once.

So I fired up a branch in Mercurial and hacked up my code.  I refactored out some of my refactorings and replaced all the Stored Procedure calls with Parameterized Queries.  I made sure I was only creating the command once and then reusing it for each insert.  I anxiously ran the program again: 9 minutes later I had the confirmation I needed.  This was MUCH better.
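The pattern itself is simple: create the parameterized command once, then reuse it for every row.  A rough sketch of the shape of the code (the real program uses the iSeries managed provider against our own tables, so the `SqlClient` classes, table, and column names below are illustrative stand-ins):

```csharp
// Sketch: one parameterized INSERT command, prepared once, reused per row.
// Connection type, table, and column names are hypothetical stand-ins.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    var command = connection.CreateCommand();
    command.CommandText =
        "INSERT INTO MYLIB.MYTABLE (CODE, DESCRIPTION) VALUES (@code, @desc)";
    command.Parameters.Add(new SqlParameter("@code", SqlDbType.VarChar, 10));
    command.Parameters.Add(new SqlParameter("@desc", SqlDbType.VarChar, 50));
    command.Prepare(); // the access path is resolved once, not per insert

    foreach (var row in rows)
    {
        // only the parameter values change on each iteration
        command.Parameters["@code"].Value = row.Code;
        command.Parameters["@desc"].Value = row.Description;
        command.ExecuteNonQuery();
    }
}
```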

So the lesson for today is to never trust your assumptions.  I was sure that Stored Procedures would automatically give me the best performance.  I wasted time developing the procedures, I wasted time implementing them, then I wasted time testing and debunking my incorrect assumption.  Now I’m wasting time writing about it :-)  But it reminds me of a very valuable tip: choose the right tool for the right job.

Categories: AS400, Database, iSeries, System i

Expression Studio 4 Released

June 7, 2010 4 comments

Today, Microsoft released Expression 4, a suite of products for designing and building graphics, web sites, video, and WPF/Silverlight applications.  If you’ve ever seen me speak at a Code Camp or User Group, then odds are you’ve heard me talk about Expression.  Most of my use has centered around Blend, but I have done some work with Design and more recently with Encoder.

I’ve been working with Expression since Version 1, and I’ve truly enjoyed it: Expression made me better at what I do: create applications.  I’ve also had my share of complaints, but most of those center around Microsoft’s handling of the Expression Studio in general.  I’m both happy and a little annoyed this time around, so I thought I’d share some of my feelings, complaints, and suggestions.

The Good

First, the good news is in the pricing.  Last time I was a little miffed at the upgrade cost.  I’d like to point out that I was never upset at having to pay for the tool, but the things that really got under my skin were removing Expression Studio from MAPS and then forcing me to upgrade the entire Studio.  With Expression Studio 3 they removed the option to buy the tools individually.

While you could buy a package that included everything BUT Blend, apparently someone at Microsoft thinks I can’t make use of Blend without Web, Design, and Encoder.  So the most valuable tool in the Studio, by far, is the one with the most limited (and least acceptable) purchasing options.  So I was forced to purchase an upgrade of the entire Studio when all I wanted was Blend 3.

So that was my rant last time, and unfortunately some of it is still true: you just can’t get Blend without buying the entire Studio.  The good news, if you already own Studio 3, is that the upgrade is FREE.

If you go to the Upgrade page you’ll see this helpful text:

How To Upgrade

If you purchased Expression 3 from a store, download the trial of Expression Studio 4 Ultimate or Expression Studio 4 Web Professional. Provided you have Expression 3 installed on that computer, the full version will automatically be enabled. Owners of Expression Web 3 can upgrade for free to Expression Studio 4 Web Professional and owners of Expression Studio 3 can upgrade to Expression Studio 4 Ultimate.

Great!  This means I can upgrade for Free.  You cannot purchase any of the packages yet; all the buttons say “Buy Coming Soon”.  They also list the “Estimated Upgrade Price”.  They really shouldn’t call this an upgrade; it should be some kind of a discount, because it is not limited to previous ownership of Studio.  Both the main packages allow purchase at the upgrade price to owners of “Any Adobe Creative Suite product”.  That is a great idea!

Upgrade pricing is also available for Expression 1 and 2 users: the site states “Any Microsoft® Expression product”.

The Changes

The new breakdown of Expression products has been renamed to align with the rest of the Microsoft Universe.

Expression Studio Ultimate – the everything version:

  • Blend
  • SketchFlow
  • Web with SuperPreview
  • Encoder
  • Design

You are also eligible for Upgrade Pricing to Ultimate if you have Visual Studio 2005 or later.  I’m glad they are finally making some step towards developers, but it’s not enough (more on that later.)

Expression Studio 4 Web Professional – for web developers, as long as you don’t include Silverlight in “The Web”:

  • Web with SuperPreview
  • Encoder
  • Design

You are also eligible for Upgrade Pricing to Web Professional if you have Any Microsoft Office product.

Expression Encoder 4 Pro – OK, so I correct myself: you can buy one product by itself, which is great if all you need to do is encode video.   In fact, it’s such a great idea, they should make it available for the rest of the products in the Studio!  Oh, and is Expression Encoder 4 Pro any different than the version included in the previous two versions of Studio?  If not, why isn’t it listed as “Pro” in the previous packages?

And check out the “Which Product is Right For Me?” option on the page.  It’s a Silverlight Application – I’ll bet the designer of this Web site had access to Blend!

The Confusing

There is one particularly confusing thing on the web site at the moment:

The Ultimate Development Resource, including tools for designers

A MSDN Subscription offers you access to virtually all of the Microsoft products you need to design, develop and test your next app. Expression Studio 4 is now available through a MSDN subscription. Expression Studio 4 Ultimate is included in the Ultimate level subscription and the Premium level comes with Expression Studio 4 Premium.

Check out that last statement: “the Premium level comes with Expression Studio 4 Premium.”  What the heck is Expression Studio Premium?  If it’s not more than a typo, then I’d like some clarification please.

My Rant

What gives with requiring MSDN Premium?  Why not include it with Professional?  Why are we still holding developers back from getting their hands on Blend?  I’ll tell you why: Microsoft hasn’t figured out that the long term market for Blend is NOT Designers: it’s Developers.  This incorrect attitude is evident in statements like the one above labeling Expression as “tools for designers”.

I make no bones about my opinion on this matter: I think Blend should be bundled with Visual Studio Professional and higher.  Not the whole Studio, just Blend.  Unwillingness to purchase yet another tool is one of the main obstacles I encounter in my Blend Evangelism.  They think Blend isn’t for them – if it was, Microsoft would give it to them.  If it was for them, it would be built in to Visual Studio, because THAT is for developers.

Instead, we hold it just out of reach for many developers.  There are lots of shops that can’t afford to subscribe to MSDN Premium or Ultimate – mine is one of them.  Fortunately, mine is willing to invest separately in Expression: most aren’t.  And truthfully, the only reason they are willing to pay for it now is because I initially got Studio 1 and 2 as part of MAPS.  This gave me free time to experiment with the tool and prove its usefulness.  If we were just starting today it may have been a different story.

For now, the good news is we’ve gotten those developers a little closer: they can buy the Studio at the Upgrade Price.  It’s a good start, but I think if we really want adoption to increase, we need some other options.  If we can’t have Blend as a companion to Visual Studio, then at least have the ability to buy Blend as a stand alone tool, preferably with a stand alone price tag.

Categories: Expression

Clearing a SQL Server Database, Take 2

June 1, 2010 Comments off

In the previous article Clearing a SQL Server Database I wrote about my adventures in writing a Stored Procedure to clear all the data from all the tables in a SQL Server database.  I also said I’d be writing about using Stored Procedures in Entity Framework next, but I’ve belayed that for a while, so my apologies to anyone who is dying to read that article.

More than one way to skin a Table

My buddy Stuart posted another way to do this in the comments:

Hey Joel,

I would generalize the script so you don’t have to keep updating the stored proc each time you add another table. This is a script that I’ve used that does the same thing (but gets around having to order the tables by disabling the constraints first):

-- disable referential integrity
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
GO

EXEC sp_MSForEachTable '
IF OBJECTPROPERTY(object_id(''?''), ''TableHasForeignRef'') = 1
DELETE FROM ?
ELSE
TRUNCATE TABLE ?
'
GO

-- enable referential integrity again
EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
GO

-- reseed each table
EXEC sp_MSForEachTable '
IF OBJECTPROPERTY(object_id(''?''), ''TableHasIdentity'') = 1
DBCC CHECKIDENT (''?'', RESEED, 0)'
GO

Cheers,
Stuart

He makes a great point: what if I, or some other developer, comes along later and adds a table, but forgets to add it to the Stored Procedure?  I’m not actually sure what the consequences would be, so why take the chance?

What’s all this doing?

If you are an old hat at SQL Server, you probably are thinking “So what?”, but I have professed several times that I am a SQL Server newbie, so I found this really cool.  Here are the things I learned about thanks to this little gem:

sp_MSForEachTable 

According to my Bing research, this is an “undocumented feature” built in to SQL Server.  Since “undocumented” really means “unsupported”, the caveat here is that you shouldn’t rely on it because Microsoft could change it or pull the plug without notice.  Personally, given the wealth of information out there about this procedure, I’ve decided to take my chances.

This procedure will loop through every Table in the current database and execute the given statement against it.  See the “?” part?  That is where the Table identifier will be inserted into each statement.  So in the example above, the first thing we do is loop through all the tables and disable the Constraints.  These were the pesky things in the previous article that prevented us from using TRUNCATE.  Then we loop through again and clear the tables, using TRUNCATE when we can and DELETE when we must: even with the Constraints disabled, TRUNCATE cannot be used on a table that is referenced by a Foreign Key, which is why the script checks “TableHasForeignRef” and falls back to DELETE.  Then we re-enable the Constraints and reseed the Identity Columns.
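If you want to watch the substitution happen before running anything destructive, a harmless first test might be:

```sql
-- Prints the name of every table in the current database;
-- sp_MSForEachTable replaces ? with each table's identifier in turn.
EXEC sp_MSForEachTable 'PRINT ''?'''
```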

All in all, this is a very nice bit of work and has really opened my eyes to the power of these SQL Server goodies.  You can bet I’ll be returning to this topic in the future.

sp_MSForEachDB 

On a side note, there is a similar procedure that will loop through and execute the specified statement against every Database as well.  This feels a lot more scary to me, and I am unlikely to use it anytime soon, but it is cool knowing it is there.

OBJECT_ID and OBJECTPROPERTY

You’ll also notice that we use a built in function called OBJECT_ID.  Basically, OBJECT_ID retrieves the unique ID of the Table itself, which we can then pass to other functions to retrieve information about the table.

In the example above, we are passing this ID to another function called OBJECTPROPERTY.  This allows us to retrieve a metadata property containing information about the Table in question.  We use it twice above, once to check the "TableHasForeignRef" property and again to check the "TableHasIdentity" property.  These values are then used to further determine whether or not to execute specific actions against these tables.
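These property checks are easy to experiment with on their own; for example (the table name here is just a hypothetical stand-in):

```sql
-- Returns 1 if the table has an identity column, 0 if not,
-- and NULL if no such object exists in the current database.
SELECT OBJECTPROPERTY(OBJECT_ID('dbo.Orders'), 'TableHasIdentity') AS HasIdentity;
```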

But what if I don’t want ALL the tables?

I actually ran into this issue pretty quickly: in my project I have a table that holds some configuration information.  It’s not in my diagram because it isn’t related to any of the other tables, it’s just holding data.  The problem I ran into was that this table was getting cleared along with the rest, but it needs to retain its data when the other tables are cleared. 

The configuration table wasn’t getting cleared previously because it wasn’t specifically listed in the original Stored Procedure.  Now that we are doing things more dynamically, however, it got caught in the cross fire and cleared out.  Naturally, this created an assortment of “unintended consequences” the first time the software was run following a ClearDatabase command.

So the question became “how do I execute against all tables except this one?”  I needed a way to ignore that table.  Fortunately, part of the answer was right in front of my face: OBJECT_ID.

Ignoring a specific Table

I realized I should be able to use the same OBJECT_ID function to specify which Table I wanted to skip, I just didn’t know what other function to use.  I did a little Binging and found the OBJECT_NAME function, but I had a little trouble getting the result to match a string I was using to specify the Table name.  A little more digging led me to an example that allowed me to query the database and see metadata about the table.  To display the Table name, it used another function called QUOTENAME, which had as a parameter the results of OBJECT_NAME.  So I strung all this together and came up with the following:

IF QUOTENAME(OBJECT_NAME(object_id(''?''))) != ''[Configuration]''

Where "Configuration” is the name of the Table I want to ignore.  Notice the use of the Square Brackets: QUOTENAME always seems to return the Table name wrapped in Square Brackets.  I’ve seen these before in other SQL Server samples, but I don’t know yet what their significance is, I just know that adding them made this work.

Also, since it may not come across too clearly, those are paired single quotes (or Tics) around the Table name.  They do matter: the whole statement is itself wrapped in single quotes for sp_MSForEachTable, and doubling a single quote is how you escape one inside a quoted string.
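You can see the bracket behavior for yourself, since QUOTENAME runs fine standalone:

```sql
-- QUOTENAME wraps an identifier in SQL Server's default delimiters.
SELECT QUOTENAME('Configuration');   -- returns [Configuration]
SELECT QUOTENAME('My Table');        -- returns [My Table]
```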

The only part of the procedure that needs this is the part that TRUNCATEs or DELETEs, so here is the updated version of that part:

EXEC sp_MSForEachTable '
IF QUOTENAME(OBJECT_NAME(object_id(''?''))) != ''[Configuration]''
IF OBJECTPROPERTY(object_id(''?''), ''TableHasForeignRef'') = 1
DELETE FROM ?
ELSE
TRUNCATE TABLE ?
'
GO

Thanks

So far this has worked very well, and I now know that there are a lot of options open to me in SQL Server for solving these kinds of problems.  I plan to spend some time getting to know more of them very soon.  There is plenty of stuff to learn in the Transact-SQL Reference on MSDN.

A Special Thanks to Stuart for sharing, I love a spirit of collaboration: “Good on ya, Mate!”

Categories: SQL Server