Blend-O-Rama Redux

July 26, 2010 Comments off

In case you missed it, on July 19th-23rd we held the first ever Blend-O-Rama Event.  Hosted by HRNUG, moderated by Kevin Griffin (MVP and HRNUG President), and presented by yours truly, the event was a week of Lunch and Learn webcasts all about Expression Blend, delivered over Microsoft Live Meeting.  This was the first event I’ve done of this type and scope, so I wanted to post my review of the event.

Lessons Learned: Live Meeting

This was my first time presenting over Live Meeting (LM), so I’ve definitely learned a few things. The experience itself is a little strange, and I feel disembodied not being able to see the attendees.  As a speaker, I frequently take cues from the audience: I can usually tell whether or not I need to reiterate something, or I can see that they get it and we can move on.  You’d be surprised how much you can get out of a few nodding heads!  Over LM, it’s just me staring at my screen and talking at the microphone, and I really miss the feedback.

We did some testing over LM before the event and intentionally slowed the pace down, but on Day 1 it wasn’t enough.  Day 2 was better and by Day 3 I’d say we had the pace just right.  One side effect is that I can’t cover as much material as I might with a live audience.  I also had to adjust the content I would usually share because Live Meeting has serious issues with things like gradients and animation. 

The hardest thing for me to get comfortable with was Context Switching.  I’m an Alt-Tab fanatic; my left hand is practically deformed from sitting in the Alt-Tab position.  In this case, I couldn’t use it because I took some advice from a friend of mine and did not share my entire desktop over Live Meeting.  This meant that I had to actively select the shared application every time I wanted to switch from one to another.  BTW – if you are a seasoned LM presenter, I’d love to hear how you handle this problem: post your tips in the comments below.

This also takes time, and in the videos you will see a lot of “gray space” while I’m fumbling through the Live Meeting controls.  I found that narrating it helps: letting the attendees know what I’m doing not only helps them understand what they are seeing, it also helps fill the silence, which made me feel like I wasn’t just sitting there all alone.

I’m really glad we did a test run using Live Meeting the week before with some unbiased attendees.  We also got some help from David Makogon who gave us some great tips for dealing with Live Meeting and also stepped in a couple of times to moderate when Kevin had other responsibilities.  The biggest lesson here is the Boy Scout Motto: Be Prepared.

Lessons Learned: The Event Itself

I also learned that this sort of commitment is tough.  While the presentations were “only” an hour and a half per day, the event really consumed my entire week.  I’ll admit to underestimating this when we set the event up, mostly because I had given the first 4 presentations many, many times: only the last day contained new material.  The pressure I felt to deliver a quality event, however, made me spend all my free time prepping for the next day.

And it really bothered me when something went wrong, as it did nearly every day.  I try to joke about it, but nothing is more frustrating than something not working when other people are watching.  It’s especially tough when it’s something you’ve done a thousand times and for the life of you can’t figure out what’s wrong.  Lesson learned: Expect the Unexpected.

I haven’t done nearly as much in Silverlight as I have in WPF.  Usually that isn’t an issue, and in Blend it rarely causes a problem, but a few of the issues had to do with my lack of familiarity with Silverlight.  More specifically, I’m accustomed to doing things one way in WPF that I may have to do differently (or can’t do at all) in Silverlight.  In this case the point I was trying to illustrate was that Blend is essentially the same on both platforms, and I do think we succeeded there.

I also learned that I didn’t know as much as I thought I did.  I’ve never been afraid to say “I don’t know” during a presentation: it’s honest and fair.  But I haven’t had to say it so often as I did this week.  Some of the questions really stumped me: in fact, I said several times it was “Stump the Presenter” week.  While I wish I knew all the answers, I also appreciate the tough questions: it means the attendees were really paying attention and thinking about Blend.  What presenter could ask for more?

Day 1 – An Introduction to Blend

I watched the playback and realized I needed to go even slower: there were too many times when my narration got ahead of what was on screen, or when I could have hovered over an item longer to let the Tooltip show through.  I also learned that Gradients are a no-go.  If you watch the video they just show up as huge blocks of color.

Day 2 – Data Binding

They say every presentation has a glitch, and this one was no exception.  I ran into a problem near the end trying to bind a Grid’s DataContext to the SelectedItem of a ListBox.  I can’t tell you how many times I’ve done this, hundreds at least.  It is simple and straightforward… until today.  It just didn’t work, and I tried it several times in different ways.  I eventually cleared all the DataContexts and Bindings out of the details within the Grid, rebound the DataContext, and finally everything worked.  I’m still not sure what the cause was, but I’m starting to think there was something left over from when I tested the Solution earlier that day.
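Incidentally, if you want to see the shape of the binding that misbehaved, here is a minimal sketch of it done in code rather than in Blend; the element names “CustomerList” and “DetailsGrid” are hypothetical:

// Bind the Grid's DataContext to the ListBox's current selection.
// "CustomerList" and "DetailsGrid" are hypothetical element names.
var binding = new Binding("SelectedItem") { ElementName = "CustomerList" };
DetailsGrid.SetBinding(FrameworkElement.DataContextProperty, binding);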

Day 3 – Styling and Templating

I really think on this day we finally nailed the speed and delivery.  The only glitch that stands out in my mind was trying to get Smooth Scrolling to work in Silverlight.  Turns out this is one of those areas that just isn’t the same as WPF. The downside for me is that I feel I spent too much time trying to get it to work.

BTW, the solution is to go to the ItemsPanel Style and add a FluidMoveBehavior to the VirtualizingStackPanel.  Nifty trick!
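For the curious, here is a rough sketch of what the same trick looks like in code instead of the Blend designer; it assumes references to the Blend SDK assemblies (System.Windows.Interactivity and Microsoft.Expression.Interactions), and the method name is mine:

// Attach a FluidMoveBehavior so the items animate as they move.
private void AddSmoothScrolling(VirtualizingStackPanel itemsPanel)
{
    var fluidMove = new FluidMoveBehavior
    {
        AppliesTo = FluidMoveScope.Children,  // animate the panel's children
        Duration = new Duration(TimeSpan.FromSeconds(0.3))
    };
    Interaction.GetBehaviors(itemsPanel).Add(fluidMove);
}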

Attendees: 98

Day 4 – Animations

So understanding that Gradients and things like that don’t work well over LM, I really wasn’t expecting much out of this session.  That being said, it turned out very well.  The Live Meeting video has some odd happenings between minutes 40 and 48, but the WMV file is just fine, so be sure to watch that.

Day 5 – Advanced Topics

By far this was my favorite day.  As I said in the presentation, this stuff is just plain fun!  I think overall I was happiest with this presentation.  I’m sure some of that has to do with my growing familiarity with Live Meeting, but naturally just as I’m getting comfortable with the format we finish!  Based on the feedback, this one was as much a hit with the audience as it was with me, so you can expect to see me presenting this one out on the road.

Conclusions

Overall, this was a great event.  I’ll admit, I was drained by the end of it, but the steady stream of thanks and nice messages from attendees tells me we did something worthwhile here.  So much so, in fact, that I think I’d like to have one session a month on different topics.

What are your thoughts?  Your suggestions and comments will determine what shape this takes going forward, so comment below, email, or Twitter (tag #BlendORama) and let us know what you think.

Thanks again to everyone who took part, and watch out soon for the launch of www.blend-o-rama.com.

Categories: .NET

Sending Email From This Site

July 26, 2010 Comments off

Hi folks, this is just a house cleaning message.

I just wanted to let you know that apparently the email service on this server has been non-functional for quite a while and no one knew it.  It has now been corrected, and as of sometime last week I am once again receiving email submitted from this site.

If you submitted an email to me through this site over the last couple of months and I never responded, it means your message was never delivered.  I pride myself on responding to every message a reader sends me, so I don’t want anyone to think I’m ignoring their messages.

Just a reminder that you can also contact me on Twitter.

Categories: .NET

Holy Cow, what a week!

July 6, 2010 1 comment

This last week has been awesome, probably the best week of my professional career, and I’m just bursting to share.

Wednesday, June 30th

If you follow me on Twitter you may have seen me mention that I took and passed the .NET 3.5 Windows Forms certification exam.  While I already had an MCTS for WPF, I needed this one to qualify for the MCPD (Windows Developer).  If you check out the requirements, you’ll see only the Windows Forms exam counts.  I would go on a rant about how little sense this makes given that WPF is the future of Desktop development and WinForms is on its last legs, but this is a happy post so I’ll skip it this time.

I had debated about whether or not I was going to pursue MCPD, but then Microsoft sent me a voucher for a free exam, so I figured why not?  I won’t lie to you: I was surprised I passed.  It’s been 2 years and 2 versions of VS since I last developed for WinForms, and on top of that I barely had time to review.  Still, I did pretty well on the exam and so now I have another MCTS.

Thursday, July 1st

I posted last week about being awarded Expression Blend MVP: I am ecstatic, honored, and humbled.  I spent the next few days in a haze just trying to adjust, it really is a great honor.  A friend asked me at lunch the other day if I was going to slow down the crazy community schedule now.  Naturally I informed him that it would be quite the contrary!  I expect things will get even more intense and I can’t wait!

Monday, July 5th

Pathetic though it may be, I was in the office working yesterday while everyone else was out frolicking for the holiday weekend. It was near the end of the day and I was about to go home when an interesting email arrived informing me that I had been selected as an INETA Community Champion!  Looking at the past winners shows me I’m in pretty good company.  Again I am honored and humbled.

The accolades are certainly nice, everyone likes a little recognition once in a while.  I want to thank Microsoft and INETA, and specifically those who nominated me for these awards: I hope I can live up to your expectations.

However, while it may sound cliche, the truth is that being part of this community is its own reward: I wouldn’t change a thing.  Thanks everyone, and I’ll see you in the community!

Categories: .NET

Expression Blend MVP

July 1, 2010 4 comments

Hi Folks!

I wanted to share a bit of good news with you all: I just received the official announcement that I am a 2010 Microsoft Expression Blend MVP!  You can tell by my overuse of exclamation marks!!!!

I am truly honored, there aren’t a lot of Blend MVPs and it means the world to me that I’m now on that list.  I’ve got a great year planned and some big announcements coming up, so if you are interested in Expression Blend, watch this space!

In the meantime, make sure you attend Blend-O-Rama! Only 18 days left!

Categories: .NET

Processing Command Line Arguments in an Offline ClickOnce Application

June 23, 2010 11 comments

Several years ago when ClickOnce first emerged I thought it was a great idea.  Unfortunately at the time, I found it so difficult and tedious that I didn’t play with it for long.  My buddy Stuart was at my office recently and I was lamenting some distribution issues I was having with one of our WPF apps. 

One of our deployed apps changes frequently, and I don’t send out updates as often as I should because our users’ environments typically limit their install rights (not to mention many of them don’t even have IT staffs).  This means they rarely have the latest and greatest because it can be a serious ordeal to deploy the changes.  Stuart brought up ClickOnce, and the discussion quickly came to the question “what happens when the user doesn’t have administrator rights?”  Naturally, we had to put this to the test!

What I love about ClickOnce

I love how easy it is to set up and use – We spent about an hour playing with ClickOnce and I was amazed to find how far the tooling has come since I last checked it out.  The setup is pretty straightforward, so I’m not going to cover that right now.  If you specifically want me to write a post or do a video about it, respond in the comments below.

I love the control that I have – The options are pretty sophisticated: you can make an application available online only or online and offline.  When you make one available offline it creates a Start Menu option for you.  You can select Prerequisites, like the .NET Framework version, and where to download them if they aren’t present at install time. NOTE: Some caveats may apply to that, like needing administrator rights to install the Framework.

I love Automatic Updates – You can configure the application to automatically check for updates, the exact feature I needed.  You can configure when to check for updates, how often, etc.  You can even configure it NOT to check for updates.  I set mine to check for updates every time the application starts, which will prompt the user to install the new version before the software executes.
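All of that is configured declaratively in the publish settings, but the same check can also be done in code.  Here is a minimal sketch, assuming a reference to System.Deployment.dll and a using for System.Deployment.Application:

// Ask the deployment server whether a newer version exists, and install it.
// (Sketch only: the "check before the application starts" publish option
// does this for you automatically.)
if (ApplicationDeployment.IsNetworkDeployed)
{
    ApplicationDeployment deployment = ApplicationDeployment.CurrentDeployment;
    if (deployment.CheckForUpdate())
    {
        deployment.Update();
        // The new version takes effect the next time the application starts.
    }
}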

I love that users without administrator rights can install apps – This is a huge win for us.  We did a little testing and found that a user without Administrator rights can install these applications.  I presume this is because they operate in some kind of a sandbox, but I don’t actually know.  Pssst: if this is a bug, please don’t tell Microsoft – I love this feature!

Again, I was struck by how easy all this was to set up and configure, and once again I’m beating myself up for all the time I spent NOT using an available feature.

What I don’t love about ClickOnce

So far I have discovered two things I don’t like about ClickOnce: the first is that while I can easily create a desktop shortcut, I don’t seem to have any control over the Icon that gets displayed.  Even more strange is that the icon that is being displayed is a Blend icon of some sort.  It seems to me that this would be a pretty common requirement, so I’m surprised there isn’t a readily apparent way to assign an icon.

But far worse than that is the fact that conventional wisdom says that ClickOnce applications can’t handle command line arguments.  This was a deal breaker for me: virtually all of our desktop apps need to be executed from AS/400 sessions.  These sessions pass arguments into the applications via the command line, and losing that capability would negate most of the value of the software.  While I’m sure I’ll figure out the Icon issue, the command line arguments problem needed addressing immediately.

Command Line Arguments

So it is not entirely true that ClickOnce applications can’t handle command line arguments, but until .NET 3.5 SP1, they could only handle them as query string parameters.  This underscores the fact that ClickOnce is a heavily network dependent technology.  You may get an application that installs on your machine with a Desktop Shortcut, but this is still a network deployed application, and as such it relies on some URI scheming.  For me, this isn’t going to help much, so what I was after was the ability to pass in arguments in offline mode using a more traditional approach, like C:\> MyApp.exe arg1 arg2 arg3

I want to get to the nitty gritty of the blog post, so I won’t go into all the details, but here are a few things you need to know:

  • You have no idea where the app is actually installed.  If you do manage to find it, it will have a user-unfriendly name
  • The Shortcut name is the same as the “Product Name” field in the Publish Options Description in the ClickOnce configuration
  • The Shortcut on the Desktop has a special extension: .appref-ms
  • The easiest way to execute the application from the command line is like so: C:\> %userprofile%\Desktop\AppName.appref-ms
  • If your “Product Name” has spaces, you will need to wrap that in double quotes: C:\> "%userprofile%\Desktop\My App Name.appref-ms"

It would be perfect if I could just append the Command Line Arguments to the end of that call, so it would look like this: C:\> "%userprofile%\Desktop\My App Name.appref-ms" arg1 arg2 arg3

Try it out, though, and you’ll quickly find that this does nothing: the standard args string array is empty.

Not A New Problem

Naturally I hit the Interwebs in search of a solution; I mean, it has to be out there, right?  I was quickly discouraged, though, to find hundreds of references all saying the same thing: you cannot pass command line arguments to an offline ClickOnce application.

Go ahead, go search for yourself, I’ll wait.

See what I mean?  The question has been asked a million times all with a resounding NO as the answer.  I was about to give up when I spotted something that gave me hope.  An article on MSDN entitled How to: Retrieve Query String Information in an Online ClickOnce Application has a note block with the following text:

Starting in .NET Framework 3.5 SP1, it is possible to pass command-line arguments to an offline ClickOnce application. If you want to supply arguments to the application, you can pass in parameters to the shortcut file with the .APPREF-MS extension.

That sounds like exactly what I want!  Problem is, I had already tried that and it didn’t work.  And naturally, there is no related article on MSDN telling me how to do it, just that it can be done.  Still, it was finally something to give me a little hope!

Finally, I hit the mother lode: an article by RobinDotNet explaining How to pass arguments to an offline ClickOnce application.  Robin’s blog is all about ClickOnce, and interestingly enough I found plenty of earlier posts and forum entries by Robin stating this couldn’t be done, even as late as January 2010, long after .NET 3.5 SP1 was released.  I’m not criticizing, but I am pointing out that even to those “in the know” it would appear this is a non-documented feature.

A Simple Solution

So I read through RobinDotNet’s post and it seemed like a lot of stinking work, and I try to avoid that as much as possible!  She explains a bunch of stuff I didn’t need, like how to locate the shortcut using SpecialFolders, and how to programmatically execute the application.  She even discusses how to create and pass the arguments.  Good stuff, but overkill for me.

There is one key line of code I found in her post that was exactly what I needed:

//Get the ActivationArguments from the SetupInformation property of the domain.
string[] activationData =
  AppDomain.CurrentDomain.SetupInformation.ActivationArguments.ActivationData;

What I found is that this will return the arguments provided on the command line.  Well, sort of …

From the code above you can see that ActivationData is a string[].  At first, I assumed I would be able to simply replace the old e.Args with this value, but I found that only the first argument is available in ActivationData.  I don’t know why this is, and it doesn’t make sense, but all my testing proved this out.

To solve this, I changed the way the arguments are passed in by making them comma delimited.  It doesn’t seem to matter what delimiter you use, as long as it isn’t a space.  Then, I parse ActivationData[0] using Split.  In the code sample below, I am also checking to see if this is a ClickOnce application and checking for NULL, the default value of ActivationData if no arguments are passed.

string[] args = null;
if (ApplicationDeployment.IsNetworkDeployed)
{
    // ClickOnce: the entire argument string arrives in ActivationData[0]
    var inputArgs = AppDomain.CurrentDomain.SetupInformation.ActivationArguments.ActivationData;
    if (inputArgs != null && inputArgs.Length > 0)
    {
        // Split the single comma delimited string into individual arguments
        args = inputArgs[0].Split(new char[] { ',' });
    }
}
else
{
    // Traditional execution: use the standard startup arguments
    args = e.Args;
}

I assume if I wanted I could still deploy this with a traditional MSI, so doing it this way supports compatibility with a traditional command line execution. In this case, I am parsing it to build another string[] because I have existing code that works with Command Line Arguments.  This example is executing inside my WPF application’s App.xaml.cs file’s Application_Startup event handler.
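For context, the handler wrapping that snippet looks roughly like this; ProcessArguments is a hypothetical stand-in for my existing command line handling:

private void Application_Startup(object sender, StartupEventArgs e)
{
    // ... the argument parsing snippet shown above goes here, leaving
    // args populated whether or not this is a ClickOnce launch ...
    ProcessArguments(args);  // hypothetical: hand off to existing logic
}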

Getting it to work in .NET 3.5 SP1

In order to get the code above to compile, you will need to add a reference to System.Deployment.Application, which is in System.Deployment.dll.  If you are coding against .NET 4.0, this DLL is available in the “Add Reference -> .NET” dialog.  When I tried to add the reference to an existing .NET 3.5 application, the DLL was not available in the list.  I thought this was a little strange, since the documentation claims it is supported as far back as 2.0.

I did a little digging and found the DLL location.  I was then able to Browse for the DLL and add a reference to it manually and everything seems to work:

C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Deployment.dll

The Results

The results are just what the doctor ordered: I can now call my application from the command line and pass it arguments like so: C:\> "%userprofile%\Desktop\My App Name.appref-ms" arg1,arg2,arg3

It’s not perfect, but it is easy.  The biggest change is sending all the arguments as a single string, but that is easily handled.  Now if I can just figure out that Icon …

Categories: .NET

Free Blend Event: Blend-O-Rama!

June 15, 2010 4 comments

I’m happy to announce an event that I have been working on with Kevin Griffin, president of the Hampton Roads .NET User Group (HRNUG).

One of the frustrating things for me as a Blend Speaker and self-proclaimed “Blend Evangelist” is that at most Code Camps or User Groups I just don’t feel like I have enough time to really dig in and show as much Blend goodness as I would like.  And my presentations always run long because I’m just so excited and I hate to leave anything out!

Last year I was lamenting the situation to Kevin, and he joked around and said we should have a “Joel Camp” so I could talk about Blend all day long.  We’ve had a lot of laughs about that ever since, but as time went on we got a little more serious about it.  After one Code Camp, where I had submitted 5 separate Blend presentations, the idea of a “Blend Camp” became something we were seriously discussing.

That was last fall, and no one’s schedule was allowing any such event to go forward.  Intrigued by the idea, I also began thinking about putting a site together dedicated to hosting Blend Tutorial Videos.  I even purchased a domain name and some hosting access, but time and laziness being what they are, I never got very far in the project.

Meanwhile, Back at the Ranch…

So between all the Code Camps and other things going on in the Mid-Atlantic, the idea of an all day event just kept losing ground.  Fortunately, Kevin never quite forgot the idea and he came up with a viable alternative: instead of one long day, what if we did a week of lunch and learn webcasts? That’s great! I can do that! 

And so I am proud to announce the first ever Blend-O-Rama! Kevin and HRNUG are going to host the event on Live Meeting, and I will be giving the presentations.  We are going to do one presentation each day, Monday through Friday, July 19th – 23rd, from 11:30am to 1:00pm EST.  Each day will cover a different topic:

July 19th – An Introduction to Expression Blend
Microsoft Expression Blend is the premier GUI editor for WPF and Silverlight applications.  In this presentation you will receive an introduction to the Blend UI and how to use it to quickly and easily build an application interface.  Topics will include Configuration, Layout Controls, how to leverage Blend with Visual Studio, and more.

July 20th – Data Binding in Expression Blend
Data Binding is one of the key features of WPF and Silverlight.  This presentation demonstrates and explains the Data Binding tools and features of Blend.  Includes discussions of Data Context, Value Converters, Element Binding, Sample Data and more.  With these tools you really can bind anything to anything.

July 21st – Templating in Expression Blend
An introduction to using Microsoft Expression Blend to leverage Templating in WPF and Silverlight applications. Templates provide a powerful way to enact GUI changes in the application presentation layer without requiring code behind manipulation. The power of Templating provides unlimited potential for enhancing your GUI.

July 22nd – Animating Business in Blend
Discusses how to use animation to enhance line of business applications and demonstrates how to create and employ them in Blend. Topics will include Animations, Triggers, Events, and the Visual State Manager.

July 23rd – Advanced Topics in Blend
Discusses how to take Blend to the next level. Topics will include Behaviors, Graphic manipulations, Audio and Video, Theming, and more.

Registration is now Open

The most awesome news of all is that, thanks to HRNUG, this event is totally FREE!  Registration for the event is now open.  You can also click on the banner in the sidebar of this page, or watch Kevin’s website or HRNUG’s website for more details.  Please pass the link around, tweet about it, etc., everyone is welcome!  And through the magic of Live Meeting we will be fielding questions from you, the audience!

Drum Roll Please …

I’m probably going to regret this later, but remember that Video Tutorial website I mentioned above? Today I am officially announcing that I will launch the website during the event.  Who knows, it worked for getting my certification: maybe setting a date will be the motivation I need to finally get the website going!

Categories: .NET

Stored Procedure vs Parameterized Query

June 10, 2010 3 comments

I want to start by saying I am not a database expert or a DBA by any stretch of the imagination.  I am, though, an old AS/400 RPG developer.  I believe that by necessity AS/400 developers tend to be more familiar with the native database functions and maintenance than their .NET counterparts.  Even so, there are a great many capabilities that go unused by the day to day coder.

On the other hand, .NET developers tend to be more familiar with things like SQL and Stored Procedures.  While these tools are second nature to most .NET developers, they are far less common in RPG applications because RPG has direct database access.  These are merely observations: each access technique has its strengths and weaknesses, a discussion of which is not the purpose of this article; I’ve probably rambled on about it too long already.

My day to day experience is typically making these two worlds peacefully coexist.  The task this week is another conversion project, but this time from AS/400 tables to AS/400 tables.  An argument could be made that I should have done the conversion in RPG using native database access, but in this particular case, I have some string manipulation that is far easier to handle in .NET than RPG.

The Project

The project is simple enough: in our legacy application we are making several significant changes: normalizing data, expanding configuration options, and extracting embedded data into a supporting table.  The normalizing part requires that we combine data from two tables into a single repository.  At the same time, those values contain embedded information that needs to be extracted and stored in a separate table. 

The configuration of that embedded data has been expanded.  It used to be fixed but is now much more flexible, allowing the client to configure the description, rules, and presentation of the data.  As a result, each embedded item also produces a series of table entries in a related table: the number of entries and rules are not known at design time.

This may not be totally clear, but the result is the important part here.  In a small client data set, what used to be 3,458 records in one table and 3,013 in the 2nd table is now 6,471 records in a single table.  Those 6,471 also create 38,862 records in the extracted data table.  There is also a 3rd table with 6,471 rows that acts as a cross reference between this table and the legacy system and adds some additional information.  So all in all, we’ve gone from 6,471 records in two tables, poorly normalized, into three tables with a total of 51,804 records.

The Problem of Performance

Obviously these are not big tables, but performing the table inserts over the wire from a .NET application incurs a big performance hit.  I’m not going to point fingers, because I honestly don’t know where to point: it could be a problem with the SQL engine on the iSeries, or with the ADO.NET Managed Provider (our version is V5R4, which is still based on .NET 1.1).  It’s unlikely that it is the network or my PC, but I won’t rule anything out for certain.

I wrote previously about using Stored Procedures on the iSeries to wring out the last bit of performance.  Overall, we’ve been very satisfied, and performance has been excellent.  I wrote a few Stored Procedures to handle the inserts to these three tables and used them from my .NET application to perform the inserts.  Quite unexpectedly, when I tested this small data set the performance was terrible.  It took 57 minutes to perform this task, and this was our smallest data set.  Our largest is more than 10 times the size, so it could potentially run 10-12 hours.

At first, I was tempted to leave it alone: this is not production code, but rather a utility that will be run once for each of our clients and then discarded.  At least, that’s what I thought.  I later realized that each client may have multiple data sets, so this may be run as many as 10 times per client.  Then on top of that, I realized that my staff has to run this for all the clients.  At that point, performance for this “one time” application suddenly became a huge problem.

Things aren’t always what they seem…

I was floored at these initial results.  The reason we started using Stored Procedures is because they are supposed to provide the highest degree of performance.  I refactored the logic twice trying to speed it up but to no avail.  I assumed I had done something wrong in the procedure, so I hit the books, but I didn’t find anything. There’s not a lot to a procedure that just maps incoming parameters to an Insert statement. 

There is one thing in the procedure, though, that I believe was the culprit.  The way our legacy database is structured, you never know until run time what library (database) and file (table) you need to execute the function against.  This means that the variables indicating those have to be passed in to the Stored Procedure … every … time. 

This has not been a problem when pulling data, but I believe that with this mass of Insert statements arriving in rapid succession, it was really hampering performance.  I believe that if the Stored Procedure had specified the library (database) and file (table) names then the performance would not have been an issue.  Instead, the system had to determine what access path to use to the database on every single request.  The purpose of using Stored Procedures in the first place had been completely negated.  I might as well have been using dynamic strings in my code.

The Solution

Fortunately, I came up with a solution I can live with: I turned to an old friend, the Parameterized Query.  As I got to thinking about the problem, I realized that what I needed was to create the command object once and reuse it.  I used Parameterized Queries frequently before turning to Stored Procedures, and they suit this particular problem because I only need to set the library (database) and file (table) names once.

So I fired up a branch in Mercurial and hacked up my code.  I refactored out some of my refactorings and replaced all the Stored Procedure calls with Parameterized Queries.  I made sure I was only creating the command once and then reusing it for each insert.  I anxiously ran the program again: 9 minutes later I had the confirmation I needed.  This was MUCH better.
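To illustrate the pattern, here is a minimal sketch of creating the command once and reusing it for every insert.  It uses the generic ADO.NET interfaces; the library, table, and column names are made up for this example, and the real code runs against the iSeries managed provider, whose parameter marker syntax may differ:

// Requires: using System.Data; using System.Collections.Generic;

// A hypothetical record produced by the conversion.
class ConvertedRecord
{
    public int Id;
    public string Code;
    public string Description;
}

static void InsertAll(IDbConnection connection, IEnumerable<ConvertedRecord> records)
{
    using (IDbCommand command = connection.CreateCommand())
    {
        // The library (database) and file (table) are resolved once, here,
        // instead of on every call as with the Stored Procedure.
        command.CommandText =
            "INSERT INTO MYLIB.MASTER (ID, CODE, DESCR) VALUES (@id, @code, @descr)";

        IDbDataParameter id = command.CreateParameter();
        id.ParameterName = "@id";
        command.Parameters.Add(id);

        IDbDataParameter code = command.CreateParameter();
        code.ParameterName = "@code";
        command.Parameters.Add(code);

        IDbDataParameter descr = command.CreateParameter();
        descr.ParameterName = "@descr";
        command.Parameters.Add(descr);

        command.Prepare();  // let the provider build the access path once

        foreach (ConvertedRecord record in records)
        {
            // Only the parameter values change between inserts.
            id.Value = record.Id;
            code.Value = record.Code;
            descr.Value = record.Description;
            command.ExecuteNonQuery();
        }
    }
}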

So the lesson for today is to never trust your assumptions.  I was sure that Stored Procedures would automatically give me the best performance.  I wasted time developing the procedures, I wasted time implementing them, then I wasted time testing and debunking my incorrect assumption.  Now I’m wasting time writing about it :-)  But it reminds me of a very valuable tip: choose the right tool for the right job.

Categories: AS400, Database, iSeries, System i

Expression Studio 4 Released

June 7, 2010 4 comments

Today, Microsoft released Expression 4, a suite of products for designing and building graphics, web sites, video, and WPF/Silverlight applications.  If you’ve ever seen me speak at a Code Camp or User Group, then odds are you’ve heard me talk about Expression.  Most of my use has centered around Blend, but I have done some work with Design and more recently with Encoder.

I’ve been working with Expression since Version 1, and I’ve truly enjoyed it: Expression made me better at what I do: create applications.  I’ve also had my share of complaints, but most of those center around Microsoft’s handling of the Expression Studio in general.  I’m both happy and a little annoyed this time around, so I thought I’d share some of my feelings, complaints, and suggestions.

The Good

First, the good news is in the pricing.  Last time I was a little miffed at the upgrade cost.  I’d like to point out that I was never upset at having to pay for the tool, but the things that really got under my skin were removing Expression Studio from MAPS and then forcing me to upgrade the entire Studio.  With Expression Studio 3 they removed the option to buy the tools individually.

While you could buy a package that included everything BUT Blend, apparently someone at Microsoft thinks I can’t make use of Blend without Web, Design, and Encoder.  So the most valuable tool in the Studio, by far, is the one with the most limited (and least acceptable) purchasing options.  So I was forced to purchase an upgrade of the entire Studio when all I wanted was Blend 3.

So that was my rant last time, and unfortunately some of it is still true: you just can’t get Blend without buying the entire Studio.  The good news, if you already own Studio 3, is that the upgrade is FREE.

If you go to the Upgrade page you’ll see this helpful text:

How To Upgrade

If you purchased Expression 3 from a store, download the trial of Expression Studio 4 Ultimate or Expression Studio 4 Web Professional. Provided you have Expression 3 installed on that computer, the full version will automatically be enabled. Owners of Expression Web 3 can upgrade for free to Expression Studio 4 Web Professional and owners of Expression Studio 3 can upgrade to Expression Studio 4 Ultimate.

Great!  This means I can upgrade for Free.  You cannot purchase any of the packages yet; all the buttons say “Buy Coming Soon”.  They also list the “Estimated Upgrade Price”.  They really shouldn’t call this an upgrade; instead it should be some kind of a discount, because it is not limited to previous ownership of Studio.  Both of the main packages allow purchase at the upgrade price for owners of “Any Adobe Creative Suite product”.  That is a great idea!

Upgrade pricing is also available for Expression 1 and 2 users: the site states “Any Microsoft® Expression product”.

The Changes

The Expression products have been renamed and repackaged to align with the rest of the Microsoft Universe.

Expression Studio 4 Ultimate – the everything version:

  • Blend
  • SketchFlow
  • Web with SuperPreview
  • Encoder
  • Design

You are also eligible for Upgrade Pricing to Ultimate if you have Visual Studio 2005 or later.  I’m glad they are finally making some steps toward developers, but it’s not enough (more on that later.)

Expression Studio 4 Web Professional – for web developers, as long as you don’t include Silverlight in “The Web”:

  • Web with SuperPreview
  • Encoder
  • Design

You are also eligible for Upgrade Pricing to Web Professional if you have Any Microsoft Office product.

Expression Encoder 4 Pro – OK, so I correct myself: you can buy one product by itself, which is great if all you need to do is encode video.  In fact, it’s such a great idea, they should make it available for the rest of the products in the Studio!  Oh, and is Expression Encoder 4 Pro any different from the version included in the two packages above?  If not, why isn’t it listed as “Pro” there?

And check out the “Which Product is Right For Me?” option on the page.  It’s a Silverlight Application – I’ll bet the designer of this Web site had access to Blend!

The Confusing

There is one particularly confusing thing on the web site at the moment:

The Ultimate Development Resource, including tools for designers

A MSDN Subscription offers you access to virtually all of the Microsoft products you need to design, develop and test your next app. Expression Studio 4 is now available through a MSDN subscription. Expression Studio 4 Ultimate is included in the Ultimate level subscription and the Premium level comes with Expression Studio 4 Premium.

Check out that last statement: “the Premium level comes with Expression Studio 4 Premium.”  What the heck is Expression Studio Premium?  If it’s not more than a typo, then I’d like some clarification please.

My Rant

What gives with requiring MSDN Premium?  Why not include it with Professional?  Why are we still holding developers back from getting their hands on Blend?  I’ll tell you why: Microsoft hasn’t figured out that the long term market for Blend is NOT Designers: it’s Developers.  This incorrect attitude is evident in statements like the one above labeling Expression as “tools for designers”.

I make no bones about my opinion on this matter: I think Blend should be bundled with Visual Studio Professional and higher.  Not the whole Studio, just Blend.  Unwillingness to purchase yet another tool is one of the main obstacles I encounter in my Blend Evangelism.  Developers think Blend isn’t for them – if it were, Microsoft would give it to them.  If it were for them, it would be built in to Visual Studio, because THAT is for developers.

Instead, we hold it just out of reach for many developers.  There are lots of shops that can’t afford to subscribe to MSDN Premium or Ultimate – mine is one of them.  Fortunately, mine is willing to invest separately in Expression: most aren’t.  And truthfully, the only reason they are willing to pay for it now is because I initially got Studio 1 and 2 as part of MAPS.  This gave me free time to experiment with the tool and prove its usefulness.  If we were just starting today it might have been a different story.

For now, the good news is we’ve gotten those developers a little closer: they can buy the Studio at the Upgrade Price.  It’s a good start, but I think if we really want adoption to increase, we need some other options.  If we can’t have Blend as a companion to Visual Studio, then at least have the ability to buy Blend as a stand alone tool, preferably with a stand alone price tag.

Categories: Expression

Clearing a SQL Server Database, Take 2

June 1, 2010 Comments off

In the previous article Clearing a SQL Server Database I wrote about my adventures in writing a Stored Procedure to clear all the data from all the tables in a SQL Server database.  I also said I’d be writing about using Stored Procedures in Entity Framework next, but I’ve put that off for a while, so my apologies to anyone who is dying to read that article.

More than one way to skin a Table

My buddy Stuart posted another way to do this in the comments:

Hey Joel,

I would generalize the script so you don’t have to keep updating the stored proc each time you add another table. This is a script that I’ve used that does the same thing (but gets around having to order the tables by disabling the constraints first):

-- disable referential integrity
EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
GO

EXEC sp_MSForEachTable '
IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasForeignRef'') = 1
DELETE FROM ?
ELSE
TRUNCATE TABLE ?
'
GO

-- enable referential integrity again
EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
GO

-- Reseed each table
EXEC sp_MSForEachTable '
IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasIdentity'') = 1
DBCC CHECKIDENT (''?'', RESEED, 0)'
GO

Cheers,
Stuart

He makes a great point: what if I, or some other developer, comes along later and adds a table, but forgets to add it to the Stored Procedure?  I’m not actually sure what the consequences would be, so why take the chance?

What’s all this doing?

If you are an old hand at SQL Server, you are probably thinking “So what?”, but I have professed several times that I am a SQL Server newbie, so I found this really cool.  Here are the things I learned about thanks to this little gem:

sp_MSForEachTable 

According to my Bing research, this is an “undocumented feature” built in to SQL Server.  Since “undocumented” really means “unsupported”, the caveat here is that you shouldn’t rely on it because Microsoft could change it or pull the plug without notice.  Personally, given the wealth of information out there about this procedure, I’ve decided to take my chances.

This procedure will loop through every Table in the current database and execute the given statement against it.  See the “?” part?  That is where the Table identifier will be inserted into each statement.  So in the example above, the first thing we do is loop through all the tables and remove the Constraints.  These were the pesky things in the previous article that prevented us from using TRUNCATE.  Then we loop through again and clear the tables, using TRUNCATE when we can and DELETE when we must.  Then we reinstate the Constraints and reseed the Identity Columns. 

All in all, this is a very nice bit of work and has really opened my eyes to the power of these SQL Server goodies.  You can bet I’ll be returning to this topic in the future.

sp_MSForEachDB 

On a side note, there is a similar procedure that will execute the specified statement against every Database as well.  This feels a lot more scary to me, and I am unlikely to use it anytime soon, but it is cool knowing it is there.

OBJECT_ID and OBJECTPROPERTY

You’ll also notice that we use a built in function called OBJECT_ID.  Basically, OBJECT_ID retrieves the internal identifier of the Table itself, which we can then pass to other functions to retrieve information about the table.

In the example above, we are passing this identifier to another function called OBJECTPROPERTY.  This allows us to retrieve a metadata property containing information about the Table in question.  We use it twice above, once to check the "TableHasForeignRef" property and again to check the "TableHasIdentity" property.  These values are then used to further determine whether or not to execute specific actions against these tables.

But what if I don’t want ALL the tables?

I actually ran into this issue pretty quickly: in my project I have a table that holds some configuration information.  It’s not in my diagram because it isn’t related to any of the other tables, it’s just holding data.  The problem I ran into was that this table was getting cleared along with the rest, but it needs to retain its data when the other tables are cleared. 

The configuration table wasn’t getting cleared previously because it wasn’t specifically listed in the original Stored Procedure.  Now that we are doing things more dynamically, however, it got caught in the crossfire and cleared out.  Naturally, this created an assortment of “unintended consequences” the first time the software was run following a ClearDatabase command.

So the question became “how do I execute against all tables except this one?”  I needed a way to ignore that table.  Fortunately, part of the answer was right in front of my face: OBJECT_ID.

Ignoring a specific Table

I realized I should be able to use the same OBJECT_ID function to specify which Table I wanted to skip; I just didn’t know what other function to use.  I did a little Binging and found the OBJECT_NAME function, but I had a little trouble getting the result to match a string I was using to specify the Table name.  A little more digging led me to an example that allowed me to query the database and see metadata about the table.  To display the Table name, it used another function called QUOTENAME, which took the result of OBJECT_NAME as its parameter.  So I strung all this together and came up with the following:

IF QUOTENAME(OBJECT_NAME(OBJECT_ID(''?''))) != ''[Configuration]''

Where "Configuration” is the name of the Table I want to ignore.  Notice the use of the Square Brackets: QUOTENAME always seems to return the Table name wrapped in Square Brackets.  I’ve seen these before in other SQL Server samples, but I don’t know yet what their significance is, I just know that adding them made this work.

Also, since it may not come across too clearly, those are paired single quotes (or Tics) around the Table name.  I don’t know if it matters, perhaps regular double quotes would be fine, I just know that in the final version that worked that is what I used.

The only part of the procedure that needs this is the part that TRUNCATEs or DELETEs, so here is the updated version of that part:

EXEC sp_MSForEachTable '
IF QUOTENAME(OBJECT_NAME(OBJECT_ID(''?''))) != ''[Configuration]''
IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasForeignRef'') = 1
DELETE FROM ?
ELSE
TRUNCATE TABLE ?
'

Thanks

So far this has worked very well, and I now know that there are a lot of options open to me in SQL Server for solving these kinds of problems.  I plan to spend some time getting to know more of them very soon.  There is plenty of stuff to learn in the Transact-SQL Reference on MSDN.

A Special Thanks to Stuart for sharing, I love a spirit of collaboration: “Good on ya, Mate!”

Categories: SQL Server

Clearing a SQL Server Database

May 28, 2010 3 comments

UPDATE: Be sure to check out the follow-up article, Clearing a SQL Server Database, Take 2.


I posted recently in Project Greenfield: Testing the TDD Waters about a small conversion project I’ve been using as a test bed for some of the techniques and technologies I’ll be using in Project Greenfield. I know SQL and am very comfortable in DB2 on the IBM System i, but this is the most extensive work I have done with SQL Server in recent memory.  I have really appreciated the ease of database management provided by the tools integrated into Visual Studio.  I especially appreciate it since I cannot seem to get SQL Server Management Studio installed on my development machine, but I won’t go into that right now.

The Database Schema

As background for the rest of this article, here is the schema we’ll be discussing.  The diagram itself is too hairy to post, but the simplified version below should suffice.  In this case, “->” means “Parent Of” and indicates a 1 to many relationship.

RealEstateMaster
->  CardDetail
    ->  CardImprovementDetail
->  TransferHistory
->  LandDetail

This collection of tables and relationships is exposed by the Entity Framework as a RealEstateMaster Entity.  In the database, these tables also hold numeric Codes for various elements, each with a corresponding table, a “look up” table for normalization. There are well over a dozen of these, so I’ll not list them all, but they all function like so:

CodeTable
->  CardDetail
->  RealEstateMaster
->  LandDetail

From an Entity standpoint, these are not child and parent relationships, but from a database standpoint they do enforce the same type of Foreign Key constraints.  In other words, each code in the CardDetail table must exist in its corresponding CodeTable.

Starting Fresh

I have several scenarios where the conversion process requires a “fresh start”, in other words a clean database with no data in the tables.  This means that on demand I need to be able to wipe out all the data from all the tables in the database.  This seemingly simple task turned out to take a lot more effort to figure out than I originally anticipated.

Using Entity Framework

At first, I assumed (wrongly) that since I was using Entity Framework for all of my database access that there would be a way to do this built in to the Entity Context.  I made the rookie mistake of equating my EF classes to direct access to the database and all its functionality.  I also made the rookie mistake of equating the EF classes to database tables: this one to one mapping is in no way necessary, so in hindsight I understand why there is no “TableName.Clear()” kind of an option.

I believe this problem can be solved using the EF classes but it would be very cumbersome.  As I see it, it would require you to loop through the entire collection of RealEstateMaster entities and delete each one.  That delete operation should loop through its children and delete those records as well.  Afterwards, you could then do the same to each code table, which at that point should have no constraining records.

NOTE: The statements above are theoretical: I did not try this because it seemed like way too much work and not really a proper application of EF.  I chose EF because it provides an easier way to work with SQL Server, but when EF actually gets in the way, it tells me I should find a different solution.
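Purely to illustrate why that felt cumbersome, the loop might have looked something like this; it is as untested as the statements above, and the context name is hypothetical:

// Untested sketch of the Entity Framework approach I decided against.
// Requires: using System.Linq;
using (var context = new AssessmentEntities())  // hypothetical EF context
{
    foreach (var master in context.RealEstateMaster.ToList())
    {
        // Children must go too, either via cascade rules in the model
        // or by explicitly deleting them before the parent.
        context.DeleteObject(master);
    }
    context.SaveChanges();

    // ... then repeat the exercise for every Code table ...
}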

Back to SQL Server

Having explored my EF options, I decided the best thing to do was create a Stored Procedure in SQL Server to perform this task.  Having never written a Stored Procedure in SQL Server, I wasn’t sure exactly what I was getting into, so I reached out for help to a known SQL Server guru: Andy Leonard.  One of the great things about being involved with the community is knowing people who know things! 

Andy graciously tolerated my newbie questions and with his guidance via an email exchange he led me to the solution I finally implemented.  With his permission, I’m going to share a little of our exchange.  I’m going to leave it unedited, mostly because I love Andy’s way of putting things, but also so you can get the same undiluted experience I did.

ME: having explained the situation and schema above …

So I guess I have several questions:
1) How do the relationships affect the deletes?  Does the order matter?
2) Is there a way to do a "cascading" delete that will loop through the relationships and delete the related table rows automatically?
3) Am I making this harder than it needs to be?  Is there a better way?

ANDY:

1. Usually 1-many indicates parent-child. The parent is usually on the "one" side of this relationship; child is usually on the "many" side. Order matters. You want to remove the child(ren) first. If you want to be extra cool about it, remove the child(ren) and then the parent in a transaction. That way, if something "bad" happens (like crossing the streams </GhostBusters>) during the parent delete, the child delete can rollback. Transactions are built for this. You are set up here for something "bad" to happen – you have multiple children for a given parent. If you miss one and get all the rest – and there’s data in that parent-child relationship you miss – your best outcome is a failure with rollback. Everything will return to its pre-transaction state. Without a transaction, you risk introducing data pollution (half the data for a given entity is missing).

2. There is a way to set up cascading referential integrity. It’s rare in practice and has to be in place before you begin your delete statements.

3. This is rocket surgery. You are not adding complexity, the complexity was here when you arrived.

   My solution would be something like:

begin tran

delete sh

from SalesHistory sh

inner join MasterRecord mr on mr.ID = sh.MasterRecordID

where mr.ID in (…<list of MasterRecord table IDs>…)

<cut more examples of the same approach>

— commit tran

— rollback tran

   Notice I worked from the bottom of the list to the top – that’s intentional. Most people think of entity construction "top down." Deletes need to work in the opposite order.

  If everything appears to work, execute the commit statement. If not, you can execute the rollback and put everything back just like it was before you started. As a precaution, always execute the commit or rollback at least twice – you want to make sure you close all the transactions you opened. And it’s easy to start a new one accidentally – and it becomes a nested transaction when you do (if you close the SSMS window and leave an open transaction, the tables involved are locked. That’s "bad"…). You want to highlight the commit or rollback in SSMS and keep clicking Execute until you get an error indicating there are no open transactions to end. It’s a best practice.

ME:

My first question would be why this:

delete sh

from SalesHistory sh

inner join MasterRecord mr on mr.ID = sh.MasterRecordID

where mr.ID in (…<list of MasterRecord table IDs>…)

instead of this:

delete sh

from SalesHistory sh

Here is why I ask:
1) The purpose here is really just to clear out all the tables, completely disregarding the current data. A total purge, if you will.
2) Using the first statement leaves open the possibility of orphaned data – or does it?  If the relationships are defined, what happens when there are SalesHistory rows with no associated MasterRecord row?
3) It seems like additional complexity: won’t the joins be a performance hog?

ANDY:

If you’re just after clearing all the tables, a simple DELETE statement – starting with the children – will work. There is a popular myth that JOINs slow down performance. It’s akin to saying a farm of web servers slow down performance because there’s all that time lost deciding which server to send the request and then managing the distributed session management.

   The truth is Joins can improve performance as much as hurt it. They’re a tool. Proper indexing and server management are the keys to performance.

   That said, you can use Truncate Table to clear them. That does a couple things:

1. Wipes out the data.

2. Is not logged (so it flies).

3. Resets identity columns to the initial seed value (usually 1).

4. Requires the ddl_admin role for permission.

   That’s a nice middle ground between dropping/recreating and deleting.

Order Matters

Andy’s first response talked about the best practice for doing an operation of this nature, which I rejected only because I just wanted a total purge of the data: if I were doing something more production oriented I would have taken the approach Andy suggested.

So the idea of just issuing a bunch of DELETE commands over all the tables does what I need.  The first lesson here, though, is that Order Matters.  Because of the relationships I created between the tables I could not simply issue a DELETE against the parent tables until there were no longer any children constraining them.

Recall the relationships listed above:

RealEstateMaster
->  CardDetail
    ->  CardImprovementDetail
->  TransferHistory
->  LandDetail

I had to start at the deepest point in the hierarchy and work my way up, so the final order looks like this:

  1. DELETE CardImprovementDetail
  2. DELETE CardDetail
  3. DELETE TransferHistory
  4. DELETE LandDetail
  5. DELETE RealEstateMaster
  6. DELETE all the Code tables mentioned above

Using TRUNCATE instead of DELETE

From Andy’s last email, I decided that TRUNCATE might be a better option.  I had never heard of TRUNCATE before, but using it is very simple: ex. “TRUNCATE TABLE CardDetail”.

Unfortunately, when I changed all my DELETEs to TRUNCATEs, I discovered a little foible.  Apparently, TRUNCATE will not work on a table that is referenced by a FOREIGN KEY, even when there is no data affected.  So in other words, I can issue TRUNCATE TABLE CardImprovementDetail, because no other table defines a FOREIGN KEY referencing it.  I can NOT, however, issue TRUNCATE TABLE CardDetail, because CardImprovementDetail defines a FOREIGN KEY referencing it.  This held true even after CardImprovementDetail had been truncated itself and held no data.

The Final Solution

So the final solution was to use TRUNCATE when possible, and DELETE when necessary.  I wrapped all of these up in a Stored Procedure, and now when I need to clear the entire database I can simply execute it.

Remembering that Order Matters, the final procedure execution looks like this:

ALTER PROCEDURE dbo.ClearDatabase
AS
BEGIN
TRUNCATE TABLE CardImprovementDetail;
DELETE FROM CardDetail;
TRUNCATE TABLE LandDetail;
TRUNCATE TABLE TransferHistory;
DELETE FROM RealEstateMaster;
DELETE FROM Carport;
DELETE FROM Condition;
DELETE FROM Easement;
DELETE FROM ExteriorWall;
DELETE FROM Floor;
DELETE FROM Foundation;
DELETE FROM Garage;
DELETE FROM Heat;
DELETE FROM InteriorWall;
DELETE FROM MagisterialDistrict;
DELETE FROM Occupancy;
DELETE FROM RightOfWay;
DELETE FROM RoofMaterial;
DELETE FROM RoofType;
DELETE FROM Sewer;
DELETE FROM SiteCharacteristic;
DELETE FROM SiteTerrain;
DELETE FROM Water;
END
    RETURN

The Nuclear Option

Before I close, I did want to mention the Nuclear Option: I could just drop the database and recreate it at runtime.  I considered it briefly because while the Entity Context does not have a Clear Table kind of option, it does have CreateDatabase().  It also has CreateDatabaseScript(), which you can use to extract the schema into an executable SQL script.  It seems to me that you could just nuke the database (probably with a Stored Procedure) and use some combination of these to recreate it.

I only considered it for a moment, because it seems heavy handed.  On top of that, if something were to go wrong it could leave the application in an unusable state.  It also assumes that the SQL Script generated by EF will match the standards required by the application or the client.  I’m not saying the generated schema would not function, but there could be outside factors.  I suppose you could store the schema in a file locally and use it to recreate the database, outside of EF, but it just feels ill advised.

Back to Entity Framework

At the end of all this, what I was left with was a Stored Procedure defined in my SQL Server that will do the task required.  Unfortunately, if I leave it as is it means I will need to use a tool like SQL Server Management Studio to execute the procedure manually.  Since my users will occasionally need to do this themselves, I don’t think that is a viable option.

Instead, I need to be able to run the Stored Procedure programmatically, but doing so means using traditional ADO.NET methods.  I would then turn around and create an Entity Context, and it feels silly to do both in the same program.  To solve that problem, I added the Stored Procedure to my Entity Context.  And that’s where we’ll pick up next time: using Stored Procedures in Entity Framework.
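In the meantime, for completeness, here is a bare-bones sketch of the traditional ADO.NET route, calling the ClearDatabase procedure from above:

// Execute the ClearDatabase Stored Procedure via classic ADO.NET.
// Requires: using System.Data; using System.Data.SqlClient;
static void ClearDatabase(string connectionString)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("dbo.ClearDatabase", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        connection.Open();
        command.ExecuteNonQuery();  // runs the TRUNCATEs and DELETEs above
    }
}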