Archive for the ‘Visual Studio’ Category

Upgrade your C# Skills part 2 – Compiler Inference, Object Initializers, and Anonymous Types

November 27, 2007

Well, that Title sure is a mouthful! “Compiler Inference”, “Object Initializers”, “Anonymous Types”: as fancy as these items sound, they are some of the more mundane changes to C# 3.0. Don’t get me wrong: these changes lay the groundwork for all the really cool stuff going on in C#, especially LINQ. Truthfully, I should have covered these first, but I had actually been dreaming about Extension Methods, so I hope you’ll understand if I had to get that out of my system!

Compiler Inference

It probably isn’t fair for me to say this is “mundane”, but look at the two examples below:

// Old Way
Person p = new Person();
string name = p.LastName + " " + p.FirstName;

// New Way
var p = new Person();
var name = p.LastName + " " + p.FirstName;

Doesn’t seem like much, does it? All we did was replace the explicit Type with a new keyword called var. When you see var, what you are seeing is an instruction to the compiler to figure out for itself what Type the resulting variable will be. This is NOT var like in PHP or other loosely typed languages. Once assigned by the compiler, all the Strongly Typed rules still apply. And var can only be used for local variables, so it won’t be replacing all type references any time soon.

In the example above, “var p = new Person();”, the compiler will infer from the return type that variable “p” should be of type “Person”. Granted, in this simple example it doesn’t really mean much. It does mean less typing, especially when looping through generic collections, but where it will really come into play is with Anonymous Types and LINQ. In fact, without Compiler Inference, Anonymous Types could not function, but more on that later in the article.
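A quick sketch shows the strong typing in action; the compiler locks in the inferred type at the point of assignment:

```csharp
using System;

var message = "Hello";   // compiler infers string
var count = 42;          // compiler infers int

// message = count;      // uncommenting this is a compile-time error:
//                       // cannot implicitly convert type 'int' to 'string'

Console.WriteLine(message.ToUpper());   // HELLO - full string members available
Console.WriteLine(count + 1);           // 43 - it really is an int
```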

There is some debate going on as to when to use Compiler Inference. Some feel that using var whenever possible is just lazy and leads to less readable code. These people think that you should use it only when it is really needed. In other words, if I know p is going to be a Person, then why not say so explicitly? Personally, I don’t have an opinion. I’ll use it when it seems convenient and of course where it is required. It certainly makes looping through Collections nicer to write:

// assume people is a List<Person>
foreach (var person in people)
{
    Console.WriteLine("{0} {1} is {2} years old.", person.FirstName, person.LastName, person.Age);
}

I have to think that if Person is later subclassed, fewer changes would be required to make this code continue to function. Is it lazy? Perhaps, but we all know that a good programmer is a lazy thief!

Object Initializers

Another cool feature is Object Initializers. Initializers allow you to set a series of Property values when you create the object. How many times have you coded something like this:

Person p = new Person();
p.FirstName = "John";
p.LastName = "Smith";
p.Age = 32;

Now, you can shorten this considerably. Sort of like Array Initializers, which I’m sure we’ve all used, the syntax is a list of property=value pairs inside curly braces immediately following a constructor:

// You could, of course, use var here...
Person p = new Person() {FirstName = "John", LastName = "Smith", Age = 32};

An interesting side note about IntelliSense: naturally, inside the initializer block, IntelliSense will show you the list of properties available for the Person object. As you use Properties in the Initializer block, you will notice that they disappear from IntelliSense. This is a quick way to make sure you set every property (if you need to), and should help prevent listing duplicate properties (which, incidentally, causes a compiler error).
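One more bit of syntactic sugar: the empty parentheses are optional when you are using the parameterless constructor. A self-contained sketch (with a minimal Person class of my own, matching the properties used above):

```csharp
using System;

// No "()" after Person - the parameterless constructor is implied:
Person p = new Person { FirstName = "John", LastName = "Smith", Age = 32 };
Console.WriteLine("{0} {1}, {2}", p.FirstName, p.LastName, p.Age);

// Minimal Person definition matching the article's examples:
class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
}
```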

Now, for some extra coolness, you can embed Object Initializers inside a Collection Initializer block, so that this:

List<Person> people = new List<Person>();
Person p = new Person();
p.FirstName = "John";
p.LastName = "Smith";
p.Age = 32;
people.Add(p);

Person p2 = new Person();
p2.FirstName = "Jimmy";
p2.LastName = "Crackcorn";
p2.Age = 57;
people.Add(p2);

Person p3 = new Person();
p3.FirstName = "Mary";
p3.LastName = "Contrary";
p3.Age = 44;
people.Add(p3);

Can now be done like this:

var people = new List<Person>()
    {new Person() {FirstName = "John", LastName = "Smith", Age = 32},
     new Person() {FirstName = "Jimmy", LastName = "Crackcorn", Age = 57},
     new Person() {FirstName = "Mary", LastName = "Contrary", Age = 44}};

As you can see, this is very concise and readable code. Less Typing = More Cool!

Automatic Properties

Since we are talking about properties, at least in a roundabout way, I want to throw in a quick item you may not be aware of called Automatic Properties. We all know that we are supposed to be exposing access to our variables via properties, but a lot of the time all we want to do is read and write the variable data. As such, we end up with a lot of code that looks like this:

private string _firstName;
public string FirstName
{
    get { return _firstName; }
    set { _firstName = value; }
}

Now, there is nothing wrong with this approach, but it feels a little verbose. To address this, you can now use an Automatic Property:

public string FirstName { get; set; }

The two main differences are that you do not define the variable, and you do not have to write the getter and setter logic. In fact, you’ll notice there aren’t even curly braces for get or set. The upside is that it is again less code for a common function. The downside is that you do not have an internal variable to access: you must use the Property name, even internally. Also, the Property must contain both the get and the set, so you cannot have a read-only Automatic Property. You also cannot add custom logic: get and set default behaviors are it.
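One wrinkle worth noting: while you cannot drop the setter, you can restrict its accessibility, which gives you a Property that is read-only to the outside world. A sketch (the Customer class is my own invention):

```csharp
using System;

var c = new Customer("C-100");
Console.WriteLine(c.Id);    // C-100
// c.Id = "C-200";          // compile-time error out here: the setter is private

public class Customer
{
    // Anyone can read Id, but only code inside Customer can set it.
    public string Id { get; private set; }

    public Customer(string id)
    {
        Id = id;   // legal: we are inside the class
    }
}
```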

The plus, though, is that if you start with an Automatic Property and need to add any of this functionality, you simply create the variable and update the Property as you would have before. There is nothing magic about Automatic Properties; this is just another example of the Compiler doing menial work for you. One last tidbit: in VS2008, if you use the “prop” code snippet, you will get the format of an Automatic Property by default. It really replaces the need for my Property Builder tool.

Anonymous Types

When I first learned about Anonymous Types, I didn’t think it would be all that big of a deal. The more I think about it though, the more I can see a use for it. And when I saw how they are used with LINQ, I was sold. Basically, you can define a Type without naming it or creating a separate class definition. Let’s revisit our Person and People class from before, only this time I am not going to define the Person Class:

var p1 = new {FirstName = "John", LastName = "Smith", Age = 32};
var p2 = new {FirstName = "Jimmy", LastName = "Crackcorn", Age = 57};
var p3 = new {FirstName = "Mary", LastName = "Contrary", Age = 44};

Here I have created three instances of my new Type, but I have not explicitly defined the type definition. Instead, the compiler has created a class for me and given it the Properties I listed (using our new friend, Compiler Inference). All of the properties are read-only, so once created, the values cannot be changed. Since the three objects above are all identical in their property names, all three objects are of the same (unknown) type. And IntelliSense still works as well: Typing “p1.” will reveal a list of the properties. I said earlier in the article that Anonymous Types could not function without Compiler Inference: hopefully it is obvious now what is happening above. “var” is the only option here, because the Type of the variable does not exist at design time. The Compiler must infer the Type to be of whatever class the Compiler creates, otherwise, you could not have Anonymous Types in a language like C# because of its strong typing.
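You can verify the “same type” claim yourself. Because the property names, types, and order all match, the compiler generates just one class behind the scenes, and equality is value-based to boot:

```csharp
using System;

var p1 = new { FirstName = "John", LastName = "Smith", Age = 32 };
var p2 = new { FirstName = "Jimmy", LastName = "Crackcorn", Age = 57 };

Console.WriteLine(p1.GetType() == p2.GetType());   // True: one generated class

var p1Copy = new { FirstName = "John", LastName = "Smith", Age = 32 };
Console.WriteLine(p1.Equals(p1Copy));              // True: equality compares the values
```

Change the name, type, or order of any property and the compiler will generate a second, incompatible class.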

Of course, there is a catch: as of yet, I have not figured out a way to use these in a list. You cannot use a Generic list, because there is no Type for strong typing. You can store them as Objects, but then you cannot cast them back out because there is no type. So what does this particular method buy you? In my opinion, not much. There could be an argument for using this method to replace the use of temporary Structs, a method I use frequently to store grouped information. But since I can’t store them in a list or pass these objects around, it would be a very limited approach.
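There is one partial escape hatch worth mentioning: C# 3.0’s implicitly typed arrays. Within a single method you can hold a group of identically shaped anonymous objects in an array, though you still cannot pass them around in a strongly typed way:

```csharp
using System;

var people = new[]
{
    new { FirstName = "John", LastName = "Smith", Age = 32 },
    new { FirstName = "Jimmy", LastName = "Crackcorn", Age = 57 },
    new { FirstName = "Mary", LastName = "Contrary", Age = 44 }
};

foreach (var person in people)
{
    Console.WriteLine("{0} {1} is {2}", person.FirstName, person.LastName, person.Age);
}
```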

There is, however, a necessary use for Anonymous Types: LINQ requires them to be able to create custom query results as a Collection. We’ll explore that more later this week, so be sure to check back.

Categories: .NET 3.5, C# 3.0, Visual Studio

Upgrade your C# Skills part 1 – Extension Methods

November 26, 2007

DOWNLOAD the Code!

Now that I have VS2008 and .NET 3.5 installed, I am going to begin a series of articles on some of the new features you can use in C#. My hope is to add one new article a day for the rest of this week. Along the way, we’ll explore some of the new language features for C# 3.0. Truthfully, they aren’t all that new since C# 3.0 has been around for about a year, but support for them is now built into Visual Studio, so using them is now realistic. Also, with the release of VS2008, I’m sure this will be the first time most developers will be exposed to them. I’m going to start by introducing one of my favorite enhancements: Extension Methods.

Extension Methods

Extension methods offer us a way to expand class functionality even if we do not have access to the code for those classes (including sealed classes). In other words, we can add our own functionality to any object type. Have you ever looked at the String class and said “why can’t I do {fill in the blank} with a String?” If so, you probably created your own method, passed it the string in question, and consumed the return value:

string s = "1234";
string z = Reverse(s);

If you’ve ever written code like this, then Extension Methods are for you. In the case above, it should be obvious that Reverse is a static method, since it is not attached to an instance. Wouldn’t it be nice instead to write this?

string s = "1234";
string z = s.Reverse();

I know it is a subtle difference, and in this case not a terribly functional one, but hopefully you can see the difference. Instead of passing the string variable to a static method, you can treat your methods as though they belong to the string class. And these new methods are available in Intellisense: when I type “s.”, I will see my extension methods right alongside all the native methods, which makes finding and using them much more palatable. And you will find in the new Intellisense that Microsoft itself is making heavy use of Extension Methods. You can tell in two ways: first, the icon for extension methods is the familiar purple box but accented with a blue down arrow. Secondly, when the method description pops up in Intellisense, the description is preceded by “(extension)”.

Let’s look at a more reasonable example. Using the Regex class, you can determine whether or not a string matches a Regular Expression pattern by passing a string and a pattern to the Regex.IsMatch() method:

string s = "This is awesome";
if (Regex.IsMatch(s, "awe"))
{
    Console.WriteLine("Yep, this is a match!");
}
else
{
    Console.WriteLine("Sorry, no match.");
}

I think it would be handy on occasion to simply “ask” the string itself if it matches a certain pattern:

string s = "This is awesome";
if (s.IsRegexMatch("awe"))
{
    Console.WriteLine("Yep, this is a match!");
}
else
{
    Console.WriteLine("Sorry, no match.");
}

Pretty neat, huh? OK, I admit it doesn’t appear to lessen your code, but to me it can make your code make more syntactic sense. And given a more complex example, it could do a lot for you. Richard Hale Shaw showed us an example of a .ForEach extension for IEnumerable<T> collections that would knock your socks off! Imagine being able to loop through an entire Collection and perform some action on each item in a single line of code, without passing the Collection off to a method? It would look something like this:

DirectoryInfo dir = new DirectoryInfo("C:\\");
FileInfo[] files = dir.GetFiles();
files.ForEach<FileInfo>(f => Console.WriteLine("{0} {1}", f.FullName, f.Length));

For now, ignore the code between the () – that’s a Lambda Expression, and we’ll get to those later in the week. Just understand that with this sample above, each FileInfo object in the array will have the passed Action applied to it. How many foreach loops do you think this little nugget could eliminate?
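I don’t have Richard’s exact code, but a minimal sketch of such an extension (the SequenceExtensions class and naming are my own) might look like this:

```csharp
using System;
using System.Collections.Generic;

// Usage: sum some string lengths without writing the foreach yourself.
int total = 0;
new[] { "one", "two", "three" }.ForEach(s => total += s.Length);
Console.WriteLine(total);   // 11

public static class SequenceExtensions
{
    // Applies the supplied action to every element of the sequence.
    public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
    {
        foreach (T item in source)
        {
            action(item);
        }
    }
}
```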

Now, I hope this will get you interested in Extension methods: it didn’t take me too long to think these are very cool! To get you started, I’m adding a new project called DevelopingForDotNet.Extensions to the Free Code page . My take on Richard’s method(s) are included, along with a handful of String and Numeric operations. Nothing too fancy, but hopefully enough to help get you started.

Enough already, how do I do it?

Creating Extension Methods is very simple. First, you must follow these simple rules:

  1. Extension Methods must be static, defined in a static class
  2. Extension Methods must be public
  3. The this keyword precedes the first parameter

We’ll take these one at a time. First, the method must be static because of how the compiler handles extensions. Behind the scenes, whenever an Extension Method is employed, the compiler actually generates code to call the static method. The idea of the Extension Method is just a visual layer for the developer: behind the scenes the actual static method is being called. Also, these methods can be called in a static fashion, rather than as methods attached to an instance. Which brings up another good point: the class name the Extension Methods are in is essentially irrelevant (unless you are going to call them explicitly). I put all my extensions in a single static class (imaginatively called “ExtensionMethods”).

Second, they probably do not absolutely have to be public, but if they aren’t then you are severely limiting the functional scope, so what’s the point?

Finally, preceding the first parameter with “this” is what tells the compiler that this is an Extension Method. It is also probably going to be the main source of initial confusion, because you do not pass this parameter (unless you are calling the method explicitly).
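To see both calling styles side by side, here is a trivial Shout extension (my own stand-in example); the two calls below compile to exactly the same thing:

```csharp
using System;

string s = "hello";

Console.WriteLine(s.Shout());               // HELLO! - extension syntax, no first argument
Console.WriteLine(MyExtensions.Shout(s));   // HELLO! - explicit static syntax

public static class MyExtensions
{
    public static string Shout(this string source)
    {
        return source.ToUpper() + "!";
    }
}
```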

Overall, these are fairly simple rules to follow. Here is a handy method I’ve used for a while:

public static string RightAdjust(string s, int Size, char FillCharacter)
{
    string ch = FillCharacter.ToString();

    if (s.Length > Size)
    {
        throw new ArgumentException("Size of Value larger than Requested Size.");
    }

    while (s.Length < Size)
    {
        s = ch + s;
    }

    return s;
}

This method receives a string and right adjusts it to the given size, filling the leading characters in with the passed char. Calling it looks like this:

string s = "1234";
s = RightAdjust(s, 7, '0');
// Value of s is now "0001234"

Now, let’s convert this to an Extension Method:

public static string RightAdjust(this string s, int Size, char FillCharacter)
{
    string ch = FillCharacter.ToString();

    if (s.Length > Size)
    {
        throw new ArgumentException("Size of Value larger than Requested Size.");
    }

    while (s.Length < Size)
    {
        s = ch + s;
    }

    return s;
}

All we did was add “this” before the first parameter. Now we can call it like so:

string s = "1234";
s = s.RightAdjust(7, '0');
// Value of s is now "0001234"

Naturally, you will need to add a reference to the DLL that contains your Extension Methods in order to find and use them.

Overloading Extension Methods:

Just like other methods, Extension Methods can be easily overloaded. My approach for overloading has always been to put all the functionality in the method that requires the most parameters. I then simply have my overloading method signatures call the primary method, sending it the appropriate parameter values. Looking at the RightAdjust method above, I want to establish a method that will use a blank character as the default fill character if one is not supplied. What’s different about overloading extension methods, is that I actually employ the primary extension method in my overloading methods:

public static string RightAdjust(this string s, int Size)
{
    return s.RightAdjust(Size, ' ');
}

So now I can call this method passing it just the Size parameter, and that method then calls the primary method using the Extension Method mechanism.

Conclusion:

Extension Methods can be as simple or complex as you like. They are a nice syntactical enhancement to the language that lets you enhance other objects and use them how you would like. But beware: you could easily go overboard. I mean, there is no reason to create a MakeDirectory method for a TimeStamp instance, but you could. Just use common sense and make sure that the extensions you create apply to the object type and way that you would use them.

Installing VS2008

November 21, 2007

FINALLY! After 5 hours and 45 minutes of downloading, the ISO file is here! I also downloaded Microsoft’s VirtualCD software, mounted the ISO file, and began the Install process at 3:01pm EST.

First, after mindlessly bypassing the EULA, I saved 1.6 GB of install space on my machine by not installing C++ or Crystal Reports. I’ve had C++ since I first installed VS2003 Pro and have never once used it, same for Crystal Reports, so why bother? I could have saved more by not installing SQL Server, but I figured better safe than sorry.

A few things about this machine while the installer runs:

  • IBM ThinkPad G41, about two years old, with some kind of Pentium IV Mobile processor.
  • Running Windows XP Pro SP2.
  • It had a little over 8 GB free when I started.
  • 1 GB RAM.
  • Already installed are VS2005, Mobile 5.0 SDK, .NET 3.0 (and 2.0 and 1.1)

4:18pm –

The install is finished. I had to reboot, which meant remounting the Virtual CD image. Also, I cannot install the MSDN Documentation: the installer will not recognize the ISO image as the correct disc. Perhaps there is a separate download from MSDN for the docs.

At any rate, it seems I am up and running. I opened the IDE for the first time and selected C# as my primary environment. The software configured and opened a few minutes later.

So all in all, the installation was a breeze (except for the waiting for the download part). Now on to Blend!

Categories: Visual Studio

Visual Studio 2008 goes RTM!

November 20, 2007

OK, so I may have lied, but not by much!

Yesterday afternoon, while waiting for my MSDN number to be granted, I saw on MSDN that VS2008 went RTM! I did finish the VS2008 Beta 2 download, but I never installed it. After waiting all day, I was finally able to get my MSDN number from Microsoft and am currently downloading the VS2008 ISO file. Tomorrow will be install day!

But since I am here, let me grouse a bit: why in the world does it take 24-72 hours for a couple of databases at Microsoft to share information? I just can’t believe that there is this kind of delay. One of the MSDN techs told me that it takes one day to get the data into the “Subscriber” database, and another to get it into the “Downloads” database. What gives? Microsoft of all companies should be able to make this sort of event automatic and practically instantaneous! It just boggles the mind a little.

But hey, who cares! I’ll soon be in .NET 3.5, and I hope you’ll join me there soon.

Categories: .NET 3.5, Visual Studio

Lies, Dang Lies, and the Developers Who Tell Them

November 19, 2007

First thing I did this morning was subscribe to MSDN. It’s just a no-brainer since I’m typically going to buy the new development tools as soon as they surface anyway. I’m also in the process of putting together the specs for my new laptop. I was going to get one over a year ago, and then I said “I’ll wait for Vista”. Then I said “I’ll wait for Vista SP1”. Recently I’ve been saying “I’ll wait for Visual Studio 2008”. Well, I guess I’m not waiting for SP1, and VS2008 is just around the corner and may even be out by the time I get my new machine.

So I lied about SP1, and I also lied about waiting for VS2008 to RTM: I am downloading Beta2 now and I’ll be downloading Blend as soon as I can. I’m just so excited about these new features that I just can’t wait. And I have the perfect test project going on in house. On top of everything else, the timing for these releases couldn’t be better for us. So the next few days will probably be spent downloading and installing software.

Categories: .NET 3.5, Visual Studio

Live Blogging VSLive! Austin – Day 4

November 15, 2007

9:00am:

Just about to start a Post-Conference workshop entitled “Advanced C#: Moving up to LINQ, VS2008, and .NET 3.5” by Richard Hale Shaw. Richard is extremely knowledgeable and has a tendency to go into deeper detail than anyone else, so this should prove to be a challenging class. Watch for updates throughout the day.

10:50am update:

Wow! I can’t wait to get my hands on VS2008 and .NET 3.5! Lots of stuff going on: we’ve been covering Custom Iterators again, which is pretty cool in its own right but has been around since 2.0. We’ve been covering them, though, in the context of Extension Methods which are HOT. Keep watching this site, you’ll see lots of stuff coming up in the future about Extension Methods.

I do have a couple quick bullets that came up:

  • DataContext is the key object type in LINQ
  • DataContext has a .Log property that will reveal the actual SQL that was executed
  • LINQ to SQL only works against SQL Server 2000+ : more evidence that we need to switch to SQL Server
  • Compiler Inference is very cool and reduces a lot of code. For instance: var allows the Compiler to infer the variable type from the return object and Anonymous Methods can be replaced with Lambda Expressions.
  • var will only work for local variables

1:30pm update:

Lambda Expressions are something else I’ll be exploring and posting about in the near future. Essentially, a Lambda Expression is a compact syntax for creating Action<T> and Predicate<T> objects (there is another but I don’t recall its name). These objects are used by LINQ and some other constructs. Without Lambda Expressions, these could be created using anonymous methods or by outsourcing the method call signature. Trust me, for most applications Lambda Expressions will be much better. There is too much about them to show how they work here, so here are a few bullets with the caveat that if you have not seen Lambda Expressions they won’t mean much for you:

  • Lambda Expressions can consume outer variables
  • Lambda Expressions can contain embedded code
  • Lambda Expressions can be used in LINQ or anywhere else that the return types are needed
  • Very readable, once you understand the syntax
  • Scope for Lambda variables is protected
  • Here is a sample line using a Lambda: .Filter( f => f.Length > fileSize )
  • In this case, “f” is a variable that lives only inside the (). “=>” is the lambda operator (read “goes to”). “fileSize” is an outer variable (defined and populated elsewhere in the scope)

Other goodies:

  • Custom Iterators, and as such LINQ query objects, employ “Deferred Execution”. In other words, the actual IEnumerable<T> return value is not populated until the object is used (as opposed to when it is defined)
  • Extension Methods can be used in LINQ alongside the system defined actions
  • In VS2008, you can step through .NET Framework code in DEBUG (I believe there is an additional install to make this work)
  • LINQ can create Anonymous Types: in other words, the resulting IEnumerable<T> of a LINQ statement can be of a Type and structure that is not a defined type. This is very cool!
  • Properties in Anonymous Types can be based on Method call results
  • Properties in Anonymous Types can be named as desired
  • Properties in Anonymous Types can be other Complex Types
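The Deferred Execution bullet above is easy to demonstrate with a tiny custom iterator (a sketch of my own, not the conference code): nothing in the method body runs until the sequence is actually enumerated.

```csharp
using System;
using System.Collections.Generic;

static IEnumerable<int> Numbers()
{
    Console.WriteLine("iterator body running");   // deferred until enumeration
    yield return 1;
    yield return 2;
}

IEnumerable<int> query = Numbers();   // nothing printed yet: just the mechanism
Console.WriteLine("query defined");   // this prints FIRST

foreach (int n in query)              // the iterator body finally executes here
{
    Console.WriteLine(n);
}
```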

3:20pm update:

Lots and lots of details, and my seat is getting really sore, but I am here until the bitter end! Here are a few highlights from the last section:

  • LINQ sequences (Richard’s name for the resulting IEnumerable<T> collections) can be combined using .Concat() as long as the resulting Types are the same
  • This is even true for Anonymous Types: as long as they are defined exactly the same
  • LINQ can group results: this ends up as an IEnumerable of IGrouping<TKey, TElement> objects, each pairing a group key with an IEnumerable<T> of its members
  • The LINQ group feature can be based on an anonymous type (defined as part of the group statement)
  • Not LINQ related, but Richard mentioned something about using Partial classes as a way to protect custom code from Code Regeneration – a great idea that also solves a problem I’ve had in my own CodeGen of ensuring that variables were defined as protected instead of private (since my system inherits from the generated abstract class). Update: I spoke with Richard after the workshop about employing this as a thin abstraction layer for DAL objects (Data Access Layer) and he suggested this would not be a good approach for that application. And I may have misunderstood what he was saying in the first place, but I do think it led me to the idea that Partial classes could be a solution to this problem.

4:45pm update:

The last section was mostly about XLINQ (LINQ to XML), and we have plenty more to cover. The new XML classes are very easy to use and intuitive:

  • Essentially an OO wrapper around XML
  • XElement is the key class and represents an XML element
  • XElement has an “Elements” Collection of child XElement objects which is an IEnumerable<XElement>
  • This means that we should be able to use LINQ over the Elements collection
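Here is roughly what that looks like in practice (my own sketch): XElement builds the tree, and Elements() hands back an IEnumerable<XElement> that ordinary LINQ can query:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

var doc = new XElement("people",
    new XElement("person", new XAttribute("age", 32), "John Smith"),
    new XElement("person", new XAttribute("age", 57), "Jimmy Crackcorn"));

// Elements() is an IEnumerable<XElement>, so LINQ to Objects just works:
var over40 = from p in doc.Elements("person")
             where (int)p.Attribute("age") > 40
             select p.Value;

foreach (var name in over40)
{
    Console.WriteLine(name);   // Jimmy Crackcorn
}
```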

5:40pm update:

Down the home stretch. These are definitely long days, but well worth every minute.

I want to clarify something from above: the classes I mention for XML are XLINQ. These classes are found in System.Xml.Linq. After the XML is read in to these objects, then LINQ to Objects can be used as before. So XLINQ is LINQ, it just doesn’t look like the other flavors of LINQ.

Also, Richard showed a nice method of creating and storing much cleaner XML code based on defined classes behind the scenes. Then the Read and Write methods are abstracted out using Extension Methods and Generics. Very clean and powerful, and overall a fairly straight forward and simple approach.

Conclusions:

Well, this is the end of a great week. If you ever get a chance to attend one of these events, by all means you owe it to yourself to go. Here are the technologies I will be exploring in great detail as soon as I get back to the world:

  • LINQ – this is the KING of all the new MS technologies
  • WPF – simply awesome: a new paradigm in GUI design with great promise
  • SilverLight – a whole new web experience, both for developers and consumers
  • IIS 7.0 – looks substantially easier and more functional
  • Entity Data Model – not ready yet but will be soon and another paradigm shift
  • Visual Studio 2008 – should be RTMing this month. If you want to do any of the above, you’ve got to have the right tools.

I’m sure there’s more, but I am brain fried and butt dead. Check back often, I expect lots of new posts in the near future as I work through all this stuff.

Live Blogging VSLive! Austin – Day 3

November 14, 2007

10:10am – Keynote:

The keynote this morning was “Take on your toughest challenges with Visual Studio 2008” by Sam Gavitt of Microsoft. The presentation was a practically oriented tour of Visual Studio 2008 and its new features and capabilities. Some stuff we’ve already heard and seen, but this presentation was very solid. As usual, here are some highlight bullets:

  • VS is focused on three areas: User Experience, Collaboration, and Productivity.
  • Should be shipping via MSDN any day now – note to self: subscribe to MSDN!

Sam’s “Top Ten Reasons to Switch to Visual Studio 2008” (with apologies to David Letterman):

  • WPF Integration and Interoperability
  • C# and VB language improvements
  • New HTML Designer
  • JavaScript Debugging
  • WCF/WF Integration
  • AJAX Integration
  • Unit Testing
  • Integrated Office Development
  • LINQ
  • Multi-Platform Targeting

These are definitely cool features, but I disagree on the last one. Multi-platform targeting (the ability to target code for any version of 2.0, 3.0, or 3.5) is a great feature, but it is not a reason to switch to VS: it is the solution to a reason not to switch. LINQ is absolutely king: the best new language feature I’ve ever seen, period. Along with WPF and the promise of the Entity Data Model, LINQ represents a shift to Declarative programming. It will improve code readability, increase productivity, and result in significantly less code. If you really want to (and are stubborn) you can use LINQ in VS2005, but you get no real help from VS for using it. Switch to VS2008 and you get cool stuff like Intellisense and LINQ object creation. LINQ alone is worth the price of admission.

Web Application Stuff:

  • CSS Editing is greatly enhanced
  • CSS now has Intellisense
  • CSS Intellisense is also available in ASP.NET code
  • Tons of new file templates
  • JavaScript syntax highlighting
  • JavaScript Intellisense
  • JavaScript debugging
  • AJAX is SUPER easy in VS2008 – the demo blew me away

Windows Application Stuff:

Most of the stuff he talked about I’ve already posted a bunch on, namely WPF, VS, and Blend Coolness (but remember that Blend is not part of VS). He did briefly touch on WCF and WF integration, which I think is going to be valuable.

Office Integration Stuff:

OK, this is very very cool: Office development tools are now part of VS. You can add toolstrip buttons and features in Office products from your application. You can generate Office docs from your app. You can even create and embed your own Forms, including WPF stuff, in other Office apps. This sort of extension capability used to be a specialty in its own right and required specific training and tools: now anyone can do it, and it is slick.

11:37am update:

Just attended a session entitled “C#3.0 and LINQ Under-the-Hood” by Richard Hale Shaw. Richard has such a depth of knowledge that he is sometimes hard to follow, just because of the sheer amount of information he is capable of imparting. Truthfully, he didn’t talk much about LINQ at all, but rather he really got into the nitty gritty of the C# components that go into making LINQ possible.

Extension Methods:

A great technology that is going to be very useful, primarily for readability and productivity.

  • Add helper methods to classes you can’t extend
  • Operate on variables as though the methods belonged to the actual class
  • The class name of the defining class is irrelevant, but it must be static and must be within appropriate scope
  • Extension methods must be defined as static
  • Obviously, a reference to the Extension method dll must be included in the consuming code
  • Intellisense indicates whether or not a method is an extension by prefacing it with (Extension)

Custom Iterators:

  • Lie at the heart of LINQ
  • Can be created by any method that returns IEnumerable, IEnumerable<T>, IEnumerator, or IEnumerator<T>
  • IEnumerator and IEnumerator<T> are preferred because IEnumerable is reserved for the primary Iterator of any given Collection
  • LINQ uses Deferred Execution based on Custom Iterators
  • Query does not hold data, but rather the mechanism for acquiring the data
  • Sequence methods (employed by LINQ) are built on Custom Iterators

Generic Delegates, Anonymous Methods, and Lambda Expressions:

  • Generic Delegates (such as Func&lt;T, TResult&gt; and Action&lt;T&gt;) allow you to abstract an operation and postpone supplying it until runtime
  • Anonymous methods (a 2.0 technology) allow you to create and pass methods on the fly: the compiler then generates the necessary code
  • Lambda expressions map to Generic Delegates, expressing Anonymous Methods in a more coder-friendly manner
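
To see the 2.0 and 3.0 styles side by side (a sketch of mine using the Func generic delegate):

```csharp
using System;

public class DelegateDemo
{
    // C# 2.0 anonymous method: the compiler generates the method for us.
    public static readonly Func<int, int> SquareOld = delegate(int x) { return x * x; };

    // C# 3.0 lambda expression: the same generic delegate,
    // in a far more coder-friendly form.
    public static readonly Func<int, int> Square = x => x * x;

    public static void Main()
    {
        Console.WriteLine(SquareOld(4)); // 16
        Console.WriteLine(Square(5));    // 25
    }
}
```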

12:45pm update:

“Using Visual Studio and the Expression Suite to Build Great User Experiences” by Denny Boynton.

A little bit of a misnomer: we really didn’t use Expression Studio, just Blend, but he did show something that was built using Expression video manipulation software (the name escapes me). He also did everything in a SilverLight application, which was good because it was the first one we’ve seen implemented in SilverLight.

I don’t have a lot of notes, basically we got to watch him work and start to get a feel for how the tools might be used in development. He never once edited XAML and only showed it to prove that you could get to it. He really segregated the Design work from the Development work, and the vision is starting to gel: this could really work. One of the other attendees pointed out that with these tools, you realistically could have separate designers. Maybe Microsoft is ahead of the game, predicting how it’s going to be rather than reacting to how it really is.

  • We really need a Rich User Experience (UX)
  • SilverLight is based on the RIA model (Rich Internet Application)
  • A lightweight plug-in for IE, Firefox, Safari, and Opera (almost)
  • Runs on Windows, Mac OS X, and Linux (through “moonlight”)
  • Blend is really moving into the Designer Space
  • Designers are only limited by their imaginations: developers have always been limited by implementation capabilities
  • With XAML, those limitations are removed by completely separating the UI definition from the Code behind it
  • Expression Studio includes tools for Web design, Interactive design, Graphics Design, and Asset Management

3:10pm update:

Attended “IIS 7.0 for Web Developers” by Robert Boedigheimer. I need to preface all this with the statement that I am not an IIS person. I have limited experience with it, a few minor configs here and there on IIS 5.0 and 6.0, but nothing serious or production in nature. All that being said, based on the presentation and the reactions of those around me, IIS 7.0 is a great improvement over previous versions.

  • Broken up into a modular architecture
  • Uses pluggable components
  • Reduces attack surface and memory footprint
  • Configuration is all in XML
  • Installed modules are chosen in the configuration, so can be easily turned on and off without restarting IIS
  • Can be controlled from site to site (I think)
  • Modules are similar to ISAPI Extensions
  • Handlers are similar to ISAPI Filters
  • Custom Dialogs can be integrated into IIS Manager
  • Custom configurations are permissible
  • Configuration is independent for each website
  • You can write and apply .NET code to alter/extend IIS
  • Any request can fire .NET code (that you can develop!)
  • Modules are safer than filters because they are managed code BUT you still have to be wary of a performance impact (based on what you are asking the requests to do)
  • Header Details can be removed before a Response is sent – good for removing Server Identification information

.NET Integration:

It definitely appears that one of the big enhancements is the integration of custom .NET code. I can definitely see the power in being able to write my own methods to “do stuff” at any step in the process. It runs in one of two app pool modes: “Integrated” means the .NET code runs in the IIS pipeline, while “Classic” executes it as ISAPI.

Forms Authentication:

In IIS 6.0, only ASP.NET files are protected. In IIS 7.0 all files can be protected using ASP.NET 2.0 Membership Providers for security. The authenticated user ID has been added to the log files. This non-ASP.NET file inclusion is disabled by default and must be enabled in the web.config file.
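
If I followed correctly, enabling this comes down to removing the managed-handler precondition from the Forms Authentication module in web.config. Something like this sketch (element and attribute names from memory, so verify against your own IIS 7.0 box):

```xml
<system.webServer>
  <modules>
    <!-- The default entry carries preCondition="managedHandler",
         which limits Forms Auth to ASP.NET requests. -->
    <remove name="FormsAuthentication" />
    <!-- Re-add it with no precondition so ALL files are protected. -->
    <add name="FormsAuthentication"
         type="System.Web.Security.FormsAuthenticationModule"
         preCondition="" />
  </modules>
</system.webServer>
```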

IIS Management:

  • appcmd.exe – Command line control
  • IIS Manager provides a GUI method
  • WMI Scripting is available
  • Can be done in custom .NET code by employing the Microsoft.Web.Administration namespace

Built in Diagnostic Tools:

  • Traps and stores errors
  • Logs everything that happens
  • Failed requests are stored in an XML file
  • Can be configured to store all requests (or specific ones)

Run-time Status and Control:

You now have access to Internal Status information such as pools, processes, requests, etc. Accessible via all the above mentioned Management methods.

4:20pm update:

Just sat through a class on ASP.NET Architecture by Paul Sheriff. A good overview of smart ideas for ASP.NET developers. Not really an architecture, but lots of good common sense suggestions. Basically they boil down to these few suggestions:

  • Use CSS, Skins, Themes, and MasterPages
  • Wrap MS code that can cause you problems such as Sessions, Cache, Exceptions, ConfigurationManager, etc.
  • Create your own Base Page class and inherit all your pages from that base

CSS, Skins, Themes, and MasterPages:

  • CSS allows good separation of presentation and information
  • CSS promotes good structure (my note: this includes adding Syntactical value)
  • CSS is very flexible and is a well published web standard
  • CSS is cached on the client side and usually should be included as external files
  • Skins are like server-side CSS for ASP.NET controls
  • Themes group Skins and CSS together
  • Sites can have multiple skins and themes
  • Skins can be either Global (default) or Named (ID based)
  • Skins can be applied at runtime via the PreInit event
  • Themes are not set at runtime and cannot be established at the MasterPage level
  • There can be more than one MasterPage, but only one specified per page
  • MasterPages allow a consistent look and feel across an application
  • MasterPages can contain code
  • Subsequent pages use the Content area to deliver their information

Wrapping ASP.NET Constructs:

Wrappers are a powerful tool in many coding scenarios. In ASP.NET, wrapping such things as the Session variable array has many benefits:

  • Insulates code from future Microsoft “enhancements”
  • Places code management in a single location
  • Allows variable initialization and implements default behaviors
  • Use this approach to wrap Exception management, Caching, ConfigurationManager, etc.
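
As a sketch of the wrapping idea (the backing store here is a plain dictionary so the example stands alone; in real ASP.NET code it would be HttpContext.Current.Session, and the class and property names are mine):

```csharp
using System.Collections.Generic;

public class SessionWrapper
{
    private readonly IDictionary<string, object> _store;

    public SessionWrapper(IDictionary<string, object> store)
    {
        _store = store;
    }

    // Typed access with a default value: callers never see a failed
    // cast or a magic string, and the key lives in exactly one place.
    public int CartItemCount
    {
        get
        {
            object value;
            return _store.TryGetValue("CartItemCount", out value) ? (int)value : 0;
        }
        set { _store["CartItemCount"] = value; }
    }
}
```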

The last bit of advice is great: create a BasePage class that inherits from System.Web.UI.Page. Then have all of your Pages inherit from that class. This gives you global control over behaviors, can be used to implement security, provides consistency, etc. This is a great idea and easy to implement, even after the fact.
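
The pattern looks roughly like this (a stand-in Page class is included so the sketch stands alone; in a real site BasePage would inherit System.Web.UI.Page and override OnInit or OnPreInit instead):

```csharp
using System;

// Stand-in for System.Web.UI.Page, just for this sketch.
public class Page
{
    protected virtual void OnLoad(EventArgs e) { }
    public void Load() { OnLoad(EventArgs.Empty); }
}

public class BasePage : Page
{
    public bool SecurityChecked { get; private set; }

    protected override void OnLoad(EventArgs e)
    {
        // Global behavior every page gets for free:
        // security checks, theme selection, logging, and so on.
        SecurityChecked = true;
        base.OnLoad(e);
    }
}

// Every page in the site inherits BasePage instead of Page.
public class HomePage : BasePage { }
```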

The Last Session:

I went to a session on Distributed Data Application Design and Architecture, but I have to tell you I was pretty brain fried by then. The content was a little over my head, the presenter was very fast, and I really struggled to follow along, let alone take notes for you guys. It’s probably for the best anyway as I doubt I would have made much sense.

See you tomorrow for the last day, I’m going to get some rest. And have some Tex-Mex food: benefits of visiting Texas!