Supporting the Client Profile

August 11, 2010

This blog-post was triggered by a heated discussion on the NHibernate developers mailing list regarding support for .NET's Client Profile and, more to the point, why log4net is blocking it.

The gist of it is that the Client Profile is a trimmed down version of the .NET Framework, lacking assemblies typically associated with server applications, e.g. System.Web. Unfortunately, log4net references System.Web to implement its ASP.NET TraceAppender.

Now, there are two positions one could defend. One is that log4net references System.Web, thus making it incompatible with the .NET Client Profile and consequently preventing any application that needs log4net from being compatible as well. Easy solution: just switch logging frameworks. The other position is that I like log4net and just don't want to give up. Or resort to runtime binding only. Which works, by the way. As to the why, please keep reading.

So, how do I create an application compatible with the .NET Client Profile? Easy, just set the Target Framework to “Client Profile” in the project’s properties.

[Screenshot: the project's properties page with the Target Framework set to ".NET Framework 3.5 Client Profile"]

As soon as I do this and try to link log4net, I get the following compiler error and warning:

[Screenshot: the compiler error and warning caused by log4net's reference to System.Web]

Apparently, MSBuild recursively checks all dependencies of my project to make sure no unsupported assemblies are required. That's actually a pretty nice thing for MSBuild to do. Except for one caveat: as long as I don't use the types located inside a referenced assembly, whether through normal execution or via reflection, I don't need that assembly on my machine. And in the case of log4net, this means that as long as I don't configure my console application to log to the ASP.NET trace, I don't care whether or not log4net references System.Web.

Unfortunately, there’s nothing I can do to change the compiler’s mind. The only thing I can do is go back and build my project as a regular .NET 3.5 project. But what happens when I try to start it on a machine that only has the Client Profile installed?

[Screenshot: the error message shown when starting the application on a machine with only the Client Profile installed]

Yep, I get a wonderfully helpful error message. Fortunately, there’s an easy workaround: Just do the same thing Visual Studio does when you select the Client Profile – add the ‘sku’ attribute to the application’s config file, set it to ‘client’, and I’m golden.

[Screenshot: the application's config file with the sku attribute set to 'client']
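In case the screenshot is hard to make out, the relevant part of the config file looks roughly like this for a .NET 3.5 application (a sketch; the exact supportedRuntime version string depends on the runtime you target):

```xml
<?xml version="1.0"?>
<configuration>
  <startup>
    <!-- sku="client" tells the loader that the Client Profile
         assemblies are sufficient for this application. -->
    <supportedRuntime version="v2.0.50727" sku="client"/>
  </startup>
</configuration>
```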

The reason this works is that there’s not really that much of a difference between the various packages of the .NET Framework. They all have the same shim, the same mscorlib, the same assembly format. It really boils down to just one config setting telling the .NET Framework that the application can make do with the Client Profile assemblies.

Now, I'm not going to go around advocating linking System.Web and friends to your heart's content and setting the SKU attribute as needed, because sooner or later it will bite you. But in the case of a mulish third-party component, there is a workaround.

As for the practical applications: if you need to be compatible with .NET Framework 2.0, you don't need to worry, because the concept of a Client Profile doesn't exist there in the first place. Then there's the .NET Framework 3.5 Client Profile, which is obsolete with Windows 7, since Windows 7 already contains the complete .NET Framework 3.5. This leaves the .NET Framework 4.0 Client Profile, which is a recommended update in Windows Update, meaning that sooner or later this is what you can expect to find on current client machines. So, in parting, I hope the log4net maintainers will get around to making their logger a first-class citizen in .NET Framework 4.0.

Michael


My first day with Visual Studio 2010!

April 26, 2010

Well, okay, not exactly my first. After all, I had to evaluate .NET 4.0 for re-motion when it was still beta. But last weekend, I finally got around to upgrading re-motion to VS 2010 and ReSharper 5.0, and today was my first day working with the released bits.

So, why the excitement? Certainly not because of .NET 4.0, mainly because it will still be a while before we can really use the new features; that's a topic for another blog-post.

But, VS 2010 is still pretty cool. And fast. Loading re-motion used to take about thirty seconds on my machine; now it's down to ten. ReSharper is a bit trickier. I know that R# 4.5 took another thirty seconds to initialize its caches. R# 5.0 moved a lot of that into the background; I think the editor is already responsive after another fifteen seconds, but full initialization still takes twenty-five seconds. So, bottom line, the time it takes to refresh the solution after an SVN update has been cut in half.

Feature-wise, both Visual Studio and ReSharper have gained a lot. There's the spiffy WPF-based editor, complete with natural-feeling multi-monitor support and awesome window drag'n'drop support, including for a single code-file. Btw, in order to get the column guidelines, you now need the Editor Guidelines extension.

The debugger also got some really nice improvements, namely saving and exporting breakpoints, pinning DataTips into the editor window, and having that information still available after the debugging session. I recommend checking out Scott Guthrie's blog for additional stuff on both VS 2010 and .NET 4.0 features.

For ReSharper, the new features include solution-wide code analysis, which appears to not have a noticeable performance impact. I’ve turned it on for now, will see how it goes. I truly recommend checking out their What’s New page and their blog. For me, it’s hard to say which is the coolest, the ‘Structural Search&Replace’ ([…] configure custom, sharable code patterns, search for them, replace them, include them in code analysis, and even use quick-fixes […]) or the value and call tracking. The latter was already available with Reflector, but now I can actually find out where a value originates, a couple dozen stack-frames away.

JetBrains also tweaked the identifier coloring, adding more distinct cases, and added a fun feature: highlighting mutable variables. I'm thinking of giving them an evil, crimson background…

There’s tons more information on the new bits out there in the ether and I’m not going to compile a comprehensive list. At least not right now.

So long, Michael


What defines a hotfix?

April 15, 2010

This blog post was inspired by a recent post from Eric Lippert (Putting a base in the middle). In there, he describes a peculiarity of the C# compiler that is both completely logical and yet totally unexpected to the uninitiated.

In short (for the long version, please check out Eric's blog post): base calls are always compiled to non-virtual calls. This means that if you have a class hierarchy spanning two assemblies and you replace the referenced assembly without recompiling the referencing assembly, a base call can silently skip an override that the new version of the referenced assembly added in the middle of your class hierarchy.
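To make this concrete, here's a minimal sketch of the setup (the class and assembly names are illustrative; see Eric's post for the full story):

```csharp
using System;

// ----- "Base.dll", version 1 -----
public class Alpha
{
    public virtual void M () { Console.WriteLine ("Alpha.M"); }
}

public class Bravo : Alpha
{
    // Version 1 has no override of M here.
}

// ----- "App.exe", compiled against Base.dll version 1 -----
public class Charlie : Bravo
{
    public override void M ()
    {
        // The compiler binds this base call non-virtually to Alpha.M,
        // because Bravo has no override of M at compile time.
        base.M ();
        Console.WriteLine ("Charlie.M");
    }
}

public static class Program
{
    public static void Main ()
    {
        new Charlie ().M (); // prints "Alpha.M", then "Charlie.M"
    }
}

// ----- Base.dll, version 2, adds an override in the middle: -----
//
// public class Bravo : Alpha
// {
//     public override void M () { Console.WriteLine ("Bravo.M"); }
// }
//
// If App.exe is NOT recompiled, new Charlie().M() still prints
// "Alpha.M" / "Charlie.M": the base call is still bound to Alpha.M,
// so the new Bravo.M is silently skipped.
```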

What makes this so interesting are two things.

Firstly, this behavior isn’t documented in CLR via C# by Jeffrey Richter. I checked, including the latest release for .NET 4.0. This surprised me, since he did warn about changing a non-virtual member to a virtual one in a referenced assembly and how this could result in a NullReferenceException if you don’t recompile the referencing assembly (see page 170 in the 3rd Edition).

The second and more important aspect is about what this means for versioning. These two types of change are not something that would normally get flagged as breaking changes, simply because your code will still compile with the new version.

The operative word being compile. But is there a reason not to recompile? Well, some people advocate not changing the assembly version for a hotfix. Doing this enables the developer to replace just one assembly in the production environment, with the express purpose of limiting the change and thus reducing the noise generated at the project management level.

And this is where things start to become interesting. What defines a hotfix? The simplest definition is "a change that fixes only one bug and does not change the public API". This means a hotfix has to be contained within the private details of your types. And because base calls are bound non-virtually, this rules out fixing the issue by adding an override with your hotfix, even though that is sometimes the most opportune way of doing it. And if it's the only way to fix the problem, then you just have to bite the bullet and recompile and redeploy the entire application.

BTW, for re-motion we decided long ago (actually, at its inception in 2004) to increment the version number for each build.

Michael


Spotting Design Flaws

February 28, 2010

Today, I want to tell you about a startling revelation regarding the effects of pair-programming on your code's quality. I mean, sure, there's tons of literature out there raving about pair-programming and its benefits for code quality, but when it comes right down to it, you always have to explain why it's cheaper to have two programmers working on a problem instead of just one.

Okay, sure, there are the obvious reasons, such as breaking in the new guy, establishing collective code-ownership, creating a highly complex piece of software or a security-critical feature. But how do you argue for the benefit of pair-programming when you're writing more or less run-of-the-mill code that's just supposed to be reasonably bug-free and well refactored, and you have a team of great programmers already attuned to the problem domain?

Well, there's at least one scenario, and that's spotting design flaws and, more importantly, finding a solution to them. See, here I was, happily sketching a design and handing it off to a colleague for implementation, then doing regular reviews. Eventually, there came a point where an alteration was necessary. At the time, the proposed solution seemed reasonable. I probably wasn't perfectly happy with it, but it appeared to be well refactored and to follow the principle of separation of concerns, so I just went with the flow because it got the problem solved.

Obviously, it wasn't the right solution, or I wouldn't be blogging about it now. The big surprise was the way I realized not just the problems with it (aside from a bug report) but also the solution: I was refactoring the code to fix the bug.

You see, that's the big difference between doing a review and sitting right there next to your partner in crime. In one scenario, you're a passive observer, highlighting obvious trouble spots and talking about established facts. In the other, you're actively involved in massaging the code, feeling the awkwardness, and hopefully sparking that part of your brain that's responsible for solving the big mysteries of our time.

Unfortunately, all things considered, this example perfectly illustrates why being actively involved in the process leads to better results, but it still doesn't make a good case for twisting the review-knob all the way up to 100%.

Why, you ask? Well, writing that part of the library took weeks. Fixing the design problem in an intense refactoring session took two days.

So, bottom line: two guys working on two parts of the system can still be more profitable than two guys sitting right next to each other sharing a keyboard, but you have to be aware that the design will suffer for it. Still, if you have a great team, the odds that you will come out ahead of the game are better than even.

On a parting note: right now I'm musing about whether turning review sessions into refactoring sessions might be a way to mitigate this problem a little bit further…

Michael


Adding a Layer of Abstraction

February 5, 2010

In my last blog post, I talked about the pain a misused singleton can introduce into your development lifecycle, particularly if you’re doing test-driven development. But what can you do when the singleton in question isn’t under your control, but provided by the framework? And to top it off, it’s not even fitted with an interface so you can’t mock it. Oh, and in order to instantiate it and use it in your test fixture, you need to resort to reflection.

Well, for years my answer had been to bite the bullet. Accept that the framework isn't suited for a test-first approach. And write reflection-based helpers that allow me to test at least my most critical components. Any guesses which real-world example I'm talking about?

Enter HttpContext.Current, System.Web.UI.Page, and just basically the entire ASP.NET WebForms stack.

Let me start by listing a couple of scenarios where this is an issue:

  • Developing an HTTP handler
    A simple handler doesn't even have a user interface, and still, as soon as you need to interact with the request, the response, the session, etc., you're back to fighting in the trenches. There are, of course, ways to get at least some test coverage, but eventually you have to accept that there is a place where no unit test will dare to go.
  • Developing a custom web control
    In addition to dealing with the HttpContext, you now also have to deal with the page reference, the page lifecycle, protected and internal methods you'd have to call from your tests, and so on. Not fun. Not fun at all.
  • Developing a UserControl or a Page
    This is where it gets *really* interesting (or ugly), and I'm not even going to begin listing the problems you face when trying to unit test those monsters. To put it simply, there's a very good reason why Microsoft developed ASP.NET MVC.

Okay, but what about the first two scenarios? Well, the answer to testing an HTTP handler is surprisingly simple: just introduce a mockable layer of abstraction between your code and the HttpContext. So, why did it take me until the summer of 2008 to come up with it? Mainly because I had scruples about doing something this radical. And no time. See, if you want to add an abstraction for HttpContext, you need to provide delegation for all members. You need to test it. And you need to provide abstractions for all dependent types as well, e.g. HttpRequest, HttpSessionState, etc. Plus documentation, because you don't want to expose the users of your types to the bare-metal APIs.

So, what changed in 2008? Easy answer: I realized the infinite potential of ReSharper's 'Extract Interface' and 'Delegate Members' refactorings. Now all I had to do was create a new type, add a field for my wrapped instance, execute two refactorings with ReSharper, and I had my layer of abstraction. It took me all of thirty minutes or so. Okay, I didn't write tests for it, but I trust ReSharper not to muck up the code generation. Add a bit of plumbing and some null checks, and the entire HttpContext infrastructure is now mockable.
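To give you an idea of the result, here's a heavily trimmed sketch; the names are illustrative (not re-motion's actual API), and the real wrapper delegates every member of HttpContext and wraps all dependent types the same way:

```csharp
using System;
using System.Web;

public interface IHttpRequest
{
    string HttpMethod { get; }
}

public interface IHttpContext
{
    IHttpRequest Request { get; }
}

public class HttpRequestAdapter : IHttpRequest
{
    private readonly HttpRequest _request;

    public HttpRequestAdapter (HttpRequest request)
    {
        if (request == null)
            throw new ArgumentNullException ("request");
        _request = request;
    }

    // Pure delegation, generated via 'Delegate Members'.
    public string HttpMethod
    {
        get { return _request.HttpMethod; }
    }
}

public class HttpContextAdapter : IHttpContext
{
    private readonly HttpContext _context;

    public HttpContextAdapter (HttpContext context)
    {
        if (context == null)
            throw new ArgumentNullException ("context");
        _context = context;
    }

    public IHttpRequest Request
    {
        get { return new HttpRequestAdapter (_context.Request); }
    }
}
```

Production code wraps HttpContext.Current in such an adapter before handing it to the handler's logic; a test simply passes in a mock of IHttpContext.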

Of course, a few months later we started to depend on .NET Framework 3.5 SP1 in re-motion, and I realized that the old saying about inventions happening in multiple places independently when the time is right still holds true. Microsoft had moved the System.Web.Abstractions assembly from the ASP.NET MVC Preview into the core framework, and lo and behold, Phil Haack and his gang had chosen the same approach for ASP.NET MVC's testability. The only difference is that the official stack uses abstract base classes and doesn't expose the wrapped instance. In .NET 4.0, those types will actually get moved into the regular System.Web assembly. And in case you're wondering, my implementation is already earmarked for the big 'Safe Delete' refactoring ;)

This leaves me with control development. Here, you have to distinguish between two basic aspects. One is logic that depends on the correct invocation of methods according to the page lifecycle. In this case, the only testable approach is to gut the control, create some controller classes, and generally follow a divide-and-conquer approach. Once you are able to do this, all that's left of the control is a façade: the properties and the design-time support.

Much meatier is the stuff that happens when the control interacts with the outside world, e.g. registering scripts, rendering its contents, etc.

So, ever checked out the mechanism provided by ASP.NET for script registration? The API you're looking for is the ClientScriptManager exposed by the ClientScript property on Page. It was introduced with .NET 2.0, replacing the previously used methods exposed directly on the page. Then, a year or so later, along came the ASP.NET AJAX Extensions, back then a separate download located in the assembly System.Web.Extensions. Now, if you want to use asynchronous postbacks via the UpdatePanel, you have to use static methods on the ScriptManager to register your scripts. Suffice it to say, I was not a happy camper.
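For illustration, the two competing registration APIs side by side (a sketch; the control, script key, and script content are placeholders):

```csharp
using System;
using System.Web.UI;

public class MyControl : Control
{
    protected override void OnPreRender (EventArgs e)
    {
        base.OnPreRender (e);
        const string script = "alert ('hello');";

        // The .NET 2.0 way, fine for regular postbacks:
        Page.ClientScript.RegisterClientScriptBlock (
            GetType (), "MyControlScript", script, true);

        // For asynchronous postbacks via the UpdatePanel, the static
        // methods on ScriptManager (System.Web.Extensions) must be
        // used instead:
        ScriptManager.RegisterClientScriptBlock (
            this, GetType (), "MyControlScript", script, true);
    }
}
```

In real code you would pick one of the two, and that is exactly the annoyance: the choice leaks into every control that registers scripts.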

This time, the solution was a tad more complicated and intrusive, but it integrated well with a concept introduced into our web-stack back in 2004: interfaces for Control and Page. The original reasoning behind it was to give projects the option of using a custom or third-party layer supertype and still integrate their pages with re-call, re-motion's control-flow architecture for ASP.NET WebForms. In order to achieve this, we merely copied the signatures of all public members into the interface, derived our own interfaces (e.g. IWxePage), and that was it. Of course, this also meant exposing the ClientScriptManager and the HttpContext as-is.

The implementation was simple and straightforward, and it served its designated purpose. But it also failed to open the door to actual test-first development, because I couldn't just mock the Page property on a control, nor the Context property on the page returned by said Page property, not to mention make it possible to test my script registration logic. All these requirements called for a much larger refactoring.

To make a long story short, I introduced an interface and a wrapper for ClientScriptManager, merged the ScriptManager's (relevant) API into this interface, and changed the type of the ClientScript property on re-motion's page interface (IPage). The same goes for the Context property and the Page property. And the result of this effort is that I'm now able to actually expect the registration of a specific script when I implement a new feature. The first beneficiaries of this approach are the web components implemented in re-vision, our document management system.
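The payoff looks something like this. Everything below (interface members, class names, the NUnit test) is a hypothetical stand-in modeled on the description above, not re-motion's actual API:

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical slice of a mockable script-manager interface.
public interface IClientScriptManager
{
    void RegisterClientScriptBlock (Type type, string key, string script);
}

// The class under test registers its script through the interface only.
public class DatePickerRenderer
{
    private readonly IClientScriptManager _clientScript;

    public DatePickerRenderer (IClientScriptManager clientScript)
    {
        _clientScript = clientScript;
    }

    public void RegisterInitScript ()
    {
        _clientScript.RegisterClientScriptBlock (
            GetType (), "DatePickerInit", "initDatePicker ();");
    }
}

// A hand-rolled test double that records the registrations.
public class RecordingClientScriptManager : IClientScriptManager
{
    public readonly List<string> Keys = new List<string> ();

    public void RegisterClientScriptBlock (Type type, string key, string script)
    {
        Keys.Add (key);
    }
}

[TestFixture]
public class DatePickerRendererTest
{
    [Test]
    public void RegisterInitScript_RegistersTheInitScriptBlock ()
    {
        var clientScript = new RecordingClientScriptManager ();
        new DatePickerRenderer (clientScript).RegisterInitScript ();

        Assert.That (clientScript.Keys, Is.EqualTo (new[] { "DatePickerInit" }));
    }
}
```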

I will do a separate blog post some time in the future showing how to leverage our web-stack, but for now the conclusion is that it's possible to write unit tests for your extensions to re-motion's web-stack.

Michael


The Root of All Evil

January 23, 2010

Okay, it’s 2010, and my New Year resolution is to fill this blog with life. Luckily for me, I have the perfect candidate desperately deserving some spotlight – the singleton pattern.

“The singleton, seriously?” you might ask. After all, it’s probably the most widely known pattern out there. For instance, let’s take a question right out of the classic job interview for a developer position. “What patterns do you know?” Care to make a guess which pattern gets picked the most? Yep, you’re right, it’s the singleton. And you know what else? It’s even a valid answer. The singleton pattern is listed by the GoF.

So, why did I choose to pick on it? Easy enough to answer – it's also the pattern that causes the most trouble when it comes to application design. Don't get me wrong, there are tons of anti-patterns that can cause even greater harm in an application, but programs suffering from really bad code typically don't employ too many of the patterns described by Gamma et al.

But enough of that. Let’s start chipping away at our global variables instead. Oops, I meant singletons! After all, there are no global variables in our nice, clean, object oriented world. Or are there?

Okay, I guess that's what you could call a loaded question. But why is a global variable a bad thing? It's accessible from every scope. It's perfect when different parts of the application need to access the same shared state. And I could list so many scenarios where a singleton makes the code really simple. All it takes is a reference to the instance and I'm golden.

Or not.

To illustrate my point, I’ll take one of the most common use cases for the singleton pattern – aside from the System.Web.HttpContext and the application configuration – the current User. That’s information I need all the time when I’m building an application that requires authentication, enforces security, logs changes to the business objects, etc.

A quick disclaimer in between: Yes, I’m aware that ‘singleton’ refers to there being just a single instance of a specific type, and there’s hardly just one User or HttpContext in a web application. That’s why I’m using the term ‘well-known instance’ most of the time. But for the sake of this blog-post, they aren’t all that different and ‘singleton’ is so much easier to type and read.

So, here I am, happily building a little booking application. One requirement is that each Order must be associated with the User who created the Order. Easy enough to do; I just assign the value from User.Current to the Order.CreatedBy field inside the Order’s constructor, and I’m done. Next, I launch the application, log in, create a new Order, and save it. I can easily verify the correct behavior through the application’s UI, via the debugger, or by checking the database.

Hmm… I know that I’m missing something. Oh, yes, the unit tests. Stupid me! I have to write a test first, and then the code that’s doing the heavy lifting. Let’s see… I get a new Order from the OrderFactory, assert that it is correctly initialized and that CreatedBy is set. Wait, that’s strange… The assertion fails; CreatedBy is still null. Oh, right, I forgot about User.Current. I set it, and the test is green.

Now that the Order class is done, I can start using it all over the place. After all, it's one of the core types of my business domain. For instance, I want to check that each User can only select his own Orders. So I create two Users, Alice and Bob, and a couple of Orders for each, remembering to change the value of User.Current halfway through the test setup…
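To make the pain concrete, here's roughly what this looks like; User, Order, and the test are illustrative sketches, not code from a real project:

```csharp
using System;
using NUnit.Framework;

public class User
{
    // The 'singleton', or rather the well-known instance.
    public static User Current { get; set; }

    public string Name { get; set; }
}

public class Order
{
    public Order ()
    {
        // The hidden dependency: every new Order silently reads global state.
        CreatedBy = User.Current;
    }

    public User CreatedBy { get; private set; }
}

[TestFixture]
public class OrderTest
{
    [SetUp]
    public void SetUp ()
    {
        // Every fixture touching Order needs this...
        User.Current = new User { Name = "Alice" };
    }

    [TearDown]
    public void TearDown ()
    {
        // ...and this. Forget it once, and state starts leaking
        // from one test into the next.
        User.Current = null;
    }

    [Test]
    public void NewOrder_IsAssociatedWithCurrentUser ()
    {
        var order = new Order ();
        Assert.That (order.CreatedBy.Name, Is.EqualTo ("Alice"));
    }
}
```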

Anyone starting to feel the pain? Don't be shy, raise your hand. I'll even go first. And before I start sketching a scenario where each and every one of my domain tests requires a correctly set current User – potentially triggering all sorts of nasty flashbacks of uprooting such evil – I'll head on to the message part of this post:

Don’t use a singleton just because you need a specific instance all over the place. And if you really do have a use case for it, think about it one more time and then don’t use a singleton.

Why? Because it flat out ruins your code's testability. For each and every singleton your system under test depends on, you need to add setup and teardown logic, always making sure that your tests don't have side effects. You don't want to do that. It's painful. It usually makes your test runtimes skyrocket. It adds an external dependency. And it can drive the next guy working on this code nuts. Btw, chances are that next guy is going to be you, a couple of weeks from now.

Does this mean I don’t use singletons? No, not really. Especially when I’m working on re-motion, it’s usually the way to go. And by ‘usually’, I mean about a dozen config classes, holding the deserialized configuration or reflection-based metadata. Neither tends to change at runtime in applications built on top of re-motion, but forcing the users of re-motion to pass those instances around when creating objects, well, that would make the API really hard to use.

If that sounds like it's okay to use singletons in a framework but not in an application, well, that's because it is. Or at least, that's what I keep telling myself to stop obsessing over designs I chose before I saw the havoc a misused singleton can wreak out there in the wild.

But what about applications? How can you get rid of your configuration singletons? Well, you can't. You still want to cache the config once it's deserialized, but you can pass it into the constructor, thus removing the dependency on the singleton instance. The same goes for the current User and other pieces of data used far and wide in the application.
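Sticking with the sketch from above, the fix is as unspectacular as it is effective:

```csharp
using System;

// The same illustrative Order as before, minus the global:
// the dependency on the creating User is now explicit.
public class Order
{
    public Order (User createdBy)
    {
        if (createdBy == null)
            throw new ArgumentNullException ("createdBy");
        CreatedBy = createdBy;
    }

    public User CreatedBy { get; private set; }
}

// A test no longer needs any setup or teardown of global state:
//   var order = new Order (new User { Name = "Alice" });
//   Assert.That (order.CreatedBy.Name, Is.EqualTo ("Alice"));
```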

Of course, as soon as you start passing this data around via constructor arguments, you find yourself thinking about object composition, inversion of control, factories and strategies, and generally structuring your code in such a way that you only have a few, carefully selected entry points. But that’s stuff for another blog post.

Michael
