Fabian's Mix

Mixins, .NET, and more

Archive for the ‘.NET development’ Category

Getting Visual Studio 2010 SP1 to run elevated when launching .sln files

with 16 comments

For different reasons, I want Visual Studio 2010 to always run as an administrator on my Windows Server 2008 R2 machine (that has UAC enabled). Therefore, I have the “Run as administrator” checkbox checked on the Compatibility tab of Windows Explorer’s Properties dialog for devenv.exe:

Compatibility properties devenv

This causes Windows to always show me the UAC prompt when I run Visual Studio 2010:

UAC prompt devenv

Unfortunately, it also causes double clicking solution files in Explorer to stop working.

The reason is that Visual Studio associates .sln files with a special program, called VSLauncher.exe, which inspects the solution file in order to decide what version of Visual Studio to open it with. This enables side-by-side installation of different versions of Visual Studio to run correctly. When VSLauncher.exe is executed by Windows Explorer because I double-clicked a solution file, it is run with normal privileges, and is therefore not permitted to run devenv.exe, which requires elevation. VSLauncher thus silently fails.

The obvious solution is to also check “Run as administrator” for VSLauncher.exe, which, in my case, is located in “C:\Program Files (x86)\Common Files\microsoft shared\MSEnv”.

And, of course, the obvious solution doesn’t work. Any more.

With my installation, it used to work just fine, but after installing SP1 for Visual Studio 2010 (or maybe even earlier), Windows started to ignore my “Run as administrator” checkbox, and VSLauncher.exe would silently fail again.

After some research, I found that the reason for Windows ignoring my compatibility setting was that VSLauncher.exe now had a manifest embedded, which contained the following fragment:


   <requestedExecutionLevel level="asInvoker" uiAccess="false"/>



So, VSLauncher.exe now specified that it always wanted to be run at the same execution level as its invoker. And, since of course the program must know better than the user, this caused Windows to ignore my own execution level setting.
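For context, that fragment sits inside the manifest’s trustInfo element. A typical layout for such an embedded manifest looks roughly like this (a sketch, not the literal VSLauncher.exe manifest):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges>
        <!-- "asInvoker" keeps the invoker's token; this is the line to change -->
        <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```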

And now, to the solution. Since Windows wouldn’t let me override what the program said it wanted, I needed to change what the program said it wanted.

To do that, I used the Manifest Tool that comes with the Windows SDK (and thus with Visual Studio):

mt -inputresource:"VSLauncher.exe" -out:VSLauncher.exe.manifest

This command extracted the manifest from VSLauncher.exe into a file called VSLauncher.exe.manifest. I then edited the manifest to request the desired execution level:


   <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>



Then, I could write back the manifest:

mt -outputresource:VSLauncher.exe -manifest VSLauncher.exe.manifest

With the desired result:

UAC prompt VSLauncher

One note of caution: Please make a backup copy of VSLauncher.exe before manipulating the manifest. And perform at your own risk.

This trick should also work with Windows 7, by the way.

Written by Fabian

May 3rd, 2011 at 9:53 am

C# 5: await and ThreadPool.SwitchTo()

with one comment

Last week, I managed to read the 38-page whitepaper explaining the .NET Task-based Asynchronous Pattern in detail. That’s the pattern behind the new C# and VB.NET “async” support everybody is blogging about, for example, Eric Lippert, Alexandra Rusina, or Jon Skeet.

Those blog posts I’ve linked to give good examples on how you can write code orchestrating asynchronous tasks in a very simple, linear fashion. For example, from Eric Lippert’s blog:

async void ArchiveDocuments(List<Url> urls)
{
  Task archive = null;
  for (int i = 0; i < urls.Count; ++i)
  {
    var document = await FetchAsync(urls[i]);
    if (archive != null)
      await archive;
    archive = ArchiveAsync(document);
  }
}

This code starts tasks that asynchronously (“without blocking the application”) fetch documents and archive them, with the coordination between the tasks written using an ordinary for loop and an if clause. This is cool because with previous asynchronous patterns in .NET, i.e. the Begin…/End… Asynchronous Programming Model or the Event-Based Asynchronous Pattern (heavily used by Silverlight applications), coordination of asynchronous tasks is quite cumbersome and cannot be written in such a linear manner.

With the Task-based Asynchronous Pattern, the term asynchronous does not necessarily mean multithreaded – whether the FetchAsync and ArchiveAsync methods run their work on a background thread, in a separate process, in the cloud, or via some esoteric operating system mechanism does not matter for the orchestration code. The Task-based Asynchronous Pattern uses the .NET synchronization context and task scheduler mechanisms to ensure the coordination method is resumed on the correct thread when an awaited task has finished performing its work; it doesn’t care how (or on which thread) exactly the work was performed.

There is an interesting consequence of this, which I haven’t seen anyone blog about yet: you can also await pseudo-tasks that do not schedule work, but only switch to another context!

Here is an example from the aforementioned whitepaper:

public async void button1_Click(object sender, EventArgs e)
{
  string text = txtInput.Text;

  await ThreadPool.SwitchTo(); // jump to the ThreadPool

  string result = ComputeOutput(text);
  string finalResult = ProcessOutput(result);

  await txtOutput.Dispatcher.SwitchTo(); // jump to the TextBox's thread
  txtOutput.Text = finalResult;
}

I like this a lot! Without await, you would use the ThreadPool.QueueUserWorkItem() method, passing in a continuation delegate that first performs the expensive computations, then calls Dispatcher.BeginInvoke(), passing in a continuation delegate that sets the text box text:

public void button1_Click(object sender, EventArgs e)
{
  string text = txtInput.Text;

  ThreadPool.QueueUserWorkItem (_ => {
      string result = ComputeOutput(text);
      string finalResult = ProcessOutput(result);

      txtOutput.Dispatcher.BeginInvoke ((Action) (() => txtOutput.Text = finalResult));
  });
}

Of course, await more or less uses the same continuation concepts under the hood. But code using await is so much easier to read than explicitly wiring up the delegates.
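For the curious, a context-switching awaitable of this kind can be sketched by hand. The code below uses the awaiter shape as it later shipped with C# 5 (GetAwaiter/IsCompleted/OnCompleted/GetResult); the whitepaper’s ThreadPool.SwitchTo() was CTP-era API, and all type and member names here are mine:

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

// A minimal awaitable whose only effect is to resume the method on a
// thread-pool thread - no work is scheduled, only the context switches.
public struct ThreadPoolAwaitable
{
  public ThreadPoolAwaiter GetAwaiter () { return new ThreadPoolAwaiter (); }
}

public struct ThreadPoolAwaiter : INotifyCompletion
{
  // Never "already completed": the continuation is always posted to the pool.
  public bool IsCompleted { get { return false; } }

  public void OnCompleted (Action continuation)
  {
    ThreadPool.QueueUserWorkItem (_ => continuation ());
  }

  public void GetResult () { }
}

public static class SwitchDemo
{
  public static ThreadPoolAwaitable SwitchToThreadPool ()
  {
    return new ThreadPoolAwaitable ();
  }

  public static async Task<bool> RunOnPool ()
  {
    await SwitchToThreadPool (); // everything below runs on a pool thread
    return Thread.CurrentThread.IsThreadPoolThread;
  }
}
```

Awaiting SwitchToThreadPool() in a UI handler moves the rest of the method to a pool thread, just like the whitepaper’s ThreadPool.SwitchTo().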

Asynchrony is one of these situations where syntactic sugar matters a lot, I think.

Written by Fabian

November 8th, 2010 at 9:29 am

Posted in .NET development

.NET 4.0 expression trees: Extension expressions

with 5 comments

As I mentioned in my last blog post, .NET 4.0 has introduced an extension expression model. Now, what did I mean by that, and why is it important?

Many LINQ providers define their own custom expression types that derive from the Expression base class and can be inserted into the expression tree. A SqlColumnExpression, for example, might represent a SQL column in an expression tree that is meant to be converted to SQL. A VBCompareExpression might represent a comparison with VB-specific semantics. In previous versions, the .NET framework provided little support for this; among other things, you had to use undefined ExpressionType enumeration values to represent your expressions, and adding visitor support for them was difficult.

With .NET 4.0, it’s now possible to define extension expressions with the following features:

  • They need not use an undefined ExpressionType value; they can use ExpressionType.Extension.
  • They can implement an Accept method to delegate to a specific visitor method if the visitor supports the custom expression type. If it does not, they can dispatch to a generic VisitExtension method (see below).
  • Extension expressions can also be reduced to a semantically equivalent tree of standard expression nodes, if possible. This enables high-level nodes to be compiled to IL, which wouldn’t make much sense for a SqlColumnExpression, but would be handy for a ForEachExpression or a VBCompareExpression.
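As a sketch, a minimal custom extension expression might look like this (SqlColumnExpression is the hypothetical node from above; in .NET 4.0, NodeType, Type, and CanReduce are virtual properties to override):

```csharp
using System;
using System.Linq.Expressions;

// A hypothetical extension node representing a SQL column.
public class SqlColumnExpression : Expression
{
  private readonly Type _type;
  private readonly string _columnName;

  public SqlColumnExpression (Type type, string columnName)
  {
    _type = type;
    _columnName = columnName;
  }

  // No undefined enum values: the node identifies itself as Extension.
  public override ExpressionType NodeType { get { return ExpressionType.Extension; } }
  public override Type Type { get { return _type; } }

  public string ColumnName { get { return _columnName; } }

  // A SQL column has no equivalent tree of standard nodes, so it is not reducible.
  public override bool CanReduce { get { return false; } }
}
```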

As noted above, ExpressionVisitor now sports a VisitExtension method, which is by default invoked by the Accept methods of extension expressions. This can be used by visitors to react to extension expression types unknown to them.

The interesting thing about VisitExtension is its default implementation, which is to call Expression.VisitChildren. VisitChildren enables extension expressions to apply visitors to their child nodes even when those visitors do not know how to handle the specific extension expression types. In a way, this overcomes the big problem of the Visitor design pattern, where the visitors need to (statically) know about the whole type hierarchy of the objects being visited.

To understand the implications of this, consider a generic filtering visitor that replaces all ConstantExpression instances with SqlParameterExpressions. The VisitChildren approach allows the visitor to replace ConstantExpressions that are located beneath a VBCompareExpression even when it doesn’t know anything about that specific expression kind.

Here’s a picture of the visitor in action:


Without the VisitChildren approach, the visitor – because it knows nothing of VBCompareExpressions – would have no way of inspecting those ConstantExpressions.
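The filtering visitor described above can be sketched in a few lines (SqlParameterExpression is a minimal hypothetical extension node; note that the visitor never mentions VBCompareExpression, yet its VisitConstant still reaches constants beneath such nodes via VisitExtension’s default call to VisitChildren):

```csharp
using System;
using System.Linq.Expressions;

// A hypothetical extension node standing in for a SQL query parameter.
public class SqlParameterExpression : Expression
{
  private readonly Type _type;
  public readonly object Value;

  public SqlParameterExpression (Type type, object value)
  {
    _type = type;
    Value = value;
  }

  public override ExpressionType NodeType { get { return ExpressionType.Extension; } }
  public override Type Type { get { return _type; } }
}

// Replaces every ConstantExpression with a SqlParameterExpression - even
// beneath extension nodes the visitor knows nothing about.
public class ConstantToParameterVisitor : ExpressionVisitor
{
  protected override Expression VisitConstant (ConstantExpression node)
  {
    return new SqlParameterExpression (node.Type, node.Value);
  }
}
```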

As I said in my last post, I think .NET 4.0 expression trees do add a lot to their 3.5 counterparts, and the extension expression model is one of the most interesting new features. Because I like that concept so much, I’m going to copy it for re-linq. That way, every LINQ provider based on re-linq will be able to use extension expressions for both .NET 3.5 and .NET 4.0. And, if you haven’t guessed by now, I’ll use that model for finally implementing VB.NET support in re-linq. But more on that in a future blog post.

Written by Fabian

February 18th, 2010 at 4:57 pm

.NET 4.0 expression trees: Code gen, blocks, and visitors

without comments

Yesterday, I did some research on how expression trees have changed with the upcoming version 4.0 of the .NET framework. Of course, this is important for LINQ providers (and thus also for re-linq) because LINQ providers may have to change when Microsoft changes the format and/or capabilities of expression trees. However, it’s also interesting to people performing custom code generation, because .NET 4.0 expression trees have gained a lot of new functionality related to custom code generation.

Let’s start with the latter: expression trees are becoming more suited for custom code generation. How’s that?

LambdaExpression has always had a Compile method that allows you to compile an expression tree to a dynamic method and execute it in the same process. It does not allow you to embed a compiled expression tree into a dynamic assembly generated via Reflection.Emit, however – and this has changed with .NET 4.0. By using the new CompileToMethod feature, expression trees can now be emitted into a MethodBuilder, which enables people to combine expression trees with Reflection.Emit. As with most of the new expression tree features, you can definitely see the influence of the DLR here 🙂
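To illustrate, here is a minimal CompileToMethod sketch (the assembly, type, and method names are mine; CompileToMethod requires a static method on a non-generic TypeBuilder):

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;
using System.Reflection.Emit;

public static class EmitDemo
{
  // Emits a static int Add(int, int) from an expression tree into a
  // dynamically generated type and returns a delegate for it.
  public static Func<int, int, int> BuildAdd ()
  {
    var assembly = AppDomain.CurrentDomain.DefineDynamicAssembly (
        new AssemblyName ("Generated"), AssemblyBuilderAccess.Run);
    var module = assembly.DefineDynamicModule ("Main");
    var typeBuilder = module.DefineType ("Ops", TypeAttributes.Public);
    var methodBuilder = typeBuilder.DefineMethod (
        "Add", MethodAttributes.Public | MethodAttributes.Static);

    var x = Expression.Parameter (typeof (int), "x");
    var y = Expression.Parameter (typeof (int), "y");
    var lambda = Expression.Lambda<Func<int, int, int>> (
        Expression.Add (x, y), x, y);

    // The new .NET 4.0 feature: compile the tree into the MethodBuilder.
    lambda.CompileToMethod (methodBuilder);

    var type = typeBuilder.CreateType ();
    return (Func<int, int, int>) Delegate.CreateDelegate (
        typeof (Func<int, int, int>), type.GetMethod ("Add"));
  }
}
```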

In addition, there is a whole bunch of new expression node types: “expression” trees can now also include statements! BlockExpression allows you to group a sequence of “expression” statements together (discarding the values of all but the last one), GotoExpression allows you to implement control flow, and BinaryExpression has been extended to represent assignments. This means that .NET 4.0 expression trees really can represent arbitrary code constructs, which is a very good thing! Bart de Smet has a blog post illustrating expression trees with statements.
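A small sketch of the statement nodes in action (variable and names are mine):

```csharp
using System;
using System.Linq.Expressions;

public static class BlockDemo
{
  // Builds and runs the equivalent of: { int v; v = 21; v *= 2; return v; }
  public static int Run ()
  {
    var v = Expression.Variable (typeof (int), "v");
    var block = Expression.Block (
        new[] { v },
        Expression.Assign (v, Expression.Constant (21)),
        Expression.MultiplyAssign (v, Expression.Constant (2)),
        v); // a block's value is the value of its last expression
    return Expression.Lambda<Func<int>> (block).Compile () ();
  }
}
```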

However, for LINQ providers, it would actually be a very bad thing to have these kinds of constructs in LINQ expressions. Imagine having to translate a GotoExpression to SQL, for example. The good news is that, according to this blog post comment, it’s unlikely that LINQ providers will have to deal with any of the new expression node types added by .NET 4.0, as the compilers still don’t allow statement expressions in LINQ queries.

So, what else is new in .NET 4.0’s System.Linq.Expressions namespace? Well, for example, there is finally a public ExpressionVisitor class. Expression visitors are at the core of every LINQ provider, and it has always been a pity that .NET 3.5’s ExpressionVisitor was internal. Therefore, re-linq had to define its own ExpressionTreeVisitor base class.

However, there’s more to .NET 4.0’s ExpressionVisitor than the ordinary “switch on node type and dispatch to the respective strongly typed Visit method” every LINQ visitor base class has consisted of until now: Microsoft have finally implemented real double dispatch for expressions! In .NET 4.0, the Expression base class now has an Accept method, and the concrete expression classes dispatch to the visitors, just as Gamma et al. described in 1994! And the sweetest part of it: they’ve thought of including an extension expression model. I’ll write about this in a separate blog post, but in a nutshell, I consider this a great design achievement over .NET 3.5’s expression trees.

All in all, I think .NET 4.0 expression trees really add a lot to their 3.5 counterparts, and I’m looking forward to using them. If you want to read more about the topic, I’d suggest perusing the DLR expression tree specs, as MSDN doesn’t really have much information at the moment.

Written by Fabian

February 18th, 2010 at 4:47 pm

Trying to resolve a method in a closed generic type?

without comments

Have you ever wanted to use Module.ResolveMethod to get a MethodInfo within a closed generic type?

Most people don’t, I know, but I’ve had this problem a few times in the past.

Consider the following example:

using System;
using System.Reflection;

class C<T>
{
  public void M (T t)
  {
  }
}

class Program
{
  public static void Main ()
  {
    var methodToken = typeof (C<int>).GetMethod ("M").MetadataToken;

    var method1 = typeof (C<int>).Module.ResolveMethod (methodToken);
    Console.WriteLine (method1);

    var method2 = typeof (C<int>).Module.ResolveMethod (
      methodToken,
      typeof (C<int>).GetGenericArguments (),
      null);
    Console.WriteLine (method2);
  }
}

This won’t work: method1 and method2 are both resolved to C<T>.M, not C<int>.M:

Void M(T)
Void M(T)
Press any key to continue . . .

Although Module.ResolveMethod does have an overload with a genericTypeArguments parameter, that parameter doesn’t actually mean you can specify the type parameters of the type declaring the method to be resolved.

Thus, to resolve to C<int>.M, do the following:

var method3 = MethodInfo.GetMethodFromHandle (
    typeof (C<int>).Module.ResolveMethod (methodToken).MethodHandle,
    typeof (C<int>).TypeHandle);

Console.WriteLine (method3);

Void M(Int32)
Press any key to continue . . .

I tried to add that to MSDN as Community Content, but (see lower right corner):

Error occurred while saving my data.

Written by Fabian

August 12th, 2009 at 3:29 pm

Posted in .NET development

re-linq: End of Refactoring – (at Least for Now)

with 4 comments

As you can see from my last update on re-linq, we’ve now finished our planned refactoring for re-linq’s frontend, i.e. its main parsing infrastructure. Sure, that’s not a guarantee that there are no rough edges left, features missing, or bugs present (I’d rather not give that guarantee). But it means that the plan I gave in the middle of June has now been implemented.

For those of you who don’t remember, let me recap:

  1. We’ve implemented a completely new structural parsing approach for LINQ queries. (See IExpressionNode, QueryParser, ExpressionTreeParser.)
  2. We’ve implemented a powerful resolution mechanism which transforms select projections, where conditions, and similar expressions so that they point back to the query sources where their data actually stems from. (See IExpressionNode.Resolve and QuerySourceReferenceExpression.)
  3. We’ve cleaned up a lot and moved the SQL-specific parts (which, by the way, have not yet been refactored because they are not so useful for general-purpose LINQ providers) out of the way. We’ve rewritten eager fetching support and made it opt-in.
  4. We’ve changed our QueryModel and the clause classes into something truly transformable. You can simply take a Where clause from the left of a join and insert it to the right, if you want to. Or you can take subqueries in from clauses and flatten them out (at least if that’s semantically possible).
  5. We’ve implemented support for the Join and GroupJoin query methods.
  6. We’ve implemented support for the GroupBy query methods.

We believe that now re-linq truly is a great LINQ provider foundation that aids significantly in writing LINQ providers for any query system. And I’m talking about feature-rich, maintainable, and extensible LINQ providers, not the “let’s cast it to MethodInfo and check whether the method name is ‘Where’” ones. (If you believe that’s not so hard anyway, read Frans Bouma’s list of difficulties in creating a LINQ provider. Or just try to implement one that supports more than the most simple chains of Select and Where calls.)

This is not going to be the end of our extending re-linq, of course. I’ve lots of good ideas for further additions, for example:

  • a simplifying transformation for identifying SQL-compatible calls to GroupBy,
  • a simplifying transformation for identifying GroupJoins whose group items are used in a SelectMany clause, maybe modified with DefaultIfEmpty,
  • an equality comparer for expression trees,
  • a caching infrastructure,
  • support for parsing of instance query methods,
  • support for the GroupBy overload with a ResultSelector,
  • and so on.

However, those ideas aren’t our top-most priorities at the moment. And this is an open-source project, after all. So if you have a good idea: send us a patch!

Written by Fabian

July 31st, 2009 at 6:43 pm

re-linq: Update Number Three

without comments

Tempus fugit, as they say, and it has already been more than three weeks since my last update on re-linq’s progress. Needless to say that there’s been a lot going on in the code base. Here’s a recap of what has changed.

Result operators

  • We redesigned the result operators (previously: result modifications). Those are query operators such as Distinct, First, or Count, which are not part of a query’s clauses, but which act on the query’s result set, grouping, filtering, aggregating, or choosing single elements. They were also moved from the SelectClause to the QueryModel.
  • When a result operator is followed by a “normal” query method, such as Where, Select, or OrderBy, we now wrap everything coming prior to that query method into a subquery. Previously, we just reordered those query methods, putting them before the result operator. Which, of course, is wrong because the order of query methods is usually very important in a LINQ query. (Although, of course, there are cases where you want to reorder clauses.)
  • We changed the handling of result operators that have (optional) selectors and predicates. We now handle those exactly as if there were Where and Select method calls directly in front of the operator.
  • We fixed the Take result operator: its Count property is now an Expression rather than an integer because Take might refer to another query part delivering the number of items to take.
  • We implemented lots of new result operators: Skip, Reverse, Union, Intersect, Except, Average, LongCount, DefaultIfEmpty, Cast, OfType, and Contains.

Data stream modeling and query execution

  • We developed and implemented a model of the data that streams from the Select clause through result operators (being transformed in that process) and finally out of the QueryModel. This data is represented in an abstract form as IStreamedDataInfo objects, concrete data values are represented as IStreamedData objects. You can obtain the data info via QueryModel.GetOutputDataInfo(); this replaces QueryModel.GetResultType().
  • Based on IStreamedData, we improved the ExecuteInMemory facilities of the result operators. This means that a LINQ provider based on re-linq can now easily execute result operators in memory that it can’t translate to its destination query language. This should of course be used very carefully, as fetching all data in memory and then filtering it there might quickly become very inefficient. However, it’s a quick way to get started.
  • We eliminated IExecutionStrategy. Executing queries is now performed by QueryModel.Execute() in conjunction with IStreamedDataInfo.
  • We refactored IQueryExecutor, there’s now an additional ExecuteSingle() method for queries that end with a single-item result operator, such as First, Last, Single, Min or Max.

Group-By support

  • We added support for GroupBy. We regard group operations as result operators, i.e. they are attached to a QueryModel and executed on the query’s result set. GroupBy is interesting insofar as it is the only query operator (yet) that also acts as a query item source. This means that a result operator following a GroupOperator (for example another GroupOperator) will have the IGroupings produced by the GroupOperator as its input.
  • In order to support GroupOperator.ExecuteInMemory(), we’ve developed a reverse resolver. This takes a resolved expression such as [student].Name (where [student] is a reference to a query item source) as well as the structure of the data streaming into the GroupOperator (e.g. new {[student], [course]}) and produces a LambdaExpression that evaluates the expression when passed an input item. We call this reverse resolving because it’s exactly the opposite of what our field access resolution mechanism does when a LINQ query is parsed.
    Now, if you ask why we implemented this even though the information is available when parsing queries _before_ we resolve expressions, the answer is simple: we have a transformable query model. You can easily append a new GroupOperator or change the selector of a SelectClause at runtime. This means that we cannot just keep the information we have from parsing – that information might be completely outdated. So we have to recalculate the LambdaExpressions if we want to perform an ExecuteInMemory operation.

Eager fetching

  • We rewrote eager fetching. Eager fetch requests are now represented by result operators attached to the QueryModel. The ad-hoc fetchRequests parameters passed to IQueryExecutor are gone; instead, FetchFilteringQueryModelVisitor should be used to extract fetch requests from a QueryModel (if a query executor supports eager fetching).
  • The query methods representing the entry points to eager fetching were moved to re-store. For users of re-linq, this means that eager fetching is now an opt-in functionality: just provide the respective query methods if you want to support eager fetching. Otherwise, don’t.

Other query methods

  • We implemented support for the Join and GroupJoin query methods.
  • We now also support Enumerable’s query methods. This means that expressions such as from expressions or where conditions that use Enumerable.Select() or similar methods are now parsed as subqueries.

Other refactorings

  • We restructured the classes in the Remotion.Data.Linq namespace. We moved our Data Model as well as everything related to SQL generation to a Backend sub-namespace. The classes in this namespace are only relevant to LINQ providers producing SQL (such as the one for re-store).
  • We did lots of other, minor refactorings.
  • And we also fixed a few conceptual bugs.

Wow, that’s a long list. But we had nearly four weeks, and we did use them well, I think.

Written by Fabian

July 31st, 2009 at 6:38 pm

May the Params be With you

with 2 comments

This is the fifth in a series of posts.

Last time, I announced that I’d probably write about mixins and inheritance next. That’s still on my list, but today, I would like to talk about two lines of code I’ve posted in the past and never really explained. Here they are:

ObjectFactory.Create<MyTargetClass> ().With (x, y, z);

ObjectFactory.Create<File> (ParamList.Empty);

The first line is from the post “What can we do for you? (Features of re-motion mixins)”, and in that post, I wrote:

The With is a wart, unfortunately. Short story: x, y, and z are the constructor parameters for the mixed object, and the With method takes care of their types being respected. (If you are thinking, “Why aren’t they just using a params object[]?” – because that wouldn’t respect the parameters’ types. I’ll explain another time.)

With will soon be replaced by a ParameterList object or similar, so that it will read somewhat like this:

MyTargetClass myMixedObject =
ObjectFactory.Create<MyTargetClass> (ParameterList.Create(x, y, z));

So, With was a wart, and it has been replaced. Not by a ParameterList object, but by a ParamList object. But why had it been there in the first place?

To answer this, let’s take a look at the interesting and amusing topic of dynamically instantiating classes. With parameters.

Suppose you are implementing a framework, for example one providing mixin support for statically typed .NET languages. To do so, you will probably need to provide a generic factory, i.e. a class that allows dynamic instantiation of arbitrary objects. Let’s call it TheObjectFactory:

public class TheObjectFactory
{
  public static T Create<T> (…)
  {
  }
}

How do you implement that Create factory method so that you can pass it an arbitrary number of arguments?

The standard way to do this in .NET is a params array. It looks as follows:

public static class TheObjectFactory
{
  public static T Create<T> (params object[] args)
  {
  }
}

With a params array, the C# compiler allows the Create method to be called with any number of arguments. It will wrap them up into an array and pass that array to the method at runtime. This is the approach used by factory methods in the .NET framework library, such as Activator.CreateInstance, so it can’t be wrong, can it?

Well, most of the time it works. Unless you are in one of the following situations:

  • You want to pass a single argument to the factory method and the argument is an array.
    In this case, the C# compiler thinks the array you’re passing is the params array, and it will thus not wrap it up.
    At runtime, the factory method will then look for a constructor with a signature corresponding to the array’s elements, which will either fail or find the wrong constructor.
    The workaround is to manually wrap up the array into another array and to pass that wrapper as the params array.
  • You want to pass a single argument to the factory method and the argument is null.
    Again, the C# compiler will think the null you’re passing is the params array, and again, it will not wrap it up.
    At runtime, the factory method will either throw an exception or think you’re looking for the default constructor (Activator.CreateInstance does that).
    The workaround is again to manually wrap it up.
  • You want to pass a null argument to the factory method, and there are two or more constructors accepting reference types.
    Even when your null value is wrapped up correctly, overload resolution at runtime will fail if more than one constructor accepting reference types exists.
    Of course, this must fail, after all, you cannot infer which of the constructors to call.
    However, the workaround is really awkward: you need to pass an additional Type array, which denotes the signature of the constructor to be called.

These situations tend to surface quite often; they surface only at runtime, and sometimes you might not even notice you’re calling the wrong constructor! Also, the workarounds aren’t really beautiful.
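The first two pitfalls can be demonstrated in a few lines (a sketch; CountArgs stands in for the params-based factory method):

```csharp
using System;

public static class ParamsPitfall
{
  // Returns how many arguments the method actually saw, or null if the
  // params array itself arrived as null.
  public static int? CountArgs (params object[] args)
  {
    return args == null ? (int?) null : args.Length;
  }
}

// CountArgs (1, "two")           -> 2: arguments wrapped up as expected
// CountArgs (new object[] { 1 }) -> 1: the array IS the params array, not wrapped
// CountArgs (null)               -> null: not wrapped either!
// CountArgs ((object) null)      -> 1: the manual wrap-up workaround
```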

Therefore, we considered other possibilities for passing arbitrary arguments to a factory method. And we (well, Stefan, actually) had an idea: we could exploit C# generics to have the C# compiler hand over the type information together with the argument’s values. This was first implemented in a fluent interface way (With), which worked nicely but was a little unintuitive, so it got rewritten as a generic parameter object (ParamList).

ParamList is implemented as follows:

public abstract class ParamList
{
  public static ParamList Empty = new ParamListImplementation ();

  public static ParamList Create () { return Empty; }

  public static ParamList Create<A1> (A1 a1)
  {
    return new ParamListImplementation<A1> (a1);
  }

  public static ParamList Create<A1, A2> (A1 a1, A2 a2)
  {
    return new ParamListImplementation<A1, A2> (a1, a2);
  }

  public static ParamList Create<A1, A2, A3> (A1 a1, A2 a2, A3 a3)
  {
    return new ParamListImplementation<A1, A2, A3> (a1, a2, a3);
  }

  // More Create overloads …

  public abstract object InvokeConstructor (
      IConstructorLookupInfo constructorLookupInfo);
}


The ParamList.Create methods have overloads taking up to 20 generic arguments. When one of them is called, the C# compiler will automatically infer the generic argument types from the actual parameters specified at the call site, and the method will instantiate a specific ParamListImplementation object that not only knows the parameter values but also the parameter types inferred by the compiler.

There are variants of the ParamListImplementation classes for up to 20 generic arguments as well, and each of them implements the InvokeConstructor method. Here is the variant with one generic argument A1:

public override object InvokeConstructor (
    IConstructorLookupInfo constructorLookupInfo)
{
  var funcDelegate = (Func<A1, object>)
      constructorLookupInfo.GetDelegate (typeof (Func<A1, object>));
  return funcDelegate (_a1);
}
InvokeConstructor gets passed an IConstructorLookupInfo object, which can look up constructors with a specific signature and return a delegate calling such a constructor. GetDelegate is invoked with a delegate type built from the parameter types inferred when Create was called, and the resulting delegate is then called with the arguments that were passed to Create.

With ParamList, the factory can be implemented as follows:

public static class TheObjectFactory
{
  public static T Create<T> (ParamList ctorArgs)
  {
    var info = new ConstructorLookupInfo (typeof (T));
    return (T) ctorArgs.InvokeConstructor (info);
  }
}

Now, why does this help us?

First, the common usage scenarios of the factory still work. You can create a ParamList with an arbitrary number of arguments (see below about the limit of twenty), and type inference will ensure that the ParamList passes the right argument types on to the ConstructorLookupInfo.

Second, the cases I mentioned above, now either work, yield a compiler error, or at least offer a decent solution:

  • You want to pass a single argument to the factory method and the argument is an array.
    In this case, the C# compiler will infer that you are passing an array, and the ConstructorLookupInfo will look for a constructor taking an array. Works.
  • You want to pass a single argument to the factory method and the argument is null.
    Here, the C# compiler will not be able to infer the type (since null does not have any ordinary type). It will yield a compiler error, but you can cast the null value to a specific type.
  • You want to pass a null argument to the factory method, and there are two or more constructors accepting reference types.
    This must fail at runtime, but to make the call non-ambiguous, you can cast the value to one of the specific types taken by the constructors.

Since it eliminates two possibilities for silent failure and adds a decent way of removing the ambiguity for the third, we decided to go this way with re-motion’s ObjectFactory class.

But what about the limit of 20 arguments? If you really need to call a constructor with more than 20 arguments (what?!), you can use the ParamList.CreateDynamic method, which falls back to the array approach. But this really shouldn’t be needed on a regular basis.

So, to wrap this up: re-motion’s ObjectFactory has to be able to take an arbitrary number of constructor arguments. We didn’t want to go the params array route because it has a few important shortcomings. Therefore, we built a parameter object, ParamList, which uses C#’s generic type inference not only to collect the constructor arguments’ values but also their types. What we gained is cleaner overload resolution, fewer gotchas, and better type safety.

Written by Fabian

March 2nd, 2009 at 2:48 pm

Could not load something or something else

without comments

Have you ever tried to debug an exception message similar to the following?

Could not load file or assembly ‘XXX, Version=, Culture=neutral, PublicKeyToken=fee00910d6e5f53b’ or one of its dependencies.

Sometimes, but not always, the message includes additional details. For example, the following might be appended to the exception message: “The system cannot find the file specified.”

To debug this, first of all, try to find out which kind of exception it is.

Then, check the file.

  • FileNotFoundException: Is it actually there?
  • BadImageFormatException: Is it a valid .NET assembly? (Check this via peverify.exe.)
  • FileLoadException: Did you deploy the correct version of the file?
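Sketched as a diagnostic catch block (the Assembly.Load call is just a stand-in for whatever operation triggers the bind in your application):

```csharp
using System;
using System.IO;
using System.Reflection;

class LoadDiagnostics
{
  static void Main ()
  {
    try
    {
      Assembly.Load ("XXX"); // stand-in for the operation that fails
    }
    catch (FileNotFoundException ex)
    {
      Console.WriteLine ("Assembly not found: " + ex.FileName);
    }
    catch (BadImageFormatException ex)
    {
      Console.WriteLine ("Not a valid .NET assembly: " + ex.FileName);
    }
    catch (FileLoadException ex)
    {
      Console.WriteLine ("Found, but could not be loaded (wrong version?): " + ex.FileName);
    }
  }
}
```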

Up to now, this has been very basic, nothing special going on. But what if you’ve checked the above questions and still can’t figure it out? I.e., the file is there, it is valid, and it is the version you think is right? (Or peverify.exe won’t tell you what’s wrong because it can’t load the file either.)

Don’t fret – as the message tells you, the error needn’t be caused by assembly XXX at all! The culprit might also be one of its dependencies, i.e., one of the assemblies referenced by XXX.

Thank you, dear exception message, but which one of the twenty-something assemblies referenced by XXX do you mean?
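One way to narrow it down yourself is to try loading each referenced assembly in turn and see which one blows up. A quick diagnostic sketch:

```csharp
using System;
using System.Reflection;

class DependencyProbe
{
  static void Main ()
  {
    // Load the assembly named in the error message, then probe its references.
    var assembly = Assembly.Load ("XXX");
    foreach (AssemblyName reference in assembly.GetReferencedAssemblies ())
    {
      try
      {
        Assembly.Load (reference);
      }
      catch (Exception ex)
      {
        Console.WriteLine (reference.FullName + ": " + ex.Message);
      }
    }
  }
}
```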

To find this out, you can inspect the Fusion log. This is the trace information written by Fusion, .NET’s assembly loader. To view it, run Fuslogvw.exe, the assembly binding log viewer. In the viewer, enable the kind of logging you need (e.g. log bind failures), then run your piece of software, then click “Refresh”, and you should see detailed logging information. If you don’t because the log tells you the load result is cached, restart the application causing the error.

Great, but shouldn’t the same information be contained in the exception’s FusionLog property?

Exactly, it should be, and it is; but only if you’ve enabled assembly bind failure logging. (You can do that by setting the registry value HKLM\Software\Microsoft\Fusion\LogFailures to 1 (DWORD).) Once logging has been enabled, your application or debugger can read the failure information from that property, and the ToString representation of the exception should also contain the log message.
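With logging enabled, the log can be read straight off the exception:

```csharp
using System;
using System.IO;
using System.Reflection;

class FusionLogDemo
{
  static void Main ()
  {
    try
    {
      Assembly.Load ("XXX"); // stand-in for the failing bind
    }
    catch (FileNotFoundException ex)
    {
      // FusionLog is only populated when bind failure logging is enabled.
      Console.WriteLine (ex.FusionLog);
    }
  }
}
```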

You can find more information about Fusion and assembly loading in Suzanne Cook’s .NET CLR Notes blog, which (among other things) has great information about how Fusion actually works and how to debug it.

Written by Fabian

January 23rd, 2009 at 11:15 am

Posted in .NET development

We have a Facility!

with 2 comments

Lee Henson has implemented a Castle Windsor facility for re-motion’s ObjectFactory class. With it, it is possible to have the container resolve components that are mixed by re-motion. Lee’s original intention was to have a mixed MonoRail controller, which should now be perfectly possible.

Quoting from Lee’s announcement on the Castle developers list:

Sample usage is the specs in the project, but sample (notepad) code
would be:

<wherever you are creating your container>

var remotionFacility = new RemotionFacility ();
container.AddFacility ("remotion.facility", remotionFacility);

<register components>

<later on>

var component = container.Resolve<ComponentWithAvailableMixins>();
((MyMixin) component).SomeMethod();

Here’s the link: http://github.com/leemhenson/re-motion/tree/c98d2439d8e78a31cc866ecc47a772b5a74ec34f/RemotionFacility.

UPDATE: We now have a Remotion-Contrib repository, find the facility here: https://svn.re-motion.org/svn/Remotion-Contrib/WindsorFacility/trunk/.

Thanks a lot for making this work, man!

How it works

Lee and I mailed around a little to see how to do this, and while Lee did all the implementation work, it’s quite easy to explain what it does. The RemotionFacility is a simple Windsor facility that hooks up a special RemotionActivator class for components that have mixins configured. That activator uses re-motion’s ObjectFactory to instantiate the component (after all of its constructor dependencies have been resolved) – and voilà, there is your mixed component.
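A rough sketch of that mechanism, from memory – this is not Lee’s actual code, the HasMixins helper is hypothetical, and the Windsor API details shown should be double-checked against the facility itself:

```csharp
using Castle.Core;
using Castle.MicroKernel.Facilities;

public class RemotionFacilitySketch : AbstractFacility
{
  protected override void Init ()
  {
    // Inspect every component model as it is created by the kernel...
    Kernel.ComponentModelCreated += OnComponentModelCreated;
  }

  private void OnComponentModelCreated (ComponentModel model)
  {
    // ...and swap in the re-motion-aware activator for mixed components,
    // so instantiation goes through ObjectFactory instead of plain "new".
    if (HasMixins (model.Implementation))
      model.CustomComponentActivator = typeof (RemotionActivator);
  }

  private static bool HasMixins (System.Type type)
  {
    // Hypothetical: query re-motion's mixin configuration for the type.
    return false;
  }
}
```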

What can go wrong

The RemotionFacility has constructors allowing you to use an auto-generated configuration, specify the assemblies to scan, or specify a dedicated MixinConfiguration instance. However, even when you specify the configuration by hand, re-motion will still want to do an assembly scan of those assemblies in your bin folder at runtime. This is the default policy and cannot be easily disabled in the current re-motion trunk revisions, so you might get exceptions when you have assemblies with dangling references in your project’s bin folder. Such an exception is nearly always a deployment bug, so you should really clean up your folder, but if you can’t (or don’t want to), the only way around it currently is to set a custom ITypeDiscoveryService via ContextAwareTypeDiscoveryUtility.SetDefaultInstance(…) before doing anything mixin-related. (Or, better, before doing anything re-motion-related, since a lot in re-motion is mixin-related.)
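As a sketch of that workaround – assuming only the standard System.ComponentModel.Design.ITypeDiscoveryService interface; MyComponent is a placeholder – you could hand re-motion a service restricted to assemblies you explicitly trust:

```csharp
using System;
using System.Collections;
using System.ComponentModel.Design;
using System.Linq;
using System.Reflection;

// A minimal ITypeDiscoveryService limited to explicitly listed assemblies,
// so re-motion never scans the (possibly broken) rest of the bin folder.
public class FixedAssemblyTypeDiscoveryService : ITypeDiscoveryService
{
  private readonly Assembly[] _assemblies;

  public FixedAssemblyTypeDiscoveryService (params Assembly[] assemblies)
  {
    _assemblies = assemblies;
  }

  public ICollection GetTypes (Type baseType, bool excludeGlobalTypes)
  {
    baseType = baseType ?? typeof (object);
    return _assemblies
        .SelectMany (a => a.GetTypes ())
        .Where (baseType.IsAssignableFrom)
        .ToArray ();
  }
}

// Installed before any mixin-related (or re-motion-related) code runs:
// ContextAwareTypeDiscoveryUtility.SetDefaultInstance (
//     new FixedAssemblyTypeDiscoveryService (typeof (MyComponent).Assembly));
```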

(This will be fixed to make it easier in the future.)

EDIT (2009-02-10): Updated link to facility.
EDIT (2009-03-23, 2011-04-22): Updated link to facility. Again.

Written by Fabian

January 21st, 2009 at 1:26 pm