IE crashes when opening documents in SharePoint

For some reason my browser started to crash when I tried to open documents in SharePoint. After some investigation I found out that I had multiple versions of an assembly called Owssupp.dll confusing IE: one version from Office 2003 and another from Office 2007. The latter had been installed while I was evaluating OneNote 2007. I solved the problem by uninstalling OneNote 2007. I found a link that describes the problem: http://geekswithblogs.net/redwards/archive/2007/06/18/113287.aspx. Additional information can be found here: http://support.microsoft.com/kb/833714.

Exception has been thrown by the target of an invocation

I was leading a lab today. The first exercise has a starting solution with some basic code that you are supposed to enhance during the implementation. One of the very first things you do is run the unit tests to make sure that the code works. It has always worked, that is, until today. One of the participants couldn’t execute any of the unit tests. Visual Studio kept displaying the same error message for every attempt to run any of them: “Exception has been thrown by the target of an invocation”.

We searched the Internet and found many suggestions on what might cause the problem. Finally we found out that the reason was that his development environment wasn’t connected to TFS. The lab is of course checked into TFS, and there is a bug in VS 2008 that causes this problem if you open a source-controlled solution in a Visual Studio that is disconnected from TFS (http://blogs.msdn.com/b/shitals/archive/2009/12/31/9942725.aspx).

Slow Menus in VS 2010

In one of my assignments they use VS 2010, and I recently had to install it. My first impression wasn’t very good: the menus in VS 2010 were extremely slow. I tried different settings in the environment, but nothing helped. I checked whether Google could give me an answer, and it could! It turned out that my computer was configured to use 16-bit colour quality. By increasing it to 32-bit, the problem went away.

Refactoring and Code Coverage

When refactoring code it is sometimes necessary to remove a project. This has happened a couple of times in my current project. A while ago I had to remove an assembly from the application when I refactored the code. All my unit tests worked both locally and on the build server, but the build report said that the build had only partially succeeded. The build log included an error message saying that the old assembly couldn’t be found and copied during the build process. This was a bit strange, since I was pretty sure that I had removed all dependencies and Visual Studio shouldn’t be looking for it. After some discussion with my colleagues, googling and testing, I found out that the problem was that the assembly was still registered for code coverage. That’s why Visual Studio was still looking for it, and by removing it from code coverage the build succeeded completely again.

Deployment Folder

The unit tests run in a separate deployment folder and not in the folder where they were created. The separation helps to ensure better test runs and more accurate test results. During a test run, Visual Studio performs several steps, and one of them is to copy all necessary files to the deployment folder. The deployment folder can be local or remote. A remote deployment folder is used when working with test rigs.

The files are copied to the deployment folder in a specific order. Files that are copied later overwrite files copied earlier if they have the same name. Visual Studio copies the files in the following order:

1) The files that comprise the test.

2) Binaries that are instrumented, for example files that are used in code coverage analysis.

3) Files specified in the run configuration.

4) Dependent files, for example dependent assemblies and config files.

5) Deployment items specified per test.

DeploymentItem

I found a better solution to my problems with the build server and the assembly that wasn’t copied to the out directory. You can specify which additional items should be copied to the out directory for a unit test with the DeploymentItem attribute. I annotated my unit test with the following line of code.

[DeploymentItem("Microsoft.Practices.Unity.Interception.Configuration.dll")]

Thereafter I could remove my dummy code and the unit tests worked as expected. More information can be found at http://msdn.microsoft.com/en-us/library/ms182475(VS.80).aspx.
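
As a concrete illustration, a unit test annotated with the attribute could look like the sketch below. Only the attribute line comes from this post; the test method itself is an assumption and reuses the Container class described further down on this page.

[TestMethod]
[DeploymentItem("Microsoft.Practices.Unity.Interception.Configuration.dll")]
public void ResolveFromConfigurationTest()
{
    // The dll is deployed next to the test binaries, so configuration sections
    // that load types from it dynamically can be resolved during the test run.
    IPerson person = Container.Resolve<IPerson>();
    Assert.IsNotNull(person);
}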

Instrumented assemblies and the build server

In my previous post I linked to a lab on the Unity interception extension. I have used a similar solution in one of my current projects. In this project we use TFS and a build server with automatic builds. All unit tests passed when I ran them locally, but they failed on the build server. I received the following error message.

System.TypeInitializationException: The type initializer for 'xxxContainer' threw an exception. ---> System.Configuration.ConfigurationErrorsException: An error occurred creating the configuration section handler for unity: Value cannot be null.

When I examined the out directory on the build server, I noticed that the file Microsoft.Practices.Unity.Interception.Configuration.dll was missing. It seems like the build server just copies instrumented dlls to the out directory. I do not explicitly use anything from this assembly in my code, so it wasn’t copied; the types are loaded dynamically at runtime. I added a piece of dummy code that used a type in the assembly, and then the assembly was copied to the out directory and all unit tests passed on the build server. There must be a better way to solve this…
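
For reference, the "dummy code" can be as small as a static reference to any public type in the assembly. The class below is a sketch of that idea; the type chosen here, InterceptionConfigurationElement, is one of the types mentioned later on this page, but exactly which type is referenced does not matter.

using System;
using Microsoft.Practices.Unity.InterceptionExtension.Configuration;

internal static class ForceCopyOfInterceptionConfiguration
{
    // Never used at runtime; it only exists to create a compile-time dependency
    // so that the assembly is copied to the out directory by the build.
    private static readonly Type dummy = typeof(InterceptionConfigurationElement);
}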

Unity Interception Extension

Dependency injection is a design pattern that makes an application easier to maintain. It also makes the application easier to test, since you can mock dependencies on other systems.

Interception is a design pattern used to separate cross-cutting concerns from the business logic and make it possible to apply them to different parts of the system when needed, for example to temporarily monitor the load of a component or to add extra trace messages to a method.

An object must be interceptable before it is possible to apply different policies to it at runtime. A policy defines which handlers should execute for which methods or objects, for example to log a message before and after the execution of a method.
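
To make the handler concept concrete, here is a minimal sketch of a custom call handler written against Unity's interception extension. The class name is made up, and the lab itself uses handlers configured through PIAB rather than this code.

using System;
using Microsoft.Practices.Unity.InterceptionExtension;

public class TraceCallHandler : ICallHandler
{
    public int Order { get; set; }

    public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
    {
        Console.WriteLine("Entering " + input.MethodBase.Name);

        // Invoke the next handler in the pipeline, or the target method itself.
        IMethodReturn result = getNext()(input, getNext);

        Console.WriteLine("Leaving " + input.MethodBase.Name);
        return result;
    }
}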

Unity has an interception extension and can resolve objects that are interceptable. It is possible to use the extension in combination with the configuration made in the Policy Injection Application Block (PIAB). That is, we can define in the Unity configuration file which objects shall be resolved as interceptable, and use the policies entered in the configuration file for PIAB. I have worked with this and made a sample application that can be downloaded from here. Here follows a walkthrough of the solution.

I started by creating a simple interface (IPerson) that is implemented by a class (Person). See the code example below.

public interface IPerson
{
    string FirstName { get; set; }
    string LastName { get; set; }
    string GetFullName();
}

public class Person : IPerson
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string GetFullName()
    {
        return FirstName + " " + LastName;
    }
}

I wanted to load the setup of Unity from a configuration file, so I added a container that loads the type mappings from the application’s config file during the construction of the object.

using System.Configuration;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;

public sealed class Container
{
    private static readonly IUnityContainer container = new UnityContainer();
    private static readonly Container instance = new Container();

    private Container()
    {
        // Configure Unity from the unity section in the application's config file.
        UnityConfigurationSection config =
            (UnityConfigurationSection)ConfigurationManager.GetSection("unity");

        config.Containers.Default.Configure(container);
    }

    public static T Resolve<T>()
    {
        return container.Resolve<T>();
    }
}

It’s common to keep Unity’s configuration in a separate file, since it tends to include a lot of information. I added a new configuration file called Unity.config to my project. I added the type aliases for my interface and my class. I also added the type mapping between them.

<unity>
  <typeAliases>
    <typeAlias alias="IPerson" type="UnityInterceptionLab.IPerson, UnityInterceptionLab" />
    <typeAlias alias="Person" type="UnityInterceptionLab.Person, UnityInterceptionLab" />
  </typeAliases>
  <containers>
    <container>
      <types>
        <type type="IPerson" mapTo="Person" />
      </types>
    </container>
  </containers>
</unity>

In the app.config I can link to this file with the configSource attribute. First I have to register the unity section in the config file.

<configSections>
  <section name="loggingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" />
</configSections>

Then I can link to the external file with the configSource attribute:

<unity configSource="Unity.config"/>

I need this file during runtime and must make sure that it is copied to the output directory. This can be configured in the property sheet of the file by setting the option Copy to Output Directory to Copy Always.

The file is also needed during the unit tests and must be deployed when the tests run. This can be configured from the menu choice Test->Edit Test Run Configuration. Select Deployment and add the file.

I then created a unit test to make sure that the container worked.

[TestMethod]
public void UnityTest()
{
    IPerson person = Container.Resolve<IPerson>();
    person.FirstName = "John";
    person.LastName = "Doe";
    Assert.AreEqual("John Doe", person.GetFullName());
}

To integrate Unity with interception, I must add an extension to the container. I added this in the constructor of the container. I also want to use the configuration of policies made in PIAB and therefore have to register them as well. The updated container constructor becomes:

private Container()
{
    // Configure Unity from the unity section in the application's config file.
    UnityConfigurationSection config =
        (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
    config.Containers.Default.Configure(container);

    // Add the interception extension.
    container.AddNewExtension<Interception>();

    // Register the policy injection (PIAB) configuration.
    // GetSectionInConfigFile is a helper (not shown) that reads a section from the given config file.
    PolicyInjectionSettings policyInjectionSection =
        (PolicyInjectionSettings)GetSectionInConfigFile(assemblyConfigPath, PolicyInjectionSettings.SectionName);

    if (policyInjectionSection != null)
    {
        policyInjectionSection.ConfigureContainer(container, new SystemConfigurationSource());
    }
}

All the code is now written and the rest is just about configuring the application. To make the resolved Person object interceptable, I must add the Interception extension in the Unity.config file.

<container>
  <extensions>
    <add type="Microsoft.Practices.Unity.InterceptionExtension.Interception, Microsoft.Practices.Unity.Interception" />
  </extensions>
  <types>

An object can be made interceptable either by implementing a known interface or by inheriting from the class MarshalByRefObject. I use a known interface and register a type alias for the interface-based interceptor in Unity.config.

<typeAlias alias="InterfaceInterceptor" type="Microsoft.Practices.Unity.InterceptionExtension.InterfaceInterceptor, Microsoft.Practices.Unity.Interception, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />

Then I can configure, in the extensionConfig section, that the Person class shall be returned as interceptable.

<extensionConfig>
  <add name="interception" type="Microsoft.Practices.Unity.InterceptionExtension.Configuration.InterceptionConfigurationElement, Microsoft.Practices.Unity.Interception.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
    <interceptors>
      <interceptor type="InterfaceInterceptor">
        <key type="IPerson" />
      </interceptor>
    </interceptors>
  </add>
</extensionConfig>

To test this I set up a policy that logs a message before and after a method on the Person class is called. All configuration is included in the test project that you can download.

Concurrency Control

In a system with several users we must ensure that they can modify shared data without conflicts. This is called concurrency control and is traditionally handled by locking mechanisms. ANSI SQL defines a set of problems that can occur in a system without concurrency control, and different isolation levels that prevent these problems. The stricter the isolation level, the more problems are avoided.

The following table summarizes different concurrency problems defined by ANSI SQL.

Lost Update: Two transactions both update a row, and then the second transaction aborts, causing both changes to be lost.
Dirty Read: One transaction reads changes made by another transaction that hasn’t yet been committed. This is dangerous, because those changes may later be rolled back.
Unrepeatable Read: A transaction reads a row twice and reads different state each time. For example, another transaction may have written to the row and committed between the two reads.
Second Lost Update: Two concurrent transactions both read a row, one writes to it and commits, and then the second writes to it and commits. The changes made by the first writer are lost.
Phantom Read: A transaction executes a query twice, and the second result set includes rows that weren’t visible in the first result set. This situation is caused by another transaction inserting new rows between the execution of the two queries.

The following table summarizes different isolation levels defined by ANSI SQL.

Read Uncommitted: Permits dirty reads but not lost updates. One transaction may not write to a row if another uncommitted transaction has already written to it, but any transaction may read any row.
Read Committed: Permits unrepeatable reads but not dirty reads. Reading transactions don’t block other transactions from accessing a row, but an uncommitted writing transaction blocks all other transactions from accessing the row.
Repeatable Read: Permits neither unrepeatable reads nor dirty reads, but phantom reads may occur. Reading transactions block writing transactions (but not other reading transactions), and writing transactions block all other transactions.
Serializable: Provides the strictest transaction isolation. It emulates serial transaction execution, as if transactions had been executed one after another, serially, rather than concurrently.

SQL Server supports all ANSI SQL isolation levels, and its default isolation level is Read Committed. Which level of isolation to select depends on the particular requirements of the system. In most scenarios plain Read Committed provides too low isolation and can result in corrupt data, so it should not be relied on by itself. Serializable is the strictest level, but it results in a very high performance penalty and should also be avoided.

The following table shows, for each isolation level, which of the problems above can occur (Yes means the problem is possible).

Isolation Level: Lost Update / Dirty Read / Unrepeatable Read / Second Lost Update / Phantom Read
Read Uncommitted: No / Yes / Yes / Yes / Yes
Read Committed: No / No / Yes / Yes / Yes
Repeatable Read: No / No / No / No / Yes
Serializable: No / No / No / No / No
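
As a small sketch of how the isolation level can be chosen per transaction: NHibernate, which is used in the next section, lets you pass the desired level when a transaction is started. The session parameter is assumed to come from an open ISession elsewhere in the application.

using System.Data;
using NHibernate;

public static class IsolationLevelExample
{
    public static void UpdateAtRepeatableRead(ISession session)
    {
        // Run the work at Repeatable Read instead of the default Read Committed.
        using (ITransaction tx = session.BeginTransaction(IsolationLevel.RepeatableRead))
        {
            // ... read and modify entities here ...
            tx.Commit();
        }
    }
}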

Optimistic Locking

In a system where transactions span a user interaction, the locking mechanisms in the database don’t scale very well. For those cases, NHibernate includes an additional locking mechanism handled by the application tier, called optimistic locking. With optimistic locking the system assumes that the current transaction’s modifications will not conflict with another transaction. If they do, the transaction is rolled back and the user has to try again. With optimistic locking, the data must be versioned, preferably with an attribute that holds the current version. NHibernate automatically increments the version and checks for conflicts. When a transaction synchronizes its changes with the database, NHibernate checks the version, and if the current transaction has been working on stale data a StaleObjectStateException is thrown.

To achieve a higher level of isolation, you can also use mechanisms other than locking. NHibernate uses a first-level cache that prevents unrepeatable reads (and some phantom reads). Optimistic locking prevents the second lost update problem. So by combining the database isolation level Read Committed, optimistic locking and the first-level cache, we achieve a high level of isolation with low impact on performance.

To enable optimistic locking in NHibernate:

1) The recommendation is to use an additional attribute to store the version of the entity. Add it as a property in the entity and as a new column in the underlying database table.

2) Enable optimistic locking in the mapping file for the entity.

The version attribute can be either an integer or a timestamp. An integer performs better and is considered to be safer than a timestamp.

To enable optimistic locking, add the optimistic-lock attribute with the value version to the class element of the mapping file. In the mapping file we must also specify the attribute used for versioning: the <version> tag registers which property in the entity and which column in the database are used. The tag must be placed after the <id> element.

<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="NHibPOC" assembly="NHibPOC">
  <class name="Person" table="Person" optimistic-lock="version">
    <id name="Id" column="Id" type="Int32">
      <generator class="identity"/>
    </id>
    <version column="version" name="version"></version>
    <property name="FirstName" column="FirstName" type="String"/>
  </class>
</hibernate-mapping>
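
For completeness, here is a sketch of what the mapped entity and a stale-update scenario could look like. The property names follow the mapping above (including the lower-case version property); the session factory setup and the demo method are assumptions, not part of the downloadable sample.

using NHibernate;

namespace NHibPOC
{
    public class Person
    {
        public virtual int Id { get; set; }
        public virtual int version { get; set; }   // incremented by NHibernate on every update
        public virtual string FirstName { get; set; }
    }

    public static class OptimisticLockingDemo
    {
        // Two sessions load the same row; the first commit succeeds and bumps the
        // version, while the second commit works on stale data and throws.
        public static void Run(ISessionFactory sessionFactory, int personId)
        {
            using (ISession first = sessionFactory.OpenSession())
            using (ISession second = sessionFactory.OpenSession())
            {
                Person a = first.Get<Person>(personId);
                Person b = second.Get<Person>(personId);

                a.FirstName = "Updated by the first session";
                using (ITransaction tx = first.BeginTransaction())
                {
                    tx.Commit();
                }

                b.FirstName = "Updated by the second session";
                using (ITransaction tx = second.BeginTransaction())
                {
                    try
                    {
                        tx.Commit();
                    }
                    catch (StaleObjectStateException)
                    {
                        // The row was changed by the first session; reload and retry.
                    }
                }
            }
        }
    }
}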

NHibernate 2.1.2 Security Constraints

So far I have only tested NHibernate 2.1.0. The latest version is currently 2.1.2. Yesterday I decided to download and install it. I was surprised to find that my unit tests wouldn’t execute. The error message said that the NHibernate assembly wasn’t trusted.

I had never seen this error before. I thought I would have to grant some permissions to the assemblies with Caspol.exe or the .NET Framework Configuration tool, but it turned out to be easier. In the property dialog for the assembly there was a new section called Security, and by pressing the Unblock button the assembly became trusted. This is what I did:

1) I deleted all assemblies that had been created and copied to the build directory.

2) I keep the referenced assemblies from NHibernate etc. in a lib directory. I unblocked all blocked assemblies in the lib directory.

3) I restarted Visual Studio.

4) I rebuilt the solution, and it was possible to execute my unit tests again.