Troubleshooting NHibernate

There are many different ways to troubleshoot NHibernate. I mainly use show_sql, log4net and SQL Server Profiler.

show_sql

In the NHibernate configuration there is a property called show_sql. If you set it to true, NHibernate writes all generated SQL to the console.

You enable show_sql by adding the following piece of code to the configuration file.

<property name="show_sql">true</property>
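
If you configure NHibernate in code instead of (or in addition to) the XML file, the same setting can also be applied programmatically. Here is a minimal sketch, assuming the rest of the configuration already lives in your config file:

//Minimal sketch: turning on show_sql from code.
//Configure() reads hibernate.cfg.xml or the app.config section as usual.
var cfg = new NHibernate.Cfg.Configuration().Configure();
cfg.SetProperty(NHibernate.Cfg.Environment.ShowSql, "true");
var sessionFactory = cfg.BuildSessionFactory();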

To view the output from a unit test, you must select the unit test in the Test Result window and use View Test Result Details.

The unit test’s result window displays the output.

log4net

If you need more detailed debug information you can use log4net. Log4net is an open source logging library that NHibernate uses to log all interesting events. You can use it to write information to a file, the console or both. To use it, you have to add some information to the configuration file. First, declare the log4net section.

<configSections>
  <section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler, NHibernate" />
  <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
</configSections>

Here is an example of how you can configure log4net to write events from NHibernate both to a file and to the console. Read more about log4net on its homepage.

<log4net>
  <appender name="NHibernateFileLog" type="log4net.Appender.RollingFileAppender">
    <file value="c:\nhibernate2.txt" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maxSizeRollBackups value="10" />
    <maximumFileSize value="100KB" />
    <staticLogFileName value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%d{HH:mm:ss.fff} [%-5p] %c - %m%n" />
    </layout>
  </appender>
  <appender name="Console" type="log4net.Appender.ConsoleAppender, log4net">
    <layout type="log4net.Layout.PatternLayout,log4net">
      <param name="ConversionPattern" value="%d{ABSOLUTE} %-5p %c{1}:%L - %m%n" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG"/>
    <appender-ref ref="Console" />
  </root>
  <logger name="NHibernate" additivity="false">
    <level value="DEBUG"/>
    <appender-ref ref="NHibernateFileLog"/>
    <appender-ref ref="Console" />
  </logger>
</log4net>

You also have to initialize log4net from the application:

//Start Log4Net
log4net.Config.XmlConfigurator.Configure();
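
For unit tests, a convenient place to do this is a method that runs once per test assembly. Here is a minimal sketch assuming MSTest; the class and method names are just illustrative:

using Microsoft.VisualStudio.TestTools.UnitTesting;

//Minimal sketch: initializing log4net once before any test in the assembly runs
[TestClass]
public class LoggingSetup
{
    [AssemblyInitialize]
    public static void InitializeLogging(TestContext context)
    {
        //Reads the log4net section registered in the config file
        log4net.Config.XmlConfigurator.Configure();
    }
}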

The output now contains much more information about the inner workings of NHibernate.

SQL Server Profiler

SQL Server Profiler can be very useful when you step through an application and want to see each database call NHibernate makes. This way you can follow how NHibernate flushes changes to the database and manages caching. SQL Server Profiler can be started from SQL Server Management Studio.

To use it, your user must be a member of the sysadmin role.

When the Profiler is started, you can create a new trace and view each database call NHibernate makes.

Slow Menus in VS 2010

On one of my assignments we use VS 2010, and I recently had to install it. My first impression wasn't very good: the menus in VS 2010 were extremely slow. I tried different settings in the environment, but nothing helped. I checked whether Google could give me an answer, and it could! It turned out that my computer was configured to use 16-bit colour quality. By increasing it to 32-bit, the problem went away.

Domain Driven Design

I recently attended a course that covered Domain Driven Design (DDD). After the course, I asked some of the attendees how they would summarize DDD in one sentence. There were a couple of interesting suggestions, and here is the one I think summarized DDD best.

“A mindset based on object oriented principles to drive the understanding of the business context.”

Refactoring and Code Coverage

When refactoring code it is sometimes necessary to remove a project, and this has happened a couple of times in my current project. A while ago I had to remove an assembly from the application while refactoring. All my unit tests passed both locally and on the build server, but the build report said that the build had only partially succeeded. The build log included an error message saying that the old assembly couldn't be found and copied during the build process. This was a bit strange, since I was fairly sure I had removed all dependencies, so Visual Studio shouldn't have been looking for it. After some discussion with my colleagues, googling and testing, I found that the problem was that the assembly was still registered for code coverage. That is why Visual Studio was still looking for it, and after removing it from code coverage the build succeeded completely again.

Deployment Folder

The unit tests run in a separate deployment folder, not in the folder where they were built. The separation helps ensure cleaner test runs and more accurate test results. During a test run, Visual Studio performs several steps, one of which is to copy all necessary files to the deployment folder. The deployment folder can be local or remote; a remote deployment folder is used when working with test rigs.

The files are copied to the deployment folder in a specific order. Files that are copied later overwrite files copied earlier if they have the same name. Visual Studio copies the files in the following order:

1) The files that comprise the test.

2) Binaries that are instrumented, for example files that are used in code coverage analysis.

3) Files specified in the run configuration.

4) Dependent files, for example dependent assemblies and config files.

5) Deployment items specified per test.

DeploymentItem

I found a better solution to my problem with the build server and the assembly that wasn't copied to the out directory. With the DeploymentItem attribute you can specify additional items that shall be copied to the out directory for a unit test. I annotated my unit test with the following line of code.

[DeploymentItem("Microsoft.Practices.Unity.Interception.Configuration.dll")]

Thereafter I could remove my dummy code and the unit tests worked as expected. More information can be found at http://msdn.microsoft.com/en-us/library/ms182475(VS.80).aspx.
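
For context, this is what the attribute looks like sitting on a test method; the test name and body below are only illustrative:

[TestMethod]
[DeploymentItem("Microsoft.Practices.Unity.Interception.Configuration.dll")]
public void ResolveWorksOnBuildServer()
{
    //The attribute makes MSTest copy the assembly to the deployment folder
    //before the test runs, so the dynamically loaded types can be found
    IPerson person = Container.Resolve<IPerson>();
    Assert.IsNotNull(person);
}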

Instrumented assemblies and the build server

In my previous post I linked to a lab on the Unity interception extension. I have used a similar solution in one of my current projects. In this project we use TFS and a build server with automatic builds. All unit tests passed when I ran them locally, but they failed on the build server with the following error message.

System.TypeInitializationException: The type initializer for 'xxxContainer' threw an exception. ---> System.Configuration.ConfigurationErrorsException: An error occurred creating the configuration section handler for unity: Value cannot be null.

When I examined the out directory on the build server, I noticed that the file Microsoft.Practices.Unity.Interception.Configuration.dll was missing. It seems like the build server only copies instrumented DLLs to the out directory. I do not explicitly use anything from this assembly in my code, so it wasn't copied; the types are loaded dynamically at runtime. I added a piece of dummy code that used a type in the assembly, and then the assembly was copied to the out directory and all unit tests passed on the build server. There must be a better way to solve this…
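
The dummy code can be as small as touching a single type from the missing assembly, so the compiler emits a reference to it and the build copies it. A hypothetical sketch of such a workaround:

using System;

//Hypothetical workaround: referencing one type from the assembly forces a
//compile-time reference, so the build copies it to the out directory
internal static class AssemblyCopyWorkaround
{
    public static readonly Type ForceCopy =
        typeof(Microsoft.Practices.Unity.InterceptionExtension.Configuration.InterceptionConfigurationElement);
}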

Unity Interception Extension

Dependency injection makes applications easier to maintain. It also makes them easier to test, since you can mock dependencies on other systems.

Interception is a design pattern used to separate cross-cutting concerns from the business logic and make it possible to apply them to different parts of the system when needed, for example to temporarily monitor the load of a component or to add extra trace messages to a method.

An object must be interceptable before it is possible to apply policies to it at runtime. A policy defines which handlers should execute for which methods or objects, for example to log a message before and after the execution of a method.
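
As a concrete picture of what a handler does, here is a minimal sketch of a Unity call handler that logs before and after the intercepted call. The class is only illustrative and is not part of the sample described below:

using System;
using Microsoft.Practices.Unity.InterceptionExtension;

//Minimal sketch of a call handler: runs code around the intercepted method
public class LoggingCallHandler : ICallHandler
{
    public int Order { get; set; }

    public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
    {
        Console.WriteLine("Before " + input.MethodBase.Name);
        IMethodReturn result = getNext()(input, getNext); //invoke the next handler or the target
        Console.WriteLine("After " + input.MethodBase.Name);
        return result;
    }
}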

Unity has an interception extension and can resolve objects that are interceptable. It is possible to use the extension in combination with the configuration made in the Policy Injection Application Block (PIAB). That is, we can define in the Unity configuration file which objects shall be resolved as interceptable, and use the policies entered in the configuration file for PIAB. I have worked with this and made a sample application that can be downloaded from here. Here follows a walkthrough of the solution.

I started by creating a simple interface (IPerson) that is implemented by a class (Person). See the code example below.

public interface IPerson
{
    string FirstName { get; set; }
    string LastName { get; set; }
    string GetFullName();
}

public class Person : IPerson
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string GetFullName()
    {
        return FirstName + " " + LastName;
    }
}

I wanted to load the Unity setup from a configuration file, so I added a container that loads the type mappings from the application's config file when the object is constructed.

public sealed class Container
{
    private static readonly IUnityContainer container = new UnityContainer();
    private static readonly Container instance = new Container();

    private Container()
    {
        //Configure Unity
        UnityConfigurationSection config = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");

        config.Containers.Default.Configure(container);
    }

    public static T Resolve<T>()
    {
        return container.Resolve<T>();
    }
}

It’s common to keep Unity’s configuration in a separate file, since it tends to include a lot of information. I added a new configuration file called Unity.config to my project. I added the type aliases for my interface and my class. I also added the type mapping between them.

<unity>
  <typeAliases>
    <typeAlias alias="IPerson" type="UnityInterceptionLab.IPerson, UnityInterceptionLab" />
    <typeAlias alias="Person" type="UnityInterceptionLab.Person, UnityInterceptionLab" />
  </typeAliases>
  <containers>
    <container>
      <types>
        <type type="IPerson" mapTo="Person" />
      </types>
    </container>
  </containers>
</unity>

In the app.config I can link to this file with the configSource attribute. First I have to register the unity section in the config file.

<configSections>
  <section name="loggingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false"/>
</configSections>

Then I can link to the external file with the configSource attribute:

<unity configSource="Unity.config"/>

I need this file at runtime and must make sure that it is copied to the output directory. This can be configured in the file's properties by setting Copy to Output Directory to Copy Always.

The file is also needed during the unit tests and must be deployed when the tests run. This can be configured from the menu Test -> Edit Test Run Configuration: select Deployment and add the file.
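
As an alternative, the file can also be deployed per test with the DeploymentItem attribute covered in the DeploymentItem section above; a minimal sketch (the test shown is only illustrative):

[TestMethod]
[DeploymentItem("Unity.config")]
public void UnityConfigIsDeployed()
{
    //Unity.config is copied to the deployment folder before this test executes
    IPerson person = Container.Resolve<IPerson>();
    Assert.IsNotNull(person);
}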

I then created a unit test to make sure that the container worked.

[TestMethod]
public void UnityTest()
{
    IPerson person = Container.Resolve<IPerson>();
    person.FirstName = "John";
    person.LastName = "Doe";
    Assert.AreEqual("John Doe", person.GetFullName());
}

To integrate Unity with interception, I must add an extension to the container. I added this in the constructor of the container. I also want to use the policy configuration made in PIAB, so I have to register those policies as well. The updated container constructor becomes:

private Container()
{
    //Configure Unity
    UnityConfigurationSection config = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
    config.Containers.Default.Configure(container);

    //Add interception extension
    container.AddNewExtension<Interception>();

    //Register the policy injection configuration
    //(GetSectionInConfigFile and assemblyConfigPath are helpers defined elsewhere in the sample)
    PolicyInjectionSettings policyInjectionSection = (PolicyInjectionSettings)GetSectionInConfigFile(assemblyConfigPath, PolicyInjectionSettings.SectionName);

    if (policyInjectionSection != null)
    {
        policyInjectionSection.ConfigureContainer(container, new SystemConfigurationSource());
    }
}

All the code is now written; the rest is just a matter of configuring the application. To make the resolved Person object interceptable, I must register the Interception extension in the Unity.config file.

<container>
  <extensions>
    <add type="Microsoft.Practices.Unity.InterceptionExtension.Interception, Microsoft.Practices.Unity.Interception" />
  </extensions>
  <types>

An object can be made interceptable either by implementing a known interface or by inheriting from the class MarshalByRefObject. I use a known interface and register a type alias for the interface-based interceptor in Unity.config.

<typeAlias alias="InterfaceInterceptor" type="Microsoft.Practices.Unity.InterceptionExtension.InterfaceInterceptor, Microsoft.Practices.Unity.Interception, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />

Then I can configure, in the extensionConfig section, that the Person class shall be returned as interceptable.

<extensionConfig>
  <add name="interception" type="Microsoft.Practices.Unity.InterceptionExtension.Configuration.InterceptionConfigurationElement, Microsoft.Practices.Unity.Interception.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
    <interceptors>
      <interceptor type="InterfaceInterceptor">
        <key type="IPerson" />
      </interceptor>
    </interceptors>
  </add>
</extensionConfig>

To test this, I set up a policy that logs a message before and after a method is called on the Person class. All configuration is included in the test project that you can download.
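
Resolving through the container is then enough to see the policy in action; a minimal sketch, assuming the configuration from the downloadable sample (the test name is illustrative):

[TestMethod]
public void PolicyLogsAroundGetFullName()
{
    //The resolved object is an interface proxy, so the configured PIAB policy
    //writes a log message before and after the call to GetFullName
    IPerson person = Container.Resolve<IPerson>();
    person.FirstName = "John";
    person.LastName = "Doe";
    Assert.AreEqual("John Doe", person.GetFullName());
}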

Policy Injection Application Block

The Policy Injection Application Block (PIAB) in Enterprise Library (EntLib) includes some very nice features that make it easier to monitor and troubleshoot an application. An administrator can add custom trace messages and measure the performance of particular methods. I found a web cast from Microsoft that gives a very good introduction to PIAB in EntLib 3.0. It can be downloaded from here.

Concurrency Control

In a system with several users, we must ensure that they can modify shared data without conflicts. This is called concurrency control and is traditionally handled by locking mechanisms. ANSI SQL defines a set of problems that can occur in a system without concurrency control, and different isolation levels that prevent them. The stricter the isolation level, the more problems are avoided.

The following table summarizes different concurrency problems defined by ANSI SQL.

Problem | Description
Lost Update | Two transactions both update a row, and then the second transaction aborts, causing both changes to be lost.
Dirty Read | One transaction reads changes made by another transaction that hasn't yet been committed. This is dangerous, because those changes may later be rolled back.
Unrepeatable Read | A transaction reads a row twice and reads different state each time. For example, another transaction may have written to the row, and committed, between the two reads.
Second Lost Update Problem | Two concurrent transactions both read a row, one writes to it and commits, and then the second writes to it and commits. The changes made by the first writer are lost.
Phantom Read | A transaction executes a query twice, and the second result set includes rows that weren't visible in the first result set. This is caused by another transaction inserting new rows between the execution of the two queries.

The following table summarizes different isolation levels defined by ANSI SQL.

Isolation Level | Description
Read Uncommitted | Permits dirty reads but not lost updates. One transaction may not write to a row if another uncommitted transaction has already written to it, but any transaction may read any row.
Read Committed | Permits unrepeatable reads but not dirty reads. Reading transactions don't block other transactions from accessing a row, but an uncommitted writing transaction blocks all other transactions from accessing the row.
Repeatable Read | Permits neither unrepeatable reads nor dirty reads; phantom reads may still occur. Reading transactions block writing transactions (but not other reading transactions), and writing transactions block all other transactions.
Serializable | Provides the strictest transaction isolation. It emulates serial transaction execution, as if transactions had been executed one after another, serially, rather than concurrently.

SQL Server supports all the ANSI SQL isolation levels; its default is Read Committed. Which isolation level to select depends on the requirements of the particular system. In most scenarios Read Committed on its own provides too little isolation, which can result in corrupt data, so relying on it alone should be avoided. Serializable is the strictest level but carries a very high performance penalty and should also be avoided. The following table shows which problems each isolation level permits.

Isolation Level | Lost Update | Dirty Read | Unrepeatable Read | Second Lost Update | Phantom Read
Read Uncommitted | No | Yes | Yes | Yes | Yes
Read Committed | No | No | Yes | Yes | Yes
Repeatable Read | No | No | No | No | Yes
Serializable | No | No | No | No | No
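
If a particular unit of work needs a different level, the isolation level can also be chosen per transaction in NHibernate. A minimal sketch; the session factory, the Person entity and the id are illustrative:

//Minimal sketch: choosing the isolation level for a single NHibernate transaction
public static void UpdateFirstName(NHibernate.ISessionFactory sessionFactory)
{
    using (NHibernate.ISession session = sessionFactory.OpenSession())
    using (NHibernate.ITransaction tx = session.BeginTransaction(System.Data.IsolationLevel.ReadCommitted))
    {
        Person person = session.Get<Person>(1);
        person.FirstName = "Jane";
        tx.Commit();
    }
}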

Optimistic Locking

In a system with transactions spanning a user interaction, the locking mechanisms in the database don't scale very well. For those cases, NHibernate includes an additional locking mechanism handled by the application tier called optimistic locking. With optimistic locking, the system assumes that the current transaction's modifications will not conflict with another transaction. If they do, the transaction is rolled back and the user has to try again. With optimistic locking, the data must be versioned, preferably with an attribute holding the current version. NHibernate automatically increments the version and checks for conflicts. When a transaction synchronizes changes with the database, NHibernate checks the version, and if the current transaction used stale data a StaleObjectStateException is thrown.
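
From the application's point of view this shows up as a catchable exception at flush or commit time; a minimal sketch, where the session, the entity and the property change are only illustrative:

//Minimal sketch: handling the version conflict NHibernate detects at commit time
public static void TrySave(NHibernate.ISession session, Person person)
{
    try
    {
        using (NHibernate.ITransaction tx = session.BeginTransaction())
        {
            person.FirstName = "Jane";
            tx.Commit(); //NHibernate compares the version column during the flush
        }
    }
    catch (NHibernate.StaleObjectStateException)
    {
        //Another transaction updated the row first; discard the session,
        //reload the entity and let the user retry with fresh data
    }
}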

To achieve a higher level of isolation, you can also use other mechanisms than locking. NHibernate uses a first level cache that prevents unrepeatable reads (and some phantom reads). Optimistic locking prevents the second lost update problem. So by combining the database isolation level Read Committed, optimistic locking and the first level cache, we achieve a high level of isolation with low impact on performance.

To enable optimistic locking in NHibernate:

1) The recommendation is to use an additional attribute to store the version of the entity. Add it as a property in the entity and as a new column in the underlying database table.

2) Enable optimistic locking in the mapping file for the entity.

The version attribute can be either an integer or a timestamp. An integer performs better and is considered to be safer than a timestamp.
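
The entity then carries a matching property; a minimal sketch of what the Person class used in the mapping below might look like (only the version-related parts matter, the rest is illustrative):

//Minimal sketch of a versioned entity; the lowercase version property
//matches the name attribute of the <version> element in the mapping below
public class Person
{
    public virtual int Id { get; set; }
    public virtual string FirstName { get; set; }
    public virtual int version { get; set; }
}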

To enable optimistic locking, add the attribute optimistic-lock with the value version to the class element of the mapping file. In the mapping file we must also specify which attribute is used for versioning: the <version> tag registers which property in the entity and which column in the database are used. The tag must be placed after the <id> element.

<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="NHibPOC" assembly="NHibPOC">
  <class name="Person" table="Person" optimistic-lock="version">
    <id name="Id" column="Id" type="Int32">
      <generator class="identity"/>
    </id>
    <version column="version" name="version"></version>
    <property name="FirstName" column="FirstName" type="String"/>
  </class>
</hibernate-mapping>