General

Getting started

Using the NHibernate Profiler is easy. First, we need to make the application that we profile aware of the profiler. Then, just start the profiler.

Preparing an application to be profiled

Install the NHibernateProfiler NuGet package (or NHibernateProfiler.Appender to install only the profiler's Appender):

Install-Package NHibernateProfiler -IncludePrerelease

In the application startup (Application_Start in web applications, Program.Main in console applications, or the App constructor for WPF applications), make the following call (after NHibernate has been initialized):

HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.Initialize();

Getting NHibernate statistics

If you want to view the NHibernate internal statistics, you need to modify your NHibernate configuration and set generate_statistics to true. This is how you do it using the standard NHibernate configuration mechanism:

<property name="generate_statistics">true</property>
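
If you configure NHibernate in code rather than XML, the same setting can be applied through the Configuration object. A minimal sketch (how you build and configure your Configuration instance may differ):

var cfg = new NHibernate.Cfg.Configuration();
cfg.Configure(); // reads the usual hibernate.cfg.xml / app.config settings

// equivalent to <property name="generate_statistics">true</property>
cfg.SetProperty("generate_statistics", "true");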

This isn't the only way to profile an application; you can also configure your application to work with the profiler without changing any code.

Profiler Shortcut Keys

Beyond the standard operating system shortcut keys, the profiler supports the following shortcut keys:

  • S: Focus on Sessions tab header
  • T: Focus on Statements tab header
  • D: Focus on Details tab header
  • F: Focus on Statistics tab header
  • Left/Right arrows: Move to the next/previous tab
  • Up/Down arrows: Move to the next session/statement

Programmatic Access to the Profiler

One of the DLLs that ships with the profiler is HibernatingRhinos.Profiler.Integration, which gives you programmatic access to the profiler object model. On NuGet it is available as the NHibernateProfiler.Integration package. For example, you can use it inside your unit tests to assert on the behavior of your code.

var captureProfilerOutput = new CaptureProfilerOutput(@"/path/to/profiler.exe"); 
captureProfilerOutput.StartListening();

NHibernateProfiler.Initialize();

// run code that uses NHibernate

Report report = captureProfilerOutput.StopAndReturnReport();

// Assert / inspect on the report instance
  1. Create a new CaptureProfilerOutput instance and pass it the path to the nhprof.exe executable.
  2. Call StartListening() to start listening to your application's behavior.
  3. Execute your code.
  4. Call StopAndReturnReport() to get the report and assert / inspect it.

You can also use this DLL to give you programmatic access to the XML report files that are generated during the continuous integration build. This is done using the following code:

using System.IO;
using System.Xml.Serialization;

// Deserialize an XML report file generated during the CI build
var xml = new XmlSerializer(typeof(Report));
using (var reader = File.OpenText(reportFile))
{
    Report result = (Report)xml.Deserialize(reader);
}

Using the profiler with Continuous Integration

The profiler supports the following command line interface, which allows you to execute the profiler as part of your continuous integration builds:

/CmdLineMode[+|-]         (short form /C)
/File:<string>            (short form /F)
/ReportFormat:{Json|Html|Xml}  (short form /R)
/Port:<int>               (short form /P)
/InputFile:<string>       (short form /I)
/Shutdown[+|-]            (short form /S)

Starting the profiler with no options will bring up the UI.

Here is how we can get a report from our continuous integration build:

NHProf.exe /CmdLineMode /File:report-output /ReportFormat:Json /ReportFormat:Html # starts listening to applications
xunit.console.exe Northwind.IntegrationTests.dll
NHProf.exe /Shutdown # stop listening to applications and output the report

This approach should make it simple to integrate into your CI process. The JSON and XML outputs give you programmatic access to the report, while the HTML version is human readable. You can generate JSON, XML and HTML reports together by specifying the ReportFormat option multiple times.

One thing you should be aware of: the report file is written asynchronously, so the shutdown command may return before the file has been fully written. If you need to process the file as part of your build process, you need to wait until the first profiler instance has exited. Using PowerShell, this is done like this:

NHProf.exe /CmdLineMode /File:report-output /ReportFormat:Json /ReportFormat:Html
xunit.console.exe Northwind.IntegrationTests.dll
NHProf.exe /Shutdown
get-process NHProf | where {$_.WaitForExit() } # wait until the report export is completed

Licensing

Please note that from a licensing perspective, the CI mode is the same as the normal GUI mode. On one hand, this means that you don't need to do anything if you already have the profiler and want to run the CI build on your machine. On the other hand, if you want to run it on a CI machine, you would need an additional license for that.

Usage

Aggregate related sessions under scopes

NHibernate Profiler can aggregate related sessions under the same scope. By default, in web applications, we aggregate sessions which are opened in the same web page under the same scope. But we also provide an API that you can use to aggregate related sessions in your application using a different strategy.

You can use:

HibernatingRhinos.Profiler.Appender.ProfilerIntegration.StartScope();

in order to start a new scope, and dispose the returned object in order to end the scope. You can also give the scope a name using:

ProfilerIntegration.StartScope("Task #" + taskId);
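
For example, wrapping a unit of work in a using block ensures the scope ends even if an exception is thrown. A minimal sketch (taskId and ProcessTask are illustrative):

using (ProfilerIntegration.StartScope("Task #" + taskId))
{
    // sessions opened here are aggregated under this scope
    ProcessTask(taskId); // hypothetical work that opens NHibernate sessions
}
// disposing the returned object ends the scope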

In addition, you can replace the ProfilerIntegration.GetCurrentScopeName delegate in order to derive the current scope name from your application-specific details:

ProfilerIntegration.GetCurrentScopeName = () => { return "Scope ID"; };

Azure Storage Channel

By default, the profiler logs are sent from the application via TCP. However, sometimes your application might not be able to reach the profiler over the network. Using the Azure Storage Channel feature, you can configure the appender to send the logs as blobs to an Azure storage container and analyze them in NHibernate Profiler. This way you can, for example, profile your NHibernate queries from Azure Functions.

How it works

The idea of the Azure Storage Channel is that profiler logs are uploaded as blobs to an Azure container. In Azure, you configure an event subscription so that on each BlobCreated event an appropriate message is enqueued in the Azure storage queue.

The NHibernate Profiler listens to those queue messages, downloads the blobs, analyzes them and displays your queries in real time. You can also load existing profiler logs later on.

Setup on Azure

In order to make the uploaded logs available to NHibernate Profiler, you need to execute the following steps:

  • create a storage account,
  • create a container,
  • create a storage queue,
  • create a subscription to enqueue messages about blobs creation.

You can define these using the Azure portal as shown here, or use an Azure CLI script (Bash, Powershell).

Appender configuration

When initializing the appender, you need to set the AzureStorageToLogTo property, providing the connection string and the name of the container where the blobs of profiler logs will be sent.

HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.Initialize(
    new NHibernateAppenderConfiguration
    {
        AzureStorageToLogTo = new AzureStorageConfiguration("azure-storage-connection-string", "container-name")
    });

Connecting to Azure storage from Profiler

In order to be able to load the logs, you need to connect to Azure using Options > Azure Storage Channel.

Once you're connected to the Azure channel, the profiler will start showing data from the profiled application that is configured to send logs to the Azure container. You can choose whether the logs should be deleted from the container after processing or left in place. If the container already contains profiling data, you can also load it into the profiler by providing a date range.

Configuring the application to be profiled without changing the application code

While the easiest way to prepare the application to be profiled is to just call NHibernateProfiler.Initialize(), this is often not a desirable way of doing things, since adding/removing the profiler requires changing the application code.

Therefore, we also made it possible to configure the application without this explicit initialization approach. The method of doing so is:

  • Add a reference to the following DLLs:
    • HibernatingRhinos.Profiler.Appender.dll
    • HibernatingRhinos.Profiler.Appender.NHibernateLog4Net.dll
  • To log to a live instance of the NHibernate Profiler, create a log4net.config file with the following content:
    <?xml version="1.0" encoding="utf-8" ?>
    <log4net>
      <appender name="NHProfAppender"
                type="HibernatingRhinos.Profiler.Appender.NHibernate.NHProfAppender,
                      HibernatingRhinos.Profiler.Appender.NHibernateLog4Net">
        <sink value="tcp://127.0.0.1:22897" />
      </appender>
      <logger name="HibernatingRhinos.Profiler.Appender.NHibernate.NHProfAppender.Setup">
        <appender-ref ref="NHProfAppender"/>
      </logger>
    </log4net>
    The following properties can be added to the appender section to configure the logger behavior:
    • Stop doing fixups for runtime generated code stack traces.
      <dotNotFixDynamicProxyStackTrace value="true"/>
    • Stop capturing all stack traces.
      <skipCapturingStackTraces value="true"/>
  • To log to a file that can be loaded later into the NHibernate Profiler, create a log4net.config file with the following content:
    <?xml version="1.0" encoding="utf-8" ?>
    <log4net>
      <appender name="NHProfAppender"
                type="HibernatingRhinos.Profiler.Appender.NHibernate.NHProfOfflineAppender,
                      HibernatingRhinos.Profiler.Appender.NHibernateLog4Net">
        <file value="nhprof_output.nhprof" />
      </appender>
      <logger name="HibernatingRhinos.Profiler.Appender.NHibernate.NHProfAppender.Setup">
        <appender-ref ref="NHProfAppender"/>
      </logger>
    </log4net>
    The following property can be added to the appender section to configure the file rolling mode:
    • Create a new file per day.
      <isRolling value="true"/>
    • The file(s) will be located under the folder: 'NHProfFiles'.
  • In the application startup, call:
    log4net.Config.XmlConfigurator.Configure(new FileInfo("/path/to/log4net.config"));

Configuring your application for offline profiling

It is possible to generate a snapshot file from your application to be analyzed later using this code:

HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.InitializeOfflineProfiling(filename);

This will generate a file with a snapshot of all the NHibernate activity in the application, which you can use for later analysis by loading the file into the profiler.
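
For example, you could initialize offline profiling at application startup with a timestamped file name. A minimal sketch (the path and naming convention are illustrative):

// requires System and System.IO
var filename = Path.Combine(Path.GetTempPath(),
    "app-" + DateTime.Now.ToString("yyyyMMdd-HHmmss") + ".nhprof");
HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.InitializeOfflineProfiling(filename);
// run the application normally; load the .nhprof file into the profiler later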

Ignore / Resume profiling

You can instruct the profiler to ignore parts of your application, and not publish any events generated from those sections.

This can be achieved in one of the following ways:

using (ProfilerIntegration.IgnoreAll())
{
     // Ignore all events generated from here
}

Or

ProfilerIntegration.IgnoreAll();
// Ignore all events generated from here
ProfilerIntegration.ResumeProfiling();

You'll want to use this feature when you have a massive number of operations that you don't need to profile. A typical scenario would be building a huge database as part of the test process, which can be time consuming to profile.

NHibernate 2.0.1 GA issue with generate_statistics and caching

NHibernate 2.0.1 GA has a bug that shows up when you have generate_statistics = true and use the 2nd level cache.

Details

If you have generate_statistics = true, use 2nd level caching, and the query returns no results, an exception is generated.

All three conditions must be present for the bug to appear. This is a bug that has since been fixed and will be included in NHibernate 2.1.

Production Profiling

Sometimes you want to profile your production application. It can be a website running on a remote machine or a service application. Using the production profiler feature of NHibernate Profiler, you can connect to your production application using the Options > Remote Servers:

Remote Servers

Once you’re connected to the remote application, the profiler will start showing data from the remote application.

In order to allow production profiling, your application should make the following call at the very beginning of the application:

NHibernateProfiler.InitializeForProduction(9090, "FfIVXyUkze38r/b2ulve26LQ88NK5AYig+ecYzp3r88=");

The above method will make your application start listening on port 9090 for incoming connections. The second parameter we pass to this method is a password which is used to block unauthorized access to your application. Make it strong enough so that your application stays secure.

The traffic between the profiler and your application goes over SslStream, so it is encrypted and secure.

Troubleshooting

In order to diagnose issues in your production profiling environment you can set the following value:

ProfilerInfrastructure.ThrowOnConnectionErrors = true;

A common cause of errors is missing permissions for the running service. Setting this value to true will cause the error to be thrown inside your application, so you can diagnose what it is.

Star statements using API

You can star statements in order to mark them with a star in the profiler, using the ProfilerIntegration API:

HibernatingRhinos.Profiler.Appender.ProfilerIntegration.StarStatements("Yellow");

After this call, each new statement will get a yellow star. You can stop new statements from being starred using:

ProfilerIntegration.StarStatementsClear();

Here is a usage example:

using (var db = new Entities(conStr))
{
    var post1 = db.Posts.FirstOrDefault();

    using (ProfilerIntegration.StarStatements("Blue"))
    {
        var post2 = db.Posts.FirstOrDefault();
    }

    var post3 = db.Posts.FirstOrDefault();
    
    ProfilerIntegration.StarStatements();
    var post4 = db.Posts.FirstOrDefault();
    ProfilerIntegration.StarStatementsClear();

    var post5 = db.Posts.FirstOrDefault();
}

In the above example, only post2 and post4 will be starred. post2 would get a blue star, while post4 would get the default star, which is yellow.

Note that StarStatements can use the following colors: Yellow, Orange, Red, Green, Blue and Gray.

What firewall permissions does the profiler need?

While using the profiler you may be prompted by your firewall to allow the profiler to continue working.

The profiler works by listening on a TCP socket for events from Hibernate/NHibernate. In order to do that, it needs to listen on a socket, an action that sometimes triggers an alert in firewalls.

Denying permission to start listening on a socket will disable the profiler's ability to do live profiling. It is strongly recommended that you allow it.

The profiler also checks for an updated version at each startup. In the case of a profiler error, the profiler will ask you whether it may submit the error details so the problem can be fixed.

In addition to that, the profiler will periodically report (anonymously) which features are being used in the application, which allows us to focus on commonly used parts of the product.

When using guids with SQLite, the database cannot find any data from formatted queries

The Problem

When you are using SQLite in conjunction with guids, NHibernate Profiler will show formatted queries that will not find any data in the database, even if the data actually exists.

The actual reason is related to the way SQLite handles guids. Since SQLite isn't actually aware of guids, it treats them as strings, and that obviously fails when matching against the actual guid value.

The Solution

The SQLite ADO.NET provider has a workaround for that. You can use the following connection string parameter to do so:

"Data Source=database.sqlite;Version=3;New=True;BinaryGuid=False"

You can specify this parameter when using the Show Results feature and when using a SQLite browser to inspect the actual database content. That will ensure that the SQL that NH Prof generates will be able to find the actual results in the database.

FAQ

By per seat do you mean I need one license for home and one for work?

If you bought the license yourself, you may use it wherever you want, as long as only you are using it.

If your company bought the license, then you would require a separate license if you want to use it at home. Per seat refers to the number of people in an organization using the profiler, not the number of machines it is used on.

Note that there is a corollary to this: a single machine used by two developers, both of them using NH Prof, requires two licenses, one for each user of the profiler.

Common Issues

The profiler shows the session but without any statements

If you run a program that exits very quickly, you may see the session in the profiler without any logged statements. This happens because the profiler didn't get a chance to log the statements before your application exited. To solve this, you can force the profiler to get all the statements by adding the following line at the end of your code:

ProfilerInfrastructure.FlushAllMessages();

Ignore Connection String

Since version 3.0, NHibernate Profiler has an option called Detect Connection String. When it is turned on, the profiler adds the connection string of the latest connection performed (if it does not already exist in the list of connections) and sets it as the default connection.

Since connection strings can contain sensitive information that we don't want to expose, we provide an option to avoid passing the connection string when HibernatingRhinos.Profiler.Appender is initialized:

NHibernateProfiler.Initialize(new NHibernateAppenderConfiguration()
{
    IgnoreConnectionStrings = true
});

Reporting External Queries

From time to time, while you are using NHibernate, you may also use additional methods to query the database. The profiler provides hooks that allow you to integrate those external queries into the profiler as well.

You can do that using the following code:

HibernatingRhinos.Profiler.Appender.CustomerQueryReporting.ReportQuery(
   sessionId: Guid.NewGuid().ToString(),                            // create a new session
   queryText: "select * from Users where Id = @users",              // the sql 
   parameters: dbCmd.Parameters,                                    // contains the @users parameter
   databaseTime: 10,                                                // time in milliseconds from executing query to first row
   totalTime: 15,                                                   // time in milliseconds from executing query to completing processing the query
   rowCount: 10                                                     // number of rows returned by this query
);

This will cause the select * from Users where Id = @users query to show up as a new session in the profiler.

Resolving a permission error when trying to save license

You might get an error saying that access to the license file is denied when first trying to use the Profiler. This is usually a result of Windows blocking the Profiler because it was downloaded from the Internet.

To resolve this issue, go to the folder where you extracted the Profiler, right click on HibernatingRhinos.Profiler.Client.exe and select Properties.

Next, click on the Unblock button to grant the Profiler standard application permissions and resolve the issue.

UltraMon Compatibility Issue

There is a known compatibility issue between the NHibernate Profiler and the UltraMon program on some computers. If you are running UltraMon on your computer and you try to maximize the profiler window on one of your monitors, you may get the following exception:

An error occurred in NHibernate Profiler which is known to be caused by using UltraMon on your computer. 
Please follow the instructions in the following URL in order to solve this issue: http://nhprof.com/Learn/Faq/UltraMon

This is because the UltraMon program tries to inject some controls into the NHibernate Profiler windows.

If you get the above message, please instruct UltraMon not to inject buttons into the NHibernate Profiler windows. This can be done by following these steps:

  1. Right click on the UltraMon icon in the System Tray and choose Options.
  2. On the Compatibility tab select Configure Settings.
  3. Select the NHibernate Profiler from the Running Applications list and press Next.
  4. Make sure to check the disable window buttons option and press Finish.
  5. Press OK to exit the UltraMon options window.

When you buy a license, is this a license for some period? Some version? Or do you buy one and keep getting updates forever?

The license includes all updates within a point release. If you buy a license for v1.0, you will get all updates for free for any v1.x version.

Alerts

Avoid too many joins

The profiler has detected a query that contains a large number of joins, such as this one:

select *
from Blogs blog0_
    inner join Posts posts1_
        on blog0_.Id = posts1_.BlogId
    inner join Comments comments2_
        on posts1_.Id = comments2_.PostId
    inner join Users user3_
        on posts1_.UserId = user3_.Id
    inner join UsersBlogs users4_
        on blog0_.Id = users4_.BlogId
    inner join Users user5_
        on users4_.UserId = user5_.Id

Queries with too many joins might be a performance problem.

Each join requires the database to perform additional work, and the complexity and cost of the query grow rapidly with each additional join. While relational databases are optimized for handling joins, it is often more efficient to perform several separate queries instead of a single query with several joins in it.

For OLTP systems, you should consider simplifying your queries or simplifying the data model. While we do not recommend avoiding joins completely, we strongly discourage queries with large numbers of joins.

Another issue to pay attention to is possible Cartesian products in queries that contain joins; it is very easy to create one and not notice it during development.

Column type mismatch

With an OR/M abstraction such as NHibernate, it is easy to generate queries which are not optimized. In particular, this alert is raised when we detect that the type of the entity's field, used as a parameter in the query that raised this alert, is not an exact match to the table's column type in the database.

A type mismatch between the entity's property type and the table's column type can cause several implicit conversion issues, which can lead to performance problems and conversion overflow issues. Most commonly you'll see them causing issues because they prevent the database from using indexes properly.

Queries still work, and everything looks fine externally, but the cost of the query is many times what it would be otherwise.

Common examples of type mismatches are:

  • Int32 vs bigint

  • Int64 vs int

  • String vs char / varchar

  • AnsiString vs nvarchar / nchar

In order to resolve this issue, you need to make sure that your column type and entity definitions are a match in all respects. Pay attention to the common type mismatches shown above to guide you in this.

Different parameter sizes result in inefficient query plan cache usage

The profiler detected identical statements that use different sizes for the same parameters. Let us say that we issue two queries to find users by name. (Note: I am using syntax that shows you the size of the parameters, to demonstrate the problem.)

We can do this using the following queries:

exec sp_executesql 
      N'SELECT * FROM Users WHERE Username = @username',
      N'@username nvarchar(3)',
      @username=N'bob'
      
exec sp_executesql 
      N'SELECT * FROM Users WHERE Username = @username',
      N'@username nvarchar(4)',
      @username=N'john'

This sort of code results in two query plans stored in the database query cache, because of the different parameter sizes. In fact, if we assume that the Username column has a length of 16, this single query may take up 16 places in the query cache.

Worse, if you have two parameters whose sizes change, such as username (length 16) and password (length 16), you may take up to 256 places in the query cache. Obviously, if you use more parameters, or if their lengths are longer, the number of places that a single query can take in the query cache goes up rapidly.

This can cause performance problems as the database needs to keep track of more query plans (using more memory) and may need to evict query plans from the cache, which results in having to rebuild the query plan (increasing server load and query time).

NHibernate provides the following configuration option to control this:

<property name='prepare_sql'>true</property>

After specifying this, NHibernate will generate a consistent query plan for all identical queries, regardless of the actual parameter size.
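
If you configure NHibernate programmatically, the same setting can be applied through the Configuration object. A sketch, assuming cfg is your existing Configuration instance:

// equivalent to <property name='prepare_sql'>true</property>
cfg.SetProperty("prepare_sql", "true");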

Don’t Query from the View

This alert is raised when the profiler detects that a query was generated from the view in an MVC application. Issuing queries from the view is a bad practice for several reasons.

  • It increases the time that the connection to the database has to be open. The recommendation is to keep the connection open only for the duration of the action, not throughout the lifetime of the request.
  • It makes it that much harder to understand what the data requirements for a particular action are.
  • When writing views, you shouldn't be bothered with thinking about persistence, or the number of queries that your views are generating.
  • The views are often the most changeable parts of the application, and having the application issue queries from the views may result in the application's data access patterns changing significantly between revisions.
  • Most often, queries from the views result from lazy loading, Select N+1 or similar bad practices.

We strongly recommend that you avoid generating queries in the view; instead, perform all your queries in the action, and hand the views in-memory data so they can render themselves.

Ends with query (like '%...') will force the database to scan the full table

The database is very good at answering queries that look for an exact match, such as this:

select * from Users where Name = 'ayende'

But it requires a lot more work when you are using a like, such as this query:

select * from Users where Name like 'ayende%'

In many cases, the database can still optimize this query, and assuming that you have an index on this field, use the index.

But things are drastically different when you have a query that checks for contains:

select * from Users where Name like '%ayende%'

Or using ends with:

select * from Users where Name like '%ayende'

The problem is that the database cannot use an index for this sort of query, and is forced to issue a full table scan, inspecting each of the values in the database for a match. This tends to be a very inefficient process.

You should carefully consider whether you should use this feature, and if you really need to support ends-with and contains queries, you should consider using either the database's own full text search indexing, or an external full text search option, such as Lucene or Solr.

Excessive number of rows returned

The excessive number of rows returned warning is generated by the profiler when... a query returns a large number of rows. The simplest scenario is that we loaded all the rows in a large table, using something like the following code snippet:

session.CreateCriteria(typeof(Order)).List();

This is a common mistake when you are binding to a UI component (such as a grid) that performs its own paging. This is a problem on several levels:

  • We tend to want to see only part of the data.
  • We just loaded a whole lot of unnecessary data.
  • We are sending more data than necessary over the network.
  • We have a higher memory footprint than we should.
  • In extreme cases, we may crash as a result of an out of memory exception.

None of these are good, and like the discussion on unbounded result sets, this problem can be easily prevented by applying a limit at the database level to the number of rows that we want to load at any given time.
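
For example, instead of listing the entire table, ask the database for a single page of results. A sketch (the page size of 25 is illustrative):

var orders = session.CreateCriteria(typeof(Order))
    .SetFirstResult(0)  // offset of the requested page
    .SetMaxResults(25)  // limit applied at the database level
    .List();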

But it is not just simple queries without a limit that can cause this issue. Another common source of this error is a Cartesian product when using joins. Check out this query:

session.CreateCriteria(typeof(Order))
    .SetFetchMode("OrderLines", FetchMode.Join)
    .SetFetchMode("Snapshots", FetchMode.Join)
    .List();

Assuming we have 10 orders, with 10 order lines each and 5 snapshots each, we are going to load 500 rows from the database. Mostly, they will contain duplicate data that we already have, and NHibernate will reduce the duplication to the appropriate object graph.

The problem is that we still loaded too much data, with the same issues as before. Now, we also have the problem that Cartesian products don't tend to stop at 500, but escalate very quickly to ridiculous numbers of rows returned for a trivial amount of data that we actually want.

The solution for this issue is to change the way we query the data. Instead of issuing a single query with several joins, we can split this into several queries, and send them all to the database in a single batch using Multi Query or Multi Criteria.
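
A sketch of such a split using MultiCriteria, reusing the Order model from above (both queries go to the database in a single round trip, and NHibernate merges the fetched collections in the session):

var results = session.CreateMultiCriteria()
    .Add(NHibernate.Criterion.DetachedCriteria.For<Order>()
        .SetFetchMode("OrderLines", FetchMode.Join))
    .Add(NHibernate.Criterion.DetachedCriteria.For<Order>()
        .SetFetchMode("Snapshots", FetchMode.Join))
    .List(); // one result list per criteria, in the order they were added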

Large number of individual writes

This warning is raised when the profiler detects that you are writing a lot of data to the database. Similar to the warning about too many calls to the database, the main issue here is the number of remote calls and the time they take.

We can batch together several queries using NHibernate's support for MultiQuery and MultiCriteria, but a relatively unknown NHibernate feature is the ability to batch a set of write statements into a single database call.

This is controlled using the adonet.batch_size setting in the configuration. If you set it to a number larger than zero, you can immediately start benefiting from a reduced number of database calls. You can even set this value at runtime, using session.SetBatchSize().
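
For example, in the configuration (25 is just a starting point you should tune):

<property name="adonet.batch_size">25</property>

Or at runtime, for a specific session:

session.SetBatchSize(25);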

More than one session per request

Using more than one session per request is generally a bad practice. Here is why:

  1. Each session has its own database connection. Using more than one session means using more than one database connection per request, resulting in additional strain on the database and slower overall performance.
  2. Typically we expect the session to keep track of our entities. When we have multiple sessions, each session is not aware of the entities tracked by the other sessions, and might have to query the database again for their current state or issue an unnecessary update.
  3. Having more than a single session also means that we can't take advantage of NHibernate's Unit of Work, and have to manually manage our entities' change tracking; we might end up with multiple instances of the same entities in the same request (which using a single session for the whole request would prevent).
  4. Having more than one session means that the ORM has more work to do. In most cases, this is unnecessary and should be avoided.
  5. You can no longer take advantage of features scoped to the session, such as the first level cache.

It's usually recommended to use one session per request. You should investigate the Session Per Request pattern.

Typically this situation results from micromanaging the session, meaning that we open the session just before the query and close it immediately after the query or operation is executed. For example, see the following code:

public T GetEntity<T>(int id)
{
    using (var session = sessionFactory.OpenSession())
    {
         return session.Get<T>(id);
    }
}

It's strongly recommended to use an ambient / contextual session that remains the same across the entire request, such as in the following code:

public T GetEntity<T>(int id) 
{ 
      var session = sessionFactory.GetCurrentSession();
      return session.Get<T>(id); 
}

This code makes use of the current session, so multiple calls to this method will always use the current session rather than opening a new one.
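
For GetCurrentSession() to work, NHibernate needs a configured current-session context, and the session must be bound to it for the duration of the request. A minimal sketch for a web application (where exactly you bind and unbind depends on your web framework):

<property name="current_session_context_class">web</property>

// at the start of the request:
var session = sessionFactory.OpenSession();
NHibernate.Context.CurrentSessionContext.Bind(session);

// ... request code calls sessionFactory.GetCurrentSession() ...

// at the end of the request:
var unbound = NHibernate.Context.CurrentSessionContext.Unbind(sessionFactory);
unbound.Dispose();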

Multiple write sessions in the same request

Writing to the database from more than one session in the same request is bad for several reasons. Here is why:

  1. Each session uses a separate database connection. Using more than one session means using more than one database connection per request. This can hurt overall performance, and puts more pressure on the database.
  2. Using different sessions means that we cannot take advantage of NHibernate's built-in transaction support, and have to rely on System.Transactions, which is significantly slower.
  3. We can't rely on the database to ensure a transactionally safe view of the data, since we are using several different transactions to access the database. Note that this is the case even when using System.Transactions.
  4. Using System.Transactions forces you to use DTC in order to keep all sessions in the same transaction. Using DTC leads to bad performance and is more brittle than not using it.
  5. Without using System.Transactions, there is no ability to use a transaction across all the sessions.

For example, let us consider the following code, which is using multiple sessions to perform a single operation:

public void TransferMoney(Account from, Account to, Money amount)
{
    Dao.Withdraw(from, amount);
    Dao.Deposit(to, amount);
}

public void Withdraw(Account account, Money amount)
{
    using(var session = sessionFactory.OpenSession())
    using(var tx = session.BeginTransaction())
    {
        account.MoneyAmount -= amount;
        session.Update(account);
        tx.Commit();
    }
}

public void Deposit(Account account, Money amount)
{
    using(var session = sessionFactory.OpenSession())
    using(var tx = session.BeginTransaction())
    {
        account.MoneyAmount += amount;
        session.Update(account);
        tx.Commit();
    }
}

In this example, we call the database twice, each time from a different session. Because of that, we cannot take advantage of the database's native transaction support, since each operation happens in a different transaction. Our choices are to either operate essentially without transactions (in which case, money can literally disappear into thin air), or participate in a distributed transaction (System.Transactions).

We are also unable to take advantage of NHibernate's features for increasing system performance, such as batching multiple statements against the database.

The solution for this is simple: use just one session per request. The strong recommendation of the NHibernate development team is that you should always use an ambient / contextual session. NHibernate also provides an API that can help you do so.

Queries & data binding shouldn’t mix

This alert is raised whenever the profiler detects that a query has been generated as a result of a data binding operation.

The simplest form of this is binding to a LINQ query against the database:

DataContext = from user in ctx.Users
              where user.IsActive
              select user;

The problem with this is that the Win Forms / WPF data binding code was designed with the assumption that it would work against in-memory data. Therefore, setting a data source usually triggers multiple calls to interface methods to obtain the data. As a consequence, when you data bind directly against a database query, the query is often evaluated several times, triggering multiple round-trips to the server.

The recommendation is to bind to the result of the query:

var activeUsers = from user in ctx.Users
                  where user.IsActive
                  select user;
DataContext = activeUsers.ToList();

Another problem with data binding that the profiler will detect and warn about is lazy loading as a result of a data binding operation. For example, binding to Order.Customer.Name will lead to loading the customer entity and binding to its Name property. The problem with this sort of query is that it is likely to repeat for as many items as there are in the query, leading to Select N+1.

Even assuming that you are binding to just a single entity, lazy loading through data binding is a bad practice. You should use eager loading to load the data up front, rather than let the UI load it in the worst possible manner.

Query on unindexed column

We have detected a query that was made on an unindexed column. The problem here is that a query on an unindexed column forces the database to perform a table scan. Such an act requires the database to go through all the rows in the table.

In some cases, on very small tables, that is fine and the best way to find the results you want. On larger tables, however, that can be a prohibitively expensive operation, one which usually ends up biting you only after the data set has grown to production size, failing at 2 AM or thereabouts.

Consider this query carefully and evaluate whether an index needs to be created, whether the query needs to be modified, or whether this table is small enough that a table scan is the appropriate behavior.
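
If you conclude that an index is warranted, creating one is a single DDL statement. A sketch for a hypothetical Users.Name column:

CREATE INDEX IX_Users_Name ON Users (Name);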

Select N+1

Select N + 1 is a data access anti-pattern where the database is accessed in a suboptimal way. Take a look at this code sample, then we'll discuss what is going on. Say you want to show the user all comments from all posts so that they can delete all of the nasty comments. The naive implementation would be something like:

// SELECT * FROM Posts
foreach (Post post in session.CreateQuery("from Post").List())
{
   // lazy loading of comments list causes: 
   // SELECT * FROM Comments where PostId = @p0
   foreach (Comment comment in post.Comments) 
   {
       //print comment...
   }
}

In this example, we can see that we are loading a list of posts (the first select) and then traversing the object graph. However, we access the collection in a lazy fashion, causing NHibernate to go to the database and bring the results back one row at a time. This is incredibly inefficient, and the NHibernate Profiler will generate a warning whenever it encounters such a case.

The solution for this example is simple: Force an eager load of the collection up front. Using HQL:

var posts = session
    .CreateQuery("from Post p left join fetch p.Comments")
    .List();

Using the Criteria API:

session.CreateCriteria(typeof(Post))
    .SetFetchMode("Comments", FetchMode.Eager)
    .List();

Using Linq:

var posts = (from post in session.Query<Post>().FetchMany(x => x.Comments) 
             select post).ToList();

In each case, we will get a join and only a single query to the database.

Note: this is the classical appearance of the problem. It can also surface in other scenarios, such as calling the database in a loop, or more complex object graph traversals. In those cases, it is generally much harder to see what is causing the issue.

Having said that, NHibernate Profiler will detect those scenarios just as well, and give you the exact line in the source code that caused this SQL to be generated.

Other options for solving this issue are MultiQuery and MultiCriteria, which are also used to solve the issue of Too Many Queries.

Select N+1 in the same request

Select N + 1 is a data access anti-pattern where the database is accessed in a suboptimal way. This alert is, in essence, a combination of the classical Select N+1 alert with the more than one session per request alert.

Take a look at this code sample, then we'll discuss what is going on. Say you want to show the user all comments from all posts so that they can delete all of the nasty comments. The naive implementation would be something like:

var posts = Dao.GetRecentPosts(); 
foreach(var post in posts) 
{ 
       post.Comments = Dao.GetCommentsForPost(post); 
}
public IEnumerable<Post> GetRecentPosts()
{
    using(var session = sessionFactory.OpenSession())
    {
        return (from post in session.Query<Post>()
            orderby post.PublishedDate descending
            select post)
            .Take(50)
            .ToList();
    }
}
public IEnumerable<Comment> GetCommentsForPost(Post post)
{
    using(var session = sessionFactory.OpenSession())
    {
        return (from comment in session.Query<Comment>()
            where comment.Post.Id == post.Id
            select comment)
            .ToList();
    }
}

In this example, we can see that we are opening a session and loading a list of posts (the first select) and then for each of the posts that we loaded, we open up a new session, including a new connection to the database, and make an additional query.

Because we are using multiple sessions in this fashion, we can't really take advantage of NHibernate features that are meant to deal with this exact situation. The first step is to avoid using more than one session per request, and the next step is to follow the guidance on solving select N + 1 problems for NHibernate.

Superfluous update - use inverse='true'

One of the more common mistakes made when mapping associations from the database is not setting the proper inverse attribute to let NHibernate know which side of the association is the owner of the association.

This is important because while in the object world all associations are unidirectional, in the database world all associations are bidirectional. As a result, NHibernate assumes by default that all associations are unidirectional, and requires an explicit step (setting inverse='true' on the association) to recognize bidirectional associations.

The actual problem is simple: NHibernate issues a superfluous update statement. The NHibernate Profiler is able to detect and warn against such a mistake.

Notice that while in the object model this is a bidirectional association maintained in two different places, it is maintained in a single place in the database.

This is a very common case, and it is quite easy to get it wrong. By default, NHibernate has to assume that it must update the column on both sides, so creating a new post and adding it to the Blog's Posts collection will result in two statements being written to the database:

INSERT INTO Posts(Title,
                 Text,
                 PostedAt,
                 BlogId,
                 UserId)
VALUES     ('vam' /* @p0 */,
           'abc' /* @p1 */,
           '1/17/2009 5:28:52 PM' /* @p2 */,
           1 /* @p3 */,
           1 /* @p4 */);

select SCOPE_IDENTITY()


UPDATE Posts
SET    BlogId = 1 /* @p0_0 */
WHERE  Id = 22 /* @p1_0 */

As you can see, we are actually setting the BlogId to the same value twice: once in the insert statement, and a second time in the update statement.

Now, there is a very easy fix for this issue: all you have to do is tell NHibernate, in the mapping of the Blog's Posts collection, that the responsibility for actually updating the column value lies on the other side. This is also something that I tend to check in code reviews quite often. The fix is literally just specifying inverse='true' on the one-to-many collection association, as shown below.
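
A sketch of what this looks like in the Blog's hbm.xml mapping, assuming the Posts collection and BlogId column from the SQL above:

<set name="Posts" inverse="true">
  <!-- the Post side owns the BlogId column, so NHibernate skips the extra UPDATE -->
  <key column="BlogId" />
  <one-to-many class="Post" />
</set>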

Too Many Cache Calls In The Same Request

NHibernate Profiler has detected a large number of cache calls being made in a single request. Note that this alert is only applicable if you are using a distributed cache; if you are using a local cache, you can safely ignore it.

If you are using a distributed cache, however, this alert indicates a possible performance issue that you should be aware of. While making a single call to a cache tends to be significantly faster than calling the database, making a large number of fast calls can end up being slower than making a smaller number of slow calls.

It is easy to get into situations where you are making hundreds of calls to the cache, and with a distributed cache, this turns into hundreds of remote calls, which end up being very costly in terms of performance.

To avoid this issue, consider limiting the number of returned results in cached queries (since NHibernate will have to make a cache call for each returned result). This is a best practice in general, even if you aren't using the cache or are using a local cache.

Another common reason for this alert is trying to rely on the cache as a crutch, making a large number of queries and assuming that the cache will make them fast enough after the first time. The problem with this approach is that hitting the cache does take time, and even when using a local cache, you have to consider the case when the data has expired or been purged from the cache.

Too many cache calls per session

NHibernate Profiler has detected a large number of cache calls being made in a single session. Note that this alert is only applicable if you are using a distributed cache; if you are using a local cache, you can safely ignore it.

If you are using a distributed cache, however, this alert indicates a possible performance issue that you should be aware of. NHibernate currently does not batch calls to the cache (this is a planned feature), and while making a request to a cache tends to be significantly faster than calling the database, making a large number of fast calls can end up being slower than making a smaller number of slow calls.

It is easy to get into situations where you are making hundreds of calls to the cache, and with a distributed cache, this turns into hundreds of remote calls, which end up being very costly in terms of performance.

To avoid this issue, consider limiting the number of returned results in cached queries (since NHibernate will have to make a cache call for each returned result). This is a best practice in general, even if you aren't using the cache or are using a local cache.

Another common reason for this alert is trying to rely on the cache as a crutch, making a large number of queries and assuming that the cache will make them fast enough after the first time. The problem with this approach is that hitting the cache does take time, and even when using a local cache, you have to consider the case when the data has expired or been purged from the cache.

Too many database calls in the same request

One of the most expensive operations that we can do in our applications is to make a remote call. Going beyond our own process is an extremely expensive operation. Going beyond the local machine is more expensive still.

Calling the database, whether to query or to write, is a remote call, and we want to reduce the number of remote calls as much as possible. This warning is raised when the profiler notices that a single request is making an excessive number of calls to the database. This is usually an indication of a potential optimization in the way the sessions are used in the request.

There are several reasons why this can be:

  • A large number of queries as a result of a Select N + 1
  • Calling the database in a loop
  • Updating (or inserting / deleting) a large number of entities
  • A large number of (different) queries that we execute to perform our task

For the first reason, you can see the suggestions for Select N + 1.

Calling the database in a loop is generally a bug, and should be avoided. Usually you can restructure the code in such a way that you are not required to call the database in that way.

The last two issues have to be dealt with on an individual basis. Sometimes a different approach (such as bulk copy for batching inserts), or a specific query or stored procedure that gets all the data from the database in a single round trip, is the answer.

Too many database calls per session

One of the most expensive operations that we can do in our applications is to make a remote call. Going beyond our own process is an extremely expensive operation. Going beyond the local machine is more expensive still.

Calling the database, whether to query or to write, is a remote call, and we want to reduce the number of remote calls as much as possible. This warning is raised when the profiler notices that a single session is making an excessive number of calls to the database. This is usually an indication of a potential optimization in the way the session is used.

There are several reasons why this can be:

  • A large number of queries as a result of a Select N + 1
  • Calling the database in a loop
  • Updating (or inserting / deleting) a large number of entities
  • A large number of (different) queries that we execute to perform our task

For the first reason, you can see the suggestions for Select N + 1. Calling the database in a loop is generally a bug, and should be avoided. Usually you can restructure the code in such a way that you are not required to call the database in that way.

Updating a large number of entities is discussed in Use statement batching, and mainly involves setting the batch size to reduce the number of calls that we make to the database.

The last issue is more interesting. We need to get data from several sources, and we issue multiple queries for that data. The problem is that we issue multiple separate queries to accomplish this, which has the problems listed above.

NHibernate provides a nice way to avoid this by using MultiQuery and MultiCriteria, both of which allow you to aggregate several queries into a single call to the database. If this is your scenario, I strongly recommend that you take a look at MultiQuery and MultiCriteria and see how to use them in your applications.
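
A sketch of aggregating two HQL queries into a single round trip with MultiQuery (the entity names are illustrative):

var results = session.CreateMultiQuery()
    .Add("from Order")
    .Add("from Customer")
    .List(); // one result list per query, in the order they were added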

Too many expressions per where clause

With the abstraction of an OR/M such as NHibernate, it is easy to generate queries which are hard to follow and carry performance penalties, since you don't see the exact SQL that is produced at the end. This often encourages you to write code that produces bad SQL queries.

The SQL is bad not only in terms of readability, but also because of the work the database's query optimizer has to do in order to serve such a query.

In this case, we detected that you have a query with a lot of work inside its where clauses. Having too many expressions inside the where clause can lead to poor performance, especially as you grow and have a big data set.

Consider the complexity of your statement, and whether you can reduce the work done by refactoring your code to reduce the amount of effort required by the database to answer your queries.

Too many nested select statements

With the abstraction of an OR/M such as NHibernate, it is easy to generate queries which are hard to follow and carry performance penalties. Since you don't see the exact SQL that is produced at the end, you can write code that produces bad SQL. The SQL is bad not only in terms of readability, but also because of the work that the database's query analyzer has to do in order to serve such a query.

To demonstrate such a query, look at the query below. It has lots of nested SELECT statements, which makes the SQL hard to follow and slower for the database to run. The query goes on for quite a while, and it is doing a lot of things, none of them very fast.

The problem with such queries is that the code that generates them looks innocent and hides its costly impact. Consider modifying the queries that raised this warning; even splitting them up into separate queries executed independently might be a faster approach.

SELECT Project4.Id                             AS Id,
       Project4.Name                           AS Name,
       Project4.Description                    AS Description,
       Project4.CategoryTemplateId             AS CategoryTemplateId,
       Project4.MetaKeywords                   AS MetaKeywords,
       Project4.MetaDescription                AS MetaDescription,
       Project4.MetaTitle                      AS MetaTitle,
       Project4.ParentCategoryId               AS ParentCategoryId,
       Project4.PictureId                      AS PictureId,
       Project4.PageSize                       AS PageSize,
       Project4.AllowCustomersToSelectPageSize AS AllowCustomersToSelectPageSize,
       Project4.PageSizeOptions                AS PageSizeOptions,
       Project4.PriceRanges                    AS PriceRanges,
       Project4.ShowOnHomePage                 AS ShowOnHomePage,
       Project4.IncludeInTopMenu               AS IncludeInTopMenu,
       Project4.HasDiscountsApplied            AS HasDiscountsApplied,
       Project4.SubjectToAcl                   AS SubjectToAcl,
       Project4.LimitedToStores                AS LimitedToStores,
       Project4.Published                      AS Published,
       Project4.Deleted                        AS Deleted,
       Project4.DisplayOrder                   AS DisplayOrder,
       Project4.CreatedOnUtc                   AS CreatedOnUtc,
       Project4.UpdatedOnUtc                   AS UpdatedOnUtc
FROM   (SELECT Limit1.Id                             AS Id,
               Limit1.Name                           AS Name,
               Limit1.Description                    AS Description,
               Limit1.CategoryTemplateId             AS CategoryTemplateId,
               Limit1.MetaKeywords                   AS MetaKeywords,
               Limit1.MetaDescription                AS MetaDescription,
               Limit1.MetaTitle                      AS MetaTitle,
               Limit1.ParentCategoryId               AS ParentCategoryId,
               Limit1.PictureId                      AS PictureId,
               Limit1.PageSize                       AS PageSize,
               Limit1.AllowCustomersToSelectPageSize AS AllowCustomersToSelectPageSize,
               Limit1.PageSizeOptions                AS PageSizeOptions,
               Limit1.PriceRanges                    AS PriceRanges,
               Limit1.ShowOnHomePage                 AS ShowOnHomePage,
               Limit1.IncludeInTopMenu               AS IncludeInTopMenu,
               Limit1.HasDiscountsApplied            AS HasDiscountsApplied,
               Limit1.SubjectToAcl                   AS SubjectToAcl,
               Limit1.LimitedToStores                AS LimitedToStores,
               Limit1.Published                      AS Published,
               Limit1.Deleted                        AS Deleted,
               Limit1.DisplayOrder                   AS DisplayOrder,
               Limit1.CreatedOnUtc                   AS CreatedOnUtc,
               Limit1.UpdatedOnUtc                   AS UpdatedOnUtc
        FROM   (SELECT Distinct1.Id AS Id
FROM   (SELECT Extent1.Id               AS Id1,
                Extent1.ParentCategoryId AS ParentCategoryId,
                Extent1.LimitedToStores  AS LimitedToStores
        FROM   dbo.Category AS Extent1
                LEFT OUTER JOIN dbo.AclRecord AS Extent2
                    ON (Extent1.Id = Extent2.EntityId)
                    AND (N'Category' = Extent2.EntityName)
        WHERE  (Extent1.Published = 1)
                AND (Extent1.Deleted <> 1)
                AND ((Extent1.SubjectToAcl <> 1)
                        OR ((Extent2.CustomerRoleId IN (3))
                            AND (Extent2.CustomerRoleId IS NOT NULL)))) AS Filter1
        LEFT OUTER JOIN dbo.StoreMapping AS Extent3
            ON (Filter1.Id1 = Extent3.EntityId)
            AND (N'Category' = Extent3.EntityName)
WHERE  (Filter1.ParentCategoryId = 7 /* @p__linq__0 */)
        AND ((Filter1.LimitedToStores <> 1)
                OR (1 /* @p__linq__1 */ = Extent3.StoreId))) AS Distinct1) AS Project2
OUTER APPLY (SELECT TOP (1) Filter3.Id2                            AS Id,
                    Filter3.Name                           AS Name,
                    Filter3.Description                    AS Description,
                    Filter3.CategoryTemplateId             AS CategoryTemplateId,
                    Filter3.MetaKeywords                   AS MetaKeywords,
                    Filter3.MetaDescription                AS MetaDescription,
                    Filter3.MetaTitle                      AS MetaTitle,
                    Filter3.ParentCategoryId               AS ParentCategoryId,
                    Filter3.PictureId                      AS PictureId,
                    Filter3.PageSize                       AS PageSize,
                    Filter3.AllowCustomersToSelectPageSize AS AllowCustomersToSelectPageSize,
                    Filter3.PageSizeOptions                AS PageSizeOptions,
                    Filter3.PriceRanges                    AS PriceRanges,
                    Filter3.ShowOnHomePage                 AS ShowOnHomePage,
                    Filter3.IncludeInTopMenu               AS IncludeInTopMenu,
                    Filter3.HasDiscountsApplied            AS HasDiscountsApplied,
                    Filter3.SubjectToAcl                   AS SubjectToAcl,
                    Filter3.LimitedToStores                AS LimitedToStores,
                    Filter3.Published                      AS Published,
                    Filter3.Deleted                        AS Deleted,
                    Filter3.DisplayOrder                   AS DisplayOrder,
                    Filter3.CreatedOnUtc                   AS CreatedOnUtc,
                    Filter3.UpdatedOnUtc                   AS UpdatedOnUtc
        FROM   (SELECT Extent4.Id                             AS Id2,
                    Extent4.Name                           AS Name,
                    Extent4.Description                    AS Description,
                    Extent4.CategoryTemplateId             AS CategoryTemplateId,
                    Extent4.MetaKeywords                   AS MetaKeywords,
                    Extent4.MetaDescription                AS MetaDescription,
                    Extent4.MetaTitle                      AS MetaTitle,
                    Extent4.ParentCategoryId               AS ParentCategoryId,
                    Extent4.PictureId                      AS PictureId,
                    Extent4.PageSize                       AS PageSize,
                    Extent4.AllowCustomersToSelectPageSize AS AllowCustomersToSelectPageSize,
                    Extent4.PageSizeOptions                AS PageSizeOptions,
                    Extent4.PriceRanges                    AS PriceRanges,
                    Extent4.ShowOnHomePage                 AS ShowOnHomePage,
                    Extent4.IncludeInTopMenu               AS IncludeInTopMenu,
                    Extent4.HasDiscountsApplied            AS HasDiscountsApplied,
                    Extent4.SubjectToAcl                   AS SubjectToAcl,
                    Extent4.LimitedToStores                AS LimitedToStores,
                    Extent4.Published                      AS Published,
                    Extent4.Deleted                        AS Deleted,
                    Extent4.DisplayOrder                   AS DisplayOrder,
                    Extent4.CreatedOnUtc                   AS CreatedOnUtc,
                    Extent4.UpdatedOnUtc                   AS UpdatedOnUtc
                FROM   dbo.Category AS Extent4
                    LEFT OUTER JOIN dbo.AclRecord AS Extent5
                        ON (Extent4.Id = Extent5.EntityId)
                            AND (N'Category' = Extent5.EntityName)
                WHERE  (Extent4.Published = 1)
                    AND (Extent4.Deleted <> 1)
                    AND ((Extent4.SubjectToAcl <> 1)
                            OR ((Extent5.CustomerRoleId IN (3))
                                AND (Extent5.CustomerRoleId IS NOT NULL)))) AS Filter3
            LEFT OUTER JOIN dbo.StoreMapping AS Extent6
                ON (Filter3.Id2 = Extent6.EntityId)
                    AND (N'Category' = Extent6.EntityName)
        WHERE  (Filter3.ParentCategoryId = 7 /* @p__linq__0 */)
            AND ((Filter3.LimitedToStores <> 1)
                    OR (1 /* @p__linq__1 */ = Extent6.StoreId))
            AND (Project2.Id = Filter3.Id2)) AS Limit1) AS Project4
ORDER  BY Project4.DisplayOrder ASC

Too many tables in selected statement

With the abstraction of an OR/M such as NHibernate, it's easy to generate queries which are not optimized.

In this case, we detected that your query touches a large number of tables, which may lead to poor performance and slow execution, especially once you grow to a large data set. The more tables in the query, the more work the database has to do, whether through subselects, joins, or nested queries.

Consider optimizing the query to use fewer tables, or even splitting it into multiple separate queries if that proves more performant (see the sketch after the example below).

Example query with multiple tables:

select blog0_.Id as Id15_0_,
  posts1_.Id as Id20_1_,
  comments2_.Id as Id19_2_,
  blog0_.Subtitle as Subtitle15_0_,
  blog0_.AllowsComments as AllowsCo4_15_0_,
  blog0_.CreatedAt as CreatedAt15_0_,
  posts1_.Title as Title20_1_,
  posts1_.BlogId as BlogId20_1_,
  comments2_.PostId as PostId1__,
  comments2_.Id as Id1__,
  user3_.Password as Password22_3_,
  user3_.Username as Username22_3_,
  user3_.Bio as Bio22_3_,
  user5_.Password as Password22_4_,
  user5_.Username as Username22_4_,
  user5_.Bio as Bio22_4_,
  users4_.BlogId as BlogId2__,
  users4_.UserId as UserId2__
from Blogs blog0_
  inner join Posts posts1_
   on blog0_.Id = posts1_.BlogId
  inner join Comments comments2_
   on posts1_.Id = comments2_.PostId
  inner join Users user3_
   on posts1_.UserId = user3_.Id
  inner join UsersBlogs users4_
   on blog0_.Id = users4_.BlogId
  inner join Users user5_
   on users4_.UserId = user5_.Id
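
One way to reduce the table count, sketched here with hypothetical Blog and Post entities and NHibernate's LINQ futures (which batch the deferred queries into a single round trip where the driver supports it):

// requires: using System.Linq; using NHibernate.Linq;
// Two narrow queries instead of one five-table join; ToFuture()
// defers execution so both can be sent together in one batch.
var blogs = session.Query<Blog>()
    .Where(b => b.Id == blogId)
    .ToFuture();

var posts = session.Query<Post>()
    .Where(p => p.Blog.Id == blogId)
    .ToFuture();

// Enumerating either result triggers the batched execution.
var blog = blogs.SingleOrDefault();
var blogPosts = posts.ToList();

Futures are just one option; plain separate queries work too, at the cost of an extra round trip.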

Too many where clauses in statement

With the abstraction of an OR/M such as NHibernate, it's easy to create code that is hard to follow and carries performance penalties, since you don't see the exact SQL that is produced at the end. This often encourages you to write code that produces bad SQL: bad not only for readability, but also because of the work the database's query analyzer has to do in order to serve such a query.

In this case, we detected a query with a large number of WHERE clauses. Too many WHERE clauses can lead to poor performance, especially once you grow to a big data set.

We suggest keeping the number of WHERE clauses as small as possible.
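
For illustration, here is a hedged sketch (hypothetical Order entity and optional filter parameters) of how such statements typically arise: each conditionally applied filter appends another predicate to the generated WHERE clause.

// requires: using System.Linq; using NHibernate.Linq;
IQueryable<Order> query = session.Query<Order>();

if (fromDate.HasValue)
    query = query.Where(o => o.CreatedAt >= fromDate.Value);
if (toDate.HasValue)
    query = query.Where(o => o.CreatedAt <= toDate.Value);
if (!string.IsNullOrEmpty(customerName))
    query = query.Where(o => o.Customer.Name == customerName);
// ...a dozen more optional filters quickly add up...

var orders = query.ToList();

When many such filters accumulate, consider restricting the query to the few most selective conditions, or issuing several smaller queries instead.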

Transaction disposed without explicit rollback / commit

NHibernate Profiler has detected a transaction that was disposed without calling either Commit() or Rollback().

This is a common bug: forgetting to call transaction.Commit() before the transaction is disposed, which results in an implicit rollback. For example:

using(var transaction = session.BeginTransaction())
{
     var post = session.Get<Post>(5);
     post.AddComment(comment);
} // bug, implicit rollback goes here

Instead of the code above, we should have written:

using(var transaction = session.BeginTransaction())
{
     var post = session.Get<Post>(5);
     post.AddComment(comment);
     transaction.Commit();
}

This piece of code will work and save the new comment. If you want to roll back the transaction, it is recommended that you do so with an explicit call to Rollback(), since that is much easier to understand when reading the code later.
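
For example, a minimal sketch of an explicit rollback path (the IsValid check is a hypothetical placeholder for your own validation):

using(var transaction = session.BeginTransaction())
{
     var post = session.Get<Post>(5);
     post.AddComment(comment);

     if (!IsValid(post)) // hypothetical validation
     {
          transaction.Rollback(); // explicit: the intent is obvious when reading later
          return;
     }

     transaction.Commit();
}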

Unbounded result set

An unbounded result set occurs when a query is performed without explicitly limiting the number of returned results, whether with SetMaxResults() in NHibernate or with TOP or LIMIT clauses in the SQL. Usually, this means that the application assumes that a query will always return only a few records. That works well in development and in testing, but it is a time bomb waiting to explode in production.

The query may suddenly start returning thousands upon thousands of rows, and in some cases, it may return millions of rows. This leads to more load on the database server, the application server, and the network. In many cases, it can grind the entire system to a halt, usually ending with the application servers crashing with out of memory errors.

Here is one example of a query that will trigger the unbounded result set warning:

session.CreateQuery("from OrderLines lines where lines.Order.Id = :id")
       .SetParameter("id", orderId)
       .List();

If the order has many line items, we are going to load all of them, which is probably not what was intended. This can be fixed fairly easily by using pagination:

session.CreateQuery("from OrderLines lines where lines.Order.Id = :id")
    .SetParameter("id", orderId)
    .SetFirstResult(0)
    .SetMaxResults(25)
    .List();

Now we are assured that we only need to handle a predictable, small result set, and if we need to work with all of them, we can page through the records as needed. But there is another common occurrence of the unbounded result set problem from directly traversing the object graph, as in the following example:

var order = session.Get<Order>(orderId);
DoSomethingWithOrderLines(order.OrderLines);

Here, again, we are loading the entire set (which is, in fact, identical to the query we issued before) without regard for how big the result set may be. NHibernate provides a robust way of handling this scenario using filters:

var order = session.Get<Order>(orderId);
var orderLines = session.CreateFilter(order.OrderLines, "")
    .SetFirstResult(0)
    .SetMaxResults(25)
    .List();

DoSomethingWithOrderLines(orderLines);

This allows us to page through a collection very easily, and saves us from having to deal with unbounded result sets and their consequences.

Use of implicit transactions is discouraged

A common mistake when using a database is to use transactions only when orchestrating several write statements. In reality, every operation that the database is doing is done inside a transaction, including queries and writes (update, insert, delete).

Note: In SQL Server, Implicit Transactions are usually called Autocommit Transactions.

When we don't define our own transactions, the database falls back to implicit transaction mode, where every statement runs in its own transaction. This carries a significant performance cost (the database must build and tear down a transaction per statement) and reduces consistency.

Even if we are only reading data, we should use a transaction, because using transactions ensures that we get consistent results from the database. NHibernate assumes that all access to the database is done under a transaction, and strongly discourages any use of the session without a transaction.

Example of valid code:

using(var session = sessionFactory.OpenSession())
using(var tx = session.BeginTransaction())
{
    // execute code that uses the session
    tx.Commit();
}

Leaving aside the safety issues of working without explicit transactions, the assumption that transactions are costly and need to be optimized away is false. As previously mentioned, databases always run in a transaction, and they have been heavily optimized to work with transactions.

The real question here is: Is the transaction per-statement or per-batch? There is a non-trivial amount of work that needs to be done to create and dispose of a transaction; having to do it per-statement is more costly than doing it per-batch.
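
To make the contrast concrete, here is a minimal sketch (using the Post entity from the earlier examples) of per-statement versus per-batch transactions:

// Implicit: each statement runs in its own autocommit transaction.
var first = session.Get<Post>(1);   // a transaction is built and torn down here
var second = session.Get<Post>(2);  // ...and again here

// Explicit: the same work shares a single transaction.
using (var tx = session.BeginTransaction())
{
    var third = session.Get<Post>(3);
    var fourth = session.Get<Post>(4);
    tx.Commit(); // one transaction for the whole batch
}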

It is possible to control the number and type of locks that a transaction takes by changing the transaction isolation level (and, indeed, a common optimization is to reduce the isolation level).
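
For example, a brief sketch of lowering the isolation level when beginning the transaction; ReadCommitted is only an illustration, and the right level depends on your consistency requirements:

// requires: using System.Data;
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction(IsolationLevel.ReadCommitted))
{
    // reads here take cheaper, shorter-lived locks than under Serializable
    tx.Commit();
}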

NHibernate treats the call to Commit() as the time to flush all changed items from the unit of work to the database, and without an explicit call to Commit(), it has no way of knowing when it should do that. A call to Flush() is possible, but it is frowned upon because this is usually a sign of improper transaction usage.

We strongly suggest using code similar to that shown above (or another approach to transactions, such as TransactionScope, or Castle's Automatic Transaction Management) in order to handle transactions correctly.
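
For completeness, here is a hedged sketch of the TransactionScope approach (System.Transactions); NHibernate can enlist in the ambient transaction, and Complete() takes the place of Commit():

// requires: using System.Transactions;
using (var scope = new TransactionScope())
using (var session = sessionFactory.OpenSession())
{
    // execute code that uses the session
    scope.Complete(); // omitting this call causes a rollback on dispose
}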

Transaction and the second level cache

Another implication of not using explicit transactions with NHibernate is related to the use of the second level cache.

NHibernate goes to great lengths to ensure that the 2nd level cache maintains a consistent view of the database. This is accomplished by deferring all 2nd level cache updates until the transaction commits. That way, we can be sure that the data in the 2nd level cache matches the data committed to the database.

Forgetting to use explicit transactions effectively nullifies the 2nd level cache. Here is an example that makes this clear:

using(var session = sessionFactory.OpenSession())
{
    var post = session.Get<Post>(1);
    // do something with post
}

Even if the 2nd level cache is enabled for Post, it is still not going to be cached in the 2nd level cache. The reason is that until we commit a transaction, NHibernate will not update the cache with the values for the loaded entities.

This code, however, does make use of the 2nd level cache:

using(var session = sessionFactory.OpenSession())
using(var tx = session.BeginTransaction())
{
    var post = session.Get<Post>(1);
    // do something with post
    tx.Commit();
}

Using a single session in multiple threads is likely a bug

NHibernate Profiler has detected a session that is used in a different thread than the one it was opened on.

NHibernate sessions are not thread safe, and attempting to use them in multiple threads requires careful synchronization. It is generally better to consider a session only useful within the thread that created it.

There are valid scenarios for cross-thread session usage (background loading with careful synchronization, a session spanning multiple requests), but usually this alert indicates a problem in serializing access to the session.

The following code will trigger the alert:

using(var session = factory.OpenSession())
{
    var wait = new ManualResetEvent(false);

    //simulating work on the session using a different thread...
    ThreadPool.QueueUserWorkItem(state =>
    {
        using(var tx = session.BeginTransaction())
        {
            session.Get<Blog>(1);

            tx.Commit();
        }

        wait.Set();
    });

    wait.WaitOne();
}
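
If the work genuinely belongs on another thread, a safer pattern, sketched below, is to give that thread its own session:

ThreadPool.QueueUserWorkItem(state =>
{
    // sessions are cheap to create; each thread opens its own
    using (var session = factory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        session.Get<Blog>(1);

        tx.Commit();
    }
});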

If you are using the session across multiple threads intentionally and are certain that you are correctly serializing access to it, then you can safely ignore this alert.

You can also disable this alert completely in the options dialog.

Error was detected

There are two main reasons for this alert to show up.

NHibernate Warning Surfacing

The NHibernate Profiler will surface logged warnings from NHibernate, so you can figure out the reason for the warning and fix it.

NHibernate is quite good at detecting problematic situations and warning you about them, so pay attention to whatever warnings are surfaced.

NHibernate Error Detection

Whenever NHibernate runs into an error (transaction deadlock, optimistic concurrency exception, etc.), the NHibernate Profiler will detect it and show you the full error, including the exception message and the stack trace.

SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.

Most of the time, this error is caused by using DateTime.MinValue or DateTime.MaxValue. Make sure that your DateTime values fall within SQL Server's supported DATETIME range.
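
As a defensive measure, you can clamp or reject out-of-range values before saving; a minimal sketch (the ShippedOn property and the clamping policy are illustrative assumptions):

// requires: using System.Data.SqlTypes;
DateTime minSqlDate = SqlDateTime.MinValue.Value; // 1753-01-01 00:00:00

// Hypothetical guard before persisting an entity:
if (order.ShippedOn < minSqlDate)
    order.ShippedOn = minSqlDate; // alternatively, map the column as datetime2 or make it nullable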