Dependency Injection in ASP.NET MVC3

February 11, 2011

Dependency injection means that instead of writing code like this in your controller

private IBlogService _BlogService;
public BlogController()
{
    _BlogService = new BlogService();
}

you write code like this

private IBlogService _BlogService;
public BlogController(IBlogService blogService)
{
    _BlogService = blogService;
}

The benefits of dependency injection are that your classes are loosely coupled, more testable, and easily pluggable.
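
A quick sketch of the testability benefit (FakeBlogService, Post, and the Index action are illustrative names, not from this post): with constructor injection, the controller can be exercised against a fake service, with no real BlogService or database involved.

```csharp
// Hypothetical fake implementation used only in tests
public class FakeBlogService : IBlogService
{
    public IEnumerable<Post> GetRecentPosts()
    {
        return new List<Post> { new Post { Title = "Test post" } };
    }
}

[TestMethod]
public void Index_Returns_A_View()
{
    // Inject the fake through the constructor
    var controller = new BlogController(new FakeBlogService());

    var result = controller.Index() as ViewResult;

    Assert.IsNotNull(result);
}
```

With the `new BlogService()` version of the constructor, this test would be very hard to write without a live database.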

To enable dependency injection in your controllers in ASP.NET MVC 2, you had to create a new class derived from DefaultControllerFactory and override the GetControllerInstance method to create the controller using your dependency injection container, e.g.

public class IoCControllerFactory : DefaultControllerFactory
{
    // container creation and type registrations elided for brevity
    private readonly IUnityContainer _container = new UnityContainer();

    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        return (IController)_container.Resolve(controllerType);
    }
}

and then you had to register this controller factory as the default in the Application_Start event in the Global.asax file

protected void Application_Start()
{

    ControllerBuilder.Current.SetControllerFactory(typeof(IoCControllerFactory));
    ...
}

The problem with this approach is that you need to create separate custom classes for model binders and for custom model metadata.

ASP.NET MVC 3 makes it easier to inject dependencies by introducing a new interface, IDependencyResolver. The benefit is that this dependency resolver is responsible for resolving dependencies not only for controllers but also for the services (repository, logger, etc.) consumed by the controller, as well as for view engines, model binders, and model metadata providers.

The interface has two methods

object GetService(Type serviceType);
IEnumerable<object> GetServices(Type serviceType);

which return either a single object or a sequence of objects of the given serviceType. If a type cannot be resolved by the dependency resolver, ASP.NET MVC 3 expects GetService to return null (and GetServices to return an empty sequence).

If the dependency resolver returns null, ASP.NET MVC 3 falls back to its default behavior to create the requested type.

To use this new interface simply create a new class which implements this interface

public class UnityDependencyResolver : IDependencyResolver
{
    private readonly IUnityContainer _container;

    public UnityDependencyResolver(IUnityContainer container)
    {
        _container = container;
    }

    #region IDependencyResolver Members
    public object GetService(Type serviceType)
    {
        try
        {
            return _container.Resolve(serviceType);
        }
        catch (ResolutionFailedException)
        {
            return null;
        }
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        try
        {
            return _container.ResolveAll(serviceType);
        }
        catch (ResolutionFailedException)
        {
            return Enumerable.Empty<object>();
        }
    }
    #endregion
}

I’m using Unity as my dependency container. Since Unity throws a ResolutionFailedException if it cannot resolve a type, we wrap the Resolve calls in a try/catch block and return null (or an empty sequence for GetServices) when resolution fails.

Just like the controller factory, we need to register our dependency resolver in the Application_Start event in our Global.asax

protected void Application_Start()
{
    var container = new UnityContainer()
        .LoadConfiguration();

    DependencyResolver.SetResolver(new UnityDependencyResolver(container));
    ...
}

You can configure your container either at runtime or via the .config file. I prefer the .config approach because I can then easily move my application to any environment (Dev vs. QA) and swap out my EmailLogger for a NullLogger, or whatever is required, by changing the mapping in the .config file.
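
For reference, a minimal sketch of what that .config mapping might look like with Unity 2.0's configuration section (ILogger, EmailLogger, NullLogger, and the MyApp namespace are illustrative names):

```xml
<configSections>
  <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
</configSections>

<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
  <container>
    <register type="MyApp.Services.ILogger, MyApp" mapTo="MyApp.Services.EmailLogger, MyApp" />
    <!-- in QA, change mapTo to MyApp.Services.NullLogger with no recompile -->
  </container>
</unity>
```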

SQL to LINQ Cheat Sheet

September 27, 2009

If you are already working with SQL and are familiar with SQL queries, you may at times find yourself mentally converting SQL syntax to LINQ syntax when writing LINQ. The following cheat sheet should help you with some of the common queries.

 

SQL

LINQ

Lambda

SELECT *

FROM HumanResources.Employee

from e in Employees

select e

Employees
   .Select (e => e)

SELECT e.LoginID, e.JobTitle

FROM HumanResources.Employee AS e

from e in Employees

select new {e.LoginID, e.JobTitle}

Employees
   .Select (
      e =>
         new 
         {
            LoginID = e.LoginID,
            JobTitle = e.JobTitle
         }
   )

SELECT e.LoginID AS ID, e.JobTitle AS Title

FROM HumanResources.Employee AS e

from e in Employees

select new {ID = e.LoginID, Title = e.JobTitle}

Employees
   .Select (
      e =>
         new 
         {
            ID = e.LoginID,
            Title = e.JobTitle
         }
   )

SELECT DISTINCT e.JobTitle

FROM HumanResources.Employee AS e

(from e in Employees

select e.JobTitle).Distinct()

Employees
   .Select (e => e.JobTitle)
   .Distinct ()

SELECT e.*

FROM HumanResources.Employee AS e

WHERE e.LoginID = 'test'

from e in Employees

where e.LoginID == "test"

select e

Employees
   .Where (e => (e.LoginID == "test"))

SELECT e.*

FROM HumanResources.Employee AS e

WHERE e.LoginID = 'test' AND e.SalariedFlag = 1

from e in Employees

where e.LoginID == "test" && e.SalariedFlag

select e

Employees
   .Where (e => ((e.LoginID == "test") && e.SalariedFlag))

SELECT e.*
FROM HumanResources.Employee AS e

WHERE e.VacationHours >= 2 AND e.VacationHours <= 10

from e in Employees

where e.VacationHours >= 2 && e.VacationHours <= 10

select e

Employees
   .Where (e => (((Int32)(e.VacationHours) >= 2) && ((Int32)(e.VacationHours) <= 10)))

SELECT e.*

FROM HumanResources.Employee AS e
ORDER BY e.NationalIDNumber

from e in Employees

orderby e.NationalIDNumber

select e

Employees
   .OrderBy (e => e.NationalIDNumber)

SELECT e.*

FROM HumanResources.Employee AS e

ORDER BY e.HireDate DESC, e.NationalIDNumber

from e in Employees

orderby e.HireDate descending, e.NationalIDNumber

select e

Employees
   .OrderByDescending (e => e.HireDate)
   .ThenBy (e => e.NationalIDNumber)

SELECT e.*
FROM HumanResources.Employee AS e

WHERE e.JobTitle LIKE 'Vice%' OR SUBSTRING(e.JobTitle, 0, 3) = 'Pro'

from e in Employees

where e.JobTitle.StartsWith("Vice") || e.JobTitle.Substring(0, 3) == "Pro"

select e

Employees
   .Where (e => (e.JobTitle.StartsWith ("Vice") || (e.JobTitle.Substring (0, 3) == "Pro")))

SELECT SUM(e.VacationHours)

FROM HumanResources.Employee AS e

 

Employees.Sum(e => e.VacationHours);

SELECT COUNT(*)

FROM HumanResources.Employee AS e

 

Employees.Count();

SELECT SUM(e.VacationHours) AS TotalVacations, e.JobTitle

FROM HumanResources.Employee AS e

GROUP BY e.JobTitle

from e in Employees

group e by e.JobTitle into g

select new {JobTitle = g.Key, TotalVacations = g.Sum(e => e.VacationHours)}

Employees
   .GroupBy (e => e.JobTitle)
   .Select (
      g =>
         new 
         {
            JobTitle = g.Key,
            TotalVacations = g.Sum (e => (Int32)(e.VacationHours))
         }
   )

SELECT e.JobTitle, SUM(e.VacationHours) AS TotalVacations

FROM HumanResources.Employee AS e

GROUP BY e.JobTitle

HAVING COUNT(*) > 2

from e in Employees

group e by e.JobTitle into g

where g.Count() > 2

select new {JobTitle = g.Key, TotalVacations = g.Sum(e => e.VacationHours)}

Employees
   .GroupBy (e => e.JobTitle)
   .Where (g => (g.Count () > 2))
   .Select (
      g =>
         new 
         {
            JobTitle = g.Key,
            TotalVacations = g.Sum (e => (Int32)(e.VacationHours))
         }
   )

SELECT *

FROM Production.Product AS p, Production.ProductReview AS pr

from p in Products

from pr in ProductReviews

select new {p, pr}

Products
   .SelectMany (
      p => ProductReviews,
      (p, pr) =>
         new 
         {
            p = p,
            pr = pr
         }
   )

SELECT *

FROM Production.Product AS p

INNER JOIN Production.ProductReview AS pr ON p.ProductID = pr.ProductID

from p in Products

join pr in ProductReviews on p.ProductID equals pr.ProductID

select new {p, pr}

Products
   .Join (
      ProductReviews,
      p => p.ProductID,
      pr => pr.ProductID,
      (p, pr) =>
         new 
         {
            p = p,
            pr = pr
         }
   )

SELECT *

FROM Production.Product AS p

INNER JOIN Production.ProductCostHistory AS pch ON p.ProductID = pch.ProductID AND p.SellStartDate = pch.StartDate

from p in Products

join pch in ProductCostHistories on new {p.ProductID, StartDate = p.SellStartDate} equals new {pch.ProductID, StartDate = pch.StartDate}

select new {p, pch}

Products
   .Join (
      ProductCostHistories,
      p =>
         new 
         {
            ProductID = p.ProductID,
            StartDate = p.SellStartDate
         },
      pch =>
         new 
         {
            ProductID = pch.ProductID,
            StartDate = pch.StartDate
         },
      (p, pch) =>
         new 
         {
            p = p,
            pch = pch
         }
   )

SELECT *

FROM Production.Product AS p

LEFT OUTER JOIN Production.ProductReview AS pr ON p.ProductID = pr.ProductID

from p in Products

join pr in ProductReviews on p.ProductID equals pr.ProductID

into prodrev

select new {p, prodrev}

Products
   .GroupJoin (
      ProductReviews,
      p => p.ProductID,
      pr => pr.ProductID,
      (p, prodrev) =>
         new 
         {
            p = p,
            prodrev = prodrev
         }
   )

SELECT p.ProductID AS ID

FROM Production.Product AS p

UNION

SELECT pr.ProductReviewID

FROM Production.ProductReview AS pr

(from p in Products

select new {ID = p.ProductID}).Union(

from pr in ProductReviews

select new {ID = pr.ProductReviewID})

Products
   .Select (
      p =>
         new 
         {
            ID = p.ProductID
         }
   )
   .Union (
      ProductReviews
         .Select (
            pr =>
               new 
               {
                  ID = pr.ProductReviewID
               }
         )
   )

SELECT TOP (10) *

FROM Production.Product AS p

WHERE p.StandardCost < 100

(from p in Products

where p.StandardCost < 100

select p).Take(10)

Products
   .Where (p => (p.StandardCost < 100))
   .Take (10)

SELECT *

FROM [Production].[Product] AS p

WHERE p.ProductID IN(

    SELECT pr.ProductID

    FROM [Production].[ProductReview] AS [pr]

    WHERE pr.[Rating] = 5

    )

from p in Products

where (from pr in ProductReviews

where pr.Rating == 5

select pr.ProductID).Contains(p.ProductID)

select p

Products
   .Where (
      p =>
         ProductReviews
            .Where (pr => (pr.Rating == 5))
            .Select (pr => pr.ProductID)
            .Contains (p.ProductID)
   )

 

Also, here is an excellent LINQ query comprehension diagram http://www.albahari.com/nutshell/linqsyntax.emf


Select top n rows from a table for each group

March 6, 2009

We have a retail shop online, and within just two weeks of our website launch we already have close to 30 orders. Now of course marketing wanted to get some ads up on the site to drive more orders, and one of the requested reports was the top two products from each manufacturer with the best promotional price.

A GROUP BY clause immediately comes to mind for the above scenario, but SQL Server 2005/2008 offers a much better solution.

SELECT    
    ProductId,
    MfgPartNumber,
    DescriptionText,
    MfgListPrice,
    SellPrice,
    SavingsPercentage
FROM    (
    SELECT
        ROW_NUMBER() OVER(PARTITION BY P.MfgCode ORDER BY (P.MfgListPrice - PR.SellPrice) / P.MfgListPrice DESC) AS RowNumber,
        P.ProductId,
        P.MfgPartNumber,
        P.DescriptionText,
        P.MfgListPrice,
        PR.SellPrice,
        (P.MfgListPrice - PR.SellPrice) / P.MfgListPrice AS SavingsPercentage
    FROM    Product P
    INNER JOIN    Price PR
        ON    P.ProductId = PR.ProductId
    INNER JOIN    SpecialPricingXRef SP
        ON    PR.PriceGroupId = SP.PriceGroupId
    WHERE    SP.PricingType = 'Promo'
        AND    P.MfgListPrice > PR.SellPrice
    ) AS InnerTable
WHERE    RowNumber < 3
ORDER BY    SavingsPercentage DESC;

Aside from its normal use of generating sequence numbers for a result set, the above is the perfect scenario where the ROW_NUMBER() function is really useful. The ROW_NUMBER() function creates a new column in the result set with a unique, incremental number for each row. If the PARTITION BY clause (think of a partition as a category or group) is also specified, the sequence number is reset to 1 for each new partition/category.
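
To see the reset in action, here is a tiny self-contained sketch with hypothetical data (the VALUES table constructor assumes SQL Server 2008 or later):

```sql
SELECT  MfgCode,
        ProductName,
        ROW_NUMBER() OVER(PARTITION BY MfgCode ORDER BY Savings DESC) AS RowNumber
FROM    (VALUES
            ('ACME', 'Widget', 0.30),
            ('ACME', 'Gadget', 0.10),
            ('ACME', 'Gizmo',  0.20),
            ('BOLT', 'Wrench', 0.25),
            ('BOLT', 'Hammer', 0.15)
        ) AS T(MfgCode, ProductName, Savings);
-- ACME rows are numbered 1-3 by descending savings,
-- then the numbering resets to 1 for BOLT
```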

Let's dissect the above query; the following query returns all the products that have a promo price record, with a RowNumber incremented for each row. I'll get to why we need the RowNumber in our query later.

SELECT
    ROW_NUMBER() OVER(ORDER BY (P.MfgListPrice - PR.SellPrice) / P.MfgListPrice DESC) AS RowNumber,
    P.ProductId,
    P.MfgPartNumber,
    P.DescriptionText,
    P.MfgListPrice,
    PR.SellPrice,
    (P.MfgListPrice - PR.SellPrice) / P.MfgListPrice AS SavingsPercentage
FROM    Product P
INNER JOIN    Price PR
    ON    P.ProductId = PR.ProductId
INNER JOIN    SpecialPricingXRef SP
    ON    PR.PriceGroupId = SP.PriceGroupId
WHERE    SP.PricingType = 'Promo'
    AND    P.MfgListPrice > PR.SellPrice

Adding the PARTITION BY clause to the above changes the output slightly: the sequence number generated by ROW_NUMBER() is reset to 1 for each new partition. In concept, the PARTITION BY clause is similar to GROUP BY, except it applies only to the function and not to the SELECT as a whole.

SELECT
    ROW_NUMBER() OVER(PARTITION BY P.MfgCode ORDER BY (P.MfgListPrice - PR.SellPrice) / P.MfgListPrice DESC) AS RowNumber,
    P.ProductId,
    P.MfgPartNumber,
    P.DescriptionText,
    P.MfgListPrice,
    PR.SellPrice,
    (P.MfgListPrice - PR.SellPrice) / P.MfgListPrice AS SavingsPercentage
FROM    Product P
INNER JOIN    Price PR
    ON    P.ProductId = PR.ProductId
INNER JOIN    SpecialPricingXRef SP
    ON    PR.PriceGroupId = SP.PriceGroupId
WHERE    SP.PricingType = 'Promo'
    AND    P.MfgListPrice > PR.SellPrice

The WHERE clause in our outer SELECT simply filters the results so that only rows with RowNumber less than 3 are returned. Recall from the previous query that RowNumber is reset to 1 for each manufacturer; so to get just the top 2 rows for each manufacturer, we keep the rows where RowNumber is 1 or 2 (< 3). The result is the list of the 2 rows with the highest savings for each manufacturer… simple really.

P.S. If you prefer Common Table Expressions then you can rewrite the query to:

WITH InnerTable AS(
    SELECT
        ROW_NUMBER() OVER(PARTITION BY P.MfgCode ORDER BY (P.MfgListPrice - PR.SellPrice) / P.MfgListPrice DESC) AS RowNumber,
        P.ProductId, 
        P.MfgPartNumber,
        P.DescriptionText,
        P.MfgListPrice,
        PR.SellPrice,
        (P.MfgListPrice - PR.SellPrice) / P.MfgListPrice AS SavingsPercentage
    FROM    Product P
    INNER JOIN    Price PR
        ON    P.ProductId = PR.ProductId
    INNER JOIN    SpecialPricingXRef SP
        ON    PR.PriceGroupId = SP.PriceGroupId
    WHERE    SP.PricingType = 'Promo'
        AND    P.MfgListPrice > PR.SellPrice
) 

SELECT    ProductId,
    MfgPartNumber,
    DescriptionText,
    MfgListPrice,
    SellPrice,
    SavingsPercentage
FROM    InnerTable
WHERE    RowNumber < 3
ORDER BY    SavingsPercentage DESC;

The query plan for the subquery version and the CTE version is the same, so it's all a matter of choice and readability.

Caching Application Block and database backing store

February 6, 2009

Caching can help to overcome some of the challenges associated with enterprise-scale distributed web applications:

  • Performance – Caching improves application performance by storing relevant data as close as possible to the data consumer. This avoids repetitive data creation, processing and transportation
  • Scalability – Storing information in a cache helps save resources and increases scalability as the demands on the application increase
  • Availability – By storing data in a local cache, the application may be able to survive system failures such as network latency, web service problems, and hardware failures

Out of the box ASP.NET provides three primary forms of caching:

  • Page Level output caching – A copy of the HTML that was sent in response to a request is kept in memory, and subsequent requests are served the cached output until the cache expires. This can result in large performance gains, as sending the cached output is always very fast and fairly constant
  • User Control level output caching (fragment caching) – Page level output caching may not be feasible in cases where certain parts of the page are customized for the user. Yet, there may be other parts of the page e.g. menus and layout elements which are common to the entire application. The cached controls can be configured to vary based on some set property or any of the variations supported by page level caching. All pages using the same controls share the same cached entries for these controls.
  • And the Cache API – The real power of caching is exposed via the Cache object. ASP.NET includes an easy-to-use caching mechanism that can be used to store objects in memory that require a lot of server resources. The .NET Framework includes the ASP.NET cache in the System.Web namespace which can be accessed through the System.Web.HttpContext.Cache object. WinForm applications can also make use of this Cache object by referencing the System.Web assembly and can access it through the System.Web.HttpRuntime.Cache object. Instances are private to each application and the lifetime is tied to the corresponding application.

By using the Caching Application Block we can write a consistent form of code to implement caching in any application component, be it the web UI, a Windows service, a WinForm desktop application, or a web service. The Caching Application Block is optimized for performance and is both thread safe and exception safe.

The Caching Application Block works with ASP.NET cache and provides a number of features that are not available to the ASP.NET cache such as:

  • The ability to use a persistent backing store – both isolated storage and database backing store
  • The ability to encrypt a cache item’s data – this works only when using a persistent backing store
  • Multiple methods of setting expiration times – absolute time, sliding time, extended time format, file dependency, or never expires
  • The core settings are described in configuration files and can be changed without recompilation of the project
  • Can be extended to create your own expiration policies and storage mechanisms

To use the Caching Application Block you need to add references of the following assemblies to your project:

  • Microsoft.Practices.EnterpriseLibrary.Common
  • Microsoft.Practices.EnterpriseLibrary.Caching

The following namespaces need to be included in the classes that use the Caching Block:

  • Microsoft.Practices.EnterpriseLibrary.Caching
  • Microsoft.Practices.EnterpriseLibrary.Caching.Expirations
  • Microsoft.Practices.EnterpriseLibrary.Common

If there is a requirement for a persistent backing store then the data access block needs to be included:

  • Microsoft.Practices.EnterpriseLibrary.Data
  • Microsoft.Practices.EnterpriseLibrary.Caching.Database

If there is a requirement to encrypt data in the persistent backing store then the encryption block needs to be included:

  • Microsoft.Practices.EnterpriseLibrary.Security.Cryptography

Configuring the Caching Block

In Memory Cache

<configSections> 

         <section name="cachingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> 

</configSections> 

<cachingConfiguration> 

          <cacheManagers> 

                    <add expirationPollFrequencyInSeconds="60" maximumElementsInCacheBeforeScavenging="10" numberToRemoveWhenScavenging="5" backingStoreName="Null Storage" name="Prices" /> 

          </cacheManagers> 

          <backingStores> 

                    <add encryptionProviderName="" type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Null Storage" /> 

          </backingStores> 

</cachingConfiguration> 

Cache Using Backing Store

Use the database backing store provider when deploying your application on a web farm on multiple computers or on multiple processes on the same machine scenario. To use the database backing store you need to first create the cache database on SQL Server. The script to do this can be found in <Enterprise Library Source Dir>\App Blocks\Src\Caching\Database\Scripts.

<configSections> 

         <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> 

         <section name="cachingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> 

</configSections> 

<dataConfiguration defaultDatabase="Northwind" /> 

<connectionStrings> 

         <add name="CacheDSN" connectionString="Data Source=(local);Initial Catalog=Caching;Integrated Security=True;User Instance=False" providerName="System.Data.SqlClient" /> 

        <add name="Northwind" connectionString="Data Source=(local);Initial Catalog=Northwind;Integrated Security=True" providerName="System.Data.SqlClient" /> 

</connectionStrings> 

<cachingConfiguration defaultCacheManager="Customers"> 

         <cacheManagers> 

                 <add expirationPollFrequencyInSeconds="60" maximumElementsInCacheBeforeScavenging="11000" numberToRemoveWhenScavenging="10" backingStoreName="DataStorage" name="Customers" /> 

         </cacheManagers> 

         <backingStores> 

                 <add databaseInstanceName="CacheDSN" partitionName="MyFirstCacheApp" encryptionProviderName="" type="Microsoft.Practices.EnterpriseLibrary.Caching.Database.DataBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching.Database, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="DataStorage" /> 

         </backingStores> 

</cachingConfiguration> 

Cache Application Block Class Reference

CacheFactory Class – The CacheFactory uses the supplied configuration information to determine the type of cache object to construct

GetCacheManager – The GetCacheManager method returns a CacheManager object determined by the configuration information

ICacheManager myCache = CacheFactory.GetCacheManager(); //uses the default cache specified in configuration 

ICacheManager myCustomerCache = CacheFactory.GetCacheManager("Customers"); //overload creates the named cache Customers 

CacheManager Class – The CacheManager class acts as the interface between the application and the rest of the Caching Block. It provides all the methods required to manage the application's cache

GetData – The GetData method returns an object from the cache containing the data that matches the supplied key. If the data does not exist or has expired, null is returned

Customer oCustomer = (Customer)myCustomerCache.GetData("CustomerID"); 

Add – The Add method will add an item to the cache

myCustomerCache.Add("CustomerID", oCustomer); 

myCustomerCache.Add("CustomerID", oCustomer, scavengingPriority, refreshAction, cacheExpirations); 

Contains – The Contains method returns true if the item exists in the cache

bool dataExists = myCustomerCache.Contains("CustomerID"); 

Remove – The Remove method will delete an item from the cache

myCustomerCache.Remove("CustomerID"); 

Flush – The Flush method removes all items from the cache. If an error occurs during the flush the cache is left unchanged

myCustomerCache.Flush(); 
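
Putting those methods together, a typical read-through pattern looks something like this (a sketch; the Customer type and the LoadCustomerFromDatabase helper are illustrative):

```csharp
ICacheManager customerCache = CacheFactory.GetCacheManager("Customers");

// GetData returns object, so a cast is needed
Customer customer = (Customer)customerCache.GetData("CustomerID");
if (customer == null)
{
    // cache miss or expired item: load from the store and repopulate the cache
    customer = LoadCustomerFromDatabase("CustomerID");
    customerCache.Add("CustomerID", customer);
}
```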

Monitoring Your Cache Performance

Monitoring your cache usage and performance can help you understand whether your cache is performing as expected and helps you to fine tune your cache solution. You can use the Windows performance monitor application (Perfmon) to view and analyze your cache performance data when it is not delivering the expected performance.

To monitor cache performance

  • Monitor the Cache Insert and Cache Retrieve Times under different cache loads (for example, number of items and size of cache) to identify where your performance problem is coming from. These two performance counters should be as low as possible for your cache to be more responsive to the application. You should note that the cache insert time and retrieve time should be constant regardless of the number of items in cache.
  • Check your Cache Hit/Miss ratio. A cache hit occurs when you request an item from the cache and that item is available and returned to you. A cache miss occurs when you request an item from the cache and that item is not available. If this is low, it indicates that items are rarely in cache when you need them. Possible causes for this include:
    • Your cache loading technique is not effective.
    • Your maximum allowed cache size is too small, causing frequent scavenging operations, which results in cached items being removed to free up memory.
  • Check your Cache Turnover rate. The cache turnover rate refers to the number of insertions and deletions of items from the cache per second. If this is high, it indicates that items are inserted and removed from cache at a high rate. Possible causes for this include:
    • Your maximum allowed cache size is too small, causing frequent scavenging operations which result in cached items being removed to free up memory.
    • Faulty application design, resulting in improper use of the cache.
  • Additionally you can also monitor the Cache Entries and Cache Size counters. Although the Cache Entries counter does not provide enough information regarding your cache performance it can be used with other counters to provide valuable information.

Regular monitoring of your cache should highlight any changes in data use and any bottlenecks that these might introduce. This is the main management task associated with the post-deployment phase of using a caching system.

Synchronizing Caches in a Server Farm

A common problem for distributed applications developers is how you synchronize cached data between all servers in the farm. Generally speaking, if you have a situation in which your cache needs to be synchronized in your server farm, it almost always means that your original design is faulty. You should design your application with clustering in mind and avoid such situations in the first place.

You can configure the Cache Application Block to share the backing store between servers in a web farm. All machines in the farm can have the same cache instance and partition and can read/write to the store. But the in-memory version of the cache is always unique to each server in the farm.

However, if you have one of those rare situations where such synchronization is absolutely required, you should use file dependencies to invalidate the cache when the information in the main data store changes.

To create file dependencies for cache synchronization

  • Create a database trigger that is activated when a record in your data store is changed.
  • Implement this trigger to create an empty file in the file system to be used for notification. This file should be placed either on the computer running SQL Server, a Storage Area Network (SAN), or another central server.
  • Use Application Center replication services to activate a service that copies the file from the central server to all disks in the server farm.
  • Make the creation of the file on each server trigger a dependency event to expire the cached item in the ASP.NET cache on each of the servers in the farm.

NOTE: because replicating a file across the server farm can take time, it is inefficient in cases where the cached data changes every few seconds.

The Caching Application Block should not be used if:

  • The ASP.NET cache provides all the caching functionality that the application requires
  • If security is an issue. While the persistent cache allows data to be encrypted, there is no support for encrypting the in-memory cache. If malicious users gain access to the system, they can potentially retrieve cached data. Do not store sensitive information such as passwords and credit card numbers in the cache if this is a concern for the application.
  • If multiple applications need to share the cache or the cache and application need to reside on separate systems.

As a general rule, the cache should be used to store data that is either expensive to create or expensive to transport and that is at least semi-static in nature. It is generally not a good idea to cache transactional data.

Microsoft AntiXSS Library

January 11, 2009

Cross-site scripting (XSS) is the most common web application vulnerability and is listed in the OWASP Top 10 web application vulnerabilities. XSS is also called an HTML injection attack; it occurs when un-validated user input is inserted into HTML output. This allows an attacker to construct a URL with HTML input and get it executed in the browser in the user's context. This attack can be used to extract cookie information, steal sessions, write new HTML tags, invoke ActiveX controls, etc. Essentially, anything that can be done with a browser can be done with this attack without the user's knowledge.

Microsoft AntiXSS Library

Microsoft recently released the v3.0 beta of the AntiXSS library, which helps you protect your current applications from cross-site scripting attacks while also helping you protect legacy applications with its Security Runtime Engine. The Microsoft Anti-Cross Site Scripting Library is an encoding library, provided by the ASP.NET and Application Consulting & Engineering (ACE) teams at Microsoft, designed to help developers protect their web-based applications from XSS attacks.

AntiXSS 3.0 is a powerful tool in the Microsoft toolbox that mitigates XSS risks and provides a consistent level of security, allowing you to focus on solving business problems rather than security problems.

What's new in AntiXSS 3.0:

  • Improved Performance – AntiXSS 3.0 has been completely rewritten with performance in mind, yet retains the fundamental protection from XSS attacks that you have come to rely on for your applications
  • Secure Globalization – The web is a global marketplace, and cross-site scripting is a global issue. An attack can be coded anywhere, and AntiXSS 3.0 now protects against XSS attacks coded in dozens of languages
  • Standards Compliance – AntiXSS 3.0 is written to comply with modern web standards. You can protect your web application without adversely affecting its UI

This library differs from most encoding libraries in that it uses the principle-of-inclusions technique to provide protection against XSS attacks. This approach works by defining a valid or allowed set of characters, treating anything outside this set as invalid or a potential attack, and encoding it.

Cross-site scripting (XSS) attacks exploit vulnerabilities in web-based applications that fail to properly validate and/or encode input that is embedded in response data. Malicious users can then inject client-side script into response data, causing the unsuspecting user's browser to execute the script code. The script code will appear to have originated from a trusted site and may be able to bypass browser protection mechanisms such as security zones.

    These attacks are platform and browser independent, and can allow malicious users to perform undesired actions such as gaining unauthorized access to client data like cookies or hijacking sessions entirely.

    In order for a malicious user to conduct an XSS attack against the application, they first need to find a page where all of the following are true:

    • The application is not validating input
    • The application is not encoding output that contains untrusted inputs

    A malicious user can easily exploit this by tricking a user into visiting the page while passing script such as <script>alert(‘Virus Alert!’)</script> into one of the parameters. The script injected by the malicious user gets executed in the unsuspecting user’s Web browser.
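    A hypothetical vulnerable page can be as small as a one-liner — untrusted input is written straight into the response without validation or encoding:

```csharp
// VULNERABLE (illustrative): the query-string value is emitted as-is.
// Visiting  page.aspx?name=<script>alert('Virus Alert!')</script>
// causes the injected script to run in the victim's browser.
Response.Write("Hello " + Request.QueryString["name"]);
```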

    To protect our application from XSS attacks we first need to understand the methods that malicious users can use to conduct such attacks. We can do so using the following steps:

    1. Review ASP.NET code that generates output

      Remember that in order for an XSS attack to succeed, malicious users must find a way to embed their input as part of the response data from the application; therefore, we need to identify code in the application that generates output. This might not always be an easy task, especially for large applications, and some output may not necessarily require encoding.

    2. Determine if output could contain untrusted input

      Any of the output identified in the previous step could contain untrusted user input; if you're unsure whether the output contains untrusted input, err on the side of caution and assume it does.

    3. Determine encoding method to use

      Understand which encoding method we need to use to encode our web response data. Output will require encoding if all of the following conditions are true:

      • Input is not trusted
      • Output contains untrusted input
      • Output is used in a web response data context

      Following will be helpful in determining which encoding method to use:

      HtmlEncode – Untrusted input is used in HTML output except when assigning to an HTML attribute
      <a href="http://dotnethitman.spaces.live.com">Click Here [Untrusted input]</a>

      HtmlAttributeEncode – Untrusted input is used as an HTML attribute
      <hr size=[Untrusted input]>

      JavaScriptEncode – Untrusted input is used within a JavaScript context
      <script type="text/javascript">

      [Untrusted input]

      </script>

      UrlEncode – Untrusted input is used in a URL (such as a value in a querystring)
      <a href="http://search.msn.com/results.aspx?q=[Untrusted-input]">Click Here!</a>

      VisualBasicScriptEncode – Untrusted input is used within a Visual Basic Script context
      <script type="text/vbscript" language="vbscript">

      [Untrusted input]

      </script>

      XmlEncode – Untrusted input is used in XML output, except when assigning to an XML attribute
      <xml_tag>[Untrusted input]</xml_tag>

      XmlAttributeEncode – Untrusted input is used as an XML attribute
      <xml_tag attribute=[Untrusted input]>Some Text</xml_tag>
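      Putting the list above into code, each untrusted value is encoded with the method that matches its output context. The method names below are the AntiXss class methods just listed; the control and variable names are illustrative:

```csharp
using Microsoft.Security.Application; // AntiXSSlibrary.dll

string untrusted = Request.QueryString["q"]; // illustrative untrusted input

// HTML body context
litResult.Text = AntiXss.HtmlEncode(untrusted);

// HTML attribute context
string rule = "<hr size=\"" + AntiXss.HtmlAttributeEncode(untrusted) + "\" />";

// JavaScript context (AntiXss.JavaScriptEncode wraps the result in quotes)
string script = "var q = " + AntiXss.JavaScriptEncode(untrusted) + ";";

// URL (query string) context
string link = "http://search.msn.com/results.aspx?q=" + AntiXss.UrlEncode(untrusted);

// XML element and attribute contexts
string xml = "<xml_tag attribute=\"" + AntiXss.XmlAttributeEncode(untrusted) + "\">"
           + AntiXss.XmlEncode(untrusted) + "</xml_tag>";
```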

    4. Encode output

      Now that we’ve determined which scenarios require encoding, all that’s left to do is add the Microsoft Anti-Cross Site Scripting Library to our project and encode the untrusted input as it is embedded in response data.

      After you’ve installed the Microsoft Anti-Cross Site Scripting Library, you can add the reference to the library in your ASP.NET using the following steps:

      1. Right-click the project name
      2. Select Add Reference … option
      3. Under Browse, look in the library installation directory and add the reference to AntiXSSlibrary.dll

      After we’ve added the reference to the Anti-Cross Site Scripting Library, we encode the output generated by our page. To do this:

      1. Add the directive using Microsoft.Security.Application
      2. In the output method encode using one of the encoding methods:
        AntiXss.HtmlEncode(Request.QueryString["SomeParam"]);
      3. Rebuild the Web application

    Additional Steps

    To make malicious users’ jobs even harder, there are some additional layers of defense that can be implemented to further prevent XSS attacks in our application.

    Conclusion

    XSS attacks are among the most common attacks encountered by IT teams, and with the number of Web applications increasing every day, the number of attacks is expected to keep growing. Developers need to protect their application users from such attacks by:

    • Validating and constraining input
    • Encoding output

    NOTE: a common mistake is to encode untrusted input more than once, which can result in output being displayed incorrectly.
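    A quick way to see the double-encoding problem, using the standard WebUtility.HtmlEncode for illustration:

```csharp
using System;
using System.Net;

class DoubleEncodingDemo
{
    static void Main()
    {
        string once = WebUtility.HtmlEncode("<b>");   // "&lt;b&gt;" - the browser renders this as the literal text <b>
        string twice = WebUtility.HtmlEncode(once);   // "&amp;lt;b&amp;gt;" - the browser displays the entities: &lt;b&gt;

        Console.WriteLine(once);
        Console.WriteLine(twice);
    }
}
```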


    Creating a generic CLR audit trigger

    November 9, 2008 1 comment

    There’s an interesting article at SqlJunkies http://sqljunkies.com/Article/4CD01686-5178-490C-A90A-5AEEF5E35915.scuk which shows how to create a generic CLR audit trigger. The audit trigger works great and includes tracking of:

    • insertions of new records
    • deletions of existing records
    • and modifications of fields in existing records.

    But there is just one small problem with this trigger code. The PerformedBy column of the Audit table in the sample code is set to the UserID from the connection string, which in most applications is the same for all connections because of connection pooling. This means the trigger will log all operations performed by the application, but it will not log the real (application) user who made the change.

    So the first step is to make sure that the logged-in application user's UserID is passed to all the CRUD stored procedures from the web application.

    CREATE PROCEDURE spSomeProc(
        ...
        @PerformedByUserId VARCHAR(32))
    AS
    BEGIN
        SET NOCOUNT ON;
        ...
        RETURN;
    END
    GO

    You can get the logged-in application user's UserID from HttpContext.Current.User.Identity.Name or Thread.CurrentPrincipal.Identity.Name and pass this value from the application to the CRUD stored procedure.
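    For example, the value can be passed from the data-access code like this (the connection string is a placeholder, and spSomeProc is the illustrative procedure from above):

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Threading;

string userId = Thread.CurrentPrincipal.Identity.Name;

using (SqlConnection conn = new SqlConnection("...connection string..."))
using (SqlCommand cmd = new SqlCommand("spSomeProc", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    // ... the procedure's other parameters go here ...
    cmd.Parameters.Add("@PerformedByUserId", SqlDbType.VarChar, 32).Value = userId;

    conn.Open();
    cmd.ExecuteNonQuery();
}
```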

    Now that we have a way to pass the UserID to our stored procedures, we need to somehow pass this parameter value on to the trigger code. Although there is no direct way of passing a parameter down to a trigger, the forums at SqlJunkies suggested creating a temporary table in the procedure, inserting the UserID as its only row, and then retrieving the value in the CLR trigger by querying that temporary table. Since the trigger runs on the same connection session, we don't need a global temporary table.

    This solution works fine, but it uses tempdb, which is a bit of a concern: a load-test scenario performing CRUD operations could easily bloat tempdb.

    Another option is to pass in the UserID via the CONTEXT_INFO, which frees up tempdb for other tasks. We can do so with the following code snippet added at the beginning of every CRUD stored procedure

    CREATE PROCEDURE spSomeProc(
        ...
        @PerformedByUserId    VARCHAR(32))
    AS
    BEGIN
        SET NOCOUNT ON;
    
        DECLARE @BinaryUserId VARBINARY(128);
        SET @BinaryUserId = CAST(@PerformedByUserId AS VARBINARY(128));
    
        SET CONTEXT_INFO @BinaryUserId;
    
        ...
    
        SET CONTEXT_INFO 0x0;
        RETURN;
    END
    GO

    and in the CLR trigger you can get the CONTEXT_INFO as:

    ...
    oCmd.CommandText  = "SELECT CAST(CONTEXT_INFO() AS VARCHAR(128))";
    string userid = (string)oCmd.ExecuteScalar();
    ... 

    We can also add a fallback here to get the UserID from "SELECT CURRENT_USER", to cover the case where CONTEXT_INFO is not set or returns NULL — for example when a DBA changes one of the tables directly (for a production support issue or whatever).

    SqlCommand CurrentUserCmd = new SqlCommand("SELECT CAST(CONTEXT_INFO() AS VARCHAR(128))", Connection);
    string CurrentUser = CurrentUserCmd.ExecuteScalar().ToString();
    if (string.IsNullOrEmpty(CurrentUser))
    {
        CurrentUserCmd.CommandText = "SELECT CURRENT_USER";
        CurrentUser = CurrentUserCmd.ExecuteScalar().ToString();
    }

    The full source code from the original article, modified as described above (and converted to C#), is given below

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server; 
    
    public partial class Triggers
    {
        //This is the original template for Trigger metadata. Note that it is table-specific (i.e. it suggests that the trigger should apply to one table only).
        //<Microsoft.SqlServer.Server.SqlTrigger(Name:="Trigger1", Target:="Table1", Event:="FOR UPDATE")> _ 
    
        //This is our actual declaration. Note that it does not specify any particular table. We don't know if it is Microsoft's intention to allow table-agnostic trigger code, but this works and we hope that it keeps working.
        //GENERIC AUDIT TRIGGER: AuditCommon
        [Microsoft.SqlServer.Server.SqlTrigger(Name = "AuditCommon", Event = "FOR UPDATE, INSERT, DELETE")]
        public static void AuditCommon()
        {
            try
            {
    #if(DEBUG)
                EmitDebugMessage("Enter Trigger");
    #endif 
    
                //Grab the already-open Connection to use as an argument
    #if(DEBUG)
                EmitDebugMessage("Open Connection");
    #endif
                SqlTriggerContext Context = SqlContext.TriggerContext;
                SqlConnection Connection = new SqlConnection("CONTEXT CONNECTION=TRUE");
                Connection.Open(); 
    
                //Load the "inserted" table
    #if(DEBUG)
                EmitDebugMessage("Load INSERTED");
    #endif
                SqlDataAdapter TableLoader = new SqlDataAdapter("SELECT * FROM INSERTED", Connection);
                DataTable InsertedTable = new DataTable();
                TableLoader.Fill(InsertedTable); 
    
                //Load the "deleted" table
    #if(DEBUG)
                EmitDebugMessage("Load DELETED");
    #endif
                TableLoader.SelectCommand.CommandText = "SELECT * FROM DELETED";
                DataTable DeletedTable = new DataTable();
                TableLoader.Fill(DeletedTable); 
    
                //Prepare the "audit" table for insertion
    #if(DEBUG)
                EmitDebugMessage("Load AUDIT schema for insertion");
    #endif
                SqlDataAdapter AuditAdapter = new SqlDataAdapter("SELECT * FROM AUDIT WHERE 1 = 0", Connection);
                DataTable AuditTable = new DataTable();
                AuditAdapter.FillSchema(AuditTable, SchemaType.Source);
                SqlCommandBuilder AuditCommandBuilder = new SqlCommandBuilder(AuditAdapter);
                //Create DataRow objects corresponding to the trigger table rows.
    #if(DEBUG)
                EmitDebugMessage("Create internal representations of trigger table rows");
    #endif
                string TableName = "";
                DataRow InsertedRow = null;
                if (InsertedTable.Rows.Count > 0)
                {
                    InsertedRow = InsertedTable.Rows[0];
                    TableName = DeriveTableNameFromKeyFieldName(InsertedTable.Columns[0].ColumnName);
                }
                DataRow DeletedRow = null;
                if (DeletedTable.Rows.Count > 0)
                {
                    DeletedRow = DeletedTable.Rows[0];
                    TableName = DeriveTableNameFromKeyFieldName(DeletedTable.Columns[0].ColumnName);
                } 
    
                //get the current database user
                SqlCommand CurrentUserCmd = new SqlCommand("SELECT CAST(CONTEXT_INFO() AS VARCHAR(128))", Connection);
                string CurrentUser = CurrentUserCmd.ExecuteScalar().ToString();
                if (string.IsNullOrEmpty(CurrentUser))
                {
                    CurrentUserCmd.CommandText = "SELECT CURRENT_USER";
                    CurrentUser = CurrentUserCmd.ExecuteScalar().ToString();
                }
                //Perform different audits based on the type of action.
                switch (Context.TriggerAction)
                {
                    case TriggerAction.Update:
                        //Ensure that both INSERTED and DELETED are populated. If not, this is not a valid update.
                        if (InsertedRow != null && DeletedRow != null)
                        {
                            //Walk through all the columns of the table.
                            foreach (DataColumn Column in InsertedTable.Columns)
                            {
                                //ALTERNATIVE CODE to compare values and record only if they are different:
                                //If Not DeletedRow.Item(Column.Ordinal).Equals(InsertedRow.Item(Column.Ordinal)) Then
                                //This code records any attempt to update, whether the new value is different or not.
                                if (Context.IsUpdatedColumn(Column.Ordinal))
                                {
                                    //DEBUG output indicating field change
    #if(DEBUG)
                                    EmitDebugMessage("Create UPDATE Audit: Column Name = " + Column.ColumnName + ", Old Value = '" + DeletedRow[Column.Ordinal].ToString() + "'" + ", New Value = '" + InsertedRow[Column.Ordinal].ToString() + "'");
    #endif
                                    //Create audit record indicating field change
                                    DataRow AuditRow = AuditTable.NewRow(); 
    
                                    //populate fields common to all audit records
                                    long RowId = (long)InsertedRow[0]; 
    
                                    //use "Inserted.TableName" when Microsoft fixes the CLR to supply it
                                    WriteCommonAuditData(AuditRow, TableName, RowId, CurrentUser, "UPDATE"); 
    
                                    //write update-specific fields
                                    AuditRow["FieldName"] = Column.ColumnName;
                                    AuditRow["OldValue"] = DeletedRow[Column.Ordinal].ToString();
                                    AuditRow["NewValue"] = InsertedRow[Column.Ordinal].ToString(); 
    
                                    //insert the new row into the audit table
                                    AuditTable.Rows.InsertAt(AuditRow, 0);
                                }
                            }
                        }
                        break; 
    
                    case TriggerAction.Insert:
                        //If the INSERTED row is not populated, then this is not a valid insertion.
                        if (InsertedRow != null)
                        {
                            //DEBUG output indicating row insertion
    #if(DEBUG)
                            EmitDebugMessage("Create INSERT Audit: Row = '" + InsertedRow[0].ToString() + "'");
    #endif
                            //Create audit record indicating field change
                            DataRow AuditRow = AuditTable.NewRow();
                            //populate fields common to all audit records
                            long RowId = (long)InsertedRow[0];
                            //use "Inserted.TableName" when Microsoft fixes the CLR to supply it
                            WriteCommonAuditData(AuditRow, TableName, RowId, CurrentUser, "INSERT");
                            //insert the new row into the audit table
                            AuditTable.Rows.InsertAt(AuditRow, 0);
                        }
                        break;
                    case TriggerAction.Delete:
                        //If the DELETED row is not populated, then this is not a valid deletion.
                        if (DeletedRow != null)
                        {
                            //DEBUG output indicating row insertion
    #if(DEBUG)
                            EmitDebugMessage("Create DELETE Audit: Row = '" + DeletedRow[0].ToString() + "'");
    #endif
                            //Create audit record indicating field change
                            DataRow AuditRow = AuditTable.NewRow();
                            //populate fields common to all audit records
                            long RowId = (long)DeletedRow[0];
                            //use "Inserted.TableName" when Microsoft fixes the CLR to supply it
                            WriteCommonAuditData(AuditRow, TableName, RowId, CurrentUser, "DELETE");
                            //insert the new row into the audit table
                            AuditTable.Rows.InsertAt(AuditRow, 0);
                        }
                        break;
                } 
    
                //update the audit table
                AuditAdapter.Update(AuditTable);
                //finish
    #if(DEBUG)
                EmitDebugMessage("Exit Trigger");
    #endif
            }
            catch (Exception ex)
            {
                //Put exception handling code here if you want to connect this to your database-based error logging system. Without this Try/Catch block, any error in the trigger routine will stop the event that fired the trigger. This is early-stage development and we're not expecting any exceptions, so for the moment we just need to know about them if they occur.
                throw;
            }
        } 
    
        //Write data into the fields of an Audit table row that is common to all types of audit activities.
        private static void WriteCommonAuditData(DataRow auditRow, string tableName, long rowId, string currentUser, string operation)
        {
            auditRow["TableName"] = tableName;
            auditRow["RowId"] = rowId;
            auditRow["OccurredAt"] = DateTime.Now;
            auditRow["PerformedBy"] = currentUser;
            auditRow["Operation"] = operation;
        } 
    
        //SQL CLR does not deliver the proper table name from either InsertedTable.TableName or DeletedTable.TableName, so we must use a substitute based on our key naming convention. We assume that in each table, the KeyFieldName = TableName + "Id". Remove this routine and its uses as soon as we can get the table name from the CLR.
        private static string DeriveTableNameFromKeyFieldName(string keyFieldName)
        {
            return keyFieldName.Substring(0, keyFieldName.Length - 2); //assumes KeyName = TableName & "Id"
        } 
    
        //Emit debug messages. This function gives us the option to turn off debugging messages by changing one value (here).
    #if(DEBUG)
        private static void EmitDebugMessage(string message)
        {
            SqlContext.Pipe.Send(message);
        }
    #endif
    }


    Unity – Dependency Injection and Inversion of Control Container

    September 8, 2008 10 comments

    The Dependency Injection Pattern
    Dependency injection is a programming technique for reducing component coupling. It is also commonly known as “inversion of control” (IoC), or sometimes as the Hollywood Principle – “Don’t call us, we’ll call you”. The goal of dependency injection is to separate the concern of how a dependency is obtained from the core concerns of a component. This improves reusability by enabling components to be supplied with dependencies which may vary depending on context.

    The Old Way
    Following is an example of how you might write code if not using dependency injection

    public class WebApp
    {
        public WebApp()
        {
            quotes = new StockQuotes();
            authenticator = new Authenticator();
            database = new Database();
            logger = new Logger();
            errorHandler = new ErrorHandler();
        }
    }

    Problem

    • What about the child objects?
    • How does the StockQuotes find the Logger?
    • How does the Authenticator find the database?
    • Suppose you want to use a TestingLogger instead? Or a MockDatabase?

    The Service Locator pattern attempts to solve some of the problems mentioned above by providing a dictionary of objects. The objects are all stored in this dictionary, and the Get method simply returns the requested object to the caller.

    Service Locator Example

    public interface ILocator
    {
        TObject Get<TObject>();
    }

    public class MyLocator : ILocator
    {
        protected Dictionary<Type, object> dict = new Dictionary<Type, object>();

        public MyLocator()
        {
            dict.Add(typeof(ILogger), new Logger());
            dict.Add(typeof(IErrorHandler), new ErrorHandler(this));
            dict.Add(typeof(IQuotes), new StockQuotes(this));
            dict.Add(typeof(IDatabase), new Database(this));
            dict.Add(typeof(IAuthenticator), new Authenticator(this));
            dict.Add(typeof(WebApp), new WebApp(this));
        }
    }

    public class StockQuotes
    {
        public StockQuotes(ILocator locator)
        {
            errorHandler = locator.Get<IErrorHandler>();
            logger = locator.Get<ILogger>();
        }
    }

    Pros

    • Classes are decoupled from explicit implementation types
    • Easy to externalize the config

    Cons

    • Everyone takes a dependency on the ILocator
    • Hard to store constants and other useful primitives
    • Creation order is still a problem

    Dependency Injection Containers
    The dependency injection container is a component responsible for assigning dependencies to a recipient component. Containers are generally implemented using the Factory Pattern to allow creation of the recipient and dependency components. Containers are often implemented to allow existing objects to be registered for use as a dependency, or to create new instances when required. Using a dependency injection container with our StockQuotes example provides the following benefits:

    • Gets rid of the dependency on the ILocator
    • Object is no longer responsible for finding its dependencies
    • The container does it for you

    In a nutshell, dependency injection just means that a given class or system is no longer responsible for instantiating their own dependencies. In this case “Inversion of Control” refers to moving the responsibility for locating and attaching dependency objects to another class or a DI tool. That might not sound that terribly profound, but it opens the door for a lot of interesting scenarios.
    Benefits of Dependency Injection:

    • Dependency Injection is an important pattern for creating classes that are easier to unit test in isolation
    • Promotes loose coupling between classes and subsystems
    • Adds potential flexibility to a codebase for future changes
    • Can enable better code reuse

    Unity Application Block
    Unity Application Block is a lightweight Inversion of Control container which supports constructor, property and method call injection. Unity sits on top of another framework called ObjectBuilder, but differs from the ObjectBuilder that was part of Enterprise Library 3.1 and earlier: Unity is based on v2 of ObjectBuilder and has been optimized for performance quite a bit. Unity is available both standalone and as part of Enterprise Library 4.0 on CodePlex at http://www.codeplex.com/unity and http://www.codeplex.com/entlib. Unity 1.1 is not part of Enterprise Library 4.0, but the good thing is that its installer will update the Unity libraries in the Enterprise Library 4.0 install folder to 1.1.

    The Unity Application Block includes the following features:

    • It provides a mechanism for building (or assembling) instances of objects, which may contain other dependent object instances.
    • It exposes RegisterType methods that support configuring the container with type mappings and objects (including singleton instances) and Resolve methods that return instances of built objects that can contain any dependent objects.
    • It provides inversion of control (IoC) functionality by allowing injection of preconfigured objects into classes built by the application block. Developers can specify an interface or class type in the constructor (constructor injection), or apply attributes to properties and methods to initiate property injection and method call injection.
    • It supports a hierarchy for containers. A container may have child container(s), allowing object location queries to pass from the child out through the parent container(s).
    • It can read configuration information from standard configuration systems, such as XML files, and use it to configure the container.
    • It makes no demands on the object class definition. There is no requirement to apply attributes to classes (except when using property or method call injection), and there are no limitations on the class declaration.
    • It supports custom container extensions that developers can implement; for example, methods to allow additional object construction and container features such as caching.

    Unity has no dependency on Enterprise Library core and can be used without having to install Enterprise Library on the host system. To use Unity in your application you need to add reference to the following dlls in your project
        Microsoft.Practices.ObjectBuilder2
        Microsoft.Practices.Unity

    The Unity container can be configured through configuration files or you can use code to register dependencies dynamically at run time. To use Unity with configuration files you need to add reference to the following dll
        Microsoft.Practices.Unity.Configuration

    Steps when using Dependency Injection

    • Write your objects the way you want
    • Setup the container
    • Ask the container for objects
    • The container creates objects for you and fulfills dependencies

    Setup the container
    The ideal place to set up the Unity container for ASP.NET applications is in the Application_Start method of the global.asax file. We want a persistent container that holds its state for the lifetime of the application, and a property on the current application in the Global.asax file is the right place to keep it.

    We create a simple interface for the container property so that we can access our container using this interface

    public interface IContainerAccessor
    {
        IUnityContainer Container { get; }
    }

    and expose the container through the application class in the Global.asax file:

    private static IUnityContainer _container;

    public static IUnityContainer Container
    {
        get { return _container; }
        set { _container = value; }
    }

    protected void Application_Start(object sender, EventArgs e)
    {
        BuildContainer();
    }

    protected void Application_End(object sender, EventArgs e)
    {
        CleanUp();
    }

    private static void BuildContainer()
    {
        IUnityContainer container = new UnityContainer();
        //TODO: Register the relevant types for the container here through classes or configuration
        Container = container;
    }

    private static void CleanUp()
    {
        if (Container != null)
        {
            Container.Dispose();
        }
    }

    The BuildContainer method is where we set up our container and register our types for dependency injection. The RegisterType<TFrom, TTo>() method tells Unity that whenever someone asks for a dependency on TFrom, give them TTo. In the example code below, the statement container.RegisterType<ILogger, EventLogLogger>() tells Unity that whenever someone has a dependency on type ILogger, it should create an object of type EventLogLogger.

    There are a few different flavors of Dependency Injection

    • Constructor Injection – Attach the dependencies through a constructor function at object creation
    • Setter Injection – Attach the dependencies through setter properties
    • Service Locator – Use a well known class that knows how to retrieve and create dependencies. Not technically DI, but this is what most DI/IoC container tools really do

    Constructor Injection

    public interface ILogger
    {
        void LogEvent(string message);
    }

    public class FileLogger : ILogger
    {
        public void LogEvent(string message) { ... }
    }

    public class EventLogLogger : ILogger
    {
        public void LogEvent(string message) { ... }
    }

    public class StockQuotes
    {
        private ILogger _logger;

        public StockQuotes(ILogger logger)
        {
            _logger = logger;
        }
    }

    UnityContainer container = new UnityContainer();
    ...
    container.RegisterType<ILogger, EventLogLogger>();
    StockQuotes quotes = container.Resolve<StockQuotes>();

    If a class that developers instantiate using the Resolve method of the Unity container has a constructor that defines one or more dependencies on other classes, the Unity container will automatically create the dependent object instance specified in parameters of the constructor. In the above example StockQuotes has a dependency on ILogger. When we create an instance of the StockQuotes class using the Resolve method of the Unity container, Unity will automatically create an instance of EventLogLogger and pass it to the constructor of StockQuotes class.

    The benefit of using constructor injection is that the constructor function now explicitly declares the dependencies of a class. Constructor injection is often recommended as it eliminates chatty calls to the object and creates a valid object in as few steps as possible.

    Setter Injection

    public class OracleDatabase : IDatabase
    {
        public void ExecuteQuery(string query) { ... }
    }

    public class SqlDatabase : IDatabase
    {
        public void ExecuteQuery(string query) { ... }
    }

    public class Authenticator
    {
        private IDatabase _database;

        [Dependency]
        public IDatabase DB
        {
            get { return _database; }
            set { _database = value; }
        }
    }

    UnityContainer container = new UnityContainer();
    ...
    container.RegisterType<IDatabase, SqlDatabase>();
    Authenticator auth = container.Resolve<Authenticator>();

    To force dependency injection of the dependent object, developers must apply the [Dependency] attribute to the property declaration. Many would argue that setter injection is really useful when legacy code needs to be upgraded and provides a smooth transition from legacy code to the new model. Making sure that any new code that depends on undesirable legacy code uses Dependency Injection leaves an easier migration path to eliminate the legacy code later with all new code. 

    As a service locator

    UnityContainer container = new UnityContainer();
    container.RegisterType<ILogger, NullLogger>();
    ...
    ILogger logger = container.Resolve<ILogger>();

    Here we are simply asking Unity for the ILogger interface, which is already registered with the container. Using the container in this manner makes it a service locator.

    Unity provides a number of ways to configure the container. As described above, you use the RegisterType method to inform the container about dependencies, but Unity can also manage object lifetimes, e.g.

    Dependencies as singleton

    UnityContainer container = new UnityContainer();
    container.RegisterType<Database, SqlDatabase>(new ContainerControlledLifetimeManager());

    The above code tells Unity that whenever someone asks for type Database, it should give them type SqlDatabase and return the same object every time instead of creating a new one for each dependency.

    Named Instance

    UnityContainer container = new UnityContainer();
    container.RegisterType<Database, SqlDatabase>("SQL");
    container.RegisterType<Database, OracleDatabase>("Oracle");
    IEnumerable<Database> databases = container.ResolveAll<Database>();
    Database database = container.Resolve<Database>("SQL");

    Named instances allow you to configure Unity with multiple dependencies for the same type, distinguished by name. This lets you have a default type mapping but also override that mapping by providing a name during object creation.

    Registering an existing object instance
    So far all the examples above show how a type can be registered with Unity so that the Unity container creates the object for you whenever requested. But what if you already have the object created and want to register that instance with Unity?

    UnityContainer container = new UnityContainer();
    container.RegisterInstance<Database>(new SqlDatabase());
    container.RegisterInstance<Database>("Oracle", new OracleDatabase());
    Database database = container.Resolve<Database>();
    Database oracleDatabase = container.Resolve<Database>("Oracle");

    When using RegisterInstance, Unity automatically makes the objects singletons.
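    To illustrate (a small sketch reusing the Database/SqlDatabase types from the examples above), resolving the same registration twice returns the very same object:

```csharp
using Microsoft.Practices.Unity;

public abstract class Database { }
public class SqlDatabase : Database { }

class Program
{
    static void Main()
    {
        UnityContainer container = new UnityContainer();
        container.RegisterInstance<Database>(new SqlDatabase());

        // RegisterInstance defaults to a container-controlled lifetime,
        // so both Resolve calls hand back the same instance.
        Database first = container.Resolve<Database>();
        Database second = container.Resolve<Database>();
        bool same = object.ReferenceEquals(first, second); // true
    }
}
```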

    Configuring Unity via config file
    The Unity container can be configured through configuration files. To use Unity with configuration files you need to add a reference to the following DLL:
        Microsoft.Practices.Unity.Configuration

    Use the following code to read the container setup from the configuration file:

    UnityContainer container = new UnityContainer();
    UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
    section.Containers.Default.Configure(container);
    ILogger logger = container.Resolve<ILogger>();

    .config file:

    <configSections>
      <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
    </configSections>
    <unity>
      <typeAliases>
        <!-- Lifetime manager types -->
        <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" />
        <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity" />
      </typeAliases>
      <containers>
        <container>
          <types>
            <type type="ConsoleApplication1.Database, ConsoleApplication1" mapTo="ConsoleApplication1.SqlDatabase, ConsoleApplication1" lifetime="Singleton" />
            <type type="ConsoleApplication1.ILogger, ConsoleApplication1" mapTo="ConsoleApplication1.EventLogLogger, ConsoleApplication1" lifetime="Singleton" />
          </types>
        </container>
      </containers>
    </unity>

    Nested Containers
    Unity supports nested containers: a parent container can create child containers, which gives you the ability to override parent type mappings with child type mappings. When resolving a type, Unity searches the child container first and, if the mapping is not found there, navigates up to the parent container to resolve the type.

    UnityContainer parentContainer = new UnityContainer();
    IUnityContainer childContainer1 = parentContainer.CreateChildContainer();
    IUnityContainer childContainer2 = parentContainer.CreateChildContainer();
    parentContainer.RegisterType<ILogger, FileLogger>(new ContainerControlledLifetimeManager());
    childContainer1.RegisterType<ILogger, EventLogLogger>(new ContainerControlledLifetimeManager());
    ILogger logger = childContainer2.Resolve<ILogger>();  // should return FileLogger from parentContainer
    ILogger logger2 = childContainer1.Resolve<ILogger>(); // should return EventLogLogger from childContainer1

    While registering types with Unity there is a certain risk of introducing unintentional circular references, which are not easy to detect or prevent.
    For example, the following code shows two classes that reference each other in their constructors.

    public class Class1
    {
        public Class1(Class2 test2) { ... }
    }

    public class Class2
    {
        public Class2(Class1 test1) { ... }
    }

    It is the responsibility of the developer to prevent this type of error by ensuring that the members of classes they use with dependency injection do not contain circular references.


    I Hate Bug Reports

    August 26, 2008 Leave a comment

    I hate bug reports not because I have an ego and think I write excellent code which can never have bugs, or because I don’t want to fix them, but because they often leave out the crucial information needed to reproduce the bug, what the expected result was, and why the actual results are wrong, and I end up on a wild goose chase hunting down testers for answers.

    If the tester does not report the bug correctly, the developer will most likely reject it as not reproducible and assign it back to the tester. The tester will most probably argue that the bug is there and can be reproduced, and a whole mess ensues, with the developer and tester kicking the ball back and forth at each other. Not only is this time lost to non-productivity, but it gets pretty frustrating pretty fast.

    If you want a bug to be fixed you need to report the bug effectively. So here are just a few pointers to our testers:

    • The bug title should describe the issue completely. A good title helps developers quickly gauge the nature of the bug and saves time when we don’t need to open the whole report to know what the problem is. Keep in mind that the title is used as a reference to search for the bug in the bug tracking tool, so add any keywords you think might help in finding it.
    • If your bug is not reproducible it will never get fixed. Clearly state how to reproduce the bug in the fewest steps possible; developers need to be able to get to the problem in the shortest time. Do not assume or skip any reproduction step. Always walk through the steps yourself and make sure none are missing and that they reproduce the bug without any ambiguity. Don’t assume that the developers can read your mind, and don’t assume they will do the few extra steps you think are obvious.
    • Never include more than one issue per report. If you have multiple issues, file separate bug reports for each and mark them as related. Reporting multiple bugs in one report will most likely cause the report to be closed when only part of it is fixed.
    • Do not post new bug or feature requests about the same bug or feature. Doing so takes a lot of time for us to merge the reports. The search feature of the bug tracking system is everyone’s friend.
    • Report the problem immediately. If you find a bug while testing, do not wait to write a detailed bug report later; write it immediately. This ensures a good, reproducible report. If you decide to write the report later, chances are high you will miss important steps.
    • To create the highest quality bug report which will save developers time and increase the likelihood the bug is fixed, a little extra work goes a long way:
      • Is the bug tied to a single setting or is there a small range of cases? Identify the range.
      • What other user actions cause the error to occur?
      • Does the error occur with different settings or options selected?
    • Attachments are extremely helpful
      • Screenshots with comments really help understand the problem. A picture is worth 1000 words.
      • At the same time a picture should not be there to replace reproduce steps. The bug should still be captured in text even if a screenshot is attached.
    • A bug report should always contain the expected and observed results. Often the developers don’t think the bug is a real bug, so it helps to explicitly list both. It is the tester’s duty to explain to the developers what went wrong.
    • Don’t use the bug report as a stepping stone – we have enough politics in the office already.

    No doubt your bug report should be a high quality document. Focus on writing good bug reports and spend some time on this task, because it is the main communication point between tester, developer, and manager. Managers should make their teams aware that writing a good bug report is a primary responsibility of any tester. Your efforts toward writing good bug reports will not only save company resources but also build a good relationship between you and the developers.

    Oh, and one final note: just because the system performs differently than what was expected does not necessarily mean it is a bug. Remember: "The best tester is the one who gets the most bugs fixed" – Cem Kaner

    Visual Studio 2008 and .NET Framework 3.5 SP1 RTM is here

    August 12, 2008 Leave a comment

    RTM SP1 for Visual Studio 2008 and .NET Framework 3.5 is out and available for download http://msdn.microsoft.com/en-us/vstudio/products/cc533448.aspx

    Visual Studio 2008 SP1 and .NET Framework 3.5 SP1 significantly improve the developer experience during the development process, and at runtime. These improvements address top issues reported by customers. For more information, see Visual Studio 2008 SP1 and .NET Framework 3.5 SP1.

    Table-Valued parameter in SQL Server 2005

    August 3, 2008 3 comments

    Before SQL Server 2005, in order to pass in a set of values one had to create a temporary table, populate it with data using INSERT, and then just use it in the procedure or function, since temporary tables are created for the current session and are available to all processes in that session.

    I wrote a blog post on how to pass Table-Valued Parameters in SQL Server 2008, but what if we need to pass multiple rows of data to a T-SQL statement, or to a routine such as a stored procedure or function, in SQL Server 2005?

    Turns out the same can be done in SQL Server 2005 without using temporary tables. By using the XML data type you can pass user-defined sets between queries and also between the client and the server.

    The following code shows how you can create and use XML parameters.

    USE AdventureWorks;
    GO
    CREATE PROCEDURE uspEmployeeList(@EmployeeList XML)
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT E.*
        FROM HumanResources.Employee E
        INNER JOIN @EmployeeList.nodes('/Root/E') AS Tbl(C)
            ON E.EmployeeID = Tbl.C.value('@EmployeeID', 'INT');
        RETURN;
    END

    How are XML parameters supported in .NET? ADO.NET provides full support through the SqlDbType.Xml type. Passing XML as a parameter to the stored procedure from C# looks something like this:

    //string EmployeeXml = "<Root><E EmployeeID=\"1\" /><E EmployeeID=\"3\" /><E EmployeeID=\"5\" /><E EmployeeID=\"7\" /><E EmployeeID=\"11\" /></Root>";

    // Create the data table inside a DataSet named "Root" so that WriteXml
    // produces the <Root><E ... /></Root> shape the stored procedure expects
    DataSet ds = new DataSet("Root");
    DataTable dt = ds.Tables.Add("E");
    // Create the table schema; map the column to an XML attribute
    DataColumn dc = dt.Columns.Add("EmployeeID", typeof(string));
    dc.ColumnMapping = MappingType.Attribute;
    // Add a few records to the data table.
    for (int i = 1; i <= 10; i++)
    {
        // create a new row
        DataRow dr = dt.NewRow();
        // populate the fields
        dr[0] = i.ToString();
        // add the row to the table
        dt.Rows.Add(dr);
    }
    ...
    System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand("uspEmployeeList", sqlConn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@EmployeeList", System.Data.SqlDbType.Xml);
    //cmd.Parameters["@EmployeeList"].Value = EmployeeXml;
    ...
    // Create a temporary MemoryStream to hold the output of the WriteXml method
    using (MemoryStream memoryStream = new MemoryStream())
    {
        ds.WriteXml(memoryStream);
        UTF8Encoding encoding = new UTF8Encoding();
        cmd.Parameters["@EmployeeList"].Value = encoding.GetString(memoryStream.ToArray());
    }

    Now that’s cool. This is much better than passing in a comma-separated list and using a dynamic query in our procedure or function.