Thursday, 13 November 2014

Umbraco Mapper with Nested Archetypes

Following a previous blog post I got a comment asking about mapping nested Archetypes to a view model using Umbraco Mapper. As I noted in response, it's not that straightforward - the flexibility of the Archetype package means you really have to build something fairly custom for the particular type you are mapping to. Rather than squeeze too much into a comment though, I thought the topic worth a post in itself.

As before, looking for a simple example, I've modelled a football match but I've changed it a little to use a nested Archetype. There's a top-level type that contains a single field set to which a text field and two nested score types can be added. The score type has fields for the team and the score.

From the content editor's perspective it looks like this:

I'm looking to map this to a view model that looks like this:

    public class FootballMatch
    {
        public FootballMatch()
        {
            Scores = new List<FootballMatchScore>();
        }

        public string Venue { get; set; }

        public IList<FootballMatchScore> Scores { get; set; }
    }

    public class FootballMatchScore
    {
        public string Team { get; set; }

        public int Score { get; set; }
    }

Which can then be used in the views like this:

    <h3>Today's Match at @Model.TodaysMatch.Venue</h3>

    @foreach (var score in Model.TodaysMatch.Scores)
    {
        <div>@score.Team @score.Score</div>
    }        

The trick as before is to register a custom mapping for the top-level type that will be mapped, like this:

    Mapper.AddCustomMapping(typeof(FootballMatch).FullName, ArchetypeMapper.MapFootballMatch);

And the implementation looks like this:

    public static object MapFootballMatch(IUmbracoMapper mapper, IPublishedContent contentToMapFrom, string propName, bool isRecursive)
    {
        // Get the strongly typed ArchetypeModel from the given property
        var archetype = contentToMapFrom.GetPropertyValue<ArchetypeModel>(propName, isRecursive);
        if (archetype != null)
        {
            // Pull out the first fieldset
            var fieldset = archetype.FirstOrDefault();
            if (fieldset != null)
            {
                // Instantiate an instance of type we are mapping to
                var result = new FootballMatch();

                // Convert the fieldset to a dictionary and map the top-level properties
                var fieldsetAsDictionary = fieldset.Properties.ToDictionary(
                     k => FirstCharToUpper(k.Alias),
                     v => v.Value);
                mapper.Map(fieldsetAsDictionary, result);

                // Given the known aliases for the nested type, loop through and map all the properties of each
                // of these too
                var aliases = new string[] { "homeTeam", "awayTeam" };
                foreach (var alias in aliases)
                {
                    var scoresModel = fieldset.GetValue<ArchetypeModel>(alias);

                    foreach (var innerFieldSet in scoresModel)
                    {
                        var innerFieldsetAsDictionary = innerFieldSet.Properties.ToDictionary(
                            k => FirstCharToUpper(k.Alias),
                            v => v.Value);
                        var score = new FootballMatchScore();
                        mapper.Map(innerFieldsetAsDictionary, score);
                        result.Scores.Add(score);
                    }
                }

                return result;
            }
        }

        return null;
    }
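One piece not shown above is the FirstCharToUpper helper used when building the dictionaries. Its job is just to convert the camelCased Archetype alias (e.g. homeTeam) to match the Pascal-cased view model property name (HomeTeam). A minimal sketch, defined alongside the mapping method, might be:

```csharp
// Upper-cases the first character of an alias so the dictionary keys
// line up with the view model property names for the mapper.
private static string FirstCharToUpper(string input)
{
    if (string.IsNullOrEmpty(input))
    {
        return input;
    }

    return char.ToUpperInvariant(input[0]) + input.Substring(1);
}
```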

As I say, it would be nice to come up with something more generic to solve this type of problem, and we have managed this to an extent on a recent project that used a number of Archetypes. The flexibility of Archetype does perhaps come at a cost here from a developer perspective. It does seem fairly unavoidable that you'll have to do something quite custom depending on what exactly you are mapping from and to.

Nonetheless once done and set up as a custom mapping within the mapper, any instance of your view model type will be automatically mapped from the given Archetype.

Sunday, 2 November 2014

Using Umbraco Mapper with the Umbraco Grid

Like most people in the Umbraco community I've been keen to download and have a play with Umbraco 7.2, released in beta last week. There are a lot of new features, but the shiniest new toy is without doubt the grid.

This allows a developer to set up a very flexible interface for the editor to manage their content in a layout that closely reflects what they'll see on the website.

When it comes to rendering out the content of the grid, it's very simple. In your view you can just make a call to CurrentPage.GetGridHtml("myGridAlias") and this will spit out the HTML wrapped with bootstrapped elements. You can pass in overrides to use a different template, and create your own.
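For illustration, the calls look like this in a view (the grid alias and template name are placeholders for your own):

```cshtml
@* Render the grid with the default template *@
@CurrentPage.GetGridHtml("myGridAlias")

@* Or with a custom template partial *@
@CurrentPage.GetGridHtml("myGridAlias", "my-custom-template")
```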

Nothing wrong with this at all but my preferred approach when rendering out Umbraco content is to create strongly typed view models and, via a hijacked route, map the Umbraco content to this model with the help of the Umbraco Mapper package. Once done, this provides a very clean view model for front-end developers to work with, avoiding the need to work with the more complex Umbraco APIs and dynamics in the views.

Clearly the output of the grid, even if strongly typed, is still going to be relatively "loose", in that we need to be able to handle all that we allow our editors to throw at it. Nonetheless, it seemed a useful exercise to see how Umbraco Mapper could be used to handle mapping from the grid data type - and I'm pleased to say it looks to be rather straightforward.

In order to have something to map to we need our own custom class. The following has been constructed to follow the structure of the JSON format in which the grid data is stored in the database:

namespace MyApp.Models
{
    using System.Collections.Generic;
    using System.Web.Mvc;
    using Newtonsoft.Json;

    public class GridContent
    {
        public IList<GridSection> Sections { get; set; }

        public class GridSection
        {
            public int Size { get; set; }

            public IList<GridRow> Rows { get; set; }

            public class GridRow
            {
                public string Name { get; set; }

                public IList<GridArea> Areas { get; set; }

                public class GridArea
                {
                    public IList<GridControl> Controls { get; set; }

                    public class GridControl
                    {
                        public object Value { get; set; }

                        public GridEditor Editor { get; set; }

                        public object TypedValue 
                        {
                            get
                            {
                                switch (Editor.Alias)
                                {
                                    case "headline":
                                    case "quote":
                                        return Value.ToString();
                                    case "rte":
                                        return new MvcHtmlString(Value.ToString());
                                    case "media":
                                        return JsonConvert.DeserializeObject<GridContentMediaValue>(Value.ToString());
                                    default:
                                        return string.Empty;
                                }
                            }
                        }

                        public class GridEditor
                        {
                            public string Alias { get; set; }
                        }
                    }
                }
            }
        }
    }
}

A key property to note is TypedValue. This is used to convert the raw value saved for the grid control element into a typed value appropriate for the content that is stored. This might be a simple string or something more complex like a media item. The implementation of this property would need to be extended to handle all the types we allow our editors to use.

For example here's a complex type representing a media item, again following the JSON structure of the saved information:

namespace MyApp.Models
{
    public class GridContentMediaValue
    {
        public int Id { get; set; }

        public string Image { get; set; }

        public string Thumbnail { get; set; }
    }
}

Then for a given page, I've created a view model that looks like this, following the structure of my document type that uses the grid data type.

namespace MyApp.Models
{
    public class GridPageViewModel
    {
        public string Heading { get; set; }

        public GridContent Grid { get; set; }
    }
}

Umbraco Mapper has a handy feature called custom mappings. These can be set up when the mapper is instantiated - usually in a single location like a base controller - to effectively say, "whenever you are given a given type on a view model to map to, use this function".

With that feature, I'm able to do this in my controller:

namespace MyApp.Controllers
{
    using System.Web.Mvc;
    using Newtonsoft.Json;
    using Umbraco.Core.Models;
    using Umbraco.Web;
    using Umbraco.Web.Models;
    using Umbraco.Web.Mvc;
    using MyApp.Models;
    using Zone.UmbracoMapper;

    public class GridPageController : RenderMvcController
    {
        public override ActionResult Index(RenderModel model)
        {
            var mapper = new UmbracoMapper();
            mapper.AddCustomMapping(typeof(GridContent).FullName, MapGridContent);

            var vm = new GridPageViewModel();
            mapper.Map(CurrentPage, vm);

            return View("Grid", vm);
        }

        public static object MapGridContent(IUmbracoMapper mapper, IPublishedContent contentToMapFrom, string propName, bool isRecursive)
        {
            var gridJson = contentToMapFrom.GetPropertyValue<string>(propName, isRecursive);
            return JsonConvert.DeserializeObject<GridContent>(gridJson);
        }
    }
}

Which finally allows me to render my grid content in a very clean, strongly typed way, like this:

@model MyApp.Models.GridPageViewModel

<div>
 <h1>@Model.Heading</h1>

 @foreach (var section in Model.Grid.Sections)
 {
  <div>
   @foreach (var row in section.Rows)
   {
    <div>
     @foreach (var area in row.Areas)
     {
      <div>
       @foreach (var control in area.Controls)
       {
        <div class="@control.Editor.Alias">
         @switch (control.Editor.Alias)
         {
          case "media":
           var media = control.TypedValue as GridContentMediaValue;
           <img src="@media.Image" />
           break;
          default:
           <span>@control.TypedValue</span>
           break;
         }                                        
        </div>
       }
      </div>   
     }
    </div>
   }
  </div>
 }
</div>

For me this looks to provide a nice balance, allowing the editor the flexibility that the grid editor provides whilst retaining the strongly typed view model and mark-up generation code in the view. I'd love to hear comments from anyone else playing with the grid editor and similarly considering best practices for using it.

Sunday, 21 September 2014

Unit Testing Entity Framework with Effort

Previous posts in this series

Introduction

In my previous post I discussed steps I've been taking with a recent application to move more logic and behaviour into a richer domain model. This has a number of advantages, a significant one being the ease of testing this behaviour. As this is coded as methods on a standalone model class, it's very easy to test due to its lack of dependencies. Nothing needs to be mocked or stubbed; an instance can just be instantiated in the test method, the appropriate methods called and the results asserted.

Given the CQRS-style approach I've taken elsewhere in the application, MVC controllers are fairly light and have less value for testing. The main part of the application where logic lies is in the individual query and command handlers. These have a tight dependency on Entity Framework though, so are significantly harder to test.

Approaches to Unit Testing Entity Framework

One track I expect a lot of developers have started down is to attempt to mock out the EF context, allowing you to return a known in-memory collection or instance instead of hitting the database in your tests. EF 6 provides some good support for this. This is certainly fast, but the downside is that querying with LINQ to Objects is not the same as querying with LINQ to Entities. There's a good degree of overlap, so there is still some value in this, but writing tests this way runs the risk of queries that pass in your tests but fail in the application.

Another approach is to simply use the database itself. Most people wouldn't class this as a unit test any more, but rather an integration test, as it suffers from two downsides. One is that it's going to be a lot slower than using an in-memory data store, and the second is that it's hard to ensure your data remains consistent. If one test changes the data, you have to make sure a fresh schema and data set is restored for further tests, which can take quite a bit of effort to maintain.

Introducing Effort

Effort is an open-source provider that generates an in-memory database from your EF model, and accurately represents (almost) all EF querying operations. Thus in each test you can instantiate a clean data store and run your tests against it. In my case this meant asserting that my queries return the correct results and view models, and commands persist the data as expected.

The big advantage here, therefore, is removing the brittleness of tests that comes from having to ensure a known base schema and data at the start of each test. It's also fairly fast - I say fairly, as I still find each test takes half a second or so, which isn't much but of course adds up if you have a lot.

One other small downside is the "almost" note above. There are some EF operations you can run - within SqlFunctions for example for date operations - that are SQL Server specific. Effort attempts to remain database agnostic, so can't handle the use of these functions. I get round that by passing a flag to the query handler indicating whether SQL-specific functions are available - if they are, as in the application itself, they are used; in the tests they are not. So there's a small difference in behaviour between my tests and the real-world application - but for my application at least this is pretty insignificant.
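As a sketch of what that looks like in a query handler (the entity and field names here are illustrative, not the actual application code):

```csharp
// Illustrative: use SQL Server specific functions in the real application,
// and a database-agnostic alternative when running under Effort.
public bool SqlSpecificFunctionsAvailable { get; set; }

private IQueryable<Response> WithinThirtyDays(IQueryable<Response> responses, DateTime cutOff)
{
    if (SqlSpecificFunctionsAvailable)
    {
        // Translates to SQL Server's DATEDIFF
        return responses.Where(x => SqlFunctions.DateDiff("day", x.CreatedOn, cutOff) <= 30);
    }

    // Canonical EF function that the in-memory provider can handle
    return responses.Where(x => DbFunctions.DiffDays(x.CreatedOn, cutOff) <= 30);
}
```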

Effort Example

There's good documentation on the Effort Codeplex site for its set-up and use, including seeding data from code or CSV files. I'll just note here some aspects I had to work around slightly differently - in particular as I am using ASP.Net Identity, which means the constructors available for "standard" EF aren't available and some modification is required. Many thanks to the author of the library for his help with this.

Firstly, here's an example of a test. This one is testing a query handler - confirming that given a particular query, the correct view model is populated and returned.

[TestMethod]
public void SurveyQuestionViewModelQueryHandler_WithValidQuery_ReturnsExpectedResults()
{
    // Arrange
    SetUpContextAndTestData();
    var responseId = CreateAndPersistResponse();

    var handler = new QuestionViewModelQueryHandler(Context);
    handler.SqlSpecificFunctionsAvailable = false;
    var worker = Context.OrganisationWorkers.Single(x => x.LastName == "Worker");
    var questionId = Context.Questions.Single(x => x.Code == "A6").Id;

    var query = new QuestionViewModelQuery {
        UserId = worker.Id,
        ResponseId = responseId,
        QuestionId = questionId,
    };

    // Act
    var result = handler.Retrieve(query).Result;

    // Assert
    Assert.AreEqual(questionId, result.QuestionId);
    Assert.AreEqual("Which of these are vegatables?", result.QuestionText);
    Assert.AreEqual("Section 1", result.SectionName);
    Assert.AreEqual(4, result.AnswerOptions.Count());
    
    Assert.AreEqual(0, result.SurveyProgressPercent);
    Assert.AreEqual(42, result.SectionProgressPercent);
}

The first step is to set up the in-memory context using the Effort library via the SetUpContextAndTestData() method, which I'll expand on in a moment. This creates the schema and a set of base data required for all tests. After that we call a helper method to instantiate a single survey response object (an entity in my application) that we need for this specific test.
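CreateAndPersistResponse isn't shown in the post, but in outline it just creates and saves the single entity this test needs and returns its Id. Something like this (the exact constructor and property names are specific to the application, so illustrative here):

```csharp
// Illustrative helper: persist one survey response for the test to work with
private int CreateAndPersistResponse()
{
    var worker = Context.OrganisationWorkers.Single(x => x.LastName == "Worker");
    var response = new Response(worker);
    Context.Responses.Add(response);
    Context.SaveChanges();
    return response.Id;
}
```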

The rest of the Arrange part of the test is involved with instantiating a handler object, setting the property on the handler that indicates whether SQL-specific functions can be called, and looking up some values from the in-memory database to create a query object.

The single line in the Act simply calls the Retrieve method on the handler and retrieves the result. In the Assert section we call several asserts, to ensure the resulting view model is populated as we are expecting.

Test Setup

Going back to the arrange steps, the following code is used to instantiate the in-memory database using the Effort library.

protected void SetUpContextAndTestData()
{
    InitContext();
    TestDataSeeder.SeedData(Context, true);
}

private void InitContext()
{
    var dataFolder = AppDomain.CurrentDomain.BaseDirectory;

    var connection = Effort.DbConnectionFactory.CreateTransient();
    var dummyContext = new ApplicationDbContext();
    var builder = new DbModelBuilder();

    var m = typeof(ApplicationDbContext).GetMethod("ConfigureModel", BindingFlags.NonPublic | BindingFlags.Instance);
    m.Invoke(dummyContext, new object[] { builder, false });

    var model = builder.Build(connection).Compile();

    Context = new ApplicationDbContext(connection, model, false);
    Context.Configuration.AutoDetectChangesEnabled = true;
}

The call to InitContext sets up the schema using a variation of the standard Effort instantiation to work with the constructors available in ASP.Net Identity. A dummy context is first created and then the Effort in-memory model is built from that. Please note that if you aren't using ASP.Net Identity you should follow the set-up code as detailed on the Codeplex site.

We then make a call to a method responsible for seeding the data. This actually leverages the existing code for seeding data following an EF migration. It passes a flag to indicate to the seeding method whether or not to apply any SQL Server-specific changes. For example I had some code to override conventions for various date fields to use the smalldatetime data type - as this is SQL Server-specific, Effort won't be able to work with it. But as this setting isn't relevant for tests, it can safely be ignored.
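In outline, the seeding method looks something like this (simplified, with illustrative entity names):

```csharp
// Illustrative: the seeder shared with EF migrations, with a flag so that
// SQL Server specific set-up is skipped when running against Effort.
public static void SeedData(ApplicationDbContext context, bool isInMemoryContext)
{
    if (!isInMemoryContext)
    {
        // SQL Server specific changes, e.g. switching date columns to smalldatetime
        ApplySqlServerSpecificChanges(context);
    }

    // Database-agnostic seed data, valid for both SQL Server and Effort
    context.QuestionTypes.AddOrUpdate(t => t.Id,
        new QuestionType { Id = 1, Name = "Multiple select" },
        new QuestionType { Id = 2, Name = "Single select" });

    context.SaveChanges();
}
```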

Conclusion

I'd certainly recommend developers looking to test EF methods take a look at Effort. Pun intended, in using it there's not much effort involved in getting the parts of your application that traditionally would be quite hard to test, under your unit test coverage.

Sunday, 14 September 2014

Rich Domain Models and Entity Framework

Previous posts in this series

Working with a Rich Domain Model and Entity Framework

As described in previous posts in this series, I'm documenting a few changes to the way I've recently been building out ASP.Net MVC applications, following various influencers from blogs and books. One area I was keen to look at was creating a rich domain model - inspired by the Eric Evans classic book and some posts by Julie Lerman and Jimmy Bogard.

Previously I would have worked more with what is known as an anaemic domain model - which isn't quite as bad as it sounds... but means that when creating entities for use in Entity Framework Code First, they would for the most part be made up of a simple set of properties with public getters and setters. With a rich domain model, we look to push more behaviour into these model classes and provide tighter control over their use by calling code from the rest of the application.

There are a number of means to do this - in this blog post I describe a number I've used.

Control via constructors

The first method is to remove the parameterless constructor that you might otherwise have, thus preventing calling code from simply newing up an instance of the class without necessarily setting some required properties. In fact you can't completely remove it - EF requires it - but you can make it private.

Once that's in place you can create one or more constructors that will enforce the calling code to provide whatever properties are deemed required and thus not create what might by the terms of the application be deemed an invalid object. Here's an example from a survey application that defines an entity for a question:


    private Question()
    {
        Options = new List<QuestionOption>();
    }

    public Question(string code, string text, int orderInSection,
        QuestionSection section, QuestionType type)
        : this()
    {
        Code = code;
        Text = text;
        OrderInSection = orderInSection;
        Section = section;
        Type = type;
    }

Control via properties

A second step to control more fully how your class is used is to make the property setters private. This prevents calling code from simply setting values for them; instead you provide methods where appropriate groups of properties can be set together and validation can be applied.

This example from the same application controls the setting of an allowable range for a numeric question's answer:


    public int? NumericalMin { get; private set; }

    public int? NumericalMax { get; private set; }

    public void SetNumericalRange(int min, int max)
    {
        if (min > max)
        {
            throw new ArgumentException("Max parameter must be greater than the min parameter.");
        }

        NumericalMin = min;
        NumericalMax = max;
    }

Logic in the model

The last method to discuss involves pushing more of the application's logic into the rich domain model, rather than having this in various services or other components of the application. There are some major advantages to this, particularly when compared to making additional database requests to handle particular behaviours. The code is arguably easier to write and maintain, and certainly easier to test, as you can simply create instances of the classes and test the behaviour methods.

You do need to take care though - in order to fulfil certain behaviours it might be necessary to instantiate your object from the database with a more fully populated object graph than you otherwise would. So it's important to bear in mind the database requests that your method will require.

This final example illustrates that balance and why it's certainly worth looking to move behaviour into model methods when you can. With the survey application, we have multiple choice questions that you can obtain a score from based on which options you select in your answer. I required a means of calculating the maximum score available on a question.

For a multiple select question with check-boxes, the maximum score would be the total associated with each of the options. For a single select though, where you can only select one via radio buttons, the maximum score would be the option that had the highest score associated with it.

The following code examples illustrate how this logic can be set up as a method of the Question class, which will be valid so long as the related Options for the question are also loaded. If that is the case though, the code is very easy to write, maintain and test.


    public int GetMaximumAvailableScore()
    {
        if (Type.Id == (int)QuestionTypeId.MultipleSelect)
        { 
            return Options.Sum(x => x.Score);
        }
        else if (Type.Id == (int)QuestionTypeId.SingleSelect)
        {
            return Options.Max(x => x.Score);
        }
        else
        {
            return 0;
        }
    }

    [TestMethod]
    public void Question_GetMaximumAvailableScoreForMultiSelect_ReturnsCorrectValue()
    {
        // Arrange
        var question = CreateMultiSelectQuestion();

        // Act
        var maxScore = question.GetMaximumAvailableScore();

        // Assert
        Assert.AreEqual(8, maxScore);
    }

Sunday, 31 August 2014

A CQRS Implementation with ASP.Net MVC and Entity Framework

Previous posts in this series

Considering a CQRS approach with ASP.Net MVC

Even a quick read of the various blog posts and other web-based information on using CQRS with .Net reveals it can mean a lot of things to different people. To my understanding, at scale, it's a recognition that your read and write operations are likely to require quite different approaches. Reads for example may be coming from denormalised or cached information, and writes might go to queues for subsequent processing. Which is all a bit removed from a more basic CRUD approach where we simply read and write to the database.

In truth though, I didn't need - at least yet - this type of performance improvement. Nonetheless, there remains value for me even in a smaller-scale application in handling query and command operations more distinctly. In practice this means that rather than having a service and/or repository layer that provides methods such as GetCustomers, GetCustomerById, SaveCustomer, DeleteCustomer etc. we instead have discrete classes that handle each of these query or command operations. It means more classes for sure, but each one is smaller, more focussed and adheres much more closely to the single responsibility principle.

Another advantage of this approach is that it better aligns the language of the application code with that of the logic we are looking to model - the ubiquitous language between the developers and the business. One example I had was having users sign up for a particular event. With the CRUD approach I would likely have created something like an EventUser class - containing references to the Event, the User and some supplementary information such as the date of sign up - and then have a method in my service layer such as SaveEventUser().

None of this is really language a business stakeholder would use - in contrast, the creation of a SignUpUserForEvent command aligns much more closely.
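The command itself then becomes a small, intention-revealing class - something like this (illustrative):

```csharp
// A command named in the language of the business, rather than a
// generic SaveEventUser service method.
public class SignUpUserForEventCommand : ICommand
{
    public int EventId { get; set; }

    public string UserId { get; set; }

    public DateTime SignUpDate { get; set; }
}
```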

Finally with this approach I hoped to avoid some issues I've found in the past with a more loose approach to dealing with the Entity Framework context. With this method, we would simply create the context within the query or command object, carry out the read or write operation, save changes and allow the context to go out of scope. With that tighter control there's less room for bugs or unexpected side-effects.

Scenarios

The method I've set up - closely following the example provided by Adam Tibi in his blog post on implementing a CQRS-based architecture with MVC - contains the following, inter-related components:

  • Each controller with actions that require reads (all of them) or writes (some of them) contains dependency-injected instances of QueryDispatcher and CommandDispatcher
  • The QueryDispatcher takes type parameters for a query result (or view model) that implements IQueryResult, and a query that implements IQuery. Based on that, using Ninject, the appropriate IQueryHandler is found and executed, passing the query and returning the result.
  • Within the handler, we instantiate the EF context and carry out the appropriate querying and mapping operations to construct the result.
  • Similarly the CommandDispatcher takes a type parameter for the command and the appropriate ICommandHandler is found and executed.
  • Within the handler, we instantiate the EF context and carry out the appropriate write operation.
  • Command handlers return a simple and standard CommandResult with a status flag, any error messaging and a data object used in cases such as where we need to know the Id of the entity just created.
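The CommandResult mentioned in that last point can be very simple - along these lines (illustrative):

```csharp
// A standard result type returned by every command handler
public class CommandResult
{
    public CommandResult()
    {
        ErrorMessages = new List<string>();
    }

    public bool Success { get; set; }

    public IList<string> ErrorMessages { get; set; }

    // Optional response data, e.g. the Id of a newly created entity
    public object Data { get; set; }
}
```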

Given that, we have the following three types of data flow through the application.

Reads

The following diagram illustrates the components involved in handling a read (or query) operation, most usually in the preparation of a view model for presentation of information in a view:

The data flow through these components is as follows:

  • Controller action method takes a parameter of type IQuery. This might be as simple as an Id for the record to display, or something more complex like a set of search parameters.
  • The controller action calls the query dispatcher, passing the query along with the type parameters for the query and the view model (which implements IQueryResult).
  • The appropriate IQueryHandler is called, which uses the EF context to get the data specified by the query and AutoMapper to map or project it into the view model.
  • The view model is passed to the strongly typed view.
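Put together, a typical read action ends up as little more than this (names illustrative):

```csharp
// Dispatch the query and pass the resulting view model to the view
public async Task<ActionResult> Detail(EventDetailQuery query)
{
    var vm = await _queryDispatcher.Dispatch<EventDetailQuery, EventDetailViewModel>(query);
    return View(vm);
}
```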
Writes

A similar pattern but involving different components handles a write (or command) operation, processing updates that come direct from a form post, an example being the sign-up of a user for an event which comes from a form posting the Id of the event:

Again, to further describe this process:

  • Controller action method takes a parameter of type ICommand that is constructed via MVC model binding from the form post.
  • Further parameters as appropriate are added to the command object (e.g. the Id of the logged in user).
  • The controller action calls the command dispatcher, passing the command along with the type parameter for the command.
  • The appropriate ICommandHandler is called, which uses the EF context to carry out some validation and make the appropriate model updates.
  • A standard CommandResult object is returned containing a success flag, any error messages and where useful, any response data (such as the Id of the newly created record).
Validated Writes

A complication arises when we come to consider validation. Normally with ASP.Net MVC we'll handle this with data annotations on our view model, which tie in nicely with the framework to provide both server and client side validation. I don't want to lose that facility by say moving the validation up to the command handler. But equally, it doesn't seem right to be passing the view model itself as the command object - the former being clearly part of the presentation layer, not something for data processing.

The way I handle this is to post the view model and handle validation as per usual MVC model binding and validation patterns. If validation passes, I map the view model to a command object, and then continue as per the "Writes" example above.


Once more to flesh this diagram out a bit.

  • The presentation of the form data on first load is handled as per "Reads" above.
  • The form post controller action method takes an instance of the view model that is constructed via MVC model binding from the form post.
  • Validation is carried out both client and server side using standard MVC techniques with ModelState.
  • If validation fails, the view model is repopulated with any supplementary information that hasn't been passed in the form post (e.g. the options in selection lists) and returned to the view.
  • If validation passes the view model is mapped (using AutoMapper) to a command object that implements ICommand.
  • From here on we proceed as per "Writes" above.
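In controller code that flow looks roughly like this (illustrative names, with AutoMapper handling the view model to command mapping):

```csharp
[HttpPost]
public async Task<ActionResult> SignUp(EventSignUpViewModel vm)
{
    if (!ModelState.IsValid)
    {
        // Repopulate supplementary data (e.g. selection list options) and redisplay
        PopulateLookups(vm);
        return View(vm);
    }

    // Map the validated view model to a command and dispatch it
    var command = Mapper.Map<SignUpUserForEventCommand>(vm);
    var result = await _commandDispatcher.Dispatch(command);
    if (!result.Success)
    {
        ModelState.AddModelError(string.Empty, result.ErrorMessages.First());
        PopulateLookups(vm);
        return View(vm);
    }

    return RedirectToAction("Confirmation");
}
```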

An end-to-end code example

This section presents the various code excerpts that implement this pattern, for a simple process that presents a form for editing, validates the submission and processes the update.

Set-up

The first step is to create the necessary components for the actual query and command handling - this includes the handler itself, but also various marker interfaces that will be used in the process of matching the appropriate handler from a given query or command.

Firstly for queries:


    // Marker interface to signify a query - all queries will implement this
    public interface IQuery
    {        
    }
 
    // Marker interface to signify a query result - all view models will implement this 
    public interface IQueryResult
    {
    } 
 
    // Interface for query handlers - has two type parameters for the query and the query result
    public interface IQueryHandler<TParameter, TResult> 
       where TResult : IQueryResult
       where TParameter : IQuery
    {
        Task<TResult> Retrieve(TParameter query);
    } 

    // Interface for the query dispatcher itself
    public interface IQueryDispatcher
    {   
        Task<TResult> Dispatch<TParameter, TResult>(TParameter query)
            where TParameter : IQuery
            where TResult : IQueryResult;
    }
 
    // Implementation of the query dispatcher - selects and executes the appropriate query handler
    public class QueryDispatcher : IQueryDispatcher
    {
        private readonly IKernel _kernel;

        public QueryDispatcher(IKernel kernel)
        {
            if (kernel == null)
            {
                throw new ArgumentNullException("kernel");
            }

            _kernel = kernel;
        }

        public async Task<TResult> Dispatch<TParameter, TResult>(TParameter query)
            where TParameter : IQuery
            where TResult : IQueryResult
        {
            // Find the appropriate handler to call from those registered with Ninject based on the type parameters
            var handler = _kernel.Get<IQueryHandler<TParameter, TResult>>();
            return await handler.Retrieve(query);
        }
    } 

And similarly for commands:


    // Marker interface to signify a command - all commands will implement this
    public interface ICommand
    {        
    }
 
    // Interface for command handlers - has a type parameter for the command
    public interface ICommandHandler<in TParameter> where TParameter : ICommand
    {
        Task<CommandResult> Execute(TParameter command);
    } 
 
    // Simple result class for command handlers to return 
    public class CommandResult
    {
        public bool Success { get; set; }

        public string Message { get; set; }

        public object Data { get; set; }
    } 
 
    // Interface for the command dispatcher itself
    public interface ICommandDispatcher
    {
        Task<CommandResult> Dispatch<TParameter>(TParameter command) where TParameter : ICommand;
    } 
 
    // Implementation of the command dispatcher - selects and executes the appropriate command handler
    public class CommandDispatcher : ICommandDispatcher
    {
        private readonly IKernel _kernel;

        public CommandDispatcher(IKernel kernel)
        {
            if (kernel == null)
            {
                throw new ArgumentNullException("kernel");
            }

            _kernel = kernel;
        }

        public async Task<CommandResult> Dispatch<TParameter>(TParameter command) where TParameter : ICommand
        {
            // Find the appropriate handler to call from those registered with Ninject based on the type parameter
            var handler = _kernel.Get<ICommandHandler<TParameter>>();
            return await handler.Execute(command);
        }
    }

We then need to register these types with our IoC container, in this case Ninject. This will include the dispatcher classes, and setting up the registration of the query and command handlers too.


 kernel.Bind<IQueryDispatcher>().To<QueryDispatcher>();
 kernel.Bind<ICommandDispatcher>().To<CommandDispatcher>();

 kernel.Bind(x => x
  .FromAssembliesMatching("MyApp.dll")
  .SelectAllClasses().InheritedFrom(typeof(IQueryHandler<,>))
  .BindAllInterfaces());

 kernel.Bind(x => x
  .FromAssembliesMatching("MyApp.dll")
  .SelectAllClasses().InheritedFrom(typeof(ICommandHandler<>))
  .BindAllInterfaces());

Form display

The first part of the example itself requires the display of an item for editing on a form. This involves handling the request that comes into the controller action, determining the appropriate handler, executing it to populate a view model and passing this to a strongly typed view.


    // Controller action method taking a query that consists of just the Id for the item to be edited
    public async Task<ViewResult> Edit(EditViewModelQuery query)
    {
        // Populate the view model by calling the appropriate handler
        var vm = await QueryDispatcher.Dispatch<EditViewModelQuery, EditViewModel>(query);
        if (vm.Id == 0)
        {
            throw new HttpException(404, "Page not found");
        }

        return View(vm);
    }
    
    // Query handler Retrieve method implementation
    public async Task<EditViewModel> Retrieve(EditViewModelQuery query)
    {
        // Instantiate the context (if not passed to the handler, which will only be the case when unit testing)
        Context = Context ?? new ApplicationDbContext();

        // Create the view model query result
        var result = new EditViewModel();
        
        // Pull the required item from the context
        var activity = await Context.Activities
            .SingleOrDefaultAsync(x => x.Id == query.Id);
            
        // Map from the domain model object to the view model
        Mapper.Map(activity, result);

        return result;
    }
    
    @using (Html.BeginForm("Edit", "Activities", FormMethod.Post))
    {
        @Html.AntiForgeryToken()

        @Html.LabelFor(m => m.Name)
        @Html.EditorFor(m => m.Name)
        @Html.ValidationMessageFor(m => m.Name)

        <input type="submit" value="Save" />                    

        @Html.HiddenFor(model => model.Id)    
    } 

Processing of form submission

Once the form is posted, it'll be picked up by another controller action expecting a POST input of an instance of the view model. This will be validated, and if it passes, a command object created and mapped. This command will be passed to the appropriate handler for execution and the result returned for subsequent processing.


    // Controller action method handling the form post
    [HttpPost]
    [ValidateAntiForgeryToken]
    public async Task<ActionResult> Edit(EditViewModel vm)
    {
        // First check if view model is valid
        if (ModelState.IsValid)
        {
            // If so create the command object
            var command = new AddOrEditCommand();
            
            // Use Automapper to map from the view model to the command
            Mapper.Map(vm, command);
            
            // Call the appropriate command handler
            var commandResult = await CommandDispatcher.Dispatch(command);
            if (commandResult.Success)
            {
                // If the command is successful, we can retrieve any data returned
                var newId = (int)commandResult.Data;

                // Do stuff with the generated Id of the entity if we need to...
                    
                return RedirectToAction("Index");
            }        
            else
            {
                // Command not successful, so handle error as appropriate
            }
        }

        // View model not valid, so return to view
        return View("Edit", vm);
    }
    
    // Command handler Execute method implementation    
    public async Task<CommandResult> Execute(AddOrEditCommand command)
    {
        ValidateArguments(command);
        
        // Instantiate the context (if not passed to the handler, which will only be the case when unit testing)
        Context = Context ?? new ApplicationDbContext();

        var result = new CommandResult();
        var commandValid = false;
        Activity activity;
        
        // If a new record, create a new entity, map the command to the domain object and add it to the context
        if (command.Id == 0) 
        {
            activity = new Activity();
            Mapper.Map(command, activity);
            Context.Activities.Add(activity);
            commandValid = true;
        }
        else 
        {
            // If an updated record, retrieve it from the context and apply the updates
            activity = await Context.Activities.SingleOrDefaultAsync(x => x.Id == command.Id);
            if (activity != null) {
                Mapper.Map(command, activity);
                commandValid = true;
            }
            else {
                result.Message = "Activity not found.";
            }
        }
        
        if (commandValid) 
        {
            // Commit data changes
            await Context.SaveChangesAsync();
            
            // Prepare result
            result.Success = true;
            result.Data = activity.Id;
        }
        
        return result;
    }    

CQRS, Rich Domains and Unit Testing with ASP.Net MVC and Entity Framework

Having spent a bit of time working on CMS (Umbraco and EPiServer) based projects, I recently had to come back to building a custom ASP.Net application using the MVC framework. Before commencing I took a bit of time to read around, and re-think some of the ways I've built applications like this previously, in terms of how the solution is architected.

Previously I've put together a fairly strict three layered architecture - a data access layer using Entity Framework, wrapped in a unit of work/repository pattern; a business logic or service layer; and the presentation web application. This worked quite nicely in truth, but there's always value in re-looking at such things and seeing how they could be improved. And where's the fun in doing things the same way each time anyway!

In particular I was keen to investigate:

  • A CQRS style architecture, where we work with distinct query and command objects, in a more "slices over layers" fashion. This breaks the application down logically more vertically than horizontally, with features being grouped and having their own distinct classes.
  • Better following the single responsibility principle, with smaller, more focussed classes.
  • Removing the probably unnecessary wrapping of the Entity Framework context, which after all, effectively provides its own unit of work pattern.
  • In doing so, looking to use the Entity Framework context in a more discrete fashion, avoiding potential bugs and complexities that I've run into before. These can manifest themselves if you are keeping the context alive across different operations, or attempting to handle entities created directly from form posts rather than being pulled from the context.
  • Working with a richer domain model that would play nicely both with Entity Framework and the presentation layer.
  • Increased focus on unit testing.
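As a sketch of the "more discrete" context usage described above, each handler can create its Entity Framework context for the single operation it performs, rather than sharing one across operations (illustrative only; Activity and ApplicationDbContext are the types used in the end-to-end example elsewhere in this series):

```csharp
// Sketch: the context is created, used and disposed within one operation,
// so no entities remain tracked between requests.
public class GetActivityQueryHandler
{
    public async Task<Activity> Retrieve(int id)
    {
        using (var context = new ApplicationDbContext())
        {
            return await context.Activities
                .SingleOrDefaultAsync(x => x.Id == id);
        }
    }
}
```

The handlers in the full example instead assign the context to a property so a fake can be injected for unit testing, but the principle - one context per operation - is the same.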

Credits and further reading

In working up this new architecture, nothing was invented from scratch... rather I pulled together a number of different influences from various books and blog posts that led to the patterns described in this series of posts.

Posts in this series

There will be three further posts in this series, where I'll discuss in more detail certain aspects of the application patterns I'm now using.

Friday, 8 August 2014

Using Umbraco Mapper with Archetype (Part 2)

In the previous post I discussed a method of mapping property values from the Archetype package to a view model using Umbraco Mapper. There was still a remaining issue of how to handle picked content.

With version 1.4.7 of Umbraco Mapper (just updated on NuGet) this is now possible with just some changes from the method noted in the previous post.

Firstly, you will need to have the Umbraco Core Property Value Converters package installed. If you are using Archetype you'll likely be using this already, but it's needed here to convert the values returned from Archetype node picker properties to either IPublishedContent (for single items from a content picker) or IEnumerable<IPublishedContent> (for multiple items from a multi-node tree picker).

Then extending our previous example, let's say we've added fields to our "Football Match" archetype to have a content picker for the match report, and a multi-node tree picker for match reports from previous games.

Our view model then looks like this:

    public class FootballMatchesPageViewModel
    {
        public string Name { get; set; }

        public IEnumerable<FootballMatch> TodaysMatches { get; set; }
    }

    public class FootballMatch
    {
        public FootballMatch()
        {
            MatchReport = new MatchReportTeaser();
            ReportsFromPreviousGames = new List<MatchReportTeaser>();
        }

        public string HomeTeam { get; set; }

        public int HomeTeamScore { get; set; }

        public string AwayTeam { get; set; }

        public int AwayTeamScore { get; set; }

        public MatchReportTeaser MatchReport { get; set; }

        public IList<MatchReportTeaser> ReportsFromPreviousGames { get; set; }
    }

    public class MatchReportTeaser
    {
        public string Name { get; set; }

        public string Url { get; set; }
    }

Note it's important to have the constructor set the complex type and collection to an instantiated object, as the mapper won't handle mapping to null values.

The custom mapping we set up in the previous example needs some minor amends too; it's now:

    public class ArchetypeMapper
    {
        public static object MapFootballMatch(IUmbracoMapper mapper, IPublishedContent contentToMapFrom, string propName, bool isRecursive) 
        {
            var result = new List<FootballMatch>();

            var archetypeModel = contentToMapFrom.GetPropertyValue<ArchetypeModel>(propName, isRecursive, null);
            if (archetypeModel != null)
            {
                var archetypeAsDictionary = archetypeModel
                    .Select(item => item.Properties.ToDictionary(m => m.Alias, m => GetTypedValue(m), StringComparer.InvariantCultureIgnoreCase))
                    .ToList();
                mapper.MapCollection(archetypeAsDictionary, result);
            }

            return result;
        }

        private static object GetTypedValue(ArchetypePropertyModel archetypeProperty)
        {
            switch (archetypeProperty.PropertyEditorAlias)
            {
                case "Umbraco.ContentPickerAlias":
                    return archetypeProperty.GetValue<IPublishedContent>();
                case "Umbraco.MultiNodeTreePicker":
                    return archetypeProperty.GetValue<IEnumerable<IPublishedContent>>();
                default:
                    return archetypeProperty.Value;
            }            
        }
    }

Again an important point to note here is the StringComparer.InvariantCultureIgnoreCase argument to the ToDictionary call - this makes key look-ups case insensitive, which is handy when dealing with property aliases that are usually camel cased in Umbraco but Pascal cased in C# classes.
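The effect of the comparer can be seen with a plain dictionary (a standalone illustration, not the mapper's own code):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Keys stored with Umbraco-style camel cased aliases...
        var properties = new Dictionary<string, object>(StringComparer.InvariantCultureIgnoreCase)
        {
            { "homeTeam", "Spurs" }
        };

        // ...can be looked up with the Pascal cased C# property name.
        Console.WriteLine(properties["HomeTeam"]); // prints "Spurs"
    }
}
```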

With the latest release of Umbraco Mapper, there's a small update to the routine that maps from a dictionary as we are using here. It checks on the types, and if they are found to be IPublishedContent or IEnumerable<IPublishedContent>, the mapping routine for those types is run for that property - thus allowing them to be mapped in exactly the same way as if we were just mapping the content directly.