Sunday, 14 September 2014

Rich Domain Models and Entity Framework

Previous posts in this series

Working with a Rich Domain Model and Entity Framework

As described in previous posts in this series, I'm documenting a few changes to the way I've recently been building out ASP.Net MVC applications, following various influencers from blogs and books. One area I was keen to look at was creating a rich domain model - inspired by the Eric Evans classic book and some posts by Julie Lerman and Jimmy Bogard.

Previously I would have worked more with what is known as an anaemic domain model - which isn't quite as bad as it sounds... but means that when creating entities for use in Entity Framework Code First, they would for the most part consist of a simple set of properties with public getters and setters. With a rich domain model, we look to push more behaviour into these model classes and provide tighter means of control over their use by calling code from the rest of the application.

There are a number of ways to do this - in this blog post I describe a few I've used.

Control via constructors

A first method is to remove the parameterless constructor you might otherwise have, thus preventing calling code from simply newing up an instance of the class without necessarily setting required properties. In fact you can't remove it completely - EF requires it - but you can make it private.

Once that's in place you can create one or more constructors that force calling code to provide whatever properties are deemed required, and thus prevent the creation of an object that, by the terms of the application, would be considered invalid. Here's an example from a survey application that defines an entity for a question:


    private Question()
    {
        Options = new List<QuestionOption>();
    }

    public Question(string code, string text, int orderInSection,
        QuestionSection section, QuestionType type)
        : this()
    {
        Code = code;
        Text = text;
        OrderInSection = orderInSection;
        Section = section;
        Type = type;
    }

Control via properties

A second step to control more fully how your class is used is to make the property setters private. This prevents calling code from simply setting values on them; instead you provide methods where appropriate groups of properties can be set together and validation can be applied.

This example from the same application controls the setting of an allowable range for a numeric question's answer:


    public int? NumericalMin { get; private set; }

    public int? NumericalMax { get; private set; }

    public void SetNumericalRange(int min, int max)
    {
        if (min > max)
        {
            throw new ArgumentException("The max parameter must be greater than or equal to the min parameter.");
        }

        NumericalMin = min;
        NumericalMax = max;
    }
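
Since the tests later in this post use MSTest, here's a sketch of how this guard could be verified - note that CreateQuestion() is a hypothetical helper that builds a valid Question instance, not something from the application:

```csharp
    [TestMethod]
    public void Question_SetNumericalRangeWithMinGreaterThanMax_Throws()
    {
        // Arrange - CreateQuestion() is a hypothetical helper building a valid Question
        var question = CreateQuestion();

        // Act & Assert - the range method should reject min > max
        try
        {
            question.SetNumericalRange(10, 5);
            Assert.Fail("Expected ArgumentException was not thrown.");
        }
        catch (ArgumentException)
        {
            // Expected - the invalid range was rejected
        }
    }
```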

Logic in the model

The last method to discuss involves pushing more of the application's logic into the rich domain model, rather than having it in various services or other components of the application. There are some major advantages to this, particularly when compared to making additional database requests to handle particular behaviours. The code is arguably easier to write and maintain, and certainly easier to test, as you can simply create instances of the classes and test the behaviour methods.

You do need to take care though - in order to fulfil certain behaviours it might be necessary to instantiate your object from the database with a more fully populated object graph than you otherwise normally would. So it's important to bear in mind the database requests that your method will require.

This final example illustrates that balance and why it's certainly worth looking to move behaviour into model methods when you can. With the survey application, we have multiple choice questions that you can obtain a score from based on which options you select in your answer. I required a means of calculating the maximum score available on a question.

For a multiple select question with check-boxes, the maximum score would be the total associated with each of the options. For a single select though, where you can only select one via radio buttons, the maximum score would be the option that had the highest score associated with it.

The following code examples illustrate how this logic can be set up as a method of the Question class, which will be valid so long as the related Options for the question are also loaded. If that is the case though, the code is very easy to write, maintain and test.


    public int GetMaximumAvailableScore()
    {
        if (Type.Id == (int)QuestionTypeId.MultipleSelect)
        { 
            return Options.Sum(x => x.Score);
        }
        else if (Type.Id == (int)QuestionTypeId.SingleSelect)
        {
            return Options.Max(x => x.Score);
        }
        else
        {
            return 0;
        }
    }

    [TestMethod]
    public void Question_GetMaximumAvailableScoreForMultiSelect_ReturnsCorrectValue()
    {
        // Arrange
        var question = CreateMultiSelectQuestion();

        // Act
        var maxScore = question.GetMaximumAvailableScore();

        // Assert
        Assert.AreEqual(8, maxScore);
    }

Sunday, 31 August 2014

A CQRS Implementation with ASP.Net MVC and Entity Framework

Previous posts in this series

Considering a CQRS approach with ASP.Net MVC

Even a quick read of the various blog posts and other web-based information on using CQRS with .Net reveals it can mean a lot of things to different people. To my understanding, at scale, it's a recognition that your read and write operations are likely to require quite different approaches. Reads for example may come from denormalised or cached information, and writes might go to queues for subsequent processing. Which is all a bit removed from a more basic CRUD approach where we simply read and write to the database.

In truth though, I didn't need - at least yet - these types of performance improvements. Nonetheless, there remains value for me even in a smaller-scale application in handling query and command operations more distinctly. Which really means in practice that rather than having a service and/or repository layer that provides methods such as GetCustomers, GetCustomerById, SaveCustomer, DeleteCustomer etc. we instead have discrete classes that handle each of these query or command operations. It means more classes for sure, but each one being smaller, more focussed and adhering much more closely to the single responsibility principle.

Another advantage I find with this approach is that it better aligns the language of the application code with that of the logic we are looking to model - the ubiquitous language between the developers and the business. One example I had was having users sign up for a particular event. With the CRUD approach I would likely have created something like an EventUser class - containing references to the Event, the User and some supplementary information such as the date of sign-up - and then have a method in my service layer for SaveEventUser().

None of this is really language a business stakeholder would use - in contrast, the creation of a SignUpUserForEvent command aligns much more closely.

Finally with this approach I hoped to avoid some issues I've found in the past with a more loose approach to dealing with the Entity Framework context. With this method, we would simply create the context within the query or command object, carry out the read or write operation, save changes and allow the context to go out of scope. With that tighter control there's less room for bugs or unexpected side-effects.
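
As a sketch of what that tight scoping looks like, here's an illustrative handler for the SignUpUserForEvent command mentioned above - this isn't code from the application, and the EventUser entity, the command's properties and the CommandResult shape are all assumptions for the example:

```csharp
    // Hypothetical command handler: the context is created, used and disposed
    // entirely within the single write operation
    public async Task<CommandResult> Execute(SignUpUserForEventCommand command)
    {
        using (var context = new ApplicationDbContext())
        {
            // Record the sign-up with its supplementary information
            context.EventUsers.Add(new EventUser
            {
                EventId = command.EventId,
                UserId = command.UserId,
                SignUpDate = DateTime.UtcNow,
            });

            await context.SaveChangesAsync();
        }

        // Context has gone out of scope - no chance of it leaking across operations
        return new CommandResult { Success = true };
    }
```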

Scenarios

The method I've set up - closely following the example provided by Adam Tibi in his blog post on implementing a CQRS-based architecture with MVC - contains the following, inter-related components:

  • Each controller with actions that require reads (all of them) or writes (some of them) contains dependency-injected instances of QueryDispatcher and CommandDispatcher.
  • The QueryDispatcher takes type parameters for a query result (or view model) that implements IQueryResult, and a query that implements IQuery. Based on that, using Ninject, the appropriate IQueryHandler is found and executed, passing the query and returning the result.
  • Within the handler, we instantiate the EF context and carry out the appropriate querying and mapping operations to construct the result.
  • Similarly the CommandDispatcher takes a type parameter for the command and the appropriate ICommandHandler is found and executed.
  • Within the handler, we instantiate the EF context and carry out the appropriate write operation.
  • Command handlers return a simple, standard CommandResult with a status flag, any error messages and a data object used in cases where we need to know, say, the Id of the entity just created.

Given that, we have the following three types of data flow through the application.

Reads

The following diagram illustrates the components involved in handling a read (or query) operation, most usually in the preparation of a view model for presentation of information in a view:

The data flow through these components is as follows:

  • Controller action method takes a parameter of type IQuery. This might be as simple as an Id for the record to display, or something more complex like a set of search parameters.
  • The controller action calls the query dispatcher, passing the query along with the type parameters for the query and the view model (which implements IQueryResult).
  • The appropriate IQueryHandler is called, which uses the EF context to get the data specified by the query and AutoMapper to map or project it into the view model.
  • The view model is passed to the strongly typed view.
Writes

A similar pattern but involving different components handles a write (or command) operation, processing updates that come direct from a form post, an example being the sign-up of a user for an event which comes from a form posting the Id of the event:

Again, to further describe this process:

  • Controller action method takes a parameter of type ICommand that is constructed via MVC model binding from the form post.
  • Further parameters as appropriate are added to the command object (e.g. the Id of the logged in user).
  • The controller action calls the command dispatcher, passing the command along with the type parameter for the command.
  • The appropriate ICommandHandler is called, which uses the EF context to carry out some validation and make the appropriate model updates.
  • A standard CommandResult object is returned containing a success flag, any error messages and where useful, any response data (such as the Id of the newly created record).
Validated Writes

A complication arises when we come to consider validation. Normally with ASP.Net MVC we'll handle this with data annotations on our view model, which tie in nicely with the framework to provide both server and client side validation. I don't want to lose that facility by say moving the validation up to the command handler. But equally, it doesn't seem right to be passing the view model itself as the command object - the former being clearly part of the presentation layer, not something for data processing.

The way I handle this is to post the view model and handle validation as per usual MVC model binding and validation patterns. If validation passes, I map the view model to a command object, and then continue as per the "Writes" example above.
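
That view model to command mapping is just a standard AutoMapper configuration - as a sketch, using the v3-era static API and the EditViewModel and AddOrEditCommand types that appear in the end-to-end example later in this post:

```csharp
    // One-off configuration, e.g. in an AutoMapper profile or at application start-up,
    // so the validated view model can be projected onto the command object
    Mapper.CreateMap<EditViewModel, AddOrEditCommand>();
```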


Once more, to flesh this diagram out a bit:

  • The presentation of the form data on first load is handled as per "Reads" above.
  • The form post controller action method takes an instance of the view model that is constructed via MVC model binding from the form post.
  • Validation is carried out both client and server side using standard MVC techniques with ModelState.
  • If validation fails, the view model is repopulated with any supplementary information that hasn't been passed in the form post (e.g. the options in selection lists) and returned to the view.
  • If validation passes the view model is mapped (using AutoMapper) to a command object that implements ICommand.
  • From here on we proceed as per "Writes" above.

An end-to-end code example

This example presents the various code excerpts that implement this pattern, for a simple process to present a form for editing, validate the submission and process the update.

Set-up

The first step is to create the necessary components for the actual query and command handling - this includes the handler itself, but also various marker interfaces that will be used in the process of matching the appropriate handler from a given query or command.

Firstly for queries:


    // Marker interface to signify a query - all queries will implement this
    public interface IQuery
    {        
    }
 
    // Marker interface to signify a query result - all view models will implement this 
    public interface IQueryResult
    {
    } 
 
    // Interface for query handlers - has two type parameters for the query and the query result
    public interface IQueryHandler<TParameter, TResult> 
       where TResult : IQueryResult
       where TParameter : IQuery
    {
        Task<TResult> Retrieve(TParameter query);
    } 

    // Interface for the query dispatcher itself
    public interface IQueryDispatcher
    {   
        Task<TResult> Dispatch<TParameter, TResult>(TParameter query)
            where TParameter : IQuery
            where TResult : IQueryResult;
    }
 
    // Implementation of the query dispatcher - selects and executes the appropriate query
    public class QueryDispatcher : IQueryDispatcher
    {
        private readonly IKernel _kernel;

        public QueryDispatcher(IKernel kernel)
        {
            if (kernel == null)
            {
                throw new ArgumentNullException("kernel");
            }

            _kernel = kernel;
        }

        public async Task<TResult> Dispatch<TParameter, TResult>(TParameter query)
            where TParameter : IQuery
            where TResult : IQueryResult
        {
            // Find the appropriate handler to call from those registered with Ninject based on the type parameters
            var handler = _kernel.Get<IQueryHandler<TParameter, TResult>>();
            return await handler.Retrieve(query);
        }
    } 

And similarly for commands:


    // Marker interface to signify a command - all commands will implement this
    public interface ICommand
    {        
    }
 
    // Interface for command handlers - has a type parameter for the command
    public interface ICommandHandler<in TParameter> where TParameter : ICommand
    {
        Task<CommandResult> Execute(TParameter command);
    } 
 
    // Simple result class for command handlers to return 
    public class CommandResult
    {
        public bool Success { get; set; }

        public string Message { get; set; }

        public object Data { get; set; }
    } 
 
    // Interface for the command dispatcher itself
    public interface ICommandDispatcher
    {
        Task<CommandResult> Dispatch<TParameter>(TParameter command) where TParameter : ICommand;
    } 
 
    // Implementation of the command dispatcher - selects and executes the appropriate command 
    public class CommandDispatcher : ICommandDispatcher
    {
        private readonly IKernel _kernel;

        public CommandDispatcher(IKernel kernel)
        {
            if (kernel == null)
            {
                throw new ArgumentNullException("kernel");
            }

            _kernel = kernel;
        }

        public async Task<CommandResult> Dispatch<TParameter>(TParameter command) where TParameter : ICommand
        {
            // Find the appropriate handler to call from those registered with Ninject based on the type parameters  
            var handler = _kernel.Get<ICommandHandler<TParameter>>();
            return await handler.Execute(command);
        }
    }

We then need to register these types with our IoC container, in this case Ninject. This will include the dispatcher classes, and setting up the registration of the query and command handlers too.


 kernel.Bind<IQueryDispatcher>().To<QueryDispatcher>();
 kernel.Bind<ICommandDispatcher>().To<CommandDispatcher>();

 kernel.Bind(x => x
  .FromAssembliesMatching("MyApp.dll")
  .SelectAllClasses().InheritedFrom(typeof(IQueryHandler<,>))
  .BindAllInterfaces());

 kernel.Bind(x => x
  .FromAssembliesMatching("MyApp.dll")
  .SelectAllClasses().InheritedFrom(typeof(ICommandHandler<>))
  .BindAllInterfaces());

Form display

The first part of the example itself requires the display of an item for editing on a form. This involves handling the request that comes into the controller action, determining the appropriate handler, executing it to populate a view model and passing this to a strongly typed view.


    // Controller action method taking a query that consists of just the Id for the item to be edited
    public async Task<ViewResult> Edit(EditViewModelQuery query)
    {
        // Populate the view model by calling the appropriate handler
        var vm = await QueryDispatcher.Dispatch<EditViewModelQuery, EditViewModel>(query);
        if (vm.Id == 0)
        {
            throw new HttpException(404, "Page not found");
        }

        return View(vm);
    }
    
    // Query handler Retrieve method implementation
    public async Task<EditViewModel> Retrieve(EditViewModelQuery query)
    {
        // Instantiate the context (if not passed to the handler, which will only be the case when unit testing)
        Context = Context ?? new ApplicationDbContext();

        // Create the view model query result
        var result = new EditViewModel();
        
        // Pull the required item from the context
        var activity = await Context.Activities
            .SingleOrDefaultAsync(x => x.Id == query.Id);
            
        // Map from the domain model object to the view model
        Mapper.Map(activity, result);

        return result;
    }
    
    @using (Html.BeginForm("Edit", "Activities", FormMethod.Post))
    {
        @Html.LabelFor(m => m.Name)
        @Html.EditorFor(m => m.Name)
        @Html.ValidationMessageFor(m => m.Name)

        <input type="submit" value="Save" />                    

        @Html.HiddenFor(model => model.Id)    
    } 
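
The EditViewModelQuery referenced above isn't shown in the excerpts; as it's just a simple parameter object, a minimal version might look like this:

```csharp
    // A query is a plain parameter object implementing the IQuery marker interface -
    // here carrying only the Id of the record to edit, bound from the route by MVC
    public class EditViewModelQuery : IQuery
    {
        public int Id { get; set; }
    }
```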

Processing of form submission

Once the form is posted, it'll be picked up by another controller action expecting a POST input of an instance of the view model. This will be validated, and if it passes, a command object created and mapped. This command will be passed to the appropriate handler for execution and the result returned for subsequent processing.


    // Controller action method handling the form post
    [HttpPost]
    [ValidateAntiForgeryToken]
    public async Task<ActionResult> Edit(EditViewModel vm)
    {
        // First check if view model is valid
        if (ModelState.IsValid)
        {
            // If so create the command object
            var command = new AddOrEditCommand();
            
            // Use Automapper to map from the view model to the command
            Mapper.Map(vm, command);
            
            // Call the appropriate command handler
            var commandResult = await CommandDispatcher.Dispatch(command);
            if (commandResult.Success)
            {
                // If the command is successful, we can retrieve any data returned
                var newId = (int)commandResult.Data;

                // Do stuff with the generated Id of the entity if we need to...
                    
                return RedirectToAction("Index");
            }        
            else
            {
                // Command not successful, so handle error as appropriate
            }
        }

        // View model not valid, so return to view
        return View("Edit", vm);
    }
    
    // Command handler Execute method implementation    
    public async Task<CommandResult> Execute(AddOrEditCommand command)
    {
        ValidateArguments(command);
        
        // Instantiate the context (if not passed to the handler, which will only be the case when unit testing)
        Context = Context ?? new ApplicationDbContext();

        var result = new CommandResult();
        var commandValid = false;
        Activity activity;
        
        // If a new record, create a new entity, map the command to the domain object and add it to the context
        if (command.Id == 0) 
        {
            activity = new Activity();
            Mapper.Map(command, activity);
            Context.Activities.Add(activity);
            commandValid = true;
        }
        else 
        {
            // If an updated record, retrieve it from the context and apply the updates
            activity = await Context.Activities.SingleOrDefaultAsync(x => x.Id == command.Id);
            if (activity != null) {
                Mapper.Map(command, activity);
                commandValid = true;
            }
            else {
                result.Message = "Activity not found.";
            }
        }
        
        if (commandValid) 
        {
            // Commit data changes
            await Context.SaveChangesAsync();
            
            // Prepare result
            result.Success = true;
            result.Data = activity.Id;
        }
        
        return result;
    }    

CQRS, Rich Domains and Unit Testing with ASP.Net MVC and Entity Framework

Having spent a bit of time working on CMS (Umbraco and EPiServer) based projects, I recently had to come back to building a custom ASP.Net application using the MVC framework. Before commencing I took a bit of time to read around, and re-think some of the ways I've built applications like this previously, in terms of how the solution is architected.

Previously I've put together a fairly strict three layered architecture - data access layer using Entity Framework, wrapped in a unit of work/repository pattern; a business logic or service layer, and the presentation web application. Which worked quite nicely in truth, but there's always value in re-looking at such things and seeing how they could be improved. And where's the fun in doing things the same way each time anyway!

In particular I was keen to investigate:

  • A CQRS style architecture, where we work with distinct query and command objects, in a more "slices over layers" fashion. This breaks the application down logically more vertically than horizontally, with features being grouped and having their own distinct classes.
  • Better following the single responsibility principle, with smaller, more focussed classes.
  • Removing the probably unnecessary wrapping of the Entity Framework context, which after all, effectively provides its own unit of work pattern.
  • In doing so, looking to use the Entity Framework context in a more discrete fashion, avoiding potential bugs and complexities that I've run into before. These can manifest themselves if you are keeping the context alive across different operations, or attempting to handle entities created directly from form posts rather than being pulled from the context.
  • Working with a richer domain model that would play nicely both with Entity Framework and the presentation layer.
  • Increased focus on unit testing.

Credits and further reading

In working up this new architecture, nothing was invented from scratch... rather I pulled together a number of different influences from various books and blog posts that led to the patterns described in this series of posts:

Posts in this series

There will be three further posts in this series, where I'll discuss in more detail certain aspects of the application patterns I'm now using.

Friday, 8 August 2014

Using Umbraco Mapper with Archetype (Part 2)

In the previous post I discussed a method of mapping property values from the Archetype package to a view model using Umbraco Mapper. There was still a remaining issue of how to handle picked content.

With version 1.4.7 of Umbraco Mapper (just updated on NuGet) this is now possible, with just some changes to the method noted in the previous post.

Firstly, you will need to have the Umbraco Core Property Value Converters package installed. If you are using Archetype you'll likely be using this already, but it's needed here to convert the values returned from Archetype node picker properties to either IPublishedContent (for single items from a content picker) or IEnumerable<IPublishedContent> (for multiple items from a multi-node tree picker).

Then extending our previous example, let's say we've added fields to our "Football Match" archetype to have a content picker for the match report, and a multi-node tree picker for match reports from previous games.

Our view model then looks like this:

    public class FootballMatchesPageViewModel
    {
        public string Name { get; set; }

        public IEnumerable<FootballMatch> TodaysMatches { get; set; }
    }

    public class FootballMatch
    {
        public FootballMatch()
        {
            MatchReport = new MatchReportTeaser();
            ReportsFromPreviousGames = new List<MatchReportTeaser>();
        }

        public string HomeTeam { get; set; }

        public int HomeTeamScore { get; set; }

        public string AwayTeam { get; set; }

        public int AwayTeamScore { get; set; }

        public MatchReportTeaser MatchReport { get; set; }

        public IList<MatchReportTeaser> ReportsFromPreviousGames { get; set; }
    }

    public class MatchReportTeaser
    {
        public string Name { get; set; }

        public string Url { get; set; }
    }

Note it's important to have the constructor set the complex type and collection properties to instantiated objects, as the mapper won't handle mapping to null values.

The custom mapping we set up in the previous example needs some minor amends too, it's now:

    public class ArchetypeMapper
    {
        public static object MapFootballMatch(IUmbracoMapper mapper, IPublishedContent contentToMapFrom, string propName, bool isRecursive) 
        {
            var result = new List<FootballMatch>();

            var archetypeModel = contentToMapFrom.GetPropertyValue<ArchetypeModel>(propName, isRecursive, null);
            if (archetypeModel != null)
            {
                var archetypeAsDictionary = archetypeModel
                    .Select(item => item.Properties.ToDictionary(m => m.Alias, m => GetTypedValue(m), StringComparer.InvariantCultureIgnoreCase))
                    .ToList();
                mapper.MapCollection(archetypeAsDictionary, result);
            }

            return result;
        }

        private static object GetTypedValue(ArchetypePropertyModel archetypeProperty)
        {
            switch (archetypeProperty.PropertyEditorAlias)
            {
                case "Umbraco.ContentPickerAlias":
                    return archetypeProperty.GetValue<IPublishedContent>();
                case "Umbraco.MultiNodeTreePicker":
                    return archetypeProperty.GetValue<IEnumerable<IPublishedContent>>();
                default:
                    return archetypeProperty.Value;
            }            
        }
    }

Again, an important point to note here is the StringComparer.InvariantCultureIgnoreCase argument to the ToDictionary call - this makes key look-ups case insensitive, which is handy when dealing with property aliases that are usually camel cased in Umbraco but Pascal cased in C# classes.

With the latest release of Umbraco Mapper, there's a small update to the routine that maps from a dictionary as we are using here. It checks the types, and if they are found to be IPublishedContent or IEnumerable<IPublishedContent>, the mapping routine for those types is run for that property - thus allowing them to be mapped in exactly the same way as if we were just mapping the content directly.

Thursday, 7 August 2014

Using Umbraco Mapper with Archetype

A post came up today on the Umbraco forum discussing use of the Umbraco Mapper package I've built with colleagues at Zone, and everyone's favourite new package, Archetype. We haven't had a chance to work with Archetype as yet, but are planning to use it on an upcoming project, so it seemed a good idea to work out the best way for these two packages to play together.

It's a little tricky to begin with as Umbraco Mapper is primarily about mapping from IPublishedContent - single Umbraco nodes or collections picked from a node picker or pulled together via a node query - to strongly typed view models.

However it also supports mapping from a simple dictionary of strings and objects - Dictionary<string, object> - and that, along with the ability to define custom mappings for a particular view model type, means this can be done fairly cleanly.

Note: originally I hadn't found a clean way to handle picked content - e.g. from a content picker or multi-node tree picker. This is supported now - see the follow-up post.

To take an example, let's say I'm creating a page to display a list of football results, and so have created an archetype to represent a series of matches:

I've then created a view model to represent this information:

    public class FootballMatchesPageViewModel
    {
        public string Heading { get; set; }

        public IEnumerable<FootballMatch> TodaysMatches { get; set; }
    }

    public class FootballMatch
    {
        public string HomeTeam { get; set; }

        public int HomeTeamScore { get; set; }
            
        public string AwayTeam { get; set; }

        public int AwayTeamScore { get; set; }
    }

Which will be rendered in my view template like this:

    <h3>Today's Matches</h3>

    @foreach (var match in Model.TodaysMatches)
    {
        <p>@match.HomeTeam @match.HomeTeamScore v @match.AwayTeamScore @match.AwayTeam</p>
    }

As it stands, Umbraco Mapper won't know how to handle mapping to the IEnumerable<FootballMatch> as it's a type it knows nothing about. We can rectify this by setting up a custom mapping. You add this code just after where you have instantiated the mapper, probably best in a base controller so all custom mappings can be initialised in one place.

    Mapper.AddCustomMapping(typeof(IEnumerable<FootballMatch>).FullName, 
        ArchetypeMapper.MapFootballMatch);

And then we need to write the function that will handle the mapping for the given type:

    public class ArchetypeMapper
    {
        public static object MapFootballMatch(IUmbracoMapper mapper, IPublishedContent contentToMapFrom,
            string propName, bool isRecursive) 
        {
            var result = new List<FootballMatch>();

            var archetypeModel = contentToMapFrom.GetPropertyValue<ArchetypeModel>(propName, isRecursive, null);
            if (archetypeModel != null)
            {
                var archetypeAsDictionary = archetypeModel
                    .Select(item => item.Properties.ToDictionary(m => FirstCharToUpper(m.Alias), m => m.Value));
                mapper.MapCollection(archetypeAsDictionary, result);
            }

            return result;
        }

        private static string FirstCharToUpper(string text)
        {
            return text.Substring(0, 1).ToUpper() + text.Substring(1);
        }
    }

The trick here, as you can see, is that we are taking the strongly typed ArchetypeModel, converting it to a dictionary and then utilising the mapper to map to the view model.

With that in place, the view model will be mapped as required and the football match results will be displayed in the view.

Thanks to Shinsuke and Raffaele for raising the questions and coming up with most of the answers!

See part 2 for details on handling mapping of content pickers used in an Archetype.

Another possible approach

What's also interesting here is that we are using Umbraco Mapper to effectively map from one strongly typed model to another - albeit via a dictionary. And that rings some bells for me - i.e. AutoMapper. This is a tool I've used on many projects outside of Umbraco for mapping from domain models to view models and between other types - it was trying to replicate this within Umbraco that led to the development of Umbraco Mapper.

So if preferred, you could certainly utilise AutoMapper here, to go from the Archetype model to the view model. For me I'm OK with going via the dictionary as above, and perhaps having two mapping components in one project could get a bit confusing. But given we are moving into the world of strongly typed models with Umbraco these days via the property value converters, it's certainly another approach worth considering.

Friday, 20 June 2014

Umbraco, MVC and Strongly Typed View Models

I've recently returned from Codegarden, the annual Umbraco conference in Copenhagen. It was my second time there, and as previously had a great time and learnt a lot about the CMS and the various ways people are using it.

There was a definite theme running through many of the technical sessions, including the one I was asked to give: the idea of strongly typed models, and best practices and techniques for using them.

Initially I was perhaps naively hoping that in these conversations with other developers tackling these issues, we might come to a consensus as to the best approach. Not surprisingly that didn't happen; if anything the picture is changing further due to some interesting work on the core that Stephan presented. But that's no bad thing - options are good, and it's clear that different developers and teams have different priorities in this area.

Having mulled it all over on the way home, it seems to me we have (or soon will have) six (yes, six!) general approaches we can take in this area of using MVC patterns with Umbraco. There's a kind of continuum that ranges from working solely in the views through to full route hijacking and mapping to view models - and teams can pick and choose where they are most comfortable along this line.

Here's my take on these approaches:

Views Only

With this method the developer simply works directly in the views, using surface controllers only for the things they are necessary for, such as rendering child action partials and processing form posts. All property field rendering, querying of the node tree and even Examine searches can be carried out here.
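As a brief illustration of this pattern (the "title" property alias here is hypothetical), a view might query the node tree and render directly:

    <ul>
        @foreach (var child in Model.Content.Children.Where(x => x.IsVisible()))
        {
            <li>@child.GetPropertyValue<string>("title")</li>
        }
    </ul>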

Advantages:
  • Easiest way to get started for people new to Umbraco and/or MVC
  • Accessible to developers who don't work within Visual Studio
Disadvantages:
  • It might be using MVC rendering, but it's not really MVC in terms of the architectural pattern. The views with this approach have too much logic and responsibility. For larger applications this is likely to be hard to maintain.
  • Not so easy for shared teams with front-end developers unfamiliar with Umbraco APIs. Simply hard to see the HTML from the C#.
  • Likely to run into issues with dynamics, which although initially easier to write and understand, break down when it comes to more complex tasks - as demonstrated in Iluma's talk at Codegarden.

Surface Controllers and Child Actions

With this method the main view has little code other than an @Html.Action reference to a surface controller action method. This method returns a strongly typed partial, based on a view model that can be custom created for the required display.
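As a sketch of this (the controller, action and view model names are illustrative), the main view contains just:

    @Html.Action("TodaysMatches", "FootballSurface")

And the surface controller action builds a custom view model and returns a strongly typed partial:

    public class FootballSurfaceController : SurfaceController
    {
        [ChildActionOnly]
        public ActionResult TodaysMatches()
        {
            // Query the node tree and populate the view model here,
            // keeping that logic out of the view
            var model = new TodaysMatchesViewModel();
            return PartialView("TodaysMatches", model);
        }
    }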

Advantages:
  • Gives us strong typing and the ability to provide our own view model
  • Improved separation of concerns
  • No need for creating controllers for route hijacking
  • Can be used to extend the views only approach perhaps just for more complex pages that might benefit from using a customised view model - no need for all or nothing.
Disadvantages:
  • More complex request cycle: view --> controller action --> partial - and so it's less obvious exactly where code lives when it comes to maintenance

Custom View Models that Inherit from RenderModel

This is the approach taken by the Hybrid Framework, so called because it allows you to extend the existing Umbraco model that is passed to the view with your own additional properties and collections. Route hijacking is used too: the node querying is carried out in the controller and a collection passed to the view via the view model. Because the view model extends the default RenderModel though, you can still use its methods, like GetPropertyValue, as you like.
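As an illustrative sketch of the hybrid pattern (the class names and the related articles query are my own invention), a view model extending RenderModel and its hijacked controller might look like:

    public class NewsPageViewModel : RenderModel
    {
        public NewsPageViewModel(IPublishedContent content)
            : base(content)
        {
        }

        // Custom-queried data alongside the standard Umbraco properties
        public IList<IPublishedContent> RelatedArticles { get; set; }
    }

    public class NewsPageController : RenderMvcController
    {
        public override ActionResult Index(RenderModel model)
        {
            var viewModel = new NewsPageViewModel(model.Content)
            {
                RelatedArticles = model.Content.Parent.Children.Take(5).ToList(),
            };
            return CurrentTemplate(viewModel);
        }
    }

The view can then still call Model.Content.GetPropertyValue as normal, while also having the extra collection available.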

Advantages:
  • Provides a proper MVC structure with controllers and views having their own defined responsibilities.
  • The hybrid approach means you don't have to go all the way to creating and populating a custom view model if you prefer not to.
Disadvantages:
  • We still have some view code that could be simpler to work with

Code Generated Content Models

This is based on a new feature demonstrated at the conference: the ability to code generate strongly typed classes based on the data in your document types. I definitely need to dig into this more to understand the full details, but the idea is that having generated these classes using a custom tool, instead of getting and working with an IPublishedContent in your views, you'll get an instance of a strongly typed content model class.

Because the generated classes are partial, you can augment them with your own properties and methods.
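For example, augmenting a generated class might look something like this (the NewsArticle class and its PublishDate property are hypothetical stand-ins for what the tooling would generate):

    // Generated elsewhere by the tooling:
    // public partial class NewsArticle { ... }

    // Our own additions, in a separate file:
    public partial class NewsArticle
    {
        // A derived property built on top of the generated ones
        public bool IsRecent
        {
            get { return PublishDate > DateTime.Now.AddDays(-7); }
        }
    }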

Coupled with this, another new feature is being developed to allow queries to be referenced in the views but stored elsewhere in code. This would considerably clean up an approach where querying is done in the views, as the code would not need to include all the usual LINQ statements here; rather the query would be accessed via a syntax along the lines of @foreach (var item in Umbraco.Queries(alias)).

Another tool that supports this pattern is Umbraco CodeGen. This package also demonstrated at Codegarden piggybacks on the XML files generated by the uSync package to provide a mechanism to both generate strongly typed models from your document types, and to make changes to your document types following amends to the code.

Advantages:
  • Looks like this will be a great technique for cleaning up views and giving front-end developers an easier model to work with.
  • Will be supported in the core, meaning familiarity with the approach is likely to become fairly widespread around the Umbraco community.
Disadvantages:
  • In MVC applications, there's often a distinction made between the domain model and the view model. The former is your application entities that are generally closely affiliated with your persistence mechanism (usually, a database). The latter are a code representation of the particular data for a view.

    It's common to treat these things quite separately, and utilise a mapping layer when translating from one to the other. In Umbraco, our domain model is Umbraco itself - the database of content and the document and data types - so it could be argued that these content models are really another representation of this. They will be easier to work with, being strongly typed rather than instances and collections of IPublishedContent, but in and of itself this isn't strictly a view model.

  • Whilst these content models could be used in views directly quite comfortably when it comes to property values, for queries it's not so obvious. You would either be left continuing to query in the views, or use the Umbraco helper queries collection as noted above.

It's important to note though that once the latter is available, the combination of the content models and the queries collection will offer a way to have very clean views with little logic. Even if, strictly speaking, a view model isn't being used, this may well be enough to solve the "too much logic in the views" problem for most people - and possibly for me too once I've played around with it a bit more.

POCO View Models and Mapping

This approach is my favourite at the moment. It is based on creating a pure view model for each view, and using route hijacking to intercept requests and construct these view models. The difference in approach from the Hybrid Framework though is that these view model classes don't inherit from Umbraco's RenderModel - they are simple POCOs with no dependency on Umbraco at all.

Unless there's a need to retain access to the Umbraco helper in the view - perhaps for accessing Dictionary values - the view can be strongly typed directly to the view model, using the @model syntax. As such, front-end developers needn't work with Umbraco APIs at all, just IntelliSense-discoverable properties and collections.

The mapping itself, from Umbraco's IPublishedContent to the view model, can be simplified drastically by using the Umbraco Mapper package - in the same way that AutoMapper works between C# classes, this package has been designed to use conventions with configurable overrides to flexibly map from Umbraco content to view model instances.
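A brief sketch of the pattern (the view model, its properties and the controller name are illustrative), using a hijacked controller and the mapper's convention-based Map method:

    public class NewsPageViewModel
    {
        // Pure POCO - no dependency on Umbraco at all
        public string Heading { get; set; }

        public DateTime PublishDate { get; set; }
    }

    public class NewsPageController : RenderMvcController
    {
        private readonly IUmbracoMapper _mapper = new UmbracoMapper();

        public override ActionResult Index(RenderModel model)
        {
            var viewModel = new NewsPageViewModel();

            // By convention, document type properties are mapped to
            // view model properties of the same name
            _mapper.Map(model.Content, viewModel);

            return CurrentTemplate(viewModel);
        }
    }

The view is then strongly typed with @model NewsPageViewModel and works only with the POCO.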

Advantages:
  • This pattern utilises MVC best practices, with Umbraco "domain models" mapped to custom view models for each view, leaving very lean view code with little or no logic
Disadvantages:
  • There is a little more work involved to firstly create the view model and then handle the mapping, though the Umbraco Mapper package takes away a lot of the grunt work around this.

Code Generated Content Models With Mapping

This approach is really a combination of the previous two, taking on board the point that the code generated content models are not strictly view models, and may need augmenting with other information drawn from elsewhere in the node tree or even other sources.

So we would, as in the previous scenario, create our own view models in addition to the generated content models, and use mapping to transfer data from one to the other. In this case though, rather than using the Umbraco Mapper package, given we are mapping between class instances, we can utilise AutoMapper - a well established component for these types of operations.
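Sketching this with AutoMapper's static API (and hypothetical NewsArticle content model and NewsArticleViewModel classes), the mapping could look like:

    // Once, at application startup - properties with matching
    // names are mapped by convention:
    Mapper.CreateMap<NewsArticle, NewsArticleViewModel>();

    // Then in the hijacked controller action:
    var viewModel = Mapper.Map<NewsArticleViewModel>(contentModel);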

Advantages:
  • Another pattern that utilises MVC best practices, with Umbraco "domain models" mapped to custom view models for each view, leaving very lean view code with little or no logic
Disadvantages:
  • There is a little more work involved to firstly create the view model and then handle the mapping, though the AutoMapper component takes away a lot of the grunt work around this.

Conclusions

Having arrived at Codegarden to talk about three methods for working with MVC in Umbraco, I've come away with six! So is that progress? I'm not sure... but personally I look on this as a positive thing: there are options for teams to select between and decide what works best for them.

I'm looking forward to reviewing and experimenting with the new features coming out of the core and considering how best they fit with my team's preferred ways of working. I'm sure others will be doing the same, as there was a lot of discussion on this topic this year.

Friday, 6 June 2014

Session notes for talk at Umbraco Codegarden 2014

I'm speaking at the Umbraco CodeGarden conference in Copenhagen this coming week, and for anyone interested in any of the background to the topics I discuss, here's a set of links to various resources.

Firstly a copy of the slides.

Mapping from Umbraco content to custom view models

Umbraco Mapper package GitHub page, NuGet download and Our Umbraco package page.

Dependency injection

Blog post detailing setting up Ninject with Umbraco.

Unit testing

Blog posts on techniques for testing Umbraco surface controllers via external classes and Umbraco base classes.

Blog post on use of MS Fakes and further details from MSDN