Saturday, 29 June 2019

Practical Decisions on Testing: Code Coverage Metrics

Another short reference post on the topic of unit testing, in the spirit of saving your keystrokes, based on a conversation I've had a couple of times concerning unit test code coverage. We're using SonarQube on a current project, which, amongst other things, will review C# code and determine the percentage covered by unit tests.

We're currently running at around 80%, which to me seems a fairly healthy metric. It's not the be-all and end-all of course - a high percentage of test coverage doesn't by itself indicate a healthy code base - but, all things being equal, it's a good sign.

The question then comes - why not 100%?

Not in the naive sense though - there are clearly areas of the code, property getters and setters being the classic example, that you could unit test but would get little if any value from doing so, and hence the effort is unlikely to be worthwhile.

Rather in the sense that if there are areas that aren't going to be covered by unit tests, we can decorate them with the [ExcludeFromCodeCoverage] attribute, and they'll be excluded from the code analysed for the test coverage metric, thus boosting it.

My view here is that, when deciding what code should be covered by unit tests, the decision isn't black and white. Some code - probably most - is better coupled with tests, confirming correct functionality, guiding design if adopting TDD and reducing the likelihood of regression issues, and so the tests should be written along with the feature. On the other hand, as mentioned, some code clearly offers little value in testing, and hence this can immediately be attributed, removing it from analysis, with a comment indicating why the decision has been taken not to apply unit tests to the code.
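
As a minimal sketch of how that can look (the class and the reasoning in the comment are illustrative):

    using System.Diagnostics.CodeAnalysis;

    // Excluded from coverage analysis: this class consists only of simple
    // property getters and setters, so unit tests would add little if any value.
    [ExcludeFromCodeCoverage]
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }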

The grey area in between, though, is code that is hard to test and where there's some, but not so much, value in doing so - the upper left quadrant as discussed in this previous post. Given that, you might well decide there are more important things to be spending time on right now - such as new feature development or paying off other technical debt. As time goes by though, that cost-benefit may shift, as will the set of competing priorities. Given that, it seems best to keep the decision on "should we add tests for this" open - not closing it off by excluding the code from coverage, but rather keeping it under review and re-considering when time and resources allow.

Wednesday, 1 May 2019

A First Look at ML.NET

I've recently been investigating ML.NET, learning by doing: implementing a sample project to try to predict a country's "happiness score" based on various geographic and demographic factors. Cross-posting the resulting blog post shared on Zone's digital output.

Tuesday, 2 April 2019

Umbraco Personalisation Groups: New Release Supporting Umbraco V8

I've published two new releases of the Umbraco Personalisation Groups package, providing personalisation features for websites built on Umbraco, supporting Umbraco 7 and 8.

They are available for download on Our or can be installed via NuGet (though it's best to go the package route for a first install, to help with the setup of some necessary document and data types).

With Umbraco 8, a concept of segments and variants has been introduced that will likely be usable for providing similar features. The initial release though only deals with language variants for multi-lingual sites, so, at least for now, I think the package will still have some value.

Release Details

There are two new versions of the package:

  • Version 1.0 - targeting Umbraco 7
  • Version 2.0 - targeting Umbraco 8
Umbraco 7 (Personalisation Groups 1.0) Release

This provides almost the same functionality as the most recent 0.3.6 release. I have bumped the major version partly because it was about time it was on a 1.0 number, but also to reflect the internal restructuring that's occurred to be able to support both versions with the maximum amount of code re-use (there are now two DLLs).

There should be no breaking changes in functionality though.

Umbraco 8 (Personalisation Groups 2.0) Release

The release for Umbraco 8 should provide the same functionality as has been available for Umbraco 7, though obviously working with the new release of the CMS platform.

Wrapping Up

Hopefully this new version proves useful to anyone already using the package and migrating to Umbraco 8, or considering using it for a new project. If anyone does find any issues, please feel free to report them on the forum or issue tracker.

Sunday, 31 March 2019

Umbraco Mapper: New Releases Supporting Umbraco V8

Today I've published two new versions of the Umbraco Mapper package, a package that supports the use of view models by mapping Umbraco content to custom view model classes (the original thinking, I guess, was "AutoMapper for Umbraco").

They are available on our.umbraco.org for download or can be installed via NuGet.

I was surprised to see that it's been over 5 years since this package was originally built by myself and some colleagues at Zone - a long time in technology. We're still making use of it though, and as I had some popular demand - well, two people(!) - for a version compatible with the latest Umbraco release, I thought it would be good to prepare one.

I've taken an approach of restructuring the solution such that there's a common project containing a core set of functionality with no Umbraco dependency, and then created two further Visual Studio projects within the solution, one referencing the Umbraco 7 binaries and one the Umbraco 8 ones. That way, I can support both versions moving forward, and at least minimise the extra effort in terms of duplicated code to maintain. If all goes as expected, I'll be talking about this technique in more detail, along with other factors involved in upgrading packages to Umbraco 8, in an edition of Skrift near you in a month or so.

Release Details

There are two new versions of the package:

  • Version 3.0 - targeting Umbraco 6 and 7
  • Version 4.0 - targeting Umbraco 8
Umbraco 7 (Umbraco Mapper 3.0) Release

This provides almost the same functionality as the most recent 2.0.9 release. However I have bumped the major version due to a few minor breaking changes, mainly taking the opportunity to streamline a few things.

  • The recursiveProperties string parameter passed to the mapping operations has been removed. This was an API design issue from the start really, as it was never strictly necessary, and it became redundant once support was added for marking particular view model fields to be mapped recursively via the attributes or the property mapping dictionary you can provide in the mapping call.
  • I've changed the approach to passing a CustomMapping in the property mapping dictionary to require a type and method rather than a CustomMapping instance itself (see the sketch after this list). This aligns with the use via attributes, and improves internal code re-use as a version-specific Umbraco dependency is removed.
  • I've added a new FallbackMethods property mapping override and attribute. This isn't necessary when targeting version 7, as there the only method of "falling back" to retrieve a property value not found on the current content node is recursively, via the ancestors in the tree, for which the MapsRecursively boolean attribute or property mapping override can be used. It paves the way though for supporting other methods - namely language fallback - in version 8.
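
As a rough sketch of the second of these changes (the PropertyMapping property names shown are assumptions for illustration, as are the custom mapping class and method):

    // Identify the type and method implementing the custom mapping, rather
    // than passing a constructed CustomMapping delegate as before.
    // GeoCoordinateMapper and MapGeoCoordinate are hypothetical examples.
    var propertyMappings = new Dictionary<string, PropertyMapping>
    {
        {
            "Location", new PropertyMapping
            {
                CustomMappingType = typeof(GeoCoordinateMapper),
                CustomMappingMethod = nameof(GeoCoordinateMapper.MapGeoCoordinate)
            }
        }
    };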

I've also restructured the solution and some namespaces as part of the work to support Umbraco 8, though this shouldn't have any effect for users of the package.

Umbraco 8 (Umbraco Mapper 4.0) Release

The release for Umbraco 8 should provide the same functionality as has been available for Umbraco 7, though obviously working with the new release of the CMS platform.

There are two additional features available though.

One is support for language variants. The signatures for the Map and MapCollection methods have been changed to accept an optional culture code (e.g. "en-GB"), and, if provided, the mapping will use the content from the language variant indicated by the culture code.

    mapper.Map(CurrentPage, model, "en-GB");

The related second addition is support for fallback methods, so we can ask the mapper to retrieve content from a fallback language if the property value requested has no content (in addition to falling back recursively, via the ancestors in the content tree, as was available before). This can be passed either using a dictionary in the mapping operation:

    mapper.Map(CurrentPage, model, "it",
        new Dictionary<string, PropertyMapping>
        {
            {
                "HelloText", new PropertyMapping
                {
                    FallbackMethods = Fallback.ToLanguage.ToArray()
                }
            }
        });

Or by decorating the view model with an attribute:

    public class MyViewModel
    {
        [PropertyMapping(FallbackMethods = new[] { Fallback.Ancestors })]
        public string HelloText { get; set; }
    }

As it's an array, you can even make use of the more complicated fallback logic supported by Umbraco, so you could retrieve content from the fallback language, and, if that fails, via the parent node:

    public class MyViewModel
    {
        [PropertyMapping(FallbackMethods = new[] { Fallback.Language, Fallback.Ancestors })]
        public string HelloText { get; set; }
    }

Wrapping Up

Hopefully this new version proves useful to anyone already using the package and migrating to Umbraco 8, or considering using it for a new project. We'll likely get to put it through its paces at Zone with some upcoming projects. In the meantime, if anyone does find any issues, please feel free to report them on the forum or issue tracker.

Monday, 25 March 2019

Practical Decisions on Testing: Trading off Value with Difficulty

Recently I had reason to condense an approach to decisions around the value versus the effort of testing - specifically unit testing, though it likely applies at higher levels too - and came up with the following diagram that I think illustrates it quite well.

On it we see two axes: one being the value of the tests and the other being how difficult it is to write them. We can consider here that code that's complex, expresses business logic etc. is going to be high value to test, but very trivial code, down to the level of property getters and setters, likely won't add much value, if any. Code that's easy to test will have few or no dependencies that may require mocking or stubbing. Sometimes though code can be particularly difficult to test, often when tied to platform components that prove troublesome or even impossible to treat in this way.

The graph then breaks down into four quadrants:

  • Bottom right - easy to test and high value. This is the sweet-spot where we should be looking to focus and get close to 100% code coverage.
  • Top right - high value, but the effort in testing is higher. We should aim to do the best we can here, but sometimes we might run into something that just proves too difficult to test to justify the effort.
  • Bottom left - easy to test but not much value. We still might look to add tests here in many cases, unless the value is almost zero, given it's easy to do and so doesn't take much effort.
  • Top left - difficult to test and not much value. Here's where we'll justifiably spend the least focus.

Above the level of writing individual tests, there are two directions that we should aim to influence regarding the health of the code-base from a testing perspective - shown on the diagram with the large blue arrows.

Firstly, we improve code coverage by pushing up along the diagonal. Areas of code that sit just over the border, in the white area above the diagonal line, are where we'll get the most value for our effort.

And secondly, we should look to improve the testability of our code-base, making the parts that are difficult to test less so. Even when tied to platforms that don't make testing easy in places, we can still introduce wrapping classes, draw out much of the logic that is platform agnostic into more testable constructs, etc.
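
To illustrate the wrapping technique with a minimal sketch (the names here are invented for the example; HttpContext stands in for any platform dependency that's awkward to test directly):

    using System.Web;

    // Thin abstraction over the platform dependency.
    public interface IRequestContext
    {
        string Host { get; }
    }

    // Wrapper over the static HttpContext - trivial enough to leave untested.
    public class HttpRequestContext : IRequestContext
    {
        public string Host => HttpContext.Current.Request.Url.Host;
    }

    // Logic that depends only on the interface can now be unit tested by
    // supplying a stub or mock implementation.
    public class CanonicalUrlBuilder
    {
        private readonly IRequestContext _requestContext;

        public CanonicalUrlBuilder(IRequestContext requestContext)
        {
            _requestContext = requestContext;
        }

        public string BuildFor(string path) => $"https://{_requestContext.Host}{path}";
    }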

Thursday, 3 January 2019

Maintaining the Sitecore Helix Architecture

Like most development teams working on modern Sitecore projects of reasonable complexity, at the agency where I work we've followed the Helix design principles and conventions for Sitecore development in structuring the solution. This means we have three groups of projects:

  • Foundation - at the lowest level, these projects may depend on each other but don't depend on any in the higher levels. Modules in the Foundation layer are business-logic specific extensions on the technology frameworks used in the implementation - e.g. Sitecore itself - or shared functionality between feature modules that is abstracted out.
  • Feature - these are projects in the middle layer, which may depend on Foundation projects but not on any in the higher Project level and, importantly, not on each other. Projects here should map to a business domain concept or feature.
    • We've extended the feature level in a small way by introducing the concept of sub-features. For example, for a data import feature, we have an API project, processing implementations (using Azure Function and console apps), and a common logic project. These are all classed as a single feature and dependencies within sub-features are allowed.
  • Project - this is the highest level and can depend on any of the lower levels. It's where the website is pulled together, combining features into a web application output. We have a common one, and then one for the specific details of each of the sub-sites in our multi-site solution.

Ensuring Adherence to the Helix Architecture

There are two forms of dependencies that go against the Helix principles and rules defined above. Hard dependencies come in the form of project references - if one feature project directly depends on another one, for example. In addition to developer guidance and code reviews, we have a defence against this in the form of an architectural fitness function that runs as part of the unit test suite - thus failing the build if the conventions are broken.
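
As a rough sketch of how such a fitness function can be written (the "MySolution.Feature" naming convention and the use of NUnit are assumptions for illustration, not necessarily our exact implementation):

    using System;
    using System.Linq;
    using NUnit.Framework;

    [TestFixture]
    public class HelixConventionTests
    {
        // Assumes feature assemblies follow a "MySolution.Feature.*" naming
        // convention and are loaded into the current AppDomain. A fuller
        // version would also allow the sub-feature exception described above.
        [Test]
        public void FeatureProjectsDoNotReferenceOtherFeatures()
        {
            var featureAssemblies = AppDomain.CurrentDomain.GetAssemblies()
                .Where(a => a.GetName().Name.StartsWith("MySolution.Feature."));

            foreach (var assembly in featureAssemblies)
            {
                var offendingReferences = assembly.GetReferencedAssemblies()
                    .Where(r => r.Name.StartsWith("MySolution.Feature.") &&
                                r.Name != assembly.GetName().Name)
                    .ToList();

                Assert.That(offendingReferences, Is.Empty,
                    $"{assembly.GetName().Name} should not reference other feature projects.");
            }
        }
    }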

The other form of dependencies are soft ones, which can't be detected by automated tools and so need to be part of developer consideration when implementing features, and checked in code reviews. These take the form of one project having "knowledge" of another one that should be off limits, perhaps via a shared name of a template or field.

If there seems to be a need to break the Helix conventions, there are a number of ways we've found to resolve it, which can be summarised into the following three methods.

Selecting the Seams

One of the challenges when selecting which projects we should have in which layers is defining the appropriate seams between different groups of functionality. The idea of a feature project in particular is that it follows the common closure principle, which states that "classes that change together are packaged together". It's quite possible that we won't get that right first time, and as the solution develops we look to refactor to either combine or split feature projects. We need to strike a balance here between having useful separation, so we follow a form of single responsibility principle for a feature, but not splitting too much and then finding we have a lot of leakage of knowledge between features.

Introducing a Foundation Project

A standard way of resolving the conflict when two feature projects need to share logic is to extract that logic into a foundation project. For example, low-level, cross-cutting concerns such as search indexing and ORM usage are defined in specific foundation projects that are referenced and shared by multiple feature ones. As we move forward, we'll likely want to introduce more of these.

Foundation projects can depend on each other, so we don't have the same concerns at this level of the architecture. However, we are of course restricted in that we can't have circular references, so there may still need to be further refactoring and splitting of logic here, such that the foundation projects can depend on each other as needed.

When it comes to softer dependencies - such as knowledge of how different sites (defined at project level) or templates (defined at feature level) behave - we've typically introduced a foundation level "register" class. These each consist of a static collection of typed objects, to which each higher level project can register details at application start-up. Other projects can then reference these registers to extract the details they need for particular renderings, computed fields or other operations.

One example of this is a class we've called AddressableTemplatesRegister - this maintains a global dictionary of registered template names for those pages that are addressable (i.e. have URLs), such that the indexing routines can ensure a URL is indexed for those items. The key for the dictionary is the template name and the value is an instance of a class with properties for related details such as canonical URL generation, page title generation, and retrieval of site map entries.

Using these, each feature project can register at start-up the set of templates that it's concerned with, along with the specifics of how concerns such as those just mentioned are handled. Individual feature projects can then consume the register - e.g. we have a "Navigation" one that, amongst other things, can enumerate the register and retrieve site map entries appropriate for each template, perhaps filtering out certain ones based on the contents of particular fields.
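
A minimal sketch of the pattern might look something like the following (all names and the choice of members are illustrative rather than the actual implementation):

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using Sitecore.Data.Items;

    // The per-template details the register holds (members illustrative).
    public class AddressableTemplateDetails
    {
        public Func<Item, string> CanonicalUrlGenerator { get; set; }
        public Func<Item, string> PageTitleGenerator { get; set; }
        public Func<Item, string> SiteMapEntryGenerator { get; set; }
    }

    // Foundation-level register: feature projects add entries at application
    // start-up, and other projects read them back as needed.
    public static class AddressableTemplatesRegister
    {
        private static readonly ConcurrentDictionary<string, AddressableTemplateDetails> Entries =
            new ConcurrentDictionary<string, AddressableTemplateDetails>();

        public static void Register(string templateName, AddressableTemplateDetails details) =>
            Entries[templateName] = details;

        public static bool TryGet(string templateName, out AddressableTemplateDetails details) =>
            Entries.TryGetValue(templateName, out details);

        public static IReadOnlyDictionary<string, AddressableTemplateDetails> All => Entries;
    }

Each feature project would then call Register for its own templates from its start-up code, with consumers such as the indexing routines or the "Navigation" feature enumerating All or calling TryGet.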

Delegating to Feature Projects

Whilst Sitecore is particularly extensible via its pipeline architecture, some concepts only exist as singletons and as such can't be "decorated" by individual features. These are rare - most things, like for example item resolution, can be defined in the appropriate features, and there can be several, each handling different templates. But one example is the LinkProvider - there can only be one of these defined for the solution.

Whilst we have only one of these, defined in the common "Project" project, we delegate the specific details of link generation for different types of pages to the appropriate feature projects, making calls to methods defined on classes there.
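
As an illustrative sketch of the shape this takes (re-using the register idea from above; LinkGenerationRegister and its contents are hypothetical), the single provider overrides Sitecore's LinkProvider and hands off to feature-supplied logic where it exists:

    using Sitecore.Data.Items;
    using Sitecore.Links;

    // The one LinkProvider for the solution, defined in the common "Project"
    // project. Per-template link generation is delegated to handlers that
    // feature projects have registered at start-up (register hypothetical).
    public class DelegatingLinkProvider : LinkProvider
    {
        public override string GetItemUrl(Item item, UrlOptions options)
        {
            if (LinkGenerationRegister.TryGet(item.TemplateName, out var generator))
            {
                return generator(item, options);
            }

            // Fall back to Sitecore's default link generation.
            return base.GetItemUrl(item, options);
        }
    }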

Wrapping Up

Using a combination of these methods, we've been able to keep the solution in adherence to the Helix conventions as it's grown over 18 months.