We were recently discussing this article at a team meeting, and part of that discussion turned to some of the inconsistencies in our product. One area where we have inconsistencies is in how different parts of the product handle the data coming from the UI. Depending on what kind of problem you are looking at, we have radically different paradigms for how we manage that data before sending it down to the low-level engines. At the UI level the product looks fairly consistent, although once in a while these under-the-hood differences do show up, but in the data management layer it's a whole different story.
There are clearly inconsistencies in our product, but is it inconsistent in a way that matters? From the end user's perspective it is fairly consistent, but once you get into the data management layer there are some very big inconsistencies. Does this matter? Should we worry about making it consistent? Well, one of the things that struck me during this discussion was that we were a group of testers who worked on different areas of the product, and each of us would struggle to do deep testing if we were to switch areas of focus. I think one of the main reasons it would be difficult for us to move effectively from one area of the product to another is the inconsistency in the data management layer. So does this inconsistency matter? I would argue that yes, it does. In this case it is affecting the testability of the product.
There are many ways in which this kind of inconsistency in the product hurts us. Let me just rattle off a few of them. The automated tests look very different as you move from one area of the product to another. The testers end up somewhat tied to a particular area of the product, leading to less cross-pollination of ideas (although we are making deliberate moves to learn new areas). When we add new features that are used by multiple areas of the product, the testing effort for these is greatly increased because we have to check how they work with each of those areas. It is much more difficult to test shared features like this than it would be if we had a common data management layer. The inconsistencies under the hood of our product certainly affect its testability.
There are initiatives under way to help consolidate some of the data management layer, and hopefully this will help with some of the inconsistencies, but in the meantime I wonder: what can we as testers do about it? I think one of the main things we can do is learn how these various areas work and how they are inconsistent. We can then use this information in our areas of expertise to talk with developers about the kinds of things that other groups do. We can be the stitching that pulls the various areas together. Another thing we can do is ask questions. How do other groups handle this? What have other teams done to deal with this problem? By asking questions like these we can help people think about consistency as we move forward.
Testers need to be advocates for testability in the products we test and sometimes that also means being an advocate for consistency. How do inconsistencies in your product affect the testability?