When we talk about testing in software development teams, we usually think of what we might call quantitative tests: tests that either pass or fail. They are everywhere: Unit Tests, System Tests, Acceptance Tests, Integration Tests, Regression Tests, Contract Tests and more. While these tests are obviously important for checking whether your product works, they unfortunately don't account for your users. People are not computers, but messy, analogue human beings. So if you want to build products people love, you need to move beyond simple yes/no tests.
The question Qualitative Testing tries to answer is not "Does it work?" but "Is it good enough (for now)?". It covers areas including (but absolutely not limited to) User Experience, Performance, Accessibility and Internationalisation.
Automated tests can often help you make decisions, but ultimately the decision of whether something is good enough has to be made by a human. Take accessibility testing: there are a lot of automated accessibility tools (and running them is a good first step), but they can only ever tell you that your product is not accessible, for example because you forgot an alt attribute on an image. It's possible to have a completely inaccessible website while still scoring 100% for accessibility in Lighthouse. To properly test a website for accessibility you need a person with expertise to test it manually.
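To make the limitation concrete, here is a toy version of the kind of check an automated tool performs. The function name and the regex-based approach are illustrative only (real audits use tools like axe-core or Lighthouse, which parse the DOM properly): it can flag a missing alt attribute, but it cannot judge whether the text a human wrote is actually meaningful.

```typescript
// Toy automated accessibility check: flags <img> tags that have no alt
// attribute at all. Hypothetical helper, for illustration only.
function findMissingAltTags(html: string): string[] {
  const imgTags = html.match(/<img\b[^>]*>/gi) ?? [];
  return imgTags.filter((tag) => !/\balt\s*=/i.test(tag));
}

const page = '<img src="chart.png"><img src="photo.jpg" alt="image">';
console.log(findMissingAltTags(page));
// Only the first tag is flagged. The second passes the automated check,
// yet alt="image" tells a screen-reader user nothing useful -- a tool
// cannot catch that; only a human reviewer can.
```

This is exactly the asymmetry described above: the tool can prove the page is inaccessible, but a clean result proves nothing about whether it is accessible.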
The same is true of performance. Again, we can use tools like Lighthouse, WebPageTest and Calibre that will tell us exactly how many milliseconds it took to load a page or how long a user interaction took, but it turns out humans are messy and impatient, and they don't experience wait time the way a test measures it. It is all too easy to get stuck on raw performance data. Waiting a relatively short time for something to load without any sense of its progress feels worse than waiting longer with a loading animation, or better still, a progress bar. Perceived performance is what really matters.
The idea of perceived performance is touched on in a hilarious and insightful TED Talk by Rory Sutherland, where he talks about Eurostar wanting to improve the train journey between London and Paris. While they spent £6 billion on new tracks to shave 40 minutes off an almost four-hour journey, Rory's idea was to hire top male and female supermodels and have them serve free champagne. You'd have £3 billion in spare change and passengers asking for the train to be slowed down.
Another famous example of perceived performance (and one more closely related to software development) is from Instagram. They massively improved the perceived performance of photo uploads by starting the upload right after the photo was selected, while the user was still picking filters and adding a caption. Raw numbers don't tell the whole story.
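The Instagram trick can be sketched in a few lines. Everything here is assumed for illustration (the function names, the fake network call, the 50 ms delay); the point is simply that the upload promise is started before the caption step and only awaited at the end, so the network time overlaps with the user's editing time.

```typescript
// Sketch of optimistic upload: start the network transfer the moment
// the photo is selected, in parallel with the user editing the post.
async function uploadPhoto(bytes: Uint8Array): Promise<string> {
  // Stand-in for a real network call that returns an upload id.
  return new Promise((resolve) =>
    setTimeout(() => resolve("upload-123"), 50)
  );
}

async function sharePhoto(
  bytes: Uint8Array,
  editCaption: () => Promise<string>
): Promise<{ uploadId: string; caption: string }> {
  // Kick off the upload immediately -- note: no await yet.
  const uploadPromise = uploadPhoto(bytes);
  // The user picks filters and writes a caption while bytes are in flight.
  const caption = await editCaption();
  // By the time they hit "Share", the upload has usually already finished,
  // so this await resolves almost instantly.
  const uploadId = await uploadPromise;
  return { uploadId, caption };
}
```

The measured upload duration is unchanged; only the wait the user actually notices shrinks, which is the whole point of perceived performance.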
User Experience Testing
User experience is even harder to measure than accessibility or performance, as there is no automated tool we can run; we need people to actually use our application. It can be difficult to know what users expect or how they'll use our product, and making assumptions will likely end badly. We can't get a simple pass or fail, so we need our users to try the application, test it and answer our questions. Can they find the information they need? Can they navigate through the application? Can they find the menu? There isn't a tool, or even an AI, that can tell us how people will use our application; we need a human to tell us whether they can use it, whether it's good enough.
One of the biggest hurdles in this kind of qualitative testing is having a working application for people to test, and we believe we've found the solution in one of the most loved features of Linc – preview links. Every single commit against all your environments is available as a unique link to preview your front-end and test with real humans, instantly after the build finishes. It is to front-end development what continuous integration is to back-ends: the ability to verify the quality of a product minutes after finishing a build.