This is an abstract and technical post. In writing this, I would like to question our notion of usability: namely, that our practice applies solely to the user-interface of an application. Usability can be achieved from the “ground up” by designing a more usable application.
Usability isn’t just for your users. Today’s development environments have matured into practical, agile hybrids. By writing tests before developing, developers increase the speed and resourcefulness of the development process. The impact is profound: more robust applications become paragons of usability in themselves, and user-experience professionals reap the benefit.
What is Test Driven Development?
Test-driven development (TDD) is a software development technique that uses short development iterations based on pre-written test cases that define desired improvements or new functions. Each iteration produces code necessary to pass that iteration’s tests. Finally, the programmer or team refactors the code to accommodate changes.
Test-driven development (TDD) is oftentimes linked to agile development. Maybe you’ve heard of them. However odd it may seem, TDD is a rather novel idea as far as software design idioms go. Many larger companies still design software the “old-fashioned” way: business-minded executives hand down the (often dreaded) “requirements document,” and software engineers build the application all on their lonesome. They begin by bootstrapping the application and then adapt it to meet the specifications. Very straightforward. A deadline is set, and by the end the software must meet the requirements. I’m simplifying greatly: generally there are project managers, information architects, stakeholders, and many more people involved in this process. Everyone plays a part, and everyone designs the application in one way or another.
In today’s development environments, while the people remain the same, the process is slightly different. There’s still the business-minded executive. There’s still the software engineer. But today, most software is developed in iterations. And that raises the question: why? In finding the answer, we’ll discover how changing the process increased overall software usability.
Getting up to speed
In the mid-1990s, programmers challenged the idea of building software around a massive requirements document. The idea caught on like wildfire, sparking many variations on a central theme: agile programming. Software engineers decided that the best programming methodology was, in fact, to design software to meet tests (test-driven), and to do so in small iterations (agile). For this reason, these two notions have become nearly inseparable in the software development world.
Testing an application using simple, piecemeal (unit) tests ensures that the application logic works together. The benefits of unit testing far outweigh the upfront cost; what’s more, the process works in a more human way. If the client asks for a particular feature at any given time, a test is drawn up to account for the various functionality required for that feature. Once the software passes the test, the feature “works.”
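To make this concrete, here is a minimal sketch of the test-first cycle in Python. The feature (a slugify() helper that turns page titles into URL slugs) is a hypothetical example of my own, not something from a real client: the tests are written first to pin down the required behavior, and the implementation exists only to make them pass.

```python
import unittest

# Hypothetical client request: page titles must become URL slugs.
# In TDD, the tests below are written first; slugify() is then
# written (and later refactored) until the whole suite passes.
def slugify(title):
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("My First Post"), "my-first-post")

    def test_strips_extra_whitespace(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")

unittest.TextTestRunner(verbosity=2).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify))
```

Once both tests pass, the feature “works” in exactly the sense the client asked for, and any later refactoring that breaks it is caught immediately.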
What this idiom means for our readers
If you’re one of those designer-developers that we hear about all the time in today’s application world, then you should have an even greater interest in this subject. By studying the unit tests, you can learn how the software engineers designed the application to function. Because the application’s core logic is “protected” by tests, a designer can change code and see how it affects the entire application, simply by running the test suite.
The User Interface
In terms of an application’s usability, many people are concerned with the front-end of the application only, and with good reason. For the most part, applications should be designed with the user in mind. It doesn’t make sense to produce an application that only programmers can use, unless, of course, programmers are the intended audience of that application.
While this means that an application is only as usable as it is to the end user, usability doesn’t stop there. Think about that phrase: “only as usable.” An application’s usability starts at the user interface, but that’s not where it ends. A pretty user interface will help make a product usable; in this post, though, we’re going to focus on the usability of the entire application. What does it mean for an entire application to be usable, and how is that different from the usability of its user interface?
While testing the “back-end” (the application’s internal logic) is a fairly straightforward process, testing the “front-end” (the design) has become a rather expensive and laborious one. Why the difference? Alas, I’m getting ahead of myself. Let’s first describe something more idiosyncratic: how an application interfaces with itself, and then how an application interfaces with its users.
Because there’s no hard-and-fast rule for how usable an application’s internals are, we’re going to look for signs of usability. Is the application robust? Does the application respond well (gracefully, one hopes) to unusual circumstances? Something with these characteristics certainly manifests usability. In this case, these ideals will serve as our usability ruler.
The Application Interface
Describing internal application interfaces is very difficult, but there are a number of examples and related topics that help illustrate the idea. Essentially, just as forms have buttons and pages have headers (and these elements have semantic, nay, intrinsic properties), so do the internal components of applications. Each part of an application interacts with and depends on other parts of the application. Indeed, a software engineer parses a page of code much the way our end users parse a page of data; they are just looking for different contextual clues to figure out what’s going on.
An Application Interface provides a common set of functionality for an application. Software engineers look for things like consistency, flexibility, and footprint when they consider how parts of their application will interface with other parts. Because of this, following a convention when designing application interfaces (and application architectures) is just one way to increase their utility. Indeed, there are three “laws of interface design,” as laid out in Ken Pugh’s book Interface-Oriented Design:
Laws of Interface Design
An interface’s implementation shall do what it says it does.
Essentially, we should be able to trust that an application’s functions (methods) do what they say on the box. A method called get_page_header() should return information related to the header of a page, not the footer. A method called get_distance() should return a number and perhaps a unit (centimeters or feet, for example).
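As a sketch of this first law, consider a hypothetical get_distance() in Python (the signature and the default unit are my own invented choices, for illustration): the name promises a distance, and the implementation returns exactly that and nothing else.

```python
import math

# The name promises a distance; the implementation delivers
# exactly that: a magnitude plus its unit, nothing more.
def get_distance(point_a, point_b, unit="cm"):
    """Return the straight-line distance between two 2-D points."""
    dx = point_a[0] - point_b[0]
    dy = point_a[1] - point_b[1]
    return (math.hypot(dx, dy), unit)

magnitude, unit = get_distance((0, 0), (3, 4))
print(magnitude, unit)  # 5.0 cm
```

A caller reading only the name can predict the return value, which is precisely the trust the first law asks for.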
Michael Hunter suggests:
[Method names] should be documented regardless. Conversely, if they need documentation, the name should be improved.
Indeed, the semantics of application interfaces should be clear and to the point.
An Interface Implementation shall do no harm
It should come as no surprise that software designers would like for an application’s basic parts to function as independently as possible. Indeed, interfacing with any given object should not disrupt interfacing with any other object. By way of example, no process should hog resources, monopolize threads, or cause the system to hang. This ensures that employing any given interface in an application is a harmless process from which the application can recover if the response isn’t as expected.
If an Implementation is unable to perform its responsibilities it shall notify its caller
An implementation should report any errors it encounters that it cannot resolve itself. Any implementation should be a self-contained resource of an application that behaves in a robust fashion. By way of example, if an implementation requires a user to be logged into the system, it should notify its caller when the user isn’t, so that unauthenticated users can be redirected to a login page.
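A minimal Python sketch of this law (the error class and function here are hypothetical, invented for illustration): the implementation doesn’t swallow the failure or return a misleading value; it raises an error so its caller can decide what to do, such as redirecting to a login page.

```python
class NotAuthenticatedError(Exception):
    """Raised when an operation requires a logged-in user."""

def get_account_settings(user):
    # The implementation cannot perform its responsibility without
    # an authenticated user, so it notifies its caller.
    if not user or not user.get("logged_in"):
        raise NotAuthenticatedError("user must be logged in")
    return user["settings"]

try:
    get_account_settings({"logged_in": False})
except NotAuthenticatedError:
    print("redirecting to login page")
```

The division of labor matters: the implementation reports the problem, and the caller (which knows about pages and navigation) performs the redirect.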
These may sound like common sense. Everyone expects a program’s methods to adhere to these basic tenets, and yet they need to be restated. Recall that an application’s (internal) interface should be utilitarian: there is little room for design and flair, implementation aside. The utility of an interface is only as good as its ability to respond favorably to a myriad of situations.
Speaking the language of tests
Test-driven development is a way of coding software to meet certain specifications. In general, a test is not something our clients will be able to interpret. For this reason, we introduce the more palatable idea of use cases: statements that convey requirements but are easily interpreted by engineer and layman alike. An example use case might sound like this: a user can sign up for the service, receive a login/activation email in their inbox, activate their account, and log into the system. Notice how this statement does a number of things:
- It focuses on the user
- It defines desired functionality
- It doesn’t describe either application interfaces or user-interfaces
Use cases, while easily digestible by the aforementioned business-minded executives and programmers alike, lack specificity. No matter: each party paying attention to the use case has something to add on their end. Those who seek to know the business implications of the case need only ask themselves: why is this functionality required for the software? Therein lies their answer. The programmer need only ask: how can I make this functionality happen? That answer lies in user interfaces and application interfaces.
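The signup use case above can be turned directly into a runnable test. The SignupService below is a deliberately simplified, hypothetical stand-in (a real system would hash passwords, send actual mail, and so on); the point is that the test exercises the whole use case, focused on the user and the desired functionality, without prescribing any user interface.

```python
import unittest

# Hypothetical, simplified service: enough to express the use case
# "sign up, receive activation email, activate, log in" as a test.
class SignupService:
    def __init__(self):
        self.users = {}
        self.outbox = []  # stands in for a real mail server

    def sign_up(self, email):
        token = "token-for-" + email
        self.users[email] = {"active": False, "token": token}
        self.outbox.append((email, token))  # "sends" the activation email

    def activate(self, email, token):
        user = self.users[email]
        if user["token"] == token:
            user["active"] = True
        return user["active"]

    def log_in(self, email):
        return self.users.get(email, {}).get("active", False)

class TestSignupUseCase(unittest.TestCase):
    def test_full_signup_flow(self):
        service = SignupService()
        service.sign_up("ada@example.com")
        email, token = service.outbox[-1]  # user "receives" the email
        self.assertTrue(service.activate(email, token))
        self.assertTrue(service.log_in("ada@example.com"))

unittest.TextTestRunner(verbosity=2).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSignupUseCase))
```

Nothing in the test says whether signup happens on a web page or over an API; that specificity is exactly what each party fills in afterward.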
Let’s get specific and talk about testing application interfaces. In his book Interface-Oriented Design, Ken Pugh defines what he calls work cases, a form of “internal” use case. Think of them as application-interface use cases. For example: reading a file requires (1) opening the file for reading, (2) reading the file, and (3) closing the file.
Now, to test this, we simply verify the bytes read from the file against a known constant (which is to say, we know the contents of the file and verify against them). This may seem trivially easy, but that is precisely the appeal of unit testing. In contrast to how we test user interfaces, testing the application’s functionality allows little room for interpretation. Whereas a user interface is generally tested with, surprise, users, an application’s interface is tested by a barrage of simple and direct tests. Programmers wishing to increase the robustness of their implementation will write good tests: ones generic enough to cover all the ways their object will interface with the rest of the application. In this way, they can be sure that once an object passes the tests written for it, its interface and architecture are more usable.
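Pugh’s file-reading work case translates almost directly into a unit test. In this hypothetical Python sketch, the test writes a file with known contents and then verifies that read_file() returns exactly those bytes:

```python
import os
import tempfile
import unittest

# The work case from the text, step by step.
def read_file(path):
    handle = open(path, "rb")   # (1) open the file for reading
    try:
        return handle.read()    # (2) read the file
    finally:
        handle.close()          # (3) close the file

class TestReadFileWorkCase(unittest.TestCase):
    def test_returns_known_contents(self):
        # Verify the bytes read against a known constant.
        fd, path = tempfile.mkstemp()
        os.write(fd, b"known contents")
        os.close(fd)
        try:
            self.assertEqual(read_file(path), b"known contents")
        finally:
            os.remove(path)

unittest.TextTestRunner(verbosity=2).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestReadFileWorkCase))
```

There is no interpretation involved: the bytes either match the known constant or they don’t, which is what makes testing the back-end so much cheaper than testing the front-end.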
The best part is that test-driven development has hidden benefits. If a user-experience expert is allowed to test at each iteration in the development process, an application’s usability can be confirmed in a piecemeal fashion. Again, contrasting this with how usability testing has traditionally been done—at the end of a project— this allows for unprecedented flexibility and, indeed, usability of the final product.
There is only a bit more to say on this point. After an application interface passes these tests, a software engineer can be certain that any object implementing the same interface will pass the same suite of tests. In sum, if your test-driven application implements robust interfaces, the entire application works like a well-oiled machine. Your users benefit from internals that keep working, and so do your developers: a win for both parties, which is rare indeed.
Defining your experience, all together now.
In conclusion, application usability should not be gauged on the user interface alone. All too often, the general public will “judge a book by its cover” and write applications off as good or bad based simply on the UI. But as members of the community, we’re above that. By scratching beneath the surface and looking at all of the parts working together in an application, we’re able to improve the process at each level. Each part of an application is responsible for a discrete role, but the application depends on all of them at once to make its utility known.
Test-driven development affords your programmers (who use the application’s interface) the same peace of mind as your users (who use the user interface). A great UI makes a user feel like they’ve used the application a million times, even on their second visit. Good UIs conform to mental maps, catch errors efficiently, and generally elevate the user experience. With test-driven development, the objective is to do the same for software engineers, and in this way make the application more usable to everyone involved.
If the subject is as fascinating to you as it is to me, I suggest you peruse the following list of related books.
- Head First Design Patterns
- Design Patterns: Elements of Reusable Object-Oriented Software
- Agile Retrospectives: Making Good Teams Great
- Getting Real
- Interface Oriented Design: With Patterns (Pragmatic Programmers)
- Gojko Adzic – Effective user interface testing
- Building bug-free O-O software:…
- Getting Creative With Specs: Usable Software Specifications
- Painless Functional Specifications – Part 1: Why Bother?
- Design and Development of an Issue Tracker, by Garret Dimon