Test-Driven Research: Software development and research have a lot in common.
In this blog, Professor James Brusey discusses how designing and deploying energy research projects in the humanitarian setting could benefit from test-driven research.
Software development and research have a lot in common. For example, when a piece of software has a bug, the programmer might form a hypothesis about why the problem is occurring. She might guess that an array is being indexed beyond its boundaries and that this is corrupting another variable that should not otherwise have changed value. This leads to a question: is the array being indexed in that way? She might add some code to check that the index is within bounds, finally running the code as an experiment or test.
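In Python, for instance, a stray negative index will silently wrap around and overwrite the wrong element, so her check might look something like this (a minimal sketch; the function and names are invented for illustration):

```python
def update_reading(readings, index, value):
    # Hypothesis: a negative `index` sometimes slips through, and
    # Python's index wrapping silently overwrites the wrong element.
    # This check turns the suspected bug into an immediate failure.
    assert 0 <= index < len(readings), f"index {index} out of bounds"
    readings[index] = value
```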
Not all programmers work this way. Some, when faced with difficult bugs, will shuffle bits of code around aimlessly until the problem disappears. These are the same guys who insist that their program was working before, that they didn’t change anything, and that it must be someone else’s fault! The worst part of their approach is that it is terribly inefficient. So good programmers really appreciate the idea of having a hypothesis, posing a good question, and trying to answer that question with an appropriate experiment.
It becomes clear that research is a lot like software development. This, then, leads to a possible insight: perhaps some modern software techniques might be used to improve the way we do research? One modern technique that jumps to mind here is test-driven development.
Before I tell you more about what that is and why it might apply, I need to explain some potential problems with research – particularly research that involves humans.
Engineering researchers love to optimise systems. Open up any modern journal of engineering research and be prepared to see some tricky maths involving optimisation of some unsuspecting heating, cooling, water-pumping, or whatevering system. The optimisation is usually in the sense that the resulting system is more efficient: it uses less energy for the same output. However, sometimes human action affects the result.
Let’s take a simple example: imagine you have a system to share energy between a small group of families that use a solar photovoltaic microgrid. Energy comes in from the sun and goes out to their TVs, lightbulbs, karaoke machines, and so forth. Some families use more than others, and it is desirable to minimise waste, so the microgrid is set to give a bigger allocation to the more demanding users. If they used a lot today, then their allocation for tomorrow will be proportionally larger.
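As a rough sketch of that allocation rule (my own illustration, not HEED’s actual control scheme; the names and the exact rule are invented):

```python
def next_day_allocations(usage_today, total_energy):
    """Share tomorrow's energy in proportion to today's usage."""
    total_usage = sum(usage_today.values())
    if total_usage == 0:
        # Nobody used anything: fall back to equal shares.
        share = total_energy / len(usage_today)
        return {family: share for family in usage_today}
    return {family: total_energy * used / total_usage
            for family, used in usage_today.items()}

# The heavier user today gets the bigger allocation tomorrow.
print(next_day_allocations({"A": 2.0, "B": 6.0}, total_energy=8.0))
# -> {'A': 2.0, 'B': 6.0}
```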
Humans being humans, what does this really mean? It probably means that some will adapt. The party on Saturday is coming up. The karaoke machine will need to be turned up loud. The fridge will need to be really cold for the beers. The lights will be on all night.
They’ll prepare, counterintuitively, by using as much electricity as possible on the preceding days to bump up their quota for the days that follow. They’ll leave the TV on through Thursday and Friday so that they have enough electricity for the party. It’s not really their fault. The mistake was in the optimisation, which rewarded sub-optimal behaviour. Could this situation be avoided? Could we have predicted that this type of behaviour might occur? If so, could that not be taken into account when optimising?
Once humans are involved in any system (and few systems are without a human element), correctly optimising it becomes a harder job. In some cases, we can guess, ahead of time, what the likely human decisions will be. A perfect model of their decision making may not be possible, but a reasonable guess will at least be better than not accounting for the human in the loop at all.
So this brings me back to test-driven development.
Test-driven development starts with the idea of unit-testing – essentially, where each component of software has an associated set of test cases that probe for flaws. Do we have a function that involves an array? A test might try to trick the function into accessing an area outside the array’s boundaries. Does it complain? If not, there’s a bug!
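Here’s what such a test might look like using Python’s built-in unittest (get_item is a hypothetical function, written here just to have something to test):

```python
import unittest

def get_item(items, index):
    # A well-behaved function rejects out-of-range indices rather
    # than quietly returning the wrong thing.
    if not 0 <= index < len(items):
        raise IndexError(f"index {index} out of range")
    return items[index]

class TestGetItem(unittest.TestCase):
    def test_rejects_index_past_the_end(self):
        # Try to trick the function into reading past the boundary.
        with self.assertRaises(IndexError):
            get_item([1, 2, 3], 5)

    def test_rejects_negative_index(self):
        # Plain Python indexing would silently wrap a negative index.
        with self.assertRaises(IndexError):
            get_item([1, 2, 3], -1)

if __name__ == "__main__":
    unittest.main()
```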
Test-driven development (TDD) takes this idea and puts the unit-tests first. Before we even begin to devise the function, we should first prepare some test cases. It’s a great idea because it helps us formulate what we are trying to do. For example, if I want to write a function that returns all the relatives for a person given a series of parent-child relationships, I can write test cases that do things like the following (sketched in code after the list):
- try giving no parent-child relationships; it should give me no relatives
- try giving some parent-child relationships that include the person; it should give me all those relatives
- try giving some parent-child relationships for a completely unrelated family; it shouldn’t include those in its result
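Written out in code, and written before relatives_of itself exists (which is the whole point), those three test cases might look like this; the function name and the (parent, child) pair format are my assumptions:

```python
import unittest

# In TDD these tests come first: relatives_of doesn't exist yet.
# Assumed interface: relationships arrive as (parent, child) pairs
# and the function returns a set of names.
from family import relatives_of  # hypothetical, not-yet-written module

class TestRelativesOf(unittest.TestCase):
    def test_no_relationships_means_no_relatives(self):
        self.assertEqual(relatives_of("ann", []), set())

    def test_includes_all_relatives_of_the_person(self):
        rels = [("ann", "bob"), ("bob", "carol")]
        self.assertEqual(relatives_of("ann", rels), {"bob", "carol"})

    def test_excludes_a_completely_unrelated_family(self):
        rels = [("ann", "bob"), ("xena", "yuri")]
        self.assertEqual(relatives_of("ann", rels), {"bob"})
```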
TDD has one final great idea to offer us. Whenever there is a bug, we can start our debugging by coding a specific test case that shows that the bug exists. For example, suppose we find that cousins once removed are not included as relations. We code a test case that flags the problem, and when the code is fixed, that test case reports back that all is now OK. It’s a great idea because if, by some chance, the proposed fix doesn’t work, we find out immediately.
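Continuing the sketch above, the bug-first test could be one more method added to TestRelativesOf:

```python
    def test_includes_cousins_once_removed(self):
        # Written first to demonstrate the bug: it fails until the
        # fix lands, then guards against the bug ever coming back.
        rels = [
            ("grandma", "mum"), ("grandma", "aunt"),
            ("mum", "me"), ("aunt", "cousin"),
            ("cousin", "cousins_kid"),  # my cousin once removed
        ]
        self.assertIn("cousins_kid", relatives_of("me", rels))
```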
TDR, or Test-Driven Research, takes TDD into the research world. To continue the microgrid example, the test cases are based on our understanding of the known behaviour of humans. Perhaps this is based on experience, perhaps on case studies. We start with these test cases and then form a model of the expected behaviour of the people. Does the model do what we think it should? This is especially important when a large team are collaborating on the research. We might not all be able to agree on exactly how the model should be designed, but we can all contribute to a discussion about what such a model should and should not do.
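To make that concrete, here is the sort of test case TDR might start from for the microgrid model. Everything here (HouseholdModel, its interface, and the specific behaviour encoded) is an illustrative assumption, not a HEED result:

```python
def test_model_predicts_pre_party_stockpiling():
    # HouseholdModel is hypothetical. The expected behaviour comes
    # from experience or case studies: a household that knows
    # Saturday's party is coming ramps up its usage on Thursday and
    # Friday to inflate its quota under a proportional allocation.
    model = HouseholdModel(allocation_rule="proportional-to-usage")
    week = model.simulate_week(high_demand_day="saturday")
    assert week["thursday"] > week["monday"]
    assert week["friday"] > week["monday"]
```

If the model fails a test like this, the disagreement is about something concrete that the whole team can discuss.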
If all our test cases pass, it doesn’t mean that our model is perfect or that it will always be right. Modelling human behaviour is not an exact science, as much as we’d like it to be. However, good optimisation of human-in-the-loop systems depends on it, and so it is important to use techniques like TDR to make these models as accurate as possible.
Within the HEED project, the opportunity to test out microgrid management systems with real users is naturally very limited. In fact, as researchers, we have an ethical responsibility not to expose human participants to any form of risk that such a trial might produce. It’s conceivable, though, that we might find clues to behaviour without trialling. In fact, the idea of Test-Driven Research should guide us to what clues we really need.
James Brusey is Professor of Computer Science with the Centre for Data Science at Coventry University and theme lead for AI for the Internet of Things. His current research is in machine learning, reinforcement learning, and applied wireless networked sensing.