Defensive Programming
When we program, we inevitably overlook certain cases and introduce
potential errors that may manifest only for certain inputs.
By some estimates, even mature software commonly contains on the order
of one bug per thousand lines of code.
Defensive programming is a term for a collection of techniques
that reduce the chances of errors (also called bugs) making it
into the program:
- Write specifications for functions
- Modularize programs
- Check conditions on inputs and outputs (assertions)
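The three techniques above can appear together in a single function. A minimal sketch (the function `mean` and its spec are illustrative, not from the notes): a docstring specification, a small self-contained function, and assertions checking the input and output conditions.

```python
def mean(values):
    """Assumes values is a non-empty list of numbers.
    Returns the arithmetic mean of values."""
    # Assertion on the input condition stated in the specification
    assert len(values) > 0, "values must be non-empty"
    result = sum(values) / len(values)
    # Assertion on an output condition: the mean lies between min and max
    assert min(values) <= result <= max(values)
    return result
```

If a caller violates the spec (e.g. passes an empty list), the assertion fails immediately at the call site instead of letting a bad value propagate deeper into the program.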
Two very common methods for defensive programming are:
- Testing / Validation
- Compare input/output pairs to specification. Some common
sentiments during this process:
- "How can I break my program?"
- "It is not working!"
- Debugging
- Study events leading up to an error:
- "Why is it not working?"
- "How can I fix my program?"
Set yourself up for easy testing and debugging
- design code from the start to ease testing and debugging
- break up the program into modules that can be tested and debugged
individually
- document constraints on modules
- what do you expect the input to be?
- what do you expect the output to be?
- document assumptions behind code design. e.g., the input must be a tuple of tuples.
When are you ready to test?
- ensure code runs
- remove syntax errors
- remove static semantic errors
- Python interpreter can usually find these issues for you
- have a set of expected results
- an input set
- for each input, the expected output
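A set of expected results can be kept as (input, expected output) pairs and checked in a loop. A sketch using a toy function `is_even` (hypothetical, for illustration only):

```python
def is_even(n):
    """Assumes n is an int. Returns True if n is even, else False."""
    return n % 2 == 0

# The expected results: each input paired with its expected output
test_cases = [(0, True), (1, False), (2, True), (-3, False)]

for arg, expected in test_cases:
    actual = is_even(arg)
    assert actual == expected, f"is_even({arg}) returned {actual}, expected {expected}"
print("all tests passed")
```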
Classes of tests:
- Unit testing
- validate each piece of program
- testing each function separately
- Regression testing
- add tests for bugs as you find them
- catch reintroduced errors that were previously found
- Integration testing
- does overall program work?
- programmers tend to rush to this step; do unit and regression testing first
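Unit and regression tests can be sketched with Python's built-in unittest module. Here the function `absolute` and the bug its regression test guards against are hypothetical, chosen only to illustrate the pattern:

```python
import unittest

def absolute(x):
    """Assumes x is a number. Returns the absolute value of x."""
    return x if x >= 0 else -x

class TestAbsolute(unittest.TestCase):
    # Unit tests: validate this one function in isolation
    def test_positive(self):
        self.assertEqual(absolute(3), 3)

    def test_negative(self):
        self.assertEqual(absolute(-3), 3)

    # Regression test: added after a (hypothetical) bug report about
    # absolute(0); kept in the suite so the bug cannot be reintroduced
    def test_zero(self):
        self.assertEqual(absolute(0), 0)

# Run the suite programmatically (unittest.main() would exit the process)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAbsolute)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Integration testing would then exercise the whole program that uses `absolute`, not the function alone.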
Testing approaches:
Blackbox testing
def sqrt(x, eps):
    """Assumes x and eps are non-negative floats.
    Returns res such that x - eps <= res*res <= x + eps."""
- designed without looking at the code
- tests can be reused if the implementation changes
  - glassbox tests can be reused too, but blackbox tests, being derived
    from the specification alone, carry no bias from any particular
    implementation
- paths through specification (not implementation):
- build testcases in different input space partitions based on the specification
- consider boundary conditions, e.g., empty lists, singleton lists, large numbers, small numbers, ...
- Examples:
Case                   | x             | eps
-----------------------|---------------|--------------
boundary               | 0             | 0.00001
perfect square         | 25            | 0.00001
less than 1            | 0.05          | 0.00001
irrational square root | 2             | 0.00001
extremes               | 2             | 1.0/2.0**64.0
extremes               | 1.0/2.0**64.0 | 1.0/2.0**64.0
extremes               | 2.0**64.0     | 1.0/2.0**64.0
extremes               | 1.0/2.0**64.0 | 2.0**64.0
extremes               | 2.0**64.0     | 2.0**64.0
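The non-extreme rows of the table can be run directly against any implementation of the spec. Below is one possible implementation (bisection search; an assumption, since the spec does not dictate one), checked against the first four partitions:

```python
def sqrt(x, eps):
    """Assumes x and eps are non-negative floats.
    Returns res such that x - eps <= res*res <= x + eps."""
    # Bisection search: narrow [low, high] until res*res is within eps of x
    low, high = 0.0, max(x, 1.0)
    res = (low + high) / 2.0
    while not (x - eps <= res * res <= x + eps):
        if res * res < x:
            low = res
        else:
            high = res
        res = (low + high) / 2.0
    return res

# Blackbox test cases drawn from the partitions in the table above
for x, eps in [(0, 0.00001), (25, 0.00001), (0.05, 0.00001), (2, 0.00001)]:
    res = sqrt(x, eps)
    assert x - eps <= res * res <= x + eps
```

Note that some extreme rows (e.g. x = 2 with eps = 1.0/2.0**64.0) can make this naive bisection loop forever, because no floating-point res satisfies so tight a spec; exposing exactly that kind of weakness is what the boundary cases are for.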