Automated Testing
Lesson 1: The Whys and Hows of Automated Testing
This section is co-authored by Denis Petelin and Prof. Callahan.
Why do we test?
We test to see if the software does what we want it to do.
And whenever it doesn't, what are the effects?
The classic way of testing: test cases, bugs, regression, and a growing "regression debt".
Please study regression testing.
Revolutionary idea: the zero-bug mindset. Bugs are not tasks in the backlog; you fix them as you go, so that by the end of the day zero bugs exist.
This is part of the "incremental improvement" idea: if we do continuous integration and continuous delivery with small batches, each delivery into production should produce at most a "small batch" of bugs! And we can fix that small batch immediately.
Revolutionary idea: zero-length feedback. Developers can test and fix bugs immediately.
Part of this is establishing a culture of trust: if we need to feed approval for each change through multiple levels of bureaucracy, we can't fix bugs right away. And we can relate this back to the discussion in chapter one on "Taylorism" versus the "Toyota Production System": in the former, the "workers" just carry out the plans of the "managers"; in the latter, everyone is responsible for the entire production process.
- Test pyramid (a short sketch follows this list):
- Unit tests for individual classes and methods
- Integration tests to check a feature top-down: we test larger units to see whether the components work together.
- Acceptance tests to check the feature as the user sees it. These are typically performed by the users.
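To make the two lower layers concrete, here is a minimal sketch in Django's test framework. It assumes a hypothetical tasks app with a Task model that has an is_overdue() method, and a URL named "task-list"; only the shape of the tests matters.

from datetime import date, timedelta

from django.test import TestCase
from django.urls import reverse

from tasks.models import Task  # hypothetical app and model


class TaskUnitTest(TestCase):
    """Unit level: one method of one class, no HTTP involved."""

    def test_overdue_task_is_detected(self):
        task = Task(due_date=date.today() - timedelta(days=1))
        self.assertTrue(task.is_overdue())


class TaskListIntegrationTest(TestCase):
    """Integration level: URL routing, view, and template working together."""

    def test_task_list_page_renders(self):
        response = self.client.get(reverse("task-list"))
        self.assertEqual(response.status_code, 200)

The unit test never touches HTTP; the integration test exercises the URL configuration, the view, and the template together through Django's test client.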
- Terminology:
- TestCase: a set of checks to be performed.
- Fixture: prepared data to be loaded into the database.
- Fake: an actual working object created for the test, but with fake data.
- Stub: a crude imitation of a real object that returns hard-coded values.
- Mock: an elegant imitation of the object, used when the real object is not yet ready or is expensive (see the sketch after this list).
- Test suite: a set of tests serving a specific purpose.
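The stub/mock distinction is easiest to see in code. A small sketch using the standard library's unittest.mock; the payment gateway and its charge() method are hypothetical.

from unittest import TestCase
from unittest.mock import Mock


class CheckoutTest(TestCase):
    def test_charge_is_called_once_with_total(self):
        # Stub-style use: return a hard-coded value so the test can proceed.
        gateway = Mock()
        gateway.charge.return_value = "OK"

        result = gateway.charge(100)

        # Mock-style use: assert how the collaborator was called.
        self.assertEqual(result, "OK")
        gateway.charge.assert_called_once_with(100)

Used only for its canned return value, the Mock object acts as a stub; once we assert on how it was called, we are using it as a mock.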
- Django-specific testing:
- No need to unit-test Models (except custom query sets & business logic methods).
- No need to integration-test autogenerated Views (except live tests).
- Anatomy of a test case (a worked example follows the assert list below):
- setUp()
- test_method_name
- Arrange the system to get it ready; Act in a way to test the behavior you are interested in; Assert the condition that should hold.
- Assert methods (from unittest) and what each checks:
- assertEqual(a, b): a == b
- assertNotEqual(a, b): a != b
- assertTrue(x): bool(x) is True
- assertFalse(x): bool(x) is False
- assertIs(a, b): a is b
- assertIsNot(a, b): a is not b
- assertIsNone(x): x is None
- assertIsNotNone(x): x is not None
- assertIn(a, b): a in b
- assertNotIn(a, b): a not in b
- assertIsInstance(a, b): isinstance(a, b)
- assertNotIsInstance(a, b): not isinstance(a, b)
- tearDown()
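Putting the pieces together, here is a minimal sketch of the anatomy above. The Task model, its complete() method, and its done and completed_at fields are hypothetical.

from django.test import TestCase

from tasks.models import Task  # hypothetical


class TaskCompletionTest(TestCase):
    def setUp(self):
        # Arrange: runs before every test method.
        self.task = Task.objects.create(title="Write lecture notes")

    def test_complete_marks_task_done(self):
        # Act
        self.task.complete()

        # Assert
        self.task.refresh_from_db()
        self.assertTrue(self.task.done)
        self.assertIsNotNone(self.task.completed_at)

    def tearDown(self):
        # Django rolls back the test database after each test, so tearDown
        # is often empty; it is the place for any extra cleanup you do need.
        pass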
- Preparing data -- AutoFixture
- TaskModelTransactionTestCase(TransactionTestCase): regular fixture.
- For lazy guys -- AutoFixture :)
- fixture.create() -- see the sketch below.
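A sketch of what that can look like, assuming the django-autofixture package and the same hypothetical Task model (check the package docs for the exact API).

from autofixture import AutoFixture
from django.test import TransactionTestCase

from tasks.models import Task  # hypothetical


class TaskModelTransactionTestCase(TransactionTestCase):
    def setUp(self):
        # Generate ten Task rows with plausible random field values.
        self.fixture = AutoFixture(Task)
        self.tasks = self.fixture.create(10)

    def test_fixture_created_ten_tasks(self):
        self.assertEqual(Task.objects.count(), 10)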
- Typical mistakes:
- Useless tests -- testing default Model methods, for example.
- Testing the implementation -- the method save_changes() returned OK, so everything must be OK! (The test should instead check whether the changes were indeed persisted; see the sketch after this list.)
- We don't want large tests. Large tests usually mean fat controllers!
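A sketch of the difference, with a hypothetical save_changes() helper: instead of trusting its return value, the test reloads the row from the database and checks the behaviour that matters.

from django.test import TestCase

from tasks.models import Task  # hypothetical


class SaveChangesTest(TestCase):
    def test_rename_is_persisted(self):
        task = Task.objects.create(title="Old title")

        task.title = "New title"
        task.save_changes()  # hypothetical helper under test

        # Reload from the database rather than trusting the return value.
        reloaded = Task.objects.get(pk=task.pk)
        self.assertEqual(reloaded.title, "New title")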
- Refactoring:
- Small methods -- less than a screen.
- Small tests -- 8-10 lines.
- Refactoring palette in PyCharm.
- Good beginner's pattern:
- Create the Model. Add tests if there are custom methods.
- Create the Controller (View, as Django calls it). Test whether it does what it should do. Test whether it handles errors.
- Write the View (Template, as Django calls it). Write a LiveTestCase based on the requirements (sketched below).
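Step three can look roughly like this. The sketch assumes Selenium is installed with a local Firefox driver and that the project serves a task list at "/tasks/"; both are assumptions, not part of the lesson material.

from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium import webdriver


class TaskListLiveTest(StaticLiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.browser = webdriver.Firefox()

    @classmethod
    def tearDownClass(cls):
        cls.browser.quit()
        super().tearDownClass()

    def test_page_shows_heading(self):
        # Drive a real browser against the live test server.
        self.browser.get(self.live_server_url + "/tasks/")
        self.assertIn("Tasks", self.browser.page_source)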
- Why preparing requirements still matters ("Please show balance" at Danfoss).
- Big idea: can we somehow make the requirements document testable?
- Turning use cases into tests -- Gherkin.
- Feature file & steps (a small example follows this list).
- Passing info around -- the context object.
- Selenium -- driving a real browser around.
- Behave test runner (behave-django).
- JIRA: acceptance tests are now part of the
- Relying strictly on this type of testing is a bad idea! (Compare the execution time of one such test with that of the whole unit-test suite!)
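A tiny end-to-end sketch of the Gherkin/behave approach. The file names and feature text are illustrative; the step definitions assume the hypothetical Task model, a Selenium driver stored on context.browser in environment.py, and behave-django providing the live-server URL as context.base_url.

tasks.feature (Gherkin):

Feature: Task list
  Scenario: A user sees an existing task
    Given a task titled "Buy milk" exists
    When I open the task list page
    Then I should see "Buy milk"

features/steps/task_steps.py:

from behave import given, when, then

from tasks.models import Task  # hypothetical


@given('a task titled "{title}" exists')
def step_task_exists(context, title):
    Task.objects.create(title=title)


@when('I open the task list page')
def step_open_task_list(context):
    # context.browser is a Selenium driver created in environment.py;
    # context.base_url is assumed to come from behave-django's live server.
    context.browser.get(context.base_url + "/tasks/")


@then('I should see "{text}"')
def step_should_see(context, text):
    assert text in context.browser.page_source

This also illustrates the warning above: every scenario drives a real browser, so a behave suite runs far more slowly than the unit tests.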
Lesson 2: Testing Frameworks
The typical way a test framework works, in pseudo-code:
for every test in test_class:
    test_class.setUp()
    success = run test
    if not success:
        exit with error message
    test_class.tearDown()
exit with success message
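A concrete, runnable counterpart to the loop above, using the standard library's unittest module, which wraps this setUp / run / tearDown cycle around every test method.

import unittest


class ExampleTest(unittest.TestCase):
    def setUp(self):
        self.numbers = [1, 2, 3]

    def test_sum(self):
        self.assertEqual(sum(self.numbers), 6)

    def tearDown(self):
        self.numbers = None


if __name__ == "__main__":
    # unittest discovers every test_* method, runs setUp/tearDown around each,
    # and reports failure or success when the run finishes.
    unittest.main()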
Our Test Implementation
Some add-on packages we use:
- coverage
- nose
- ddt (data-driven tests; see the sketch below)
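For example, ddt lets one test method generate several test cases. A minimal sketch follows (the is_palindrome() helper is just an illustration); coverage is then typically run with "coverage run manage.py test" followed by "coverage report".

import unittest

from ddt import ddt, data, unpack


def is_palindrome(text):
    return text == text[::-1]


@ddt
class PalindromeTest(unittest.TestCase):
    # Each tuple becomes its own test case; @unpack splits it into arguments.
    @data(("level", True), ("radar", True), ("python", False))
    @unpack
    def test_is_palindrome(self, word, expected):
        self.assertEqual(is_palindrome(word), expected)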