Automated Testing

Lesson 1: The Whys and Hows of Automated Testing
The three laws of test-driven development.
Problems in test-driven development.

This section is co-authored by Denis Petelin and Prof. Callahan.

  1. Why do we test? We test to see whether the software does what we want it to do.
    And when it doesn't, what are the effects?
  2. Classic way to do testing: test cases, bugs, regression, growing "regression debt".
    Please study regression testing.
  3. Revolutionary idea: the zero-bug mindset: bugs are not tasks in the backlog -- you fix them as you go, so that by the end of the day zero bugs exist.
    This is part of the "incremental improvement" idea: if we do continuous integration and continuous delivery with small batches, each delivery into production should produce at most a "small batch" of bugs! And we can fix that small batch immediately.
  4. Revolutionary idea: zero-length feedback, developers can test and fix bugs immediately.
    Part of this is establishing a culture of trust: if every change must be approved through multiple levels of bureaucracy, we can't fix bugs right away. And we can relate this back to the discussion in chapter one on "Taylorism" versus the "Toyota Production System": in the former, the "workers" just carry out the plans of the "managers." In the latter, everyone is responsible for the entire production process.
  5. Test pyramid:
    1. Unit tests for individual classes and methods.
    2. Integration tests to check a feature top-down: we test larger units to see whether the components work together.
    3. Acceptance tests to check a feature as the user sees it; these are typically performed by the users.
  6. Terminology:
    1. TestCase: a set of checks to be performed.
    2. Fixture: prepared data to be loaded into the database.
    3. Fake: a real object created for the test, but filled with fake data.
    4. Stub: a crude imitation of a real object that returns hard-coded values.
    5. Mock: an elegant imitation of an object, used when the real object is not yet ready or is expensive; a stub/mock sketch appears after this outline.
    6. Test suite: a set of tests serving a specific purpose.
  7. Django-specific testing:
    1. No need to unit-test Models (except custom query sets and business-logic methods).
    2. No need to integration-test autogenerated Views (except live tests).
  8. Anatomy of test case:
    1. setUp()
    2. test_method_name
    3. Arrange the system to get it ready; Act to exercise the behavior you are interested in; Assert that the expected condition holds. (A complete example test case appears after this outline.)
    4. Assert kinds (method: the condition it checks):
      • assertEqual(a, b): a == b
      • assertNotEqual(a, b): a != b
      • assertTrue(x): bool(x) is True
      • assertFalse(x): bool(x) is False
      • assertIs(a, b): a is b
      • assertIsNot(a, b): a is not b
      • assertIsNone(x): x is None
      • assertIsNotNone(x): x is not None
      • assertIn(a, b): a in b
      • assertNotIn(a, b): a not in b
      • assertIsInstance(a, b): isinstance(a, b)
      • assertNotIsInstance(a, b): not isinstance(a, b)
    5. tearDown()
  9. Preparing data -- AutoFixture
    1. TaskModelTransactionTestCase(TransactionTestCase): a regular fixture.
    2. For the lazy -- AutoFixture :)
    3. fixture.create() generates the records; an AutoFixture sketch appears after this outline.
  10. Typical mistakes:
    1. Useless tests -- testing default Model methods, for example.
    2. Testing the implementation -- "save_changes() returned OK, so everything is OK!" The test should instead check that the changes were actually persisted.
    3. We don't want large tests. Large tests usually signal fat controllers!
    4. Refactoring:
      1. Small methods -- less than a screen.
      2. Small tests -- 8-10 lines.
      3. The refactoring palette in PyCharm.
  11. A good pattern for beginners:
    1. Create the Model. Add tests if there are custom methods.
    2. Create the Controller (View, as Django calls it). Test that it does what it should do. Test that it handles errors.
    3. Write the View (Template, as Django calls it). Write a LiveServerTestCase based on the requirements.
    4. Why preparing requirements still matters ("Please show balance" at Danfoss).
  12. Big idea: can we somehow make requirements document testable?
    1. Turning use cases into tests -- Gherkin.
    2. Feature file & steps; a behave sketch appears after this outline.
    3. Passing info around -- the context object.
    4. Selenium -- driving a real browser around.
    5. The Behave test runner (behave-django).
    6. JIRA: acceptance tests are now part of the ticket.
    7. Relying strictly on this type of testing is a bad idea! (Compare the execution time of one such test with that of the whole unit-test suite!)
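
To make the stub/mock terminology in item 6 concrete, here is a minimal sketch using Python's unittest.mock; the gateway object and its charge method are invented for the example:

    from unittest.mock import Mock

    # Stub: returns hard-coded values; we do not care how it is called.
    gateway_stub = Mock()
    gateway_stub.charge.return_value = "OK"
    assert gateway_stub.charge(100) == "OK"

    # Mock: we also verify *how* the object was used.
    gateway_mock = Mock()
    gateway_mock.charge(100)
    gateway_mock.charge.assert_called_once_with(100)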
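
The anatomy in item 8 as one runnable test case; the Wallet class is invented for illustration:

    import unittest

    class Wallet:
        def __init__(self, balance=0):
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount

    class WalletTestCase(unittest.TestCase):
        def setUp(self):
            self.wallet = Wallet(balance=10)            # Arrange

        def test_deposit_increases_balance(self):
            self.wallet.deposit(5)                      # Act
            self.assertEqual(self.wallet.balance, 15)   # Assert

        def tearDown(self):
            self.wallet = None   # nothing real to clean up in this toy example

    if __name__ == "__main__":
        unittest.main()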
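
For item 9, a sketch assuming the django-autofixture package and a hypothetical Task model in a tasks app:

    from django.test import TransactionTestCase

    from autofixture import AutoFixture
    from tasks.models import Task   # hypothetical app and model

    class TaskModelTransactionTestCase(TransactionTestCase):
        def setUp(self):
            # AutoFixture fills every model field with generated data
            self.tasks = AutoFixture(Task).create(10)   # ten Task rows

        def test_ten_tasks_created(self):
            self.assertEqual(Task.objects.count(), 10)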
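
And for item 12, a minimal Gherkin feature with matching behave steps; the feature wording and step bodies are invented, and a real acceptance test would drive the browser through Selenium in the "when" step:

    # features/balance.feature
    Feature: Show balance
      Scenario: User views their balance
        Given a user with a balance of 100
        When the user opens the balance page
        Then the page shows "100"

    # features/steps/balance_steps.py
    from behave import given, then, when

    @given("a user with a balance of {amount:d}")
    def step_create_user(context, amount):
        context.balance = amount   # the context object passes info between steps

    @when("the user opens the balance page")
    def step_open_page(context):
        context.shown = str(context.balance)

    @then('the page shows "{expected}"')
    def step_check_balance(context, expected):
        assert context.shown == expected
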
Lesson 2: Testing Frameworks

The typical way a test framework works, sketched here as minimal runnable Python:

    def run_tests(test_class):
        # Run every method whose name starts with "test"
        for name in [m for m in dir(test_class) if m.startswith("test")]:
            instance = test_class()
            instance.setUp()
            try:
                getattr(instance, name)()   # run the test
            except AssertionError as err:
                raise SystemExit(f"FAIL: {name}: {err}")   # error message, nonzero exit
            finally:
                instance.tearDown()   # runs whether the test passed or failed
        print("All tests passed.")
                 

Python testing with pytest! Part 1: Introductions and motivating testing.
Python testing with pytest! Part 2

unittest documentation

Our Test Implementation

Some add-on packages we use:

  • coverage
  • nose
  • ddt
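
Of these, ddt is the least self-explanatory: it turns one test method into several data-driven runs. Here is a minimal sketch; the class name and data values are invented for the example:

    import unittest

    from ddt import data, ddt, unpack

    @ddt
    class AdditionTestCase(unittest.TestCase):
        # Each tuple becomes its own test run: (a, b, expected)
        @data((1, 2, 3), (2, 2, 4), (-1, 1, 0))
        @unpack
        def test_add(self, a, b, expected):
            self.assertEqual(a + b, expected)

    if __name__ == "__main__":
        unittest.main()

coverage, by contrast, runs from the command line: coverage run -m pytest, then coverage report.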
Other Material
Quiz

    The steps in a test are:

    1. Arrange - Act - Assert
    2. Argue - Assert - Abduct
    3. Assign - Abridge - Adjourn
    4. Arbitrate - Adjudicate - Abdicate

    One of Bob Martin's rules of TDD is...?

    1. Only write enough production code to make a test pass.
    2. Write all of your production code before any tests, since tests are less important
    3. Write a complete test suite before you write a line of production code
    4. All of the above

    Testing Python code is aided by a package called...?

    1. pylib
    2. numpy
    3. scipy
    4. pytest

    A fixture is...?

    1. a set of checks to be performed
    2. an imitation of an object returning hard-coded values
    3. a test object with fake data
    4. prepared data to be loaded into the database

    One of Bob Martin's rules of TDD is...?

    1. Always code for a day before writing a test
    2. Write a test that passes before you write one that fails
    3. Before you write any production code, write a failing test for that (planned) code.
    4. All of the above

    The "zero-bug mindset" means that...?

    1. no developers should be bugging out
    2. no bug backlog should ever build up
    3. no one should ever make a coding mistake
    4. we should ridicule anyone who introduces a bug

    The "setup" portion of a test...?

    1. sets up the data our tests will use
    2. sets up the logical assertions the test will use
    3. sets up the test suite for failure
    4. sets up the user for a big surprise

    To enable zero-length feedback, we must have a culture of...?

    1. multiple levels of approval for each change
    2. suspicion
    3. waterfall model development
    4. trust

    'zero-length feedback' means...?

    1. developers can test and fix bugs immediately
    2. the length of a CI/CD pipeline should be 0
    3. all tests should take 0 time
    4. all of the above

    The Test Pyramid consists of...?

    1. Unit tests
    2. Integration tests
    3. Acceptance tests
    4. all of the above