Test Driven Development using the AIMMSUnitTest Library

This article discusses some elements of the popular software development methodology Test Driven Development (TDD) in relation to the AIMMS library AIMMSUnitTest.

  1. Gather requirements from stakeholders

    • What are small, representative examples?

    • What are the edge cases, including error cases and performance requirements?

  2. Implement requirements as unit tests and see them fail!

  3. Development

    1. Write the code for the functions

    2. Execute tests and see them pass!

    3. Refactor until performance is acceptable.

  4. Repeat

Unit tests in AIMMS projects

Prepare your AIMMS project for declaring unit tests by following the steps below.

  1. Add the repository library AIMMSUnitTest to your project

  2. Create a new library that holds the actual code (in this example, the library with prefix ml)

  3. Create another new library containing the tests on that code (in this example, the library with prefix tml)

  4. Make sure all tests can be run, for instance by specifying MainExecution as shown below:

    Procedure MainExecution {
       Body: {
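          ! Request that all test suites be run, then start the test runner of the
          ! AIMMSUnitTest library; the results are written to log/AimmsUnit.xml.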
          EnvironmentSetString("aimmsunit::RunAllTests","1");
          aimmsunit::TestRunner;
       }
    }
    

Example: implement “mean”

Prototype the requirements

  1. The mean should return the average value of a series of numbers.

  2. Input check: input should not be empty.

  3. Prototype SelfDefinedMean (as both mean and average are keywords in AIMMS, we need to come up with a new name):

      Function SelfDefinedMean {
          Arguments: (p);
          Body: {
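             ! Dummy body: raise an error so that every test fails until the real implementation is written.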
             raise error "Not implemented yet";
             SelfDefinedMean := 0 ;
          }
          Parameter p {
             IndexDomain: i;
             Property: Input;
          }
          Set S {
             Index: i;
          }
      }
    

This is a dummy implementation: a function that meets the prototype requirements, but that will obviously fail. Having a dummy implementation allows us to code the tests, as detailed below.

Write the tests

  1. Small example test: Mean( 3, 5, 13 ) = 7

      1  Procedure pr_Test_Small_Example {
      2      Body: {
      3          S := data { a, b, c };
      4          P := data { a: 3, b : 5, c: 13 };
      5          r := ml::SelfDefinedMean( P(i));
      6          aimmsunit::AssertTrue("The average of 3, 5, and 13 is 7.", r=7);
      7      }
      8      Comment: "first test: Mean( 3, 5, 13 ) = 7";
      9      aimmsunit::TestSuite: MeanSuite;
     10      Set S {
     11          Index: i;
     12      }
     13      Parameter P {
     14          IndexDomain: i;
     15      }
     16      Parameter r;
     17  }
    

    Note that the aimmsunit::AssertTrue statement (line 6) comes after the call to ml::SelfDefinedMean, because it checks the result of that call.

  2. Edge case test: an empty series of numbers

      1  Procedure pr_Test_Empty_List {
      2      Body: {
      3          aimmsunit::AssertThrow("The average of an empty list cannot be computed.");
      4          S := data { };
      5          P := data { };
      6          r := ml::SelfDefinedMean(P(i));
      7      }
      8      Comment: "Edge case, empty list.";
      9      aimmsunit::TestSuite: MeanSuite;
     10      Set S {
     11          Index: i;
     12      }
     13      Parameter P {
     14          IndexDomain: i;
     15      }
     16      Parameter r;
     17  }
    

    Note that the aimmsunit::AssertThrow statement (line 3) comes before the call to ml::SelfDefinedMean, which is expected to raise an error.

Collecting tests in a suite

The annotation aimmsunit::TestSuite: MeanSuite is added to each test procedure. You can add this annotation as follows; the resulting attribute is sketched after these steps:

  1. Click add annotation in the attribute window

  2. Select aimmsunit::TestSuite

  3. Type in the name of the suite. In this example, we only use one suite: MeanSuite
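
After these steps, the annotation appears as an extra attribute of the test procedure, just as in the test listings above; an abbreviated sketch:

    Procedure pr_Test_Small_Example {
        Body: {
            ! test statements
        }
        aimmsunit::TestSuite: MeanSuite;
    }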

Test suite before coding

Now, run the tests with the above prototype implementation of ml::SelfDefinedMean. They will fail, as expected. An example result, from the file log/AimmsUnit.xml, is shown below:

 1  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
 2  <testsuites>
 3      <testsuite id="1" name="MeanSuite" timestamp="2019-04-09T08:26:10" tests="2" errors="2" time="0.002">
 4          <testcase name="tml::pr_Test_Small_Example" time="0.001">
 5              <error message="Not implemented yet."/>
 6          </testcase>
 7          <testcase name="tml::pr_Test_Empty_List" time="0.001">
 8              <error message="Not implemented yet."/>
 9          </testcase>
10      </testsuite>
11  </testsuites>

There are several remarks about this file:

  1. Line 3 shows which suite was run and how many tests it contains; most importantly, it shows the number of tests that failed. All tests failed as expected (errors="2"), so we can now start coding the function.

  2. Lines 4 - 9 show the details of the failures of our two tests. As the function has not been implemented yet, it raised an error in both tests.

Code the function

The mean is calculated by dividing the sum of the observations by the number of observations. This is implemented in the code below:

    Function SelfDefinedMean {
        Arguments: (p);
        Body: {
            p_NoElements := card(p);
            if p_NoElements then
                SelfDefinedMean := sum( i, p(i) ) / p_NoElements;
            else
                raise error "The average of an empty list cannot be computed." ;
                SelfDefinedMean := 0 ;
            endif ;
        }
        Parameter p {
            IndexDomain: i;
            Property: Input;
        }
        Set S {
            Index: i;
        }
        Parameter p_NoElements;
    }
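
With this implementation, the small example evaluates to (3 + 5 + 13) / 3 = 21 / 3 = 7, matching the assertion in pr_Test_Small_Example, and an empty input raises the expected error.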

Running the tests now gives the following results:

 1  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
 2  <testsuites>
 3      <testsuite id="1" name="MeanSuite" timestamp="2019-04-09T09:31:16" tests="2" time="0.002">
 4          <testcase name="tml::pr_Test_Small_Example" time="0.001"/>
 5          <testcase name="tml::pr_Test_Empty_List" time="0.001"/>
 6      </testsuite>
 7  </testsuites>

The log indicates that both tests passed without any issue. So, everything is good to go. Or is it?

Fix a bug

However, soon one of our stakeholders comes to us with a question:

Why does ml::SelfDefinedMean(3, 5, 0, 12) return 6.67 instead of 5?

Apparently, our set of requirements does not cover all edge cases. We now iterate by adding another requirement and a corresponding test:

0 is a possible observation and should count in the number of observations. So, SelfDefinedMean(3, 5, 0, 12) = (3 + 5 + 0 + 12) / 4 = 5

 1  Procedure pr_Test_Zero_In_Observations {
 2      Body: {
 3          S := data { a, b, c, d };
 4          P := data { a: 3, b : 5, c: 0, d: 12 };
 5          r := ml::SelfDefinedMean(P(i));
 6          aimmsunit::AssertTrue("The average of 3, 5, 0, and 12 is 5.", r=5);
 7      }
 8      Comment: "third test: Mean( 3, 5, 0, 12 ) = 5";
 9      aimmsunit::TestSuite: MeanSuite;
10      Set S {
11          Index: i;
12      }
13      Parameter P {
14          IndexDomain: i;
15      }
16      Parameter r;
17  }

Running the test suite again gives the below result:

 1  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
 2  <testsuites>
 3      <testsuite id="1" name="MeanSuite" timestamp="2019-04-09T09:59:31" tests="3" failures="1" time="0.003">
 4          <testcase name="tml::pr_Test_Small_Example" time="0.001"/>
 5          <testcase name="tml::pr_Test_Empty_List" time="0.001"/>
 6          <testcase name="tml::pr_Test_Zero_In_Observations" time="0.001">
 7              <failure message="The average of 3, 5, 0, and 12 is 5."/>
 8          </testcase>
 9      </testsuite>
10  </testsuites>

Our unit test reproduces the bug: see failures="1" on line 3. Notice the difference between a failure (an assertion that is not satisfied) and an error (an error raised while running the test) in the test report. Clearly, the mistake in the above implementation is that we divided by

  • card(P) - the cardinality of the parameter, which only counts the values that differ from the default (0), instead of

  • card(S) - the cardinality of the set, which counts all the elements (see the illustration below).
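
To see the difference on the data of pr_Test_Zero_In_Observations, consider this small sketch; the comments state the cardinalities we expect AIMMS to report:

    S := data { a, b, c, d };
    P := data { a: 3, b : 5, c: 0, d: 12 };
    ! card(P) = 3: the value for c equals the default 0, so it is not stored and not counted.
    ! card(S) = 4: all four elements of the set are counted.

The buggy implementation therefore divides the sum 20 by 3, giving roughly 6.67, instead of by 4, which would give 5.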

So, the function is updated as shown below:

    Function SelfDefinedMean {
        Arguments: (p);
        Body: {
            p_NoElements := card(S);
            if p_NoElements then
                SelfDefinedMean := sum( i, p(i) ) / p_NoElements;
            else
                raise error "The average of an empty list cannot be computed." ;
                SelfDefinedMean := 0 ;
            endif ;
        }
        Parameter p {
            IndexDomain: i;
            Property: Input;
        }
        Set S {
            Index: i;
        }
        Parameter p_NoElements;
    }

Running the test suite now gives the result below, which indicates that the problem has been fixed.

 1  <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
 2  <testsuites>
 3      <testsuite id="1" name="MeanSuite" timestamp="2019-04-09T10:03:07" tests="3" time="0.003">
 4          <testcase name="tml::pr_Test_Small_Example" time="0.001"/>
 5          <testcase name="tml::pr_Test_Empty_List" time="0.001"/>
 6          <testcase name="tml::pr_Test_Zero_In_Observations" time="0.001"/>
 7      </testsuite>
 8  </testsuites>

All tests written before this latest change were also run automatically, saving us time and effort. The example project can be downloaded below:

AIMMS project download