White Paper: Automation of Test Case Generation and Software System Modeling

Author: Michael Lingg, PhD, Array of Engineers

[Figure: Software model of an aircraft]


Software testing is a vital part of software development to ensure correct functionality, and is essential in safety-critical systems. Various methods have been developed to ensure that tests verify that the software correctly implements the requirements, but increased coverage results in increased time to develop tests. Automation can provide significant improvements in test development time, particularly when used to augment, rather than replace, human test developers. This paper provides a format for structuring software requirements, and an algorithm to parse the requirements into a logical model of the software. This logical model can be used to analyze the behavior of the requirements, or to automatically generate test procedures. We show that the test generation, and the information the logical model provides to test developers, can significantly reduce the time required for test development.

Citation: M. Lingg, T. Kushnier, R. Proenza, H. Paul, "Automation of Test Case Generation and Software System Modeling," In Proceedings of the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), NDIA, Novi, MI, Aug. 16-18, 2022.


The use of software is ever expanding, and continuously evolving. In the past, changing the behavior of equipment and vehicles required physical servicing to replace physical components and hardware. The use of software to control behavior means mission parameters can be completely redefined in the field in seconds, making software invaluable in an ever-changing environment. However, the benefits of software are not without cost. Poor software quality has been estimated to cost the US around $2.84 trillion [1] in 2018 alone, with losses due to software failures exceeding a third of this cost. The cost of bugs can climb into the billions of dollars, as evidenced by the Soviet gas pipeline explosion in 1982, reach hundreds of billions, as in the cost to repair the Y2K bug [2], or impact the safety and lives of people, as in the Boeing MCAS software failure that led to the loss of two Boeing 737 MAX aircraft and 346 lives [3].

In addition to the cost of failing to discover software bugs, the cost of testing sufficient to identify software bugs can be very high. Developing a single test case by hand can take an hour or two for simple behavior, and can easily exceed ten hours for more complex behavior. In systems with thousands of requirements, manual test development often amounts to person-years of effort. Any use of automation can provide significant cost benefits, along with the consistency of an automated process. There exists a broad range of testing methods [4]. Tests can cover different levels or components of the system, require different levels of knowledge of the implementation, and be based on different types of testing stimuli. This paper focuses on Modified Condition/Decision Coverage (MC/DC) of requirements-based tests as defined in DO-178B/C. DO-178B/C provides guidelines for developing safety-critical software, primarily used in civilian aircraft, and is increasingly being considered for use in unmanned aircraft [5].

To aid understanding of this work, the relevant portion of DO-178B/C MC/DC requirements-based testing is discussed below. For a complete description, see the DO-178B/C standards published by the RTCA (Radio Technical Commission for Aeronautics) [6], or NASA's detailed tutorial on MC/DC [7]. MC/DC provides a high level of path coverage in software testing. In a system with a lower safety level, reduced path coverage, such as only verifying each output of each path, or only verifying critical path coverage, can be used. These methods of reduced path coverage result in test selection that is a subset of the rules of MC/DC. The coverage method would be selected based on the chosen standard, such as the DO-178B/C criticality level, AOP-52 [8] formal test coverage, or the Joint Software Systems Safety Engineering Handbook (JSSSEH) [9] path coverage testing, and could be documented to satisfy the MIL-STD-882E [10] safety verification task.


DO-178B/C testing has historically used the waterfall method of software development, where first requirements are written, then the software is implemented, and finally tests are developed. With this approach, test developers are expected to have no knowledge of the implementation, ensuring independence of implementation and testing. This means tests are expected to be developed using only the requirements, and a specification of the inputs and outputs that are accessible to the test. Typically a test covers a single requirement, and may include multiple cases (sets of inputs and expected outputs) in order to fully cover the requirement. While the automated test case generation algorithm was developed in this environment, we will discuss in the Methods section how the algorithm integrates well with other testing environments. MC/DC coverage defines a method of testing that ensures a known coverage of conditions within the software, by showing the independent effect of each condition within a decision. This paper will focus on two main points of MC/DC coverage [11]:

  1. Each non-constant condition in a Boolean expression is evaluated to both True and False.

  2. Every non-constant condition in a Boolean expression is shown to independently affect the expression’s outcome.

Let us take a look at what this means. First, start with a basic comparison condition that could appear as part of a larger expression:

input == 10

Next, two test cases are created: one with input set equal to the constant (10), and another with input set equal to the constant + 1 (11). This produces the truth table shown in table 1:

Table 1: Compare with constant

Test Case | input | input == 10
1         | 10    | True
2         | 11    | False

The first test case would set the condition to true, while the second would set the condition to false, satisfying our MC/DC subset condition 1. MC/DC condition 2 is also satisfied, as only input was changed between the test cases, showing it independently affects the output. Further rules could be added, such as testing the constant - 1 to verify the condition was not implemented as ≤, but this paper is focusing on the limited subset of rules defined above. Now consider the case of two variables being compared with ≥, rather than a direct comparison:
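The two cases above can be produced mechanically. The following is a minimal sketch of that idea; the function name and structure are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch: derive the two MC/DC test cases for a comparison
# of a variable against a constant, e.g. `input == 10`.

def constant_equality_cases(constant):
    """Return (input value, expected outcome) pairs covering both
    outcomes, changing only the single input between cases."""
    return [
        (constant, True),       # input == constant -> True
        (constant + 1, False),  # input != constant -> False
    ]

cases = constant_equality_cases(10)
for value, expected in cases:
    assert (value == 10) == expected
print(cases)  # [(10, True), (11, False)]
```

The second value (constant + 1) is an arbitrary but deterministic choice; as noted above, a stricter rule set could also emit constant - 1 to guard against an ≤ implementation.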

input1 ≥ input2

Declaring that there are no constraints on the inputs and that both are numeric, a first test case is chosen with input1 greater than input2, and a second test case reverses the values so input1 is less than input2, satisfying our MC/DC condition 1. However, this does not satisfy condition 2. So the second case is changed so that input2 keeps its value from the first test case and input1 is less than input2, showing that input1 independently sets the output to False. Then a third case is added where input1 keeps its value from the first case but input2 is greater than input1, showing that input2 independently sets the output to False. This produces the truth table in table 2:

Table 2: Compare two variables

Test Case | Inputs                              | input1 ≥ input2
1         | input1 > input2                     | True
2         | input1 < input2 (input2 unchanged)  | False
3         | input2 > input1 (input1 unchanged)  | False

The test above covers both equivalences classes that are possible in the requirement, and tests the boundary conditions of input1 greater than input2 and input1 less than input2. The test could also require as a rule that the boundary condition of input1== input2 be tested to fully cover the boundaries of ≥, and ensure the implementation was not >.
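The three cases just described can also be sketched in code. This is an illustrative assumption of how such cases might be generated, starting from concrete sample values (5 and 3) that are not from the paper:

```python
# Sketch: the three MC/DC cases for `input1 >= input2`, built from a
# True case by changing only one input at a time.

def ge_mcdc_cases(a=5, b=3):
    """Build MC/DC cases for a >= b, given a starting True case (a > b)."""
    assert a > b
    return [
        (a, b, True),       # case 1: input1 > input2              -> True
        (b - 1, b, False),  # case 2: only input1 changed (lower)  -> False
        (a, a + 1, False),  # case 3: only input2 changed (higher) -> False
    ]

for i1, i2, expected in ge_mcdc_cases():
    assert (i1 >= i2) == expected
```

As noted above, a stricter rule set could add a boundary case with input1 == input2 to distinguish ≥ from >.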

Last the logical Boolean conditions of AND and OR will be considered. First the following condition of two comparisons ANDed together:

input1 == True AND input2 == True 

The condition can only be true if input1 is True AND input2 is True. This will be the first test case, satisfying the True output. Setting both inputs to False satisfies the False output for MC/DC condition 1, but does not satisfy the input independence requirement of MC/DC condition 2. So instead a second test case will be created with input1 False and input2 True, showing input1 independently sets the output to false. Finally a third test case sets input1 True and input2 False, showing input2 independently sets the output to false. This is summarized in the truth table in table 3:

Table 3: Two conditions ANDed

Test Case | input1 | input2 | Output
1         | True   | True   | True
2         | False  | True   | False
3         | True   | False  | False
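A minimal sketch of enumerating these AND cases follows; the function is a hypothetical illustration (not the paper's algorithm), parameterized by the number of ANDed inputs so it matches Table 3 when n = 2:

```python
# Sketch: MC/DC cases for n inputs ANDed together. One all-True case
# plus one case per input where only that input is False, giving n + 1
# cases in total.

def and_mcdc_cases(n):
    cases = [([True] * n, True)]   # case 1: all inputs True -> True
    for i in range(n):
        inputs = [True] * n
        inputs[i] = False          # only input i flipped -> False
        cases.append((inputs, False))
    return cases

for inputs, expected in and_mcdc_cases(2):
    assert all(inputs) == expected
```

Because each False case differs from the True case in exactly one input, each input is shown to independently set the output to False.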

Because this paper is focusing on MC/DC condition 1 of each possible output, and condition 2 of each input independently toggles the output, expanding our condition to any number of values ANDed together is fairly simple. There is still only one true output case, and each new input simply needs to show it can toggle the output to False. Any number of conditions ANDed is shown in the truth table in table 4: