A Case for Detailed Software Requirements and Hardware Tolerances

Author: Michael Lingg

Software Development Life Cycle

In many projects, the level of detail necessary for requirements to properly describe a system is an open topic of discussion. More detailed descriptions or specifications of a system carry a corresponding increase in the cost and time of developing it. At the same time, insufficient detail can lead to implementations that produce unintended behavior, ranging from no impact on the end user to a completely catastrophic situation.

Software requirement case study

Consider the following example. A simple unmanned aerial vehicle (UAV) is controlled by a PC, with a video game style controller used for manual control. The system may have the following software requirement:

  • The software shall activate manual control mode when a controller is connected to the PC and a button is pressed

Assuming that the manual control mode's behavior is defined elsewhere, this software requirement seems fairly complete and testable. Simply plug in a controller, press a button, and see that the UAV now responds to manual flight inputs (and perhaps verify that it did not respond to control stick inputs before the button was pressed, since the button press is what activates manual control mode). Yet what if the controller is plugged in and the UAV does not respond? Is the software failing to send the controller commands to the UAV, or is the controller faulty?

The first question to answer is why the software might not respond to a controller. While working with some FTDI 2232 modules connected to a PC via USB, with the other side wired to test hardware over UART, we once had the PC's mouse suddenly start moving and clicking in random places on the screen. The PC had interpreted the data coming from the FTDI 2232 as mouse input. This is a nuisance, but if a keyboard or a mouse connected to a PC were interpreted as a UAV controller, with no definition of which inputs correspond to which flight controls, the situation could be catastrophic.

Additionally, certain UAVs may only work with certain controllers (fixed wing versus rotorcraft, for example), or may have special channels for delivering a payload or transitioning from vertical to horizontal flight. The key is to document what the software needs to do as requirements, while optional wants for the software can be left to the developers.

Let’s consider more detailed software requirements that could help with this (note these software requirements are not meant to describe an actual existing system).

  • The software shall register a connected_controller event when a controller is connected to the PC, with a DeviceClass of Human Interface Device, a VendorID of UAVControllerMaker, and a ProductID found in ListOfApprovedDevices

  • The software shall register a device_error event, including the DeviceClass, VendorID, and ProductID, when a controller is connected to the PC with a DeviceClass that is NOT Human Interface Device, OR a VendorID that is NOT UAVControllerMaker, OR a ProductID that is NOT found in ListOfApprovedDevices

  • The software shall enter manual control mode when a connected_controller registers a controller_connect button press event

  • Upon entering manual control mode, the software shall send a manual_connection request to the UAV

  • The software shall register a communication_error event, with state of request_rejected, when the UAV responds to a manual_connection request with a reject message

  • The software shall register a communication_error event, with state of request_ignored, when the UAV does not respond to a manual_connection request within 5 seconds
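A minimal sketch of how these requirements might map to code follows. The event names, DeviceClass, VendorID, and ProductID fields are taken from the requirements above; the specific approved ProductIDs, the event list, and the function names are hypothetical stand-ins for illustration.

```python
from dataclasses import dataclass

# Values named in the requirements; the ProductIDs here are placeholders.
HID_DEVICE_CLASS = "Human Interface Device"
UAV_CONTROLLER_MAKER = "UAVControllerMaker"
LIST_OF_APPROVED_DEVICES = {"ProA", "ProB"}

@dataclass
class Device:
    device_class: str
    vendor_id: str
    product_id: str

def on_controller_plugged(device, events):
    """Register a connected_controller or device_error event per the requirements."""
    if (device.device_class == HID_DEVICE_CLASS
            and device.vendor_id == UAV_CONTROLLER_MAKER
            and device.product_id in LIST_OF_APPROVED_DEVICES):
        events.append(("connected_controller", device))
        return True
    # Any failed check reports the full identifying triple, as required.
    events.append(("device_error",
                   (device.device_class, device.vendor_id, device.product_id)))
    return False

def check_manual_connection(response, elapsed_seconds, events):
    """Map the UAV's reply (or silence) to the communication_error events."""
    if response == "reject":
        events.append(("communication_error", "request_rejected"))
    elif response is None and elapsed_seconds >= 5.0:
        events.append(("communication_error", "request_ignored"))
```

Note that each error path carries enough information (the device triple, the error state) for the diagnosis described below: a missing response can be traced to either a rejected controller or a failed UAV handshake.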

Now, if the UAV fails to respond, we can check the error events to learn whether the controller was rejected, and get information about the rejected controller. We can also check whether the controller connected but the software failed to communicate with the UAV. All of this may have been implemented in the software even without requirements describing the behavior. However, documenting it in the requirements offers several benefits.

First, the requirements provide a description of the possible errors and how they are reported; without this, one has to dig through the software to find these answers. Second, requirements that document what the system will do ensure this behavior is maintained in future versions of the software, and not removed by a developer who decides an undocumented feature is unnecessary. Third, requirements provide a common system definition to both developers and testers, allowing tests to be developed in parallel with the implementation. If the desired behavior is not fully defined, the testers must wait until the software is implemented to start testing, as they have no way of knowing what the actual behavior is. If no requirements define the expected system behavior as tests are developed, any failure forces the testers to confirm the "actual desired behavior" with the software and/or systems people, which adds time and confusion to the test creation process. Even worse, false passes may go uncaught during test development if the test developers make assumptions from current software behavior that do not match the intended behavior of the system.

Another reason to consider requirements that fully detail the steps of connecting a controller device is automated testing of the system. Verifying that connecting a controller to a PC allows manual control of the UAV is a fairly simple test. However, plugging in a dozen or more different controller models to repeat this test on each would become time consuming and tedious. An alternative is to automate the procedure by connecting a device to the PC that is capable of simulating any of the controller models (similar to our Automated Test Controller hardware). Simulating multiple devices takes significantly less time than the manual process, makes it possible to test all inputs from the controller, and allows a wireless receiver in the simulation loop to verify the proper commands are being sent to the UAV. However, to use this kind of automated testing, the test developer needs a sufficient understanding of how the system interacts with the controller being simulated. The additional detail in the six expanded requirements describes every stage the test must handle to successfully simulate a controller being connected to the PC, and provides enough detail for the tester to exercise the failure conditions and ensure that an invalid controller is reported, and not allowed to control the UAV.
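The automated procedure might be sketched as a loop over simulated controller descriptors. Here the model list, the approved-product set, and the validation hook are all hypothetical stand-ins for the real controller-simulation hardware and the software under test:

```python
# Hypothetical descriptors for the controller models to simulate.
# The third entry has a wrong VendorID and should be rejected.
CONTROLLER_MODELS = [
    ("Human Interface Device", "UAVControllerMaker", "Pro-1"),
    ("Human Interface Device", "UAVControllerMaker", "Pro-2"),
    ("Human Interface Device", "OtherVendor", "Pro-1"),
]

APPROVED_PRODUCTS = {"Pro-1", "Pro-2"}

def software_accepts(device_class, vendor_id, product_id):
    """Stand-in for the system under test's device check."""
    return (device_class == "Human Interface Device"
            and vendor_id == "UAVControllerMaker"
            and product_id in APPROVED_PRODUCTS)

def run_connection_tests(models):
    """Simulate each controller model and compare the observed result
    to the expectation derived from the requirements."""
    results = []
    for device_class, vendor_id, product_id in models:
        accepted = software_accepts(device_class, vendor_id, product_id)
        expected = (vendor_id == "UAVControllerMaker"
                    and product_id in APPROVED_PRODUCTS)
        results.append((vendor_id, product_id, accepted == expected))
    return results
```

The same loop can be extended to exercise every controller input and to confirm, via a receiver in the loop, that the correct commands reach the UAV; the point is that the expected result for each model is derivable directly from the requirements, not from the current software behavior.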

Of course, cost vs benefit still needs to be considered. Will the cost of the additional software requirements be offset by the savings from avoiding unintended behavior? Is enough testing of the system required that test automation will be valuable? Are the safety consequences of the system severe enough that highly detailed testing is warranted? If we are speaking of a low cost consumer UAV that will only operate in remote areas, away from the general public, then the answer to most of these questions may be no. If we are talking about a high cost, mass produced, commercial UAV that will be flying over the general public and sharing airspace with airliners, then the answer to many or all of these questions may be yes.

Hardware tolerances

Another area where providing sufficient detail in requirements comes into play is tolerances for hardware specifications. Take for example a system that outputs an AC voltage. It may include the following requirement:

  • The system shall turn off the voltage output, if the output exceeds 1000 volts

A simple manual test may increase the output rapidly and confirm that somewhere over 1000 volts the output shuts off. A more precise test may set the output to precisely 1001 volts. If the system continues to output at this level, has it failed? What if the system spikes to 1009 volts for a fraction of a second and returns to under 1000 volts? Further, is the system so tightly designed that even briefly exceeding 1000 volts will damage it, or can it handle brief spikes?

If the system cannot handle a microsecond over 1000 volts, it may be better to safeguard this as the voltage approaches the maximum:

  • The system shall turn off the voltage output, if the output exceeds 995 volts, +/- 0.5%

Now the system will operate normally up to 990 volts, is guaranteed to shut off by the time it reaches 1000 volts, and will shut off at some indeterminate point between these two values. If the system can handle slightly higher voltages, but we want to account for tolerances and variations within the hardware, we center the tolerance around the chosen maximum value. The actual system may not require the full +/- 10 volt range for the shutoff; it may almost always shut off within +/- 2 volts. Whatever tolerance is chosen should be large enough that all normal variations fall within it, and small enough to avoid negatively impacting the system. For instance, consider a requirement like the following:

  • The system shall turn off the voltage output, if the output exceeds 1000 volts, +/- 1%

Such a requirement would be developed with knowledge of the system's tolerances, telling us that at 990 volts and under the system will continue to operate normally, while at 1010 volts and above it will cut the output. Between these values is a "grey" area that is left untested.

Finally, the tolerance may not just be in the output value but also over time. If brief spikes in the system are acceptable, we may produce a requirement like the following:

  • The system shall turn off the voltage output, if the output continuously exceeds 1000 volts, +/- 1%, for 1 second, or if the output exceeds 1100 volts, +/- 1%
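The combined value-and-time tolerance above can be sketched as a monitor that trips either on a sustained excursion over the soft limit or immediately on the hard limit. The sample-based timing model and the function name are assumptions; the nominal thresholds come from the requirement (the +/- 1% bands mean the real trip points may vary around these values):

```python
SOFT_LIMIT = 1000.0   # volts; must be continuously exceeded to trip
HARD_LIMIT = 1100.0   # volts; trips immediately
HOLD_TIME = 1.0       # seconds the soft limit must be continuously exceeded

def should_shut_off(samples, sample_period):
    """samples: voltage readings taken every sample_period seconds.
    Returns True once the requirement's shutoff condition is met."""
    over_since = None
    for i, volts in enumerate(samples):
        t = i * sample_period
        if volts > HARD_LIMIT:
            return True            # instantaneous hard limit exceeded
        if volts > SOFT_LIMIT:
            if over_since is None:
                over_since = t     # start of a continuous excursion
            if t - over_since >= HOLD_TIME:
                return True        # continuously over soft limit long enough
        else:
            over_since = None      # excursion ended; reset the timer
    return False
```

A brief spike over 1000 volts resets nothing permanent and is tolerated, while a reading over 1100 volts or a full second above 1000 volts shuts the output off, matching the two clauses of the requirement.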

Now the behavior of the hardware has been described in enough detail to pass precise tests. Any "grey" areas, where the output is not guaranteed due to hardware tolerances, are explicitly defined, and tests cover the values above and below those ranges, where the output is guaranteed.


Michael Lingg is a principal research engineer at Array of Engineers, where he leads various research and development initiatives for the medical, aerospace, and space industries. Michael has a Ph.D. in High Performance Computing from Michigan State University and has over 15 years of experience in software design, development, and testing.
