Author: Matthew Reed
Why Create a Video Recording Tool?
Here at Array of Engineers, one of our goals is to make hardware and software testing more consistent and efficient. One of the ways we accomplish this is by creating automated scripts to perform tests on our customers’ hardware. Even though these automated tests are significantly faster than manually performing the same steps, each script still takes some amount of time to execute. Some of our test scripts can even take multiple hours to run due to complex customer requirements. So, what happens if a step fails part-way through a multiple-hour run?
At first, all we had were data logs and test results. To be fair, these can be great resources when debugging; however, there’s only so much that can be gleaned from dumps of text and the occasional screenshot. Sometimes logs just aren’t enough to identify an issue. At times like these, we would need to run the test again, constantly monitoring its progress to catch any issue that occurred. This is extremely inefficient. Not only does this use up a software tester's time, but it also decreases the number of available testing stations. There is also the chance that the software tester might miss the moment when the issue occurs. If our goal is to make the testing process more consistent and efficient, we would need to find a better way of doing things.
Our solution ended up being fairly straightforward: simply record the test as it’s running and save the video with the rest of our logs. That way, anyone could scrub through the video, find the source of an issue, and even recreate the issue to assist the debugging process.
How it Works
Our recording tool works by repeatedly taking screenshots of the test station environment and saving them to a video file. Since most of our test scripts are written in Python, the tool uses PyAutoGUI to capture the screen, then hands the screenshots to OpenCV, which encodes them into a video file. The recorder runs on a separate thread so that it doesn’t interfere with the test itself.
Our recording tool keeps track of both the current test step and the current test line number being executed. The test step is relatively easy to get: the recorder simply holds a string that it expects our software harness to update every time a new test step is reached. The line number being executed, on the other hand, is a little more difficult to track. One of the quirks of Python is that there is no notion of a truly private member variable; everything created by a Python script is accessible from anywhere else in the script. Our screen recorder takes advantage of this to inspect the main thread while it’s running. Specifically, the tool grabs the main thread’s current execution frame, then walks back up the call stack until it reaches a frame whose code comes from the test’s file. From there, it reads the line number. The tool then uses OpenCV to overlay this information on each frame before writing it to the video file.
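The frame walk can be sketched roughly as below. `current_test_line` is a hypothetical name, and the approach leans on `sys._current_frames()`, which is a CPython implementation detail:

```python
import sys
import threading


def current_test_line(test_filename):
    """From a background thread, find which line of `test_filename` the
    main thread is currently executing (hypothetical sketch)."""
    main_id = threading.main_thread().ident
    # Snapshot of every live thread's topmost execution frame.
    frame = sys._current_frames().get(main_id)
    # Walk back up the call stack until we reach a frame whose code
    # object comes from the test's file.
    while frame is not None and not frame.f_code.co_filename.endswith(test_filename):
        frame = frame.f_back
    return frame.f_lineno if frame is not None else None
```

The recorder thread would call something like this once per captured frame, then draw the result onto the screenshot (e.g. with `cv2.putText`) before handing it to the video writer.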
When the H.264 codec is available on the system, our tool can be set to use H.264 encoding. This can decrease video file sizes by an order of magnitude. Since many of our software tests for this project can have long periods of time where very little changes on the screen, this can lead to immense savings on hard drive space.
Benefits to Project and Usability Across Platforms
As mentioned before, this software test tool can help to streamline debugging efforts during test development and test execution. However, this is not the tool’s only benefit. Some other benefits include the following:
Manual Verification: If a test step is falsely reporting a failure, the video can be used to manually verify that step.
Identify Inconsistencies: Some issues are inconsistent. By keeping a video log, the causes of inconsistent issues can be more easily identified.
Proof of Execution: It can often be useful to provide evidence that a piece of hardware was sufficiently and accurately tested. This tool provides video evidence that a test was run successfully and that the reported results are accurate.
Easily Applicable to Other Projects: Currently, this tool is used on only one of our automated testing projects for the defense industry, because it relies on direct access to the project’s hardware display. However, rather than taking screenshots, the tool could use a different video source (e.g. an IP camera) to monitor a piece of hardware being tested, making it straightforward to adapt to other projects.
Matthew Reed is a computer scientist at Array of Engineers. Matthew graduated from Brigham Young University-Idaho with a degree in Computer Science. Since joining Array of Engineers, Matthew has developed embedded software and numerous innovative scripts in support of projects in the aerospace, defense, medical device, and IoT sectors.