
TBrun®

Automated unit and integration testing

In general, functional safety and cybersecurity standards specify a range of validation and verification activities to demonstrate that critical software is fit for purpose. Those activities are just as valuable where the aim is simply to improve software quality. Dynamic analysis, in the form of unit, integration, and system testing, plays a critical part in providing the appropriate evidence.

However, it is also clear that any such activities require many unit and integration test artefacts to be collated, cross-referenced, and tracked. Supporting that project management overhead by means of automated unit and integration testing can make the process both more effective and much less time-consuming.

What is TBrun?

TBrun® is a component of the LDRA tool suite®. It is a unit/integration test tool, providing a complete verification environment for the automated generation and management of test harnesses and unit/integration tests. This solution maximizes productivity by replacing burdensome and time-consuming low-level manual testing activities, allowing developers to focus on implementing correct software functionality.

TBrun’s ease of use makes it ideal for those looking to achieve the structural analysis and unit test objectives demanded by functional safety and cybersecurity standards. It is equally well suited to those simply looking to improve software quality.

How does TBrun help?

TBrun uses the LDRA tool suite’s comprehensive control and data flow analysis to extract details about the unit interfaces, parameters, global variables, return values, variable types, data usage, and procedure calls. Traditionally, this level of information could only be specified by a developer with expert knowledge of the unit under test. By automating this process, TBrun enables highly qualified staff to be re-assigned to other modelling, design, and development tasks.
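
As a purely illustrative fragment (hypothetical code, not TBrun output), the C++ unit below is annotated with the kind of interface detail that has to be identified before a harness can be built; TBrun derives all of this automatically from its flow analysis.

    // Hypothetical unit, annotated with the interface details a test tool must
    // identify: parameters, return value, global data usage, and calls to
    // procedures outside the unit under test.
    #include <cstdint>

    extern int32_t sensor_offset;               // global variable read by the unit
    int32_t read_adc(uint8_t channel);          // external procedure call (a stubbing candidate)

    int32_t scaled_reading(uint8_t channel, int32_t gain)   // parameters and return type
    {
        const int32_t raw = read_adc(channel);  // procedure call outside the unit
        return (raw + sensor_offset) * gain;    // data usage: global plus both parameters
    }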

Unit and integration test

TBrun is more than just a Unit Test tool. It supports the testing of: 

  • Single procedures, functions, or methods (Unit test)
  • One or more files containing many functions and/or classes (Module/integration test)
  • Complete programs (Subsystem and system test)

The TBextreme module supplements the ability of TBrun to create unit test harnesses by automatically generating appropriate test vectors to go with them.

How does TBrun support functional safety and cybersecurity standards compliance?

In general, functional safety and cybersecurity standards (including DO-178C, ISO 26262, IEC 61508, ISO/SAE 21434, and many more) specify a range of validation and verification activities to demonstrate that critical software is fit for purpose. Dynamic analysis, in the form of unit, integration, and system test, has a critical part to play in providing that evidence. 

However, it is also clear that those standards require many artefacts to be collated, cross-referenced, and tracked. Ensuring that they are permanently up to date can be a logistical nightmare in the face of changing customer requirements, failed tests, and engineering oversights. Supporting that project management overhead by means of automation can make the process both more effective and much less time-consuming.

When used in tandem with the TBmanager component of the LDRA tool suite, traceability of test results to both project requirements and standards objectives is assured.

What are the primary features of TBrun?

Features of TBrun include: 

  • Efficient test on host, target, or simulator via intuitive graphical and command line interfaces 
  • Automated test driver/harness generation with no manual scripting requirement 
  • Test Case File management, facilitating regression testing
  • “White box” test mode, facilitating structural coverage analysis 
  • Automated stubbing of functions and variables outside the scope of test 
  • Automated exception handling 
  • Storage and maintenance of test data and results for fully automated regression analysis 
  • Automated detection of source code changes 
  • Tool driven test vector generation 
  • Execution of tests in host, target, and simulated environments 
  • Automated generation of test case documentation including pass/fail and regression analysis reports 

Testing on host, target, or simulator

TBrun facilitates testing on the host, target, or simulator. LDRA’s optimized instrumentation technology allows information to be accessed on targets ranging from highly constrained 8- and 16-bit microcontrollers, through to high-performance 32- and 64-bit processors.  

One common approach that limits the demands on often scarce target hardware is to first complete software integration testing using a simulator, and then complete software/hardware integration testing on the target. 

Automatically generated test harness

TBrun leverages the LDRA tool suite’s sophisticated control flow and data flow analysis techniques to collate detailed information on all unit interfaces in the code under test. This detailed information enables TBrun to automatically generate test drivers, removing the need for manual scripting. 

The automatically generated driver is created in C/C++, Ada, or Java, reflecting the language used in the application code under test. It can be executed in the host, target, or simulator environment as required.
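
As a hand-written sketch of what a generated driver conceptually does (the unit scaled_reading() is hypothetical and the code is not TBrun output), a driver assigns the inputs, calls the unit under test, and compares the actual result with the expected one:

    // Hand-written C++ sketch of what a test driver conceptually does for the
    // hypothetical scaled_reading() unit; TBrun's actual generated driver is
    // tool-specific.
    #include <cstdint>
    #include <cstdio>

    int32_t sensor_offset = 0;                          // global used by the unit

    // Stub standing in for the external call (see "Stub creation" below).
    int32_t read_adc(uint8_t /*channel*/) { return 10; }

    // Unit under test.
    int32_t scaled_reading(uint8_t channel, int32_t gain)
    {
        const int32_t raw = read_adc(channel);
        return (raw + sensor_offset) * gain;
    }

    // Driver: assign the inputs (including global data), call the unit, and
    // compare the actual return value with the expected result.
    int main()
    {
        sensor_offset = 5;                              // global input
        const int32_t actual   = scaled_reading(0, 2);  // channel 0, gain 2
        const int32_t expected = 30;                    // (10 + 5) * 2

        std::printf("test case 1: %s\n", actual == expected ? "PASS" : "FAIL");
        return (actual == expected) ? 0 : 1;
    }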

Test case files and regression testing

TBrun collates test cases into “sequences”, and these sequences can be stored in Test Case Files (TCFs) to retain the information required to re-run the test cases. Sequences stored in TCFs can be re-run from the user interface or the command line, allowing regression testing to be initiated on an ongoing basis. In this way, module interfaces and output data can be verified as changes are made to the source code.

TCFs are stand-alone files that can be easily distributed and accessed by TBrun users across the world. They can be grouped with the reports generated when they were first executed and conveniently stored for future regression verification use, perhaps using a software configuration management (SCM) system. Requirements-based testing documentation, complete with requirement management system “tags”, can also be stored, enabling the SCM system to record whether modified code has been tested when it is checked in.

Structural coverage

TBrun reports on the full range of coverage metrics available in the LDRA tool suite, including the Procedure Call, Statement, Branch/Decision, MC/DC, and LCSAJ (JJ-path) metrics. Users can choose an appropriate metric or set of metrics to reflect both the nature of the software and the demands of their development processes. For example, MC/DC coverage is used to verify that each condition independently affects the outcome of its decision and is not masked by the other condition inputs, while LCSAJ coverage provides a comprehensive metric for evaluating loops. All of these metrics are illustrated graphically via the flow graph, call graph, and file views of the TBrun GUI, and compliance reports can be configured to confirm the achievement of “pass” levels for functional safety standards. Line-by-line views indicating which statements, branches, and conditions have been executed are also shown in these reports.
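
As a hypothetical illustration of what MC/DC demands (this example is not taken from the tool), consider a decision containing three conditions:

    // Hypothetical example of what MC/DC requires: each condition must be
    // shown to independently affect the outcome of the decision.
    bool pump_enabled(bool power_ok, bool level_low, bool manual_override)
    {
        return power_ok && (level_low || manual_override);
    }

    // One minimal MC/DC vector set (T = true, F = false):
    //   power_ok  level_low  manual_override  ->  outcome
    //      T          T            F               T      \ toggling level_low alone
    //      T          F            F               F      / flips the outcome
    //      T          F            T               T      toggling manual_override alone flips it (vs. row 2)
    //      F          T            F               F      toggling power_ok alone flips it (vs. row 1)
    // Four vectors demonstrate independence for all three conditions, compared
    // with eight vectors for exhaustive testing.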

Stub creation

TBrun automatically stubs system or function calls. These stubs are designed to mock calls to items such as functions, methods, constructors, packages, and generics that fall outside the scope of the tests. The resulting “managed stubs” are sufficiently complete to allow the test harness to build and execute.

The default behaviour of managed stubs can be modified via an intuitive graphical user interface to manipulate data relevant to the tests, such as return values, global data, and parameter values. For instance:

  • Return values can be varied depending on the number of occasions on which a stubbed function has been called 
  • Passed parameter values can be interpreted as pass/fail criteria for the unit tests  

The option to write stubs manually is also available. 
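
The hand-written C++ sketch below illustrates the kind of behaviour a managed stub can provide; the external function get_temperature() is hypothetical and the code is not TBrun-generated output.

    // Hand-written sketch of managed-stub behaviour for a hypothetical
    // external function, get_temperature(); not TBrun-generated code.
    #include <cstdint>

    static int     stub_call_count   = 0;   // number of times the stub has been called
    static uint8_t last_channel_seen = 0;   // captured parameter, usable as a pass/fail criterion

    int32_t get_temperature(uint8_t channel)   // stands in for the real implementation
    {
        ++stub_call_count;
        last_channel_seen = channel;

        // Return value varies with the call count: the first call reports a
        // nominal temperature, subsequent calls an over-temperature condition.
        return (stub_call_count == 1) ? 25 : 90;
    }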

Exception handling

Exceptions can be automatically caught, and test cases can be passed or failed depending on whether an exception has been raised. The exception handling method is configurable.

The exception handlers themselves can also be subject to unit tests, allowing test coverage to be achieved even when the raising of an exception would be impractical.  
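
The hand-written C++ sketch below illustrates the principle for a hypothetical unit, checked_divide(); TBrun’s own exception handling is configured through the tool rather than coded by hand like this.

    // Hand-written sketch: a test case that passes only if the unit under test
    // raises the expected exception. checked_divide() is hypothetical.
    #include <cstdio>
    #include <stdexcept>

    int checked_divide(int numerator, int denominator)
    {
        if (denominator == 0)
            throw std::invalid_argument("division by zero");
        return numerator / denominator;
    }

    int main()
    {
        bool exception_raised = false;
        try {
            (void)checked_divide(10, 0);        // this call is expected to throw
        } catch (const std::invalid_argument&) {
            exception_raised = true;            // the expected exception was caught
        }
        std::printf("exception test: %s\n", exception_raised ? "PASS" : "FAIL");
        return exception_raised ? 0 : 1;
    }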

Requirements traceability

When used in conjunction with the TBmanager component of the LDRA tool suite, TBrun provides traceability of unit and integration tests to and from software requirements documents. A host of requirements document formats is supported. 

TBmanager automates the process of collating a traceability matrix, maintaining details of which tests are to be completed by whom and identifying those tests which need to be revisited as the result of revisions. TBrun tests can be initiated through the TBmanager user interface, facilitating the automatic selection of the appropriate software files and functions associated with any given requirement. 

Extreme testing

Extreme testing leverages the TBextreme module. It builds on the ability of TBrun to create unit test harnesses by automatically generating appropriate test vectors to go with them. It automates the unit/module/integration testing processes, eliminating almost all overhead associated with bottom-up testing. For the user, it represents a fast, simple mechanism to achieve an elementary level of unit testing. 

Features include the ability to automatically fine-tune the processes used to create the test vectors to optimize the level of coverage achieved. Vectors generated by extreme testing can be complemented with manually generated test cases.

Object code verification & assembly level coverage

TBrun can be used in conjunction with Object Code Verification (OCV) features to implement and apply test cases that check assembly-level code coverage. Used to confirm that the compiled code represents an accurate interpretation of both the source code and the developers’ intentions, this facility is often used to fulfil the demands of DO-178C. Paragraph 6.4.4.2b of that standard requires that for DAL A applications, “additional verification should be performed on the object code to establish the correctness of code sequences…”
