Autopro Blog

Structuring Acceptance Tests for Success

September 12, 2019

Over the last couple of decades, the way we perform acceptance testing of control systems has changed. In the past, it was common to stage the entire control system and perform a fully integrated test of the hardware, software, configuration and graphics. Today, the usual approach is to test the system in smaller pieces and integrate the entire system at the customer site. While this change has improved project performance and schedules, it also introduces new risks that need to be properly managed.

Acceptance Testing History

In years gone by, a fully integrated test required a large staging area where all of the control system panels, servers and workstations were set up. Testing encompassed the entire control system from the I/O channel to the graphic, and was often very tedious as each input was simulated and each control narrative verified. The advantage of this fully integrated test was that it provided a high level of confidence that the entire system, configuration and graphics were complete and functional before the system shipped to the customer site.

Present Day Acceptance Testing

The fully integrated test has been replaced by testing in stages. You may have seen some of the acronyms commonly used to describe these testing stages.

HAT, CAT, SAT, FAT, CWT, IFAT, IVT

For example, control system panels are often tested on their own as they are manufactured (a Hardware Acceptance Test). Configuration and graphics may be tested on a development system that might not include any of the actual servers and workstations that will be installed at site. Testing in stages has many advantages, but it introduces a new risk: can we be confident that the system will work as a whole when it is integrated at site?

What is the Risk?

With testing occurring in stages, it is possible that there are gaps between the test scopes.

Consider the following example:

A control system panel including a controller, I/O subsystem and cross-wiring is tested in a Hardware Acceptance Test (HAT) after manufacturing.

The control system configuration and the Customer Acceptance Test (CAT) aren't scheduled to be complete for several more weeks, so the HAT is performed using a very basic configuration that allows each I/O channel to be tested but doesn't include the final tag names, instrument ranges, etc.

The CAT is performed weeks after the HAT, on a development system without physical I/O channels. During the CAT, I/O is simulated via soft signals.

In this example, a potential gap is in the layout and configuration of the I/O. Perhaps the I/O channel assignments differed between the HAT and the CAT. These differences might not be identified during either test and might not be noticed until commissioning at site. Site is the most costly place to resolve such deficiencies: they delay the schedule and usually require more effort to identify and resolve than they would have before site work began.

Other questions to consider include:

  • Does everyone really understand what the intent of each stage is?
  • What is the confidence level in the integrated system functionality once all testing stages are complete?

Managing the Risk

To manage this risk properly, the scope of each individual test must be clearly defined so that there are no unknown gaps between the testing stages. Where gaps exist, they must be identified and managed. In the example above, differences in the I/O configuration could potentially be identified by a database comparison of the I/O configuration used for the HAT against the configuration used for the CAT, as sketched below.
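
Where both systems can export their I/O databases, even a small script can make that comparison systematic. The following is a minimal sketch, assuming each system can export its I/O configuration to a CSV file with channel, tag, range_low and range_high columns; those column names are hypothetical, and real DCS exports will differ, but the approach is the same.

```python
import csv

def load_io_config(path):
    """Load an I/O configuration export, keyed by I/O channel.

    Assumes a CSV export with hypothetical columns:
    channel, tag, range_low, range_high.
    """
    with open(path, newline="") as f:
        return {row["channel"]: row for row in csv.DictReader(f)}

def compare_io_configs(hat_path, cat_path):
    hat = load_io_config(hat_path)
    cat = load_io_config(cat_path)

    # Channels that appear in one test's database but not the other.
    for channel in sorted(hat.keys() - cat.keys()):
        print(f"{channel}: in HAT export only (tag {hat[channel]['tag']})")
    for channel in sorted(cat.keys() - hat.keys()):
        print(f"{channel}: in CAT export only (tag {cat[channel]['tag']})")

    # Channels in both, but with differing tag or range assignments.
    for channel in sorted(hat.keys() & cat.keys()):
        for column in ("tag", "range_low", "range_high"):
            if hat[channel][column] != cat[channel][column]:
                print(f"{channel}: {column} differs "
                      f"(HAT={hat[channel][column]!r}, CAT={cat[channel][column]!r})")

compare_io_configs("hat_io_export.csv", "cat_io_export.csv")
```

A comparison like this, run between the HAT and CAT databases before shipment, turns "we assume the channel assignments match" into a verified check rather than a gap discovered at commissioning.
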
The best way to clearly define the scope of each test and identify gaps is to develop a comprehensive project test plan. It is important to note that a test plan is different from a test procedure: a test procedure defines the detailed testing steps, whereas a test plan describes the overall testing philosophy that will ensure the project requirements are met.

A test plan documents the following (see the sketch after this list):

  • Stages of testing
  • Scope of each test
  • How the stages of testing relate to the project schedule
  • Logistics for each stage
    • Participants and responsibilities
    • Location
    • Timing
    • Intent
    • Methodology
    • Pre-requisites
    • Required equipment
    • Test standards
  • References to any customer test requirements
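
One practical way to keep these items consistent across stages is to treat the test plan as structured data rather than free-form prose. The sketch below is purely illustrative (the fields and stage details are assumptions, not a standard), but it shows how explicit coverage flags can expose gaps between test scopes:

```python
from dataclasses import dataclass

@dataclass
class TestStage:
    """One stage in the project test plan (fields are illustrative only)."""
    name: str                     # e.g. "HAT", "CAT"
    scope: str                    # what this stage does and does not cover
    location: str
    participants: list[str]
    prerequisites: list[str]      # what must be complete before testing starts
    covers_physical_io: bool      # explicit flags make scope gaps visible
    covers_final_tag_names: bool

test_plan = [
    TestStage(
        name="HAT",
        scope="Panel hardware: controller, I/O subsystem, cross-wiring",
        location="Panel shop",
        participants=["Panel fabricator", "Integrator"],
        prerequisites=["Panel fabrication complete"],
        covers_physical_io=True,
        covers_final_tag_names=False,  # gap: final tags only checked at CAT
    ),
    TestStage(
        name="CAT",
        scope="Configuration and graphics on a development system",
        location="Integrator's office",
        participants=["Integrator", "Customer"],
        prerequisites=["Configuration and graphics complete"],
        covers_physical_io=False,      # gap: I/O simulated via soft signals
        covers_final_tag_names=True,
    ),
]

# A simple coverage summary shows where no single stage verifies an item.
for stage in test_plan:
    print(f"{stage.name}: physical I/O={stage.covers_physical_io}, "
          f"final tags={stage.covers_final_tag_names}")
```

Here the coverage flags make the I/O gap from the earlier example explicit: no single stage verifies the physical I/O channels against the final tag names, so a bridging check (such as the database comparison above) needs to be written into the plan.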

On many projects, the test plan is one of the last documents written before testing begins. In reality, it should be one of the first documents produced, at the same time as the system design standards, to ensure a cohesive process. If the test plan is defined early and each party understands the test requirements, small adjustments can be made to the development processes that make testing easier and more efficient.

Join us for our Acceptance Testing Webinar

We will be hosting a free one-hour webinar with Tom on September 26, 2019 to delve deeper into acceptance testing.

As highlighted above, most automation projects involve some form of acceptance testing to verify that the completed work meets the project requirements. Depending on the scope, this could involve several stages of testing throughout the project.

During this webinar we’ll discuss how you can manage the many phases and forms of acceptance testing to ensure that they work together to provide the confidence you need that the project requirements have really been met and the project is ready to be put into service.

Please click here to register

Curious to know what all of those acronyms mean?

HAT - Hardware Acceptance Test
CAT - Customer Acceptance Test
SAT - Site Acceptance Test / Software Acceptance Test
FAT - Factory Acceptance Test
CWT - Customer Witness Test
IFAT - Integrated Factory Acceptance Test
IVT - Internal Verification Test