GUI Testing




Procedures and Best Practices for Writing GUI Tests 

This section describes procedures and best practices for writing GUI-based tests, with the goals of complete GUI coverage and adequate test depth. After a brief overview, the first part of this section discusses best practices and procedures for working with TestComplete to write GUI tests. The subsections that follow discuss the specific procedures for testing the GMAT GUI. 

Overview 

Writing, and even more importantly, performing GUI tests that provide complete and repeatable coverage of the GMAT GUI is difficult. GMAT GUI testing has the following goals: 

  • Reasonable confidence that GMAT is user-ready.

  • 100% coverage of the GUI - every widget in the GUI must be exercised at least once.

  • 100% requirements-driven testing - every GUI-related requirement must be tested to ensure that it is fulfilled by the GUI.

  • Repeatable tests.

  • Maintainable tests.

At its most basic, GUI testing requires clicking every button and widget in the GMAT GUI, entering valid, invalid, and boundary-condition input into every text widget, and verifying that the GUI responds to all inputs and displays all results correctly. Performing these actions manually would be error-prone, time-intensive, and impossible to repeat every time GMAT changes. So how do we ensure "reasonable confidence" and that every requirement and widget is tested? More importantly, how do we make the tests repeatable and maintainable?
This document, the GMAT Test Plan, addresses the first two questions. The battery of tests defined above (Verification, Validation, Stress testing, etc.) and the Requirements-To-Test Matrix are designed to ensure we meet every GUI requirement and hit every GMAT widget. The combination of all these tests is designed to provide reasonable confidence in the GMAT GUI.
Making the tests repeatable and maintainable requires automation of the GUI actions. The GMAT GUI Test Team has selected the TestComplete tool to provide this test automation.

Using TestComplete for GUI testing 

Best Practices 

  • Create and USE templates.

  • Use Variables (more maintainable and more general)

  • Make top-level project tests able to run stand-alone.

    • When trying to fix (maintain) tests, dependencies between top-level tests make it difficult to fix-run-verify.

  • Do NOT rename NameMapping items.

    • The names that TestComplete gives items are, admittedly, awful - but resist changing them.

    • TestComplete is very consistent in how it names objects, which is important when separate projects are creating tests that may be reused.

  • Make panel projects work with more than one object when it makes sense to do so – create the tests using variables, then swap the variable values before re-executing the tests.

For example, the Spacecraft Panel should be tested with the DefaultSC as well as with a user-created Spacecraft; the same applies to a dialog that is called from multiple places throughout the GUI.

  • Do NOT use Object Checkpoints! These checkpoints are exceedingly brittle and hard to update. Property checkpoints, and region checkpoints for images, are the best to use. 

  • Data-driven tests (see the sketch after this list).

    • For each case, give the input, check the view after Apply (e.g., is "+0" converted to "0"?), and check how the value looks in the Show Script columns.

    • Include a Log column that is printed out before each data-driven loop iteration.

  • Use pattern matches with Mission Show Script to test existence.

  • Make checks case sensitive, since GMAT is case sensitive.

  • Avoid absolute file paths; they WILL break. Base file paths off a global variable, GMAT_PF_DIR, which defines the location of the GMAT program files directory (the parent of the data directory). You can also use the Files Store.

  • Put Edit.KeyPress/SetText calls and property checks that are executed more than once in their own routines. Precision changes, compiler changes, etc. mean that these values have to be updated, which is extremely tedious if they are scattered all over the place.

  • Write checks for Warnings and Errors so they accept either dialog; the GMAT team likes to change which dialog it uses. 
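The following is a minimal sketch, in TestComplete's Python scripting, of the data-driven practices above: the spreadsheet path is built from the GMAT_PF_DIR project variable, the Log column is printed before each iteration, and comparisons are case sensitive. The name-mapped objects and spreadsheet column names here are assumptions for illustration, not the actual project's names.

    def Validation_ValidInput():
        # Build the spreadsheet path from the global GMAT_PF_DIR variable
        path = Project.Variables.GMAT_PF_DIR + "\\data\\InputTests.xlsx"
        driver = DDT.ExcelDriver(path, "ValidInput", True)
        while not driver.EOF():
            # Print the Log column before each data-driven iteration
            Log.Message(driver.Value["Log"])
            field = Aliases.GMAT.SpacecraftPanel.txtEpoch  # hypothetical mapping
            field.Keys(driver.Value["Input"])              # KeyPress-style entry
            Aliases.GMAT.SpacecraftPanel.btnApply.ClickButton()
            # Case-sensitive comparison, since GMAT is case sensitive
            if aqString.Compare(field.wText, driver.Value["AfterApply"], True) != 0:
                Log.Error("Value after Apply does not match: " + field.wText)
            driver.Next()
        DDT.CloseDriver(driver.Name)

Where a logged checkpoint is preferred, a property checkpoint (aqObject.CheckProperty) could replace the manual comparison.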

Recording Tips 

  • Turn off Visualizer for playback and recording.

  • Organize Projects using directories.

  • Record your tests naturally and then immediately fix them up.

  • Change items to variables where it makes sense (use project-level variables, not local variables; they are easier to reuse, find, and maintain).

  • Break recordings up into smaller, reusable tests.

  • Always record using named objects, not screen locations (e.g., right-click "Show Script", not click screen location 23, 45).

  • Change the default recording options so text entry is recorded as KeyPress, not SetText (more user-like).

  • If there are multiple paths to do something, there must be one test instance for each path. Corollary: use the most maintainable path when the purpose of the test is to test something else. For example, to open a panel you can either double-click or right-click and choose "Open". Double-click uses a screen location (easier to break) – test it once; all operations that test the panel itself should use the right-click "Open".

  • As much as possible, put logic in your tests to handle errors that would otherwise "crash" the test. When TestComplete goes off the beaten path, it tries to close unexpected dialogs using a default Close operation. However, that may not get GMAT back into a state that TestComplete can work with, leading to a cascading list of test errors that is not useful.

    • For example: if object dlgSaveConfirmClose exists, then log an error and click "No" (see the sketch after this list).

  • Avoid region checks as much as possible. Use them only for output images such as plots or Orbit Views, and maximize the panel first. When an image differs slightly from the baseline, update the pixel tolerance and the truth verification image so that the next run compares against an up-to-date baseline rather than reporting a false positive. 
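A minimal sketch of the dialog guard mentioned above, again in Python scripting; the alias and button names are hypothetical:

    def DismissSaveConfirmIfPresent():
        # Wait briefly for the unexpected save-confirmation dialog
        dlg = Aliases.GMAT.WaitAliasChild("dlgSaveConfirmClose", 2000)
        if dlg.Exists:
            Log.Error("Unexpected 'save before close' dialog appeared")
            dlg.btnNo.ClickButton()  # answer "No" so the run can continue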

Things to Avoid 

  • Watch out for the ComboBox ClickItem bug! Insert a statement to tab into the ComboBox before the ClickItem statement; otherwise, TestComplete can get stuck on playback (see the sketch after this list).

  • Tests that use Windows OS dialogs, like the OpenDialog and SaveDialog, can break when moving between Windows Versions. Use the recommended OpenFile method.

  • NameMapping is your friend. Fix most tests by fixing the name map. For example, tests get broken when a panel is retitled from "New Object" to "New Asteroid"; don't map a new object, fix the mapping for the old one. 
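Hedged sketches of the ComboBox workaround and the OpenFile approach above; the object and menu names are hypothetical:

    def SelectEpochFormat(fmt):
        panel = Aliases.GMAT.SpacecraftPanel
        # Tab into the combo box first to avoid the ClickItem playback hang
        panel.txtEpoch.Keys("[Tab]")
        panel.cmbEpochFormat.ClickItem(fmt)

    def OpenScript(path):
        # Drive the standard Open dialog through OpenFile rather than its
        # individual widgets, which differ between Windows versions
        Aliases.GMAT.MainMenu.Click("File|Open...")
        Aliases.GMAT.dlgOpen.OpenFile(path)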

Project Templates 

  • Set the template up to be easy to use. By using Project Variables and panel-specific keyword tests, the GMAT Panel Test Template Project provides most of the coverage necessary for nominal panels, with fewer errors.

  • Use keyword tests DIRECTLY (via "Add Existing Item…"); cloning the template should only clone tests that need to be changed. (Not done yet.) 

Procedures for Writing GUI Tests 

This section details the procedures and practices for writing GUI Tests using TestComplete. 

Procedures for Writing GUI Verification Tests 

For the purposes of verification testing, the GMAT GUI has been divided into five logical groupings: 

  1. Resource Tree/Resource Panels (e.g., Spacecraft Panel, GroundStation Panel, etc.)

  2. Mission Tree/Command Panels (e.g., Propagate Panel, Maneuver Panel, etc.)

  3. Script Editor

  4. Infrastructure (e.g., menus, toolbar, welcome page, about box, etc.) 

  5. End-to-end Mission tests.


The Script Editor and the Infrastructure will each be tested using one TestComplete project. These projects are responsible for testing all widgets and requirements for their respective functions, including side effects, such as the fact that modifying a script unsynchronizes the GUI and vice versa.
The Resource Tree/Resource Panels and the Mission Tree/Command Panels make extensive use of TestComplete project templates to ensure full testing of a specific panel/object. The template project automatically provides the logic for almost 40 different tests, along with over 30 utility helper tests (approximately 80% of the code), requiring only a small, nicely partitioned input from you (the "TBD" tests). By using Project Variables and panel-specific keyword tests, the GMAT Panel Test Template Project provides most of the coverage necessary for nominal panels, with fewer errors. 
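For instance, a minimal Python-scripting sketch of the variable-swap approach (the variable and keyword-test names are hypothetical):

    def TestSpacecraftPanelTwice():
        # Run the same panel test against the default and a user-created object
        Project.Variables.ObjectName = "DefaultSC"
        KeywordTests.OpenAndCheckPanel.Run()
        Project.Variables.ObjectName = "UserSC"  # user-created Spacecraft
        KeywordTests.OpenAndCheckPanel.Run()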

Procedures to Write GUI Tests for a Resource Panel 

  1. Collect requirements for the resource.

  2. Collect developer notes about resource idiosyncrasies.

  3. Populate the Requirements-To-Test Matrix with line items for all top-level keyword/script tests for the project. Mark the items as incomplete so that the automated test utility will not try to execute the tests until they are finished.

  4. Create the Test Resource Project per the instructions in GMAT Panel Test Template Instructions.

    1. Clone the GMAT Panel Test Template Project and add it to your test suite.

    2. Define the Project Variables for your copy of the project.

    3. Fill in the Template "TBD" Tests (these are panel-specific keyword tests, which have been designed to be modular and easy to fill in)

    4. Fill in the InputTests.xlsx Excel document, which is used for the data-driven tests (Validation_ValidInput and Validation_InvalidInput).

    5. Record any panel-specific tests (i.e., if the panel has dependencies on other objects created within GMAT).

    6. Run and Validate test results. 

  5. Add the new projects from the 'Resources' folder into one TestComplete Project Suite and enable all tests in each project's Execution Plan folder for execution. 

The 'Resources Tests' Suite is located in the SVN GUI test repo: /svn/GMAT/trunk/test/gui/Resources/All_Resources_Tests

The 'Resources_Tests_SolarSystems' Suite is located in the SVN GUI test repo: /svn/GMAT/trunk/test/gui/Resources/All_Resources_Tests/Resources_Tests_SolarSystems

The individual 'Resources_Tests' projects are located in the SVN GUI test repo: /svn/GMAT/trunk/test/gui/Resources

The individual 'SolarSystem' projects are located in the SVN GUI test repo: /svn/GMAT/trunk/test/gui/Resources/FRR-16_SolarSystem

  6. Add the new tests into the RTTM spreadsheet in the SVN GUI test repo: /svn/GMAT/trunk/test/gui/GMAT GATS files/Blank RTTM-By-Order.csv

Procedures to Write GUI Tests for a Command Panel 

  1. Collect requirements for the command.

  2. Collect developer notes about command panel idiosyncrasies.

  3. Populate the Requirements-To-Test Matrix with line items for all top-level keyword/script tests for the project. Mark the items as incomplete so that the automated test utility will not try to execute the tests until they are finished.

  4. Create the Test Command Project per the instructions in GMAT Command Test Template Instructions.

    1. Clone the GMAT Command Test Template Project and add it to your test suite.

    2. Define the Project Variables for your copy of the project.

    3. Fill in the Template "TBD" Tests (these are panel-specific keyword tests, which have been designed to be modular and easy to fill in)

    4. Fill in the InputTests.xlsx Excel document, which is used for the data-driven tests (Validation_ValidInput and Validation_InvalidInput).

    5. Record any panel-specific tests (i.e., if the panel has dependencies on other objects created within GMAT).

    6. Run and Validate test results.

  5. Add the projects from the 'Commands' folder into one TestComplete Project Suite and enable all tests in each project's Execution Plan folder for execution. 

The 'Commands' Suite is located in the SVN GUI test repo: /svn/GMAT/trunk/test/gui/Commands/All_Commands_Tests

The individual 'Commands' projects are located in the SVN GUI test repo: /svn/GMAT/trunk/test/gui/Commands/

  6. Add the tests into the RTTM spreadsheet located in the SVN GUI test repo: /svn/GMAT/trunk/test/gui/GMAT GATS files/Blank RTTM-By-Order.csv

Procedures for Writing GUI System Validation Tests 

System Validation Tests attempt to utilize every feature the way a user would when creating a mission. Each test focuses on one particular feature but incorporates it into a larger project to ensure the feature interacts correctly with everything in GMAT. The "writing" for these tests is done in TestComplete with the assistance of a pre-defined mission given to the testers by engineers. 

  1. Collect requirements for the resource or command.

  2. Collect developer notes about command panel idiosyncrasies.

  3. Collect a mission sequence from engineers.

  4. Collect truth data from engineers.

  5. Use TestComplete to record the creation of the mission's Resource and Mission Trees.

    1. Clone the GMAT System Test Template Project and add it to your test suite.

    2. Define the Project Variables for your copy of the project.

    3. Fill in the Template "TBD" Tests (these are panel-specific keyword tests, which have been designed to be modular and easy to fill in)

    4. Record any panel-specific tests (i.e., if the panel has dependencies on other objects created within GMAT).

    5. Run and Validate test results.

  6. Add the projects from the 'SystemTests' folder into one TestComplete Project Suite and enable all tests in each project's Execution Plan folder for execution.  

The 'System Tests' Suite is located in the SVN GUI test repo: /svn/GMAT/trunk/test/gui/System/SystemTests/All_System_Tests

The individual 'System Test' projects are located in the SVN GUI test repo: /svn/GMAT/trunk/test/gui/System/SystemTests

  7. Add the tests into the RTTM spreadsheet located in the SVN GUI test repo: /svn/GMAT/trunk/test/gui/GMAT GATS files/System RTTM.csv

Procedures and Best Practices for Writing Script Tests 

This section describes procedures and best practices for writing script-based tests, with the goals of complete system coverage and adequate test depth. The steps are outlined in the overview section below; subsequent sections then discuss each step in the testing process in detail. Requirements in GMAT are organized into logical groups, each given a unique high-level identifier (for example, FRR-1 Spacecraft Orbit State). The sections below describe the process used to test a logical requirements group. 

Overview

The following steps will be used to develop new script-based tests. The steps are applicable to new features as well as existing features with test gaps. Each step is discussed in more detail in the sections below. 

  1. Perform final review of requirements and update as necessary.

  2. Map existing tests to requirements.

  3. Plan new tests to cover test gaps.

  4. Review plan for new tests.

  5. Write files for new tests.

  6. Run tests/Debug tests.

  7. Check in bugs to Jira Software. 

  8. Check all new test files into SVN test repository: /svn/GMAT/trunk/test/script

  9. Inspect nightly report.

  10. Update user documentation.

Step 1: Inspecting Requirements. 

The purpose of the requirements inspection phase is to understand the requirements and perform a final bidirectional comparison of feature implementation and requirements. This is a visual inspection process to ensure the feature is ready for testing. In this step testers shall: 

  1. Verify that all features implemented in GMAT are represented by a requirement in the SRS.

  2. Verify all requirements in the SRS have appropriate features implemented. 

Step 2: Mapping existing tests to requirements. 

After inspecting requirements and addressing issues, testers shall map existing tests to the requirements by updating TC files for the requirements group. TC files are text files that accompany a script test file and contain metadata such as the test category and the requirements covered by the test. In this step of the testing process, testers shall: 

  1. Add/review requirement IDs in TC files.

    1. Include existing tests directly related to the requirement.

    2. Include existing tests in other feature areas that may be applicable to the requirement.

  2. Mark test categories as appropriate (Categories: Smoke, System, Functional, Numeric, End-To-End, InputValidation; Modes: Command, Function, ...). 

Step 3: Writing summaries for new tests. 

The first activity in this step is to analyze the coverage provided by the existing tests identified in Step 2. Once the test gaps are identified, the testers shall write a brief summary of each new test to be written to complete the coverage of the requirements group. Test summaries shall be added to the Test spreadsheet and categorized by the specified requirement. Once the tests are written, the new test summaries shall be included in the header of each script test file, and the requirements shall be added to the TC files.

Step 4: Reviewing summaries for new tests. 

The purpose of Step 4 is to ensure that tests planned to cover a given requirements group are complete and will adequately verify that the system correctly meets the requirements. During this phase of testing, a GMAT team member not supporting the tests for this particular requirements group will review the test summaries written in Step 3 and identify additional tests that are required. 

Step 5: Writing files for new tests. 

In step 5, testers shall write script files, TC files, and truth files for all tests identified in Step 4. These tests shall provide complete requirements coverage for all test categories indicated as necessary in the requirements spreadsheet. Note that not all requirements require all test categories. For example, features that are inherently graphical do not require certain types of tests. Features that are inherently commands do not require special command mode testing. The required mapping between requirements and test categories is contained in the SRS.
The testers shall: 

  1. Develop preliminary naming convention for new test scripts.

  2. Write the script files

  3. Write the truth files

  4. Write the .tc files


All script files shall contain a comment block header with the following information: 

  • Author name

  • Brief summary of the test

  • Source of truth data 


All TC files shall contain the following information: 

  • Test categories covered by the test.

  • Test requirements covered by the test.

  • Bugs found (this is added in Step 7)

  • GMAT output file names.

  • Truth file names.

  • Comparator for each file.

  • Tolerances for each comparator.


The guidelines below are a set of best practices for writing script-based tests.

  • Do not include any unnecessary objects in the test. For example, if the test does not require a 3D graphics plot, then do not include one in the test script. 

Step 6: Running tests/Debugging Tests. 

In Step 6, testers shall place all tests written in step 5 in their local test system repository, execute those tests, and address any issues found in the script, TC and truth files. 

Step 7: Checking in bugs.

In step 7, testers shall follow the procedures and best practices described in this document to commit bugs to the project's Jira Software database. 

Step 8: Checking in test files into SVN Test repository.

In step 8, final validation of all tests is performed. In the event that bugs were identified in Step 7, testers shall update the relevant TC files with the bug numbers. After the files are updated, they are checked into the test repository. 

Step 9: Inspect nightly coverage reports. 

The final step in the testing process is to verify that the new tests are executing correctly in the DBT system. The testers shall verify that bugs identified during the test process are marked as failing in the automated DBT report. 

Step 10: Update User Documentation. 

Testers gain unique insight into a feature during the test process. After testing is complete, any information that testers deem useful to users shall be added to the appropriate reference section in the GMAT User Guide. 

Procedures and Best Practices for Checking in Bugs

All issues discovered during a test system run are reported in Jira Software, GMAT's issue-tracking system, located at the address: https://gmat.atlassian.net/jira/software/c/projects/GMT/issues

Procedure 

The following procedure should be followed to submit an issue: 

  1. Navigate to Jira Software at the address above.

  2. Log in, if necessary.

  3. Choose GMAT as the product.

  4. Click the "Create" button.

  5. Select the Issue type (Bug, Epic, Story, etc.).

  6. Select the most appropriate item from the Component list. 

  7. Select your system configuration using the Platform and OS fields.

  8. Select appropriate values for Severity and Priority. See Appendix A for guidelines.

  9. Fill out the Summary field with a one-line summary of the issue.

  10. Fill out the Description field with a full description of the issue (see Best Practices below).

  11. Add an attachment, if possible (e.g. a script file illustrating the issue).

  12. Click "Create".

Best Practices

The following best practices should be followed when submitting an issue to the Jira Software system: 

  1. Submit an issue as soon as possible after you discover it, even if no other details are known. It is better to have an incomplete issue report in the system than no report at all.

  2. Try to duplicate the bug manually (outside of the test system) by either using the GUI or loading a script.

  3. For a script-related bug, write a script that contains the minimum number of lines necessary to duplicate the bug.

  4. In the Description field, include the following items:

    1. The name of the test

    2. The steps you followed to trigger the bug.

    3. The text of any error messages that are generated.

    4. The build date of the GMAT version that contains the bug.

  5. In the Attachments section, attach the following items (if appropriate):

    1. The script that duplicates the bug.

    2. GmatLog.txt if the bug relates to an error, warning, or crash.

    3. The output file that contains erroneous data.

    4. A sample file that illustrates correct data.

    5. A screenshot that illustrates a graphical bug.

  6. If you want feedback from another team member, include their email address in the CC: field. 

Test Peer Reviews

Test Reporting

Test Evaluation/Analysis

Developers review and analyze nightly regression reports and fix regressions. Testers address flawed tests; issues with flawed tests are checked into Jira Software under the category "5 Test System".

Test System Maintenance  

Appendix A: Bug Priority and Severity Values 

The priority of a bug is based on whether or not the bug must be fixed by the next release. Higher-priority bugs, such as P1, are fixed before lower-priority bugs, such as P3. After a release, all bugs are reviewed, and what was considered a P2 or P3 bug in the last release may become a P1 bug for the next release. The fact that a bug is originally entered as a P3 does not mean it will remain there indefinitely. Bug severity is a measure of the impact of the defect on system behavior. Crashes and loss of data are of critical severity, while a slight performance issue is of minor severity. Roughly speaking, the priority of a bug is determined by its severity and by the likelihood of occurrence or number of users affected. Defects that have high severity but a low likelihood of occurring are not necessarily high priority. Classification has some degree of subjectivity, and what may be acceptable to one person may not be acceptable to another.

P1 - Must Fix by Next Major Release

Examples:

  • Crashes with high frequency of occurrence, or a cumbersome or non-existent work-around.

  • Unmet requirement.

  • Critical numeric error.

  • Unacceptably low quality in highly visible component.

  • Critical loss of data.

  • Failed input validation with critical consequence.

P2 - Fix if Time Permits

Examples:

  • Hard to duplicate crashes.

  • Numeric issues regarding precision (agreement with truth is acceptable but would like it to be better).

  • Numeric issues when near a singularity.

  • Unclear/non-standard error messages.

P3 - Won't Fix in Next Release

Examples:

  • Moderate Performance Issues.

  • Numeric issues regarding precision (agreement with truth is acceptable but would like it to be better).

  • Unclear/non-standard error messages.

  • Enhancement requests.

  • Interfaces to old versions of third-party software.

 

There is always some subjectivity in the above classifications. Numerical errors related to precision issues, such as differences at the 10th significant figure, are handled on a case-by-case basis. Examples of loss of data include failure to report requested information or to save input data to the script file. Unacceptable quality issues include prominent misspellings and plots that cannot be interpreted.

Appendix B: GUI System Test Procedures

GMAT GUI Documents

  • \\mesa-file\595\GMAT\Builds\windows\VS2022_build_64\LatestCompleteVersion\docs
  • GMAT GUI > Help > Content


Overview

The GMAT GUI System Tests consist of 185 tests: 24 3D Model tests, 18 Orbit/Target Color tests, 14 GUI Rename/Delete tests, 14 Open Frame Interfaces tests, 6 Dynamic Data Display tests, 7 GroundTrackPlot tests, 4 GMATFunction tests, 6 Mars tests, and the remainder different end-to-end mission tests.

Location of the System Test Projects Spreadsheet: https://aetd-svn.gsfc.nasa.gov/svn/GMAT/trunk/test/gui/GMAT GATS files/System RTTM.csv


Obtain GMAT Executable

  • Set the path file with the correct version of GMAT at: /svn/GMAT/trunk/test/guiGMAT.bat
  • Copy all files from the mesa-file share to the local drive: \\mesa-file\595\GMAT\Builds\windows\VS2022_build_64\LatestCompleteVersion

Edit GMAT Configuration

  • In 'gmat_startup_file.txt', set:
    • MATLAB_MODE = SHARED
    • RUN_MODE = TESTING
    • DEBUG_MATLAB = ON
    • DEBUG_FILE_PATH = ON


Run Overnight Tests Automatically with TestComplete

  1. Open the System Test projects in TestComplete: /svn/GMAT/trunk/test/gui/System/SystemTests/All_System_Tests
  2. There are two folders: 'System_Tests' and 'System_Tests_Part_2'. Run each folder separately. 
    • Open TestComplete with the System_Tests projects; at the top left, find the 'Project Suite' parent tree.
    • Right-click on 'Project Suite' > Run System_Tests [ Project Suite ].
    • TestComplete will run all tests whose 'Enabled' boxes are checked. 
  3. After all projects execute, the log is generated in the 'Project Suite Log' parent folder. 
  4. Remove the old logs and then delete them.
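For unattended overnight runs, TestComplete can also be launched from the command line. A hedged sketch (the install and suite paths are illustrative; /run, /exit, and /SilentMode are standard TestComplete command-line switches):

    "C:\Program Files\SmartBear\TestComplete 15\Bin\TestComplete.exe" "C:\GMAT\test\gui\System\SystemTests\All_System_Tests.pjs" /run /exit /SilentMode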


Run Tests Individually with TestComplete

  1. Open the System Test projects in TestComplete: /svn/GMAT/trunk/test/gui/System/SystemTests
  2. Open TestComplete with the System_Tests projects; at the top left, find the 'Project Suite' parent tree.
  3. Expand the 'Project Suite' parent tree; there is a list of the System Test projects.
  4. Expand a System project and right-click on 'Execution Plan' to run the individual project via the 'System_Tests' suite.
  5. As an alternative, open an individual System Test project in TestComplete. 


Reference