Plans and Procedures

GMAT Test Plan

Introduction 

Overview 

The General Mission Analysis Tool (GMAT) is a desktop space mission analysis tool tailored to support missions involving groups of spacecraft interacting throughout a modeled time period. The complexity of this problem makes GMAT an intricate software system, and this complexity necessitates a rigorous testing environment to ensure that the system meets its objectives. GMAT is designed using an object-oriented architecture and implemented in C++ using extensive object-oriented structures. The object-based approach employed in GMAT's design and implementation makes the system robust and relatively easy to use for experienced analysts. The extent of the object model implemented to make GMAT a complete and robust system dictates a comprehensive testing philosophy. This document describes all levels of testing for GMAT. 

Document Scope 

This document serves as the test plan for all testing performed on the GMAT application. This includes developer tests, such as unit and integration tests, as well as tests performed later in the development cycle, such as system, stress, acceptance, and regression tests. Preparation for system testing consists of four major parts: 

  • The Test Environment and Tools chapter describes the systems and environments required for testing and the procedures for configuring those systems for testing.
  • The Test Process chapter sets the scope of system testing, the overall strategy to be adopted, the activities to be completed, the general resources required, and the methods and processes to be used to test the release.
  • Test Planning details the activities, dependencies, and effort required to conduct the System Test.
  • Test Cases documents the tests to be applied, the data to be processed, the automated testing coverage and the expected results. 


This document covers the first two of these items and establishes the framework used for GMAT test case development. The test cases themselves exist as separate components and are managed outside of, and concurrently with, this test plan. 

Overview of GMAT Development and Testing Process 

GMAT development is conducted as a cooperative effort between an analysis team, typically composed of flight dynamics specialists, and a development team consisting of talented software developers. New requirements for the system are defined and written by the analysis team. Mathematical and design specifications are derived from these requirements and compiled into a format that can be used to code the new functionality. Requirements, Specifications, and Designs are reviewed by the development team prior to implementation.


During the development process, new features of a component under development may be detected that need further specification. When that happens, the new features are discussed and collected together. This may result in an immediate update to the design documents, or it may result in collection of the new feature implementation for inclusion in a final update performed when the component is ready for integration. In either case, the design documentation is updated to reflect the implemented functionality prior to formal acceptance of the related components.


During development, the software undergoes internal testing within the development team at both a unit and an integration level. Unit testing is intended to exercise all of the executable paths through the code, validating that the internal workings of the code behave correctly. Integration testing takes unit-tested components and builds those components, either one at a time or collectively, into the system. The development team may interact with the analysis team during integration testing to confirm that the observed behavior of the new code conforms to the design specifications.


When the GMAT development team completes integration of new functionality into the system, that new functionality is ready for system test. GMAT system testing follows a more formal test procedure than unit or integration testing. New components are exercised both from the GMAT scripting language and from the GMAT Graphical User Interface (GUI). The test cases exercised are documented using the procedures described later in this document. Test cases are managed using a traceability matrix that lists all of the elements of GMAT visible at the user level, and matches those elements to test cases that are executed in system testing. This master traceability matrix is used to generate a spreadsheet of test cases each time GMAT enters a system test cycle. All tests are tracked using this spreadsheet; formal system test is complete when every test case has been exercised and the results of the tests have been tabulated and accepted after review.  

NASA IV&V 

The activities to be provided to the GMAT project by the National Aeronautics and Space Administration (NASA) IV&V Facility in Fairmont, WV have not been specified. In the event of such participation by the IV&V facility, the GMAT test team will update this section of the test plan. 

Test Objectives 

The plan developed in this document is intended to demonstrate that: 

  • The functionality delivered in GMAT is as specified by the Mathematical and Design specifications.
  • The software is stable and of high quality.
  • The software models spacecraft missions faithfully.
  • The software interfaces correctly with other systems, specifically MATLAB.
  • The software user interfaces are stable, complete, and understandable by novice and experienced users. 

These objectives are addressed through the development of a suite of test cases exercised on builds of the GMAT system. Each major release of GMAT is tested using this suite, and the results of the tests are collected and reviewed by all interested parties prior to release. 

Applicable Documents 
  • GMAT System Test Plan
  • GMAT Acceptance Test Plan
  • GMAT Requirements Specification
  • Minimum Requirements for Test Plan
  • User's Guide for Resource Tree Test Complete Template
  • GUI Test Matrix
  • "How to Write Script Tests" Wiki page
  • GUI Widget Test Procedures
  • GMAT Compiler Config (Windows) 
Roles and Responsibilities 


Role               | DOC | MAINTAIN TESTS | NEW GUI TESTS | NEW SCRIPT TESTS | ANALYSIS | GUI MODS
GUI Test Lead      |  X  |       X        |       X       |                  |    X     |    X
Script Test Lead   |  X  |       X        |               |        X         |    X     |    X
Software Engineers |  X  |                |               |        X         |    X     |    X
Test Team Members  |  X  |       X        |       X       |        X         |    X     |    X
System Engineers   |  X  |       X        |       X       |        X         |    X     |

Test Environment and Tools

GUI Test Environment and Tools 

The goal of the GMAT GUI testing is to provide complete test coverage for the GMAT GUI in an automatable and repeatable way.  The GMAT project uses SmartBear Software’s TestComplete to perform automated GUI testing on Windows platforms.  TestComplete allows the GMAT GUI Test Team to perform functional and regression testing of the GMAT GUI.

Platforms 

GUI testing for GMAT will use the following platforms: 

  • Windows XP
  • Windows 10 


GMAT is being developed for, and will eventually be tested on, the following platforms: 

  • Mac
  • Windows Vista
  • Linux
Software Tools 

In order to properly test GMAT's GUI, the GMAT project will use, at minimum, SmartBear Software's TestComplete. TestComplete is a tool that allows users to record, automate, and edit any GUI action. GMAT testers use TestComplete to record tests to be automated for functional and regression testing. Regression testing is required whenever any change is made to GMAT. The tests must therefore be maintained to ensure that changes made to GMAT do not cause major errors when using TestComplete for acceptance testing of a GMAT release or for nightly regression tests.
The GMAT project does not rely on specialized software to compare GUI output to the desired output. Difference software is used on some occasions, but the majority of outputs are analyzed and compared by the tester. If the outputs are vastly different, the tester looks for any issues that may have caused an error.

Test Environment Configuration Management 

The GMAT project maintains a file repository, accessible to all members, for GMAT builds as well as TestComplete tests. It is each tester's responsibility to download the most up-to-date version of GMAT and to rerun, and make any necessary edits to, the TestComplete tests each time there is a new GMAT build.
GMAT and TestComplete are installed on their default drives, usually the "C" drive, to allow TestComplete to work in the same basic environment no matter which tester is running the test.

Script Test Environment and Tools 

Platforms 

Script testing for GMAT is performed on a nightly basis in the following configurations: 

GMAT Build                          | Operating System   | Software
Windows, 64-bit, Visual Studio 2017 | Windows 10, 64-bit | MATLAB 2022a (64-bit)

In addition, testing is performed on demand for the following configurations: 

GMAT Build    | Operating System                                     | Software
Mac OS        | Mac OS 11.7                                          | MATLAB 2022a (64-bit)
Linux, 64-bit | Red Hat Enterprise Linux 7.9 (to be moved to RHEL 8) | MATLAB 2022a (64-bit)
Linux, 64-bit | Ubuntu 20.04 LTS (moving to 22.04 LTS soon)          | MATLAB 2022a (64-bit)

Software Tools 

Script testing is performed using a custom test system implemented in MATLAB.

Test Environment Configuration Management 
GMAT Installation and Test Readiness

Test Processes 

Overview 
GUI Testing 

All GMAT GUI testing is requirements driven. Though not every type of test verifies a specific requirement, requirements-driven testing ensures that GMAT works as intended. To ensure that all GUI-related requirements are tested, the GMAT project utilizes a Requirements-To-Test Matrix (RTTM). The RTTM is designed to track the progress of testing as well as to document how and when each requirement is tested. The RTTM helps to collect all the GMAT GUI testing in one document. 

GMAT GUI testing is divided into two broad categories: acceptance testing and regression testing. Acceptance testing tests the GMAT GUI's functionality by ensuring it meets its requirements, while regression testing ensures that any changes made to the GUI do not have negative effects.

Acceptance testing can be broken down further into several categories: verification testing, system validation testing, stress testing, and hybrid testing. Verification testing ensures that every GUI requirement within the GMAT Requirements document is covered and that every widget of the GUI is tested. System validation testing tests the GMAT GUI at a much more holistic level, exercising the entire application as a whole as opposed to direct verification of individual requirements. Stress testing verifies system performance under stressing conditions. Hybrid tests blur the boundary between script-based testing and GUI testing; for example, tests that execute scripts that manipulate inherently graphical features such as 3D graphics. 

Note that GMAT GUI testing is designed to be complementary to GMAT script testing and should take place concurrently with it. The GUI test system is contained in the GMAT internal Subversion repository located at: https://aetd-svn.gsfc.nasa.gov/svn/GMAT/trunk/test/gui

Requirements-To-Test Matrix 

The Requirements-To-Test Matrix (RTTM) is a document mapping the GMAT GUI requirements to specific tests that verify/validate the requirements' functionality and performance. The purpose of the RTTM is to ensure that all GMAT requirements are assigned to tests, to track the progress of the tests, and to document how and when they are tested. The RTTM is an Excel document that shall encompass both the Verification Testing and System Validation Testing. The RTTM allows the automated testing system (to be developed) to track GMAT Test Team progress as well as automated testing results. It shall contain at a minimum the following fields: 

  • Test coverage breakdown by major requirements group (FRR, FRC, etc.)
  • Tests for each unique requirements Id by category and mode
  • Path for each test Project
  • List of Test Cases for each Project
  • Name of object to be tested (e.g., Spacecraft or Scenario Name)
  • Operation (Clone object, ignored for System tests)
  • Requirements mapped to this Name/Operation
  • GUI Test Project and Test
  • Tester
  • % Complete (during development, only name/operations marked 100% complete will be tested)
  • Results from TestComplete (Pass/Fail/Warning/Not Tested), if needed
Developer Testing 
Verification 

GMAT Verification Testing verifies that an integrated GMAT build delivered to the GMAT GUI Test Team operates as designed and meets each functional and performance requirement allocated to the build. Verification Tests typically focus on each major subsystem of the GMAT GUI system and are conducted in an environment that replicates end-user environments as closely as possible. Verification Testing is the responsibility of the GMAT GUI Test Team.

GUI Verification testing, or "unit" testing, ensures that there are tests of every widget within the GUI as well as interaction tests of a unit. A unit within the GUI is defined to be a logical and visual grouping of widgets (e.g., panel, toolbar, menu, etc.) and generally is tied to an underlying object (e.g., Spacecraft, GroundStation, etc.) in the GMAT engine. In general, one test project will test one unit exclusively (clicking buttons, entering text, etc.) as well as all of its interactions within the unit. However, by necessity there is some overlap with other units; e.g., to test a GroundStation panel you need to create a GroundStation using the Resource Tree.

Each unit within the GUI is tested using one TestComplete project. As much as possible, the TestComplete project should use the appropriate TestComplete project template (Resource Tree, Mission Tree, etc.) that shall be developed by the GUI Test Team. The GMAT Test Template Projects for TestComplete shall be developed to provide a significant starting point for testing a GUI unit. The projects shall automatically provide the logic and tests to do most of the coverage necessary for nominal units of their types (panels, mission tree, etc.).

Validation 

GMAT System Validation Testing exercises the GMAT GUI in a manner as close as possible to end-user normal use, with the intent of qualifying GMAT as a fully functional application. System Validation Testing focuses on testing the GMAT application as a whole and concentrates on "fit for purpose" testing as opposed to direct verification of individual requirements and widgets. The purpose of System Validation is to certify that GMAT will accurately and effectively support all operational system-level requirements. This phase will also validate that the GMAT GUI can perform under all conceivable realistic stressed conditions. System Validation Testing is the responsibility of the GMAT GUI Test Team.

Validation testing will also be done using TestComplete. Testers will create TestComplete validation tests using the verification tests, either directly or as models, as "building blocks" that are combined to create scenarios intended to simulate the end-user experience. For example, the TestComplete test used to test the spacecraft unit will be used for a spacecraft creation test, and the test used for the graphical output unit will be used for an output generation test. These are then combined with other tests to design a mission that, for example, sends a satellite to Mars.

Acceptance 

GMAT GUI Acceptance Test is the formal execution of the full set of Verification and System Validation Tests against the final delivery of GMAT. (Final in this context refers to the GMAT build in which the complete set of functionalities defined by the GMAT Requirements has been implemented and in which all GMAT Release Bugs have been closed.) GMAT GUI Acceptance testing should take place concurrently with the GMAT Scripting Acceptance testing. The full set of Verification and System Validation Test products shall be organized into an Acceptance Test Results package and archived. The automated GMAT GUI Testing tool shall create an Acceptance Test Report summarizing the configuration of the GMAT environment, the tests executed, the requirements tested, and the pass/fail status of each test.

End-To-End 

GMAT GUI system tests are complete end-to-end tests of the GUI using real-world mission examples and use cases. These tests start from existing script system tests and input the entire test case through the GUI, then compare the results of execution via the GUI with execution via the script. Script tests are verified using external truth data, so comparing GUI results with script results is an acceptable way to verify that the GUI performs correctly.

Stress 

GMAT stress testing involves running stressing cases on the GUI to verify that the system performs as expected under heavy loading conditions, when memory may be low, or to ensure memory is released after extensive use of the system. Basic stress tests are performed by running existing GUI and script system tests in series in a single GMAT session and verifying that the system behaves as expected. Other stress tests include creating single tests that require extensive memory or run times.
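
As an illustration, the sketch below shows what a long-running stress case might look like as a GMAT script; the object names and loop count are illustrative only, not an actual test from the suite.

  % Long-running stress case sketch: repeated propagation in a single
  % session, useful for exposing memory growth or leaks over time.
  Create Spacecraft sat;
  Create ForceModel fm;
  fm.PrimaryBodies = {Earth};
  Create Propagator prop;
  prop.FM = fm;
  Create Variable i;
  BeginMissionSequence;
  For i = 1:1:100;
     Propagate prop(sat) {sat.ElapsedDays = 30};
  EndFor;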

Hybrid GUI/Script 

Hybrid tests are defined as tests that are performed by executing GMAT scripts that drive GUI functionality, such as 3D graphics or plots. In these cases, scripts are developed that exercise GUI functionality, and a tester inspects the behavior to verify it. Subsequently, TestComplete is used to regression test the behavior by running the scripts and comparing the graphical results with previous results visually verified to be correct.

Regression 

Either nightly or weekly (TBD), the full set of Verification and System Validation Tests shall be executed against the current build of GMAT. GMAT GUI Regression testing should take place concurrently with the GMAT Scripting Regression testing. 

Script Testing 

This section describes the types of script-based tests to be performed and procedures and best practices for writing script-based tests. The goal of script-based testing is to ensure GMAT is of high quality by providing full test coverage of the system and deep test coverage for complex components.
The script-based test system is contained in the GMAT internal Subversion repository located at https://aetd-svn.gsfc.nasa.gov/svn/GMAT/trunk/test/script. All existing tests are checked into this repository, in a requirements-based hierarchy under the input folder. Tests that are required for coverage but do not yet exist are tracked as bugs in GMAT's Jira Software database, using the ticket's issue type, component, and priority fields. Tests that are required to resolve existing bugs are tracked on those bugs, which should be assigned a status of "READY FOR TESTING" or "READY FOR VERIFICATION."

Developer Testing 

Verification 

GMAT script verification tests ensure that all features implemented in GMAT function correctly or within tolerance. Verification tests are classified according to test type and include numeric tests, system tests, smoke tests, and I/O tests, among others. Each script test file is accompanied by a .tc (Test Case) file that contains metadata about the test, including requirement and category mappings. There are several higher-level classifications of script tests used during the DBT process, before committing new code or nightly, to determine whether code additions or changes have unexpected adverse effects. These higher-level categories, such as smoke and system tests, are groupings of lower-level test categories and provide developers and testers insight into the system without running the complete test suite.

Numeric Tests 

Numeric tests are defined as tests of physical and mathematical models in GMAT. Numeric tests are performed by comparing GMAT output to external "truth" data, such as that obtained from STK, FreeFlyer, or external MATLAB programs.
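
For illustration, a minimal numeric test script might propagate a spacecraft and report its state to a file that the test system then compares against truth data. The sketch below uses illustrative object names, epoch, and file names, not an actual test from the suite.

  % Numeric test sketch: propagate one day and report the Cartesian state
  % for comparison against external truth data (e.g., from STK).
  Create Spacecraft sat;
  sat.DateFormat = UTCGregorian;
  sat.Epoch = '01 Jan 2020 12:00:00.000';

  Create ForceModel fm;
  fm.CentralBody = Earth;
  fm.PrimaryBodies = {Earth};

  Create Propagator prop;
  prop.FM = fm;

  Create ReportFile rf;
  rf.Filename = 'Numeric_TwoBody_Prop.report';

  BeginMissionSequence;
  Propagate prop(sat) {sat.ElapsedDays = 1.0};
  Report rf sat.X sat.Y sat.Z sat.VX sat.VY sat.VZ;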

Functional Tests 

Functional tests are defined as tests that verify non-numeric functionality, such as plotting styles, file formats, and control flow behavior.

Input Validation Tests 

Input validation tests are defined as tests that ensure user inputs are validated by the system and the correct error messages are provided for invalid user input.
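
For instance, an input validation test might assign a physically invalid value and confirm that GMAT emits the expected error message. The sketch below is illustrative only; the invalid value is rejected and the resulting message is checked by the test system against a truth file.

  % Input validation sketch: a negative eccentricity is invalid, so GMAT
  % should reject the assignment with a clear validation error.
  Create Spacecraft sat;
  sat.StateType = Keplerian;
  sat.ECC = -0.5;
  BeginMissionSequence;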

End-to-End Tests 

End-to-end tests are defined as tests that solve an end-to-end engineering problem, such as a lunar transfer or an orbital maneuver. These tests are "fit for use" tests and are applications of GMAT to real-world problems.

Special Test Categories 

Smoke tests are defined as a grouping of lower-level tests (Numeric, Functional, Input Validation, etc.) such that there is at least one test for each FRC and FRR requirements group. Smoke tests provide the smallest set of tests to get shallow coverage that exercises each system component. An example smoke test might verify that the system correctly performs one of the required state transformations GMAT supports, but not all state types or all special orbit cases.

System tests are defined as a grouping of lower-level tests (Numeric, Functional, Input Validation, etc.) such that running all system tests ensures broad system coverage with intermediate test depth. Continuing with the example provided in the smoke test definition, an example set of system tests for state conversions would provide at least one test for each type of state transformation required, but may not check those conversions for all orbit types (elliptic, parabolic, and hyperbolic).
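
As a concrete sketch of the state transformation example above, a smoke-level script might define a state in Keplerian elements and report it in Cartesian coordinates, exercising a single conversion path. The names and values below are illustrative only.

  % Smoke test sketch: Keplerian-to-Cartesian conversion for one nominal
  % elliptic orbit; system tests would add other state types and regimes.
  Create Spacecraft sat;
  sat.StateType = Keplerian;
  sat.SMA = 8000;
  sat.ECC = 0.05;
  sat.INC = 28.5;
  sat.RAAN = 45;
  sat.AOP = 90;
  sat.TA = 0;

  Create ReportFile rf;
  rf.Filename = 'Smoke_KepToCart.report';

  BeginMissionSequence;
  Report rf sat.X sat.Y sat.Z sat.VX sat.VY sat.VZ;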

Acceptance Tests 

The GMAT script acceptance test is the formal execution of the full set of verification and system validation tests on the final delivery of GMAT (where "final" in this context refers to the GMAT build in which the complete set of functionality defined by the GMAT Requirements has been implemented and in which all GMAT release bugs have been closed). GMAT script acceptance testing should take place concurrently with the GMAT GUI acceptance testing. The full set of verification and system validation test products shall be organized into an acceptance test results package and archived. The GTS shall create an acceptance test report summarizing the configuration of the GMAT environment, the tests executed, the requirements tested, and the pass/fail status of each test.

Regression Tests 

The full set of verification and system validation tests is executed weekly against the current build of GMAT. GMAT GUI regression testing should take place concurrently with the GMAT script regression testing.
Active GMAT testers/developers have the primary responsibility to analyze regression reports and address test regressions due to code modifications and additions. If analysis indicates a regression is the result of a code change or a data file update, then it is the responsibility of the test analysts to modify/update the test accordingly to keep it up to date with the truth verification criteria.

Stress Tests

Process Control 

Procedures and Best Practices for Writing GUI Tests 

This section describes procedures and best practices for writing GUI-based tests with the goals of complete GUI coverage and adequate test depth. After a brief overview, the first part of this section will discuss the best practices and procedures for working with TestComplete to write GUI tests. The following subsections will discuss the specific procedures and processes for testing the GMAT GUI.

Overview 

Writing, and even more importantly, performing GUI tests that provide complete and repeatable coverage of the GMAT GUI is difficult. The GMAT GUI testing has the following goals: 

  • Reasonable Confidence that GMAT is user-ready
  • 100% coverage of the GUI - every widget in the GUI must be exercised at least once
  • 100% Requirements Driven Testing - every GUI-related requirement must be tested to ensure that it is fulfilled by the GUI
  • Repeatable
  • Maintainable 

At its most basic, GUI testing requires clicking on every button and widget in the GMAT GUI, entering valid, invalid, and boundary-condition input into every text widget, and assessing that the GUI responds to all inputs and displays all results correctly. Manually performing these actions would be error-prone, time-intensive, and impossible to repeat every time GMAT changes. So how do we ensure "reasonable confidence" and that every requirement and widget is tested? More importantly, how do we make the tests repeatable and maintainable?

This document, the GMAT Test Plan, addresses the first two questions. The battery of tests defined above (Verification, Validation, Stress, System, Regression, etc.) and the Requirements-To-Test Matrix are designed to ensure we meet every GUI requirement and hit every GMAT widget. The combination of all these tests is designed to provide reasonable confidence in the GMAT GUI. Making the tests repeatable and maintainable requires automation of the GUI actions. The GMAT GUI Test Team has selected the TestComplete tool to provide this test automation.

Using TestComplete for GUI testing 

Best Practices 
  • Create and USE templates.
  • Use variables (more maintainable and more general).
  • Make top-level project tests stand-alone. When trying to fix (maintain) tests, dependencies between top-level tests make it difficult to fix-run-verify.
  • Do NOT rename NameMapping items. The names that TestComplete gives items are… awful… but resist changing them. TestComplete is very consistent in how it names objects, which is important when separate projects create tests that may be reused.
  • Make panel projects work with more than one object when it makes sense to do so. Tests should be created using variables, and the variables swapped before re-executing the tests. For example, the Spacecraft panel should be tested with the DefaultSC as well as with a user-created Spacecraft, or a dialog that is called in multiple places throughout the GUI.
  • Do NOT use Object Checkpoints! These checkpoints are exceedingly brittle and hard to update. Property checkpoints and region checkpoints are the best to use.
  • Use data-driven tests. Give input and view after Apply (e.g., is "+0" converted to "0", and how does it look in the Show Script columns?). Include a Log column that is printed out before each data-driven loop. Use pattern matches with Mission Show Script to test existence.
  • Make checks case sensitive, since GMAT is case sensitive.
  • Avoid absolute file paths; they WILL break. Base file paths off of a global variable, GMAT_PF_DIR, which defines the location of the GMAT program files directory (the parent of the data directory). You can also use the Files Store.
  • Put Edit.KeyPress/SetText calls and PropertyChecks that are executed more than once in their own routines. Precision changes, compiler changes, etc. mean these values have to be updated, which is extremely tedious if they are scattered all over the place.
  • Checks for Warnings and Errors should be written to check for either; the GMAT team likes to change which dialog it uses.
Recording Tips 
  • Turn off Visualizer for playback and recording.
  • Organize Projects using directories.
  • Record your tests naturally and then immediately fix.
  • Change items to variables where it makes sense (use project-level variables, not local variables; they are easier to reuse, find, and maintain).
  • Break recordings up into smaller, reusable tests.
  • Always record using named objects, not screen locations (e.g., right-click "Show Script", not click at screen location 23, 45).
  • Change default options so it is KeyPress not SetText (more user-like).
  • If there are multiple paths to do something, there must be one test for each path. As a corollary, use the most maintainable path when the purpose of the test is to test something else. E.g., to open a panel, you can either double-click or right-click and choose "Open"; double-click uses a screen location (easier to break), so test it once, and all operations that test the panel itself should use right-click "Open".
  • As much as possible, put logic in your tests to handle errors that would otherwise "crash" the test. When TestComplete goes off the beaten path, it tries to close unexpected dialogs using a default Close operation. However, that may not get GMAT back into a state that TestComplete can work with, leading to a cascading list of test errors that is not useful. For example: if Object dlgSaveConfirmClose exists, log an error and click "No".
  • Avoid Region Checks as much as possible. Use them only for output images such as plots or Orbit Views, and maximize the panel first. Update the pixel tolerance whenever the image is slightly different; if a result is a false positive, update the truth verification image for the next run. 
Things to Avoid 
  • Watch out for the ComboBox ClickItem bug! You need to insert a statement to tab into the ComboBox before the ClickItem statement. Otherwise TestComplete can get stuck on playback.
  • Tests that use Windows OS dialogs, like the OpenDialog and SaveDialog, can break when moving between Windows Versions. Use the recommended OpenFile method.
  • NameMapping is your friend: fix most tests by fixing the name map. E.g., when tests get broken by a panel being retitled from "New Object" to "New Asteroid", don't map a new object; fix the mapping for the old object.
Project Templates 
  • Set templates up to be easy to use: by using Project Variables and panel-specific keyword tests, the GMAT Panel Test Template Project is able to provide most of the coverage necessary for nominal panels, with fewer errors.
  • Use keyword tests DIRECTLY (add existing item…); cloning the template should clone only the tests that need to be changed. (Not done yet.)
Procedures for Writing GUI Tests 

This section details the procedures and practices for writing GUI Tests using TestComplete.

Procedures for Writing GUI Verification Tests 

For the purposes of verification testing, the GMAT GUI has been divided into four logical groupings: 

  1. Resource Tree/Resource Panels (e.g., Spacecraft Panel, GroundStation panel, etc)
  2. Mission Tree/Command Panels (e.g., Propagate Panel, Maneuver Panel, etc)
  3. Script Editor
  4. Infrastructure (e.g., menus, toolbar, welcome page, about box, etc.) 

The Script Editor and the Infrastructure will each be tested using one TestComplete project. These projects will be responsible for testing all widgets and requirements for their respective functions, including side effects such as the fact that modifying a script unsynchronizes the GUI and vice versa.

The Resource Tree/Resource Panels and the Mission Tree/Command Panels make extensive use of TestComplete project templates to ensure full testing of a specific panel/object. Each template project automatically provides the logic and tests for almost 40 different tests with over 30 utility helper tests (approximately 80% of the code), with only a small, nicely partitioned input required from the tester (the "TBD" tests). By using Project Variables and panel-specific keyword tests, the GMAT Panel Test Template Project is able to provide most of the coverage necessary for nominal panels, with fewer errors.

Procedures to Write GUI Tests for a Resource Panel 
  1. Collect requirements for the resource.
  2. Collect developer notes about resource idiosyncrasies.
  3. Populate the Requirements-To-Test Matrix with line items for all top-level keyword/script tests for the project. Mark the items as incomplete so that the automated test utility will not try to execute the tests until they are finished.
  4. Create Test Resource Project per instructions in GMAT Panel Test Template Instructions
    1. Clone the GMAT Panel Test Template Project and add it to your test suite.
    2. Define the Project Variables for your copy of the project.
    3. Fill in the Template "TBD" Tests (these are panel-specific keyword tests, which have been designed to be modular and easy to fill in)
    4. Fill in the InputTests.xlsx Excel document, which is used for the data-driven tests (Validation_ValidInput and Validation_InvalidInput)
    5. Record any panel-specific tests (i.e., if the panel has dependencies on other objects created within GMAT)
    6. Run and Validate test results.
  5. Add the projects from the 'Resources' folder into one TestComplete Project Suite and check all tests from each Project's Execution Plan folder to be executed.  
Procedures to Write GUI Tests for a Command Panel 
  1. Collect requirements for the command.
  2. Collect developer notes about command panel idiosyncrasies.
  3. Populate the Requirements-To-Test Matrix with line items for all top-level keyword/script tests for the project. Mark the items as incomplete so that the automated test utility will not try to execute the tests until they are finished.
  4. Create Test Command Project per instructions in GMAT Command Test Template Instructions
    1. Clone the GMAT Command Test Template Project and add it to your test suite.
    2. Define the Project Variables for your copy of the project.
    3. Fill in the Template "TBD" Tests (these are panel-specific keyword tests, which have been designed to be modular and easy to fill in)
    4. Fill in the InputTests.xlsx Excel document, which is used for the data-driven tests (Validation_ValidInput and Validation_InvalidInput)
    5. Record any panel-specific tests (i.e., if the panel has dependencies on other objects created within GMAT)
    6. Run and Validate test results.
  5. Add the projects from the 'Commands' folder into one TestComplete Project Suite and check all tests from each Project's Execution Plan folder to be executed.  
Procedures for Writing GUI System Validation Tests 

System Validation Tests attempt to utilize every feature the way a user would when creating a mission. Each test focuses on one particular feature but incorporates it into a larger project to ensure the feature interacts with everything in GMAT correctly. The "writing" for these tests is done in TestComplete with the assistance of a pre-defined mission given to the testers by engineers. 

  1. Collect requirements for the resource or command.
  2. Collect developer notes about command panel idiosyncrasies.
  3. Collect a mission sequence from engineers.
  4. Collect truth data from engineers.
  5. Use TestComplete to record the creation of the mission's Resource and Mission Trees.
    1. Clone the GMAT System Test Template Project and add it to your test suite.
    2. Define the Project Variables for your copy of the project.
    3. Fill in the Template "TBD" Tests (these are panel-specific keyword tests, which have been designed to be modular and easy to fill in)
    4. Record any panel-specific tests (i.e., if the panel has dependencies on other objects created within GMAT)
    5. Run and Validate test results.
  6. Add the projects from the 'SystemTests' folder into one TestComplete Project Suite and check all tests from each Project's Execution Plan folder to be executed.  
 Procedures and Best Practices for Writing Script Tests 

This section describes procedures and best practices for writing script-based tests with the goals of complete system coverage and adequate test depth. The steps are outlined in the overview section below and then subsequent sections discuss each step in the testing process in detail. Requirements in GMAT are organized into logical groups and given a unique high-level identifier (FRR-1 Spacecraft Orbit State for example). The section below describes the process used to test a logical requirements group. 

Overview

The following steps will be used to develop new script-based tests. The steps are applicable to new features as well as existing features with test gaps. Each step is discussed in more detail in the sections below. 

  1. Perform final review of requirements and update as necessary.
  2. Map existing tests to requirements.
  3. Plan new tests to cover test gaps.
  4. Review plan for new tests.
  5. Write files for new tests.
  6. Run tests/Debug tests.
  7. Check bugs into Jira Software.
  8. Check all new test files into the SVN test repository.
  9. Inspect the nightly coverage report to verify tests have been assimilated into DBT.
  10. Update user documentation.
Step 1: Inspecting Requirements.

The purpose of the requirements inspection phase is to understand the requirements and perform a final bidirectional comparison of feature implementation and requirements. This is a visual inspection process to ensure the feature is ready for testing. In this step testers shall: 

  1. Verify that all features implemented in GMAT are represented by a requirement in the SRS.
  2. Verify all requirements in the SRS have appropriate features implemented.
Step 2: Mapping existing tests to requirements. 

After inspecting requirements and addressing issues, testers shall map existing tests to the requirements by updating TC files for the requirements group. TC files are text files that accompany a script test file and contain metadata such as the test category and requirements covered by the test. In this step of the testing process, testers shall: 

  1. Add/Review requirements Id in TC files.
    1. Include existing tests directly related to requirement.
    2. Include existing tests in other feature areas that may be applicable to requirement.
  2. Mark test categories as appropriate (Categories: Smoke, System, Functional, Numeric, End-To-End, InputValidation, …; Modes: Command, Function, …).
Step 3: Writing summaries for new tests.

The first activity in this step is to analyze the coverage provided by the existing tests identified in Step 2. Once the test gaps are identified, the testers shall write a brief summary of each new test to be written to complete the coverage of the requirements group. Test summaries shall be added to the Test spreadsheet located here and categorized by specified requirement. Once the tests are written, the new test summaries shall be included in the header of the script test files, and the requirements shall be added to the TC files.

Step 4: Reviewing summaries for new tests. 

The purpose of Step 4 is to ensure that tests planned to cover a given requirements group are complete and will adequately verify that the system correctly meets the requirements. During this phase of testing, a GMAT team member not supporting the tests for this particular requirements group will review the test summaries written in Step 3 and identify additional tests that are required.

Step 5: Writing files for new tests. 

In step 5, testers shall write script files, TC files, and truth files for all tests identified in Step 4. These tests shall provide complete requirements coverage for all test categories indicated as necessary in the requirements spreadsheet. Note that not all requirements require all test categories. For example, features that are inherently graphical do not require certain types of tests. Features that are inherently commands do not require special command mode testing. The required mapping between requirements and test categories is contained in the SRS.
The testers shall: 

  1. Develop preliminary naming convention for new test scripts.
  2. Write the script files.
  3. Write the truth files.
  4. Write the .tc files.

All script files shall contain a comment block header with the following information: 

  • Author name
  • Brief summary of the test
  • Source of truth data 

All TC files shall contain the following information: 

  • Test categories covered by the test.
  • Test requirements covered by the test.
  • Bugs found (this is added in Step 7)
  • GMAT output file names.
  • Truth file names.
  • Comparator for each file.
  • Tolerances for each comparator.
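
The TC file syntax itself is defined by the MATLAB test system; purely as a hypothetical sketch of the fields listed above (the field names and values here are assumptions, not the actual format), a TC file might read:

  Categories: Numeric, System
  Requirements: FRR-1.2.3
  Bugs: GMT-1234
  OutputFiles: Numeric_TwoBody_Prop.report
  TruthFiles: Numeric_TwoBody_Prop.truth
  Comparator: NumericCompare
  Tolerance: 1e-6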

The guidelines below are a set of best practices for writing script-based tests. 

  • Do not include any unnecessary objects in the test. For example, if the test does not require a 3D graphics plot, then do not include one in the test script.
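
For example, a test of impulsive maneuver parsing needs only the objects the maneuver requires. The sketch below (illustrative names and values) shows such a lean script, with no plots, reports, or other resources beyond what the feature under test actually needs.

  % Lean test sketch: only the objects required by the feature under test.
  Create Spacecraft sat;
  Create ImpulsiveBurn burn;
  burn.Element1 = 0.5;
  BeginMissionSequence;
  Maneuver burn(sat);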
Step 6: Running tests/Debugging Tests. 

In Step 6, testers shall place all tests written in step 5 in their local test system repository, execute those tests, and address any issues found in the script, TC and truth files.

Step 7: Checking in bugs. 

In step 7, testers shall follow procedures and best practices described in this document to commit bugs to the project's Jira Software database.

Step 8: Checking in test files into SVN Test repository. 

In step 8, final validation of all tests is performed. If bugs were identified in Step 7, testers shall update the relevant TC files with the bug numbers. After the files are updated, they are checked into the test repository.

Step 9: Inspect nightly coverage reports. 

The final step in the testing process is to verify that the new tests are executing correctly in the DBT system. The testers shall verify that tests for bugs identified during the test process are marked as failing in the automated DBT report.

Step 10: Update User Documentation. 

Testers gain unique insight into a feature during the test process. After testing is complete, any information that testers deem useful to users shall be added to the appropriate reference section in the GMAT User Guide.

Procedures and Best Practices for Checking in Bugs. 

All issues discovered during a test system run are reported in Jira Software, GMAT's issue-tracking system, located at: https://gmat.atlassian.net/jira/software/c/projects/GMT/issues

Procedure 

The following procedure should be followed to submit an issue: 

  1. Navigate to Jira Software at the address above.
  2. Log in, if necessary.
  3. Choose GMAT as the product.
  4. Click the "Create" button.
  5. Select the issue type (Bug, Epic, Story, etc.).
  6. Select the most appropriate item from the Component list. 
  7. Select your system configuration using the Platform and OS fields.
  8. Select appropriate values for Priority. See Appendix A for guidelines.
  9. Fill out the Summary field with a one-line summary of the issue.
  10. Fill out the Description field with a full description of the issue (see Best Practices below).
  11. Add an attachment, if possible (e.g. a script file illustrating the issue).
  12. Click "Create".
Best Practices

The following best practices should be followed when submitting an issue to the Jira Software system: 

  1. Submit an issue as soon as possible after you discover it, even if no other details are known. It is better to have an incomplete issue report in the system than no report at all.
  2. Try to duplicate the bug manually (outside of the test system) by either using the GUI or loading a script.
  3. For a script-related bug, write a script that contains the minimum number of lines necessary to duplicate the bug.
  4. In the Description field, include the following items:
    1. The name of the test.
    2. The steps you followed to trigger the bug.
    3. The text of any error messages that are generated.
    4. The build date of the GMAT version that contains the bug.
  5. In the Attachments section, attach the following items (if appropriate):
    1. The script that duplicates the bug.
    2. GmatLog.txt if the bug relates to an error, warning, or crash.
    3. The output file that contains erroneous data.
    4. A sample file that illustrates correct data.
    5. A screenshot that illustrates a graphical bug.
  6. If you want feedback from another team member, include their email address in the CC: field.