Plans and Procedures
- Deepa Chavali
- Tetyana Royzman
- Steven Slojkowski
- Steven Hughes
GMAT Test Plan
Roles and Responsibilities
The following table outlines the responsibilities of various members of the GMAT team.
Role | Documentation | Create New GUI Tests | Create New Script Tests | Analysis of New Features | GUI MODS | |
---|---|---|---|---|---|---|
GUI Test Lead | X | X | X | X | X | |
Script Test Lead | X | X | X | X | X | |
Software Engineers | X | X | X | X | | |
Test Team Members | X | X | X | X | X | X |
System Engineers | X | X | X | X | X | |
Test Environment and Tools
GUI Test Environment and Tools
The goal of the GMAT GUI testing is to provide complete test coverage for the GMAT GUI in an automatable and repeatable way. The GMAT project uses SmartBear Software’s TestComplete to perform automated GUI testing on Windows platforms. TestComplete allows the GMAT GUI Test Team to perform functional and regression testing of the GMAT GUI.
GUI testing for GMAT will use the following platforms:
- Windows 10
GMAT is developed for, and tested on, the following platforms:
- Mac
- Windows
- Linux (CentOS and Ubuntu)
In order to properly test GMAT's GUI, the GMAT project will use, at minimum, SmartBear Software's TestComplete. TestComplete is a tool that allows users to record, automate, and edit any GUI action. GMAT testers use TestComplete to record tests to be automated for functional and regression testing. Regression testing is required whenever any change is made to GMAT. The tests must therefore be maintained to ensure that changes made to GMAT do not cause major errors when TestComplete is used for acceptance testing of a GMAT release or for nightly regression tests.
The GMAT project does not rely on specialized software to compare GUI output to the desired output. Difference software is used on occasion, but the majority of outputs are analyzed and compared by the tester. If the outputs differ significantly, the tester looks for issues that may have caused the error.
The GMAT project has a file repository, to which all members have access, for GMAT builds as well as TestComplete tests. It is each tester's responsibility to download the most up-to-date version of GMAT and to rerun, and make any necessary edits to, the TestComplete tests each time there is a new GMAT build. GMAT and TestComplete are installed on their default drives, usually the "C" drive, so that TestComplete works in the same basic environment no matter which tester is running the test.
Script Test Environment and Tools
Script testing for GMAT is performed on a regular basis for all supported platforms. Script testing is performed using a custom test system implemented in MATLAB.
Test Processes
GUI Testing
All GMAT GUI testing is requirements driven. Though not every type of test targets a specific requirement, requirements-driven testing ensures that GMAT works as intended. To ensure that all GUI-related requirements are tested, the GMAT project utilizes a Requirements-To-Test Matrix (RTTM). The RTTM is designed to track the progress of testing as well as to document how and where each requirement is tested. The RTTM helps to collect all the GMAT GUI testing in one document.
GMAT GUI testing is divided into two broad categories of tests: Acceptance testing and Regression testing. Acceptance testing tests the GMAT GUI's functionality by ensuring it meets its requirements, while regression testing ensures that any changes made to the GUI do not have negative effects.
Acceptance testing can be broken down further into several categories: verification testing, system validation testing, stress testing, and hybrid testing. Verification testing ensures that every GUI requirement within the GMAT Requirements document is covered and that every widget of the GUI is tested. System Validation testing tests the GMAT GUI at a much more holistic level, exercising the entire application as a whole as opposed to directly verifying individual requirements. Stress testing verifies system performance under stressing conditions. Hybrid tests blur the boundary between script-based testing and GUI testing; for example, tests that execute scripts that manipulate inherently graphical features such as 3D graphics.
Note that GMAT GUI testing is designed to be complementary to GMAT script testing and should take place concurrently with it.
Requirements-To-Test Matrix
The Requirements-To-Test Matrix (RTTM) is a document mapping the GMAT GUI requirements to specific tests that verify/validate the requirements' functionality and performance. The purpose of the RTTM is to ensure that all GMAT requirements are assigned to tests, to track the progress of the tests, and to document how and when they are tested. The RTTM is an Excel document that shall encompass both the Verification Testing and System Validation Testing. The RTTM allows the automated testing system to track GMAT test team progress as well as automated testing results. It shall contain at a minimum the following fields:
- Test coverage breakdown by major requirements group (FRR, FRC, etc.)
- Tests for each unique requirements Id by category and mode
- Path for each test Project
- List of Test Cases for each Project
- Name of object to be tested (e.g., Spacecraft or Scenario Name)
- Operation (Clone object, ignored for System tests)
- Requirements mapped to this Name/Operation
- GUI Test Project and Test
- Tester
- % Complete (during development, only name/operations marked 100% complete will be tested)
- Results from TestComplete (Pass/Fail/Warning/Not Tested), if needed.
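For illustration only, a single RTTM row might look like the following; the requirement Id, project names, and values here are hypothetical, not taken from the actual matrix.

Requirement Id | Name | Operation | GUI Test Project and Test | Tester | % Complete | Result
---|---|---|---|---|---|---
FRR-1.1 | Spacecraft | Clone object | Resource_Spacecraft / Validation_ValidInput | (tester) | 100% | Pass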
Verification
GMAT Verification Testing verifies that an integrated GMAT build delivered to the GMAT GUI Test Team operates as designed and meets each functional and performance requirement allocated to the build. Verification Tests typically focus on each major subsystem of the GMAT GUI system and are conducted in an environment that replicates end-user environments as closely as possible. Verification Testing is the responsibility of the GMAT GUI Test Team. GUI Verification testing, or "unit" testing, ensures that there are tests of every widget within the GUI as well as interaction tests of a unit. A unit within the GUI is defined to be a logical and visual grouping of widgets (e.g., panel, toolbar, menu, etc.) and is generally tied to an underlying object (for example, Spacecraft or GroundStation) in the GMAT engine. In general, one test project will test one unit exclusively (clicking buttons, entering text, etc.) as well as all of the interactions within the unit. However, by necessity there is some overlap with other units; for example, to test a GroundStation panel you need to create a GroundStation using the Resource Tree. Each unit within the GUI is tested using one TestComplete project. As much as possible, the TestComplete project should use the appropriate TestComplete project template (Resource Tree, Mission Tree, etc.) that shall be developed by the GUI Test Team. The GMAT Test Template Projects for TestComplete shall be developed to provide a significant starting point for testing a GUI unit. The projects shall automatically provide the logic and tests to do most of the coverage necessary for nominal units of their types (panels, mission tree, etc.).
Validation
GMAT System Validation Testing exercises the GMAT GUI in a manner as close as possible to end-user normal use with the intent of qualifying GMAT as a fully functional application. System Validation Testing focuses on testing the GMAT application as a whole and concentrates on "fit for purpose" testing as opposed to direct verification of individual requirements and widgets. The purpose of System Validation is to certify that GMAT will accurately and effectively support all operational system level requirements. This phase will also validate that GMAT GUI can perform under all conceivable realistic stressed conditions. System Validation Testing is the responsibility of the GMAT GUI Test Team. Validation testing will also be done using TestComplete. Testers will create TestComplete validation tests using the verification tests either directly or as models for "building blocks", which are combined to create scenarios that are intended to simulate the end user experience. For example, the TestComplete test used to test the spacecraft unit will be used for a spacecraft creation test, and the test used for the graphical output unit will be used for an output generation test. These are then combined with other tests to design a mission that, for example, sends a satellite to Mars.
Acceptance
GMAT GUI Acceptance Test is the formal execution of the full set of Verification and System Validation Tests against the final delivery of GMAT. (Final in this context refers to the GMAT build in which the complete set of functionalities defined by the GMAT Requirements has been implemented and in which all GMAT Release Bugs have been closed.) GMAT GUI Acceptance testing should take place concurrently with the GMAT Scripting Acceptance testing. The full set of Verification and System Validation Test products shall be organized into an Acceptance Test Results package and archived. The automated GMAT GUI Testing tool shall create an Acceptance Test Report summarizing the configuration of the GMAT environment, the tests executed, the requirements tested, and the pass/fail status of each test.
End-To-End
GMAT GUI system tests are complete End-To-End tests of the GUI using real-world mission examples and use cases. These tests start from existing script system tests and input the entire test case through the GUI and compare the results of the execution via the GUI with execution via the script. Script tests are verified using external truth data so comparing GUI results with Script results is acceptable to verify the GUI performs correctly.
Stress
GMAT stress testing involves running stressing cases on the GUI to verify that the system performs as expected under heavy loading conditions, when memory may be low, or to ensure that memory is released after extensive use of the system. Basic stress tests are performed by running existing GUI and script system tests in series in a single GMAT session and verifying that the system behaves as expected. Other stress tests include creating single tests that require extensive memory or run times.
Hybrid GUI/Script
Hybrid tests are defined as tests that are performed by executing GMAT scripts that drive GUI functionality, such as 3D graphics or plots. In these cases, scripts are developed that exercise GUI functionality, and the behavior is inspected by a tester to verify it. Subsequently, TestComplete is used to regression test the behavior by running the scripts and comparing the graphical results with previous results visually verified to be correct.
Regression
On a regular basis, the full set of Verification and System Validation Tests shall be executed against the current build of GMAT. GMAT GUI Regression testing should take place concurrently with the GMAT Scripting Regression testing.
Script Testing
This section describes the types of script-based tests to be performed and procedures and best practices for writing script-based tests. The goal of script-based testing is to ensure GMAT is of high quality by providing full test coverage of the system and deep test coverage for complex components. The script-based test system is contained in the GMAT internal Git repository located at https://aetd-git.gsfc.nasa.gov/gmat/gmat-test. All existing tests are checked into this repository, in a requirements-based hierarchy under the input folder. Tests that are required for coverage but do not yet exist will be tracked as bugs in GMAT's Jira Software database, using the issue type, component, and priority of the ticket. Tests that are required to resolve existing bugs will be tracked on those bugs, which should be assigned a status of "READY FOR TESTING" or "READY FOR VERIFICATION."
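For illustration, the requirements-based hierarchy under the input folder might be organized as sketched below. The folder and file names are hypothetical, following the requirement group identifiers (e.g., FRR-1 Spacecraft Orbit State) used elsewhere in this plan; only the pairing of script, .tc, and truth files reflects the structure described in this section.

```
input/
  FRR-1_SpacecraftOrbitState/
    FRR1_StateConversion_Case1.script   <- test script
    FRR1_StateConversion_Case1.tc       <- test case metadata (categories, requirements)
    FRR1_StateConversion_Case1.truth    <- external truth data
  FRC-ControlFlow/
    ...
```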
Script Testing Categories
Verification
GMAT script verification tests ensure that all features implemented in GMAT function correctly or within tolerance. Verification tests are classified according to test type and include numeric tests, system tests, smoke tests, and I/O tests, among others. Each script test file is accompanied by a .tc (Test Case) file that contains metadata about the test (including requirement and category mappings). There are several higher-level classifications of script tests, used during the DBT process before committing new code or nightly, to determine whether code additions or changes have unexpected adverse effects. These higher-level categories, such as smoke and system tests, are groupings of lower-level test categories and provide developers and testers insight into the system without running the complete test suite.
Numeric Tests
Numeric tests are defined as tests of physical and mathematical models in GMAT. Numeric tests are performed by comparing GMAT output to external "truth" data.
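Conceptually, the comparison step checks each output value against truth within a per-test tolerance. The actual test system is implemented in MATLAB; the Python fragment below is only a sketch of the idea, and the file format, names, and tolerance are hypothetical.

```python
# Minimal sketch of a numeric comparison: read GMAT output and external
# truth data, then check agreement within a per-test tolerance.
import csv

def compare_numeric(output_file, truth_file, tolerance):
    """Return True if every value in the output matches truth within tolerance."""
    with open(output_file) as f_out, open(truth_file) as f_truth:
        out_rows = list(csv.reader(f_out))
        truth_rows = list(csv.reader(f_truth))
    if len(out_rows) != len(truth_rows):
        return False
    for out_row, truth_row in zip(out_rows, truth_rows):
        for out_val, truth_val in zip(out_row, truth_row):
            if abs(float(out_val) - float(truth_val)) > tolerance:
                return False
    return True

# Hypothetical usage: compare a GMAT report against external truth data.
# passed = compare_numeric("SampleOrbit.report", "SampleOrbit.truth", 1e-6)
```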
Functional Tests
Functional tests are defined as tests that verify non-numeric functionality, such as plotting styles, file formats, and control flow behavior.
Input Validation Tests
Input validation tests are defined as tests that ensure user inputs are validated by the system and the correct error messages are provided for invalid user input.
End-to-End Tests
End-to-end tests are defined as tests that solve an end-to-end engineering problem, such as a lunar transfer or orbital maneuver. These tests are "fit for use" tests and are applications of GMAT to real-world problems. The full set of verification and system validation tests is executed weekly against the current build of GMAT. GMAT GUI end-to-end or system testing should take place concurrently with GMAT script system testing. Active GMAT testers/developers have the primary responsibility to analyze system test results and address test failures due to code modifications and additions. If analysis indicates that a system test failure is the result of a code change or a data file update, it is the responsibility of test analysts to modify/update the test accordingly to keep it consistent with the truth verification criteria.
Special Test Categories
Smoke tests are defined as a grouping of lower-level tests (Numeric, Functional, Input Validation, etc.) such that there is at least one test for each FRC and FRR requirements group. Smoke tests provide the smallest set of tests that gives shallow coverage while exercising each system component. An example smoke test might verify that the system correctly performs one of the required state transformations GMAT supports, but not all state types or all special orbit cases.
System tests are defined as a grouping of lower-level tests (Numeric, Functional, Input Validation, etc.) such that running all system tests ensures broad system coverage with intermediate test depth. Continuing with the example provided in the smoke test definition, an example set of system tests for state conversions would provide at least one test for each type of state transformation required, but might not check those conversions for all orbit types (elliptic, parabolic, and hyperbolic).
Acceptance Tests
The GMAT script acceptance test is the formal execution of the full set of verification and system validation tests on the final delivery of GMAT (where "final" in this context refers to the GMAT build in which the complete set of functionality defined by the GMAT Requirements has been implemented and in which all GMAT release bugs have been closed). GMAT script acceptance testing should take place concurrently with the GMAT GUI acceptance testing. The full set of verification and system validation test products shall be organized into an acceptance test results package and archived. The GTS shall create an acceptance test report summarizing the configuration of the GMAT environment, the tests executed, the requirements tested, and the pass/fail status of each test.
Regression Tests
The full set of verification and system validation tests is executed weekly against the current build of GMAT. GMAT GUI regression testing should take place concurrently with the GMAT script regression testing. Active GMAT testers/developers have the primary responsibility to analyze regression reports and address test regressions due to code modifications and additions. If analysis indicates that a regression is the result of a code change or a data file update, it is the responsibility of test analysts to modify/update the test accordingly to keep it consistent with the truth verification criteria.
Process Control
Procedures and Best Practices for Writing GUI Tests
This section describes procedures and best practices for writing GUI-based tests with the goals of complete GUI coverage and adequate test depth. After a brief overview, the first part of this section will discuss the best practices and procedures for working with TestComplete to write GUI tests. The following subsections will discuss the specific procedures and processes for testing the GMAT GUI.
Overview
Writing, and even more importantly, performing GUI tests that provide complete and repeatable coverage of the GMAT GUI is difficult. The GMAT GUI testing has the following goals:
- Reasonable Confidence that GMAT is user-ready
- 100% coverage of the GUI - every widget in the GUI must be exercised at least once
- 100% Requirements Driven Testing - every GUI-related requirement must be tested to ensure that it is fulfilled by the GUI
- Repeatable
- Maintainable
At its most basic, GUI testing requires clicking on every button and widget in the GMAT GUI, entering valid, invalid, and boundary-condition input into every text widget, and assessing that the GUI responds to all inputs and displays all results correctly. Manually performing these actions would be error-prone, time-intensive, and impossible to repeat every time GMAT changes. So how do we ensure "reasonable confidence" and that every requirement and widget is tested? More importantly, how do we make the tests repeatable and maintainable?
This document, the GMAT Test Plan, addresses the first two questions. The battery of tests defined above (Verification, Validation, Stress, System, Regression, etc.) and the Requirements-To-Test Matrix are designed to ensure that we meet every GUI requirement and hit every GMAT widget. The combination of all these tests is designed to provide reasonable confidence in the GMAT GUI. Making the tests repeatable and maintainable requires automation of the GUI actions. The GMAT GUI Test Team has selected the TestComplete tool to provide this test automation.
Using TestComplete for GUI testing
Best Practices
- Create and USE templates
- Use Variables (more maintainable and more general)
- Make top-level project tests able to run stand-alone
- When trying to fix (maintain) tests, dependencies between top-level tests make it difficult to fix-run-verify
- Do NOT rename NameMapping items
- I know the names that TestComplete gives items are… awful… but resist changing the names
- TestComplete is very consistent in how it names objects, which is important when separate projects create tests that may be reused
- Make panel projects work with more than one object when it makes sense to do so: create tests using variables, then swap the variable values before re-executing the tests. For example, the Spacecraft panel should be tested with the DefaultSC as well as with a user-created Spacecraft; the same applies to a dialog that is called from multiple places throughout the GUI.
- Do NOT use Object Checkpoints! These types of checkpoints are exceedingly brittle and hard to update. Property checkpoints and region checkpoints are the best to use.
- Data-driven Tests
- Give input, then view after Apply (e.g., is "+0" converted to "0"?), and check how it looks in the Show Script columns
- Include a Log column that is printed out before each data-driven loop iteration. Use pattern matches with Mission Show Script to test existence.
- Make checks case-sensitive, since GMAT is case sensitive. Avoid absolute file paths; they WILL break. Base file paths off of a global variable, GMAT_PF_DIR, which defines the location of the GMAT program files directory (the parent of the data directory); see the sketch after this list. You can also use the Files Store.
- Put Edit.KeyPress/SetText calls and PropertyChecks that are executed more than once in their own routines. Precision changes, compiler changes, etc. mean that these values have to be changed, which is extremely tedious if they are scattered all over the place.
- Checks for Warnings and Errors should be written to check for either; the GMAT team likes to change which dialog it uses.
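Where TestComplete script units are used, paths can be built from the project-level variable. The following Python fragment is a minimal sketch of this practice; the GMAT_PF_DIR variable name comes from this section, while the helper name and the relative path are hypothetical.

```python
# TestComplete script unit (Python): derive file paths from the project-level
# GMAT_PF_DIR variable instead of hard-coding absolute paths.
def gmat_file_path(relative_path):
    # Project.Variables exposes TestComplete project-level variables.
    base = Project.Variables.GMAT_PF_DIR
    return base + "\\" + relative_path

# Hypothetical usage: locate a file under the GMAT data directory.
# config_path = gmat_file_path("data\\gui_config\\Spacecraft.ini")
```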
Recording Tips
- Turn off Visualizer for playback and recording.
- Organize Projects using directories.
- Record your tests naturally and then immediately fix.
- Change items to variables where it makes sense (use project level variables not local variables, easier to reuse and to find and maintain).
- Break recordings up into smaller, reusable tests.
- Always record using Named objects not screen locations (e.g., right-click "Show Script" not click screen location 23, 45.)
- Change default options so it is KeyPress not SetText (more user-like).
- If there are multiple paths to do something, there must be one test instance for each path. Corollary: use the most maintainable path when the purpose of the test is to test something else. For example, to open a panel you can either double-click or right-click and select "Open". Double-click uses a screen location (easier to break), so test it once; all operations that test the panel itself should use right-click "Open".
- As much as possible, put logic in your tests to handle errors that would otherwise "crash" the test. When TestComplete goes off the beaten path, it tries to close unexpected dialogs using a default Close operation. However, that may not get GMAT back into a state that TestComplete can work with, leading to a cascading list of test errors that is not useful.
- For example: if the dlgSaveConfirmClose object exists, log an error and click "No" (see the sketch after this list).
- Avoid Region Checks as much as possible. Use them only for output images such as plots or Orbit Views, and maximize the panel first. When the image is slightly different but the result is a false positive, update the pixel tolerance and the truth verification image for the next run.
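As an illustration of the dialog-handling logic above, a TestComplete script unit (Python) might guard against the save-confirmation dialog as follows. The alias path and the mapped button name (btnNo) are hypothetical and depend on the project's NameMapping; dlgSaveConfirmClose follows the example name used above.

```python
# Sketch: log an error and decline an unexpected save-confirmation dialog
# so the test can continue instead of cascading into further errors.
def handle_unexpected_save_dialog():
    # WaitAliasChild returns a stub with Exists == False when the dialog
    # is not present, without posting an error to the test log.
    dlg = Aliases.GMAT.WaitAliasChild("dlgSaveConfirmClose", 1000)
    if dlg.Exists:
        Log.Error("Unexpected save-confirmation dialog encountered.")
        dlg.btnNo.ClickButton()  # hypothetical mapped name for the "No" button
```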
Things to Avoid
- Watch out for the ComboBox ClickItem bug! You need to insert a statement to tab into the ComboBox before the ClickItem statement; otherwise TestComplete can get stuck on playback (see the sketch after this list).
- Tests that use Windows OS dialogs, like the OpenDialog and SaveDialog, can break when moving between Windows versions. Use the recommended OpenFile method.
- NameMapping is your friend. Most tests are fixed by fixing the name map; e.g., tests get broken when a panel is retitled from "New Object" to "New Asteroid". Don't map a new object; fix the mapping for the old object.
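A minimal sketch of the ComboBox workaround, again as a TestComplete Python fragment; the function and control names are hypothetical.

```python
# Sketch: tab into a combo box before ClickItem so playback does not hang
# (the ComboBox ClickItem issue described above).
def select_combo_item(prior_control, combo_box, item_text):
    prior_control.Keys("[Tab]")     # move focus into the combo box first
    combo_box.ClickItem(item_text)  # then select the item by name
```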
Project Templates
- Set up templates to be easy to use. By using Project Variables and panel-specific keyword tests, the GMAT Panel Test Template Project is able to provide most of the coverage necessary for nominal panels, with fewer errors.
- Use keyword tests DIRECTLY (via "add existing item…"); cloning the template should clone only the tests that need to be changed. (Not done yet.)
Procedures for Writing GUI Tests
This section details the procedures and practices for writing GUI Tests using TestComplete.
Writing GUI Verification Tests
For the purposes of verification testing, the GMAT GUI has been divided up into four logical groupings:
- Resource Tree/Resource Panels (e.g., Spacecraft panel, GroundStation panel, etc.)
- Mission Tree/Command Panels (e.g., Propagate panel, Maneuver panel, etc.)
- Script Editor
- Infrastructure (e.g., menus, toolbar, welcome page, about box, etc.)
The Script Editor and the Infrastructure will each be tested using one TestComplete project. These projects will be responsible for testing all widgets and requirements for their respective functions, including side effects (e.g., modifying a script unsynchronizes the GUI, and vice versa).
The Resource Tree/Resource Panels and the Mission Tree/Command Panels make extensive use of TestComplete project templates to ensure full testing of a specific panel/object. The template project automatically provides the logic for almost 40 different tests with over 30 utility helper tests (approximately 80% of the code), with only a small, nicely partitioned input required from the tester (the "TBD" tests). By using Project Variables and panel-specific keyword tests, the GMAT Panel Test Template Project is able to provide most of the coverage necessary for nominal panels, with fewer errors.
Writing GUI Tests for a Resource Panel
- Collect requirements for the resource.
- Collect developer notes about resource idiosyncrasies.
- Populate the Requirements-To-Test Matrix with line items for all top-level keyword/script tests for the project. Mark the items as incomplete so that the automated test utility will not try to execute the tests until they are finished.
- Create Test Resource Project per instructions in GMAT Panel Test Template Instructions
- Clone the GMAT Panel Test Template Project and add it to your test suite.
- Define the Project Variables for your copy of the project.
- Fill in the Template "TBD" Tests (these are panel-specific keyword tests, which have been designed to be modular and easy to fill in)
- Fill in the InputTests.xlsx Excel document, which is used for the data-driven tests (Validation_ValidInput and Validation_InvalidInput); see the sketch following this list.
- Record any panel-specific tests (i.e., if the panel has dependencies with other objects created within GMAT).
- Run and Validate test results.
- Add the projects from the 'Resources' folder into one TestComplete Project Suite and check all tests from each Project's Execution Plan folder to be executed.
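For illustration, the data-driven loop over InputTests.xlsx might look like the following TestComplete Python fragment, using TestComplete's DDT Excel driver; the sheet and column names are hypothetical.

```python
# Sketch: iterate over rows of InputTests.xlsx, logging each case before it
# runs (per the Log-column best practice earlier in this plan).
def run_input_validation(sheet_name):
    driver = DDT.ExcelDriver("InputTests.xlsx", sheet_name)
    try:
        while not driver.EOF():
            Log.Message("Testing input: " + str(driver.Value["Log"]))
            # ... enter driver.Value["Value"] into the field named by
            # driver.Value["Field"], click Apply, and verify the result ...
            driver.Next()
    finally:
        DDT.CloseDriver(driver.Name)

# Hypothetical usage for the two data-driven tests named above:
# run_input_validation("Validation_ValidInput")
# run_input_validation("Validation_InvalidInput")
```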
Writing GUI Tests for a Command Panel
- Collect requirements for the command.
- Collect developer notes about command panel idiosyncrasies.
- Populate the Requirements-To-Test Matrix with line items for all top-level keyword/script tests for the project. Mark the items as incomplete so that the automated test utility will not try to execute the tests until they are finished.
- Create Test Command Project per instructions in GMAT Command Test Template Instructions
- Clone the GMAT Command Test Template Project and add it to your test suite.
- Define the Project Variables for your copy of the project.
- Fill in the Template "TBD" Tests (these are panel-specific keyword tests, which have been designed to be modular and easy to fill in)
- Fill in the InputTests.xlsx Excel document, which is used for the data-driven tests (Validation_ValidInput and Validation_InvalidInput)
- Record any panel-specific tests (i.e., if the panel has dependencies with other objects created within GMAT).
- Run and Validate test results.
- Add the projects from the 'Commands' folder into one TestComplete Project Suite and check all tests from each Project's Execution Plan folder to be executed.
Writing GUI System Validation Tests
System Validation Tests attempt to utilize every feature the way a user would when creating a mission. Each test focuses on one particular feature but incorporates it into a larger project to ensure the feature interacts correctly with everything in GMAT. The "writing" for these tests is done in TestComplete with the assistance of a pre-defined mission given to the testers by engineers.
- Collect requirements for the resource or command.
- Collect developer notes about command panel idiosyncrasies.
- Collect a mission sequence from engineers.
- Collect truth data from engineers.
- Use TestComplete to record the creation of the mission's Resource and Mission Trees.
- Clone the GMAT System Test Template Project and add it to your test suite.
- Define the Project Variables for your copy of the project.
- Fill in the Template "TBD" Tests (these are panel-specific keyword tests, which have been designed to be modular and easy to fill in)
- Record any panel-specific tests (i.e., if the panel has dependencies with other objects created within GMAT).
- Run and Validate test results.
- Add the projects from the 'SystemTests' folder into one TestComplete Project Suite and check all tests from each Project's Execution Plan folder to be executed.
Procedures and Best Practices for Writing Script Tests
This section describes procedures and best practices for writing script-based tests with the goals of complete system coverage and adequate test depth. The steps are outlined in the overview section below and then subsequent sections discuss each step in the testing process in detail. Requirements in GMAT are organized into logical groups and given a unique high-level identifier (FRR-1 Spacecraft Orbit State for example). The section below describes the process used to test a logical requirements group.
Overview
The following steps will be used to develop new script-based tests. The steps are applicable to new features as well as existing features with test gaps. Each step is discussed in more detail in the sections below.
- Perform final review of requirements and update as necessary.
- Map existing tests to requirements.
- Plan new tests to cover test gaps.
- Review plan for new tests.
- Write files for new tests.
- Run tests/Debug tests.
- Check in bugs to Jira Software.
- Check all new test files into the Git test repository.
- Inspect nightly coverage report to verify tests have been assimilated into DBT.
Step 1: Inspecting Requirements.
The purpose of the requirements inspection phase is to understand the requirements and perform a final bidirectional comparison of feature implementation and requirements. This is a visual inspection process to ensure the feature is ready for testing. In this step testers shall:
- Verify that all features implemented in GMAT are represented by a requirement in the SRS
- Verify all requirements in the SRS have appropriate features implemented.
Step 2: Mapping existing tests to requirements.
After inspecting requirements and addressing issues, testers shall map existing tests to the requirements by updating TC files for the requirements group. TC files are text files that accompany a script test file and contain metadata such as the test category and the requirements covered by the test. In this step of the testing process, testers shall:
- Add/Review requirements Id in TC files.
- Include existing tests directly related to requirement.
- Include existing tests in other feature areas that may be applicable to requirement.
- Mark test categories as appropriate (Categories: Smoke, System, Functional, Numeric, End-To-End, InputValidation; Modes: Command, Function, ...)
Step 3: Writing summaries for new tests.
The first activity in this step is to analyze the coverage provided by the existing tests identified in Step 2. Once the test gaps are identified, the testers shall write a brief summary of each new test to be written to complete the coverage of the requirements group. Test summaries shall be added to the Test spreadsheet located here and categorized by requirement. Once the tests are written, the new test summaries shall be included in the header of each script test file, and the requirements shall be added to the TC files.
Step 4: Reviewing summaries for new tests.
The purpose of Step 4 is to ensure that tests planned to cover a given requirements group are complete and will adequately verify that the system correctly meets the requirements. During this phase of testing, a GMAT team member not supporting the tests for this particular requirements group will review the test summaries written in Step 3 and identify additional tests that are required.
Step 5: Writing files for new tests.
In step 5, testers shall write script files, TC files, and truth files for all tests identified in Step 4. These tests shall provide complete requirements coverage for all test categories indicated as necessary in the requirements spreadsheet. Note that not all requirements require all test categories. For example, features that are inherently graphical do not require certain types of tests. Features that are inherently commands do not require special command mode testing. The required mapping between requirements and test categories is contained in the SRS.
The testers shall:
- Develop preliminary naming convention for new test scripts.
- Write the script files.
- Write the truth files.
- Write the .tc files.
All script files shall contain a comment block header with the following information:
- Author name
- Brief summary of the test
- Source of truth data
All TC files shall contain the following information:
- Test categories covered by the test.
- Test requirements covered by the test.
- Bugs found (this is added in Step 7)
- GMAT output file names.
- Truth file names.
- Comparator for each file.
- Tolerances for each comparator.
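As an illustration only: the exact TC file syntax is defined by the GMAT test system and is not specified here, but a file covering the fields listed above might look something like the following, with all field names and values hypothetical.

```
Categories: Numeric, System
Requirements: FRR-1.1, FRR-1.3
Bugs: GMT-1234
OutputFiles: SampleOrbit.report
TruthFiles: SampleOrbit.truth
Comparators: SampleOrbit.report, diffreport
Tolerances: SampleOrbit.report, 1e-6
```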
The guidelines below are a set of best practices for writing script-based tests.
- Do not include any unnecessary objects in the test. For example, if the test does not require a 3D graphics plot, then do not include one in the test script.
Step 6: Running tests/Debugging Tests.
In Step 6, testers shall place all tests written in Step 5 in their local test system repository, execute those tests, and address any issues found in the script, TC, and truth files.
Step 7: Checking in bugs.
In Step 7, testers shall follow the procedures and best practices described in this document to commit bugs to the project's Jira Software database.
Step 8: Checking in test files to the Git test repository.
In Step 8, final validation of all tests is performed. In the event that bugs were identified in Step 7, testers shall update the relevant TC files with the bug numbers. After the files are updated, they are checked into the test repository.
Step 9: Inspect nightly coverage reports.
The final step in the testing process is to verify that the new tests are executing correctly in the DBT system. The testers shall verify that bugs identified during the test process are marked as failing in the automated DBT report.
Step 10: Update User Documentation.
Testers gain unique insight into a feature during the test process. After testing is complete, any information that testers deem useful to users shall be added to the appropriate reference section in the GMAT User Guide.
Procedures and Best Practices for Checking in Bugs.
All issues discovered during a test system run are reported in Jira Software, GMAT's issue-tracking system, located at the address: https://gmat.atlassian.net/jira/software/c/projects/GMT/issues
Procedure
The following procedure should be followed to submit an issue:
- Navigate to Jira Software at the address above.
- Log in, if necessary.
- Choose GMAT as the product.
- Click the "Create" button.
- Select the issue type (Bug, Epic, Story, etc.).
- Select the most appropriate item from the Component list.
- Select your system configuration using the Platform and OS fields.
- Select appropriate values for Priority. See Appendix A for guidelines.
- Fill out the Summary field with a one-line summary of the issue.
- Fill out the Description field with a full description of the issue (see Best Practices below).
- Add an attachment, if possible (e.g. a script file illustrating the issue).
- Click "Create".
Best Practices
The following best practices should be followed when submitting an issue to the Jira Software system:
- Submit an issue as soon as possible after you discover it, even if no other details are known. It is better to have an incomplete issue report in the system than no report at all.
- Try to duplicate the bug manually (outside of the test system) by either using the GUI or loading a script.
- For a script-related bug, write a script that contains the minimum number of lines necessary to duplicate the bug.
- In the Description field, include the following items:
- The name of the test.
- The steps you followed to trigger the bug.
- The text of any error messages that are generated.
- The build date of the GMAT version that contains the bug.
- In the Attachments section, attach the following items (if appropriate):
- The script that duplicates the bug.
- GmatLog.txt if the bug relates to an error, warning, or crash.
- The output file that contains erroneous data.
- A sample file that illustrates correct data.
- A screenshot that illustrates a graphical bug.
- If you want feedback from another team member, include their email address in the CC: field.