R2017b Lessons Learned
How Lessons Learned are Managed
GMAT lessons learned include things that we did well and should keep doing, and large-scale things we should be doing to improve the software or our project. Each lesson learned is discussed by the team, and if we decide there is a real issue, we require a plan for improvement. To make sure we are efficiently handling lessons learned, here are some high-level guidelines for creating them.
What is a Lesson Learned
Lessons learned are issues that caused significant problems or could have caused significant problems, or are issues where we did something that significantly improved the software or our project. Lessons learned require group discussion and probably a change in team habits, process, or strategy.
Lessons learned satisfy one of the following criteria:
- Issue that is putting the project at greater risk than necessary
- Issue that is causing significant inefficiency
- Issue that is significantly lowering quality
What is Not a Lesson Learned
A lesson learned is not a minor annoyance, a tweak to an existing process, or something that can be resolved between team members in the everyday process of getting work done. Team members should raise these types of issues at meetings, or work them out among the team members involved.
A minor issue (i.e., not a lesson learned) satisfies one of these criteria:
- Tweak to an existing process
- Minor annoyance or gripe
- Can be resolved by picking up the phone, discussing via email, or raising it at a weekly meeting
- Does not require significant change in habits or processes
Review of R2017a Lessons Learned
Things We Should Change
Do Better
- Collect data from the projects early in the process.
Example: if a customer is using GMAT to ingest a data file generated outside of GMAT, obtain a sample file generated that way as early as possible. For instance, an STK/ODTK-generated .e file for this release would have been useful before the .e propagator was prototyped.
- We need to map team members to customer projects to facilitate communication of mission needs, and then include a mission needs section in the weekly team meeting.
- Improve nav test brittleness. These tests seem to require constant tweaking to avoid false positives; we need to figure out how to make them more robust.
- The tolerances for some of the brittle test cases are set too tightly, and should be opened up a bit.
- We ought to make better use of the SetSeed option for the brittle cases, and look for seed values that provide consistent results on all of our platforms.
- Once the tolerances are opened up a bit, the nav owners will need to pay attention to test cases that show up in the "Changed but still pass" list.
- One note: We will need to be more consistent about setting previous-run test data as the set to use for comparison. (On Linux, at least, this is currently a "by hand" process.)
- We may need to add a test system "warning" category that flags scripts that changed values but still pass, so they can be examined; a sketch of such a check follows this list.
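For illustration only, here is a minimal Python sketch of a "changed but still pass" check that compares the current run against the previous run's values. The directory layout, report format (whitespace-separated numbers), tolerance values, and test-case names are assumptions, not the actual GMAT test system implementation.

```python
# Sketch of a "changed but still pass" warning check against the previous run.
# Directory layout, report format, tolerances, and case names are all assumed.
from pathlib import Path

PASS_TOL = 1e-6     # assumed pass/fail tolerance after loosening
WARN_TOL = 1e-12    # differences above this (but below PASS_TOL) get flagged for review

def read_values(path: Path) -> list[float]:
    """Read whitespace-separated numeric values from a report/truth file."""
    return [float(tok) for tok in path.read_text().split()]

def classify(case: str, current: Path, previous: Path) -> str:
    cur = read_values(current / f"{case}.report")
    prev = read_values(previous / f"{case}.report")
    if len(cur) != len(prev):
        return "FAIL"
    worst = max((abs(c - p) / max(abs(p), 1.0) for c, p in zip(cur, prev)), default=0.0)
    if worst > PASS_TOL:
        return "FAIL"
    if worst > WARN_TOL:
        return "WARN: changed but still passes"
    return "PASS"

if __name__ == "__main__":
    for case in ["Nav_Simulate_Range", "Nav_Estimate_Doppler"]:   # illustrative names
        print(case, classify(case, Path("output/current"), Path("output/previous")))
```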
- Put out releases more often. This will help us iron out issues and get features out to users quickly.
- The next GMAT release will need to be made by Sep 30 to meet needs for ACE and WIND (is this the right mission?).
- We will use the Sep 30 release target as a trial for quarterly releases going forward.
Start Doing
- Develop a data management strategy that avoids last-minute changes to system data that can cause dramatic changes in nightly tests. Preferably, make data updates occur nightly so that we catch data issues immediately, during development, not during RC testing. At a minimum, move the data file update from App Freeze to an earlier release milestone, such as QA Complete, so the change happens much earlier in the process. This must address GUI data file issues as well. Delete old data files that are no longer used after changes in R2017a... SPICE kernels, etc.
- Stale, no longer used data files need to be removed from the repository
- The data file update utility will be used to update and commit the most recent data files every other week. The projected procedure is as follows (to be performed by WCS initially; a scripted sketch follows this list):
- Download the data file collection every other Friday
- Commit the updated files.
- Push the commit by COB
- Developers will need to pull at the start of the following work week
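A minimal sketch of how the biweekly update might be scripted, assuming a git-based repository and a hypothetical utility script named update_datafiles.py; the paths and commit message are placeholders rather than the actual utility interface.

```python
# Sketch of the biweekly data-file update.  The utility name (update_datafiles.py),
# repository layout, and git-based workflow are assumptions for illustration.
import subprocess
from datetime import date

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def biweekly_data_update(repo_root="gmat"):
    # 1. Download the latest data file collection (EOP, leap seconds, SPICE kernels, ...)
    run(["python", "update_datafiles.py", "--dest", f"{repo_root}/data"])
    # 2. Commit the updated files
    run(["git", "-C", repo_root, "add", "data"])
    run(["git", "-C", repo_root, "commit", "-m",
         f"Biweekly data file update, {date.today().isoformat()}"])
    # 3. Push by COB Friday; developers pull at the start of the next work week
    run(["git", "-C", repo_root, "push"])

if __name__ == "__main__":
    biweekly_data_update()
```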
- Develop an improved system configuration test strategy so that the test system can be configured easily to match the startup file configuration. For example, if alpha plugins are turned off, we don't run those tests; if internal features are turned off, we don't run those tests. Currently we change the startup file config, and it is quite difficult to configure the test system to match.
-  THIS ISSUE NEEDS FURTHER THOUGHT. REVISIT IT AT A LATER DATE.
- We may want to add new tags to the .tc files that identify test cases as alpha and/or internal system features (a sketch of tag-based filtering appears after this list).
- The test system can then use a different startup file to test the removal or inclusion of those scripts
- We may need to have the ability to autogenerate/update startup files (possibly as part of preparegmat)
- We may also want the test system to group alpha and internal-feature data in a separate section of the reported output.
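The sketch below illustrates tag-based filtering against the startup file configuration. The "PLUGIN = <path>" syntax matches GMAT's startup file, but the "Tags:" line in the .tc files, the ALPHA_PLUGINS set, and the "internal startup file" naming convention are hypothetical additions used only to show the idea.

```python
# Sketch of filtering test cases by startup-file configuration.  The "Tags:" field
# and the ALPHA_PLUGINS names are hypothetical; only the PLUGIN line syntax is real.
from pathlib import Path

ALPHA_PLUGINS = {"libEventLocator", "libExtraPropagators"}   # illustrative plugin names

def enabled_plugins(startup_file: Path) -> set[str]:
    """Return plugin names from uncommented PLUGIN lines in a GMAT startup file."""
    plugins = set()
    for line in startup_file.read_text().splitlines():
        line = line.strip()
        if line.startswith("PLUGIN") and "=" in line:
            plugins.add(Path(line.split("=", 1)[1].strip()).name)
    return plugins

def test_tags(tc_file: Path) -> set[str]:
    """Read a hypothetical 'Tags: alpha, internal' line from a .tc file."""
    for line in tc_file.read_text().splitlines():
        if line.lower().startswith("tags:"):
            return {t.strip().lower() for t in line.split(":", 1)[1].split(",")}
    return set()

def should_run(tc_file: Path, startup_file: Path) -> bool:
    """Skip alpha/internal test cases when the startup configuration does not enable them."""
    tags = test_tags(tc_file)
    plugins = enabled_plugins(startup_file)
    if "alpha" in tags and not (plugins & ALPHA_PLUGINS):
        return False
    if "internal" in tags and "internal" not in startup_file.name.lower():
        return False
    return True
```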
- Start testing sample missions in the nightly.
- Needs to be done; implementation for this item needs an owner
- The process is as follows (a sketch of the completion/warning check appears after this list):
- Copy the sample missions into a folder in the test system as part of preparegmat
- Check on each run that the scripts ran to completion (unless that is not expected, e.g., if a Stop is in the script), that no errors were encountered, and that no warnings were encountered unless they are expected.
- The sample missions should be updated so that they do not generate any deprecation messages.
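A rough sketch of the nightly sample-mission check follows. The console invocation, sample and output paths, log-text patterns, and the exception set for scripts with an intentional Stop are all assumptions and would need to be matched to the real console app and log wording.

```python
# Sketch of the nightly sample-mission check.  Command, paths, and log-text
# patterns are assumed; expected-incomplete scripts go in the exception set.
import shutil, subprocess
from pathlib import Path

SAMPLES = Path("gmat/application/samples")    # assumed sample-mission location
TEST_DIR = Path("test/output/samples")        # assumed test-system working area
GMAT_CMD = ["GmatConsole", "--run"]           # assumed batch invocation
EXPECTED_INCOMPLETE = {"Ex_StopCondition.script"}   # hypothetical: scripts with a Stop

def check_sample(script: Path) -> list[str]:
    """Run one sample script and return any unexpected errors/warnings/deprecations."""
    result = subprocess.run(GMAT_CMD + [str(script)], capture_output=True, text=True)
    issues = []
    if result.returncode != 0 and script.name not in EXPECTED_INCOMPLETE:
        issues.append(f"did not run to completion (exit code {result.returncode})")
    for line in (result.stdout + result.stderr).splitlines():
        lowered = line.lower()
        if any(word in lowered for word in ("error", "warning", "deprecat")):
            issues.append(line.strip())
    return issues

def nightly_sample_check():
    TEST_DIR.mkdir(parents=True, exist_ok=True)
    for script in sorted(SAMPLES.glob("*.script")):
        copy = TEST_DIR / script.name
        shutil.copy(script, copy)             # copied into the test area by preparegmat
        issues = check_sample(copy)
        print(f"{script.name}: {'OK' if not issues else 'CHECK -> ' + '; '.join(issues[:3])}")

if __name__ == "__main__":
    nightly_sample_check()
```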
- Perform team review of major feature completion to help catch issues earlier.
- We need to review new resources and commands more formally to ensure that they meet user needs and that they are coded and tested so those needs are actually met.
- The new CommandEcho command will be used as a first case for this review on July 13.
- Find leads to map to primary customers. RQ was doing that... We need to interact with customers more and get them updates more frequently so they can do beta testing. Currently SPH is mapped to TESS, SDO, ACE, and WIND; this is too much, and a few items slipped through the cracks.
- SPH will coordinate this to ensure that GMAT leads are identified for each user group that needs one.