Greetings!
I hope all of you had a wonderful New Year's holiday! (I originally misspelled that as "winderful", corrected it, and then decided that maybe it's not such a bad misspelling for a "flight simulator" forum after all!)
Before I get started talking about reference systems and testing, it is important to note that any kind of testing is - essentially - meaningless without specifications or requirements to guide the test effort. Absent this there is no way to know what is, or is not, expected of the system.
This leads me to the Musical Question:
Are there any requirements or specifications that guide the development effort? May I have a copy?
Since there is, obviously, no "standard reference system" for testing, I have decided to nominate one - actually two, but I'll get to that later.
First of all:
My baseline hardware is an HP Pavilion "Envy" system with an Intel Core 2 Quad (a 9800 series at 2.5 GHz), 8 GB of RAM on an ASUS motherboard, and an Nvidia GeForce GTX 550 Ti video card. This system (IMHO) represents a reasonably competent machine, not some bizarre Alienware super-gamer rig that is unlikely to be used by a "mainstream" user.
Baseline System #1:
Since the majority of the development work is done on Linux-type systems, IMHO the primary reference system should be a Linux install.
My nomination: (because I have the install media!)
Linux Mint 17.3 using the MATE desktop - a "flat" install (no tweaks, no additional installs except for, maybe, GParted), with all updates and patches installed as of the current date.
My thinking here is that since primary development is done on Linux, primary testing should also be done on Linux, with Linux being the standard reference implementation. Likewise, since Ubuntu/Mint Linux is a common and popular distribution, this is a reasonable candidate for a test environment.
What that means is that once a baseline functional test is done on Linux, the question of what should work, and in what way should it work, will be by comparison to the Reference Linux Implementation as noted above.
Baseline System #2:
My second baseline system is based on Windows, so that we have visibility and test traction within a Windows environment.
My nomination: Windows 7, with primary testing done on Windows 7 Professional and/or Home Premium 64-bit. Additional testing can be done on 32-bit systems and/or other editions of Windows 7 on a time-available basis.
Note that this is also a "flat" install - no extraneous software except for the run-time libraries the software needs and basic utilities like IE 10.n and Firefox - and it will be updated to the current date.
1. I am specifically excluding Windows versions beyond Windows 7 due to both time constraints and the lack of suitable test systems.
2. I am also specifically excluding Windows XP and earlier. I do not know whether there is a sufficient installed base of those versions to justify the test time needed.
3. I am also specifically excluding ANY kind of virtualized install, since it has been my experience, justified by my many years as a Software QA Analyst, that testing on virtual systems can be misleading. This is especially true for software like this, which is highly dependent on the specific hardware and software in use.
I mitigate this by using Acronis True Image to create a "baseline install snapshot" of the reference system prior to installing the FlightGear software. This snapshot can be used to return the system to a pristine, unsullied state before each round of testing.
Test Methodology Notes:
Linux and Windows testing will involve:
1. Installation testing:
* Does it install correctly?
* Does it install at all? (or does it puke its guts up, panic the kernel, cause general havoc, attempt to start a third world war, etc.)
2. Run-time testing:
* Does the system - after install - come with reasonable and sane defaults so that the typical user can "get flying!" without needing a PhD in computer science? (i.e. The system, as installed, should be "ready to fly". Any setup needed for particular and special cases should be optional. This specifically includes scenery libraries, TerraSync, etc. All of this should be properly configured at install time without additional effort.)
* Does the system behave in a sane and reasonable manner? Can it handle different monitor resolutions? What about multiple monitors? (i.e. one monitor for the flight instruments and another - possibly larger - for the "out of the window" view.)
* Does it handle navigating from one block of scenery data to another smoothly?
* Does it handle placing the aircraft's starting position in a place remote from where it is "now"? (i.e. If the "default" location is somewhere on the West Coast of the U.S., what happens if I suddenly decide to fly out of Moscow, Russia, or Bangalore, India? Does the scenery update correctly?)
* Does it handle shutdown-and-restart sanely? (i.e. If I ended my flight in Turkmenistan, do I restart where I ended?)
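The relocation checks above can be driven from the command line rather than the GUI. A minimal smoke-test sketch follows; `--airport` and `--aircraft` are standard fgfs switches, but the particular airports (KSFO on the U.S. West Coast, UUEE near Moscow, VOBL near Bangalore), the c172p aircraft, and the dry-run structure are my own assumptions, chosen so that each launch forces a fresh scenery fetch:

```shell
#!/bin/sh
# Hypothetical smoke-test sketch for the "remote starting position" checks.
# smoke_cmds prints one fgfs invocation per widely-separated test airport;
# review the output, then pipe it to sh to actually run the launches.
smoke_cmds() {
    # KSFO (US West Coast), UUEE (Moscow), VOBL (Bangalore): far enough
    # apart that each start should trigger a fresh scenery download.
    for apt in KSFO UUEE VOBL; do
        printf 'fgfs --airport=%s --aircraft=c172p\n' "$apt"
    done
}
smoke_cmds
```

Printing the commands first (instead of launching fgfs directly) keeps the sketch runnable on a machine without FlightGear installed, and lets the tester log exactly which invocations were exercised in each test pass.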
3. Testing limitations:
* I cannot test flight "reasonableness", except in a rudimentary manner, as I am not a pilot and have absolutely ZERO experience with what a "real" aircraft is supposed to do in real life. (i.e. Obviously I do not expect my aircraft to magically turn into a griffin during flight. However, flight-control sensitivity, aerodynamic realism, and the accuracy of the external or internal views - excepting obvious visual artifacts - are out of scope for my testing.)
* For the same reasons, I cannot test visual realism of the scenery, again excepting obvious visual artifacts such as mountains being upside down, or a redwood growing right in the middle of the runway!
I do not know whether there are separate test efforts for these items, but I would hope there are.
Please examine these recommendations and comment as necessary.
Jim (JR)