
Is there a "standard reference system" for testing installs?

FlightGear is open source, so you can be the developer. In need of help with anything? We are here to help you.
Forum rules
Core development is discussed on the official FlightGear-Devel development mailing list.

Bugs can be reported in the bug tracker.

Is there a "standard reference system" for testing installs?

Postby jharris1993 » Sun Dec 18, 2016 7:44 pm

Greetings!

I originally posted this in the "Installation" forum, and was told to try posting (cross-posting? :oops: ) to the "developers" - so here it is.

I have recently downloaded the most current Windows installers for FlightGear, and have been having non-trivial amounts of trouble getting them to work, where "non-trivial" is defined as "my patience with this is dwindling fast. . . ."

If for no other reason than respect for your own time, I'd like to do some additional checks before writing bugs. Which brings me to the topic of this question:

Is there a "standard reference system" that can be used as a baseline for Windows / Linux testing? With this, I can install the troublesome version on the "standard system" and test for bugs there. If this system exists, and if we assume the bug doesn't show there, I can then try to find out what is so different between my "normal" system and the standard system so that I can identify what is causing the trouble.

This also raises the musical question: Is there any real testing done on real Windows systems? (i.e. an install on real hardware instead of a VM, as VMs are notoriously inaccurate.)

Thanks in advance for any help or advice you can give.

Jim (JR)

p.s. Note that I have tried this on two separate Windows 7 Professional (64-bit) systems. One is a "fully built out" system with all the apps and software I need to do 99% of my work, and the other is a "stripped" system dedicated entirely to flight-sim software - both yours and Microsoft Flight Simulator X. Both are fully up to date as far as Windows Updates are concerned, and both exhibit the same issues.
What say ye?

Jim (JR)

Some see things as they are, and ask "Why?"
I dream things that never were, and ask "Why Not".

Robert F. Kennedy

“Impossible” is only found in the dictionary of a fool.
Old Chinese Proverb
jharris1993
 
Posts: 139
Joined: Sun Nov 18, 2012 9:20 pm
Location: Worcester, MA. / Moscow, Russia
Callsign: $%^&ing Idiot!
Version: Whatever..
OS: Win-7 / Linux

Re: Is there a "standard reference system" for testing installs?

Postby erik » Mon Dec 19, 2016 12:00 am

Where exactly did you get the Windows installer from?
I ask this because the non-official installers (development code) are not intended for home users.

Erik
erik
 
Posts: 1504
Joined: Thu Nov 01, 2007 1:41 pm

Re: Is there a "standard reference system" for testing installs?

Postby jharris1993 » Mon Dec 19, 2016 7:20 am

erik wrote in Mon Dec 19, 2016 12:00 am:Where exactly did you get the Windows installer from?
I ask this because the non official installers (development code) are not for home users.

Erik


Main download page - the 2016.4.n releases

Jim

Re: Is there a "standard reference system" for testing installs?

Postby erik » Mon Dec 19, 2016 9:02 am

That version should be working. And as far as I know it is tested.
But I don't use Windows myself.

What type of troubles do you see?

Erik

Re: Is there a "standard reference system" for testing installs?

Postby zakalawe » Mon Dec 19, 2016 9:30 am

jharris1993 wrote in Sun Dec 18, 2016 7:44 pm: Is there any real testing done on real Windows systems?


Unfortunately the answer is that none of the core developers use Windows as their main system, so we rely on users (of various levels of experience) to submit bug reports and provide feedback. When we make a release we explicitly ask for feedback, and I also check the installer on my Windows 10 install. This is a smoke test - if I can run the installer, start FGFS, install an add-on aircraft via the launcher, start up at an airport, and fly a circuit, then I judge the application to be working.

My Windows box is also used for development work (it has Visual Studio 2015 installed) but is otherwise a fairly direct/unmodified install of Windows 10.

Obviously the above procedure has plenty of weaknesses and potential for missed problems, but the constraining factor is people's time and interests. In the past we have had developers focused on Windows, with more experience of Windows deployment issues, but at present we do not. This is also a potential issue with submitting large numbers of bug reports: unless people volunteer to triage and then work on them, they tend to pile up, since fixing bugs is not the most rewarding or interesting kind of development.
zakalawe
 
Posts: 1149
Joined: Sat Jul 19, 2008 4:48 pm
Location: Edinburgh, Scotland
Callsign: G-ZKLW
Version: next
OS: Mac

Re: Is there a "standard reference system" for testing installs?

Postby Johan G » Mon Dec 19, 2016 12:40 pm

jharris1993 wrote in Sun Dec 18, 2016 7:44 pm:I originally posted this in the "Installation" forum, and was told to try posting (cross-posting? :oops: ) to the "developers" - so here it is.

Cross-posting is generally frowned upon. What I meant in that post was the developer mailing list.

Sorry for not being more specific, and for the mess. :oops:
Low-level flying — It's all fun and games till someone looses an engine. (Paraphrased from a YouTube video)
Improving the Dassault Mirage F1 (Wiki, Forum, GitLab. Work in slow progress)
Johan G
Moderator
 
Posts: 5294
Joined: Fri Aug 06, 2010 5:33 pm
Location: Sweden
Callsign: SE-JG
IRC name: Johan_G
Version: 3.0.0
OS: Windows 7, 32 bit

Re: Is there a "standard reference system" for testing installs?

Postby wkitty42 » Mon Dec 19, 2016 6:54 pm

Johan G wrote in Mon Dec 19, 2016 12:40 pm:Cross-posting is generally frowned upon. What I meant in that post was the developer mailing list.

i took care of it in PM and posted to the dev list... i don't know if any of the respondents are here due to that post on the list but i did do that part :)

Johan G wrote in Mon Dec 19, 2016 12:40 pm:Sorry for not being more specific, and for the mess. :oops:

AFAIAC, it is ok... you're entitled and there's no real mess anyway :mrgreen:
"You get more air close to the ground," said Angalo. "I read that in a book. You get lots of air low down, and not much when you go up."
"Why not?" said Gurder.
"Dunno. It's frightened of heights, I guess."
User avatar
wkitty42
 
Posts: 4746
Joined: Fri Feb 20, 2015 3:46 pm
Location: central NC, USA
Callsign: wk42
Version: git next
OS: Kubuntu 14.04.5

My suggested Standard Reference Installation platform

Postby jharris1993 » Wed Jan 04, 2017 12:13 pm

Greetings!

I hope all of you had a wonderful New Years holiday! (I originally misspelled that as "winderful", corrected it, and then decided that maybe it's not such a bad misspelling for a "flight simulator" forum after all! :D )

Before I get started talking about reference systems and testing, it is important to note that any kind of testing is - essentially - meaningless without specifications or requirements to guide the test effort. Absent these, there is no way to know what is, or is not, expected of the system.

This leads me to the Musical Question: Are there any requirements or specifications that guide the development effort? May I have a copy?

Since there is, obviously, no "standard reference system" for testing, I have decided to nominate one - actually two, but I'll get to that later.

First of all:
My baseline hardware is an HP Pavilion "Envy" system with an Intel Core 2 Quad (a 9800 series at 2.5 GHz), 8 GB of RAM on an ASUS motherboard, and an NVIDIA GeForce GTX 550 Ti video card. This system (IMHO) represents a reasonably competent machine, not some bizarre Alienware super-gamer system that is unlikely to be used by a "mainstream" user.

Baseline System #1:
Since the majority of the development work is done on Linux-type systems, IMHO the primary reference system should be a Linux install.

My nomination: (because I have the install media!)
Linux Mint 17.3 using the MATE desktop - a "flat" install, (no tweaks, no additional installs except for, maybe, GParted), with all updates and patches installed as of current date.

My thinking here is that since primary development is done on Linux, primary testing should also be done on Linux, with Linux being the standard reference implementation. Likewise, since Ubuntu/Mint Linux is a common and popular distribution, this is a reasonable candidate for a test environment.

What that means is that once a baseline functional test is done on Linux, questions of what should work - and in what way it should work - will be answered by comparison to the Reference Linux Implementation noted above.

Baseline System #2:
My second baseline system is based on Windows. This is so we have visibility and test traction within a Windows environment.

My nomination: Windows 7, with primary testing being done on Windows 7 Professional and/or Home Premium 64 bit. Additional testing can be done on 32 bit systems, and/or other editions of Windows 7 on a time-available basis.

Note that this is also a "flat" install - no extraneous software except the run-time libraries needed for the software and basic utilities like IE 10.n and Firefox - and it will be updated to current date.

1. I am specifically excluding Windows versions beyond Windows 7 due to both time constraints and lack of suitable test systems.
2. I am also specifically excluding Windows XP or earlier. I do not know if there is a sufficient installation base of earlier versions of Windows to justify the test time needed.
3. I am also specifically excluding ANY kind of virtualized install, since it has been my experience - borne out by many years as a Software QA Analyst - that testing on virtual systems can be misleading. This is especially true for software like this, which is highly dependent on the specific hardware and software being used.

I mitigate this by using Acronis True Image to create a "baseline install snapshot" of the reference system before the FlightGear software is installed. This can be used to return the system to a pristine, unsullied state prior to testing.

Test Methodology Notes:
Linux and Windows testing will involve:

1. Installation testing:
    * Does it install correctly?
    * Does it install at all? (or does it puke its guts up, panic the kernel, cause general havoc, attempt to start a third world war, etc.)
2. Run-time testing:
    * Does the system - after install - come pre-installed with reasonable and sane defaults, so that the typical user can "get flying!" without needing a PhD in computer science? (i.e. The system, as installed, should be "ready to fly". Any necessary setup for particular and special cases should be optional. This specifically includes scenery libraries, TerraSync, etc. All of this should be properly configured at install time without additional effort.)
    * Does the system behave in a sane and reasonable manner? Can it handle different monitor resolutions? What about multiple monitors? (i.e. one monitor for the flight instruments and another - possibly larger - for the "out of the window" view.)
    * Does it handle navigating from one block of scenery data to another smoothly?
    * Does it handle placing the aircraft's starting position in a place remote from where it is "now"? (i.e. If the "default" location is somewhere on the West Coast of the U.S. - what happens if I suddenly decide to fly out of Moscow Russia, or Bangalore India? Does scenery update correctly?)
    * Does it handle shutdown-and-restart sanely? (i.e. If I ended my flight in Turkmenistan, do I restart where I ended?)
3. Testing limitations:
    * I cannot test flight "reasonableness", except in a rudimentary manner, as I am not a pilot and have absolutely ZERO experience of what a "real" aircraft is supposed to do in real life. (i.e. Obviously I do not expect my aircraft to magically turn into a griffin during flight. However, flight control sensitivity, aerodynamic realism, and the accuracy of the external or internal views - excepting obvious visual artifacts - are out of scope for my testing.)
    * For the same reasons, I cannot test visual realism of the scenery, again excepting obvious visual artifacts such as mountains being upside down, or a redwood growing right in the middle of the runway!
I do not know, but would hope, that there are separate test efforts for these items.
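The installation and run-time checks above lend themselves to partial automation. As a minimal sketch - the fgfs binary name, the command-line options, and the survival threshold are all my own assumptions, not documented FlightGear behavior - a harness could launch the simulator and report whether it survives its startup phase:

```python
import subprocess

def smoke_test(cmd, seconds=30):
    """Launch `cmd`, let it run for `seconds`, and report survival.

    Returns True if the process is still alive after the grace period
    (we then terminate it ourselves); False if it exited early, which
    for a simulator usually indicates a startup crash.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    try:
        # If the process exits within the grace period, that's a failure.
        proc.wait(timeout=seconds)
        return False
    except subprocess.TimeoutExpired:
        proc.terminate()
        proc.wait()
        return True

# Hypothetical invocation - binary name and options are assumptions:
# smoke_test(["fgfs", "--aircraft=c172p", "--airport=KSFO"], seconds=30)
```

A harness like this only catches hard startup failures, of course; the "ready to fly" and scenery checks above would still need a human in the loop.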

Please examine these recommendations and comment as necessary.

Jim (JR)
Last edited by jharris1993 on Wed Jan 04, 2017 4:31 pm, edited 1 time in total.

Re: Is there a "standard reference system" for testing installs?

Postby Thorsten » Wed Jan 04, 2017 12:47 pm

I'm not sure what problem you're trying to solve.

The situation is that developers typically run FG on the systems they have - I won't buy a Mac just to be able to test FG on a Mac; if someone donates one for the purpose I'll do it, but so far there have been no volunteers.

All major problems that show up for the developers and occur on hardware owned by us thus typically get reported quickly over the list and fixed.

The real issue is all the other problems: different hardware, different use cases, different flight patterns, different habits. Ideally they need to be brought to the attention of the respective maintainer one by one. For instance, issues with a specific aircraft need to be fixed by the aircraft maintainer, graphical issues with the renderer need to go to the maintainer of the effect, wholesale crashes need to go to the mailing list, and so on.

What has historically happened is that whenever huge lists of 30+ issues and oddities were posted, 2/3 of them ended up being lost: aircraft maintainers don't browse every bug report, just anything with 'their' aircraft in the title; someone gets stuck on the first few bugs and eventually forgets about the rest of the list; and so on. Also, lots of bugs need follow-up information, and the thread gets confusing quickly if that's queried for 10 issues simultaneously, again producing loss.

Imagine this a few days before the release, and you can see why this didn't work particularly well to actually address issues in the release candidate.

So any dedicated test of an install is likely to produce this result - a mixed list of issues not addressed to anyone in particular.

The current approach is to fix bugs continuously as they are reported and trust that sufficiently many people report. I don't know whether this has made the releases more stable, but it has led to a more sane time distribution of dealing with the bugs.

So in my view the real challenge is to quickly diagnose a bug and address a meaningful report to the respective maintainer - that's the bottleneck we're facing, and that won't change if we agree on some standard test (who then performs it?)
Thorsten
 
Posts: 9993
Joined: Mon Nov 02, 2009 8:33 am

Re: Is there a "standard reference system" for testing installs?

Postby jharris1993 » Wed Jan 04, 2017 1:36 pm

Since this discussion migrated from a thread on the Installation page, your confusion is not surprising.

Let me recap for those who missed the first season's episodes :D

1. I am, (was), attempting to install FG on a Win7 64 bit environment.
2. The install itself appears to have certain issues.
3. There are potentially significant run-time issues as well.

These issues may be related to
* My specific install
* The Windows port itself
- or -
* The software itself - regardless of platform.

Rather than just jump up and down screaming that "FG is a piece of GAGH!!", I wanted to verify that my issues were really issues with the software itself (or the Windows port thereof), instead of some peculiarity of my installation, my specific hardware, or something else outside the control of the developers. This led to my original question, "Is there a standard reference install" that I can use to test against? With this I could sort the sheep from the goats, so to speak, and then write what would (hopefully) be meaningful bugs that would stand a reasonable chance of being triaged correctly and fixed if possible.

This led to my follow-on question: Are there specifications or requirements that the developers use as a road map for the development process?

Maybe it seems that I am making "much ado about nothing", but I have been involved in one kind of QA or another since the early '70s, and software QA since the '80s. I want to offer my own skills in software QA, and do it in a controlled, repeatable way instead of submitting garbage bugs that go nowhere.

Secondary to all of this - once we have a good and repeatable Windows port, I want to help with the Windows Installer process to ensure that the installation conforms to certain specific Microsoft requirements for installs on Vista or later systems. (Viz., my posting at viewtopic.php?f=18&t=19745) To do this, I have to be able to install a known working install "correctly", report bugs if needed, and then help package it for formal installation.

Jim (JR)

Re: Is there a "standard reference system" for testing installs?

Postby Thorsten » Wed Jan 04, 2017 2:18 pm

Thanks for clarifying.

This lead to my original question "Is there a standard reference install" that I can use to test against?


There is not, and the obvious issue is that all developers would have to have access to the standard for this to be meaningful - but in reality most of us are on Linux (different distributions), some on Mac and very few on Windows (not all same version either).

I guess the idea makes more sense in a commercial development setup where the company simply supplies a standard environment to test against - I frankly don't see how it could be organized in an OpenSource environment among volunteers.

I suspect that simply having someone who regularly tests the Windows versions the build server produces and reports any issues to the devel list would go a long way toward improving the Windows experience.

Re: Is there a "standard reference system" for testing installs?

Postby jharris1993 » Wed Jan 04, 2017 4:29 pm

Thorsten wrote in Wed Jan 04, 2017 2:18 pm:I guess the idea makes more sense in a commercial development setup where the company simply supplies a standard environment to test against - I frankly don't see how it could be organized in an OpenSource environment among volunteers.


Ahhh! I think my high-reliability, "failure is NOT an option!", commercial and military testing background is peeking through. :lol:

I understand that - short of having the project be totally in-house somewhere commercially - there is no way to totally standardize a dev or test configuration. However, this doesn't mean that there should be no rigor at all within the dev or test efforts. Something needs to be standardized somewhere.

For example: What requirements and/or specifications are used to guide the development effort for Flight Gear? (I am talking about Flight Gear itself, not the individual aircraft, etc.) Surely there is SOME game plan or spec SOMEWHERE, right? (Or maybe the dev's load up a bong, take a few hits, and code away? :lol: )

With respect to testing a Linux-based project (which I have never done to any significant extent), the sheer volume of distributions is daunting. Can I "assume" (oooh, how I hate that word!) that Linux distributions are all similar enough that testing on one distribution (Ubuntu or its clones) is enough like testing on the others to make the test effort both valid and worthwhile?

In the absence of a standard reference system, are development and/or test system configurations published somewhere for review and/or comment?

    For those who didn't understand my prior posting today, this is exactly why I wanted to publish the spec and configuration of my system as explicitly as possible. Maybe someone else won't be able to duplicate my exact system, but they WILL know exactly how I tested, and the exact configuration I used. Maybe that will be important at some point. Maybe not. However it's better to have this kind of stuff documented than to leave everyone guessing.
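Publishing a system's exact configuration need not be done by hand. A minimal sketch using only Python's standard library - which fields are worth reporting is my own guess, and GPU/driver details would need platform-specific tooling on top of this:

```python
import json
import platform

def system_fingerprint():
    """Collect a reproducible, machine-readable summary of the test machine."""
    return {
        "os": platform.system(),
        "os_release": platform.release(),
        "os_version": platform.version(),
        "machine": platform.machine(),
        "processor": platform.processor(),
        "python": platform.python_version(),
    }

if __name__ == "__main__":
    # The JSON blob can be pasted into a bug report verbatim.
    print(json.dumps(system_fingerprint(), indent=2))
```

Running the script prints a JSON blob that testers could attach to every report, so each bug carries the same exact description of the machine it came from.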

This is all really new to me - Open Source test efforts, that is - and I am still trying to learn my way around. Hopefully testing for projects like this isn't just "throwing code against the wall to see what sticks."

Like I said before, I am hoping to either create, or use, some kind of testing standard so that my results aren't just garbage. Any idiot with two fingers can bang on a keyboard. Useful, relevant, and repeatable test results are a horse of an entirely different hue.

Jim (JR)

Re: Is there a "standard reference system" for testing installs?

Postby jharris1993 » Wed Jan 04, 2017 5:42 pm

Thorsten wrote in Wed Jan 04, 2017 12:47 pm:So in my view the real challenge is to quickly diagnose a bug and address a meaningful report to the respective maintainer


Exactly, my dear Watson!

This is the point I have been trying to make for the entire thread. You have to know exactly what you are testing, and exactly what you are testing for, to create bug reports that are worth a damn.

The real challenge is, as Thorsten so sagely observed:
1. ". . .to quickly [and accurately] diagnose a bug"
* Is it really a bug, or do I just have my head up my butt?
* Is it a Linux bug, or just a bug in Ubuntu/Mint?
* Is the "bug" I'm seeing in Windows really a Windows bug, or is it a FG bug in both Windows and Linux that nobody noticed before?
* Etc.

2. ". . .and address a meaningful report"
* This assumes you have the bug properly characterized. (see #1 above)
* This also assumes you have characterized it in a consistent and repeatable manner with lucid steps to reproduce that are actually reproducible.
* Etc.

3. ". . .to the respective maintainer."
* Who is that?
* How do I find whoever is the maintainer for feature (or section) "X" of the program?
* What is the process for submitting a bug, once all of this has been done?
* What follow-on, if any, is necessary?
(i.e. Who is responsible for scheduling the meeting, who's buying the pizza, and who's buying the beer? :shock: :lol: )
* Etc.

This is where having standards, specifications, and requirements comes in handy; it makes the rest of this much easier to do. It doesn't matter whether the project has a multi-zillion-dollar budget and rigid DOD specifications, or is a collaborative effort by the Open Source/FOSS community. Documented standards, specs, and requirements allow testing - and development - to make sense.

Jim (JR)

Re: Is there a "standard reference system" for testing installs?

Postby Thorsten » Wed Jan 04, 2017 7:02 pm

* Is it really a bug, or do I just have my head up my butt?


Typically we post it and ask if others recognize it or can reproduce it (unless it's an obvious graphical oddity or a crash).

* Is it a Linux bug, or just a bug in Ubuntu/Mint?


We never really get this kind of information unless a LOT of people respond - if you get five people reporting back with information on the same bug, that's a lot.

* Is the "bug" I'm seeing in Windows really a Windows bug, or is it a FG bug in both Windows and Linux that nobody noticed before?


As above. The problem is that in all cases you need plenty of statistics, i.e. people reporting across different OSes. The last case on the mailing list where James specifically asked for systematics was answered by three or four people - nowhere near enough to mine any useful patterns from.

* This assumes you have the bug properly characterized. (see #1 above)


There's a checklist provided in the support forum (does it happen for all aircraft and all airports or only some? Does the console log spit out any errors?)

Vexingly enough, a fair number of persistent bugs are not easily reproducible but require a certain trigger or usage pattern (memory leaks are a prime example - VA pilots who leave FG on for 10 hours see them a lot more often than developers who check for 5 minutes whether the last feature coded runs).
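Part of that characterization - "does the console log spit out any errors?" - can be mechanized on the reporter's side. A sketch that pulls suspicious lines out of a captured log; the severity keywords here are my assumption, not FlightGear's actual log format:

```python
import re

# Keywords assumed to mark interesting lines; adjust to the real log format.
SEVERITY = re.compile(r"\b(error|warn(?:ing)?|fatal|fail(?:ed|ure)?)\b",
                      re.IGNORECASE)

def extract_problems(log_text):
    """Return (line_number, line) pairs for log lines that look like trouble."""
    return [(i, line.strip())
            for i, line in enumerate(log_text.splitlines(), start=1)
            if SEVERITY.search(line)]

# Hypothetical usage, assuming the console output was saved to fgfs.log:
# with open("fgfs.log") as f:
#     for num, line in extract_problems(f.read()):
#         print(f"{num}: {line}")
```

Attaching just the matched lines (with their line numbers) keeps a report short while still pointing the maintainer at the right spot in the full log.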

* How do I find whomever is the maintainer for feature, (or section), "X" of the program?


If you have a bug that only occurs for a certain aircraft, you can check whether that aircraft has a development thread in the forum. If yes, you post there; if not, you post with a meaningful title.

Otherwise you post in the forum (or to the mailing list), where people who know some FG internals (like myself...) run a kind of triage center and decide from the posted information whether to go fix something, ping the maintainer of a subsystem, ask for follow-up information, or do nothing.

The reporting user can't really be expected to do this classification, as it requires some insight into FG internals and a bit of experience with past bugs.

* What follow-on, if any, is necessary?


You get asked questions till it's fixed or you stop feeling comfortable (for some rendering issues, I need reporting users to change or comment out code blocks, and not everyone goes that far...)

This is where having standards, specifications, and requirements come in handy


It would come in handy, but you're herding a bunch of cats here. You're looking at a group of independent-minded individuals who work in their spare time and don't much like being told what to do.

Translation - if someone doesn't adhere to the standard, what do you do? Fire a volunteer? Prevent him from coding? It's not so easy.

Re: Is there a "standard reference system" for testing installs?

Postby Hooray » Wed Jan 04, 2017 7:32 pm

Sorry, I only read a few lines of the previous response, so I'm just responding briefly: FlightGear is cross-platform software, so unless you are facing something during install or startup/reset, it is rather unlikely that you will directly run into OS-specific issues that are not obviously related to the OS/GPU at hand (e.g. OS-specific file dialogs, error dialogs, driver glitches, etc.).

For instance, features like the route manager are built on top of the autopilot, which in turn is built on top of the property tree, all of which end up requiring/controlling the FDM. Under the hood, all of these systems use the same set of multi-platform APIs, which is to say that, with a few rare exceptions, OS-specific issues are relatively rare. The most infamous bug proving all of this wrong is the so-called "navdb cache", a SQLite-based navaid database that seems to contain/cause issues that are highly OS-specific and can hardly be reproduced by people on different hardware.

But even then, a corresponding set of "standards" or "reference systems" would hardly be of much help - certainly not when compared to a handful of backtraces and/or logs created by people affected by the corresponding issue(s).

Speaking in general, the idea of creating such a "reference system" sounds good - in practice, however, it simply isn't feasible due to the way the project works and in light of the constraints it is facing (and has been facing since its inception).

All of this is to say that people are generally aware of such issues, and that the bug tracker and the mailing list are the primary means of providing actionable bug reports.
Thus, compared to the resources that would go into setting up and documenting such a "standard reference system", even just a single "power user" who regularly follows the mailing list/bug tracker to help review/triage and verify bug reports would be a huge game changer - especially someone able to test pre-release binaries or even nightly builds somewhat regularly, and maybe even learn how to provide a backtrace, how to run a leak-checking tool, or how to profile the fgfs binary.

None of this is intended to discourage anyone at all - it's just the way it is, and we gotta face the facts.

Something that a few of us have independently been working towards is to include more/better information in the binary itself (e.g. via the help/about dialog) and provide additional tools built right into the binary (such as the built-in profiler).

In my opinion, this is the most promising path forward - because it enables power-users to provide valuable information much more easily, while also allowing them to interface with the rest of the user-community to help review/triage bug reports and provide useful information to people able to actually debug/fix bugs.

To learn more, see:


http://wiki.flightgear.org/CrashRpt
http://wiki.flightgear.org/Towards_bett ... leshooting
http://wiki.flightgear.org/Resource_Tra ... FlightGear
http://wiki.flightgear.org/Draw_masks
http://wiki.flightgear.org/Built-in_Profiler
Please don't send support requests by PM, instead post your questions on the forum so that all users can contribute and benefit
Thanks & all the best,
Hooray
Help write next month's newsletter !
pui2canvas | MapStructure | Canvas Development | Programming resources
Hooray
 
Posts: 11186
Joined: Tue Mar 25, 2008 8:40 am
