by Hooray » Tue Aug 25, 2015 10:23 pm
The background/motivation for this is explained pretty extensively on the wiki; it is inspired by discussions among core developers on the devel list. The main goal is to understand where resources are utilized and to identify "expensive" features/settings, so that FG performance and resource utilization can be better understood and optimized over time.
In its simplest form, it will merely tell you RAM/VRAM utilization for different startup/run-time settings, so that the impact of -for example- using different aircraft/locations (or weather systems) can be compared in varying situations. Equally, this would allow us to grow a library of startup/run-time profiles and compare the memory footprint for different hardware (think graphics cards).
Such a library of startup settings could be grown and maintained as part of $FG_ROOT - at some point, this could even include scripted flights (think route-manager driven, and/or replay/fgtape based).
Once there is a handful of different test cases, those could be used for regression testing purposes - e.g. to compare FG 3.2 RAM/VRAM (memory) occupancy against 3.6 and 3.8. Equally, that would make it much easier for people to spot serious bugs/regressions, such as an aircraft (or Nasal script) suddenly utilizing much more resources.
The underlying infrastructure for this would mainly require certain metrics to be exposed (and updated) via the property tree, so that they can be used elsewhere - for example, by the CSV logging infrastructure to plot diagrams of RAM/VRAM utilization and compare FG 3.2 and 3.4 against 3.6 or 3.8.
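To make the idea concrete, here is a minimal sketch of how such CSV logs could be compared across versions. Note that the column name (`ram-used-mb`) and the sample data are purely illustrative assumptions - they are not actual FlightGear property paths or real measurements:

```python
# Sketch: compare peak RAM between two hypothetical CSV logs as the
# property-tree/CSV-logging infrastructure might produce them.
# Column names and values are made up for illustration.
import csv
import io

def peak_ram_mb(csv_text, column="ram-used-mb"):
    """Return the peak value of the given column across all logged samples."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return max(float(row[column]) for row in reader)

# Fabricated sample logs standing in for runs of two different FG versions.
log_3_2 = "elapsed-s,ram-used-mb\n0,512\n60,830\n120,845\n"
log_3_6 = "elapsed-s,ram-used-mb\n0,540\n60,910\n120,980\n"

delta = peak_ram_mb(log_3_6) - peak_ram_mb(log_3_2)
print(f"Peak RAM regression: {delta:.0f} MB")  # -> Peak RAM regression: 135 MB
```

With identical startup/run-time profiles, a script like this (or a plotting tool fed the same CSV files) would flag a memory regression between releases automatically.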
In the long term, these metrics would also be required for any kind of "benchmark" (comparing FG performance on different systems, using identical settings) - but also to do automated feature-scaling, where FG is made aware of certain issues (such as your OS running out of RAM/VRAM), and scaling down some settings to take such circumstances into account.
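The feature-scaling idea above could look something like the following sketch - stepping down a quality setting whenever reported free memory drops below a threshold. The level names, threshold, and function are all hypothetical, not real FlightGear APIs or settings:

```python
# Sketch of automated feature scaling: step down a detail setting when
# free RAM falls below a threshold. All names here are illustrative
# assumptions, not actual FlightGear properties or APIs.
LOD_LEVELS = ["ultra", "high", "medium", "low"]

def scale_down(current_level, free_ram_mb, threshold_mb=1024):
    """Return a reduced detail level when free memory falls below the threshold."""
    if free_ram_mb >= threshold_mb:
        return current_level  # enough headroom, keep the current setting
    idx = LOD_LEVELS.index(current_level)
    return LOD_LEVELS[min(idx + 1, len(LOD_LEVELS) - 1)]

print(scale_down("high", 600))   # -> medium (below threshold, step down)
print(scale_down("high", 2048))  # -> high (enough RAM, unchanged)
```

In FG itself, the "free RAM" input would come from exactly the kind of metrics this effort wants to expose via the property tree.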
Once you take a look at the wiki, you will find a number of quotes linking back to the devel list, illustrating that a number of core developers have been wanting to support benchmarks and feature scaling for years.
Likewise, most forum users around here are aware of those infamous "OpenGL out of memory" errors, all of which demonstrate that FG has grown so much in complexity that it is often very hard to determine where, how, and why certain resources are spent - and that problem isn't specific to graphics or RAM/VRAM utilization.
For instance, another recent issue involved leaking listeners; the fix by TorstenD came only after the leak had gone unnoticed for several release cycles, simply because FG does not keep track of such metrics automatically.
And then, there's the Nasal GC (Garbage Collector), which is infamous for being triggered by certain Nasal scripts, and causing performance issues for many people.
The patches related to this effort won't magically make such bugs/issues disappear, but they will make them easier to identify - imagine being able to tell exactly why an aircraft like the 777 or extra500 performs so slowly and consumes so many resources.
No matter if you are interested (or not) in benchmarks or feature scaling: we all want FG to perform sufficiently well (=fast) and make good use of our hardware resources - in many cases, this isn't currently the case, and while it may go unnoticed for people on extremely powerful systems, it will severely affect those among us on much less powerful systems.