Thorsten, I invite you to continue this discussion via PM - for now, I am just responding to things relevant in the scope of this thread:
Thorsten wrote:the degree of global RAM utilization isn't a secret, it is well known what uses what amount of RAM
see, that is the whole problem - you think there's one number, while that very number is far too coarse-grained (which should be obvious to you, especially given psadro_gm's comments):
Thorsten wrote:I think we're quite aware how we spend resources.
please don't get this wrong, but psadro_gm stated pretty clearly that he IS NOT (yet) aware of where all the memory is going, which is exactly why he had to use the brute-force method of disabling lots of code in SimGear to gain the aforementioned 60% - which kind of proves where the underlying problem is (we don't know exactly where/why/when resources are spent):
Subject: Level of detail
psadro_gm wrote: I would turn off random objects and building, and I customized simgear to ignore everything in the .STG file except airport and terrain .btg files ( no shared/static models )
With current fg/sg/fgdata, this would take 12.5 GB of memory.
[...]
that this was crazy - the number of triangles/vertices was way too low to use so much memory. So I decided to gut simgear of anything 'unnecessary' for converting .BTG to OSG arrays.
I'll be adding more and more code back to see what is grabbing most of the memory in the next couple of days. Then I'll continue with the LOD work - which should get us even closer to the horizon.
I don't think this is too difficult to interpret/understand - the underlying problem is unknown, right?
So I am not at all "slapping" anybody in the face by rehashing the underlying problem in layman's terms.
Or by stating that certain infrastructure would have made this approach unnecessary, and that these findings could have been made much more easily by others that way.
Thorsten wrote: we need a custom detector for each hue
well, let's move away from analogies and back to C++ code, where you can - for instance - use placement new, or even register a completely custom allocator (as in: one per subsystem).
So you could track allocations per subsystem. Equally, you can traverse the scene graph to see how much is added per feature (which is how the OSG StatsHandler works - IIRC it uses a simple visitor class to get the stats out of the tree). So the point was not that we need to know "what to look for" - but that only a handful of infrastructure changes would be needed to allow most features to be optionally tracked, i.e. beyond just process-level RAM utilization (which you're so happily referring to here...)
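To make that concrete, here's a minimal sketch - class and counter names are made up, this is NOT actual SimGear code - of a subsystem-specific operator new/delete pair that books every allocation against a per-subsystem counter, so memory use could be reported per feature instead of just per process:

Code:
#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <new>

// one counter per subsystem we want to track (name is hypothetical)
static std::atomic<std::size_t> g_terrainBytes{0};

// any terrain-related class can opt in by overloading operator new/delete;
// the allocation size is stashed in a small header so delete knows what to subtract
struct TerrainTile {
    float elevation[64 * 64];

    static void* operator new(std::size_t n) {
        void* raw = std::malloc(n + sizeof(std::size_t));
        if (!raw) throw std::bad_alloc();
        *static_cast<std::size_t*>(raw) = n;   // remember the size
        g_terrainBytes += n;                   // account it to this subsystem
        return static_cast<char*>(raw) + sizeof(std::size_t);
    }

    static void operator delete(void* p) noexcept {
        if (!p) return;
        char* raw = static_cast<char*>(p) - sizeof(std::size_t);
        g_terrainBytes -= *reinterpret_cast<std::size_t*>(raw);
        std::free(raw);
    }
};

int main() {
    TerrainTile* tile = new TerrainTile();
    std::printf("terrain subsystem: %zu bytes live\n", g_terrainBytes.load());
    delete tile;
    std::printf("terrain subsystem: %zu bytes live\n", g_terrainBytes.load());
    return 0;
}

Obviously, a real integration would want to expose such counters next to the other stats (e.g. via the property tree) rather than printf - but the point is that this is a handful of lines per subsystem, not a research project.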
Thorsten wrote: to put in a device which counts listeners per frame or detects Nasal running at FDM rate is sort of a non-trivial thought
those are examples you've come up with, so I'm just responding now: both are pretty easy to do (without being necessary, though). Besides, you would not want to count "listeners per frame", but rather identical listeners across a configurable time-span (a listener being an SGEvent with an associated C/C++/Nasal callback registered). Equally, the main issue with Nasal code isn't that it's running per FDM iteration - it is that callbacks are often invoked too frequently by the main loop (which is not FDM-coupled), i.e. registered timers/listeners end up invoking the same callbacks over and over again, which is one of the most common coding mistakes people make - and reset/re-init makes it even worse.
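Again, just as an illustrative sketch - the class name, the thresholds and the dispatch hook are all invented here, this is not an existing SimGear API - something like the following, fed from the timer/listener dispatch path, would already be enough to flag callbacks that fire suspiciously often within a configurable window:

Code:
#include <chrono>
#include <cstdio>
#include <map>
#include <string>

// counts invocations per named callback within a configurable time window
// and reports the ones firing more often than expected
class CallbackProfiler {
public:
    using Clock = std::chrono::steady_clock;

    explicit CallbackProfiler(double windowSec = 5.0, unsigned maxPerWindow = 600)
        : _window(windowSec), _maxPerWindow(maxPerWindow), _windowStart(Clock::now()) {}

    // to be called wherever timers/listeners dispatch a callback, keyed by
    // something stable (e.g. the file:line the callback was registered from)
    void record(const std::string& callbackId) {
        ++_counts[callbackId];
        double elapsed = std::chrono::duration<double>(Clock::now() - _windowStart).count();
        if (elapsed >= _window) {
            for (const auto& entry : _counts)
                if (entry.second > _maxPerWindow)
                    std::printf("suspicious callback '%s': %u calls in %.1f s\n",
                                entry.first.c_str(), entry.second, elapsed);
            _counts.clear();
            _windowStart = Clock::now();
        }
    }

private:
    double _window;
    unsigned _maxPerWindow;
    Clock::time_point _windowStart;
    std::map<std::string, unsigned> _counts;
};

// usage (hypothetical): one profiler per dispatcher, e.g.
//   profiler.record("Nasal/local_weather.nas:123");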
So there's really just a handful of metrics needed to let people quantify run-time behavior with selected features enabled - even beyond just RAM/listeners etc. - and that is exactly how the OSG StatsHandler works internally. It's just never been integrated with most of SG. But it's definitely possible, as can be seen from Zakalawe's draw-masks - which would be straightforward to extend in order to track scene graph complexity per feature. Again, the fact that this has never been done is the sole reason psadro_gm had to manually #ifdef sections of SimGear code, rebuild a custom SG version, and re-add stuff one by one to track memory utilization per re-added code snippet.
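And to illustrate the scene-graph side - again only a sketch, this is in the spirit of what the StatsHandler does, not its actual code - a simple osg::NodeVisitor that sums up the vertices below a feature's subgraph is basically all that's needed to report scene-graph complexity per feature:

Code:
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/NodeVisitor>

// walks a subgraph and accumulates the number of vertices found in its geometry
class VertexCountVisitor : public osg::NodeVisitor {
public:
    VertexCountVisitor()
        : osg::NodeVisitor(osg::NodeVisitor::TRAVERSE_ALL_CHILDREN),
          _vertices(0) {}

    virtual void apply(osg::Geode& geode) {
        for (unsigned i = 0; i < geode.getNumDrawables(); ++i) {
            osg::Geometry* geom = geode.getDrawable(i)->asGeometry();
            if (geom && geom->getVertexArray())
                _vertices += geom->getVertexArray()->getNumElements();
        }
        traverse(geode);
    }

    unsigned getVertexCount() const { return _vertices; }

private:
    unsigned _vertices;
};

// usage (assuming 'featureRoot' is the group a feature attaches its nodes to):
//   VertexCountVisitor v;
//   featureRoot->accept(v);
//   // report v.getVertexCount() for that feature

Hook something like that up per feature group (random buildings, shared models, etc.) and you get per-feature scene-graph cost without ever having to #ifdef and rebuild SimGear.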
If that isn't obvious to you, I can explain it via PM - but I'd like to reiterate my suggestion not to hijack this discussion (I'll ask psadro_gm if he wants me to split our responses so as not to distract from the real discussion here).