
fg3.0rc1 memory usage

This is the archive of topics about the 3.0 release candidates.

Re: fg3.0rc1 memory usage

Postby f-ojac » Fri Feb 07, 2014 5:35 pm

As noticed earlier, we are aware that the 2.01 terrain has unneeded vertices, mostly due to the OSM roads integration.
Pete pushed quite a few fixes for terragear; we'll have to schedule a new worldwide terrain generation to try to close this issue.
f-ojac
 
Posts: 1304
Joined: Fri Mar 07, 2008 10:50 am
Version: GIT
OS: GNU/Linux

Re: fg3.0rc1 memory usage

Postby kuifje09 » Fri Feb 07, 2014 7:49 pm

After I ran into the same memory issues with FG 3.x, I was pointed to this thread.

I thought I had a clever idea, but I see it has already been suggested here.

Reduce visibility and delete tiles that are out of sight; use less detailed scenery when flying high.
You don't need tiles that are out of sight, nor trees and other details, when you are high in the sky.
Maybe use both the new and the old scenery, depending on the flight level you fly.

Whether it's 32 bits or 64 bits is beside the point; 32 bits is just enough.

Programming as if there were no limit to memory is wrong.

I know what has been made so far is very clever; I cannot make it better, nor change parts of it myself.
That's not the point. I must say "great job" to what you developers did. But don't let it derail. (As is happening now...)

edit: removed typo.
kuifje09
 
Posts: 596
Joined: Tue May 17, 2011 9:51 pm

Re: fg3.0rc1 memory usage

Postby Thorsten » Sat Feb 08, 2014 7:32 am

Programming as if there were no limit to memory is wrong.


So is programming as if the limit were 2 GB.

Whatever you do, someone will complain. During a normal week, I read complaints that the scenery of FG is really outdated and that we should get much better scenery. I read complaints that people would like to open the visibility range to 100 km but don't get enough terrain displayed. I read complaints that FG uses too much memory/CPU. I read complaints that FG doesn't utilize all the memory/CPU on a high-end machine and should be able to run much better. I read complaints that we should all commit to osgEarth, dispense with scenery generation and get it all via the internet. I read complaints that we should never assume in development that a broadband internet connection is available.

What this really boils down to is that the majority of users ask the question:

Why aren't the FG defaults optimized for my system?

The answer to that is that people run FG on 7-year-old legacy systems with no decent graphics card just as well as on brand-new, high-powered gaming machines. That's a much wider range of architectures than X-Plane supports, for instance. And there's no way to come up with defaults that'll make everyone happy.

So the only solution is that users configure FG such that it runs best on their system. We can't know whether people want to fly helicopters close to the ground and want to commit their memory to detailed trees and buildings, or whether they want to fly airliners and want to see lots of terrain. We can't know whether they need a stable 60 fps, or whether they consider it acceptable to drop to 10 fps intermittently if the visuals are good enough.

So the theme here is freedom. The configuration is largely open. You can instruct FG to run settings which are completely unsuitable for your computer and which may ultimately crash the application. FG allows you to make use of 12 GB of memory if you have it, but if you use those settings on a 32-bit system, they won't run.

The downside of that is that users need to take some responsibility and really need to configure FG to get the best out of the experience. And many apparently think it's easier to complain than to do that.

You don't need tiles that are out of sight, nor trees and other details, when you are high in the sky.


Oh, imagine for a moment we actually did this. You may not need these, but I can see the flood of complaints along the lines of 'I've tried to take a screenshot of my airliner at 36,000 ft, and the terrain on the sides doesn't extend all the way to the horizon, this looks stupid, why don't you fix this?'

Just because you don't need a function doesn't mean no one else will use it.
Thorsten
 
Posts: 12490
Joined: Mon Nov 02, 2009 9:33 am

Re: fg3.0rc1 memory usage

Postby Thorsten » Sat Feb 08, 2014 7:50 am

An afterthought:

I think what is not appreciated is that the new ocean depth effect is very cool but quite insanely memory-consuming. It relies on an 8192x4096 texture which GIMP tells me has a raw data content of 348 MB. Add mipmapping to that, and if you have the water shader on, half a GB of your memory may be gone already.
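
(A rough sanity check, assuming the usual rule of thumb that a full mipmap chain adds about one third on top of the base level: 348 MB * 4/3 ≈ 464 MB, so half a gigabyte for this single texture is about right before anything else is counted.)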

I know ALS has a lower-quality version of the water reflection shader that does not use depth mapping, but I'm not sure the other frameworks supply that. So the terrain mesh may not be the only culprit here.
Thorsten
 
Posts: 12490
Joined: Mon Nov 02, 2009 9:33 am

Re: fg3.0rc1 memory usage

Postby Hooray » Sat Feb 08, 2014 8:02 am

I think the main problem is not so much people having the newest hardware - complaints from them about FG not leveraging all their horsepower are not likely to be as negative as feedback from users who notice that the stock/default FG startup settings have reached a level where FG is no longer able to start up on hardware older than 5 years, which is probably why we keep seeing discussions among users who are still running pre-OSG FG versions.

In my opinion, scaling up is not going to be as difficult as scaling down - and for that reason, I would love FG to start up with an extremely minimal startup profile and dynamically determine whether the hardware is available to do more resource-hungry things, such as using shaders, multiple cores, many GB of RAM, etc.

Meanwhile, we have reached a point where it has become a real piece of work to start FG on old computers, or even on computers with Intel graphics.
Admittedly, I do not expect to be able to run X-Plane or MS FSX on such computers, but yeah - I would love to be able to see FG running on such computers, because it also means that we're doing something right here. Keep in mind that there are probably more users using FG because of its "cost" (i.e. being free) than because they expect the latest eye-candy.

I am fully aware of people complaining about FG's visuals, and compared to FSX or X-Plane we're often not quite there yet - we have also encountered several algorithmic bottlenecks over time, which need to be fixed first.

Being able to start up FG using a "safe" subset of enabled features would be great for people with older/underpowered hardware - but it would also be great for us, i.e. from a troubleshooting standpoint, but also from a benchmarking perspective, because we could dynamically scale up settings based on what's available.
This is part of the reason why I originally started documenting the settings I use whenever I run FG on old computers - and this article, while still pretty "young", has seen over 5k views since I added it. It would be interesting to see, by comparison, how many views an article would get that details how to scale up settings to get the latest eye-candy. :D

Note that I am certainly not against eye-candy features, but I obviously believe in having a more basic startup mode and making eye-candy entirely optional and fully configurable. FG crashing during startup, or showing single-digit frame rates, is just disappointing, and probably not necessary - given that outdated versions of MS FSX or X-Plane could quite happily be run on such computers.

The recent memory issues are a fairly "new" thing, but they demonstrate that we need to deal with the problem on a fundamental level - not being aware of where resources are spent (CPU, RAM, VRAM) just isn't helping.

Look at the number of times some new feature turned out to be highly problematic for end-users, even with just the default settings.

We have many content developers (aircraft/scenery) who are much more active than the core developers, so it is easy to foresee some problem developing here as long as people have no way of knowing how much their work contributes to overall simulator load - no matter whether it's highly detailed scenery, a highly detailed aircraft/cockpit/texture, or custom scripts.

Likewise, it is obvious that even core developers are increasingly unaware of the footprint of some features they're adding, until it is too late and causes frustration - especially in the eye-candy/scenery department - so the problem is not specific to content developers who happen to use hugely complex 3D models and textures.

Obviously, we can tell people that some feature is "heavy" (such as the 777, for example) and that things need to be optimized (complexity reduced) - but then again, that's just as short-sighted as suggesting that we need to stop using Nasal because we're having GC issues. Yes, we can fix certain problems, but we cannot tackle the whole problem that way - we need a design-level solution that allows us to inspect the runtime footprint of different features, no matter whether it's core code (subsystems, features) or "content" (aircraft/scenery/scripts).

OSG itself has certain mechanisms to help with parts of this, such as tracking osg::ref_ptrs and doing LOD management (aircraft/scenery) - but these tools need to be adopted to be useful, which is currently not the case.

Technically, there's really no reason why FG 4.0 shouldn't perform better on 2006-era hardware than FG 1.0 would - just look at other software like your browser or your operating system. In the case of Linux, you can expect Linux 3.x to perform much better on 2000-era hardware than RedHat 7 did... but currently, our programming model is kinda broken, because we keep adding features without being aware of their footprint - and there's nobody to blame, because we are simply missing the tools that would tell us how expensive a certain feature, aircraft or scenery really is.

I am convinced that having exactly this sort of info available would also help people developing more eye-candy features, because they would much better understand where resources are burnt.
Please don't send support requests by PM, instead post your questions on the forum so that all users can contribute and benefit
Thanks & all the best,
Hooray
Help write next month's newsletter !
pui2canvas | MapStructure | Canvas Development | Programming resources
Hooray
 
Posts: 12707
Joined: Tue Mar 25, 2008 9:40 am
Pronouns: THOU

Re: fg3.0rc1 memory usage

Postby Thorsten » Sat Feb 08, 2014 12:20 pm

In my opinion, scaling up is not going to be as difficult as scaling down - and for that reason, I would love FG to start up with an extremely minimal startup profile and dynamically determine whether the hardware is available to do more resource-hungry things, such as using shaders, multiple cores, many GB of RAM, etc.


At least starting with a minimal profile should be trivial to implement: define a property /sim/use-minimal-profile at startup, have a Nasal script check whether that property is set, and if so, have it execute a batch of setprop() statements forcing all config options to minimal. No need to bother users with modern hardware, and a well-defined starting point for users with older hardware.
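
A minimal sketch of what that could look like in Nasal (the trigger property /sim/use-minimal-profile is the one proposed above; the individual rendering property paths and values below are only illustrative placeholders and may differ between FG versions):

    # Sketch only: force expensive options down when a minimal profile is requested.
    var apply_minimal_profile = func {
        var flag = getprop("/sim/use-minimal-profile");
        if (flag == nil or !flag) return;                    # property absent or off
        setprop("/sim/rendering/shaders/quality-level", 0);  # illustrative paths -
        setprop("/sim/rendering/random-vegetation", 0);      # adjust to your version
        setprop("/sim/rendering/random-buildings", 0);
        setprop("/environment/visibility-m", 16000);
    };
    # apply once the sim has finished initializing
    setlistener("/sim/signals/fdm-initialized", apply_minimal_profile);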

I don't believe in dynamic up-scaling, because you can't know what people want. I'm quite cool with 20 fps, Mathias has made it quite clear that anything below a steady 60 fps isn't acceptable for him, and I still see people fly with 10 fps. Airliner pilots would prefer large visibility over detailed trees; helicopter pilots usually ask for better (= more memory-intensive) trees. This has to be done by the user who wants certain things done.

At the same time, I think dynamic automatic up-scaling would be technically complicated, and I think the limited manpower would be better spent doing other things.

The recent memory issues are a fairly "new" thing, but they demonstrate that we need to deal with the problem on a fundamental level - not being aware of where resources are spent (CPU, RAM, VRAM) just isn't helping.


No, they're not. The issue is that when I started using FG, the default visibility was some 16 km, and 35 km was considered outrageously large. I started opening it up to realistic values on my old 32-bit system and had several crashes as a result; there was a learning curve as to what I could and could not do and what the safe options were, so after a while I was flying with safe options.

If people nowadays were content to fly with 16 km visibility, sparse trees, etc., there wouldn't be any issues. But people are no longer content to use such settings (rightfully so), and so they discover the same issues I had seen a few years earlier.

Fast-forward 5 years, and you'll perhaps see people encountering the same issues I'm now seeing when opening the visibility up to 500-600 km.

We have many content developers (aircraft/scenery) who are much more active than the core developers, so it is easy to foresee some problem developing here as long as people have no way of knowing how much their work contributes to overall simulator load - no matter whether it's highly detailed scenery, a highly detailed aircraft/cockpit/texture, or custom scripts.


You're kidding yourself here. Someone who uses 4096x4096 texture sheets for aircraft usually knows pretty darn well that they cost lots of resources. He just thinks that that's how he wants to spend resources.

I know that making clouds visible out to 75 km costs a lot of resources. Should I *not* make this available because I know there are some expensive aircraft, and you can't have both? In fact, for a long time I did not make it available; the GUI limited the visibility to a maximum of 45 km, which got me a flood of complaints that clouds should really be visible farther out.

We have plenty of 'light' aircraft in the hangar - usually these draw comments that FG includes too many substandard aircraft.

Look at the number of times some new feature turned out to be highly problematic for end-users, even with just the default settings.


Look at the number of times where they didn't. We don't know the unproblematic cases. And to assume that a legacy system will run today's software with default settings is inviting trouble.

but currently, our programming model is kinda broken, because we keep adding features without being aware of their footprint - and there's nobody to blame, because we are simply missing the tools that would tell us how expensive a certain feature, aircraft or scenery really is.


So, otherwise we would keep adding features knowing full well their footprint (I'd argue I have a very good idea how expensive the stuff I'm adding is right now - and I still do it). Why would this improve the situation?
Thorsten
 
Posts: 12490
Joined: Mon Nov 02, 2009 9:33 am

Re: fg3.0rc1 memory usage

Postby Hooray » Sat Feb 08, 2014 1:38 pm

I wasn't even talking about automated up-scaling; like you say, this should be left to the user - even though it could be partially supported by "runtime profiles" with different settings.

Someone who uses 4096x4096 texture sheets for aircraft usually knows pretty darn well that they cost lots of resources. He just thinks that that's how he wants to spend resources.


I am not so sure about that - most aircraft developers are involved in 3D modeling and texturing and are not necessarily coders, i.e. they may not necessarily understand the impact certain choices have on frame rate and frame spacing. I have seen too many screenshots of highly detailed cockpits posted by aircraft developers who are apparently also getting just 10-15 fps - so they evidently don't know how to solve the problem. Having a list of expensive resources (3D models, textures, scripts) would be useful for identifying problematic components - I don't think most end-users would accept a 777 where it is obvious that 60% of the load is caused by heavy textures and 3D models with lots of vertices; likewise, Nasal scripts having a 30-40% impact on frame spacing and latency would not be accepted for very long either. Ideally, this would be part of a "review" process once aircraft are committed to fgdata, or at least if they are to be considered for inclusion in a release.

Look at the number of times where they didn't. We don't know the unproblematic cases.

Which kinda proves the point that we need better diagnostics to identify where resources are spent, and on what - in general, as part of the simulator, but also in combination with certain settings, aircraft and scenery.

So, otherwise we would keep adding features knowing full well their footprint (I'd argue I have a very good idea how expensive the stuff I'm adding is right now - and I still do it). Why would this improve the situation?

I don't think you're representative of the typical fgdata developer - most people have zero clue how to measure their resource usage, while you have repeatedly proven even seasoned contributors wrong with regard to performance statements, all based on benchmarking and common sense - many people would not even know how to come up with representative test scenarios. Then again, your own profiling is obviously also limited, i.e. to shaders & Nasal, I assume? Which makes sense, because that's where you mainly contribute - but there are obviously other areas in FG that also have an impact. And often enough, things cannot be looked at in isolation, because of mutual interdependencies.

Overall, it's kinda moot discussing this with you specifically, because you are obviously performance-aware of your own stuff - even though your tests are not necessarily based on the diagnostics that a software engineer would use, as could also be seen in the GC debate, where people brought up issues that cannot be measured using just the methods and tools that you're using. So this is not simply a black & white issue. There's only so much that someone can accomplish with a certain tool set - no matter how clever and resourceful that someone may be.

It is beyond doubt that FG would be in better shape if more fgdata contributors were as able as you are to evaluate the impact of their added features - on the other hand, it's also beyond doubt that FG would be in much better shape if people could more easily identify problematic systems/features - no matter whether those are in scripting space or in C++ space.

Currently, we are spending resources without having any clue how much we have left - no matter whether it's CPU or RAM/VRAM; our tools are pathetic and not well developed to help with this sort of thing, as has been brought up numerous times by others before.

Sure, you can investigate performance issues, and depending on your background you may even be skilled and successful at it - but that requires a conscious effort. Having an integrated system that monitors CPU/RAM usage isn't uncommon at all in complex software - I'm pretty sure that even your browser has a corresponding "task manager", not to mention your OS. These tools are there for a reason - and any educated software engineer will tell you that you can only develop software up to a certain point before you need access to such details, or things are likely to bite you sooner or later...

Again, I don't believe in having academic debates like these - there's a reason that all aircraft (or cars) have not only a gauge measuring your speed, but also gauges for measuring resource consumption (fuel) - you cannot expect to spend a ton of cash each day without knowing how much you have in total - and that's really what this is all about, nothing else.
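
For what it's worth, a crude version of such a gauge can already be scripted - here is a tiny sketch in Nasal, using only the stock /sim/frame-rate property (anything deeper, like per-subsystem RAM/VRAM tracking, would need core support):

    # Sketch only: print the frame rate once per second, a poor man's resource gauge.
    var fps_monitor = maketimer(1.0, func {
        print("fps: ", getprop("/sim/frame-rate") or 0);
    });
    fps_monitor.start();
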
Please don't send support requests by PM, instead post your questions on the forum so that all users can contribute and benefit
Thanks & all the best,
Hooray
Help write next month's newsletter !
pui2canvas | MapStructure | Canvas Development | Programming resources
Hooray
 
Posts: 12707
Joined: Tue Mar 25, 2008 9:40 am
Pronouns: THOU

Re: fg3.0rc1 memory usage

Postby Thorsten » Sat Feb 08, 2014 3:30 pm

Again, I don't believe in having academic debates like these - there's a reason that all aircraft (or cars) have not only a gauge measuring your speed, but also gauges for measuring resource consumption (fuel) - you cannot expect to spend a ton of cash each day without knowing how much you have in total - and that's really what this is all about, nothing else.


Okay, then without going to great length: I think you're trying to linearize a complex system by analyzing the performance of its components, and that won't be a meaningful predictor of how the system as a whole performs. The only meaningful measure in the end, which tells you how the whole system performs, is framerate/frame latency (and perhaps memory occupancy, although that's kind of moot: once you swap, framerate takes a dive). And that we have.

Just to give a few examples:

Stuart's first version of random buildings, all merged into a single object and processed by the GPU, is faster than the instancing we have now, provided you have enough memory. Instancing, however, consumes less memory. It's a tradeoff situation with no clear solution.

Close to the ground, doing per-vertex lighting is always superior in performance to per-fragment lighting, as a few triangles will dominate what you see on the screen. The situation reverses beyond some altitude, with the precise point at which this happens dictated by the performance of the GPU's vertex vs. fragment pipeline. Again a tradeoff without a clear solution. You can add clouds into the equation - with a large cloud count, there's additional load on the vertex pipeline, changing the turnover point for the optimal strategy.

On older architectures, passing a uniform into a fragment shader is handled by compiling the fragment shader every frame - a big loss in framerate. So you're better off passing a varying from the vertex shader which is in essence constant. On a modern architecture, passing a uniform to a fragment shader is unproblematic and much less costly than using a varying. That's a hardware-dependent optimization you have to make.

A huge number of vertices may be easier to process than a highly detailed normal map if you want surface details on models. Or it may not be, depending on what shader you run on what GPU.

Depending on what shader you run for the terrain, a two-pass strategy filling a z-buffer may or may not gain you something - you pay extra for the first pass, but you may save expensive computations on the second. Whether that tradeoff makes sense, for instance for a tree shader, depends on the tree density you want to run - if trees obscure all of the terrain underneath, it makes sense not to compute it. If trees do not occupy much of the terrain, the first pass is just a waste of time.

The point being: analyzing the actual chokepoint in a rendering strategy is a non-trivial business. And optimizing the strategy to avoid it is even less trivial, because what works for one system may degrade performance on another.

So in the end, you end up with the guidelines we have: as many vertices as you need, not more. As large a texture resolution as needed, not more.

In the end, our development model is adapted to complex-systems dynamics. We don't try to anticipate anything but the obvious problems, because that would cost far too many resources. We see them occurring, we analyze them in situ, and we fix them once we have in-situ case studies.
Thorsten
 
Posts: 12490
Joined: Mon Nov 02, 2009 9:33 am

Re: fg3.0rc1 memory usage

Postby kuifje09 » Sat Feb 08, 2014 3:37 pm

Well, I apologize, I must make my excuses. I did not want to start a war because of the problems I encountered.

I just wanted to give some of my thoughts too.
I fully understand what others want to see when flying and what is possible on certain hardware. So not everybody will ever be satisfied.
That's life. I don't think that's what the discussion is about, or should be about.
What I do not like is that my fgfs crashes because of a memory issue. I am not sad about not being able to see things far away.
I would like to have a limit on loading tiles so my fgfs never runs out of memory.
If my CPU has to do more because of reloading tiles, that's fine. Keeping tiles in the cache and running out of memory is simply not good.
(I have been looking and made some tests/changes in the scenery sources (tilemgr and others), but I am obviously not good enough, so it was kind of a waste of time; I did not achieve the change I was hoping for.)
[In my case I think I had better go back to FG 2.x, which gave me no problems.]

Something else to think about: putting all the programs (fgfs, atlas/livemap, fgcom, ...) into one program brings the maximum memory used even closer.
kuifje09
 
Posts: 596
Joined: Tue May 17, 2011 9:51 pm

Re: fg3.0rc1 memory usage

Postby F-JJTH » Sat Feb 08, 2014 3:49 pm

kuifje09 wrote on Sat Feb 08, 2014 3:37 pm: I would like to have a limit on loading tiles so my fgfs never runs out of memory.

You can already do that with the View > Adjust LOD Ranges dialog. For example, you can try:
Detailed = 800
Rough = 1500
Bare = 4000
(a restart of FG is necessary for the settings to take full effect)
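
(If you prefer to set these at startup instead of through the dialog, the same values can presumably be passed on the command line via the corresponding properties - the static-lod paths here are my assumption for this era of FG:)

    fgfs --prop:/sim/rendering/static-lod/detailed=800 \
         --prop:/sim/rendering/static-lod/rough=1500 \
         --prop:/sim/rendering/static-lod/bare=4000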

kuifje09 wrote on Sat Feb 08, 2014 3:37 pm: Something else to think about: putting all the programs (fgfs, atlas/livemap, fgcom, ...) into one program brings the maximum memory used even closer.

atlas/livemap is not included in the fgfs process.
fgcom is indeed included now, but do you have any idea how much memory fgcom requires?
terrasync is also included in the fgfs process...

I can assure you that neither terrasync nor fgcom is responsible for your out-of-memory issue...
If you are not convinced, run with --disable-terrasync and --disable-fgcom and see if all your problems are gone ;)


Regards,
Clément
Last edited by F-JJTH on Sat Feb 08, 2014 6:22 pm, edited 1 time in total.
F-JJTH
 
Posts: 696
Joined: Fri Sep 09, 2011 12:02 pm

Re: fg3.0rc1 memory usage

Postby kuifje09 » Sat Feb 08, 2014 4:33 pm

Hi F-JJTH, thanks for the advice. Going to try that.
Should be good for others too I guess.

On my previous systems I had to turn down the visibility, but that is just not enough now.

And ...
"I can assure you that nor terrasync nor fgcom are responsible of your out-of-memory issue..."
Sure, it was a kind of misplaced joke. they are just a few drops in the bucket. ( sorry, shouldn't have done )
kuifje09
 
Posts: 596
Joined: Tue May 17, 2011 9:51 pm

Re: fg3.0rc1 memory usage

Postby Thorsten » Sat Feb 08, 2014 5:18 pm

Well, I apologize, I must make my excuses. I did not want to start a war because of the problems I encountered.


No apology needed. This is, as far as I'm concerned, a rather informative and hopefully productive discussion.
Thorsten
 
Posts: 12490
Joined: Mon Nov 02, 2009 9:33 am

Re: fg3.0rc1 memory usage

Postby kuifje09 » Sat Feb 08, 2014 5:32 pm

Thanks, Clément, for your advice.

It should help others too. I was not aware of these settings.
Just flew my plan and landed at EDDM with these settings:

Visibility 25000, altitude 04000.
Scenery: Detailed 650, Rough 4500, Bare 10000.

Then the memory load just tips over 2 GB. (Not tested in multiplayer yet; maybe this evening.)
560192 * 4k blocks, and then it adds some and deletes some.
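
(For reference, assuming 4 KiB blocks: 560192 * 4096 bytes ≈ 2.29 * 10^9 bytes, i.e. about 2.1 GiB - consistent with "just tips over 2 GB".)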
kuifje09
 
Posts: 596
Joined: Tue May 17, 2011 9:51 pm

Re: fg3.0rc1 memory usage

Postby Hooray » Sat Feb 08, 2014 9:14 pm

Thorsten, you are kidding yourself this time - you are making this more complex than it is. Obviously, things are not just linear in complex setups like ALS/Rembrandt - but that's simply not the point: there's more to FG than just the rendering pipeline, no matter whether it's fixed or not - there's a ton of stuff you can learn about FG by running instrumentation tools, e.g. profilers or memory trackers.

I think we had the same debate when we were talking about GL-level profilers - I am not sure you're even familiar with these tools, which is why I am going to stop right here - maybe this is one of those threads where you'll need some time to reflect upon things, or where time is needed to prove others right - and maybe this time is measured in months or years and not just weeks, I don't know.
Please don't send support requests by PM, instead post your questions on the forum so that all users can contribute and benefit
Thanks & all the best,
Hooray
Help write next month's newsletter !
pui2canvas | MapStructure | Canvas Development | Programming resources
Hooray
 
Posts: 12707
Joined: Tue Mar 25, 2008 9:40 am
Pronouns: THOU

Re: fg3.0rc1 memory usage

Postby kuifje09 » Sat Feb 08, 2014 11:51 pm

Well, I think this discussion has been going on for a while...
(A bit off topic.)
It reminds me of what happened to Ubuntu and other distros too. They also tend to grow, and it is clear why. But it's very sad for those who cannot afford to buy a new PC every 2 or 3 years. They also started dropping support for older hardware while it is still useful; the OS won't run on it anymore, or there are no more graphics drivers... In contrast to Windows, which still has support for it... Depending on what you do, it can be useful or not. But okay, that's other stuff.

I just finished my night flight. It looked very good, but halfway back I opened the built-in atlas/map. And that was just too much. Everything froze.

Memory usage then was 610105 * 4k blocks. So I have to throttle down a little more, I think.
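
(Again assuming 4 KiB blocks: 610105 * 4096 bytes ≈ 2.5 GB - uncomfortably close to what a 32-bit process can address.)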
kuifje09
 
Posts: 596
Joined: Tue May 17, 2011 9:51 pm


