
Level of detail

Questions and discussion about enhancing and populating the FlightGear world.

Level of detail

Postby psadro_gm » Wed Jan 07, 2015 2:47 am

I have been looking at generating arbitrary tile sizes and simplifying geometry to support several levels of detail - to be loaded by Mathias' .SPT OSG file loader plugin.
To get a baseline, I was using terrasync scenery and starting at LIMA. I would set bare LOD to 120 km, zfar to 130 km, and visibility to 120 km. Then, using the UFO, climb straight up to 100,000 ft.

I would turn off random objects and buildings, and I customized simgear to ignore everything in the .STG file except airport and terrain .btg files ( no shared/static models ).
With current fg/sg/fgdata, this would take 12.5 GB of memory.

Conversations on IRC led me to believe that this was crazy - the number of triangles/vertices was way too low to use so much memory. So I decided to gut simgear of anything 'unnecessary' for converting .BTG to OSG arrays.
I found I could reduce the memory footprint by 60%!

Here's a screenshot 100,000 ft above LIMA with my custom simgear - ALS enabled, and view distance increased to 400 km. Remember, this is CORINE data, with OSM, over mountains - about the worst-case scenario.


This is pushing 11 million triangles of WS2.0, and takes just over 15 GB of memory.

I'll be adding more and more code back to see what is grabbing most of the memory in the next couple of days. Then I'll continue with the LOD work - which should get us even closer to the horizon.
8.50 airport parser, textured roads and streams...
psadro_gm
 
Posts: 751
Joined: Thu Aug 25, 2011 2:23 am
Location: Atlanta, GA USA
IRC name: psadro_*
Version: git
OS: Fedora 21

Re: Level of detail

Postby Johan G » Wed Jan 07, 2015 7:32 am

psadro_gm wrote in Wed Jan 07, 2015 2:47 am:I found I could reduce the memory footprint by 60%!

That is even more impressive than the screenshot. :D 8)

I hope you will find some low-hanging fruit waiting for optimization. :D
Low-level flying — It's all fun and games till someone loses an engine. (Paraphrased from a YouTube video)
Improving the Dassault Mirage F1 (Wiki, Forum, GitLab. Work in slow progress)
Johan G
Moderator
 
Posts: 5546
Joined: Fri Aug 06, 2010 5:33 pm
Location: Sweden
Callsign: SE-JG
IRC name: Johan_G
Version: 3.0.0
OS: Windows 7, 32 bit

Re: Level of detail

Postby Thorsten » Wed Jan 07, 2015 7:46 am

Cool!

It's definitely an improvement over my experiments with visibility range in CORINE scenery - I managed to squeeze about 230 km range into my computer without ocean to the rescue (over Iceland I could do more, because it'd run into cheap ocean everywhere eventually), so your code seems to be rather good already.

I've noticed two main stumbling blocks beyond memory, though. First, it took me forever to load the terrain from hard disk - 5 minutes or so until everything was ready. And that's too slow - I guess if you run 200 km visibility at 52,000 ft to get good visuals for the Concorde (say), that translates into a lot of tiles which need to be loaded and discarded every minute. So I suspect this will be a bottleneck, which is where precomputed LOD would be helpful.

And second of course, the vertex shader eventually chokes on sheer numbers. Being able to fit it into memory is unfortunately only the first step. So I'm very eager to see where the LOD work will lead.
Thorsten
 
Posts: 11337
Joined: Mon Nov 02, 2009 8:33 am

Re: Level of detail

Postby psadro_gm » Wed Jan 07, 2015 12:19 pm

Yup - the UFO can climb to 100,000 feet long before the tiles can be loaded. I had to wait quite some time before they loaded from disk. Waiting for them over my internet connection on the first try was worse. I let it download overnight, so I don't know how long it took. ( my connection is ~8 Mbps )

I haven't tried yet, but the database pager does support multiple threads - I basically peg 2 cores doing this: one for the fg main loop, and one for the dbpager. I'll try increasing the number of dbpager threads to see if we are I/O bound or CPU bound. I imagine it's a combination of both - the loader does some math on the .btg before passing it to OSG ( which I could try moving to terragear to make this unnecessary ).
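
For reference, here's a minimal sketch of how the OSG database pager thread count can be bumped generically - this is plain OSG API, not a claim about how FlightGear currently wires it up, and the numbers are just examples:

#include <osg/DisplaySettings>

// Hints must be set before the viewer (and thus the DatabasePager) is created.
// OSG also honours the OSG_NUM_DATABASE_THREADS environment variable.
void useMoreDatabasePagerThreads()
{
    osg::DisplaySettings::instance()->setNumOfDatabaseThreadsHint(4);     // local file loading
    osg::DisplaySettings::instance()->setNumOfHttpDatabaseThreadsHint(2); // remote/http loading
}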

As for rendering, LOD should help here. I've managed to simplify .BTGs with Lindstrom-Turk edge collapse in CGAL to ~20% of their original size, and the results look pretty good. Much better than the point removal strategy OSG::simplifier uses. Of course, the algorithm takes more time, so it wouldn't be a run time thing like the OSG simplifier.
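
For anyone curious, this is roughly how the CGAL Surface_mesh_simplification package is driven for Lindstrom-Turk edge collapse - a minimal sketch assuming the tile mesh has already been converted to a CGAL surface mesh; exact header and predicate names vary a bit between CGAL releases, and none of this is the actual terragear code:

#include <CGAL/Simple_cartesian.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Surface_mesh_simplification/edge_collapse.h>
#include <CGAL/Surface_mesh_simplification/Policies/Edge_collapse/Count_ratio_stop_predicate.h>
#include <CGAL/Surface_mesh_simplification/Policies/Edge_collapse/LindstromTurk_cost.h>
#include <CGAL/Surface_mesh_simplification/Policies/Edge_collapse/LindstromTurk_placement.h>

typedef CGAL::Simple_cartesian<double>      Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3> Mesh;
namespace SMS = CGAL::Surface_mesh_simplification;

// Collapse edges until only 'ratio' of the original edges remain (e.g. 0.2),
// using the Lindstrom-Turk cost and placement strategies.
void simplify_tile(Mesh& mesh, double ratio)
{
    SMS::Count_ratio_stop_predicate<Mesh> stop(ratio);
    SMS::edge_collapse(mesh, stop,
        CGAL::parameters::get_cost(SMS::LindstromTurk_cost<Mesh>())
                         .get_placement(SMS::LindstromTurk_placement<Mesh>()));
}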

There are a few things I need to finish to make the simplifier even better. I have the algorithm configured to preserve the mesh shape - so tiles remain square. This also leaves airport holes untouched, so the simplification in tiles with airport holes stops long before it otherwise could. I am modifying genapt to produce airports as landclass ( grass and asphalt ), so the original .BTG LOD I use for simplification should be able to be simplified much further.

Something like this (see the sketch after the list):

LOD levels:
Highest: current WS2.0 settings ( airport holes and OSM )
-1: current WS2.0 settings ( airports as landclass, no OSM vector data )
-2: simplified - leave 25% of original nodes
-3: simplified - leave 25% of previous level
-4: textured Blue Marble mesh
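
As a sketch of what that could look like on the scene graph side, here is how such discrete levels might be wired into an osg::LOD node - the range values are made up and the node names are placeholders, not anything that exists in SimGear:

#include <osg/LOD>
#include <osg/ref_ptr>

osg::ref_ptr<osg::LOD> buildTileLOD(osg::Node* full,       // current WS2.0 tile (holes + OSM)
                                    osg::Node* noVector,   // airports as landclass, no OSM vectors
                                    osg::Node* simplified, // ~25% of original nodes
                                    osg::Node* coarse)     // further simplified / Blue Marble mesh
{
    osg::ref_ptr<osg::LOD> lod = new osg::LOD;
    lod->setRangeMode(osg::LOD::DISTANCE_FROM_EYE_POINT);
    lod->addChild(full,            0.0f,  20000.0f);  // up to 20 km
    lod->addChild(noVector,    20000.0f,  60000.0f);  // 20-60 km
    lod->addChild(simplified,  60000.0f, 150000.0f);  // 60-150 km
    lod->addChild(coarse,     150000.0f, 400000.0f);  // out to the far horizon
    return lod;
}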

Mathias' .SPT will be used for the tile sizes eventually. For my experiments, I'm just using the normal tile size for all levels of detail ( which, at high altitude, results in tons of tiles - which means tons of individual OSG nodes... ). Using .SPT will shrink the scene graph considerably.
8.50 airport parser, textured roads and streams...
psadro_gm
 
Posts: 751
Joined: Thu Aug 25, 2011 2:23 am
Location: Atlanta, GA USA
IRC name: psadro_*
Version: git
OS: Fedora 21

Re: Level of detail

Postby Hooray » Wed Jan 07, 2015 12:41 pm

Indeed, that's very encouraging - it is great to see someone looking into this, beyond just the usual statement "it's due to WS 2.0, please revert to 1.0".
Overall, our main problem is that we don't have any built-in stats that tell us where resources are spent or where complexity is added.
While the OSG on-screen stats can be extended accordingly, there's a lot of SG/FG level stuff happening that isn't currently exposed, no matter if it's scene complexity or degree of RAM utilization.

I customized simgear to ignore everything in the .STG file except airport and terrain .btg files ( no shared/static models )
I decided to gut simgear of anything 'unnecessary' for converting .BTG to OSG arrays.
I found I could reduce the memory footprint by 60%!


For the sake of better regression testing in the future, it would be great if you could share your simgear changes, to see if we can either make those build-time options, or preferably even startup/run-time options to help with troubleshooting SG level issues that are otherwise not understood easily.

Back when we were running into issues related to effects, particles and random buildings, those were also only understood once we were able to fully disable those features to exclude them from the pool of potential culprits. So it is much easier to draw correct conclusions if we can remove "black-box" behavior, or at least keep it optionally disabled when troubleshooting certain issues.

Again, thank you for doing this kind of unglamorous "behind-the-scenes" work - it's long been overdue, and we used to have dozens of discussions where people would disagree quite strongly with each other because we were lacking any "hard data".
Please don't send support requests by PM, instead post your questions on the forum so that all users can contribute and benefit
Thanks & all the best,
Hooray
Help write next month's newsletter !
pui2canvas | MapStructure | Canvas Development | Programming resources
Hooray
 
Posts: 11441
Joined: Tue Mar 25, 2008 8:40 am

Re: Level of detail

Postby psadro_gm » Wed Jan 07, 2015 3:25 pm

According to the OSG 3.3 release notes:

* Threading safety and performance improvements to the DatabasePager

http://www.openscenegraph.org/index.php ... r-releases
8.50 airport parser, textured roads and streams...
psadro_gm
 
Posts: 751
Joined: Thu Aug 25, 2011 2:23 am
Location: Atlanta, GA USA
IRC name: psadro_*
Version: git
OS: Fedora 21

Re: Level of detail

Postby Hooray » Wed Jan 07, 2015 4:27 pm

That sounds good - but frankly, based on a few experiments I did a while back, most recent FlightGear threading issues seem to be caused by fg/sg code and not by OSG - osgviewer (and even fgviewer) would work properly for me in most threading modes (osgviewer would even support switching modes at run-time), while fgfs would often segfault immediately/quickly in certain modes, which is also supported by Torsten's observations: http://wiki.flightgear.org/Howto:Activa ... PU_support

So I think it is very worthwhile to identify features/building blocks that can be made more customizable, or even optional, for better troubleshooting - while having corresponding startup/run-time options (think properties/listeners) would obviously be great, I wouldn't mind having to rebuild SG in a custom mode to get better diagnostics.

Unfortunately, SG/FG level functionality is relatively "opaque" (for lack of a better term) when it comes to OSG integration (which makes sense, because OSG support ended up being added roughly a decade after most of the GL code), so understanding rendering and scenery/terrain related issues is often very complex and requires tons of expertise and experience in a variety of areas - which you're backing up with your current findings, given that you're basically the primary TG developer these days and have obviously been doing related TG/SG coding for years :D

The 11/2014 segfault discussed on the devel list also had to be understood first by adding tracking/logging to some SG code:

http://sourceforge.net/p/flightgear/mai ... /33076275/
What I think I can see is that the RandomObjectCallback::readNode (simgear/scene/tgdb/obj.cxx line 945) is being called multiple times per tile, even when the tile is already loaded.

The patch below makes a log when this occurs, and I see it happening almost continually. I don't know whether it might be because the OSG Pager is paging the scenery out, but not completely, or whether my use of pagedLOD ReadFileCallbacks is simply incorrect.
Any help from OSG gurus would be appreciated!


Back in the early days of random buildings, Stuart also had to use the SG_LOG() macro for dumping stats to the console - and he also contemplated sub-classing for getting /some/ run-time stats:

Subject: Random Buildings
stuart wrote:That doesn't allow us to start generating buildings again later as scenery is unloaded. One might be able to create a sub-class of osg::Group that tracks the number of buildings, and then has a destructor to update the building count.

-Stuart


With one of the main conclusions back then being this:
Subject: Random Buildings
stuart wrote:I think it also opens up a larger question of how we do memory management in FG, and whether we should be doing things such as more aggressively freeing up terrain tiles. At one level, removing entire terrain tiles from memory earlier if memory occupancy becomes a concern would be a better management strategy than just stopping generating new buildings.

-Stuart



There are also a few options to expose these stats to the OSG StatsHandler:

Subject: New Aircraft: the Extra500
Hooray wrote:The built-in osgviewer stats can be extended with custom stats, that works by subclassing osg::StatsHandler, this is already done in $FG_SRC/Viewer/FGEventHandler.?xx
The class can be extended to add your own stats via osgViewer::StatsHandler::addUserStatsLine()

You can even register totally custom stats via osg::Stats

http://www.mail-archive.com/flightgear- ... 37823.html
Another goal is to add more node bits (and a GUI dialog to control them) so
various categories of objects can be disabled during the update pass. This will
mean the direct hit of, say, AI models vs particles vs random trees can be
measured. Of course it won't account for resources (memory, textures) burned by
such things, but would still help different people identify slowness on their
setups.
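
To make the mechanism from the quotes above a bit more concrete, here is a rough sketch of registering a custom user stat with osgViewer::StatsHandler and feeding it per frame - the stat name "scenery-ram-mb" and the surrounding function are purely illustrative, not existing FG/SG code:

#include <osgViewer/Viewer>
#include <osgViewer/ViewerEventHandlers>

void addSceneryStat(osgViewer::Viewer& viewer, osgViewer::StatsHandler* stats)
{
    // Extra line in the on-screen stats display, driven by the named attribute below.
    stats->addUserStatsLine("Scenery RAM (MB)",
                            osg::Vec4(1,1,0,1), osg::Vec4(1,1,0,0.5),
                            "scenery-ram-mb",
                            1.0, true, false, "", "", 32768.0);

    // Somewhere in the per-frame update, record the measured value for this frame:
    unsigned int frame = viewer.getFrameStamp()->getFrameNumber();
    viewer.getViewerStats()->setAttribute(frame, "scenery-ram-mb", 0.0 /* measured value */);
}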


So it would make sense to look at what is needed from an infrastructure standpoint to allow people to identify issues much earlier, even if that just means that "metric FOO is much worse once I enable feature BAR", which would already go a long way in understanding things, beyond just addressing individual bugs - only to see those pop up a few years later again. Had we added such metrics/stats 5 years ago, people would have been able to draw certain conclusions much earlier - no matter if it's the degree of global RAM utilization, effects/listeners leaking, random objects being instanced too often - or random buildings gobbling up RAM like crazy.
Please don't send support requests by PM, instead post your questions on the forum so that all users can contribute and benefit
Thanks & all the best,
Hooray
Help write next month's newsletter !
pui2canvas | MapStructure | Canvas Development | Programming resources
Hooray
 
Posts: 11441
Joined: Tue Mar 25, 2008 8:40 am

Re: Level of detail

Postby Thorsten » Wed Jan 07, 2015 4:32 pm

it is great to see someone looking into this, beyond just the usual statement "it's due to WS 2.0, please revert to 1.0".


That somehow gives the impression that developers have done that as a strategic choice, i.e. implying that since all people with massive 64-bit rigs are fine, we don't need to do anything core- or scenery-side.

That's in fact not true - LOD systems have been discussed for a long time on the mailing list as an important strategic goal - not only in response to memory issues with scenery 2.0 but even earlier. And I've known about concrete plans for an implementation for quite some time.

The advice to revert to scenery 1.0 has been given to users who want to run FG now, and for whom a LOD system appearing a year from now isn't going to be a big help. And prior to the scenery 2.0 release, people (including myself) had for years asked the scenery team to release something, even imperfect, even without performance optimization. So I personally feel we shouldn't be complaining that the existing 2.0 world scenery isn't optimized everywhere, and given that, I think reverting in case of problems is a reasonable solution.

Overall, our main problem is that we don't have any built-in stats that tell us where resources are spent or where complexity is added.


Sounds like we have no clue where FG burns e.g. memory. That's a bit of a slap in the face of Rebecca, who has obtained very good memory usage test data, which I have since been able to verify. On a related issue, I'm having a technical discussion with Stuart about whether we should pass some parameters to trees per vertex (faster on some graphics cards) or overall (more memory friendly) - so I think we're quite aware of how we spend resources.

So I disagree that this is our main problem - there's I think a broad consensus that a LOD system is needed (at least I haven't seen anyone argue against it). It's just hard work to make it.
Thorsten
 
Posts: 11337
Joined: Mon Nov 02, 2009 8:33 am

Re: Level of detail

Postby Hooray » Wed Jan 07, 2015 4:57 pm

Certain issues recently identified (some of which were fixed, others not) were completely unrelated to the scenery engine in general, despite those components being used by the scenery/tile manager from an API standpoint (completely unrelated to TG or WS2.0).

So yes, LOD will fix such scenery-related issues - but problems in underlying systems, apart from the sheer amount of data to process (e.g. a buggy effects system, leaking listeners), are only magnified by WS2.0 scenery, not triggered/caused by it.

Besides, I don't think we need to have the same discussion we usually have at this point - time will prove one of us wrong (and I would surely love to be proven wrong here, but what's been said 5 years ago seems to be spot-on, and I happen to agree with Stuart's assessment back then).

So there's really no need to drag others into this argument - this is not about being right or about having to agree with each other, especially not by a remarkably dramatic choice of words (...)

It's a different mentality I guess, some people need to actually experience a house on fire, while others prefer getting smoke detectors and fire extinguishers much earlier - hoping, that they'll never actually have to use these tools.
What's been happening on the FlightGear side for the past 5 years is that FlightGear "burnt" in quite a few places/components, and we had a few volunteer "fire fighters" (=experienced core developers/external contributors) willing to patch up things later on - without anybody looking at FlightGear from an architecture/infrastructure standpoint, to see how these things could go unnoticed for months (or even years).

Having actual LOD support will be all great and dandy, but as can be seen by the troubleshooting approach that psadro_gm (and others quoted above, like Stuart and Torsten) had to take, FlightGear is lacking the infrastructure to "detect smoke" until there actually is a real problem (think segfaults, race conditions, memory leaks - or features gobbling up resources massively).

In a professional/commercial setting this kind of mentality would not be viable, i.e. people would establish/fix the infrastructure first and then look at fixing symptoms.

PS: I'm running 64 bits, too - so it's not that I am overly affected by WS2.0 being a memory hog :D And there's really no need for us to agree here - let's not hijack this great thread, and just accept that we agree to disagree (which will save us a ton of time, too!) THANK YOU :mrgreen:
Please don't send support requests by PM, instead post your questions on the forum so that all users can contribute and benefit
Thanks & all the best,
Hooray
Help write next month's newsletter !
pui2canvas | MapStructure | Canvas Development | Programming resources
Hooray
 
Posts: 11441
Joined: Tue Mar 25, 2008 8:40 am

Re: Level of detail

Postby Bjoern » Wed Jan 07, 2015 5:01 pm

[Astonishment intensifies]
Bjoern
 
Posts: 434
Joined: Fri Jan 06, 2012 10:00 pm
Location: TXL or so
Version: Next
OS: ArchLinux, Win 10

Re: Level of detail

Postby Thorsten » Wed Jan 07, 2015 5:21 pm

So there's really no need to drag others into this argument - this is not about being right or about having to agree with each other, especially not by a remarkably dramatic choice of words (...)


Well, no one asked you to use this thread to give the impression that the general idea of the devel team would be to ignore all issues because things are fine on heavy machinery.

And I'm not fine with you giving that impression to forum users, since it's not true.

Had we added such metrics/stats 5 years ago, people would have been able to draw certain conclusions much earlier - no matter if it's the degree of global RAM utilization, effects/listeners leaking, random objects being instanced too often - or random buildings gobbling up RAM like crazy.


* Random objects being instanced too often: I caught that the moment I first pulled the bug. I had it analyzed a day later and reported it to the mailing list. Hardly possible to draw the conclusion much earlier - perhaps a week or two...

* Random buildings gobbling up memory was caught by Stuart the moment they actually did so (i.e. in large urban sprawls). Once he tested that case, it was immediately apparent. No memory detector would have shown any 'smoke' in a situation where they have a reasonable memory footprint. The original implementation was intentionally designed to trade memory against better GPU performance.

* The degree of global RAM utilization isn't a secret; it is well known what uses what amount of RAM.

Hindsight is always 20/20 - it's a truism that if we had known what to look for beforehand, someone could have put in a warning mechanism.

It's a different mentality I guess, some people need to actually experience a house on fire, while others prefer getting smoke detectors and fire extinguishers much earlier - hoping, that they'll never actually have to use these tools.


The analogy is faulty. We're looking for smoke which can occur in 1000 different hues and smells, and we need a custom detector for each hue (it's sort of easy to look at framerate and memory occupancy - but to put in a device which counts listeners per frame or detects Nasal running at FDM rate is sort of a non-trivial thought...)
Thorsten
 
Posts: 11337
Joined: Mon Nov 02, 2009 8:33 am

Re: Level of detail

Postby Hooray » Wed Jan 07, 2015 5:39 pm

Thorsten, I invite you to continue this discussion via PM - for now, I am just responding to things relevant in the scope of this thread:

Thorsten wrote:the degree of global RAM utilization isn't a secret, it is well known what uses what amount of RAM

See, that is the whole problem - you think there's one number, while this very number is by far too coarse-grained (which should be obvious to you, especially given psadro_gm's comments):

Thorsten wrote:I think we're quite aware how we spend resources.

Please don't get this wrong, but psadro_gm stated pretty clearly that he IS NOT (yet) aware of where all the memory is going, which is exactly why he had to use the brute-force method of disabling lots of code in SimGear to gain the aforementioned 60% - which kinda proves where the underlying problem is (we don't know exactly where/why/when resources are spent):

Subject: Level of detail
psadro_gm wrote:I would turn off random objects and buildings, and I customized simgear to ignore everything in the .STG file except airport and terrain .btg files ( no shared/static models )

With current fg/sg/fgdata, this would take 12.5 GB of memory.
[...]
that this was crazy - the number of triangles/vertices was way too low to use so much memory. So I decided to gut simgear of anything 'unnecessary' for converting .BTG to OSG arrays.

I'll be adding more and more code back to see what is grabbing most of the memory in the next couple of days. Then I'll continue with the LOD work - which should get us even closer to the horizon.


I don't think this is too difficult to interpret/understand - the underlying problem is unknown, right?
So I am not at all "slapping" anybody in the face by rehashing the underlying problem in layman's terms.
Or by stating that certain infrastructure would have made this approach unnecessary, and that others could have reached these findings much more easily that way.

we need a custom detector for each hue

Well, let's move away from analogies back to C++ code, where you can - for instance - tell malloc/new to use placement allocation, or even register a completely custom allocator (as in having one per subsystem).

So you could track allocations per subsystem. Equally, you can traverse the scene graph to see how much is added per feature (which is how the OSG StatsHandler works - IIRC it's using a simple visitor class to get the stats out of the tree). So the point was not that we needed to know "what to look for" - but that we only need a handful of infrastructure changes to allow most features to be optionally tracked, i.e. beyond just process-level RAM utilization (which you're so happily referring to here...)
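
As an illustration of the visitor idea, here is a minimal sketch of a scene graph visitor that tallies drawables and vertices, so complexity could be attributed per subtree/feature - illustrative code, not something that exists in SimGear today:

#include <osg/NodeVisitor>
#include <osg/Geode>
#include <osg/Geometry>

// Walks a (sub)graph and counts drawables and vertices.
class GeometryStatsVisitor : public osg::NodeVisitor
{
public:
    GeometryStatsVisitor()
        : osg::NodeVisitor(osg::NodeVisitor::TRAVERSE_ALL_CHILDREN),
          _drawables(0), _vertices(0) {}

    virtual void apply(osg::Geode& geode)
    {
        for (unsigned int i = 0; i < geode.getNumDrawables(); ++i)
        {
            ++_drawables;
            osg::Geometry* geom = dynamic_cast<osg::Geometry*>(geode.getDrawable(i));
            if (geom && geom->getVertexArray())
                _vertices += geom->getVertexArray()->getNumElements();
        }
        traverse(geode);
    }

    unsigned int drawables() const { return _drawables; }
    unsigned int vertices()  const { return _vertices; }

private:
    unsigned int _drawables;
    unsigned int _vertices;
};

// Usage: featureRoot->accept(visitor); then log visitor.vertices() per feature/subtree.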

to put in a device which counts listeners per frame or detects Nasal running at FDM rate is sort of a non-trivial thought

Those are examples you've come up with, so I'm just responding now: both are pretty easy to do (without being necessary though) - besides, you would not want to count "listeners per frame", but instead identical listeners across a configurable time-span (a listener being an SGEvent with an associated C/C++/Nasal callback registered). Equally, the main issue with Nasal code isn't that it's running per FDM iteration - it is that callbacks are often invoked too frequently by the main loop (non-FDM coupled), i.e. due to registered timers/listeners invoking the same callbacks over and over again, which is one of the most common coding mistakes people tend to make - made even worse by reset/re-init.

So there's really just a handful of metrics to enable people to quantify run-time behavior with selected features - even beyond just RAM/listeners etc, which is exactly how the OSG StatsHandler works internally. It's just never been integrated with most of SG. But it's definitely possible, as can be seen by Zakalawe's draw-masks - which would be straightforward to extend in order to track scene graph complexity per feature. Again, this never having been done is the sole reason for psadro_gm having to manually #ifdef sections of SimGear code and rebuild a custom SG version, and re-add stuff one by one to track memory utilization per re-added code snippet.

If that isn't obvious to you, I can explain it to you via PM - but I'd like to re-iterate my suggestion not to hijack this discussion (I'll ask psadro_gm if he wants me to split our responses in order not to distract from the real discussion here).
Please don't send support requests by PM, instead post your questions on the forum so that all users can contribute and benefit
Thanks & all the best,
Hooray
Help write next month's newsletter !
pui2canvas | MapStructure | Canvas Development | Programming resources
Hooray
 
Posts: 11441
Joined: Tue Mar 25, 2008 8:40 am

Re: Level of detail

Postby Thorsten » Wed Jan 07, 2015 6:11 pm

If that isn't obvious to you, I can explain it to you via PM - but I'd like to re-iterate my suggestion not to hijack this discussion (I'll ask psadro_gm if he wants me to split our responses in order not to distract from the real discussion here).


If you feel this should be handled via PM, send a PM and don't use the thread; don't ask me to do it, because I don't feel it should be a PM.

If you know how to track bugs and it's so easy, track a few - don't just tell everyone how to do it properly while watching and commenting on the inadequacy of the efforts of others. I know what information I need to track bugs in the systems I maintain, and I make sure I get it - and most of your items aren't on my list. I would assume that most others who track bugs also have an idea of what information they need, and get it. I would assume that, like myself, other developers have their own ideas of what a reasonable test of a subsystem is.

Most of the bugs I get to see are complex system dynamics. You have one eminently reasonable feature tested for some class of scenario. You have a second eminently reasonable feature tested for some different class of scenario. But in some other use case, they don't play nice together. That is fairly characteristic of complex systems, it's called emergent behavior, and its defining characteristic is that you can't anticipate it. But of course in hindsight you can claim that one should have...

And most resource consumption data is pointless without a reference. Say buildings take a GB - that only means something if I know an alternative approach which uses a quarter of that. You can only optimize things if you have alternatives and know their scaling. And even that may be pointless, because you also need to know how they play together with other systems - emergent behavior again.


I don't think this is too difficult to interpret/understand - the underlying problem is unknown, right ?


You can always subdivide it. We know it on the level of textures, terrain mesh, objects and instanced stuff. You can now ask what trees vs. buildings take as a function of forest in the scene and tree density. Then ask whether we could reduce the memory footprint for trees (I understand we could, they'd just get slower). Then ask what relative amount different types of trees take, or what the texture vs. the quad data vs. the vertex attributes is... Similarly, you can subdivide the mesh memory consumption (which is what psadro_gm is doing, I gather). Point being, once we know it's terrain, we know what the average user can do and what he can't do.
Thorsten
 
Posts: 11337
Joined: Mon Nov 02, 2009 8:33 am

Re: Level of detail

Postby Hooray » Wed Jan 07, 2015 6:26 pm

For the sake of this thread, I am continuing this now in private - hopefully in a more constructive, and less argumentative, fashion (what you said above is/was obvious to me, including even the usual introductory "banter"). And I also happened to be around when we were talking about "emergence" a few months ago - thanks anyway for the reminder.

I guess it helps to put things into perspective and consider once in a while that our understanding of an issue may be colored by our own backgrounds, or at least be "incomplete" at times - people refusing to use certain tools and techniques can obviously not apply certain information. Whoever is doing the troubleshooting/debugging should decide whether or not my contributions in this thread are helpful (which is why I am also going to restrict any PMs accordingly).

So in the meantime I am very willing to consider my own understanding "incomplete" (despite it being supported by others), while also being willing to revisit this debate in a few years from now - to hopefully see that our mutual understanding of the whole issue is going to be more complete (even regardless of scenery or LOD systems).

PS: As you could have seen by clicking on a few links, I didn't just provide back traces and logs, but even patches to help draw the same conclusions that we're arriving at now - even as far as 3+ years ago, so no need to argue like this...
Please don't send support requests by PM, instead post your questions on the forum so that all users can contribute and benefit
Thanks & all the best,
Hooray
Help write next month's newsletter !
pui2canvas | MapStructure | Canvas Development | Programming resources
Hooray
 
Posts: 11441
Joined: Tue Mar 25, 2008 8:40 am

Re: Level of detail

Postby psadro_gm » Thu Jan 08, 2015 12:29 pm

Well, I understand the issue now. And a solution will be complicated (of course).

First: The memory is not leaking. We are saving the tileGeometryBin and matcache on purpose in case the tile comes within LOD range for trees, random lights, and random buildings. The building generator, light generator, and tree generator need to know the tile geometry for placement. If we increase the tile count, and you never get within range of most of these tiles, then that memory is wasted. Note: even with random vegetation, random buildings, and the simplifier turned off, this is the case. We always add the random lights.

Possible solutions (none of them very good):

1) Generate the objects at tile load, then drop the geometry - I don't know which is bigger, the tile geometry or the building geometry. This makes sense for lights and trees, though - we just need to store an SGVec3 and a material per object, which is much smaller than the tile geometry. It also makes tile load time longer, though.

2) Dump the data and load it again when the tile is in near LOD range - this is what the simplifier does if simplifyNear != simplifyFar. I found this a bit strange, since we already have the unsimplified terrain in the tileGeometryBin...

3) New intermediate format - probably the best, but most difficult. If we can determine the minimal data needed to seed the random buildings, then we can store that information. Something like #1 above, but we need to strip out all but the non-derived data, keeping whatever is needed from tileGeometryBin without actually keeping the bin around. I'll need to study how long these calculations take per tile, as the random building code is pretty complex.

Stuart has mentioned some new ideas for this code, and I think we'll need to discuss how best to implement a deferred loading scheme that doesn't depend on keeping the tile geometry around. Perhaps we can use the OSG scene graph node data itself, rather than the intermediate tileGeometryBin structure.
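
To illustrate what options 1 and 3 could boil down to, here's a rough sketch of a compact per-object "seed" record that would replace keeping the whole tileGeometryBin around - field and type names are hypothetical, not existing SimGear structures:

#include <simgear/math/SGMath.hxx>
#include <string>
#include <vector>

// Minimal data the object generators would need for deferred placement.
struct ObjectSeed
{
    SGVec3d     position;   // placement point on the tile surface
    std::string material;   // landclass/material name driving the generator
};

struct TileSeedData
{
    std::vector<ObjectSeed> lights;
    std::vector<ObjectSeed> trees;
    std::vector<ObjectSeed> buildings;
};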
8.50 airport parser, textured roads and streams...
psadro_gm
 
Posts: 751
Joined: Thu Aug 25, 2011 2:23 am
Location: Atlanta, GA USA
IRC name: psadro_*
Version: git
OS: Fedora 21
