
Real world scenery

Questions and discussion about enhancing and populating the FlightGear world.

Re: Real world scenery

Postby stuart » Mon May 25, 2020 10:27 pm

Indeed. And it will take a lot longer to build the tools and run them than to download from merspieler's generation. Though you'll learn a lot in the process!
G-MWLX
stuart
Moderator
 
Posts: 1555
Joined: Wed Nov 29, 2006 9:56 am
Location: Edinburgh
Callsign: G-MWLX

Re: Real world scenery

Postby Ernest57777 » Tue May 26, 2020 7:24 am

I have installed merspieler's scenery, but there is a little problem. FlightGear loads for around 20 minutes and never finishes. In places where there is normal scenery it loads as usual.
Ernest57777
 
Posts: 23
Joined: Fri May 08, 2020 5:13 pm

Re: Real world scenery

Postby wkitty42 » Tue May 26, 2020 3:59 pm

what are your machine specs? and your GPU...
"You get more air close to the ground," said Angalo. "I read that in a book. You get lots of air low down, and not much when you go up."
"Why not?" said Gurder.
"Dunno. It's frightened of heights, I guess."
wkitty42
 
Posts: 6491
Joined: Fri Feb 20, 2015 3:46 pm
Location: central NC, USA
Callsign: wk42
Version: git next
OS: Kubuntu 14.04.5

Re: Real world scenery

Postby Ernest57777 » Tue May 26, 2020 8:33 pm

Intel Xeon W3520
Nvidia Quadro 600 1GB, but I'm going to upgrade the GPU to something like a GTX 760
8GB DDR3 RAM
120GB SSD, 500GB HDD
500W 80 Plus Bronze PSU
Windows 10 Pro

Normally it works well on almost max settings at around 25-30 FPS, but with merspieler's scenery it won't load.
Ernest57777
 
Posts: 23
Joined: Fri May 08, 2020 5:13 pm

Re: Real world scenery

Postby wkitty42 » Tue May 26, 2020 10:42 pm

sounds like you need more memory... either in the GPU or possibly in the machine... i'd be thinking about that GPU update sooner rather than later...

i have a gt730 w/2Gig and it is weak but workable... i'm looking to move up to a 1080 but it depends on prices... i'm pushing it with an 8core 4Ghz AMD FX8350 with 16Gig RAM but a lot of that is taken by VMs running on the same machine... i need to up my RAM to 32Gig RAM to be more comfortable... especially since i'm seeing FG using 6-8Gig and more at times... that's why i also generally run with only 75nm visibility but i have tested 370nm in the past... 370nm is how far away the horizon is when you're at 35000 feet altitude...
"You get more air close to the ground," said Angalo. "I read that in a book. You get lots of air low down, and not much when you go up."
"Why not?" said Gurder.
"Dunno. It's frightened of heights, I guess."
wkitty42
 
Posts: 6491
Joined: Fri Feb 20, 2015 3:46 pm
Location: central NC, USA
Callsign: wk42
Version: git next
OS: Kubuntu 14.04.5

Re: Real world scenery

Postby f-ojac » Wed May 27, 2020 7:08 am

Are you sure you installed the scenery the proper way? This may explain why FG doesn't load. Also, if available, check the console for errors. I have a less powerful setup than you do, and no problem with osm2city.
Hosting terrasync, World Scenery, TGWeb on my own private server. Click here to donate and help to make the service last.
f-ojac
 
Posts: 1290
Joined: Fri Mar 07, 2008 9:50 am
Version: GIT
OS: GNU/Linux

Re: Real world scenery

Postby vnts » Thu May 28, 2020 9:57 pm

@Ernest: I've sometimes got stuck on the scenery loading screen recently too - that's with Terrasync turned off (I haven't been able to reproduce it yet, and I didn't pay attention to the cause as I was testing other things). Closing FG and trying again fixed it. You can always try another airport.

If you have performance issues on your system, try the faster/newer Iceland OSM2City with FG 2020.1 from here at BIKF: http://wiki.flightgear.org/Areas_popula ... ty_scenery. You need to select the faster buildings by renaming folders as mentioned here: link.

Decreasing menu > view > adjust lods > LoD:Rough should reduce RAM/GPU VRAM usage and increase FPS. Reducing LoD bare can help too.
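
As a very rough illustration of why that helps (made-up numbers, nothing measured in FG): the terrain kept loaded at a given detail level scales roughly with the area of a disc around the viewer, so halving a LoD range roughly quarters it.

Code: Select all
// Back-of-the-envelope only: terrain kept at a given LoD level roughly scales
// with the area of a disc around the viewer, i.e. with range squared.
// The 9000 m "default" is a placeholder, not an actual FG setting.
#include <cstdio>

int main() {
    const double pi = 3.14159265358979;
    const double defaultRangeM = 9000.0;  // hypothetical LoD rough range
    const double reducedRangeM = 4500.0;  // the same range halved
    const double areaDefault = pi * defaultRangeM * defaultRangeM;
    const double areaReduced = pi * reducedRangeM * reducedRangeM;
    std::printf("Terrain area (and, very roughly, memory) kept at 'rough' detail: %.0f%% of before\n",
                100.0 * areaReduced / areaDefault);  // prints 25%
    return 0;
}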

wkitty42 wrote in Tue May 26, 2020 10:42 pm:i'm looking to move up to a 1080 but it depends on prices...

With regard to pricing & useful FPS, and NVIDIA's pricing of late:

Increasing GPU power beyond a certain limit won't help much until current CPU bottlenecks are fixed. Things like AI traffic also increase CPU time. Beyond that point, the useful limit becomes whatever feels smooth, or the monitor refresh rate. If frame spacing issues are big enough to make capping at the monitor rate feel bad, the practical limit becomes the highest comfortable non-CPU-bound frame rate above the monitor refresh rate with vsync off.

A reasonably modern GPU can run FG with ALS maxed due to the high level of GPU-side optimisation. Beyond that, the GPU needed depends on trees, large LoD ranges, overlays, transparency AA (which makes overlays sharper but also gets applied to unnecessary things like trees, hurting performance), normal AA, and maybe (current & future) OSM2City. Another thing that reduces performance when fragment-limited is simply drawing lots of pixels at the same /limited/ graphics quality: lots of monitors (which should be run at lower resolution if they are far away), or monitors with lots of pixels over a small area (which should be close up for the eye to make out the detail).
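
As a rough illustration of the pixel-count side (example resolutions only, nothing FG-specific): when fragment-limited, shading work per frame scales roughly with the number of pixels drawn.

Code: Select all
// Rough fragment-load comparison: per-frame shading work scales roughly with
// the number of pixels drawn when fragment-limited. Resolutions are just
// common examples.
#include <cstdio>

int main() {
    struct Setup { const char* name; long long w, h, monitors; };
    Setup setups[] = {
        {"single 1080p", 1920, 1080, 1},
        {"triple 1080p", 1920, 1080, 3},
        {"single 4K",    3840, 2160, 1},
    };
    const double base = 1920.0 * 1080.0;  // reference pixel count
    for (const Setup& s : setups) {
        double pixels = double(s.w) * s.h * s.monitors;
        std::printf("%-14s %8.1f Mpixels  (~%.1fx the fragment work of one 1080p screen)\n",
                    s.name, pixels / 1e6, pixels / base);
    }
    return 0;
}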

With regard to NVIDIA's pricing:
- Previous-generation GPU number + 10 = next-generation GPU number, e.g. a 970 ~= 1060. Site with GPUs sortable by rough benchmark scores: link. Some marketing tactic caused NVIDIA to split the latest generation into 16xx and 20xx for RTX.
- There's a good price/performance point around the mid-range xx50 Ti and xx60. GPUs used to be cheaper around the time you last upgraded, but there wasn't strong competition from AMD until recently. So from around the 900 series NVIDIA kept the old generation around and just increased prices for the new generation, IIRC.
- Top-end GPUs are many times more expensive than their performance justifies at release, and are released first to take advantage of early buyers.

This means that people with long upgrade cycles who buy '80-class GPUs to 'future-proof' are better off buying a mid-range GPU several times over the same upgrade cycle. The end result is that they have more performance on average over the cycle :D - also, 3D applications are aimed at the mid-range, so the GPU power available steadily increases with each upgrade and keeps up with the rate at which applications get more resource hungry.

Thinking about it, there should logically be an optimal GPU price point and buying frequency depending on how much someone typically spends over an upgrade cycle and how long it is - the maximum being an xx80 every year, and other options being an xxY0 every n years. I haven't done the math :) If the old GPU is sold on eBay then that helps. Of course, with AMD offering competition the situation could change, but it's often the case with big companies that a duopoly merely encourages both to keep prices high :? . Maybe there should be a guide for this type of thing in the wiki.
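
As a sketch of what that math could look like (all prices and relative-performance numbers below are made up purely for illustration; plug in real prices and benchmark scores to get a real answer):

Code: Select all
// Toy calculator for comparing GPU-buying strategies over one upgrade cycle.
// All prices and relative-performance numbers are hypothetical.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Purchase { double yearBought; double price; double relPerf; };

// Time-weighted average performance and total spend over 'cycleYears'.
static void evaluate(const char* name, const std::vector<Purchase>& plan, double cycleYears) {
    double cost = 0.0, perfYears = 0.0;
    for (std::size_t i = 0; i < plan.size(); ++i) {
        const double until = (i + 1 < plan.size()) ? plan[i + 1].yearBought : cycleYears;
        cost += plan[i].price;
        perfYears += plan[i].relPerf * (until - plan[i].yearBought);
    }
    std::printf("%-18s total cost %6.0f, average relative performance %.2f\n",
                name, cost, perfYears / cycleYears);
}

int main() {
    const double cycleYears = 6.0;
    // Hypothetical: an '80-class card at 1.6x a year-0 mid-range card's speed,
    // vs. a new mid-range card every two years as each generation improves.
    evaluate("one high-end card", { {0, 700, 1.6} }, cycleYears);
    evaluate("mid-range, 3 buys", { {0, 280, 1.0}, {2, 280, 1.7}, {4, 280, 2.6} }, cycleYears);
    return 0;
}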

In FG's case the GPU side is very optimised. Performance actually stands to improve for the same visual quality with WS 3.0, work on moving things out of the rendering thread, etc. New features will take up some performance in the next few years: compositor features & maybe OSM2City. But probably not enough to need more than a 1060 at worst for 1080p (??).

Kind regards
vnts
 
Posts: 169
Joined: Thu Apr 02, 2015 12:29 am

Re: Real world scenery

Postby wkitty42 » Fri May 29, 2020 3:06 pm

thanks for the info, vnts... it matches with my research... i'm also looking to do more streaming so having a good GPU is a must... especially for using NVTT which is not available for my current OS... that requires an OS update but i need another 1TB drive to clone to before attempting a full OS update... justin case might come to visit ;)
"You get more air close to the ground," said Angalo. "I read that in a book. You get lots of air low down, and not much when you go up."
"Why not?" said Gurder.
"Dunno. It's frightened of heights, I guess."
wkitty42
 
Posts: 6491
Joined: Fri Feb 20, 2015 3:46 pm
Location: central NC, USA
Callsign: wk42
Version: git next
OS: Kubuntu 14.04.5

Re: Real world scenery

Postby Hooray » Sat Jun 20, 2020 4:29 pm

vnts wrote in Thu May 28, 2020 9:57 pm:Increasing GPU power beyond a certain limit won't help much until current CPU bottlenecks are fixed.

See Adrian's original LOD experiments:

https://sourceforge.net/p/flightgear/ma ... sg30237674
Adrian wrote: I am presenting an experimental (WIP) method to reduce memory consumption by scenery with 30%, while increasing the visibility distance 4 times. This method relies on some kind of LOD system, without mesh simplification.


vnts wrote in Thu May 28, 2020 9:57 pm:With regards to NVIDIAs pricing

See: https://wccftech.com/nvidia-geforce-gpu ... isualized/
NVIDIA’s Mainstream GeForce GPU Performance Per Dollar Visualized Over The Years, Are We Bound To Get Another Pascal-Like Upgrade With Ampere?
Please don't send support requests by PM, instead post your questions on the forum so that all users can contribute and benefit
Thanks & all the best,
Hooray
Help write next month's newsletter !
pui2canvas | MapStructure | Canvas Development | Programming resources
Hooray
 
Posts: 11836
Joined: Tue Mar 25, 2008 8:40 am

Re: Real world scenery

Postby vnts » Sun Jun 21, 2020 11:19 pm

Hooray wrote in Sat Jun 20, 2020 4:29 pm:See Adrian's original LOD experiments:
https://sourceforge.net/p/flightgear/ma ... sg30237674
Adrian wrote: I am presenting an experimental (WIP) method to reduce memory consumption by scenery with 30%, while increasing the visibility distance 4 times. This method relies on some kind of LOD system, without mesh simplification.

Interesting. There seem to be two aspects to that:

Adrian (2012) wrote: I could explain it in my simplistic view: the current position holds the large textures for all material within the inner zones. If there are materials within the outter zone which are not within the inner zone, their effect is using a smaller texture, at least until they pass into the inner zone.


This method might give some space savings in areas with lots of landclasses and diverse texture sets. But I'm not sure if the savings are significant these days, given the larger amounts of VRAM/RAM (?). Also, the new DDS texture cache option reduces occupancy a lot - the DDS format keeps textures compressed until very late in the process, remaining compressed even in GPU VRAM (until GPU texture access?).
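
For a sense of scale (standard DXT block-compression sizes, not FG measurements): GPUs can sample DXT1/DXT5 textures natively, so a DDS-stored texture stays compressed in VRAM, roughly a 4-8x saving over uncompressed RGBA8.

Code: Select all
// Rough VRAM occupancy for a single 2048x2048 texture: uncompressed RGBA8 vs.
// DXT1/DXT5 (the block-compressed formats typically stored in DDS files).
// The 4/3 factor accounts for a full mipmap chain.
#include <cstdio>

int main() {
    const double w = 2048, h = 2048, mipFactor = 4.0 / 3.0;
    const double rgba8 = w * h * 4.0 * mipFactor;  // 32 bits per pixel
    const double dxt1  = w * h * 0.5 * mipFactor;  // 4 bits per pixel (no alpha)
    const double dxt5  = w * h * 1.0 * mipFactor;  // 8 bits per pixel (with alpha)
    std::printf("RGBA8: %5.1f MiB   DXT1: %4.1f MiB   DXT5: %4.1f MiB\n",
                rgba8 / (1 << 20), dxt1 / (1 << 20), dxt5 / (1 << 20));
    return 0;
}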

Adrian (2012) wrote: I'm only loading the bare surface from the BTG, and not performing any tree, building, model calculations from them (they would be invisible at 20-30 km away, but the osg::LOD just hides it from view, does not prevent from loading in RAM all the objects).

With LoD bare etc. this seems to be already implemented(?).

There might be ways to reduce VRAM usage and framespacing issues related to VRAM significantly, even without changing scenery LOD schemes:

- At high tree density settings, trees & shadows take up a /lot/ of VRAM (and RAM). Trees also seem to contribute to VRAM related framespacing issues when turning.
- Random scenery objects currently also seem to hurt FPS by bottlenecking the CPU through OSG scene traversal, according to StuartB viewtopic.php?f=5&t=37499&p=368550#p368463 .
- Shader buildings probably take up a reasonable amount of VRAM too

AIUI all of these could be reduced/fixed by instancing using Uniform Buffer Objects (?). Not sure what all the advantages/reasoning for the current way of doing instancing via vertex attribute data are - I guess it might be a lot more compatible with GPUs, or maybe that was the case in the past? It's also possible to instance extra things like OSM2City roads & pylons, reducing memory usage.

The UBO extension GL_ARB_uniform_buffer_object [2] appears to be written against OpenGL 2.1, which AIUI FG supports (?). The existing instancing method could be left as a fallback for older or less compatible GPUs. Systems capable of running trees turned up, random scenery objects, and shader buildings probably have GPUs that support UBOs - there's the older GL_EXT_bindable_uniform too [3].
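
For reference, a minimal sketch of per-instance data via a UBO on the raw OpenGL side - not FG's actual effects code - assuming an existing GL context, GLEW for the extension entry points, and a linked shader program whose vertex shader declares a uniform block 'InstanceData' with a vec4 offset[256] array indexed by gl_InstanceID (instanced draws themselves need ARB_draw_instanced or GL 3.x):

Code: Select all
// Illustration only: per-instance data (e.g. building offsets) supplied
// through a uniform buffer object, then drawn with a single instanced call.
#include <GL/glew.h>
#include <vector>

void drawInstancedBuildings(GLuint program, GLsizei vertexCount,
                            const std::vector<float>& offsets /* 4 floats per instance */) {
    const GLsizei instances = static_cast<GLsizei>(offsets.size() / 4);

    // Create and fill the UBO with one vec4 per instance.
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, offsets.size() * sizeof(float),
                 offsets.data(), GL_STATIC_DRAW);

    // Bind the buffer to binding point 0 and point the shader's block at it.
    const GLuint bindingPoint = 0;
    GLuint blockIndex = glGetUniformBlockIndex(program, "InstanceData");
    glUniformBlockBinding(program, blockIndex, bindingPoint);
    glBindBufferBase(GL_UNIFORM_BUFFER, bindingPoint, ubo);

    // One draw call renders every instance; the shader reads offset[gl_InstanceID].
    glUseProgram(program);
    glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instances);

    // Freed here for brevity; a real implementation would keep and reuse the buffer.
    glDeleteBuffers(1, &ubo);
}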

It's probably possible to reduce memory consumption from trees & shadows by packing attributes similar to shader buildings, but unlikely to be anywhere near as effective as UBOs.


Moore's law (sort of, and with stalls from hitting tech limits) :)

The increases in the graph are a combination of tech advancement (the silicon wafer process) and some GPU architecture advancement. Since performance is tested using games, it depends on 'per game' driver work too, I guess. Neither AMD (these days) nor NVIDIA owns fabrication plants or manufactures the GPU & VRAM chips they use; companies like TSMC or Samsung do. NVIDIA/AMD focus on GPU design, marketing, and selling chips to card manufacturers. The graph shows retail prices, so the retail sector's cut may have changed over time.
Article wrote:But AMD was picking up the pace by offering its older GCN offerings at much lower price points and with a new generation of cards coming in, NVIDIA had to go all out with Pascal, it's first 16nm cards using the FinFET design.

It would be relatively more interesting to see a price vs performance graph using the /average/ cost per area of chip die, with GPU card cost & retail cost changes etc. removed - essentially how much performance NVIDIA/AMD were prepared to hand over given the different levels of competition in different eras (R&D costs change too, and that scales down as the market grows). ATI/AMD used to focus more on the lower end, giving slightly better performance in each price slot. Now things are changing a bit. AIUI Intel has also decided to enter the GPGPU/HPC space, and might eventually also release hardware aimed at the 3D graphics market.

Advancing technology & costs are a bit complicated. Part of the benefit of constantly improving the underlying semiconductor technology and creating smaller 'process sizes' is that less die space is used for the same number of transistors - the advertised 'process size' is mostly a marketing label, but the trend of increasing transistor density is real. The part of CPU/GPU companies' cost that is related to die size trends downwards for the same number of transistors as density increases, even with increasingly expensive/complex fabrication processes that are then offset by larger scale. Another point is that as each generation's process gets refined, the yields go up. In addition, lower-spec CPUs & GPUs can be higher-spec parts with underperforming (or not!) cores disabled, depending on yield. Power requirements have gone down a lot too, which might (?) reduce card manufacturer costs. Another complicating factor is that some of the wafer is left unused, depending on process design constraints, to help with thermal management. Maybe a graph of /average/ cost per yielded transistor vs the number of transistors per average retail price might be somewhat helpful too.

(Speaking from a knowledge of/interest in electronics - again more a knowledge of the underlying things, plus loosely following a bit of the consumer CPU & GPU space out of curiosity.)
Article wrote:We also expect NVIDIA to focus on both rasterized and raytraced GPU horsepower this time around rather than just pushing raytracing as a feature once again with minimal rasterization and shader performance increases. If NVIDIA really wants Ampere to stand out then they might have another Pascal-like performance per dollar jump ready for all of us consumers.

We'll see after Ampere :mrgreen: . NVIDIA are likely reacting, in performance or price, to the strength of the expected AMD releases. AMD also has the hardware monopoly on both consoles, and according to Google the upcoming generation of consoles will sync with the top of the current PC line (RTX 2080-ish) - it might imply PC performance taking off as the next-gen GPUs are supposedly on a new process, and it might also put pressure on NVIDIA, which only does PC GPUs, to give a reason to use PCs. NVIDIA introducing a new xx90 slot at 300W sounds like a panicky attempt to keep the 'fastest GPU' tag in the face of strong competition (forcing high frequencies on normally slower tech, with the "Ampere" name fitting the current draw, or just lots of SMs to leave AMD behind?). The Ampere xx60 will likely be out and discounted for Christmas sales this year (?).

If Intel joins 3d graphics GPU competition in future years it may help a lot.

Kind regards
vnts
 
Posts: 169
Joined: Thu Apr 02, 2015 12:29 am
