
Scene building, perception and complex systems

Postby Thorsten » Mon May 09, 2011 12:02 pm

What's this about?

This is a collection of some of my thoughts and experiments about what makes a rendered scene 'work' in the sense that the mind accepts it as realistic and is impressed, or 'fail' in the sense that it looks artificial. My own experience within Flightgear centers on generating believable cloud configurations, but I've also experimented a lot with texturing terrain and random scenery model generation.

In rendering a scene, it's important to note that we create an illusion. We have no way of generating all the detail that is actually there in a real scene (even photographic scenery could not do that, by the way), so what we're really after is fooling the mind into believing that a scene contains far more detail than it actually does.

To this aim, I'll first outline how perception works and how the mind generates an impression of a scene from the raw visual information. With this information, we can understand where adding detail is helpful and where it is harmful. I'll then argue that creating a good scenery has the dynamics of a complex system, i.e. the whole determines in an essential way how the parts behave. In other words, simply adding two individually stunning improvements to the scenery doesn't necessarily combine into a stunning result - you may just get garbage. My experience with complex systems comes mainly from how I make my living - I'm a theoretical physicist simulating such systems.

I hope that this collection will be helpful for people deciding what they'd like to work on and how to improve the visual realism of Flightgear. In the following, I will also present some things that don't work. Please do not take this as personal criticism of your work if you happen to be the author - I try to treat every problem in a professional manner, and I also know all too well that outlining a problem is one thing and writing the solution quite another. I'll also readily admit that there's no reason to expect that I understand all the issues involved, and I'd be very happy if, for instance, some of our experienced 3d modellers would add their own experiences and observations.

I also believe many of the principles I'm describing are more widely applicable than the examples I discuss here, e.g. to the generation of villages from random objects, or to deciding how best to build a city from static models, and so on.

Mental processing of visual perception

In a somewhat simplistic picture, visual input is processed by the mind in a conscious and an unconscious stream, which are merged into a mental picture of the scene.

In the conscious stream, contrasts are the dominant issue. They segment the visual field into regions requiring high and low attention, and conscious awareness later parses the image along these division lines. For instance, if there is an island in an otherwise featureless ocean, the mind jumps to the island as a region of high attention and directs only low awareness to the ocean. This means (among other things) that the level of detail the mind later perceives the scene to have is dominated by what is seen in the regions of high attention, i.e. those singled out by contrasts - the level of detail available in low-attention regions doesn't much influence what we believe we see. If you look at old pencil drawings, you can often see this at work - the drawings are sketchy around the edges, and only a small region is drawn with high contrast and high detail - yet they achieve what they're supposed to, i.e. they generate the impression that the drawing has far more detail than it actually has.

At the same time, unconscious processes try to find and categorize patterns in the visual information. If the pattern is very obvious, the information becomes conscious; if the pattern is subtle, it remains unconscious, although we may get the vague impression that something is not quite right with the picture. The pattern recognition of the unconscious mind is really good - if there's an artificial pattern in a rendered scene, it will usually be noticed at some level, and it takes a lot of work to pattern a rendered scene so that it looks natural. However, if a scene replicates the relevant patterns correctly, it will be accepted as natural-looking, even if it actually has a lot less detail than a photograph.

The merged mental picture of the scene is then evaluated for plausibility, based on comparison to what we have seen before. As a simple example, a blue band at a valley bottom will readily be accepted and identified as a river - the same blue band crossing a mountain range will not lead to the impression of a river even if drawn in much more detail, simply because we know from experience that water runs to the lowest point.

Thus, contrasts leading to regions of awareness, patterns, and plausibility are not per se good or bad, but rather something to be taken into account - if handled correctly, they can help generate a realistic scene with less effort than using a photograph. If not taken into account, they can spoil a lot of effort.

Contrasts focusing awareness - some don'ts

For lack of processing power, we can't render a natural scene - so we use some techniques to create the illusion of one. Under no circumstances do we want to focus attention on how the illusion is achieved technically. If we fail to respect this principle, the consequence is that what we see looks highly artificial.

Some examples:


Image

The default cloud layer with 20 km visibility range seen from 30,000 ft (what an airliner passenger might see when looking out of the window). From this altitude and view angle, the visibility range boundary can clearly be recognized, and since the clouds are the highest-contrast objects, attention is naturally directed towards them. In a static screenshot, the scene is marginally plausible, as there just might be a cloud layer ending there - when actually flying, the effect is not realistic at all.

The problem is not so much that a technical trick is made visible - this is largely unavoidable. What is more problematic is that it is seen from a perspective that frequently occurs in standard flights. Scenes do not have to work well from orbital altitudes or in steep dives - but they should work for cruise-altitude views.



Image

In this scene, the landclass boundaries form contrasts which are enhanced by the two different snow colors and by the texture coloring. The focus of awareness on the boundary region directs attention to the vastly different level of detail on the two sides of the boundary and allows a direct comparison of different rendering techniques - while technically interesting, it looks very unnatural.


Image

This is the Alaska range. Here the hard contrasts between forest and snowcover/glacier draw attention to the fact that landclasses have very sharp and straight boundaries. In the later plausibility check, this makes the viewer wonder if there shouldn't be some snow in the forest areas as well (if snow and forest were just a bit more mixed, the scene would look much better).


Contrasts focusing awareness - some do's

Contrasts can be made to work for you if you want to direct the viewer's attention away from an area where no details are available and towards a highly detailed spot - in this case, the mind tends to accept the whole scene as far more detailed than it actually is.

Some examples:

Image

In this scene, the morning fog and the diffuse clouds largely hide any contrasts at landclass boundaries. The highest contrast is offered by the airplane and by the sun reflection on the water, and when taking in the scenery, attention is largely captured by the detailed reflection pattern. That directs attention away from the fact that the mountains don't show any great texture detail and that the water outside the sunspot is also fairly simple.

Image

Here, a similar technique is used to direct the viewer's attention to where it is supposed to be. A small patch of high-resolution textured cloud is embedded into a larger layer with low detail. Since the larger layer offers almost no contrast, the eye is drawn towards the high-resolution area and consequently gets the impression that the whole cloud layer is fairly detailed, whereas it actually is not.


Patterns and recognition

Let's turn to the unconscious processing and the ability to spot patterns. Patterns can be quite obvious, like a repetition.

For example, the fact that individual terrain textures typically cover a 2x2 km area becomes apparent through repetition when viewing larger areas from high altitude:

Image

In this case, the effect is enhanced by the fact that the irrigated crop texture shows very pronounced structures, so that their repetition is very prominent.


But patterns can be much more subtle. Let's turn to a different example. What is the matter with this cockpit?

Image

Clearly, it's not lack of detail - you'd be hard-pressed to find any aircraft with more instruments than the Concorde. The problem is that monochromatic surfaces do not occur in a real environment. The unconscious mind searches the scene for cues to depth information, but fails to find any shading or any hint of an uneven surface - thus the buttons appear as if projected onto a perfectly flat and smooth surface, and the material of the cockpit doesn't appear real, in spite of the high detail level of the 3d model.

Image

This is the cockpit after retexturing (done by yours truly) with the aim of offering the eye some cues to depth and some structure in the materials. Still not great (which just shows that I didn't really have a clue how to texture properly), but clearly an improvement. Note that none of the improvements are 'real' 3d features - the shading in the gauge sockets and buttons is part of the texture and points in quite the wrong direction under some lighting conditions - but the mind nevertheless accepts the cues to 3d perception gratefully and has the impression that there is structure.
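
The principle is easy to sketch in code. Here is a hypothetical Python/numpy illustration (the actual retexturing was of course done by hand in an image editor, and the function and parameter names are mine): take a monochromatic surface, add low-amplitude grain and a faint shading gradient, and the mind finds the cues it was looking for.

```python
import numpy as np

def break_flatness(rgb, noise_amp=0.03, grad_amp=0.05, seed=0):
    """Add low-amplitude grain and a faint top-to-bottom shading gradient
    to a texture, giving the eye the depth cues a flat colour denies it."""
    h, w, _ = rgb.shape
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, noise_amp, size=(h, w, 1))              # material structure
    shading = np.linspace(grad_amp, -grad_amp, h).reshape(h, 1, 1)  # fake light falloff
    return np.clip(rgb + grain + shading, 0.0, 1.0)

panel = np.full((512, 512, 3), 0.25)   # perfectly monochromatic - reads as unreal
panel = break_flatness(panel)          # subtle cues - reads as material
```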

Possibly the most subtle patterns are scale distributions. Objects in nature do not all have the same size; the size varies with some distribution between certain boundaries. For instance, trees cannot have a size of 500 m, even 50 m trees are very rare, a forest with too many 50 m trees embedded would look decidedly strange, but so would a forest in which every tree is precisely 15 m high. If you want to get a natural-looking forest, you have to get the scale distribution right.
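
As an illustration, here is a minimal Python sketch of the difference - the numbers are my rough assumptions for a temperate forest, not values taken from any Flightgear code:

```python
import random

def tree_height_m():
    """Sample a plausible tree height: median around 15 m, tall trees
    increasingly rare, hard bounds so no 500 m monsters can appear."""
    while True:
        h = random.lognormvariate(2.7, 0.35)  # e^2.7 ~ 15 m median
        if 2.0 <= h <= 50.0:                  # reject physically absurd sizes
            return h

# A natural-looking forest: varied heights with the right distribution...
forest = [tree_height_m() for _ in range(1000)]
# ...versus the artificial alternative: [15.0] * 1000, every tree identical.
```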

That's really important for clouds (and took me quite some time to figure out). Convective clouds represent turbulent air. Turbulence mixes all size scales, hence convective clouds generate patterns over a huge range in scale - from individual 'feathers' of 5 m size to vast towering cloud masses of 10 km size - and all scales in between occur with similar probability.
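
In code, 'all scales with similar probability' translates into log-uniform sampling. A minimal sketch of the idea (my illustration, not the actual cloud placement code):

```python
import math
import random

def cloud_scale_m(smallest=5.0, largest=10_000.0):
    """Turbulence mixes all size scales, so sample log-uniformly:
    every factor-of-two band between 5 m and 10 km is equally probable."""
    return math.exp(random.uniform(math.log(smallest), math.log(largest)))
```

A plain uniform draw between 5 m and 10 km would instead put almost all its probability into kilometre-sized clouds and starve the small scales - exactly the kind of subtle distribution error the unconscious mind picks up on.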

Take a look at this layer:

Image

What is wrong with it? At first glance, nothing. However, there remains a nagging feeling that something isn't quite right - and if you look closely, you discover that it predominantly generates only two size scales: 1) the ~10 m 'feather' width in the textures and 2) the ~500 m size of the clouds, with a scatter of about a factor of 2 around each of these dominant scales.

The effect of the scale hierarchy becomes apparent when comparing with this layer:

Image

Here, the textures have more scale variety inside each texture: there are not only feathers at 10 m scales but also larger blobs filling the 100-500 m scale hierarchy, and individual clouds vary by a factor of 5 or more in size.

That the problem in the first case is really a scale distribution problem becomes apparent in screenshots where the scale can't be observed - there is nothing wrong with any individual cloud; in fact, they look fantastic. It's just that the sum of many individually well-working clouds doesn't result in an equally well-working layer.

Image

The tricky issue about getting patterns right is that one first needs to be aware of what is wrong - often the signs are very subtle and not easily recognized consciously. This makes it difficult to fix the problem, because one doesn't really know where to start or what precisely is wrong.

Some ways to address the problems - and complex systems

Addressing the problems created by unwanted focusing of attention or by artificial patterns is by no means easy. The whole issue is made more complicated by the fact that individual elements of scenery rendering are not independent but tie together, and improving one issue usually opens a new can of worms elsewhere. This is a property of complex systems, and it's nicely illustrated in the following shot. Flightgear has a very realistic water reflection shader, and it is capable of rendering credible overcast skies which decrease the amount of light beneath the layer. However, the combination of the two effects is just silly:

Image

The water shader doesn't know that the sunlight is blocked - so it continues to render sparkling reflections even where none would be possible. The lesson: individual subsystems need to be coordinated - there is no sense in improving one subsystem in isolation if that has bad repercussions elsewhere.
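
Conceptually the fix is a one-liner - provided the subsystems actually talk to each other. A minimal sketch, assuming the weather code can hand the water shader an overcast fraction (all names here are hypothetical):

```python
def water_specular(base_strength, overcast):
    """Attenuate the water shader's sun sparkle by the overcast fraction
    reported by the weather code (0.0 = clear sky, 1.0 = full overcast)."""
    return base_strength * (1.0 - overcast) ** 2

print(water_specular(1.0, 0.0))  # 1.0 - full sparkle under a clear sky
print(water_specular(1.0, 1.0))  # 0.0 - no sparkle under full overcast
```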

Having said that, let's go into some techniques. One core problem is that Flightgear uses landcover data in polygons rather than photographs (which actually makes a lot of sense for a whole host of other reasons - photo-textures are by no means unconditionally superior), so we have to deal with the contrasts generated by polygon boundaries, as they will always be there. At the same time, we have to deal with the pattern repetition in the textures.

Static solution

As mentioned above, individual texture sheets typically cover 2x2 km at 1024x1024 pixels - thus they can potentially reflect the scale distribution between ~2 m and 2 km. Since the landcover resolution (or rather, the distance scale across which landcover changes) is often more like 10-20 km, the landcover data can only generate scales of this order or larger. Thus, there's a technical gap in Flightgear's ability to get scale distributions in terrain rendering right.

The gap is worse if the individual textures show only small-scale details. If the texture sheet looks essentially uniform above, say, objects of 10 m size (often the case for forest textures), then the actual scale gap is not a factor of 5 from 2 km to 10 km, but a factor of 1000 from 10 m to 10 km, i.e. the situation is much worse.
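
To make the arithmetic explicit, a trivial sketch (the helper function is mine and merely restates the numbers above):

```python
def scale_gap(texture_detail_m, landcover_scale_m):
    """Factor of size scales that neither the textures nor the
    landcover data can populate."""
    return landcover_scale_m / texture_detail_m

print(scale_gap(2_000, 10_000))  # 5    - texture structured up to its 2 km sheet size
print(scale_gap(10, 10_000))     # 1000 - forest texture uniform above ~10 m
```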

Providing artificial detail at various scales can actually work fine, as exemplified by this forest texture:

Image

The fact that the texture contains different kinds of forest as well as clearings and paths makes for a credible scene even though the landcover is unchanged throughout the picture. At the same time, variations and contrasts inside a texture distract from the contrasts at the landcover boundaries (which we'd like to hide), and those boundary contrasts are strongest for largely homogeneous textures.

However, overdoing the contrasts inside a texture is also bad, since it worsens the pattern repetition problem (see above) which is absent for homogeneous textures.


Another way to close the scale gap is using more detailed landcover data. If the resolution of the landcover data is about 100 m, then the burden on the texture is lessened.

Compare Denali

Image

with Rainier

Image

In spite of the fact that both pictures have hard contrasts between forest and ice, the information about the rock patches and the more accurate location of the glacier ice flow in the second case makes a lot of difference. However, detailed landcover data isn't a universal problem solver if other aspects are not addressed properly.

This is Haleakala crater with high resolution landcover data:


Image

While lava can be as dark as shown in the picture, see e.g. here

Image

the scene nevertheless appears unrealistic because the high contrast shows the polygon boundaries too clearly. So even if Haleakala really does have such dark lava, we should not use a dark lava texture, because we're unable to get nature's more fractal boundaries between dark lava and the surroundings right. Instead, a lighter texture can do much to diffuse the contrast and focus the attention elsewhere:

Image


Some lessons:

* textures need some variation throughout the texture sheet to address the scale gap problem
* however, that variation must not be too pronounced
* strong contrasts between any two landclasses should be avoided whenever possible - it's better to get the colors a bit wrong than to draw increased attention to our not-so-nice polygons

Dynamical solutions

There are some dynamical solutions to the pattern repetition and scale gap problem which generate structures dynamically such that repetitions are avoided.

One of them is the crop texture shader, which avoids the trap of repeating texture sheets. Thus, a repeating crop texture

Image

is turned into a non-repeating one

Image

While this cures the repetition problem for good, the landclass boundaries still come out very strong, and better blending with other textures might help. Also, the level of detail inside a texture sheet is better and more realistic in the first case - the crop shader isn't actually better when high-resolution landcover is available and the repetition problem is absent; it is just better when there is a repetition problem.
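
To give an idea of what such a dynamical solution looks like mechanically, here is one standard anti-repetition trick sketched in Python - not necessarily what the crop shader actually does, which is a GLSL effect:

```python
import random

def tile_appearance(tile_x, tile_y, n_variants=4):
    """Hash the integer tile coordinates into a deterministic per-tile
    choice of texture variant and rotation, so neighbouring tiles never
    show the identical sheet (but the same tile always looks the same,
    which keeps the result stable from frame to frame)."""
    rng = random.Random((tile_x * 73856093) ^ (tile_y * 19349663))
    return rng.randrange(n_variants), rng.choice([0, 90, 180, 270])

print(tile_appearance(12, 7))  # e.g. a (variant, rotation) pair
print(tile_appearance(13, 7))  # a different combination next door
```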

Another technique is the gradient shader, which generates more detail inside a landclass based on the (more accurate) elevation information. Again, that leads to stunning results inside a landclass

Image

but doesn't blend well at the boundaries

Image

In addition, the gradient shader has yet another problem - it fades less with distance in hazy conditions than other terrain types do. As a result, forests remain very dark while other textures have whitened long before, again causing a variant of the polygon boundary prominence problem:

Image
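
Both the strength of the gradient shader and the needed fix can be sketched like this - Python standing in for what is really a shader effect, with an interface that is entirely my assumption:

```python
def terrain_color(slope_deg, haze, forest_rgb, rock_rgb, haze_rgb):
    """Derive extra detail from the accurate elevation data (here:
    steeper slopes show rock through the forest), then apply the SAME
    distance haze every other terrain type gets, so forests whiten
    with distance like their neighbours instead of staying dark."""
    rock = min(max((slope_deg - 25.0) / 20.0, 0.0), 1.0)  # rock ramps in from 25 to 45 deg
    base = [f + rock * (r - f) for f, r in zip(forest_rgb, rock_rgb)]
    return [c + haze * (h - c) for c, h in zip(base, haze_rgb)]
```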

Some lessons:

* dynamical solutions to the scale gap and pattern repetition are great, as they can evade the trap in which adding more contrast inside a texture (to better hide polygon boundaries) automatically worsens the repetition problem

* but it's not enough for them to just 'be there' - they need to be tuned to blend well with the rest of the rendering system, otherwise they may make matters even worse under some conditions.

Combined strength

A combination of high-resolution landcover, relatively low-contrast blended textures, and detail inside textures can hide quite a lot of inconsistency:

Image

Here, for instance, a terrain error interrupts the river - it usually takes a while to spot, and the picture looks okay at first glance. Of course, an approach in which many knobs are turned at once doesn't fit so well into the Flightgear development structure of individualists working on their own problems. Nevertheless, that's what I think we must do to get better - talk to each other more and make individual subsystems cross-talk.

Conclusions

There are a number of things I can imagine would be helpful for a next-generation scenery rendering - but I'll refrain from going into too many details. Maybe just a few keywords:

* a large database of potential texture candidates and actual aerial photographs of test areas, to blend the best texture set
* regional texture sets loadable at runtime
* cross-talk between any reflection effect and weather code
* generalized solutions to the repetition problem, possibly based on the crop shader
* more work dedicated to tuning effects
* ...
Thorsten

Re: Scene building, perception and complex systems

Postby buti » Mon May 09, 2011 2:24 pm

Hi Thorsten,

This is a really interesting post. Even though I currently have very limited time for development, I have thought a lot about possible improvements in terrain rendering in Flightgear. I am specifically interested in procedural terrain rendering techniques. There are some papers/pages in my bookmarks that I now want to share:

http://www.vterrain.org/Elevation/Artificial/
http://www.howardzzh.com/research/terrain/
http://wwwcg.in.tum.de/Research/Publications/FractalTerrain

Possibly someone can dig into this earlier than I can. I am still not sure whether any of this can be applied to Flightgear. Also, this touches only one aspect of the problems you describe; one would need to address all of them to make flying in the virtual world a visually pleasant and "natural" experience.
buti

