Icecode GL wrote in Mon Jul 31, 2017 11:20 am:
The difference in using irradiance maps or not is not that noticeable. With these features an important factor to consider is the "pain-gain ratio", i.e. is it worth it for the aircraft developer to inform himself on all of that stuff just for such a small change?
From reading the wiki, irradiance from transparent surfaces is done algorithmically in the shader, with only rough dominant light directions specified. It has the potential to affect lighting in different ways in different situations: it potentially changes with orientation relative to sunsets/terrain, and it can mix with cockpit/sunlight sources in complex ways, so it's not easily shown in one screenshot.
This is my understanding -so far- of radiosity & irradiance maps:
Irradiance from the environment [sky dome, clouds, terrain] is a very large part of daytime cockpit lighting. Lots of gain.
When the sun is overhead and there's no overhead canopy, 100% of the daytime illumination comes from the environment. It's everything, for a very notable % of the time people fly. Glass regions are like area lights and can be considered direct illuminators, casting soft shadows.
Mixing of light colours from environment & cockpit sources, and light spectra competing for dominance, is a factor. Skydome light is actually blue during the day (even after 1 Rayleigh bounce). Terrain & clouds have their own spectra. It's only the visual system's colour constancy that helps keep the scene mostly legible, but this is not perfect & reproducing how it fails is part of achieving realism. I suspect the quality of the illuminant is consciously perceived as well, e.g. being able to tell the time based on daylight quality.
With glass regions facing the horizon, illumination can be reddish at sunset (google image). It will change as the pitch changes, becoming bluer & dimmer, or as the glass regions face away from the sun. An overhead canopy means competing spectra, same with cockpit lights.
Taking light from glass surfaces as the first bounce, the 2nd, 3rd, etc. produce ambient effects. Light from, say, a green surface close to where two surfaces meet can colour the 2nd surface green.
Google image showing bounces and light from nearby surfaces. Human vision also uses gradients in light to gauge 3D depth & structure.
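To make that 2nd bounce concrete, here's a tiny Python sketch (nothing from FlightGear's code, all positions, normals and albedos invented) of a green panel tinting a nearby grey one through a simple Lambertian point-to-patch form factor:

```python
import numpy as np

# Illustrative only: a green panel, lit through the glass, re-emits light
# that a nearby grey panel picks up as a green tint (the "2nd bounce").

def form_factor(p_recv, n_recv, p_src, n_src, area_src):
    """Point-to-small-patch form factor from the receiving point to the source patch."""
    d = p_src - p_recv
    r2 = np.dot(d, d)
    d = d / np.sqrt(r2)
    cos_recv = max(np.dot(n_recv, d), 0.0)   # receiver facing the source?
    cos_src = max(np.dot(n_src, -d), 0.0)    # source facing the receiver?
    return cos_recv * cos_src * area_src / (np.pi * r2)

# 1st bounce: the green side panel is lit by light coming in through the glass.
green_albedo = np.array([0.1, 0.6, 0.1])
light_from_glass = np.array([0.8, 0.8, 0.8])
green_radiosity = green_albedo * light_from_glass        # what the panel re-emits

# 2nd bounce: a grey panel 20 cm away, facing the green one, picks up a tint.
ff = form_factor(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                 np.array([0.2, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]),
                 area_src=0.05)
grey_albedo = np.array([0.5, 0.5, 0.5])
print("greenish light picked up by the grey panel:", grey_albedo * green_radiosity * ff)
```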
A substantial portion of the perceived difference between a realtime 3D render and a photo is the multiple bounces, even when there is direct sun illumination (wiki image). That's what I meant when I suggested that more might be achieved by better radiosity than by modest improvements to textures or shading. The 2nd wiki image is more photo-like even with minimal texture detail; the first is very recognisable with the 'realtime 3d render' look. When there is no sun illumination, environmental irradiance dominates.
As I understand it, modern realtime 3D applications go to quite some lengths - using grids of precomputed environment radiance maps projected onto spherical harmonic basis functions (light probes, IIRC). This assists with non-static geometry moving around in a static lightfield. For cockpits the geometry is static or restricted (switches with limited states etc.), so accuracy is achievable without going to such lengths.
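For what it's worth, the probe projection itself is simple enough to sketch. This is the textbook construction in generic Python - projecting a lat-long environment map onto the first 9 spherical harmonic coefficients - not anything taken from FlightGear or its shaders:

```python
import numpy as np

# "Light probe" idea: project a lat-long radiance map onto the first
# 9 real spherical-harmonic basis functions.

def sh_basis(d):
    """First 9 real SH basis functions for direction d = (x, y, z)."""
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])

def project_env_to_sh(env):
    """env: (H, W, 3) lat-long radiance map -> (9, 3) SH coefficients."""
    H, W, _ = env.shape
    coeffs = np.zeros((9, 3))
    for row in range(H):
        theta = (row + 0.5) / H * np.pi                  # polar angle
        d_omega = (np.pi / H) * (2.0 * np.pi / W) * np.sin(theta)
        for col in range(W):
            phi = (col + 0.5) / W * 2.0 * np.pi
            d = (np.sin(theta) * np.cos(phi),
                 np.sin(theta) * np.sin(phi),
                 np.cos(theta))
            coeffs += np.outer(sh_basis(d), env[row, col]) * d_omega
    return coeffs

# A probe grid stores one such coefficient set per sample point; the shader
# then reconstructs smooth irradiance for geometry moving through the grid.
```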
Doing the full raytrace, or a numerical solution of Maxwell's equations, would have a powerful impact. It involves looking at the flux through each point of the glass surface in every direction, and including multiple bounces around the cockpit. Specifying rough directions of glass regions, as done currently, is a big improvement for a text-file change.
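To illustrate what that "full" version means at a single cockpit point, here's a hedged Monte Carlo sketch of the hemisphere integral; trace_radiance() is a hypothetical callback standing in for the actual multi-bounce trace out through the glass:

```python
import numpy as np

# Irradiance at one point = integral of incoming radiance over the hemisphere
# above it, weighted by cos(theta).  trace_radiance(d) would follow a ray out
# through the glass and around further bounces; here it's just a placeholder.

def cosine_sample_hemisphere(rng):
    """Cosine-weighted direction in the local frame (normal = +z)."""
    u1, u2 = rng.random(2)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    return np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])

def irradiance_at_point(trace_radiance, n_samples=1024, seed=0):
    """Monte Carlo estimate of E = integral of L(w) cos(theta) dw (RGB)."""
    rng = np.random.default_rng(seed)
    total = np.zeros(3)
    for _ in range(n_samples):
        d = cosine_sample_hemisphere(rng)
        # pdf = cos(theta)/pi, so the cos(theta) terms cancel out:
        total += trace_radiance(d)
    return np.pi * total / n_samples

# Toy stand-in: a uniform blueish sky filling the hemisphere above the point.
blue_sky = lambda d: np.array([0.4, 0.6, 1.0])
print(irradiance_at_point(blue_sky, n_samples=64))
```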
Treating each glass region as an area light is an approximation. The light is given one uniform intensity and brightness, which doesn't cover the variation of light across the skydome, or the rays passing at different angles through each point (the transmission pattern). It does include the average intensity and hue/sat of each area light, so it accounts for the average colour of the environment each surface points at, including asymmetries in glass surfaces. The calculations used in ambient occlusion maps are a rough approximation that records hemispherical exposure to light at points on a surface; these become unnecessary with raytraced lightmaps. The full benefit of raytracing is available for each area light: soft shadows / multiple bounces off surfaces with different properties / occlusion effects without the ambient occlusion approximation.
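A small sketch of why the area-light view still buys soft shadows at bake time: sample points on the glass rectangle and count how many are visible from the shaded point. is_visible() is a stand-in for the ray cast a lightmap bake would do, and the geometry below is made up:

```python
import numpy as np

# Fraction of a rectangular "glass" light that is unoccluded as seen from a
# point: 1 = fully lit, 0 = full shadow, in between = penumbra (soft shadow).

def area_light_factor(point, light_corner, edge_u, edge_v, is_visible,
                      n_samples=64, seed=0):
    rng = np.random.default_rng(seed)
    visible = 0
    for _ in range(n_samples):
        u, v = rng.random(2)
        sample = light_corner + u * edge_u + v * edge_v
        if is_visible(point, sample):
            visible += 1
    return visible / n_samples

# Toy occluder: a glareshield blocking the upper half of the window.
def toy_visibility(p, s):
    return s[2] < 0.5   # only the lower half of the glass is visible

f = area_light_factor(np.array([0.0, 0.0, 0.0]),
                      np.array([1.0, -0.5, 0.0]),      # window corner
                      np.array([0.0, 1.0, 0.0]),       # window width
                      np.array([0.0, 0.0, 1.0]),       # window height
                      toy_visibility)
print("unoccluded fraction:", f)   # ~0.5 -> a half-shadowed (penumbra) point
```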
Each area light is approximated as uniformly white and can be assigned a raytraced lightmap containing intensity, but that won't include colouring effects of neighbouring surfaces unless more space is assigned. The average hue/sat and floating-point intensity for each area light can be calculated by using an opacity map to mask an environment map. Floating-point environment intensity can give extremely delicate, realistic competing light sources like in the first image, where it's possible to intuitively get a feeling for the offscreen yellow-ish light environment. It's possible to split up large surfaces like canopies into multiple area lights.
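The masking step could look something like this - a rough sketch that assumes, purely for simplicity, that the opacity map and the environment map share the same lat-long layout, which real assets won't:

```python
import numpy as np

# Mask an environment map with a per-glass-region opacity map and take the
# solid-angle-weighted mean colour & intensity for that area light.

def area_light_colour(env, opacity):
    """env: (H, W, 3) float radiance, opacity: (H, W) in [0, 1].
    Returns (average RGB, scalar intensity) for this glass region."""
    H, W, _ = env.shape
    theta = (np.arange(H) + 0.5) / H * np.pi
    weight = np.sin(theta)[:, None] * opacity            # solid angle * mask
    total_w = weight.sum()
    avg_rgb = (env * weight[..., None]).sum(axis=(0, 1)) / max(total_w, 1e-9)
    intensity = float(avg_rgb.mean())                     # crude luminance
    return avg_rgb, intensity

# Toy example: a region that only sees the upper (sky) half of the map.
H, W = 64, 128
env = np.zeros((H, W, 3))
env[:H // 2] = [0.4, 0.6, 1.2]     # blue sky, > 1.0 thanks to float HDR
env[H // 2:] = [0.3, 0.25, 0.2]    # brownish terrain
mask = np.zeros((H, W))
mask[:H // 2] = 1.0                # this glass region faces the sky
print(area_light_colour(env, mask))   # -> blueish average, modest intensity
```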
That's as much as I know/can say on precomputed irradiance mapping (from a conceptual view). The notion of treating glass regions as area lights is an approximation; I can't think of a closer way to approach raytraced quality in real time. I'm not sure how close the approximation gets to the full soln., what is lost, or the pain-to-gain factors involved compared to other approximations. Thorsten has looked into all this and is better positioned to say.
Kind regards