I've been working on improving the camera configuration system in FG. It is almost done, but needs some cleanup before I can put up a merge request. Here is the list of new features:
- Support for setting the rendering order of cameras
- Can set the clear mask (color, depth and stencil buffers separately)
- Can set the texture format for render-to-texture (this was already there, but had almost no use before this update): rgb, rgba, depth or depth-stencil at the moment.
- Can set the texture type (normalized 2D or rectangle)
- Support for multiple render targets, just by specifying multiple texture targets. This needs modifications to the shaders, but it works.
- Can set whether a camera uses the scene data or a custom model (e.g. a screen-aligned quad)
- Can bind render target textures to texture units for rendering to the screen. (In the future this should allow using these textures in models by specifying the texture name...) A rough sketch of the underlying setup follows this list.
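To make it clearer what these options control, here is a rough OpenSceneGraph-style sketch of the kind of camera they would configure. This is not the actual FG code and not the preferences.xml syntax, just an illustration of the underlying OSG calls; the function name, the 512x512 size and the GL_RGBA format are made up for the example.

    #include <osg/Camera>
    #include <osg/Node>
    #include <osg/Texture2D>

    // Rough sketch of an RTT camera set up the way the new options allow.
    osg::ref_ptr<osg::Camera> makeRttCamera(osg::Node* sceneRoot)
    {
        osg::ref_ptr<osg::Camera> camera = new osg::Camera;

        // Rendering order: run before the main camera, order number 0.
        camera->setRenderOrder(osg::Camera::PRE_RENDER, 0);

        // Clear mask: color, depth and stencil can be selected separately.
        camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Render to texture through an FBO.
        camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
        camera->setViewport(0, 0, 512, 512);

        // Texture format (rgba here; depth or depth-stencil would use other
        // internal formats) and texture type (osg::Texture2D is the
        // normalized 2D type, osg::TextureRectangle would be "rectangle").
        osg::ref_ptr<osg::Texture2D> color0 = new osg::Texture2D;
        color0->setTextureSize(512, 512);
        color0->setInternalFormat(GL_RGBA);

        osg::ref_ptr<osg::Texture2D> color1 = new osg::Texture2D;
        color1->setTextureSize(512, 512);
        color1->setInternalFormat(GL_RGBA);

        // Multiple render targets: just attach more color buffers.  The
        // shaders then write gl_FragData[0], gl_FragData[1], ... instead
        // of gl_FragColor.
        camera->attach(osg::Camera::COLOR_BUFFER0, color0.get());
        camera->attach(osg::Camera::COLOR_BUFFER1, color1.get());

        // This camera renders the normal scene data; a post-processing
        // camera would instead get a custom model (e.g. a screen-aligned
        // quad) as its only child.
        camera->addChild(sceneRoot);

        return camera;
    }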
I may have forgotten something, but those are the main points. Here is what this allows:
- Rendering a different view to a texture, then binding that texture to a texture unit to be used in a model on another camera (see the first sketch after this list):
(The rear view is rendered first, then the cube has an effect/shader which draws that view on all sides of the cube. The view updates when moving, so this could be used in aircraft models etc.)
- "Real" reflections on aircraft. Might work with some tuning?
- Distorted views etc. (which are used in simulators with multiple projectors), without the need to change the source code and recompile.
- Post-processing effects; here is an example of blue-filtering the scene. Everything is rendered to a texture, and then a filter is applied to that texture in post-processing (see the second sketch after this list):
- Chaining of post-processing effects.
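For the rear-view-on-a-cube case (first sketch), the idea is that the color texture attached to the rear-view RTT camera gets bound to a texture unit on the cube's state set, so the cube's effect/shader can sample the live view. Roughly, in OSG terms (the texture unit number and the function name are just placeholders):

    #include <osg/Node>
    #include <osg/StateSet>
    #include <osg/Texture2D>

    // Bind the texture that the rear-view camera renders into to texture
    // unit 0 of the cube model, so its shader can sample the live rear view.
    void bindRearViewTexture(osg::Node* cube, osg::Texture2D* rearViewTexture)
    {
        osg::StateSet* ss = cube->getOrCreateStateSet();
        ss->setTextureAttributeAndModes(0, rearViewTexture,
                                        osg::StateAttribute::ON);
    }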
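And for the blue-filter case (second sketch): the post-processing camera draws a screen-aligned quad textured with the scene texture and runs a small filter shader over it. Again a rough OSG sketch, not the real FG code; the tint shader is only an example:

    #include <osg/Camera>
    #include <osg/Geode>
    #include <osg/Geometry>
    #include <osg/Program>
    #include <osg/Shader>
    #include <osg/Texture2D>
    #include <osg/Uniform>

    static const char* passThroughVert =
        "void main() {\n"
        "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
        "    gl_Position = ftransform();\n"
        "}\n";

    static const char* blueFilterFrag =
        "uniform sampler2D scene;\n"
        "void main() {\n"
        "    vec4 c = texture2D(scene, gl_TexCoord[0].st);\n"
        "    gl_FragColor = vec4(0.5 * c.r, 0.5 * c.g, c.b, c.a);\n"
        "}\n";

    // A post-processing camera: draws a screen-aligned quad textured with
    // the scene texture, running a filter shader over it.
    osg::ref_ptr<osg::Camera> makePostCamera(osg::Texture2D* sceneTexture)
    {
        // Full-screen quad covering [0,1] x [0,1].
        osg::ref_ptr<osg::Geometry> quad = osg::createTexturedQuadGeometry(
            osg::Vec3(0.0f, 0.0f, 0.0f),
            osg::Vec3(1.0f, 0.0f, 0.0f),
            osg::Vec3(0.0f, 1.0f, 0.0f));
        osg::ref_ptr<osg::Geode> geode = new osg::Geode;
        geode->addDrawable(quad.get());

        // Bind the scene texture and the filter shader to the quad.
        osg::StateSet* ss = geode->getOrCreateStateSet();
        ss->setTextureAttributeAndModes(0, sceneTexture,
                                        osg::StateAttribute::ON);
        ss->addUniform(new osg::Uniform("scene", 0));

        osg::ref_ptr<osg::Program> program = new osg::Program;
        program->addShader(new osg::Shader(osg::Shader::VERTEX, passThroughVert));
        program->addShader(new osg::Shader(osg::Shader::FRAGMENT, blueFilterFrag));
        ss->setAttributeAndModes(program.get(), osg::StateAttribute::ON);

        // Orthographic camera covering the quad, rendered after the scene.
        osg::ref_ptr<osg::Camera> camera = new osg::Camera;
        camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
        camera->setProjectionMatrix(osg::Matrix::ortho2D(0.0, 1.0, 0.0, 1.0));
        camera->setViewMatrix(osg::Matrix::identity());
        camera->setRenderOrder(osg::Camera::POST_RENDER);
        camera->addChild(geode.get());

        return camera;
    }

Chaining is then just more of the same: camera A renders the scene into texture T1, camera B draws a quad with T1 into texture T2, and so on, with the render order deciding which pass runs first and the last pass drawing to the screen.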
Note, though, that the camera settings need to be done in preferences.xml, so an aircraft model cannot define any new views. Also, the camera positions are relative to the current view (i.e. cockpit, chase, etc.) and cannot be fixed to the aircraft itself. And rendering to a texture while using that same texture in the scene has undefined results.
I'm trying to expand this so that cameras could select which parts of the scene they render: for example, render opaque objects to a texture, do something with that, render it to the screen, and then render the transparent objects. After that maybe some post-processing, like bloom, on the whole scene. Or deferred lighting, allowing multiple lights... A rough sketch of the scene-selection idea is below.
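The "cameras select what they render" part would presumably map onto OSG node/cull masks, something like this (the mask bits and the function are hypothetical, just to show the idea):

    #include <osg/Camera>
    #include <osg/Node>

    // Hypothetical node mask bits; the real values would have to be agreed
    // on somewhere in the scene graph setup.
    const osg::Node::NodeMask OPAQUE_BIT      = 0x1;
    const osg::Node::NodeMask TRANSPARENT_BIT = 0x2;

    void partitionScene(osg::Camera* opaqueCamera,
                        osg::Camera* transparentCamera,
                        osg::Node* opaqueObjects,
                        osg::Node* transparentObjects)
    {
        // Tag the scene graph branches...
        opaqueObjects->setNodeMask(OPAQUE_BIT);
        transparentObjects->setNodeMask(TRANSPARENT_BIT);

        // ...and let each camera cull everything that does not match its
        // mask.  The opaque pass could go to a texture, the transparent
        // pass to the framebuffer afterwards, with post-processing cameras
        // in between.
        opaqueCamera->setCullMask(OPAQUE_BIT);
        transparentCamera->setCullMask(TRANSPARENT_BIT);
    }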
Maybe someone will find this interesting. I'm not sure if this should be under "Effects and Shaders" though...
Zan