
Compositor pipeline questions



Postby Flying toaster » Mon Feb 08, 2021 8:26 am

Hello,

I am amazed by how good the Compositor looks right now. I have a couple of newbie questions, though.

I see that canvas-driven render-to-texture is in the works, and also that the Compositor is capable of providing the same. I was wondering whether such a camera would be fully configurable in either case (say, driving its position and look-at point from the property tree). Yes, I hung a targeting pod on the F-20, and I am wondering how hard it would be to make it a reality.

Also, I want to have a shot at updating the blackout/redout effect. I thought applying a pixel shader to a full-screen quad textured with the render pass could do the trick; this seems to be implied on the Compositor wiki page. There would be a bunch of benefits, like being able to desaturate colors and simulate tunnel vision as the blackout sets in. Any pointers on how to achieve this with Compositor configuration?
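For what it's worth, the full-screen-quad idea could be sketched as a Compositor quad pass feeding a small fragment shader. Everything below is a rough illustration only: the XML tag names follow the Compositor wiki page loosely, and all buffer/uniform names are made up.

```xml
<!-- Hypothetical quad pass; tag names are only approximate -->
<pass>
  <name>blackout</name>
  <type>quad</type>
  <effect>Effects/blackout</effect>
  <binding>
    <buffer>scene-color</buffer>   <!-- output of the previous scene pass -->
    <unit>0</unit>
  </binding>
</pass>
```

```glsl
// Illustrative fragment shader; uniform names are invented.
// gforce_norm would be fed from the property tree
// (0.0 = normal vision, 1.0 = full blackout).
uniform sampler2D scene_tex;
uniform float gforce_norm;
varying vec2 tex_coord;

void main() {
    vec3 c = texture2D(scene_tex, tex_coord).rgb;
    // Desaturate towards luminance as g-load rises
    float luma = dot(c, vec3(0.299, 0.587, 0.114));
    c = mix(c, vec3(luma), gforce_norm);
    // Tunnel vision: darken towards the screen edges
    float r = distance(tex_coord, vec2(0.5));
    float vignette = 1.0 - gforce_norm * smoothstep(0.2, 0.7, r);
    gl_FragColor = vec4(c * vignette, 1.0);
}
```

For redout, the same shader could tint towards red instead of desaturating, keyed on a second uniform.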

Last but not least, I see the Compositor and its chaining of buffers as a means to bring back the heat-blur effect as it was implemented in the PLIB versions of FlightGear. IIRC, the heat-blur object was rendered separately with a pixel-displacement shader (based on the previously rendered scene minus the object) on top of the current scene, then z-buffer tested to give the right appearance. I guess the trick is creating the pass, making sure the object is rendered only in that pass, and then z-filtering it before creating the final render.
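The displacement step described above could look roughly like this in GLSL. This is only a sketch: the uniform names are invented (osg_SimulationTime, however, is the standard OSG time uniform), and it assumes the previous pass's color buffer is bound to texture unit 0 while the exhaust geometry alone is drawn in the current pass, depth-tested against the scene's depth buffer.

```glsl
// Hypothetical heat-blur fragment shader; texture/uniform names are invented.
uniform sampler2D scene_tex;        // the already-rendered scene, minus the exhaust object
uniform float osg_SimulationTime;   // standard OSG uniform, seconds since start
varying vec2 tex_coord;

void main() {
    // Animate a small pseudo-random offset to wobble the background
    vec2 wobble = vec2(
        sin(tex_coord.y * 60.0 + osg_SimulationTime * 8.0),
        cos(tex_coord.x * 60.0 + osg_SimulationTime * 6.0)
    ) * 0.004;
    // Sample the scene at the displaced coordinate: the background
    // appears to shimmer wherever the exhaust geometry is drawn
    gl_FragColor = texture2D(scene_tex, tex_coord + wobble);
}
```

Restricting the exhaust object to its own pass would presumably be done with cull masks, as for other per-pass geometry in the Compositor.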

Any pointers on these subjects would be very welcome.

Thanks in advance

Cheers

Enrique
Last edited by Flying toaster on Mon Feb 08, 2021 10:51 am, edited 1 time in total.

Re: Compositor pipeline questions

Postby zakalawe » Mon Feb 08, 2021 9:19 am

I'd recommend asking about these points on the mailing list, especially the targeting pod: Jules is doing some work on secondary cameras (A340 tail camera, etc).

Re: Compositor pipeline questions

Postby Hooray » Sun Mar 14, 2021 9:07 am

Flying toaster wrote:
I see that canvas-driven render-to-texture is in the works, and also that the Compositor is capable of providing the same. I was wondering whether such a camera would be fully configurable in either case (say, driving its position and look-at point from the property tree). Yes, I hung a targeting pod on the F-20, and I am wondering how hard it would be to make it a reality.


Yes, while there's nothing set in stone, that's one of the use cases to be supported (and it basically already is, given the current set of patches and functionality).
We prototyped that several years ago in the form of a dedicated Canvas element that renders a CameraGroup view onto a Canvas RTT/FBO (a Canvas element, actually).

That was based on patches shared by FJJTH/Clement at the time; later on, we took those and reworked the code to use Tim's CameraGroup code and Tom's Canvas system.
Fernando helped prototype those back in 2016.

More recently, Fernando completed the Compositor framework and Jules has been working on CompositeViewer support.
In addition, Jules has been working on a new/improved view manager implementation that supports multiple independent instances/views (unlike the existing one) called "sview" (see the wiki).

Which is to say that you'd basically end up with a design that looks like this:

- a high-level Canvas to represent the texture
- a placement in the cockpit/UI to display the texture
- a top-level root group to serve as the root of the canvas scene graph
- an arbitrary number of child elements to populate that scene graph
- a custom "camera/view" element to describe a scene view (analogous to the current view manager)
- each view would then support its own pipeline using standard FlightGear effects/shaders
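From Nasal, the design above might look something like the following. To be clear, only the plain Canvas calls exist today; the "camera-view" element and its properties are hypothetical, standing in for whatever the final element ends up being called.

```nasal
# Sketch of the proposed setup. The Canvas creation/placement calls are
# the existing Canvas API; the "camera-view" element is hypothetical.
var cam_canvas = canvas.new({
    "name": "TargetingPod",
    "size": [1024, 1024],   # texture resolution
    "view": [1024, 1024],
    "mipmapping": 1
});

# Display the texture on a cockpit object (object name is made up)
cam_canvas.addPlacement({"node": "podScreen"});

# Top-level root group for the canvas scene graph
var root = cam_canvas.createGroup();

# Hypothetical camera/view element, analogous to the current view
# manager, with its position/orientation driven from the property tree
var pod_view = root.createChild("camera-view");
```

Each such view would then get its own Compositor pipeline, so standard effects/shaders (FLIR, night vision, etc.) could be layered per view.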

This would then have all the power and flexibility to enable middleware/fgdata contributors to add arbitrary independent scene views, without having to code dedicated features for each purpose.

including but not limited to:

- tail cams
- gear views
- mirror
- payload view
- RMS shuttle arm
- FLIR/night vision
- synthetic terrain (FG1000)
- ortho maps (terrain views, elevation maps)
- in-sim view management
- etc.

Fernando also mentioned how this setup would make it possible to provide totally separate scene graphs, i.e. to display/transform 3D models - for instance, for the shuttle ADI ball (which, being implemented in Nasal, is a resource hog according to Thorsten).

Admittedly, this means a tad more configuration work for people wanting to set up such views - but that way, the same system can be used for all purposes, and aircraft developers can obviously use copy & paste or PropertyList inheritance/inclusion to reuse XML specs.
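As an illustration of that reuse, the standard PropertyList include mechanism already supports this kind of sharing; the file name and the overridden value below are hypothetical.

```xml
<?xml version="1.0"?>
<!-- Hypothetical reuse of a shared view spec: tailcam-view.xml would hold
     the common view/pipeline settings, and the include attribute is the
     standard PropertyList inclusion mechanism. -->
<PropertyList include="tailcam-view.xml">
  <!-- Override only what differs for this aircraft -->
  <name>A340 tail camera</name>
</PropertyList>
```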

Under the hood, the corresponding Canvas element has its own Compositor instance and a CompositeViewer camera that renders into it.

The current state can be seen here:

https://wiki.flightgear.org/Hackathon_P ... and_Canvas
https://wiki.flightgear.org/CompositeViewer_Support

Thanks & all the best,
Hooray

