Hooray wrote:
www2 wrote (Fri Oct 16, 2015 11:01 pm): Hooray, any idea how long we have to wait for this to be added to Canvas (with custom render options)?
Well, we can already render an arbitrary number of nested canvases (i.e. one canvas rendering another canvas, which is basically the requirement for window-in-window/picture-in-picture (PiP) setups).
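As a rough Nasal sketch of the nested-canvas idea (untested; it assumes the standard canvas.nas helpers such as canvas.new() and getPath(), and the canvas:// image source mechanism - names like "inner"/"outer" are purely illustrative):

```nasal
# Create an inner canvas that renders some content of its own
var inner = canvas.new({
  "name": "inner",
  "size": [512, 512],
  "view": [512, 512],
});
inner.createGroup().createChild("text")
     .setText("Hello from the inner canvas")
     .setTranslation(20, 40);

# Create an outer canvas and embed the inner one as an image element,
# i.e. canvas-in-canvas - the basis for window-in-window/PiP setups
var outer = canvas.new({
  "name": "outer",
  "size": [1024, 1024],
  "view": [1024, 1024],
});
outer.createGroup().createChild("image")
     .set("src", inner.getPath());  # a canvas://by-index/texture[...] path
```

The key point is the last line: an image element can take another canvas as its source, so canvases compose without any extra C++ code.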
Equally, we have code/patches that extend the CanvasElement base class to add support for effects/shaders. As far as I am aware, this hasn't yet been incorporated:
http://wiki.flightgear.org/Canvas_Devel ... 2F_Shaders

As has been said previously, the proper way to support "cameras" via Canvas is using CompositeViewer, which does require re-architecting several parts of FG:
http://wiki.flightgear.org/CompositeViewer_Support

Given the current state of things, that seems at least another 3-4 release cycles away.
So, short of that, the only thing that we can currently support with reasonable effort is "slaved views" (as per $FG_ROOT/Docs/README.multiscreen).
That would not require too much coding, because the code is already there - in fact, CameraGroup.cxx already contains an RTT/FBO (render-to-texture) implementation that renders slaved views to an offscreen context. This is also how Rembrandt's buffers are set up behind the scenes.
So basically, the code is there; it would need to be extracted/generalized and turned into a CanvasElement, and possibly integrated with the existing view manager code.
And then, there also is Zan's newcameras branch, which exposes rendering stages (passes) to XML/property tree space, so that individual stages are made accessible to shaders/effects.
Thus, most of the code exists; it is mainly a matter of integrating things. That would require someone able to build SG/FG from source, familiar with C++, and willing/able to work through some OSG tutorials/docs to make this work:
http://wiki.flightgear.org/Canvas_Devel ... ng_Cameras

On the other hand, Canvas is/was primarily about exposing 2D rendering to fgdata space, so that fgdata developers could develop and maintain 2D-rendering-related features without having to be core developers (core development being an obvious bottleneck, and one with a significant barrier to entry).
In other words, people would need to be convinced to let Canvas evolve beyond the 2D use-case, i.e. by allowing effects/shaders per element, but also by letting cameras be created/controlled easily.
Personally, I do believe that this is a worthwhile thing to aim for, as it would help unify (and simplify) most RTT/FBO handling in SG/FG, and make it available to people like Thorsten, who have a track record of doing really fancy, unprecedented stuff with this kind of flexibility.
Equally, there are tons of use-cases where aircraft/scenery developers may want to set up custom cameras (A380 tail cam, space shuttle) and render those to an offscreen texture (e.g. GUI dialog and/or MFD screen).
It is true that "slaved views" are kinda limited at the moment, but they are also comparatively easy to set up, so I think that supporting slaved camera views via Canvas could be a good way to bootstrap/boost this development and pave the way for CompositeViewer adoption/integration in the future.
However, right now I am not aware of anybody working towards this.
Ironically, this gives a lot of momentum to poweroftwo's osgEarth effort, because it already supports independent viewers/cameras, and it would be pretty straightforward to render an osgEarth camera/map to a Canvas texture and use that elsewhere (GUI dialog, MFD screen, etc.).
However, currently I am inclined to say that Canvas is falling victim to its own success: the way people (early adopters) are using it is hugely problematic and does not scale at all.
So we really need to stop documenting certain APIs and instead provide a single, scalable extension mechanism: registering new features as dedicated Canvas elements implemented in Nasal space and registered with the CanvasGroup helper. Absent that, Canvas contributions are likely to end up in exactly the dilemma we're seeing with most Nasal spaghetti code, which is unmaintainable and begging to be rewritten/ported from scratch.
That is simply because most aircraft developers are only interested in a single use-case (usually their own aircraft/instrument) and don't care about long-term potential and maintenance. As a result, there are now tons of Canvas-based features that would be useful in theory, but which are implemented in a fashion that renders them non-reusable elsewhere:
http://wiki.flightgear.org/Canvas_Devel ... FlightGear

So at the moment, I am not too thrilled to add many new features to Canvas until this is solved, because we're seeing so much Nasal/Canvas code that is simply a dead end due to the way it is structured, i.e. it won't be able to benefit from future optimizations short of a major rewrite or tons of 1:1 support by people familiar with the Canvas system. Which is why I am convinced that we need to stop implementing useful functionality using the existing approach, and instead adopt one that is CanvasElement-centric, where useful instruments, widgets and MFDs would be registered as custom elements implemented in Nasal space (via cppbind sub-classing).
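To illustrate the element-centric idea, here is a purely hypothetical Nasal sketch - neither canvas.register_element() nor the "airport-chart" element exists; the point is only the shape of the API, where a reusable feature hides behind a custom element instead of living as free-standing instrument code:

```nasal
# Hypothetical: a reusable MFD page implemented as a custom Canvas element,
# rather than as copy/pasted per-aircraft instrument code.
var AirportChart = {
  new: func(group, id) {
    var m = { parents: [AirportChart] };
    m._root = group.createChild("group", id);
    # ... set up child elements (runways, taxiways, labels) here ...
    return m;
  },
  setAirport: func(icao) {
    # update the child elements from airport data;
    # callers never need to touch the internals
  },
};

# Hypothetical registration hook (the "CanvasGroup helper" mentioned above):
# would make the element available to any aircraft/GUI dialog via the
# normal createChild() mechanism.
canvas.register_element("airport-chart", AirportChart);

# Hypothetical usage from any cockpit/dialog:
# var chart = some_group.createChild("airport-chart", "my-chart");
# chart.setAirport("KSFO");
```

With something like this, optimizations inside the element benefit every user of it, instead of each aircraft carrying its own fork of the logic.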
If we don't do that, we will continue to see cool Canvas features implemented as spaghetti-code monsters that reflect badly upon Nasal and Canvas due to a lack of design and poor performance.