
Canvas remote drawing

Canvas is FlightGear's new fully scriptable 2D drawing system that will allow you to easily create new instruments, HUDs and even GUI dialogs and custom GUI widgets, without having to write C++ code and without having to rebuild FlightGear.

Canvas remote drawing

Postby ThomasS » Mon Oct 10, 2016 1:30 pm

Hello,

I'm currently integrating FlightGear with the "Soitanen 737" into my home cockpit and met - like many others before - the challenge of displaying instruments (PFD, ND, EICAS) externally from FG. I scanned the wiki and forum for related information, and my summarized conclusions are:
    * fgpanel is to some degree outdated and therefore not an option
    * Canvas is THE future way of displaying instruments for ALL aircraft, even though many instruments are still coded in the FG C++ core or in the aircraft itself
    * Phi shows a way of displaying a PFD in a browser or on any device running JavaScript/HTML5
    * There are ongoing discussions about translating parts of the Nasal code to JavaScript; if I understand this correctly, it is e.g. for the purpose of extending Phi to also display more complex instruments like the ND.
However, I couldn't find any information on whether you have already discussed the option of adding a kind of adapter/plugin/switch to the Nasal Canvas implementation (namely api.nas) that delegates the raw drawing instructions (group handling, transforms, drawing, etc.) not only to the FG core but in addition (or instead) to a receiver outside FG, e.g. via HTTP requests or UDP network publishes similar to those used by the generic protocol. I understand that such a mechanism is performance-critical and should in any case be one-way communication, without requiring FG to wait for any return values (which shouldn't be necessary for drawing anyway).
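To illustrate what I mean, here is a purely hypothetical Nasal sketch (no such hook exists in api.nas today; publish() and the property path are made up) that mirrors each drawing instruction into a property which a one-way output channel could then forward:

Code:
# Hypothetical sketch only - api.nas has no such hook today.
var publish = func(instruction) {
    # a generic-protocol output (or HTTP publisher) could watch this property
    setprop("/sim/remote-canvas/instruction", instruction);
}

# illustrative monkey-patch of an existing Canvas API method:
var orig_createChild = canvas.Group.createChild;
canvas.Group.createChild = func(type, id = nil) {
    publish("createChild " ~ type);
    return call(orig_createChild, [type, id], me);
}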

Did you already discuss this option? Are there any deal breakers I'm not seeing?

Many thanks and Regards
Thomas

Re: Canvas remote drawing

Postby Hooray » Mon Oct 10, 2016 6:13 pm

Hi & welcome,

The short answer is that fgpanel is kind of outdated these days (but still functional); however, it is restricted to "steam cockpit" aircraft, i.e. those using legacy cockpits without MFDs like an ND, PFD or EICAS - so it is generally not suitable for anything involving modern avionics.

Phi is the most recent and most functional setup for providing remote avionics; however, it has all the pros & cons of a browser-based setup.

The JavaScript thing you found is about refactoring existing Canvas MFDs so that the corresponding Nasal code becomes a valid subset of JavaScript (and vice versa). That's just an idea we ended up discussing behind the scenes when Torsten began his Phi/MFD (browser-based) work, because he began duplicating functionality that already existed elsewhere. The main reason is that a Canvas is just a property tree, and most elements are just textures or SVG images. In other words, it would not be that far-fetched to come up with a really tiny wrapper/subset of both Nasal and JavaScript that can be processed by both - part of this could actually be machine-generated, i.e. dynamically compiling a simple DSL subset into whatever target platform is desired.
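To illustrate the "a Canvas is just a property tree" point, here is standard Canvas API usage - everything it does ends up as plain properties that any client able to read the property tree could replicate:

Code:
var my_canvas = canvas.new({
    "name": "demo",
    "size": [1024, 1024],   # texture size
    "view": [1024, 1024],   # viewport size
    "mipmapping": 1,
});
my_canvas.createGroup()
    .createChild("text")
    .setText("Hello, Canvas")
    .setTranslation(100, 100);
# The same state is now visible under /canvas/by-index/texture[N]/...
# (the text string, transforms, etc.) as ordinary properties.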

The main thing this would require is what Torsten has already implemented: APIs for setting/getting properties and running fgcommands, in conjunction with serving fgdata resources like textures and SVG files: http://wiki.flightgear.org/Canvas_Nasal ... ipt_Subset

Another, hybrid approach (much less work, but also much less efficient - think latencies) would be taking Torsten's existing work and reworking the screen-capturing code so that the same code can be used per Canvas. At that point you could literally "stream" an actual Canvas texture as an image/video to another process (e.g. via HTTP), and it would also be possible to map actions/events back to the client via JavaScript. That would not be exactly elegant, but rather simple to do given the code we have - though it is only really an option in a cabled LAN (>= 1 GBit) with good networking.

It's basically all about allowing a Canvas to be serialized on demand and sent via Torsten's mongoose handlers - there is quite a bit of existing code that could be leveraged/reworked to make this work, probably within 2-3 days of spare time hacking if you have a little experience with C++.

Some of the ideas (potential approaches) are summarized below:

http://wiki.flightgear.org/Canvas_Troub ... r_image.29
http://wiki.flightgear.org/Canvas_Devel ... ter_Vision
http://wiki.flightgear.org/Howto:Use_a_ ... Instrument

The other thing you didn't mention yet is the whole "FGCanvas" idea - which is basically fgpanel reinvented with Canvas/MFD support, i.e. multiple fgfs instances can be interlinked, with standalone instances used just for showing a Canvas created by another instance. That is actually partially working already, simply by manually copying a Canvas property tree to another instance and displaying the corresponding instrument/MFD in a fullscreen dialog:

http://wiki.flightgear.org/FGCanvas
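For reference, the manual version of this mirroring looks roughly like the following Nasal sketch - the canvas:// path scheme lets an image element reference another canvas; the texture index and sizing below are placeholders:

Code:
# rough sketch: show an existing canvas (e.g. the ND) inside a dialog window
var dlg = canvas.Window.new([512, 512], "dialog");
var cv  = dlg.createCanvas().set("background", canvas.style.getColor("bg_color"));
cv.createGroup().createChild("image")
    .setFile("canvas://by-index/texture[0]")  # index 0 is just a placeholder
    .setSize(512, 512);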

This would also benefit from certain changes to FlightGear's init code, detailed at: http://wiki.flightgear.org/Initializing_Nasal_early


To sum it up: if you don't have any coding experience, use Phi. If you are familiar with Nasal/JavaScript only, consider looking at the Nasal/JavaScript subset idea. If you are familiar with C++/OSG, you could also consider adapting Torsten's mongoose work to provide a new "service" that serializes/streams an arbitrary Canvas to a texture that you can "watch" via a browser.

If, however, you are interested in the full thing, you will probably want to read up on the original "FGCanvas" idea (you would need to know a bit about git, C++ and how to patch/rebuild SG/FG, but we do have quite a bit of existing code doing parts of this in various places).

Re: Canvas remote drawing

Postby ThomasS » Tue Oct 11, 2016 6:48 am

Thank you for your detailed comments. I think I'll give the Canvas-capture-stream approach a try.

The Nasal/JavaScript/DSL idea looks quite abstract to me at the moment. I understand it as far as the PFD is concerned, as Torsten did in Phi, but using it for the ND appears much more extensive. As for FGCanvas, I'm afraid my FlightGear/C++ skills won't be sufficient for that approach.

Re: Canvas remote drawing

Postby Hooray » Tue Oct 11, 2016 8:43 pm

Patching FG to stream a Canvas camera via mongoose's screenshot handler will definitely require C++ changes, too.

The Nasal/JavaScript idea may sound a bit abstract - but a Canvas is, after all, just a representation of an SVG, i.e. everything that a Canvas represents could also be represented as a single SVG file with raster images, other vector images and embedded code.

I am familiar with the ND code, and making it work in JavaScript in the form of a bunch of animated SVGs would be possible and should not require much in terms of C++ changes.

For instance, imagine taking one of the SVG/raster-image examples from the wiki and then rewriting the code to be valid JavaScript, refactoring language specifics - e.g. function signatures:

  • JavaScript: function foo() {}
  • Nasal: var foo = func() {}

This kind of thing can be implemented using a transformation table (i.e. a hash) to encode language specifics in a markup.
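For illustration, such a table might look like this in Nasal (just a sketch - the table entries and helper names are made up):

Code:
# sketch: a transformation table mapping language-specific constructs
var SYNTAX = {
    "nasal":      { "func_decl": "var %s = func(%s) {" },
    "javascript": { "func_decl": "function %s(%s) {" },
};

var emit_func_decl = func(lang, name, args) {
    return sprintf(SYNTAX[lang]["func_decl"], name, args);
}

print(emit_func_decl("nasal", "foo", ""));      # var foo = func() {
print(emit_func_decl("javascript", "foo", "")); # function foo() {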

Re: Canvas remote drawing

Postby ThomasS » Fri Oct 14, 2016 9:04 am

Roughly, what I did is:

    * add a DrawCallback to the canvas camera
    * attach an image to the canvas camera
    * duplicate Torsten's http ScreenshotUriHandler into a CanvasImageUriHandler that subscribes to the callback in the canvas camera

This works so far, and I can grab ND images from my browser via HTTP, like a screenshot. And as you mentioned before, there are latencies and it isn't efficient: just creating the PNG image from OSG takes up to 100 ms, and an uncompressed bitmap format like TIFF results in 3 MB image sizes. Assuming a frame rate of 5 will be sufficient for displaying smooth instruments (which I doubt), this results in 15 image creations per second (5 each for the captain's PFD, ND and EICAS) - i.e. around 45 MB/s uncompressed.
However, it is working, and I will use this approach for my first setup and for some long-running tests checking for memory leaks and other problems. Probably some fine-tuning will be required.

In the meantime I keep thinking about an external canvas drawing solution. Maybe I'll pursue the Nasal/JavaScript approach, though my personal favorite is a Nasal/generic approach which would allow implementing the drawing in any environment, be it JavaScript, Python, Java and so on.

Re: Canvas remote drawing

Postby Hooray » Fri Oct 14, 2016 5:55 pm

Hey, thanks for keeping us posted!


That sounds good to me. Like I said, I wouldn't expect any kind of impressive performance with that sort of approach - however, it should actually work "well enough" on the same machine/network. You may still want to look into using some of OSG's more advanced threading options to make sure that the capturing part doesn't unnecessarily affect other things - normally, you could reserve a fixed amount of RAM and let OSG handle this by copying the frame.

Apart from that, my suggestion would still be to put up your code so that others can use it. As you have certainly seen, this is a recurring/popular feature request, so having a public git repository that contains the corresponding patches would be cool. Alternatively, please consider adding your patches to the wiki (if in doubt, just add them to the corresponding Canvas-related articles I linked a few days ago).

Technically, a more "proper" way would be wrapping your code in a so-called "Canvas placement" (see the wiki section on placements). In this case we'd have a virtual placement - in the sense that the main window doesn't necessarily display it - which exposes a Canvas via the built-in httpd/mongoose server.

Note that this would also mean that _any_ Canvas could be streamed using your approach, just by adding a corresponding "placement" (via Nasal).
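As a sketch of how that could look from Nasal - note that addPlacement() is real, but a virtual "httpd" placement type does not exist yet; it is exactly the part that would need to be implemented:

Code:
# hypothetical: register a virtual placement so the httpd can stream this canvas
my_canvas.addPlacement({
    "type": "httpd",   # made-up placement type, would need C++ support
    "name": "ND1",     # name under which the texture would be served
});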

No matter what you decide, it would be cool if you could share your code so that others can take a look.

Then again, grabbing/capturing and even streaming live imagery from OSG is a very common thing in OSG land, and frequently discussed on the osg-users list - I think there are also a handful of examples of doing this "properly", so you may want to review those, or even consider posting there to get additional feedback (best practices).

I think for a proof-of-concept this should actually suffice. The Nasal/JavaScript idea would involve a bit of Nasal/JavaScript metaprogramming, so you would need to be fairly familiar with both languages - but conceptually, a Canvas can be represented as an SVG (and vice versa), i.e. internally, SVG could be used as the representation format, and it could reference a bunch of dynamic contents by fetching resources from fgfs via httpd.

This sort of thing would obviously be browser-based, but you could benefit from Torsten's groundwork. Anything fgpanel/FGCanvas-like will inevitably require OpenGL hardware and hardware acceleration - i.e. the whole FGCanvas thing was originally about running a restricted subset of fgfs with most subsystems disabled and merely the Canvas-related stuff running, plus some networking to sync/replicate the corresponding properties via some form of IPC (probably networking, for distributed setups).

Note that this wouldn't be much about OSG/OpenGL hacking, but more about general FG hacking, i.e. about making stuff optional/better configurable so it can be kept disabled - thus, it's less confined work, but definitely not as involved as setting up a new OSG camera to capture/stream a screenshot.

If your main focus is implementing something generic, without it being browser-based and without requiring OpenGL/OSG, you would probably want to use a format that is well supported by the corresponding technology stacks - e.g. by treating each top-level Canvas as a virtual SVG file. We once talked about that (years ago): basically, we could provide each Canvas::Element with a serialize() method that turns the element into a valid serialized form, such as turning Canvas/OSG text nodes into SVG <text> nodes, Canvas::Image (raster images) into <image>, etc.

That sort of work would be relatively straightforward, and it would allow for serving an animated SVG file (e.g. representing an MFD, HUD or even a GUI dialog) - any dynamic elements would obviously need to be fetched from the fgfs instance, e.g. via Torsten's mongoose work or some other form of IPC.

We would basically need to add an abstract CanvasSerializer and implement it for SVG; animations would ideally be wrapped in a helper element. At that point, even dynamic/changing elements could be serialized by using a subset of SMIL/JavaScript (which is valid SVG).
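The real proposal is a C++ serialize() method per element, but the gist can be sketched in Nasal by walking a canvas property subtree and mapping element names to SVG tags (a toy version - attribute/transform handling is omitted, and the path is a placeholder):

Code:
# toy sketch of the CanvasSerializer idea
var TAG_MAP = { "group": "g", "text": "text", "path": "path", "image": "image" };

var serialize = func(node) {
    var tag = contains(TAG_MAP, node.getName()) ? TAG_MAP[node.getName()] : "g";
    var out = "<" ~ tag ~ ">";
    foreach (var child; node.getChildren())
        out ~= serialize(child);
    return out ~ "</" ~ tag ~ ">";
}

print(serialize(props.globals.getNode("/canvas/by-index/texture[0]", 0)));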

I cannot currently find the original topic, but we talked about this 2-3 years ago - the main motivation being that this would open up all sorts of possibilities, such as using Inkscape (or any other SVG editor) to fully create/animate and maintain sophisticated avionics, without having to do much/any Nasal coding.

Some of the more relevant pointers in the wiki seem to be:

http://wiki.flightgear.org/Howto:Using_ ... erator_(IG)
http://wiki.flightgear.org/Slaving_for_Dummies
http://wiki.flightgear.org/Canvas_Devel ... torming.29
http://wiki.flightgear.org/Canvas_Prope ... ave_setups
http://wiki.flightgear.org/Canvas_Prope ... er_or_file
http://wiki.flightgear.org/Canvas_Devel ... ialization
http://wiki.flightgear.org/Canvas_Sandbox
http://wiki.flightgear.org/Canvas#Using ... G_projects

(Note that none of this is set in stone, obviously - these are just a bunch of related ideas we've had, and a few related code snippets - but maybe this helps you form an informed opinion and determine your goals/priorities given the approaches/ideas we discussed.)

Please be sure to keep us posted, and please also feel invited to update the wiki accordingly !

Re: Canvas remote drawing

Postby Hooray » Sat Oct 15, 2016 10:42 am

Also note that the whole FGCanvas idea is primarily about taking some sort of existing MFD (e.g. PFD or ND) and rendering it in a fullscreen view using a dialog, e.g. along the lines of the following screenshot:

[Screenshot: an existing MFD rendered fullscreen in a dialog]

At that point, you'd obviously have to run additional fgfs instances (with unneeded stuff disabled), with key data/properties synced to the main fgfs instance. This could, for example, be accomplished by providing a property abstraction that remotely fetches the corresponding properties from the master instance, e.g. via Nasal's http module (for prototyping), possibly augmenting/replacing the whole scheme with WebSockets or a more efficient binary version (e.g. the XDR component used for encoding properties). This would be relatively low-level (property-level syncing), but should be much more efficient than actual live streaming.
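A prototype of that property abstraction could look roughly like this in Nasal - assuming the master instance runs its built-in httpd (--httpd=8080); the host name and the json_decode() helper are hypothetical, and a real version would need an actual JSON parser:

Code:
# rough prototyping sketch: poll a property from the master instance
var MASTER = "http://master-host:8080";

var sync_prop = func(path) {
    http.load(MASTER ~ "/json" ~ path).done(func(r) {
        var n = json_decode(r.response);  # hypothetical JSON helper
        setprop(path, n.value);
    });
}

# replicate the master's position at ~10 Hz:
var t = maketimer(0.1, func { sync_prop("/position/latitude-deg"); });
t.start();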

In the mid-term, it would be much more efficient to actually register MFDs as Canvas elements, and only sync "events" across instances, for all the reasons discussed at: http://wiki.flightgear.org/Canvas_Devel ... FlightGear

Re: Canvas remote drawing

Postby ThomasS » Thu Oct 20, 2016 7:44 am

Hello Hooray,

thanks again for providing all this detailed information. It gives me many approaches to think about for a more sophisticated solution.

In the meantime, I consider my simple HTTP solution ready to be shared. My idea is to:

    * build a zip archive containing the modified files and a git patch
    * create a new wiki page, located in the "Related" section of http://wiki.flightgear.org/Canvas, that describes this feature

One question about formalities: I copied and modified a file that contains a copyright note like "(c)opyright by ..." in addition to the GPL header. On the one hand, I think I should keep this note, because the modified file still contains the work of the original author. On the other hand, I think I should remove it, because the original author is not responsible for my modifications to the new file. What is best practice in these cases? I tend to use a phrase like "Derived from original file (Copyright (C) 2014 ...)".

Best Regards
Thomas

Re: Canvas remote drawing

Postby Hooray » Thu Oct 20, 2016 8:02 pm

We've seen dozens of such patches posted in the form of URLs to ZIP archives/tarballs; sooner or later (as in 5+ years later), most of these have disappeared. Thus, it is generally a good idea to really use a public git repository (think SourceForge clone), or at least post your patches via the wiki.

Regarding proper attribution of your work, I would suggest looking at other files that have gone through several iterations - usually, there are several lines at the top in the form of:

Code:
1995 Originally written by Curtis Olson
1997 major rewrite by David Megginson
2005 added feature X by Erik Hofman
2009 implemented effects/shader support, Tim Moore


There are quite a few files/components whose headers look like that. If it's an entirely new file based on another one, you could simply add a note saying "based on foo.cxx written by John Doe in 1997".

Re: Canvas remote drawing

Postby Hooray » Fri Oct 21, 2016 8:05 pm

I've just taken a look at your patches - thanks for providing these, and thank you also for adding a corresponding article to the wiki.
I think the code is looking pretty good - however, with a bit of cleanup we may be able to actually get this reviewed/committed. It may not be particularly efficient/perfect for your current use-case, but it could be the groundwork for better integrating Phi and Canvas with each other - basically, this makes _any_ Canvas feature available to Phi.

Thus, my suggestion would be to review how so-called "placements" work in the Canvas code - you could then add "virtual" http placements, which you register with the httpd handler to make certain Canvas textures available on demand by adding a corresponding httpd/mongoose "placement".

In turn, this would also make it possible to not just expose a Canvas "per index", but to directly register a corresponding URI handler - such as "ND1" or "PFD1" - which is transparently mapped to the correct index behind the scenes.
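The mapping itself would be trivial - conceptually something like this (the names and indices are placeholders):

Code:
# illustrative alias table: friendly names to canvas indices
var aliases = { "ND1": 3, "PFD1": 2 };

var canvas_path = func(name) {
    return "/canvas/by-index/texture[" ~ aliases[name] ~ "]";
}

print(canvas_path("ND1"));   # prints /canvas/by-index/texture[3]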

Note that TorstenD would be the best person for anything involving httpd/mongoose related patches, and TheTom the best person to get the Canvas side of this reviewed/committed.

In the meantime, my suggestion would be to add the whole thing as a unified diff to the wiki and document your journey that way.

Re: Canvas remote drawing

Postby Hooray » Sat Oct 22, 2016 2:32 pm

ThomasS wrote in Fri Oct 14, 2016 9:04 am: This works so far, and I can grab ND images from my browser via HTTP, like a screenshot. And as you mentioned before, there are latencies and it isn't efficient: just creating the PNG image from OSG takes up to 100 ms, and an uncompressed bitmap format like TIFF results in 3 MB image sizes. Assuming a frame rate of 5 will be sufficient for displaying smooth instruments (which I doubt), this results in 15 image creations per second (5 each for the captain's PFD, ND and EICAS).
However, it is working, and I will use this approach for my first setup and for some long-running tests checking for memory leaks and other problems. Probably some fine-tuning will be required.

In the meantime I keep thinking about an external canvas drawing solution. Maybe I'll pursue the Nasal/JavaScript approach, though my personal favorite is a Nasal/generic approach which would allow implementing the drawing in any environment, be it JavaScript, Python, Java and so on.


To be honest, looking at the code that you have shared with us so far, I would not necessarily discard the whole approach - it may not be perfect right now, but please keep in mind that you came up with this within just a few days, with very few pointers, and it is already working correctly.

For instance, let's assume we can find a way to provide much better performance - at that point, you can render a fully functional MFD without much work. In other words, you really have a working proof-of-concept already: you understand how to retrieve the texture from the Canvas manager and turn it into an image, and you also understand how to adapt the built-in httpd server to stream the whole thing.

The lowest-hanging fruit for making this work much better would be reviewing the OpenSceneGraph video/streaming examples, particularly osgmovie.cpp, and some more streaming-related resources - for example, ffmpeg/gstreamer examples are often shared by people on the osg-users list.

Thus, once you review your current approach and adapt it to work with ffmpeg, you can rework the whole thing to use a delta-encoding approach, which means much lower bandwidth requirements and much lower latencies. RTSP can work in terms of "initial frames" and delta-encoded "change frames", where the changes are added on top of the reference frame, so the encoding becomes really compact - without you having to do much beyond specifying the protocol/compression you'd like to use:


http://bensoftware.com/blog/comparison- ... g-formats/
Which format is best?
When making decisions about which video compression format to use (JPEG vs. MPEG-4 vs. H.264), it is important to bear in mind that one is not necessarily “better” than the others; they all have their advantages and disadvantages.

MPEG-4 and H.264 differ significantly from JPEG in that they are both temporally compressed formats; that is, the video sequence comprises one I-frame (key frame), which encodes one entire image, followed by multiple P-frames (delta frames), which encode only changes in the image since the previous frame. This strategy results in a much lower data rate compared with JPEG, especially for video surveillance footage where the majority of the image often remains the same. The more P-frames that exist between the I-frames (known as the I-frame rate, key frame rate, or GOV length), the lower the data rate will be.


[...] At all points where the quality of the JPEG video compared to the MPEG-4/H.264 video is perceived to be the equivalent, the data rate of the MPEG-4/H.264 video will be much lower than the JPEG video. As a (very) rough rule of thumb, the data rate of MPEG-4 is around a fifth that of JPEG video, and the data rate of H.264 is around half that of MPEG-4, at equivalent perceived quality.

H.264 achieves this extra saving by employing B-frames in addition to P-frames, which depend not only on the previous image in the sequence, but on the next image. As you can imagine, this increase in complexity has costs in terms of processing power required to encode and decode the data.


If you can get this working, the main bottleneck would be our main loop, i.e. how fast we can capture/sample frames from a background thread without blocking the main loop (serialization overhead) - but OSG can be pretty aggressive when it comes to multi-threading (especially in a codebase that uses CompositeViewer to set up independent scenegraphs), as long as the proper coding patterns are used by the underlying code.

So, I would definitely recommend exploring this a little further, because it is such a straightforward mechanism to simply treat a Canvas as an image source and stream the whole thing as live video to another process/computer - and you would even keep all the benefits of the Phi/browser-based approach, because RTSP can also be streamed to a browser, so you would not need other machines to run copies of fgfs.

Let's also keep in mind that a typical MFD is a rather static display compared to a movie - i.e. the degree of compression needed, and the delta-encoding needed to express changes in the display, are actually very minor.

Many of the other approaches would be much more work in comparison, while opening up a new can of worms.

If you decide to pursue this, I would recommend actually starting to use your SourceForge project by cloning the corresponding fg/sg repositories, while making your changes optional using a corresponding cmake switch: http://wiki.flightgear.org/Developing_u ... l_Features

The last step is highly recommended because ffmpeg (or gstreamer) is not currently required to build sg/fg, but it would be useful for your kind of work - and a number of core developers have actually suggested using ffmpeg for similar projects, so it may not be that far-fetched to actually provide this as a build-time (or even startup) option at some point; e.g. see Curt's most recent posting mentioning ffmpeg:

Export live render footage to train CV AI autopilot
curt wrote: I have had some success with using ffmpeg to capture a portion of my screen (live) and format it as a web based video stream and then receive the video stream with an opencv program framegrabbing and processing.

Also, maybe you want to train with real video if there was a way to accurately connect the video with the flight data log? :-)

Re: Canvas remote drawing

Postby Hooray » Sat Oct 22, 2016 8:43 pm

Okay, here's a screenshot showing what the posted patches actually accomplish:

Note that the ND is rendered by FlightGear and turned into a screenshot, which is then served via Torsten's mongoose work to the Firefox browser shown on the right:

http://wiki.flightgear.org/Read_canvas_image_by_HTTP
[Screenshot: the ND rendered by FlightGear, served via HTTP to a Firefox browser]

Re: Canvas remote drawing

Postby ThomasS » Tue Oct 25, 2016 9:19 am

Thank you for your help in creating the wiki article. I think we can consider it finished now. There is just one fix I still have to add (the nasty sleep(15) call doesn't build on Win32).

You sound very enthusiastic about the opportunities for improving this solution with something like ffmpeg. Even though I totally agree with you, I have to say I won't be the one doing it - I reached my C++/OSG limits creating this patch. But I will keep thinking about the other ideas for canvas rendering already mentioned above.

Re: Canvas remote drawing

Postby Hooray » Tue Oct 25, 2016 6:14 pm

ThomasS wrote in Tue Oct 25, 2016 9:19 am: You sound very enthusiastic about the opportunities for improving this solution with something like ffmpeg. Even though I totally agree with you, I have to say I won't be the one doing it - I reached my C++/OSG limits creating this patch. But I will keep thinking about the other ideas for canvas rendering already mentioned above.



Actually, your approach and your work may be more complete than you think - see Torsten's original comments below (admittedly, I'm not sure whether this is properly documented anywhere):
https://sourceforge.net/p/flightgear/ma ... /32889510/
Torsten wrote: http://localhost:8080/screenshot?window=WindowA&stream=y
same as before, but not just send a single image but a motion-jpeg
encoded video stream.

Can be used by ffmpeg to directly encode various video formats.
try ffplay -f mjpeg http://localhost:8080/screenshot/stream=y

Compression level for PNG is hardcoded to 9 (highest) and JPEG_QUALITY
hardcoded to 80.
These seem to be a good balance of performance vs. quality.



https://sourceforge.net/p/flightgear/ma ... /34534457/
Torsten wrote: The problem is not the decoder but the encoder. I don't have a fast-enough
real-time video encoder that lives happily in the FG main loop. I have
experimented with ffmpeg which was promising, but it ended up on the very
bottom of my backlog :-/

We can do MJPEG stream, try to use /screenshot?stream=y as the screenshot
url. MJPEG is ugly and a resource hog but works reasonable well for image
sizes of probably 640x480. Scale down your FG window and give it a try.


With that being said, my suggestion would be to give this a try. Besides, the original code was written with the intention of taking screenshots (including full scenery); a typical MFD does not need the full resolution or color depth. I would actually suggest tinkering with different settings and making this (the resolution of the created osg::Image) configurable at the OSG/property-tree or request level, so that you can tell OSG to create a down-sampled image - e.g. 256x256 should usually do; we don't need to create, let alone serve, a full 1024x1024 32 bpp image. As a matter of fact, those ND images are 2048x2048 when served, and your JavaScript merely changes the resolution client-side. Doing this on the server (fgfs) side and making these settings configurable will have a massive impact on performance, and adding the push code (streaming) will make your JavaScript code unnecessary.

In other words, I don't think you will find MJPEG to be too bad for this particular use-case, especially not with a bit of tweaking (different resolution, color depth and update semantics - i.e. only ever serve, say, frames 5 and 20 in a second, which would give you an approximately 2 Hz refresh rate).

For testing purposes, my suggestion would be to open a dialog containing a Canvas (e.g. the map-canvas or the new canvas-nd dialog), resize the window to the dialog's dimensions and then start streaming screenshots using Torsten's existing code - that should work really well (it does for me). The next step would be adapting your canvasimage handler to also support streaming and to use a lower default resolution/color depth - note that the performance you'll be seeing should actually be much better, because the PUI overhead (legacy GUI) does not show up in a pure Canvas.

[Screenshot: a Canvas dialog streamed live to a browser]

(FYI: when I am streaming locally to a Firefox instance at ~500x500 px (full color depth), I am still getting ~20 fps in fgfs and ~5 Hz in Firefox; I can even fly the airplane or use the GUI via Firefox.)

Anyway, I would really suggest making those captured images much smaller, using a lower color depth and not capturing every frame - e.g. 1-3 frames per second should be sufficient; if in doubt, make this configurable via a property, too. Like I said last week, it would make sense to consider turning your code into an actual Canvas placement to make these things configurable.
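For instance, the handler could read its settings from properties along these lines - the property names here are made up; the point is only that they would be runtime-configurable:

Code:
# hypothetical tuning knobs for the canvas-image handler
setprop("/sim/http/canvas-image/width",  256);  # down-sampled capture size
setprop("/sim/http/canvas-image/height", 256);
setprop("/sim/http/canvas-image/fps",    2);    # ~2 Hz is plenty for a mostly static MFD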
None of this is rocket science - in fact, you have already completed the difficult work. What follows now is fine-tuning, which is frankly much less complicated than adapting the screenshot handler code to serve Canvas textures with proper threading synchronization ;-)

Overall, there are several options to develop this further and improve the whole thing - but for now, the MJPEG streaming solution is probably the lowest-hanging fruit, especially in conjunction with reducing the image size and color depth to something sane.

Re: Canvas remote drawing

Postby Hooray » Sat Oct 29, 2016 10:04 pm

Referring to my previous posting: I looked at your patch, and it seems you explicitly removed streaming support from Torsten's original implementation? Was there any particular reason for doing that?

Specifically, the lines where the boolean stream flag is passed around are relevant, i.e. the whole subscribe/unsubscribe logic isn't set up properly if this is missing, so streaming no longer works, because the image is not updated subsequently.

It seems you were halfway there but then decided to simply keep this disabled/removed? Or perhaps you weren't aware of the streaming functionality, judging from your comments:

Code:
/* unknown purpose
+            if (canvasimageRequest->isStream()) {
+                canvasimageRequest->requestCanvasImage();
+                // continue until user closes connection
+                return false;
+            }
+            */



I've partially restored the streaming functionality by referring to Torsten's original implementation, and I have live streaming of an ND/Canvas working again - obviously, that needs proper synchronization, i.e. code handling shutdown of the Canvas and/or the connection, because a bunch of threads may be involved when this is used.

Note that this also fixes the bug you previously mentioned, i.e. that the image had to be requested once before a second request would succeed - because you removed the subscribe() and requestCanvasimage() logic that would otherwise handle this for the first request already.

[Screenshot: live ND/Canvas streaming restored in a browser]
