
Read pixel rgb values

Canvas is FlightGear's new fully scriptable 2D drawing system that will allow you to easily create new instruments, HUDs and even GUI dialogs and custom GUI widgets, without having to write C++ code and without having to rebuild FlightGear.

Read pixel rgb values

Postby Alant » Sat Nov 23, 2019 10:32 pm

I have a moving map, based on slippy map.
Is there any way to read the rgb value of a pixel on this map?
My application is to scan a line of pixels to port and starboard of the aircraft position and then process and use this data to generate something that looks like a sideways-looking radar return.
My alternative solution is to use nasal geodinfo() to get altitude and terrain type, but unless I scan at very high detail I fear that roads and railways will not be detected this way. This technique works well for my forward looking radar which is used for terrain following. (Note: I will have to use this method to generate the intervisibility data - there is not much sideways visibility when terrain following in a valley.)
Thanks for any ideas.
Posts: 1032
Joined: Wed Jun 23, 2010 5:58 am
Location: Portugal
Callsign: Tarnish99
Version: from Git
OS: Windows 10

Re: Read pixel rgb values

Postby Thorsten » Sun Nov 24, 2019 7:12 am

I remember that the ability to set a pixel in a raster image was added to canvas not so long ago - so it might be that a getter function was implemented at the same time. But unless it's documented in the canvas API, I wouldn't know what it's called either.
Posts: 11991
Joined: Mon Nov 02, 2009 8:33 am

Re: Read pixel rgb values

Postby Hooray » Sun Nov 24, 2019 2:04 pm

As far as I remember, this was just a discussion and a requested feature that someone (James?) put on his todo list ...
I exchanged a number of messages with other users about such a feature, because they wanted to implement some heightmap/wxradar-like mechanism using Canvas, and hit a roadblock with regard to performance.

Anyway, if this has been implemented (which I doubt), it would show up in the simgear/simgear/canvas, flightgear/src/canvas or flightgear/src/Nasal commit logs
Please don't send support requests by PM, instead post your questions on the forum so that all users can contribute and benefit
Thanks & all the best,
Help write next month's newsletter !
pui2canvas | MapStructure | Canvas Development | Programming resources
Posts: 12157
Joined: Tue Mar 25, 2008 8:40 am

Re: Read pixel rgb values

Postby Alant » Sun Nov 24, 2019 3:23 pm

As yet I have not found it - which does not mean that it is not there. I did a quick search in the simgear source and the commit logs.

However, searching has led me to one possible solution.

The other solution looks even more interesting, but I do not believe that the C++ code was ever committed.

When I do get the RGB pixel values, I propose to re-map the colors to monochrome. Blue (water) gives no radar return with this type of radar. I will need to experiment with the other map features, and provide a texture for the background paper.
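For the re-mapping step, something along these lines might work (a rough Python sketch of the idea; the blue/water thresholds are made-up placeholders to be tuned against the actual map tiles, and the real thing would be written in Nasal):

```python
def radar_return(r, g, b):
    """Map one RGB map pixel to a monochrome radar-return intensity (0..1).

    Assumption: predominantly blue pixels are water and give no return;
    everything else returns its brightness (Rec. 601 luma) as intensity.
    The thresholds are illustrative placeholders, not tuned values.
    """
    if b > 128 and b > r + 32 and b > g + 32:  # "mostly blue" -> water
        return 0.0
    return (0.299 * r + 0.587 * g + 0.114 * b) / 255.0
```

Scanning the pixel line to port and starboard would then just map each sampled pixel through this function before drawing the return.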

Thanks for the interest.


Canvas WXR layer

Postby Hooray » Sun Nov 24, 2019 3:31 pm

I am posting some of my responses that I previously exchanged with Necolatis in the context of procedurally-created/updated raster images, my responses were made with performance in mind to create a "pixel-based" WXR-layer primarily in scripting space:

Hooray wrote:Referring to: ... /36482690/

I don't think this is such a good idea - actually, the idea is a good one (creating textures at runtime procedurally from Nasal space), but using Nasal directly is probably not what most people will want.

In other words, it should suffice to expose the existing effects/shader framework to scripting space, so that a Nasal "blob" can be used to dynamically compile effects/shaders "on the fly" and then use that to populate a texture at runtime.

This would give you all the flexibility that you need, without adding any Nasal related overhead - because we already have the effect/shader system that can dynamically create/modify textures.

So, you would basically create an effect (or a shader) structure in Nasal space and tell fgfs to "compile" the effect/shader and use it to render into a corresponding canvas texture/buffer, so that this can be used elsewhere.

This would be far less work than going through Nasal directly, and it would not add any Nasal related overhead. Besides, it's been a long-standing idea/feature request to mutually hook up canvas + effects, so that effects can use a canvas and vice versa (i.e. in a nested fashion): ... 2F_Shaders

You might also want to reach out to Icecode GL who has been working on a dedicated "compositor" mode, which would support this and much more:

Hooray wrote:Okay, sorry for the confusion.
The article/pointers I shared were not intended to mean that there is a "solution", but that we've had a number of related discussions.
Like you say, there is a property interface, but using that to pass "well-defined noise" to an effect/shader would be rather expensive.

Anyway, you might want to discuss this with some of the effects folks to see where this is going, and what can be done.

Personally, I am inclined to procedurally create the noise texture from shader space, as Thorsten has been doing for his ALS work.

Note that I am not opposed to the Nasal vector-based approach, I just don't think it's necessarily the best option. But hey, I am not much of a graphics coder anyway, so it would be better to discuss this with the effects/shader folks. I was really just commenting from the Nasal/Canvas standpoint.

That being said, the link/references I shared may hopefully still help you to make the case that there is a concrete use-case to actually hook up canvas and effects in a mutually compatible way, which would make all sorts of fancy use-cases possible.

Again, sharing your concrete use-case should allow Thorsten et al. to provide more specific advice; maybe Icecode is also currently around and could comment on your use-case and how this would work with his Compositor framework, which James said he was hoping to get committed in the next few months, according to the list archives.

Finally, please do keep us/me posted - at least by updating the wiki accordingly (newsletter or Canvas news article), so that we can keep track of interesting canvas related developments.

Hooray wrote:okay, been thinking a bit more about it - there's actually two rather easy ways to modify the C++ code accordingly - either by using the cppbind framework to provide a Nasal api in the form of "vector2texture()", or by coming up with a new Canvas element for pixel-data specifically (say, "pixel-data").

The more flexible (powerful) way would be a dedicated canvas element, because it would work analogous to any other canvas element (think paths, text, images etc) - however, it is rather expensive to set up/modify (update) thousands of properties that way - which you undoubtedly know already, because that is what we are doing when it comes to huge paths.

However, there is a short-cut here, i.e. using a text blob (string) that contains a list of comma separated values to be used for the new/updated texture - so that the number of properties to be changed is just one instead of WIDTHxHEIGHTxDEPTH

It would also be possible to come up with a hybrid solution using both approaches, i.e. a pixel data element in conjunction with a dedicated setPixelData() API
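To illustrate the property short-cut (one string instead of WIDTHxHEIGHTxDEPTH separate properties), here is a hedged sketch of the encoding in Python; the function names are hypothetical, and in FlightGear the decoding side would live in the C++ element:

```python
def encode_pixels(pixels):
    """Flatten a row-major list of (r, g, b) tuples into one CSV string,
    so that a whole texture update is a single property write."""
    return ",".join(str(channel) for pixel in pixels for channel in pixel)

def decode_pixels(blob, depth=3):
    """Recover the (r, g, b) tuples from the CSV string (the element side)."""
    values = [int(v) for v in blob.split(",")]
    return [tuple(values[i:i + depth]) for i in range(0, len(values), depth)]

# A 2x1 "image" becomes one property write instead of six:
blob = encode_pixels([(255, 0, 0), (0, 0, 255)])
assert decode_pixels(blob) == [(255, 0, 0), (0, 0, 255)]
```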

Again, probably best to discuss these things with some of the core devs - and not just with a focus on your use-case alone (to get others involved, like Thorsten)

Take all of this with a grain of salt, it's really just a brainstorming - but if you should end up pursuing any of this, I can share wiki pointers with you to get you going with the cppbind/canvas element stuff (most of it is boilerplate code anyway)

Hooray wrote:more food for thought: if you need something now and cannot wait for the devs (or if they should turn out to be unsupportive), we could simply code a simple texture generator in Nasal - again, this depends highly on your use-case, but textures are not exactly magic, and we can write binary files from Nasal, so it's probably under 200 lines of code to turn a Nasal vector into a texture and store it in $FG_HOME, where it can be picked up by other systems (canvas, effects etc)

Writing a simple png/ppm texture generator should be rather straightforward, depending on your requirements obviously - because it would be the equivalent of "vector2texture" - and the whole thing could even run in a background thread (outside the main loop), because it would only take a copy of the Nasal vector that is to be serialized to disk.

We would have to look up the kind of simple texture formats supported by osg that would still work for your use-case, but it should be pretty simple to write such a function, and it might even provide a compelling incentive for core devs to provide a more proper method in the future ;-)

As a matter of fact, there seem to be JavaScript routines for writing textures that should be rather easy to port/adapt to Nasal syntax, without requiring any external deps.

And with ppm, we don't even need to do any binary I/O:

Code: Select all
P3
# feep.ppm
4 4
15
 0  0  0    0  0  0    0  0  0   15  0 15
 0  0  0    0 15  7    0  0  0    0  0  0
 0  0  0    0  0  0    0 15  7    0  0  0
15  0 15    0  0  0    0  0  0    0  0  0
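A complete "vector2texture" helper for this ASCII (P3) flavor of PPM is indeed only a few lines. Here is a sketch in Python of the logic that would be ported to Nasal (the function name and output filename are illustrative only):

```python
def vector_to_ppm(pixels, width, height, maxval=255):
    """Serialize a row-major list of (r, g, b) tuples to an ASCII PPM (P3)
    string - plain text, so no binary I/O is needed."""
    assert len(pixels) == width * height, "pixel count must match dimensions"
    lines = ["P3", "%d %d" % (width, height), str(maxval)]
    for y in range(height):
        row = pixels[y * width:(y + 1) * width]
        lines.append(" ".join("%d %d %d" % px for px in row))
    return "\n".join(lines) + "\n"

# Write a 2x2 test image; a real helper would store it in $FG_HOME so that
# other systems (canvas, effects) can pick it up.
with open("vector2texture_test.ppm", "w") as f:
    f.write(vector_to_ppm([(255, 0, 0), (0, 255, 0),
                           (0, 0, 255), (255, 255, 255)], 2, 2))
```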

Hooray wrote:That's another option - It would be pretty easy to come up with a new type of noise for effects, say "nasal-noise" and evaluate a <script> section. The issue is that you still need a way to affect that scripted noise generator, and that generator would probably run outside the main loop. So most core devs will probably hate that approach (just like most of them hate Nasal with a passion), sooner or later someone will suggest using property rules to generate the noise texture probably ;-)

Still, it's an interesting idea - but to be really sure if this would work or not, you really need to share the use-case you have in mind, because such a scripted noise generator would almost certainly not have access to any stuff in the main loop (say properties, fgcommands etc)

From a technical pov, I believe shaders and effects is probably the way to go. To get something done "soonish", I would probably code up a simple texture generator that turns a Nasal vector into a texture on disk, and go with that - hoping in the process for a more proper method to be provided by core devs ...

Hooray wrote:
Necolatis wrote:Thanks for your input.

As for the last idea, writing a file like 15 times every second does not sound too attractive. That, and reloading it in Nasal, might even match or surpass using paths in execution time.

that is definitely true, like I said, it's certainly a good idea to share your requirements and the concrete use-case you have in mind.

With that constraint in mind, I would probably use a dedicated "pixel-data" element: ... ew_Element

If you need to access this via effects, your dedicated noise-generator would seem like a good idea - but probably not using Nasal.

It really all boils down to the granularity of the updates, i.e. timing-wise, but also how many pixels you typically need to update (a few or possibly all?)

But updates at 15 Hz seem to scream effects+shaders.

The bandwidth argument you made is also true; I believe James and others previously suggested "UBOs" (Uniform Buffer Objects), i.e. extending the effects system to support UBOs for such purposes.

Maybe this helps provide a little more surrounding context to put things into perspective, i.e. like a brainstorming.

But like you say, I would not have suggested writing a texture to disk if you had said you wanted to update it at 15 hz :D

Hooray wrote:okay, now that you finally said this is about a wxr overlay, it's much easier to make specific suggestions.

First, I kinda agree it would be trivial to do this in C++ via a dedicated terrain overlay, and we actually have existing C++ code for this, that would merely need to be adapted for use as a canvas element.

Second, Stuart should be highly interested in this because he's been wanting to provide a terrain overlay for the FG1000.

Next, we do have various Nasal/Canvas samples doing this sort of thing.

Besides, the Canvas supports texture maps, too.
So, you could basically set up a texture of nested textures, and move the sub-texture pointer (window) as needed. That way, you can compute a terrain overlay in a lazy fashion, and merely show the relevant portion of it (zooming as required).

Technically, the right way is to use the hard-coded terrain pre-sampler added by Torsten, and then sub-class simgear::canvas::CanvasImage with a handful of configurable properties (analogous to simgear::canvas::Map). That way, you would merely set up the position of the terrain sampler, the range, the granularity and the view portion, and everything else would be handled by existing C++ code, i.e. you would merely define an altitude palette to be used for different elevations. We have actually talked about this on the forum before, which is why I suggested telling us exactly what you are trying to do - that would have saved us quite a bit of time.

You should be able to find the corresponding topics by searching for "canvas + wxr", "canvas + terrain + overlay" or "canvas + palette"

Like I said, it can certainly be done from Nasal/Canvas using a few clever tricks, and it could even be made sufficiently fast. But if you know how to patch/build the sources, I can walk you through the C++ changes needed to make this happen.

Again, a good starting point would be the canvas::Image class in conjunction with canvas::Map; you will need a sub-class of those for a "terrain-overlay" element, and it would do the equivalent of the hard-coded wxr-display (based on od_gauge).

Personally, I would then just make some things configurable and hook the whole thing up to Torsten's hard-coded terrain pre-sampler

But all of this can definitely also be prototyped in Nasal space. But it may still be a good idea to check back with Stuart and others who previously expressed an interest in "terrain overlays" for different aircraft/avionics.

Anyway, feel free to shoot me a PM if you need any code snippets/pointers to get going.
But I would definitely suggest searching the canvas forum first for related topics (terrain overlays, wxr, wxradar)

Hooray wrote:If you'd like to go the Nasal/Canvas-only route, it should be pretty straightforward to copy/adapt the existing MapStructure WXR layer according to your needs.

You will want to do some profiling/benchmarking probably, because using Nasal in a setPixel-like fashion is almost certainly not what you want to do at 15hz.

Like I said, we've previously talked about this, and you can speed up the whole scheme by using a handful of nested texture maps (which Canvas does support). We're using this for the FIX/WPT layers, where a simple SymbolCache is instantiated to retrieve pre-rendered symbols that are then added as subtextures.

You can use a similar scheme so that you never "draw" directly, but only show/hide sub-textures from a palette image (say, 10 differently colored rectangles for different terrain elevations).

You would then want to allocate those into range-based groups, so that you can easily show/hide as needed for different range selections.

That way, you would not do any actual drawing, but only ever run .show() and .hide() respectively, for different resolutions. And you can update those "pixels" in a lazy fashion, too - i.e. only actually show them when they have been updated in the background.

This would implement a simple LOD scheme, too - i.e. one based on groundspeed/direction.
All the terrain sampling could then happen in C++ space using the hard-coded terrain pre-sampler, i.e. you'd prioritize updates of visible nodes first, and use lazy updates for everything else. The "scheduler" would merely invoke .hide() and .show() as needed, spread across a few frames to evenly distribute the load.

Most of the pieces for this are already in place in the MapStructure/Canvas department.
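The range-based grouping could be sketched like this (Python used for illustration only; the group boundaries are arbitrary examples, and in a real layer each group would hold Canvas sub-textures to .show()/.hide()):

```python
def build_range_groups(cells, boundaries):
    """Bucket sampled terrain cells into range-based groups.

    cells: list of (distance_nm, elevation_ft) samples.
    boundaries: ascending range limits in nm, e.g. [10, 20, 40].
    Samples beyond the last boundary are dropped. A range selection
    then only shows/hides whole groups instead of redrawing single
    "pixels".
    """
    groups = [[] for _ in boundaries]
    for dist, elev in cells:
        for i, limit in enumerate(boundaries):
            if dist <= limit:
                groups[i].append((dist, elev))
                break
    return groups

# Selecting the 20 nm range would show groups[0] and groups[1],
# and hide groups[2].
groups = build_range_groups([(5, 300), (15, 1200), (35, 5000)], [10, 20, 40])
```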

Hooray wrote:I am not sure if the SymbolCache is documented anywhere, but imagine it as a pre-created texture map where all renderable symbols are added as 32x32 sub-images, with separate entries for all style-able attributes (think colors). That way, there is no runtime footprint when something is drawn (imagine 250 DME/FIX symbols); instead it will just add sub-textures using the correct coordinates.

Like Thorsten said on the list, pre-creating the corresponding grid/raster structures will probably show up in profiling, but at runtime it's actually rather fast.

As you can probably guess, my suggestion would be to document your journey using the wiki, so that others can take a look and provide help/pointers as needed.

Personally, I am basically AFK most of the time, but we created most of the machinery needed for this a few years ago, and we also talked about allocating elements into range-based groups to have a LOD scheme for centered/off-centered views, where updates would be prioritized according to the view frustum (on the display) and groundspeed/heading

Hooray wrote:Referring to the video that you posted:

There seems to be a 3D look to it, i.e. it's not just a 2D WXR/AG radar, right ?
If that's the case, you really don't want to do this sort of thing in Nasal, definitely not at 15hz

We have previously discussed this on the forum, i.e. search for "canvas + synthetic + vision".

Basically, the most efficient way is to render a slave scene camera to a Canvas texture and apply customizable effects/shader to that texture (think tail-cam, FLIR etc)

We have shared screenshots and patches related to this, and Stuart said on the forum that he needed this capability for the FG1000's synthetic vision mode, too.

Basically, if this involves any 3D replication of the surrounding scene, you really don't want to do this via Canvas and Nasal, but using a new dedicated canvas element for that (again, Icecode_GL's compositor work would be highly relevant here).

I may be totally wrong, but I seem to recall that you once shared in private being a professional Java developer, right ? If so, it should be relatively easy for me to walk you through the steps to come up with such a Canvas element using a handful of patches and code snippets, and a few pointers to make heads and tails of everything.

If you understand how Canvas primitives work (text, image, path ...), you also understand how having a dedicated "slave-camera" element bound to being aircraft relative would be enormously helpful here.

For that, you really only have to look up the existing view manager code (the stuff that is xml configurable to make aircraft dependent views), and then the whole thing can be hooked up to the effect shader system, pointers below: ... ra_Element

Again, this is stuff for which we have actual C++ patches, and Stuart stated that he'd be willing to get this committed so that it can be used elsewhere. And again, all of this is assuming that you are trying to create the equivalent of a synthetic vision instrument.

If all of that is not correct, I still don't believe you want to modify pixels directly, but instead use the shader/effect path I sketched last week.

Synthetic vision pointers: search.php?st=0&sk=t&sd=d&sr=posts&keywords=synthetic+vision+canvas

PS: Finally, note that there is a set of existing "legacy" instruments implemented in C++ (pre-dating Canvas, not using it at all) that implement wxr and agradar functionality - it would be fairly easy to look at these to borrow some C++ code and integrate it elsewhere (either at the Canvas level or, like you said, as a dedicated "terrain-based noise" sampler, possibly in conjunction with the effect framework). Keywords: wxradar, agradar and od_gauge (which is the common baseclass used by these)

Background at: ... anvas_(RFC)

Hooray wrote:like I said, I don't think this needs C++ "per se". In other words, someone familiar with Nasal/Canvas internals can surely come up with an efficient mechanism that would work "well enough" (Thorsten regularly proves that).

I was originally once involved in the whole Canvas/MapStructure thing, and we did document things there - in other words, prototyping a new MapStructure layer based on what you have in mind, and based on what we have already should be fairly straightforward.

Regarding C++ patches, actually most of this is "ready", but like so many other C++ patches, this never got reviewed/integrated. However, at least Stuart offered to help with that. And James recently said he'd like to see the Compositor stuff added.

I believe this is one of those things, where occasional "reminders" may help, i.e. by people who actually have a use-case for these things, even if these should show up as "questions", i.e. "asking for status updates" and to learn "if/how developments like these could help with a certain effort".

It's a pity, but that's how things usually work (not always though, think osgEarth)

Regarding your particular use-case (the aperture radar), I would suggest moving the discussion to the canvas forum or to the wiki, where I can help come up with a custom tutorial, especially if you are not the one coding this?

Regarding the C++ route, I believe the way fgfs is using C++ is rather archaic, so most of your Java background will not be needed. The only key thing is really having a working build environment to patch/rebuild fgfs via git. But again, it's not needed to come up with a radar MapStructure layer.

We can prototype the whole thing within a couple of days and then incrementally improve it to review performance/issues etc. Most of the MapStructure machinery should actually work rather well for this, but we will probably want to use a single Canvas image that references another palette texture to retrieve different sub-textures and "instantiate" those in the main texture to act as "pixels", only showing/hiding them as needed - all sorted in range-based groups, so that you can easily hide/show groups (LOD), which should be fast enough for most needs, especially if updates are scheduled taking the groundspeed/directional vector into account

It would obviously help talking to someone familiar with Nasal/Canvas stuff, ideally someone who's previously tinkered with the MapStructure framework, but I should be able to answer most questions that may arise

Re: Read pixel rgb values

Postby Hooray » Tue Mar 31, 2020 3:05 pm

Alant wrote in Sat Nov 23, 2019 10:32 pm:I have a moving map, based on slippy map.
Is there any way to read the rgb value of a pixel on this map?
My application is to scan a line of pixels to port and starboard of the aircraft position and then process and use this data to generate something that looks like a sideways-looking radar return.
My alternative solution is to use nasal geodinfo() to get altitude and terrain type, but unless I scan at very high detail I fear that roads and railways will not be detected this way. This technique works well for my forward looking radar which is used for terrain following.

Even with the requested functionality now available/committed, there remains the obvious issue that such a "slippy map" will usually be based on very different geo data than FlightGear's terrain (i.e. mismatching data). So, the most proper approach would indeed be to use geodinfo() - there is a hard-coded terrain pre-sampler available, implemented by TorstenD - or to create a custom geodinfo() variant exactly for this use-case.

This also applies to aircraft that ship their own sets of pre-rendered charts.
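For the geodinfo()-based route mentioned above, the port/starboard scan line itself is simple geometry. Here is a hedged Python sketch (flat-earth offsets, which are adequate at radar-scale ranges; the sample callback is a stand-in for Nasal's geodinfo()):

```python
import math

def scan_line(lat, lon, heading_deg, max_range_m, step_m, sample):
    """Sample terrain along lines perpendicular to the aircraft heading.

    sample(lat, lon) stands in for FlightGear's geodinfo(); everything
    else is plain flat-earth geometry.
    """
    m_per_deg = 111320.0  # metres per degree of latitude (approx.)
    results = {}
    # Port is 90 degrees left of heading, starboard 90 degrees right.
    for side, offset in (("port", -90), ("starboard", 90)):
        bearing = math.radians((heading_deg + offset) % 360)
        line = []
        for d in range(int(step_m), int(max_range_m) + 1, int(step_m)):
            dlat = d * math.cos(bearing) / m_per_deg
            dlon = d * math.sin(bearing) / (m_per_deg * math.cos(math.radians(lat)))
            line.append(sample(lat + dlat, lon + dlon))
        results[side] = line
    return results
```

Called once per sweep with the aircraft position and heading, this returns one list of samples per side, ready to be remapped into return intensities.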

On the other hand, there is the long-standing idea to use FlightGear itself to render a map to a texture, e.g. based on the atlas/map source code (GPL compatible), quoting TorstenD:
Torsten wrote in Tue Mar 18, 2014 10:35 am:What I have in mind for the map is to render the map tiles (small 256x256px fragments of a map) from FlightGear scenery data on the fly, when the user requests them from within the map application. That's how OpenStreetMap works, and I like the idea of reusing proven concepts. There is no need at all - in fact, it's not even desirable - to do that in the main loop. Running a standalone application (or call it a process if you like) creating and serving the tiles will add no load to the FlightGear process, no matter how complex the map is.

Given that we do have existing code to render such maps, and that the Canvas system does have support for loading raster images (sc::Image), this would be the most straightforward option to ensure that there is no mismatch between a moving map and a corresponding radar display, because it'd be using the same underlying terrain/vector data.
