I do recall James' changes relating to HiDPI and how he refactored everything to work for both OSG and PUI, so that might be low-hanging fruit to check first (?)
The other candidate to check next would be James Hogan's VR work, which he mentioned supports a built-in UI (?) - that UI should not be using PUI at all, so it might serve as a template for locating the right place to insert a Canvas camera and make it use shaders ? (I would definitely suggest reaching out to him first, since he must have thought about this problem already)
Alternatively, what about Jules' original multi-window fgcommand (part of the compositeviewer/sview cloning code IIRC): I don't remember all the nitty-gritty details here, but could we render the UI to a separate window (context) that way, as a compromise to offer people ? (see the sketch below)
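Just to illustrate the idea (not FlightGear code, purely a hedged OSG-level sketch with made-up names like guiRoot): a second osgViewer::View with its own GraphicsContext could host just the GUI subgraph in a separate window and be added to the existing CompositeViewer.

// Hedged sketch: a second view with its own window, showing only the
// (hypothetical) GUI/Canvas subgraph. "guiRoot" is a placeholder name.
#include <osgViewer/CompositeViewer>
#include <osgViewer/View>
#include <osg/GraphicsContext>
#include <osg/Viewport>

osg::ref_ptr<osgViewer::View> makeGuiView(osg::Node* guiRoot)
{
    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
    traits->x = 50; traits->y = 50;
    traits->width = 800; traits->height = 600;
    traits->windowDecoration = true;
    traits->doubleBuffer = true;

    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext(traits.get());

    osg::ref_ptr<osgViewer::View> view = new osgViewer::View;
    view->getCamera()->setGraphicsContext(gc.get());
    view->getCamera()->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));
    // UI-style 2D projection for the separate window
    view->getCamera()->setProjectionMatrixAsOrtho2D(0, traits->width, 0, traits->height);
    view->setSceneData(guiRoot);
    return view;
}

// usage, assuming "viewer" is the existing osgViewer::CompositeViewer:
//   viewer->addView(makeGuiView(guiRoot.get()));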
Also: I am not sure if this is documented/mentioned anywhere on the wiki or not, but if it isn't, it would be great if someone could copy it to the appropriate place - alternatively, please file a dedicated feature request, copying Fernando's posting and linking back to it here, so that this won't be lost...
PS:
<brainstorming mode="on">and if everything above fails, regarding the issue of stateset debugging to locate the proper osg::Node for the GLSL portion, how about treating this like a git bisect: I do recall that we're able to dump the scene graph to disk, so how about a debug mode that splits up the scene graph (BSP-style) for a binary search, injecting the shaders in different parts while observing the stateset/rendering result (FBO)? Would that help narrow down potential nodes/places, maybe just a little (I have seen nodes containing annotations in the form of code file and line number)?
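To make the bisect idea a bit more concrete, here is a hedged sketch (plain OSG, hypothetical names like dumpAndCollect, nothing FlightGear-specific): dump the graph to disk for offline inspection, and collect the group nodes so that halves of the graph can be toggled or shader-injected one at a time.

// Hedged sketch: dump the scene graph and collect candidate group nodes
// for a binary-search style debugging session.
#include <osgDB/WriteFile>
#include <osg/NodeVisitor>
#include <osg/Group>
#include <vector>

class CollectGroups : public osg::NodeVisitor
{
public:
    CollectGroups() : osg::NodeVisitor(TRAVERSE_ALL_CHILDREN) {}
    void apply(osg::Group& group) override
    {
        groups.push_back(&group);
        traverse(group);
    }
    std::vector<osg::Group*> groups;
};

void dumpAndCollect(osg::Node* root)
{
    // readable dump for offline diffing of nodes/statesets
    osgDB::writeNodeFile(*root, "scenegraph-dump.osgt");

    CollectGroups visitor;
    root->accept(visitor);
    // visitor.groups now holds the candidates: inject/disable one half,
    // re-render, and recurse into the half that still shows the problem.
}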
Basically, I imagine the equivalent of the minimal startup profile rendering just a tooltip/dialog via the UI camera, and then injecting the shaders at different points to evaluate just how wrong the output looks in terms of rendering output and stateset changes - hopefully excluding a bunch of obvious nodes/places and locating some of the more promising ones (remember our early canvas-cam experiments not rendering the skydome?)
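For the injection step itself, a hedged sketch of the usual OSG debugging trick (injectDebugShader is a made-up helper, not an existing function): attach a trivial "paint everything magenta" fragment shader with OVERRIDE to a candidate node's StateSet, so whatever that node actually affects becomes obvious in the rendered output.

// Hedged sketch: flag a candidate subtree by forcing a trivial GLSL
// program onto its StateSet; affected geometry shows up as magenta.
#include <osg/Node>
#include <osg/Program>
#include <osg/Shader>
#include <osg/StateSet>

static const char* debugFrag =
    "void main() { gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0); }\n";

void injectDebugShader(osg::Node* candidate)
{
    osg::ref_ptr<osg::Program> program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT, debugFrag));

    osg::StateSet* ss = candidate->getOrCreateStateSet();
    // OVERRIDE so child statesets cannot replace the debug program
    ss->setAttributeAndModes(program.get(),
                             osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);
}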