The way it's written, I can generate the whole data structure without plotting it - no significant performance drain, so the projections, culling etc. in Nasal space are okay. Rendering the finished structure is not the issue either, since I can watch it just fine; the drain comes only during the display update frame.
I can also look elsewhere and hence not cause any rendering load - and I still see the single-frame latency rise whenever the PFD is updated.
I can also compute the whole data structure but write only parts of it, and I see the performance drain scale in proportion to what I write.
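For context, the partial-write experiment is essentially this (a minimal Nasal sketch; `segments`, `path` and the `write_fraction` knob are hypothetical names for illustration, not actual code from the instrument):

```
# Hypothetical sketch: compute everything, but push only a fraction of
# the segments into the canvas path, to isolate the property-I/O cost.
var write_fraction = 0.5;                       # placeholder knob
var n_write = int(size(segments) * write_fraction);
for (var i = 0; i < n_write; i += 1) {
    var seg = segments[i];
    path.moveTo(seg.x0, seg.y0).lineTo(seg.x1, seg.y1);
}
```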
So at this point it's academic whether the cost is the actual property throughput or the step of picking up the properties and rendering it all, as I can't separate these steps conceptually.
I know wiping and re-drawing is bad, but given how the visible line segments on a sphere change with rotation, I'm at a loss as to how to even design a data structure that would solve this by just translating and hiding segments - the computational overhead of managing that is an order of magnitude higher than generating everything from scratch.
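For reference, the translate-and-hide scheme I mean would look roughly like this (a sketch assuming each segment keeps a persistent canvas element; `seg.elem` and `seg.visible` are hypothetical names):

```
# Hypothetical sketch of managing persistent segments instead of re-drawing:
# every frame, decide per segment whether it is on the visible hemisphere.
foreach (var seg; segments) {
    if (seg.visible) {
        seg.elem.setTranslation(seg.x, seg.y);  # move the existing element
        seg.elem.show();
    } else {
        seg.elem.hide();                        # keep it, but don't render it
    }
}
```

The catch is that computing `seg.visible` and the new translation for every segment of a rotating sphere is exactly the bookkeeping that costs more than regenerating the path.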
I've tried a staggered approach in which one of four groups is re-drawn in each successive frame, but somewhat puzzlingly this leads to flickering when groups are updated. My best hypothesis is that depth information isn't honored immediately but takes a frame to be picked up by the renderer, but strictly speaking I don't know - I've not found any detailed information on how depth is managed by Canvas.
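The staggered variant is simple enough in a sketch (hypothetical names again; `redraw_group` stands for whatever re-builds one group's geometry, and the timer rate is a placeholder):

```
# Hypothetical sketch: spread the re-draw over four update cycles by
# re-building only one of the four canvas groups per tick.
var frame = 0;
var staggered_update = func {
    redraw_group(adi_groups[math.mod(frame, 4)]);
    frame += 1;
};
maketimer(0.05, staggered_update).start();      # ~20 Hz placeholder rate
```

The flickering shows up at the moment a freshly re-drawn group gets composited with the three stale ones.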
In any case, there's now a quality slider for the ADI ball which allows you to change from 'usable' to 'extremely pretty', and I will see whether I can run somewhat more aggressive culling to beat the property I/O down by writing only the minimal number of invisible nodes.