Actually, I don't really see how this should be feasible without substantial changes to FG architecture / GLSL version being used.
Quick test of a not too complicated procedurally generated volumetric cloud (a number of 'blobs' modulated with 3d Perlin noise).
You get quite compelling shading that way, but you need to solve a nested integral equation somehow (and I fail to see how that could be done without sampling the integrand reasonably often) - so the end result drives a GeForce 1080 into single-digit framerates. For this one cloud.
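To make the cost concrete, here is a minimal Python sketch of the nested integral (this is not the actual shader - the density function, constants, and step counts are made up for illustration): each of the N samples along the view ray triggers a full inner march of M samples toward the light to get the transmittance, so the work scales as N*M per pixel.

```python
import math

def density(p):
    # toy density: a single spherical "blob"; the real cloud modulates
    # several blobs with 3D Perlin noise
    x, y, z = p
    r = math.sqrt(x * x + y * y + z * z)
    return max(0.0, 1.0 - r)  # 1 at the centre, 0 outside the unit sphere

def transmittance(p, light_dir, steps=16, step_len=0.1):
    # inner integral: optical depth from p toward the light source
    tau = 0.0
    for i in range(steps):
        q = tuple(p[k] + light_dir[k] * step_len * (i + 0.5) for k in range(3))
        tau += density(q) * step_len
    return math.exp(-tau)

def raymarch(origin, direction, light_dir, steps=64, step_len=0.05, sigma=2.0):
    # outer integral: accumulate in-scattered light along the view ray;
    # every outer sample runs a full inner march -> O(steps * light_steps)
    radiance = 0.0
    t_view = 1.0  # transmittance from the eye to the current sample
    for i in range(steps):
        p = tuple(origin[k] + direction[k] * step_len * (i + 0.5) for k in range(3))
        rho = density(p)
        if rho > 0.0:
            light = transmittance(p, light_dir)
            radiance += t_view * light * rho * sigma * step_len
            t_view *= math.exp(-rho * sigma * step_len)
    return radiance

# view ray straight through the blob, light coming from above
print(raymarch((0.0, 0.0, -2.0), (0.0, 0.0, 1.0), (0.0, 1.0, 0.0)))
```

With 64 view samples and 16 light samples that is already ~1000 density evaluations per pixel for one cloud, which is roughly why the framerate collapses; a pre-assembled 3D texture would replace the procedural density evaluations with lookups, at the cost of holding the texture in memory.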
Granted, the algorithm isn't optimized at all, but it'd have to be a factor of 10-100 faster to really matter if we want Stratocumulus layers or similar. It'd probably be faster with a pre-assembled 3D texture - but then again, that needs to be held in memory.