Okay, after reading through
http://http.developer.nvidia.com/GPUGem ... ter16.html and filling three pages of paper, I think I understand what's going on. The Mie scattering will tend to give you 'white' whenever the Mie optical thickness along the ray is > 1 - hence the horizon whiteout. But the problem is that you can't assume a single-scattering approximation works when the optical thickness is larger than 1, since then you know you have multiple scattering. With a visibility of 1000 m, what you get is diffuse light multiply scattered from everywhere, with a lot of it backscattered into space - thus the grey fog - something the whole formalism can't deliver even in principle, because it's valid for optically thin media only. I suspect you'll need to supplement it with some additional physics for low visibility.
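To put a number on that: a minimal sketch of the saturation argument, assuming the standard Koschmieder relation tau_vis = 3.912 between visibility and extinction (that relation and all the numbers below are my assumptions, not from the chapter):

```python
import math

def mie_single_scatter_fraction(beta_mie, path_length_m):
    """Single-scattering in-scatter term for a homogeneous medium.

    beta_mie: Mie extinction coefficient [1/m]
    path_length_m: length of the view ray [m]
    The optical thickness is tau = beta * s, and the in-scattered
    fraction saturates as 1 - exp(-tau) -> 1, i.e. 'white'.
    """
    tau = beta_mie * path_length_m
    return tau, 1.0 - math.exp(-tau)

# Koschmieder: beta = 3.912 / visibility (assumption, standard meteorology)
beta = 3.912 / 1000.0                               # 1000 m visibility
tau, f = mie_single_scatter_fraction(beta, 2000.0)  # looking 2 km ahead
# tau ~ 7.8, far outside the tau <~ 1 regime where single scattering is
# valid, so the model just clamps to white instead of producing grey fog.
```

So at 1000 m visibility, even a 2 km ray already sits deep in the multiple-scattering regime.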
As it happens, I also understand why he found that everything scales with an exponential, which allowed him to eliminate one dimension of the lookup table: if you Taylor-expand the curvature of the Earth - which for an 8000 km radius and a ~100 km atmosphere gives an error of less than 1% for almost all paths - you can do the inner t(P_A,P_B) integral analytically, and it just gives you back an exponential in the scale height. (In an optically thin medium you can also Taylor-expand the exponential in the outer integral and do that one analytically as well, but let's not overdo it here...)
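For concreteness, here is a sketch of that flat-Earth version of the inner integral against a brute-force numerical one. The density profile, scale height, and extinction numbers are hypothetical stand-ins, chosen to look Rayleigh-like:

```python
import math

def tau_numeric(beta0, H, h0, h1, mu, n=10000):
    """Optical depth through an exponential atmosphere along a straight
    slant ray, curvature neglected (ds = dh / mu), by midpoint rule."""
    total = 0.0
    dh = (h1 - h0) / n
    for i in range(n):
        h = h0 + (i + 0.5) * dh
        total += beta0 * math.exp(-h / H) * dh / mu
    return total

def tau_analytic(beta0, H, h0, h1, mu):
    """Closed form of the same integral: once curvature is dropped, the
    inner integral collapses to a difference of exponentials in H."""
    return beta0 * H / mu * (math.exp(-h0 / H) - math.exp(-h1 / H))

# hypothetical numbers: 8 km scale height, sea-level extinction beta0
beta0, H = 1.2e-5, 8000.0
a = tau_numeric(beta0, H, 0.0, 100000.0, 0.5)   # mu = cos(zenith angle)
b = tau_analytic(beta0, H, 0.0, 100000.0, 0.5)
# a and b agree to well under a percent
```

Which is exactly why one dimension of the table can be traded for an exponential factor.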
So, I guess I'll wait for your revised code then...
Come to think of it, is there any reason to do the computations in the vertex shaders every frame? It seems like the kind of problem where the result doesn't change significantly from frame to frame at normal aircraft velocities, so a scheme in which you compute just 1% of the problem per frame, store it in a table, and use the table for the remaining 99% would seem vastly superior in terms of resource consumption (?). Just thinking out loud... and I was wondering the same thing about the gradient shader...
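The amortized scheme I have in mind looks roughly like this. Everything here is a hypothetical sketch; `expensive_scatter` stands in for whatever per-sample scattering work the shader currently redoes every frame:

```python
# Amortized recomputation: each frame, refresh only a small slice of a
# lookup table and render from the (slightly stale) rest of it.

TABLE_SIZE = 1000
SLICE = TABLE_SIZE // 100    # ~1% of the table per frame

def expensive_scatter(i):
    """Placeholder for the real per-sample scattering computation."""
    return i * 0.001

table = [expensive_scatter(i) for i in range(TABLE_SIZE)]
cursor = 0

def update_one_frame():
    """Recompute one slice per frame; after 100 frames the whole table
    has been refreshed once, so the cost per frame is ~1% of the full job."""
    global cursor
    for i in range(cursor, cursor + SLICE):
        table[i] = expensive_scatter(i)
    cursor = (cursor + SLICE) % TABLE_SIZE

for _ in range(100):         # one full refresh cycle
    update_one_frame()
```

The staleness is bounded by the refresh period, which for slowly varying sun position and aircraft altitude should be invisible.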