Geometry shaders are not needed for buildings (perhaps not for cars either); the techniques will work with vertex shaders.
I just used geometry shaders as they allow prototyping with math & logic alone (without creating a model to load vertices & attributes each time). I was aiming for vertex shader compatibility from the start.
Vertex shaders work well when the model geometry is known. It may(?) be faster to collect different types of objects into one model with more vertices, drawn under a single instance with deformation, than to draw minimal-vertex models under several instances. It's also possible to reject models or parts of models in the fragment shader. Examples: a very complex roof can collapse into a flat roof. A car that needs 10 boxes can deform into a simple truck that needs 3 boxes. A complex multi-box skyscraper can deform into a simple rectangular block, or something in between.
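The deformation idea above can be sketched very simply (this is illustrative Python, not FG code; the vertex lists are made up): a vertex shader would blend each vertex of the generic model toward a per-instance target shape.

```python
# Sketch: deform a generic model toward a target shape by linearly
# blending vertex positions, as a vertex shader could do per instance.
def deform(base_verts, target_verts, t):
    """Blend each vertex: t=0 gives the generic model, t=1 the target."""
    return [
        tuple(b + t * (g - b) for b, g in zip(bv, tv))
        for bv, tv in zip(base_verts, target_verts)
    ]

# A tall roof box collapsing to a flat roof (two ridge vertices shown):
tall = [(0.0, 0.0, 3.0), (1.0, 0.0, 3.0)]
flat = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
print(deform(tall, flat, 1.0))  # fully collapsed -> flat roof vertices
```

On the GPU, `t` (and the target offsets) would come from per-instance attributes, so one generic mesh can represent many building or vehicle variants.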
Geometry shaders are slower on older hardware & possibly older drivers (AIUI?). Google seemed to think so (e.g. http://www.joshbarczak.com/blog/?p=667), but recent drivers may have changed that. Vertex shader alternatives for buildings & roads may be better if geometry shaders run slower, are buggy, or do not run on a significant percentage of FG contributors' & users' systems that are still fast enough to display OSM in some form. Even the latest DX12 features are supported by cards going back to the NVIDIA 400 series (last page in the list); it is mostly a question of how much computing power is available.
AIUI (?) OSM survey data varies in quality and completeness. There are probably comparisons of OSM data vs government data somewhere. E.g. older surveys won't be detailed (maybe no fields for detailed building data?) or surveys simply won't be complete/thorough (they may contain inaccurate imported data & data filled in by algorithms).
Multiple types of definitions will save space. The FG engine can fill in blanks at runtime (possible techniques: GPU-generated randomness, instancing, collecting multiple different things under a generic instanced geometry which can deform into different things).
The opportunities (AIUI!) for space saving are to:
A. Avoid filling in smaller scale details where OSM survey data is absent. Larger scale details like rows of streets may need to be specified in areas without OSM coverage.
B. Avoid specifying each detail where it's shorter to specify a pattern+variation.
C. Avoid using more bits than necessary (e.g. 32 bit floats for height, when an 8 bit integer will do; 16 bit integers should be enough for demanding things). Accuracy is limited by the instruments used, or the quality of the average surveyor's guess. E.g. guesses of height are meaningfully limited to the nearest meter; for guesses of colour, an 8 bit integer representing hue instead of 3 RGB values in OSM. Similar to the colour idea: avoid sending data 'filled in' by an algorithm & derive the original raw measurement with its likely accuracy.
D. Avoid recording smaller scale detail (fewer bits) than meaningfully matters in the context of what the sim is used for. Example: the spacing between 2 houses being rounded off a bit is not likely to be visually compelling or matter to simulation, even in low altitude flying; if houses are 1 or 2 meters closer or more distant it will not make a difference. Space needed or performance might require a compromise & discussion between the parties involved: OSM, terrasync, the FG engine and the GPU side.
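Point C can be sketched concretely (the field choices & sizes here are purely illustrative, not a proposed format): packing height, hue & storey count into single bytes instead of sending 32 bit floats.

```python
import struct

# Sketch: pack per-building fields into the smallest useful integers
# instead of 32-bit floats. Field choices are made up for illustration.
def pack_building(height_m, hue, levels):
    # height to the nearest meter (0-255), hue 0-255, levels 0-255
    return struct.pack('BBB', min(height_m, 255), hue, levels)

compact = pack_building(12, 34, 4)
naive = struct.pack('ffff', 12.0, 1.0, 0.5, 0.2)  # float height + RGB
print(len(compact), len(naive))  # 3 bytes vs 16 bytes
```

Per building that is a ~5x saving before any pattern compression, and GPUs can unpack small integer attributes cheaply.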
The pipeline: 1. OSM data structure (multiple files) -> 2. Terrasync requirements (e.g. not too many separate files) -> 3. Freely custom FG engine interpretation from multiple data sources, assembling geometry from patterns -> 4. GPU.
In other words, to take full advantage of these opportunities, the OSM data structure should describe the minimum necessary to convey information (patterns), with only the data that's compelling & important for simulation, at the minimum relevant precision given the accuracy of the data, leaving the rest out.
(AIUI 3d file formats massively over-specify repeated OSM patterns, and are not designed to conserve bits(?). I think this is Thorsten's objection to them?)
The FG engine & shaders AIUI can put together models to instance in future iterations, based on the number of features requested (like lightpoles, roof detail, trees in gardens, fences, parked cars, pools).
vanosten wrote in Sun Apr 01, 2018 9:05 pm:while still keeping osm2city AC3D for larger buildings..
In other words I believe it would be a pity to create a shader who places "random" buildings and sets aside real-world input from OSM, e.g. Simple 3D buildings, colours, height etc..
What is an example of the most complex building described (house or skyscraper)?
(If I follow correctly) The question is what is the raw data collected by OSM surveyors?
Reading the page, it seems the raw data entered into OSM categorises roofs and buildings into common forms, with parameters for scaling different parts?
A complex roof can be deformed into a flat roof, or something in between. It should be conceptually possible to collect multiple building types under one geometry with enough vertices to deform. E.g. this is just 3 boxes (24 vertices) & can represent a lot of shapes.
A lot of the complicated buildings may turn out(?) to be collectable under one general model with a few more vertices than most models. That might save a lot of space & leave very little that requires a 3d model or detailed specification.
Complicated inner city building complexes may(?) be broken into L & U shaped parts (2/3 boxes). They can be placed next to each other to look seamless. Sides of buildings next to each other will not be visible; definitions can state that one side is seamless so border textures aren't drawn, AIUI. (Proximity info helps with ambient occlusion and with lights cast by nearby building windows etc.)
The main space saving opportunity is to avoid stating every vertex (3d file formats do this?). Pattern analysis & multiple definition types can avoid restating complicated buildings. If there were 10 complicated buildings of the same type in a street, the definition head can contain the average, and the list can contain 10 variations + a degree of random variation. If a lot of fields are identical to the head, restating them can be avoided.
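The head + variations idea might look like this (field names are made up; this is a sketch of the concept, not a format proposal): each entry in the list stores only the fields that differ from the head, and the full building records are reassembled at load time.

```python
# Sketch: a definition "head" stores the average building for a street;
# the variation list stores only fields that differ from the head.
head = {"type": "rowhouse", "height": 9, "hue": 120, "roof": "gable"}
variations = [
    {},                        # identical to the head: zero extra fields
    {"height": 10},            # only the differing field is stored
    {"hue": 200, "height": 8},
]

# Reassemble full records by overlaying each variation on the head.
buildings = [{**head, **v} for v in variations]
print(buildings[1]["height"], buildings[1]["roof"])
```

A degree-of-random-variation field in the head could additionally let the GPU jitter the unspecified fields per instance, so identical entries don't look cloned.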
If a city contained very different types of buildings needing different instances, maybe adding a list of building types in child nodes to the city node would make it easy for FG to load data to the GPU & avoid wasting VRAM. This type of thing would become clear later in the process, /if/ it's worthwhile. If the FG engine had the ability to stitch together a texture sheet from individual images based on regional definitions, it could save VRAM and allow more detailed textures (then again, more recent GPUs have many GB of VRAM). A simple example is creating a forest texture sheet by adding different tree species (so tree combinations for each forest don't require their own texture sheet).
vanosten wrote in Sun Apr 01, 2018 9:05 pm:Regarding street lamps: I believe it would be better to let an algorithm not using vertices determine the placement for e.g. the following reasons:
Oh, placing street lamps at vertices was just a quick demonstration that such a thing was possible. The same technique as for cars is available for placement now: a varying number of objects can be generated from the most relevant triangle depending on triangle size (with arbitrary spacing & the same degree of control as the fragment shader).
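The placement logic is roughly this (illustrative Python, not the actual shader; a geometry shader would do the same per road edge/triangle): derive the object count from the edge length and an arbitrary spacing, then emit evenly spaced points.

```python
# Sketch: place a varying number of objects (lamps, cars) along a road
# edge, with the count derived from edge length and a chosen spacing.
def place_along(a, b, spacing):
    length = sum((q - p) ** 2 for p, q in zip(a, b)) ** 0.5
    n = int(length // spacing)  # object count scales with edge size
    return [
        tuple(p + (i + 0.5) * spacing / length * (q - p) for p, q in zip(a, b))
        for i in range(n)
    ]

lamps = place_along((0.0, 0.0), (100.0, 0.0), 25.0)
print(len(lamps))  # 4 lamps on a 100 m edge at 25 m spacing
```

Varying `spacing` per street type (or randomising it slightly per seed) gives the same degree of control mentioned above.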
vanosten wrote in Sun Apr 01, 2018 9:05 pm:As I am glad for the car shader by Thorsten, I have no opinion regarding moving cars etc.; but if you need a specific structure from osm2city, then please let me know.
This /is/ Thorsten's car shader in 3d form. It was intended to make it possible to sync headlights on the road surface with 3d vehicles. LoD at long range could drop to the 2d version using the same texture.
This was just to prove the concept. It may even be worthwhile to use a vertex shader technique mentioned before (i.e. fast on older systems).
I think the point where a final polished 3d version is released is(?): after the work on the FG engine side defining the format of the data sent to the GPU (e.g. 1 vertex per street segment (most efficient), or another method). That's after the OSM data structure is finalised, which is several stages away & lots of time (??).
What is the next step in the process? E.g. putting forward a list of data to be included from the OSM side. Terrasync, FG engine work, & Thorsten from the GPU side may then look at: space & format, engine/shader work, & requests for useful additional data like proximity or names of cities/districts/notable features. After that maybe a data structure, with reading & writing, falls naturally into place? (Not sure if I follow or know all the complications & can't really comment.)
OSM can specify roads in the shortest possible manner, and it should be conceptually possible for the FG engine & shaders to do roads fast (and entire house blocks with roads).
Example: street start position, direction, street type data, curvature of path. It should be possible for the FG engine to instance 1 road segment model. These segments can be curved and stretched in the vertex shader based on the curvature definition, like a large loose spring (google image). Curved sections require more vertices; straight sections strictly only require 2 triangles. Multiple segment models can be used over a strongly curved path, fewer for a shallower curve (or just use separate high poly street segments when curving). Defining house blocks by street paths where possible is powerful. It needs a good space saving definition of curvature to be worked out. (I can only suggest a bare minimum & inefficient one: stating a variable number of points along the path as offsets from the start position, and leaving it to the engine & shaders to interpolate a curve.)
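That bare-minimum curvature encoding could work like this (illustrative sketch; the interpolation scheme is open, here a Bezier curve evaluated by De Casteljau's algorithm): store a start position plus a few offsets, and let the engine/shader evaluate the curve at each segment vertex.

```python
# Sketch: reconstruct a curved street path from a start position plus a
# small list of offsets, interpolating with De Casteljau's algorithm.
def bezier(points, t):
    pts = [list(p) for p in points]
    while len(pts) > 1:  # repeatedly lerp between adjacent control points
        pts = [
            [a + t * (b - a) for a, b in zip(p, q)]
            for p, q in zip(pts, pts[1:])
        ]
    return tuple(pts[0])

start = (0.0, 0.0)
offsets = [(50.0, 20.0), (100.0, 0.0)]  # stored as offsets from start
ctrl = [start] + [tuple(s + o for s, o in zip(start, off)) for off in offsets]
mid = bezier(ctrl, 0.5)
print(mid)  # point halfway along the curved street segment
```

In a vertex shader, `t` would come from the vertex's position along the instanced road segment, so one straight segment model can be bent along any such path.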
Specifying the minimum data for streets allows the engine to iterate in future (e.g. by instancing a bunch of points with data for geometry shader cars if vertex shaders cars won't do).
The road object shader I did was a way to extract the required information from the current data format (It was made with minimal knowledge of the model format/FG scene conventions, and made no assumptions on how the mesh was constructed.).
I did a bit more after I last posted. The moving part of the car shader works.
Arbitrarily variable spacing: 1, 2 (https://imgur.com/a/eVIfa), 3.
Add vehicles at intervals: screen.
Add vehicles following the same direction as 2d vehicles + randomly empty slots like 2d: day, dusk.
Add rough shadows to light poles (blue and green on opposite sides of the street): e.g.
It was just a concept proving such a thing could be done (maybe?). It's probably safer to rewrite it rather than build on it directly :).
A polished 3d version using any technique needs a lot more work on: finding the fastest method, finding & discussing a texture format that suits artists, LoDs, working out exactly which vehicle features give the most bang for the buck (subject research), clearing up glitches and driver issues (there might have been one), and lighting & fragment shader work. That's more in the territory of someone like Thorsten with detailed knowledge of what's fast, FG internals, shortcuts based on scene & model data conventions, the art side, compiler issues etc. (I'm new to FG & not a programmer professionally.)
If the current concept is useful in some way I could clean up irrelevant experiments, maybe play with a better vertex shader compatible object deform function, & hand it over (the shader is pretty much the conceptual description above, though).
Where I got to: I still have some issue with finding the scale of texture space unit basis vectors in model space, but texture space direction vectors work in model space, so it's possible to work in model space. Could be a bug or some detail of the coordinates. I used a quick substitute coordinate system and it (more or less) worked. There was some extra funkiness, possibly from the OpenGL effect setup, drivers, substitute coordinate system glitches, or minor bugs. Vehicles disappearing at the ends of road segments is only a little bit more noticeable in 3d.
vanosten wrote in Sun Apr 01, 2018 9:05 pm:Btw: one thing to solve for cars would be e.g. static bridges, as there is no osm2city roads..
If these are a static model then it becomes hard(?). For a vertex shader version: it /may/ be possible for the FG engine to instance a transparent road just over the bridge if details are present in the OSM data structure (vehicles may possibly(?) be instanced along with the road). The frag shader can also just discard road fragments. It's helpful to know exact bridge heights and path, otherwise cars may clip through geometry or go underground. I guess it won't look too bad to put them at a safe height over the bridge.
A possible geometry shader version would draw a single point at the start of the bridge with details of a path(?). The point would just be in a list of instanced points that work for all road paths(?). Either way, a safe path may have to be defined in OSM data, or derived by the engine at runtime, unless there's another way, like road surfaces in static bridge models already being tagged.
vanosten wrote in Sun Apr 01, 2018 9:05 pm:I did not understand the discussion about the 1km grid. I can tell that osm2city works with 2km*2km grids (default, configurable size).
This can mostly be done by the FG engine (it seems?). It's just that the currently available unique seed in shaders is based on real world position. The available version has numerical issues: it doesn't stay constant & changes with the view (it's derived using the OSG view matrix, which wasn't intended for this use).
The problem is that floating point representations are of the form A * 2^B, and 32 bit floats have too few bits in A (the mantissa). The nature of PRNGs is that even a one-bit change in the seed causes a complete change in the values generated. It's possible to round the position to keep the seed constant, but that means rounding to several 100 m, so nearby objects end up with the same seed. Rounding off XYZ coordinates results in a grid with the same seed (that's what I meant). So data sent by FG specifically designed to be a seed would seemingly help.
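The precision problem is easy to demonstrate (plain Python; `to_f32` just round-trips a value through a 32 bit float): at Earth-scale coordinates, float32 spacing is about half a meter, so sub-meter offsets vanish and a position-derived seed can't distinguish nearby objects.

```python
import struct

# Demo of the A * 2^B problem: at Earth-scale coordinates a 32-bit
# float cannot represent sub-meter offsets, so a seed derived from raw
# world position cannot separate nearby objects (or stay stable).
def to_f32(x):
    return struct.unpack('f', struct.pack('f', x))[0]

x = 6378137.0  # roughly Earth's radius in meters
print(to_f32(x + 0.1) == to_f32(x))  # True: the 0.1 m offset is lost
```

At this magnitude the gap between adjacent float32 values is 0.5 m, which is exactly why a stable, fine-grained seed needs to come from somewhere other than raw world position.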
Any set of data (numbers) can be used as a seed. 2 examples:
If the position of the tile origin was sent by the FG engine: tile origin (rounded off) + floating point model offset within the tile = stable seed.
The FG engine should also be able to construct some form of seed when traversing a future OSM data structure, e.g. for a house: real world position of the high level city node origin (lat/lon or XYZ) + urban island position offset in a child node (3 16 bit integers) + number in the list of streets or street offset in a child node (16 bit integer) + house number (8 bit integer). The FG engine could hash the numbers together or just send everything as instance data to be hashed in the shader.
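A sketch of that second idea (field names & sizes are made up, following the house example; the hash choice is arbitrary): pack the hierarchy of small integers and hash them into a stable per-house seed, instead of deriving one from floating point world position.

```python
import hashlib
import struct

# Sketch: hash a hierarchy of small integers (city id, island offset,
# street number, house number) into a stable 32-bit per-house seed.
def house_seed(city_id, island_xyz, street_no, house_no):
    data = struct.pack('<IhhhHB', city_id, *island_xyz, street_no, house_no)
    return int.from_bytes(hashlib.sha256(data).digest()[:4], 'little')

s1 = house_seed(42, (100, 200, 5), 7, 13)
s2 = house_seed(42, (100, 200, 5), 7, 14)  # next house along the street
print(s1 != s2)  # adjacent houses get unrelated seeds
```

Because the inputs are exact integers, the seed never jitters with the view or with float rounding, and adjacent houses still get completely different random values.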
Just 2 possibilities. There are likely far more elegant ways. AFAICS this is mostly an FG engine responsibility, and useful in a lot of objects not just OSM.
(Again mainly speaking conceptually.)
Kind regards,
vnts