vnts wrote in Sun Apr 26, 2020 2:07 pm:(In the /very long/ term (e.g. future years) there should be a 2nd iteration of buildings using different atlases that are easy for people to use (with a gimp file with layers & instructions), and easy to extract from available GPL2 compatible photos, parts of photos, photos of building parts, and texture packs. A shader based effect that builds partially randomised building facades from small parts of buildings like: wall textures, roof tiles, dirt maps, windows, doors, skylights, air-conditioning units, ornamental detail and, seasonal changes/decorations. It shouldn't need a completely unobstructed building facade photo taken under overcast conditions at the correct head-on angle. Existing texture photos have hard to avoid objects in front of buildings like: trees, cars, streetlamps, fences, bins. A /far/ smaller atlas for the same number of buildings means people only need work on a small part at a time to edit lighting for photo sources not taken under overcast conditions. It can be done without any changes in OSM2City data or scripts, just c++ & GPU side changes.)
vanosten wrote in Tue Apr 28, 2020 4:51 pm:Until then my plan remains to create a new texture atlas with better synthetic materials sometime in the future.
Firstly, I was not suggesting that any plans for this iteration be changed.
I was talking about a 2nd iteration, up to a few years in the future.
Any proof of concept is just a start. The rest requires input from graphics experts like Thorsten (who is mostly away, and will be for the foreseeable future, if you read the effects&shaders forum). Looking ahead, the graphics and core programmers already have some big projects waiting: first the compositor, and after that I imagine next-gen scenery will be a big project, so any big OSM2City work might have to fit in around those.
There has been some confusion in the past, so I'll try to be clear and detailed:
vanosten wrote in Tue Apr 28, 2020 4:51 pm:[A] the code in osm2city and the texture atlas are directly interlinked - osm2city need to "know" what is in a slot in the atlas to be able to use heuristics to be smart when applying textures (e.g. a sand-stone facade should not be on a skyscraper, a glass facade not on a warehouse, ...).
The way GPU programming works with OSM2City data, from both models and building shader lists, is like this:
(There has been some recurring confusion with the way GPU/CPU data works in the past, and a clarification should help with future issues too.)
What the vertex shader sees as input: model-space X, Y, Z, plus vertex attributes: variable 1, variable 2, variable 3 ... variable N.
What the vertex shader outputs: the vertex position, plus interpolated variables iv1, iv2, iv3 ... ivN. These could just be variables 1 to N passed through, or arbitrary quantities derived from the inputs.
Between input and output the shader is programmable with a great degree of freedom, like writing a python function, except it's written in C-like code (GLSL is based on C).
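To make the "vertex shader as a function" analogy concrete, here is a minimal sketch in plain C (GLSL is C-based). All the struct and attribute names are invented for illustration; a real shader has GLSL in/out declarations instead of structs:

```c
#include <math.h>

/* Hypothetical sketch: a vertex "shader" written as a plain C function,
   to illustrate the input -> output mapping described above. */
typedef struct { float x, y, z; } vec3;

typedef struct {
    vec3  position;      /* model-space X, Y, Z             */
    float attr[4];       /* vertex attributes: variable 1..N */
} VertexIn;

typedef struct {
    vec3  position;      /* output vertex position          */
    float iv[4];         /* interpolated variables iv1..ivN */
} VertexOut;

/* Anything can happen between input and output: pass attributes
   through unchanged, or derive new quantities from them. */
VertexOut vertex_shader(VertexIn in, float scale)
{
    VertexOut out;
    out.position = (vec3){ in.position.x * scale,
                           in.position.y * scale,
                           in.position.z * scale };
    out.iv[0] = in.attr[0];                /* passed through as-is      */
    out.iv[1] = in.attr[1] + in.attr[2];   /* an arbitrary derived qty  */
    out.iv[2] = in.attr[3];
    out.iv[3] = in.position.y;             /* derived from the position */
    return out;
}
```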
OSM2City Data:
- 3d models in AC files: vertex coords, colours rgb, normals xyz, texcoords uv, ..
- Building shader lists: vertex coords, data 1, data 2 .... data N..
Vertex shader input data:
- Vertex coords are x,y,z.
- The rest of the data are just variables (called vertex attributes). It just so happens that colours, normals, etc. are commonly needed vertex attributes: variables 1-3 could be a colour, variables 4-6 could be normals, and so on.
The flightgear core sees the OSM2City data, textures, any other data sources, and regional material definitions. Flightgear has control over what it sends as per-vertex data (vertex attributes), which 2d/3d texture data files it sends based on the regional definitions, and which constants it sends from regional definitions and environment state like the sun position.
Fragment shader:
What the fragment shader sees as input: the interpolated position x, y, z; any textures that are relevant for the object as 2d or 3d data: tex1, tex2 ... texN; and the interpolated quantities that the vertex shader has chosen to share: iv1, iv2 ... ivN.
The fragment shader mainly outputs the colour at a position on the screen.
Fragment shader input data from vertex shader:
These could include colours, normals, or texture coordinates taken directly from vertex attributes, or derived as needed.
Between input and output the fragment shader is programmable. For instance there's complete control over what data to read from 2d/3d texture tables and how many lookups happen. Texture lookups can use the original vertex attributes directly, or be based on model position, or be altered pseudo-randomly by an object ID.
(For clarity I left out constants called uniforms, which the C++ side can change every frame and which hold useful info like the sun position for that frame, plus a bunch of other stuff like setting depth.)
-------
To return to the case at hand: the texture coordinate that OSM2City sets is just a variable to the shader. The shaders can consider all the data supplied and decide what texture positions to look up. The shader can look at the texture coordinates and see a lookup is for the front of a medium sized office building in the current texture atlas. Then the shader can just use that 'medium office building fronts' classification and look up a future texture atlas, and also pseudo-randomly choose between 3 medium-front texture variants for that region.
There's complete detachment between texture coords and other data from OSM2City, and what the shader looks up and the colours at a screen position it outputs.
That's what I meant by "It can be done without any changes in OSM2City data or scripts".
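That classify-then-remap step can be sketched in plain C. All the region rectangles and the hash are hypothetical stand-ins, not the real atlas layout:

```c
/* Hedged sketch of the remapping idea: region bounds and variant
   counts are invented for illustration. */
typedef struct { float u, v; } vec2;
typedef struct { float u0, v0, u1, v1; } region;  /* a rectangle in an atlas */

/* Hypothetical: the row of the current atlas holding medium office fronts. */
static const region OLD_MEDIUM_OFFICE = { 0.0f, 0.25f, 1.0f, 0.50f };

/* Hypothetical: three variant rectangles in a future atlas for that class. */
static const region NEW_VARIANTS[3] = {
    { 0.0f,  0.0f, 0.33f, 0.25f },
    { 0.33f, 0.0f, 0.66f, 0.25f },
    { 0.66f, 0.0f, 1.0f,  0.25f },
};

static int in_region(vec2 t, region r)
{ return t.u >= r.u0 && t.u < r.u1 && t.v >= r.v0 && t.v < r.v1; }

/* Classify the texcoord OSM2City supplied, then look up the future
   atlas instead, choosing a variant pseudo-randomly by object ID. */
vec2 remap_texcoord(vec2 t, unsigned object_id)
{
    if (!in_region(t, OLD_MEDIUM_OFFICE))
        return t;                          /* not handled: pass through */
    region r = NEW_VARIANTS[(object_id * 2654435761u) % 3u];
    /* normalise within the old region, then rescale into the new one */
    float fu = (t.u - OLD_MEDIUM_OFFICE.u0) / (OLD_MEDIUM_OFFICE.u1 - OLD_MEDIUM_OFFICE.u0);
    float fv = (t.v - OLD_MEDIUM_OFFICE.v0) / (OLD_MEDIUM_OFFICE.v1 - OLD_MEDIUM_OFFICE.v0);
    return (vec2){ r.u0 + fu * (r.u1 - r.u0), r.v0 + fv * (r.v1 - r.v0) };
}
```

OSM2City's texcoord never changes; only the shader-side interpretation of it does.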
vanosten wrote in Tue Apr 28, 2020 4:51 pm:[B] The possibility to use real photos and submit them has been implemented ca. 5 years ago by radi. Since then, nobody has ever submitted anything.
The difficulty in getting photos of complete building fronts, and people not being photographers/artists, is why I came up with a design around parts of buildings. That way any/all sources can be combined, including things like dirt maps to show weathering. This way there's huge variation, and it's quicker as there's a smaller texture space to fill with textures people create and/or find.
"It /shouldn't/ need a completely unobstructed building facade photo taken under overcast conditions at the correct head-on angle. Existing texture photos have hard to avoid objects in front of buildings like: trees, cars, streetlamps, fences, bins." - i.e. NOT the current way
vanosten wrote in Tue Apr 28, 2020 4:51 pm:And IMO there is no reason to do so. Synthetic textures will almost always beat photos and will not have IP problems.
It doesn't matter to the shader if the sources are synthetic (e.g. based on noise textures, or texture packs), or parts of photos from wherever they can be found.
The idea is to get parts of buildings created synthetically or otherwise.
The current texture and any improvements can be chopped up into a new atlas in a second iteration.
-------
vnts wrote in Sun Apr 26, 2020 2:07 pm:A shader based effect that builds partially randomised building facades from small parts of buildings like: wall textures, roof tiles, dirt maps, windows, doors, skylights, air-conditioning units, ornamental detail and, seasonal changes/decorations.
Again, this is for some distant future iteration, when people with relevant core/GPU expertise are around. It shouldn't change plans in the immediate future, and certainly not for 2020.1.
vnts wrote in Wed Mar 18, 2020 2:01 pm:In my old experiments I used a building version of the model ALS ultra fragment-shader.
It's enough to specify the face number of each quad & the corner number of each vertex as vertex attributes. The vertex coords can be stored shader-side in an array of vec3s representing vertices. An array of ivec4s can contain the index number of the vertex at each of the 4 corners of each quad. It's then a simple matter of using the face & corner number to look up the index number of the vertex for each corner. Normals can be derived in-shader, saving 3 attributes: the shader can take the cross product of two quad edge vectors, or look up a table. If the face and corner number isn't available in explicit form, then it has to be reverse engineered from the tex coords and vertex positions in a future iteration that uses a custom texture sheet (when I did some experiments I just assumed a front face for all walls, as the face was convoluted to derive from tex coords and vertex positions)
At around the time the example building lists for EHLE/EHAM were released at the end of last year, I spent 3-4 weeks on some experiments, prototyping and looking at the conceptual difficulties of a new iteration.
This is more a prototype of a proof of concept.
Don't take the presence of screenshots as meaning an implemented version exists; it's just the concept. These are old screenshots.
I'll follow the way it was developed. These textures are all quick copies from the existing texture atlas.
A print of the new texture sheet format on a wall. Each row is a building part with variants. Ignore the big grey windows, which were from the old light map or normal map.
Wall, window:
Change building parts by changing the PRNG seed manually:
Take the wall texture and apply it (this is a super-low-res texture, and the non-resynthesizer tiling I used at the time blurred it):
(The wall needs to be mixed with a map of weathering which changes saturation and/or hue to make it less uniform.)
Tile huge windows + red wall:
Tile huge windows+dark wall:
Tile different window+red wall:
Tile different window+yellow wall:
Dividing building faces into different stories based on height: ground floor + repeated stories (shades of green) + left-over space + top ornamental decoration area:
Change repeated floor heights and the shader adjusts:
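The story division the shader does can be sketched in plain C. The heights are in arbitrary units and the numbers are examples; change the story height and the layout adjusts, as in the screenshots:

```c
/* Sketch of dividing a facade by height into:
   ground floor + N repeated stories + leftover space + ornament area. */
typedef struct {
    int   repeated_stories;   /* how many full repeated stories fit */
    float leftover;           /* space left under the roof ornament */
} StoryLayout;

StoryLayout divide_stories(float wall_height, float ground_floor_height,
                           float story_height, float ornament_height)
{
    StoryLayout s = {0, 0.0f};
    float remaining = wall_height - ground_floor_height - ornament_height;
    if (remaining < 0.0f) remaining = 0.0f;
    s.repeated_stories = (int)(remaining / story_height);
    s.leftover = remaining - s.repeated_stories * story_height;
    return s;
}
```

E.g. a 20-unit wall with a 4-unit ground floor and 1-unit ornament fits five 3-unit stories exactly; raise the story height to 3.5 and it fits four, with 1 unit left over.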
Putting it together, doing multiple stories with the 3rd window variant and adding doors (I had a door for the second story at that time):
The blue is just the light volume (circle) of the window for illumination at night. The same circle can be used to place stains and weathering around windows. It shows the type of control the shader has when it builds the building facades from parts, as the shader knows what's being drawn and where it is in relation to other objects.
This is more for general interest, and an example of what's possible without any change to OSM2City. As I said, with people being busy or away because of covid-19, and big graphical projects upcoming for FG, it may take quite some time before another iteration is done.
Nothing about the current way needs to change as far as OSM2City is concerned (the texture atlas can be chopped up and reused).