
OSM custom sceneries to terrasync


Re: OSM custom sceneries to terrasync

Postby Thorsten » Tue Mar 13, 2018 8:30 am

Okay, here's TorstenD's answer to the question, copied over from the mailing list:

Technically, I would not mind running osm2city jobs on the scenery machine at a regular schedule. This - together with the new Terrain - will increase disk space and bandwidth usage a lot. (The new terrain, covering less than a quarter of the globe, already weighs approx. 47GB - the old Terrain folder was "just" 88GB for the entire globe.)
Now add several hundred GB for worldwide osm2city data and we will probably end up with many users complaining about full disks, endless terrasync download times and overloaded internet downlinks. We might also lose some mirror providers along the way if they can't carry the load anymore. I just checked that the mirror I control sends out 33GB of data on an average day (13TB over the last year). And this is just one of four active mirrors.

I have a few ideas how to make this easier to handle, but they all need a bit of work:
- make terrasync more modular, e.g. allow t/s servers with only parts of the scenery (our DNS infrastructure should be able to handle this but work on the t/s-client is needed for it)
- make osm2city files more disk-space friendly, e.g. by tarballing a 1x1 tile into a single gzip'ed archive (requires work on the model loader code)
- eventually, if the above works, extend the 1x1 "tile as a tar.gz" to the /Terrain and /Objects folder, too.

I don't think I'll take care of any of the above items in the near future. I'll focus on getting the terrain building pipeline into a reliable shape before touching anything else. But I would not mind any contribution from anybody else in that area, even if it was only some conceptual work ;-)


So compression might indeed be an option.

In particular, I would be interested to know whether it would be possible to combine an AC3D type of scenery (the current way of doing things in osm2city) for e.g. larger buildings and cities with templates for buildings outside of cities (e.g. family houses) etc.


If you store a 'normal' building, you're storing, say, 24 vertices with 3 numbers each if it is already placed in suitable tile coordinates; otherwise you also store the translation and rotation parameters (six numbers).

For instanced buildings, you store the vertices once - all you need is a pointer to the right list. Since they're not placed absolutely, you also need position and rotation, and possibly scale transformations (nine numbers) - so you end up a lot cheaper in memory.
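
As a rough C++ sketch of the two layouts (the counts follow the numbers above; the names are made up):

Code:
  #include <array>
  #include <cstdint>

  // Explicit storage: every building carries its own geometry,
  // e.g. 24 vertices x 3 floats = 288 bytes per building.
  struct ExplicitBuilding {
      std::array<float, 24 * 3> vertices;  // already in tile coordinates
  };

  // Instanced storage: the 24 vertices live once in a shared template;
  // each placement stores only a reference plus position, rotation and
  // scale (9 floats = 36 bytes), so ~40 bytes instead of ~288.
  struct BuildingInstance {
      uint16_t templateId;  // which shared vertex list to use
      float position[3];
      float rotation[3];
      float scale[3];
  };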

The current random buildings are done in that way. Conceptually, rather than randomly creating coordinates for the buildings, the code could just read a list from a file and render the same random buildings at the positions specified. So rather than specifying the models, you'd just have to specify a list of placement coordinates for large, medium and small buildings.

But of course the variation you get this way is severely limited.

Assuming that 'most' buildings are 'simple' and can be represented by a 'generic house', this could drastically slash the size requirements of OSM2City.

I have to add though that the idea to split OSM buildings into 'simple, generic instanced' and 'complex explicitly modeled' has received a fairly lukewarm reception from the rest of the developers (and it's not something I'm terribly keen on seeing myself, but it does have the potential to cut disk usage).

Re: OSM custom sceneries to terrasync

Postby vnts » Tue Mar 13, 2018 12:29 pm

Quick extension of a previous experiment on an old nightly. Maybe useful for concretely showing the concepts & power when the shader is given meta-info on what is being rendered + a unique ID.

Geometry shader generated buildings on airport keep (concepts apply without geometry shaders).

Unique number ID based on the position of the 3 triangle vertices it rests on + the number of buildings spawned. Stretched cube + simple roof (8+2 vertices). Different color for each part to show orientation. A unique number means each building can have any amount of random properties. Rotation is randomised. (Grass colour shows the unique ID for each triangle. Some funkiness due to using rendering + shader for grass.)
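
(As a rough sketch of the ID idea, with C++ standing in for shader code - the exact hash doesn't matter, any stable function of the vertex bits works:)

Code:
  #include <cstdint>
  #include <cstring>

  // FNV-1a over the raw bits of the 3 triangle vertices plus the
  // per-triangle spawn counter. Any stable bit pattern works as a seed.
  uint32_t buildingSeed(const float verts[9], uint32_t spawnIndex) {
      uint32_t h = 2166136261u;
      auto mix = [&h](uint32_t word) {
          for (int i = 0; i < 4; ++i) {
              h ^= (word >> (8 * i)) & 0xffu;
              h *= 16777619u;
          }
      };
      for (int i = 0; i < 9; ++i) {
          uint32_t bits;
          std::memcpy(&bits, &verts[i], sizeof bits);  // hash float bits, not values
          mix(bits);
      }
      mix(spawnIndex);
      return h;  // feed into a PRNG for rotation, height, lit windows, ...
  }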

4 windows @ front (screen). Randomly lit windows (screen). Shows per-building variation.

Height, width, depth have freely customisable variation per building (e.g. min + random variation) (no variation versus variation).

A lot of buildings seem to be 1 cube, 2-cube L shapes, or 3-cube U shapes(?). Space saving by only specifying stretching/deformation. Maybe (?) it is fast to render multiple buildings with 1 shape and morph on the GPU (e.g. collapsing the roof of a house to become a cube building, or just rejecting certain faces in the fragment shader).

Perhaps (?) there's some cheap way to deform a polygon in the vertex shader so it creates approximate building shadows similar to trees. Maybe (?) even optional cheap instanced light poles & shadows on roads (or with geometry shaders(?).. perhaps geometry shaders may even work for an optional 3d version of the car shader with fake shadow).

Wrt. the discussion on space saving:

OSM data structures can be a tree of nodes(?). The lowest level node contains individual objects & groups of patterns of objects. Patterns like rows of houses defined between street segments with spacing + spacing variation. Higher level nodes can contain meta-data. Each step down specifies additional local properties & overrides/variation. Space can be saved for per-object properties by using few bits and specifying the range in meta-data (e.g. individual building heights in a small region are 8m, 9m or 10m. That only requires 2 bits per building, encoding 8 + [0,1,2], not a full number). It's also possible to add random variation generated in shaders to save bits, e.g. add decimal place variation to height to get 8.55m, 9.81m.
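
(A minimal sketch of the bit idea in C++, with made-up names - the node meta-data carries the range, each building carries only a 2-bit code:)

Code:
  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // Node meta-data states the base height and step; each building then
  // needs only 2 bits (values 0..3) instead of a full integer or float.
  struct RegionMeta {
      float baseHeightM = 8.0f;  // heights in this region: 8, 9 or 10 m
      float stepM = 1.0f;
  };

  // Pack one 2-bit height code per building, four per byte.
  std::vector<uint8_t> pack2bit(const std::vector<uint8_t>& codes) {
      std::vector<uint8_t> out((codes.size() + 3) / 4, 0);
      for (size_t i = 0; i < codes.size(); ++i)
          out[i / 4] |= (codes[i] & 0x3u) << (2 * (i % 4));
      return out;
  }

  float heightOf(const std::vector<uint8_t>& packed, size_t i, const RegionMeta& m) {
      uint8_t code = (packed[i / 4] >> (2 * (i % 4))) & 0x3u;
      return m.baseHeightM + m.stepM * code;  // shader adds sub-metre jitter on top
  }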

Maybe a resolution & data structure will somehow (?) naturally become clear by both sides listing requirements: listing the parameters used in OSM generation & the patterns, to make visible what can be quickly reproduced in a shader, what assembly of data can conveniently be done in C++ to send to the GPU via uniforms or instance-indexed data, and what meta-data is useful; and listing what type of data is useful for determining visually compelling aspects and visual cues from the air (I can't provide anything remotely like a complete or useful list). Maybe not (not a programmer)..

Kind regards,
vnts

Re: OSM custom sceneries to terrasync

Postby Thorsten » Tue Mar 13, 2018 1:09 pm

Geometry shader generated buildings on airport keep (concepts apply without geometry shaders).


Quick question - what is the input you've been using for the geometry shader?

I gather if you have a single vertex in the right place, you can expand it with a geo shader to a full house, or a car, or a tree, and move it around - but you somehow need that vertex.

Re: OSM custom sceneries to terrasync

Postby vnts » Tue Mar 13, 2018 2:50 pm

Thorsten wrote in Tue Mar 13, 2018 1:09 pm:what is the input you've been using for the geometry shader?

It's just the grass shader from what I posted previously. Just spawned a (random) amount of further geometry after the grass shells on each triangle were done, taking triangle area into account a bit (3 vertex positions as PRNG seeds, 2 axes created orthogonal to the triangle normal & random rotations/properties). Come to think of it, shader based buildings may allow randomly or methodically discarding common buildings like houses in the fragment shader to reduce density for people with old GPUs.
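
(Sketch of the axis construction, in C++ form with made-up names - any helper vector not parallel to the normal works:)

Code:
  #include <cmath>

  struct Vec3 { float x, y, z; };

  Vec3 cross(Vec3 a, Vec3 b) {
      return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
  }
  Vec3 normalize(Vec3 v) {
      float l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
      return { v.x / l, v.y / l, v.z / l };
  }

  // Two axes orthogonal to the triangle normal, used to orient spawned
  // geometry in the triangle plane.
  void planeBasis(Vec3 normal, Vec3& axis1, Vec3& axis2) {
      Vec3 helper = std::fabs(normal.x) < 0.9f ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
      axis1 = normalize(cross(normal, helper));
      axis2 = cross(normal, axis1);  // already unit length if normal is
  }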

Kind regards,
vnts

Re: OSM custom sceneries to terrasync

Postby vanosten » Sun Mar 18, 2018 7:18 am

vnts wrote in Tue Mar 13, 2018 12:29 pm:OSM data structures can be a tree of nodes(?). Lowest level node contains individual objects & groups of patterns of objects. Patterns like rows of houses defined between street segments with spacing+spacing variation. Higher level nodes can contain meta-data. Each step down specifies additional local properties & overrides/variation. Space can be saved for per object properties by using few bits, and specifying range in meta-data (e.g. Individual building heights specified are 8m, 9m or 10m in a small region. Only requires 2 bits variation not 3 - 8+[0,1,2] ). It's also possible to add random variation generated in shaders to save bits e.g. add decimal place variation to height to get 8.55m, 9.81m.

Maybe a resolution & data structure will somehow (?) naturally become clear by both sides listing requirements. Listing parameters used in OSM generation & patterns to make visible what can be quickly reproduced in shader, what assembly of data can be conveniently done in c++ to send to GPU via uniforms or instance indexed data, and what meta-data is useful. Listing what type of data is useful for determining visually compelling aspects and visual cues from air (can't provide anything remotely like a complete or useful list). Maybe not (not a programmer)..


I have two questions to this exposing my ignorance in the shader world:
  • Instead of encoding the building properties for each building: would it be possible to encode the building variation elsewhere (e.g. in an xml-file within FGData)? Then you only would need the same info as shared objects have and instead of the model path you would have an id, which then can be resolved in a datastructure.
  • who could define these buildings? I.e. how can "artists" help with making (regionalised) buildings of a large enough variety?

Re: OSM custom sceneries to terrasync

Postby Thorsten » Sun Mar 18, 2018 8:14 am

Just spawned a (random) amount of further geometry after the grass shells on each triangle were done,


That's really clever, I didn't think of scaling with the area myself... but I see potential...

Come to think of it, shader based buildings may allow randomly or methodically discarding common buildings like houses in the fragment shader to reduce density for people with old GPUs.


Yeah - we do that for the clouds, so it ought to work.

Then you only would need the same info as shared objects have and instead of the model path you would have an id, which then can be resolved in a datastructure.


That's just a question of writing the lookup code; I don't think we can do this out of the box.
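
Very roughly, something like this (hypothetical names, just to show how little is involved once the id is there):

Code:
  #include <cstdint>
  #include <string>
  #include <unordered_map>

  // Hypothetical template record resolved from an id instead of a model path.
  struct BuildingTemplate {
      std::string texture;  // e.g. "Textures/buildings-caribbean.png"
      int minFloors, maxFloors;
      float maxWidthM, minDepthM, maxDepthM;
  };

  // Loaded once from an XML definition (FGData or regional materials);
  // scenery records then carry only a small id plus a placement.
  std::unordered_map<uint16_t, BuildingTemplate> templates;

  const BuildingTemplate* resolve(uint16_t id) {
      auto it = templates.find(id);
      return it == templates.end() ? nullptr : &it->second;
  }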

who could define these buildings? I.e. how can "artists" help with making (regionalised) buildings of a large enough variety?


I suppose like the random buildings can be defined right now; for instance, for the Caribbean Stuart has a block

Code:
  <building-texture>Textures/buildings-caribbean.png</building-texture>
  <building-lightmap>Textures/buildings-caribbean-lightmap.png</building-lightmap>
  <building-small-min-floors>1</building-small-min-floors>
  <building-small-max-floors>2</building-small-max-floors>
  <building-small-max-width-m>20.0</building-small-max-width-m>
  <building-small-min-depth-m>8.0</building-small-min-depth-m>
  <building-small-max-depth-m>20.0</building-small-max-depth-m>
  <building-medium-min-floors>1</building-medium-min-floors>
  <building-medium-max-floors>3</building-medium-max-floors>


in the materials definition. So something akin to that could probably encode other template buildings as well (?)

Re: OSM custom sceneries to terrasync

Postby vnts » Wed Mar 21, 2018 11:30 am

A curiosity: I wondered if it was possible to create geometry shader cars & street light poles on current OSM roads. It turned out to be (conceptually) possible (got somewhere, to an extent).

Objects placed at one of the triangle strip vertices. No lighting. Different faces assigned different colors+1 white vertex at each face to show orientation.

Light poles (2 blocks) screen eg, eg. Shadows can be cast outside road geometry: eg, eg.

Objects: box, 3 box car.

How it worked & the idea:

1. Just combined grass & road shaders to get a framework as quickly as possible.
2. A 2nd road rendering pass (grass.eff -> roads.eff, separate vert/geom/frag shaders. Discarded the 1st pass fragments & spawned geometry after rendering, but it's easy enough to clean out unneeded road parts).
3. Skip triangles on the walls of the platform on which the road rests (using the local up dir & tri normal to avoid spawning geometry, or to discard fragments).
4. Work out the texture space road length and width directions in model space (see the sketch after this list): A). Find any 2 orthonormal basis vectors in the triangle plane. B). Find the 2x2 rotation matrix from 2d texture space coords to the 2 basis vectors. C). Transform the length & width tex space unit vectors to 2 known triangle plane vectors, expressing the directions in 3d model space. Find locations to spawn geometry (offsets relative to a vertex with known tex coords). Ran into a subtle bug with all 3 vertices being on lane boundaries & the width coordinate, resolved now - it resulted in the orientation of objects on one side of the road being off in the screenshots.
5. Step through the triangle extents and spawn a vehicle in a triangle only if its rear left corner was inside the tri (reduces duplicates). Vehicles overshooting at the end of roads may be more noticeable than with 2d cars (can spawn if any part of a vehicle was in a triangle & drop fragments outside the triangle, I guess).
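
(The sketch mentioned in step 4, in C++ form with made-up names - one way to solve for the u (length) and v (width) axes directly from one triangle's positions and UVs:)

Code:
  struct Vec3 { float x, y, z; };
  struct Vec2 { float u, v; };

  Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
  Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
  Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

  // Model-space directions of the texture u (road length) and v (road
  // width) axes, solved from one triangle. Degenerate if the UVs are
  // collinear, which shouldn't happen on a textured road quad.
  void roadAxes(const Vec3 p[3], const Vec2 t[3], Vec3& alongRoad, Vec3& acrossRoad) {
      Vec3 e1 = sub(p[1], p[0]), e2 = sub(p[2], p[0]);
      float du1 = t[1].u - t[0].u, dv1 = t[1].v - t[0].v;
      float du2 = t[2].u - t[0].u, dv2 = t[2].v - t[0].v;
      float r = 1.0f / (du1 * dv2 - du2 * dv1);
      alongRoad  = mul(add(mul(e1, dv2), mul(e2, -dv1)), r);
      acrossRoad = mul(add(mul(e1, -du2), mul(e2, du1)), r);
  }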

This is not efficient, but it shows the idea. I guess for a proper solution in an instanced GS system, the FG engine's geometry shader pass would draw only 1 vertex per road segment with data (?): direction, length, some measure of road curvature, traffic data, where to go at the end of the road or fade (e.g. turn a corner and go for a time, then fade).

It seems(?) (conceptually) possible to do the same thing using vertex shaders only & deformation patterns for each vehicle type. Most vehicles have symmetry along their length - this reduces the parameters sent to the shader. Rows of boxes can deform to pre-set patterns with random selection of deformation + texture lookup pattern. Maybe a universal texture look-up pattern might not look terrible(?) (5 sides of a cube: front/back/top/2 sides + normal mapping. Might be quicker for artists). Deformations: 3 boxes screen. Height change: eg. Box top faces can slope: screen, or be smaller so sides slope inward: screen. Possible to reject some geometry like pipes on trucks based on detail level. 2 rows stacked on top might be enough (5 boxes per row). Maybe requires X deformable vehicles to be instanced with each road segment (possible to reject fragments for short roads).

A vertex-shader-only technique would likely require instancing X deformable vehicles for each street segment length, or rejecting based on length. The vertex shader can calculate all deformed vertices in an array & pick the correct one.

It should be possible to sync street lights and moving car lights with the road in a fragment shader. Blending an emissive translucent overlay to illuminate terrain outside the road geometry may work too (additive blend eqn(?)).

The same principles would work for buildings. Maybe it's faster to collect multiple less common buildings under 1 instanced geometry and deform or discard parts (e.g. U & L shaped buildings, or complicated skyscrapers & buildings made from multiple boxes).

---------
vanosten wrote in Sun Mar 18, 2018 7:18 am:Then you only would need the same info as shared objects have and instead of the model path you would have an id, which then can be resolved in a datastructure.

With regards to a unique ID & minimum requirements: anything can be used as an ID - any collection of numbers or even character codes. Even position. The problem with using raw world position is that it's numerically unstable due to the limited precision of 32 bit floats. A truncated number is stable (but must be truncated to several hundred meters). Say the resolution is 1km (likely lower).

If the FG engine provided uniforms containing integer coordinates on a 1km grid + a floating point offset from the nearest grid point for each model, it would be sufficient. That requires nothing stored in OSM data. It's possible to hash the integer world coordinates to a smaller number to reduce uniform size. Tile origin world position + float offset, or a unique item number in the tile, may work instead. The main requirement is that the world coordinate doesn't repeat over a visible area, and that patterns don't recognisably recur at other locations (unlikely with changing OSM & terrain data).
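
(A minimal sketch of the split, C++ standing in for the engine side, names made up:)

Code:
  #include <cmath>
  #include <cstdint>

  // Split a double-precision world coordinate (metres) into an integer
  // 1 km cell index plus a float offset inside the cell. The cell index
  // is bit-stable and can seed a PRNG; the offset stays small enough to
  // keep full float precision for rendering.
  struct GridCoord {
      int32_t cell;  // which 1 km cell
      float offset;  // 0..1000 m within the cell
  };

  GridCoord toGrid(double worldMetres) {
      double cell = std::floor(worldMetres / 1000.0);
      return { static_cast<int32_t>(cell),
               static_cast<float>(worldMetres - cell * 1000.0) };
  }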

For an OSM data structure that is a tree of nodes, with patterns & individual structures at the lowest level, the engine does the traversing. If the engine is able to determine a coordinate for a pattern at the lowest node (say a collection of street descriptions), then if each street description was numbered, that would be sufficient to create an ID. Even if the OSM data didn't explicitly contain a number, the position in a list is sufficient for the engine.

With patterns being provided, I think the FG engine is in the best position to deal with this? That gives freedom to design data structures as you would normally. There may even be a solution that doesn't need OSM data and should work even now (integer grid + float offset).

vanosten wrote in Sun Mar 18, 2018 7:18 am:Instead of encoding the building properties for each building: would it be possible to encode the building variation elsewhere (e.g. in a xml-file within FGData?)

Wrt. freedom over OSM output & multiple data outputs & separate FG data sources & requirements like iteration + backwards compatibility, this is the best I can do from a conceptual overview (AIUI from looking at the mathematical requirements for the end result):

1. Freely custom data structures including OSM output: multiple files -> 2. Requirements of terrasync (not too many separate files) -> 3. Freely custom FG engine interpretation from multiple data sources, assembling geometry from patterns -> 4. GPU.

This is only a conceptual overview (the amount of freedom will depend on FG systems & performance & work). The main thing (it seems to me) is that there is freedom to arrive at a data structure definition and move from there.

It's possible to have separate files: A). Low level pattern data describing buildings and streets (OSM output, high volume, rapidly changing). B). Medium level single data structure file containing meta-data, in FGData or terrasync (OSM output, sizeable). C). Files created by OSM with high level descriptions as starting points for editing & refinement by people. D). Regional & art definitions created by people (not OSM output) - these can supplement or even override other data.

A: Slow to build. Maybe the format is locked in for backwards compatibility (?). Bundling similar things may help future SIMD CPU processing (e.g. interpreting 4 streets at a time). Examples: street lengths & curvature, street pattern type (two rows of houses between 2 streets, 1 row), small or isolated exceptions to patterns.

B: Frequent iteration. Maybe the format allows adding extra data fields in later versions with more detail, with data interpretation such that older FGs ignore fields past a certain point (so terrasync remains compatible with older versions). Requires FG to traverse 2 deepish data structures at a time.

C. -

D: Iterated independently of OSM

The engine can collect A to D and assemble the result to send to the GPU. Lots of freedom to define data structures.

I can't really give a list of data that is complete, or certain to be valid & compelling (not a programmer). But types of data to include:

1. Data stated directly in OSM.

2. Derived data: data calculated by analysing OSM data. When in B. or C. it may well end up being used by other areas like terrain rendering or random objects. Biggish urban blobs like cities or towns could be given a center and a weighting based on size. This could increase the density of paths in procedural texturing, or of man-made objects placed by the random objects system. It can affect the density of random lights visible from far away. It can probably affect skyglow/light pollution/smog too (?). It can affect little touches like the maintenance state of roads (or dirt, sand deposits). It is likely useful to know how close buildings are to the edges of 'urban islands' in cities, or to city centers. Attaching locality names to the data structures might be useful for other things. Building rendering can switch material types and age for country houses or in small towns, as well as reduce the maintenance state of roads etc.

3. Data from other sources like landclasses, as you suggested (e.g. it may be useful to know what landclass roads go over so mud, rubbish, or sand can accumulate at the sides, and desert roads aren't given green sides). Maybe useful to know if houses are backing on to woodland (no fence, or a different type of fence).


Wrt. the discussion on templates and encodings:

After recent experiments, this is where I got to wrt. finding a technique that is space saving & fast:

Describing patterns will massively reduce space usage (a building itself is a repeated pattern).

FG engine can interpret patterns. Example: defining street segments & houses together. e.g. 2 streets+2 rows of houses in-between, 2 streets+1 row of houses in-between, 1 street with 1 row of houses on one side and other terrain types on other side. 2 curved streets with an inner and outer curvature can have fan shaped blocks.

Data for 2 rows of houses between streets might consist of: street start + direction + length for one street, plus the 2 streets on either side to define the block shape; house spacing and variation; and data affecting the probability and style of fences/walls, trees, water storage, garages, secondary buildings (this data can be placed in B, C & mainly sourced from D).
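
(As a sketch of how compact such a pattern record could be - hypothetical fields, nothing agreed on:)

Code:
  #include <cstdint>

  // Hypothetical record for a "2 rows of houses between 2 streets" block;
  // the engine expands it into a full block of instanced houses.
  struct StreetBlockPattern {
      float startX, startY;    // street start, tile-local metres
      float directionRad;      // street heading
      float lengthM;           // street length
      uint8_t patternType;     // 0 = 2 streets / 2 rows, 1 = 1 row, ...
      uint8_t houseSpacingM;   // nominal spacing
      uint8_t spacingJitterM;  // random variation applied per house
      uint8_t styleFlags;      // bits: fences, trees, garages, water storage
  };
  // ~20 bytes describe a block that would otherwise need hundreds of
  // explicitly stored building footprints.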

The engine can simply construct the entire street. Instancing is just when the OSM data is such that the engine can bundle similar geometry, upload 1 model, and render it in multiple places incredibly fast.

A possible instanced model might consist of: 1 street segment+2 blocks on either side+houses+secondary buildings. Optional detail levels can add: fences+street light poles+trees for garden+things noticeable from air like pools or parked vehicles+polygons on the ground for shadows & lights cast on the ground from street & building lights+building details like geometry for chimneys.

Streets can then be constructed by placing models in a row (uploading one model and drawing the instances incredibly fast). It's possible to randomly reject geometry like trees or chimneys to create variation. The full suite of placement variation & deformation is available. Of course, it's also possible to instance individual objects instead of collecting them in one model. Whatever is fast(?).

When an entire block of streets is defined, it may be faster to reject blocks on one side of the street and still use one model, instead of a street with houses only on one side. Curved street segments need some definition of curvature(?).

Blocks & streets are being assembled in one model. So it's possible to include proximity data so street lights affect buildings and trees (and vice versa).

This level of detail doesn't need to be there from the start. It can be added as time goes on. Because the description of patterns is on a high level, FGdata, FG engine, and shaders aren't locked in by OSM data format & backwards compatibility.

It's just an example (speaking conceptually), but describing patterns of things can massively reduce space, allow choosing the fastest instancing method, create more flexibility & space for future iteration. AIUI(?). Existing streets can be matched to a pattern (e.g. calculating variation min, max, average, std deviation).

vanosten wrote in Sun Mar 18, 2018 7:18 am:who could define these buildings? I.e. how can "artists" help with making (regionalised) buildings of a large enough variety?


To add, as the art side of things seems a bit unclear: the art needed is just parts of buildings (AIUI). That part comes after the first shader iteration. It can start with chopping up existing OSM textures to create a lot more variation than is present currently. But it can be iterated independently of OSM data structure development.

It's a lot easier when only parts of images are needed. Can use open source image sites like pixabay. No need to get unobstructed shots of facades. Can use high quality closeup shots of parts of surfaces (walls, roofs, windows), or open source material textures. Very specific art: seasonal decorations (Christmas), roof details, advertisements, window interior shapes so windows on buildings near helipads and terminals have interiors with some parallax movement. The end result is that it's (AIUI?) possible to construct very complex facades that have natural variation and don't look cloned when there's a row of similar buildings.

Just greyscale textures for surfaces and wear & tear may work (?), e.g. for roofs & walls. Colours can be randomly selected from a list, or from a list of ranges & combinations for each building part category (stated in regional building definitions). It's possible to define how common colours are in a region.

As Thorsten said, regional definition of effects can be done (and iterated with better effects over time). It's possible to have per-building lighting, with randomness. Example: window lights lighting up surrounding walls. Lights come on 1 by 1 randomly as night falls (increased probability), and switch off as people go to sleep. Procedural texturing can be applied for wear and tear, or materials (a lot like snow, which currently accumulates on roofs). When the shader knows what each part is, it's possible to do shader based ambient occlusion (near window sills or air conditioners), or shadows (vents & chimneys on roofs), or walls being lit up. The limitation is how compelling each feature is viewed from the air vs the performance hit - detail/shadows/lights on roofs affect the scene more than from the ground.

Edit: Perhaps(?) to save space, there is some library that can handle packing & unpacking data with custom bit lengths, to make tweaking the data structure quick (the OSM processing side & FG engine side might require 2 libraries). Maybe it will also make it convenient to change the rules for quantising values. This problem must occur frequently enough in other uses to likely have support.

(Current conceptual understanding).

Kind regards,
vnts

Re: OSM custom sceneries to terrasync

Postby vanosten » Sun Apr 01, 2018 9:05 pm

Having read up a bit on geometry shaders, I really like the idea. And I would like the idea even more if it could be combined (for buildings, it is) with the possibility to have "generic"/"templates" for smaller buildings (which happen to be more region specific than large buildings in towns), while still keeping osm2city AC3D for larger buildings. Notice below that I am much more conservative on what is possible. Simply due to the fact that sheer man-power might be a limiting factor (at least it has been for osm2city).

Previously I was told that it is important to keep the number of (I guess primary) nodes in the 3D world to a minimum. Therefore I guess it is still a good idea to have an osm2city-generated mesh as the basis for geometry shaders (instead of a lot of single entries in the stg-files).

As there is already logic in osm2city for placing buildings in the right spot while preserving as many OSM attributes as possible, I guess it would be better to have heuristics in osm2city and let the shader "only" do the presentation without a lot of logic. So osm2city could for each building have e.g. one triangle giving with one side the dimension of the front (including direction and position), and with other attributes (e.g. size, colours) an entry in a lookup map shared with osm2city to know the exact building to place. As the shader could both read a look-up map with different attributes and the linked 3D-files, it should be easy to have osm2city and the shader work together (one doing pre-processing, the other working at run-time). A necessity would be to program a 3D-file format loader. AC3D might be the obvious choice, but OBJ might be a good alternative. I say this because with the world-models clone from the original World2XPlane there is a quite comprehensive library of buildings available (GPL v3), which in my opinion is by far richer than what is currently available as shared models in FlightGear's scenery database - but it might require a somewhat richer repository format than the current csv.

In other words I believe it would be a pity to create a shader that places "random" buildings and sets aside real-world input from OSM, e.g. Simple 3D buildings, colours, height etc.

Regarding street lamps: I believe it would be better to let an algorithm not using vertices determine the placement for e.g. the following reasons:
  • The vertices are somewhat arbitrarily placed in OSM and then again in osm2city - real world street lamps tend to have a somewhat constant distance between them.
  • You might be able to derive the type of highway from the texture used, however I guess it would be better to know the highway type from OSM and then from there to use a heuristic to determine the streetlamp type.
  • You might be able to derive whether or not a street is lit from the vertex' or surface's material, however it is specified originally in osm2city.

Therefore I guess it could be more convenient to place light poles based on triangles set by osm2city in an AC3D-mesh giving the position, heading, and type (e.g. encoded in colour), even if that costs a few bytes. And it would IMHO be a good place to start with a PoC.

As I am glad for the car shader by Thorsten, I have no opinion regarding moving cars etc.; but if you need a specific structure from osm2city, then please let me know. Btw: one thing to solve for cars would be e.g. static bridges, as there are no osm2city roads put there. Another thing to solve would be osm2city residuals in roads, which are currently not high on my priority list.

I did not understand the discussion about the 1km grid. I can tell that osm2city works with 2km*2km grids (default, configurable size).

Re: OSM custom sceneries to terrasync

Postby vnts » Fri Apr 06, 2018 11:46 pm

vanosten wrote in Sun Apr 01, 2018 9:05 pm:Having read up a bit on geometry shaders, I really like the idea

Geometry shaders are not needed for buildings (Perhaps not for cars either). The techniques will work with vertex shaders.

I just used geometry shaders as they allow prototyping using just math & logic (without creating a model to load vertices & attributes each time). I was aiming for vertex shader compatibility.

Vertex shaders can be used well when model geometry is known. It may(?) be faster to collect different types of objects in a model with more vertices, to draw under a single instance with deformation - compared to drawing models with minimum vertices under several instances. It's also possible to reject models or parts of models in the fragment shader. Examples: a very complex roof can collapse to make a flat roof. A car that needs 10 boxes can deform into a simple truck that needs 3 boxes. A complex multibox skyscraper can deform into a simple rectangular block or something in between.
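
(A sketch of the deformation idea in C++ form - each vertex carries a second target position, and a per-instance weight morphs between them:)

Code:
  // Vertex-shader-style morph: a pitched roof collapses to a flat one as
  // the per-instance weight goes from 0 to 1.
  struct MorphVertex {
      float base[3];            // gabled-house shape
      float flatRoofTarget[3];  // same vertex in the cube-building shape
  };

  void morph(const MorphVertex& in, float roofCollapse, float out[3]) {
      for (int i = 0; i < 3; ++i)
          out[i] = in.base[i] + roofCollapse * (in.flatRoofTarget[i] - in.base[i]);
  }
  // With a handful of such targets, the same 24-vertex mesh renders a
  // gabled house, a flat-roofed block, or anything in between.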

Geometry shaders are slower on older hardware & possibly older drivers (AIUI???). Google seemed to think so (e.g. http://www.joshbarczak.com/blog/?p=667). But recent drivers may have changed that. Vertex shader alternatives for buildings & roads may be better if geometry shaders run slower, are buggy, or if they do not run on a significant percentage of FG contributors' & users' systems that are still fast enough to display OSM in some form. Even the latest DX12 features are supported by cards going back to the NVIDIA 400 series - last page in list (it is mostly a question of how much computing power is available).

AIUI (?) OSM survey data varies in quality and completeness. There are probably comparisons of OSM data vs government data somewhere. E.g. older surveys won't be detailed (maybe no fields for detailed building data?) or surveys simply won't be complete/thorough (they may contain inaccurate imported data & data filled in by algorithms).

Multiple types of definitions will save space. FG can fill in the blanks at runtime. (Possible techniques: GPU generated randomness, instancing, collecting multiple different things under a generic instanced geometry which can deform into different things.)

The opportunities (AIUI!) for space saving are to:

A. Avoid filling in smaller scale details where OSM survey data is absent. Larger scale details like rows of streets may need to be specified in areas without OSM coverage.
B. Avoid specifying each detail where it's shorter to specify a pattern + variation.
C. Avoid using more bits than necessary (e.g. 32 bit floats for height.. when an 8 bit integer will do. 16 bit integers should be enough for demanding things). Accuracy is limited by the instruments used, or the quality of the average surveyor's guess. E.g. guesses of heights are meaningfully limited to the nearest meter; guesses of colour could be an 8 bit integer representing hue instead of 3 RGB values in OSM. Similar to the colour idea: avoid sending data 'filled in' by an algorithm & instead derive the original raw measurement and its likely accuracy.
D. Avoid recording smaller scale detail (fewer bits) than meaningfully matters in the context of what the sim is used for. Example: something that is not likely to be visually compelling or matter to the simulation even in low altitude flying: the spacing between 2 houses being rounded off a bit (if houses are 1 or 2 meters closer or more distant it will not make a difference). Space needed or performance might require a compromise & discussion between the parties involved: OSM, terrasync, FG engine and GPU.

1. OSM data structure: multiple files -> 2. Terrasync requirements (e.g. not too many separate files) -> 3. Freely custom FG engine interpretation from multiple data sources, assembling geometry from patterns -> 4. GPU.

In other words, to take full advantage of the opportunities, the OSM data structure should describe the minimum necessary to convey the information (patterns), with only the data that's compelling & important for simulation, at the minimum relevant precision given the accuracy of the data, leaving the rest out.

(AIUI 3d file formats massively over-specify repeated OSM patterns, and are not designed to conserve bits(?). I think this is Thorsten's objection to them?)

The FG engine & shaders AIUI can put together models to instance in future iterations - based on the number of features requested (like light poles, roof detail, trees in gardens, fences, parked cars, pools).

vanosten wrote in Sun Apr 01, 2018 9:05 pm:while still keeping osm2city AC3D for larger buildings..
In other words I believe it would be a pity to create a shader that places "random" buildings and sets aside real-world input from OSM, e.g. Simple 3D buildings, colours, height etc.

What is an example of the most complex building described (house or skyscraper)?

(If I follow correctly) The question is what is the raw data collected by OSM surveyors?

Reading the page, it seems the raw data entered into OSM categorises roofs and buildings: common forms, with parameters for scaling different parts?

A complex roof can be deformed into a flat roof, or something in between. It should be conceptually possible to collect multiple building types under one geometry with enough vertices to deform. e.g. this is just 3 boxes (24 vertices) & can represent a lot of shapes.

A lot of the complicated buildings may turn out(?) to be collectable under one general model with a bit more vertices than most models. That might save a lot of space & leave very little that requires a 3d model or detailed specification.

Complicated inner city building complexes may(?) be broken into L & U shaped parts (2/3 boxes). They can be placed next to each other to look seamless. Sides of buildings next to each other will not be visible. Definitions can state that one side is seamless so border textures aren't drawn AIUI (proximity info helps with ambient occlusion and light being cast by nearby building windows etc.).

The main space saving opportunity is to avoid stating every vertex (3d file formats do this?). Pattern analysis & multiple definition types can avoid restating complicated buildings. If there were 10 complicated buildings of the same type in a street, the definition head can contain the average. The list can then contain the 10 variations + the degree of random variation. If a lot of fields are identical to the head, it is possible to avoid restating them.

If a city contained very different types of buildings needing different instances, maybe adding a list of building types to the child nodes of the city node would make it easy for FG to load data to the GPU & avoid wasting VRAM. This type of thing would become clear later in the process, /if/ it's worthwhile. If the FG engine had the ability to stitch together a texture sheet from individual images based on regional definitions, it could save VRAM and allow more detailed textures (then again, more recent GPUs have many GB of VRAM). A simple example is creating a forest texture sheet by adding different tree species (so tree combinations for each forest don't require their own texture sheet).
vanosten wrote in Sun Apr 01, 2018 9:05 pm:Regarding street lamps: I believe it would be better to let an algorithm not using vertices determine the placement for e.g. the following reasons:

Oh, placing street lamps at vertices was just a quick demonstration that such a thing was possible. The same placement technique as for cars is available now. A varying number of objects can be generated from the most relevant triangle depending on triangle size (with arbitrary spacing & the same degree of control as the fragment shader).
vanosten wrote in Sun Apr 01, 2018 9:05 pm:As I am glad for the car shader by Thorsten, I have no opinion regarding moving cars etc.; but if you need a specific structure from osm2city, then please let me know.

This /is/ Thorsten's car shader in 3d form. It was intended to make it possible to sync headlights on the road surface with 3d vehicles. LoD at long range could drop to the 2d version using the same texture.

This was just to prove the concept. It may even be worthwhile to use a vertex shader technique mentioned before (i.e. fast on older systems).

I think the point where a final polished 3d version is released is(?): after the work on the FG engine side defining the format of the data sent to the GPU (e.g. 1 vertex per street segment (most efficient), or another method). That's after the OSM data structure is finalised. That's several stages away & lots of time (??).

What is the next step in the process? (e.g. putting forward a list of data to be included from the OSM side. Terrasync, FG engine work, & Thorsten from the GPU side may then look at: space & format & engine/shader work & requests for useful additional data like proximity or names of cities/districts/notable features. After that maybe a data structure, reading & writing, falls naturally into place? Not sure if I follow or know all the complications & can't really comment.)

OSM can specify in the shortest possible manner and it should be conceptually possible for FG engine & shaders to do roads fast (and entire house blocks with roads).

Example: street start position, direction, street type data, curvature of the path. It should be possible for the FG engine to instance 1 road segment model. These segments can be curved and stretched in the vertex shader based on the curvature definition. Like a large loose spring (google image). Curved sections require more vertices. Straight sections strictly only require 2 triangles. Multiple segment models can be used over a strongly curved path, fewer for a shallower curve (or just use separate high poly street segments when curving). Defining house blocks by street paths where possible is powerful. It needs a good space saving definition of curvature to be worked out.. (I can only offer a bare minimum & inefficient one: stating a variable number of points along the path as offsets from the start position, and leaving it to the engine & shaders to interpolate a curve)..

Specifying the minimum data for streets allows the engine to iterate in future (e.g. by instancing a bunch of points with data for geometry shader cars if vertex shader cars won't do).

The road object shader I did was a way to extract the required information from the current data format (It was made with minimal knowledge of the model format/FG scene conventions, and made no assumptions on how the mesh was constructed.).

I did a bit more after I last posted. The moving part of the car shader works.

Arbitrarily variable spacing: 1, 2 (https://imgur.com/a/eVIfa), 3.

Add vehicles at intervals screen. Add vehicles following same direction as 2d vehicles + randomly empty slots like 2d: day, dusk.

Add rough shadows to light poles (blue and green on opposite side of street): eg.

It was just a concept proving such a thing could be done (maybe?). It's probably safer to rewrite it rather than build on it directly :).

A polished 3d version using any technique needs a lot more work on: finding the fastest method, finding & discussing a texture format that is suitable for artists, LoDs, working out exactly what things give the most bang for buck for vehicles (subject research), clearing up glitches and driver issues (there might have been one), and lighting & fragment shader work. That's more in the territory of someone like Thorsten with detailed knowledge of what's fast, FG internals, shortcuts based on scene & model data conventions, the art side, compiler issues etc. (I'm new to FG & not a programmer professionally)..

If the current concept is useful in some way I could clean up irrelevant experiments, maybe play with a better vertex shader compatible object deform function & hand it over :) (the shader is pretty much the conceptual description above though).

Where I got to: I still have some issue with finding the scale of the texture space unit basis vectors in model space. But texture space direction vectors work in model space, so it's possible to work in model space. Could be a bug or some detail of the coordinates. I used a quick substitute coordinate system and it (more or less) worked. There was some extra funkiness, possibly from the OpenGL effect setup, drivers, substitute coordinate system glitches, or minor bugs. Vehicles disappearing at the end of road segments is only a little more noticeable in 3d.
vanosten wrote in Sun Apr 01, 2018 9:05 pm:Btw: one thing to solve for cars would be e.g. static bridges, as there is no osm2city roads..

If these are static models then it becomes hard(?). For a vertex shader version: it /may/ be possible for the FG engine to instance a transparent road just over the bridge if details are present in the OSM data structure (vehicles may possibly(?) be instanced along with the road). The frag shader can also just discard road fragments. It's helpful to know exact bridge heights and the path; otherwise cars may clip through geometry or go underground. I guess it won't look too bad to put it at a safe height over the bridge.

A possible geometry shader version would draw a single point at the start of the bridge with details of a path(?). The point would just be in a list of instanced points that work for all road paths(?). Either way, a safe path may have to be defined in OSM data, or derived by the engine at runtime - unless there's another way, like road surfaces in static bridge models being already tagged.

vanosten wrote in Sun Apr 01, 2018 9:05 pm:I did not understand the discussion about the 1km grid. I can tell that osm2city works with 2km*2km grids (default, configurable size).

This can mostly be done by the FG engine (it seems?). It's just that the currently available unique seed in shaders is based on real world position. The version available has numerical issues. It doesn't stay constant & changes with the view (it's derived using an OSG view matrix that wasn't intended for this use).

The problem is floating point representations are of the form A * 2^B. 32 bit floats have too few bits in A.

The nature of PRNGs is that even a one-bit change in the seed causes a complete change in the values generated. It's possible to round the position to keep the seed constant, but that means rounding to several hundred meters. The result is that close-by objects end up with the same seed. Rounding off XYZ coordinates results in a grid with the same seed (that's what I meant). So data sent by FG specifically designed to be a seed would seemingly help.

Any set of data (numbers) can be used as a seed. 2 examples:

If the position of the tile origin was sent by the FG engine: tile origin (rounded off) + floating point model offset in tile = stable seed.

The FG engine should also be able to construct some form of seed when traversing a future OSM data structure, e.g. for a house: real world position of the high level city node origin (lat/lon or XYZ) + urban island position offset in a child node (3 16 bit integers) + number in the list of streets or street offset in a child node (16 bit integer) + house number (8 bit integer). The FG engine could hash the numbers together, or just send everything as instance data to be hashed in the shader.
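
(Sketch of the hashing-together step, C++ with made-up names, in the spirit of boost::hash_combine:)

Code:
  #include <cstdint>

  // Fold the hierarchical fields into one seed, one field at a time.
  uint32_t combine(uint32_t seed, uint32_t value) {
      return seed ^ (value + 0x9e3779b9u + (seed << 6) + (seed >> 2));
  }

  uint32_t houseSeed(uint32_t cityNode, uint16_t island,
                     uint16_t street, uint8_t house) {
      uint32_t s = cityNode;
      s = combine(s, island);
      s = combine(s, street);
      return combine(s, house);
  }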

Just 2 possibilities. There are likely far more elegant ways. AFAICS this is mostly an FG engine responsibility, and useful in a lot of objects not just OSM.

(Again mainly speaking conceptually.)

Kind regards,
vnts

Re: OSM custom sceneries to terrasync

Postby vnts » Fri Apr 13, 2018 7:11 am

After some discussion in PMs with vanosten about this topic & progress so far.. There are opportunities to save space by describing shapes. The problem can be cut down to size one part at a time, so that it falls under the constraints: terrasync space, bandwidth costs, user download bandwidth & hdd space limits.

Regardless of the overall approach taken in other aspects, this specific part is an example of something that has to be looked at in due course anyway.

The idea here is to eventually get feedback from GPU, FG engine, Terrasync side, and end up with an agreed data specification for this part. That would give OSM2City side something concrete to implement. So there is some traction which has been absent (hopefully more parts can be negotiated so data falls within constraints).

(Not saying this is the most elegant, fast, or space saving way - just that it's conceptually a large improvement and - a minimal /starting point/ that's still better than nothing.)

A data format specification to replace existing OSM2City roads & pylons with descriptions of shapes instead of specifying every vertex


An important thing to note is that this concept should be a tweak to processing already done by OSM2City to create roads & pylons.

It saves effort on the OSM2City side dealing with creating a mesh and fixing bugs. It's a lot more straightforward to let instancing handle it.

This is outlining a basic, minimal, inefficient way( :mrgreen: ). It's still far more efficient than specifying every vertex.



OSM2City processing:

Instead of each quad (or 2 triangles), a start point and a list of points along the center of the road are created. It's possible to just replace quads (roughly 2 triangles) with a point.

As far as I can see it also saves OSM2City the effort of creating & texturing road meshes and fixing visual artifacts(?). Similarly, pylons only need a start + a list of points.



Data Format:

- Start point data includes additional fields for descriptions that don't come from regional definitions. Examples: what landclass the road goes over so dirt & sand can accumulate, street lights.
- 3 16-bit integers per point: ~1.5 cm accuracy over a 1 km winding road (1000 m / 65536; see the sketch after this list).
- 1 16-bit integer giving the coordinate along the road curve (an offset from the start can't be used for curved roads). Needed for texturing, cars, streetlamps etc. If a road needs to be split into 2 sections the coordinate can continue from where it left off.
- This avoids specifying every vertex of the triangle strips defining the road surface and the 2 walls on each side that are needed so the road doesn't seem to float in the air.
- The specified data can be integrated into the final data structure (to reduce the number of files).
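
(A sketch of what such a record might look like - a hypothetical layout, just to make the sizes concrete:)

Code:
  #include <cstdint>
  #include <vector>

  // One road: a full-precision start point, then quantised centre-line
  // points. 16 bits over a 1 km extent is ~1.5 cm (1000 m / 65536).
  struct RoadRecord {
      double startLat, startLon;  // or a tile-local start position
      uint16_t flags;             // landclass beneath, lit at night, ...
      struct Point {
          uint16_t x, y, z;       // offset from start, quantised
          uint16_t alongRoadM;    // distance along the curve (texturing, cars)
      };
      std::vector<Point> centreLine;  // 8 bytes per point
  };
  // The old mesh spends several full float vertices (position, normal,
  // UV) per quad on the same stretch of road.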



FG engine:

- A road segment model spans every X points, or fewer. This model is instanced.
- Each vertex in the segment model gets vertex attributes for the 3 nearest points along the road center, so vertices can shape themselves.
- 1 8/16-bit integer: each vertex gets a vertex attribute giving its position on a curve between 3 points. Used when there are many vertices between road center points - and to deal with fewer than X road points or just straight roads.
- It's possible to calculate the coordinate along the road curve on the CPU, but it requires CPU time.
- Road description fields are sent via per-instance data.



GPU:

- Vertices in the model position themselves based on the description of shape. In this instance it's the 3 closest points and the percentage position along the curve between those 3 points.
- Vertices can use some form of 3d polynomial curve between points to position themselves (see the sketch after this list)..or something.
- The direction across the road can be derived from the local up-vector and the road direction between points.
- Instancing is blazingly fast. It allows future iteration without limits via FGdata/ALS - nothing is frozen. AFAICS instancing is very low level, so even with next gen scenery these features are what would be used(?).
- It makes it conceptually possible to add arbitrary geometry detail as well as texture and lighting detail in future. That includes moving cars & street lamps with shadows. Proof of concept: Cars, Vans, Trucks. Multiple types at once. Effect in motion. (Everything uses the same 20 vertex mesh in 1 tri strip that deforms + a very short description of shape. This scheme was deliberately designed to be a proof of concept for vertex shader mesh deformation & shape description.)
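
(The curve sketch mentioned above - one possible interpolation, a quadratic through the 3 points at t = 0, 0.5, 1; C++ standing in for the vertex shader:)

Code:
  struct Vec3 { float x, y, z; };

  // t is the per-vertex curve parameter attribute in [0, 1].
  Vec3 curvePoint(Vec3 p0, Vec3 p1, Vec3 p2, float t) {
      // Lagrange weights for nodes 0, 0.5, 1; they sum to 1 and the
      // curve passes through all three road centre points.
      float w0 = 2.0f * (t - 0.5f) * (t - 1.0f);
      float w1 = -4.0f * t * (t - 1.0f);
      float w2 = 2.0f * t * (t - 0.5f);
      return { w0 * p0.x + w1 * p1.x + w2 * p2.x,
               w0 * p0.y + w1 * p1.y + w2 * p2.y,
               w0 * p0.z + w1 * p1.z + w2 * p2.z };
  }
  // The vertex then offsets sideways by half the road width along
  // cross(localUp, curveDirection) to form the strip.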

Again, this is a minimal proof of concept. Thorsten will have a better idea of the potential complications and constraints on what is possible, and can put forward a better signed-off version (taking into account data formats, FG engine side, and terrasync side complications).



Terra-sync space constraints:

There are even more space saving descriptions, and better shaping methods, but even this is a huge improvement. It's a tradeoff: speed, space/bandwidth, future flexibility, compelling scenery VS more FG engine features & OSM features & shader work.

- Road center points need not be evenly spaced apart. Curved sections require more points. Straight sections require 2. How many road points are needed depends on the interpolation algorithm and how many interpolation parameter variables each vertex is given by the FG engine.
- More sophisticated road curve definitions may be faster to do on the OSM2City side.
- Road center point format: maybe it's shorter to use offset coords relative to the previous point. That may allow using 8-bit integers instead of 16, provided points are spaced close together (100m spacing = ~40 cm accuracy with 8 bits). It requires CPU time to add up the displacements and calculate the offset from the start point(?).
- It's possible to have multiple road shape definition formats to save space. If an area has...
- Maybe it makes performance sense to collect straight sections under 1 short model and curved sections under a more complex model.



Is there infrastructure that could help iterate on specifications & test easily?

One potential (??) issue: it may help if there was an easy way to specify arbitrary data structures from OSM2City and read them from C++ (if there isn't a way already!). That may involve finding libraries that allow quick definition and testing.

Google results for the problem: libblobpack, a 'library for packing arbitrary structured data into binary blobs' (platform independent). LittleIntPacker: a variable bit length packer/unpacker for short lists of integers (C; a SIMD version for longer lists of integers called simdcomp is available, looks a bit too low level). My apologies if there is already a way to serialise/deserialise arbitrary data structures on the FG and OSM2City side (minimal space wastage would be a bonus, assuming it doesn't cost much FG engine CPU time unpacking).



Additional data (???) depends on what the best bang for buck is in terms of performance & being the most compelling when viewed from the air:

- Knowing the end coordinate of the road. Whether it's an intersection, dead end. Useful for vehicles. Perhaps there is also some hint for vehicles like turn a corner and move in one of these 2 directions then fade (not sure if that will be any better than just disappearing).
- Per road center point data?: Building types on either side (affects parked cars, light from shop fronts).
- What about intersections? Can leave traffic marks, oil deposits etc. which are fast-ish to do in fragment shaders. It's possible to add street signs.
- Per road segment descriptions: can include a few intersection locations and street signs. Can include most common or average building type on either side. It's possible to have a table of common per point descriptions so per point data can just be an integer index referring to the table (e.g. list of building types).
- Hints as to whether roads are likely to have walls, barricades, fences etc. - these are conceptually possible. Traffic barricades/cones & things with high contrast are more noticeable from air.
- These are just examples of the type of additional things that may or may not be worth it..

Perhaps it's possible for the specification to allow future data fields to be added to road descriptions. That would allow older FG versions to simply skip over the new data once they reach the end of what is known.
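
(One common way to do that, as a sketch - length-prefixed records; the layout is made up:)

Code:
  #include <cstddef>
  #include <cstdint>
  #include <cstring>

  // An older reader consumes the fields it knows, then uses the record
  // length to jump past anything added by later versions.
  struct Reader {
      const uint8_t* data;
      std::size_t pos = 0;
      template <typename T> T get() {
          T v;
          std::memcpy(&v, data + pos, sizeof v);
          pos += sizeof v;
          return v;
      }
  };

  void readRoad(Reader& r) {
      std::size_t start = r.pos;
      uint16_t recordLen = r.get<uint16_t>();  // total bytes in this record
      uint16_t flags = r.get<uint16_t>();      // fields known to this version
      (void)flags;                             // ... parse known fields here ...
      r.pos = start + recordLen;               // skip unknown trailing fields
  }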

I can prototype some vertex shader concepts involved in converting city objects to more data-saving methods, if I'm around as these get discussed in later months - using geometry shaders to avoid needing to create a model with vertex attributes on each iteration. These will only be concepts; a final version needs work on polish & iterations for performance that can change the approach.



Some open unresolved questions starting from the top:

- Is this approach even conceptually viable:) ? (possible, fast and space saving, flexible).
- If so what would be a data specification for OSM2City?
- Is there a way to allow more descriptive data fields to be added in the future which makes it quicker to arrive at a spec?
- What algorithm & sophistication level would it use (& how much space is saved)? i.e. At what point do the tradeoffs make more sophisticated descriptions of shape have diminishing returns:
--- A). space/bandwidth for infrastructure & simmers, B). performance, C). future flexibility/improvements, D). compelling scenery
--- VERSUS
--- E). FG engine feature work, F.) OSM feature work, G). shader work, H). maintenance..etc.
- What type of information from the OSM side do the parties involved need to comment & weigh tradeoffs A-D vs E-H? e.g. the Terrasync side might want rough estimates of % of total space, and % of space saved compared to the existing method, to justify E-H. Is there specific information the 4 (AIUI) sides want: OSM2City, Terrasync, FG engine, GPU?
- Is breaking things down into small parts and arriving at concrete actionable specifications that meet requirements A-D a good workflow?

This is made in the context of discussion in PMs (just trying to assist, apologies if this isn't the best approach).

Edit: OSM-->OSM2City

Kind regards,
vnts

Re: OSM custom sceneries to terrasync

Postby vanosten » Sat Apr 14, 2018 4:45 pm

A few remarks:
  • In general the term "OSM" above should be replaced with osm2city.
  • Not only is it necessary to know the road type, but also whether or not it is lit during night.
  • osm2city also automatically creates bridges given different levels of roads / railroad as encoded in OpenStreetMap data

Re: OSM custom sceneries to terrasync

Postby vnts » Sun Apr 15, 2018 2:02 pm

vanosten wrote in Sat Apr 14, 2018 4:45 pm:A few remarks:
osm2city also automatically creates bridges given different levels of roads / railroad as encoded in OpenStreetMap data

The OSM2City description can involve:
A). All fields for describing the whole bridge (including picking & positioning an explicit model)
B). Description of start and end if needed, plus descriptions of support structures (arch or cable patterns)
C). For instancing: a sequence of X points containing data for repeating segments - may include variation in repeating patterns of support (height of arch, cables).


/Examples/ of possibilities for implementations:

D) Repeating model instance.
E). Separate model instances for start/end & repeating middle. Drawn separately.
F). Single model of start+end, with a lot of repeating segments. Vertices belonging to Start/End parts and repeating segments are identified by vertex attributes.
G). OSM2City explicitly creates list of common model types. Data in A. selects model type, scaling, texture/lighting variation. The node for each area in the data structure (e.g. city node) can list types of models used in that region to load on demand. It's possible to add a length coordinate to all vertices so the bridge can be curved in height by data in A.
H). OSM2City outputs unique bridge models explicitly

A combination of D-H could be used, based on whether bridge types occur often. This means multiple bridge definition types.

It all depends on the tradeoffs for space saved, performance, & work/maintenance. It may be worth describing bridge shapes for the space saved even if performance is similar to explicit models.

It may just be simpler to output explicitly defined models and place an instanced road over them.

It may be convenient & pretty fast to just use instanced roads over bridges every time(?). Even for instanced bridges. These would just reuse bridge center points as road center points. Having complex branches in bridge shaders for displaying the road part may be expensive, because the branches are calculated for non-road surface parts too. It may not be worth creating or maintaining additional road shader code integrated with bridges(?).
Another reason is that vertex or geometry shader created objects like street lamps, traffic lights, vehicles, and traffic barriers make branching even more complicated. Geometry shader objects may be far faster with a separate pass - just drawing the listed road center points, which then generate geometry for objects. The branching overhead of a complex model deformation scheme is avoided. Perhaps fewer road center points are needed, so every 2nd or 3rd one may be skipped by the FG engine when creating an array of points for rendering (it depends on the description of curvature used).


Possible implementation details:

D). The instanced model identifies parts via vertex attributes.
- This allows following a height curve (based on several heights specified in C).
- Each bridge model could have 2 quads drawn with the pattern of thin supporting beams, cables, or arches, as well as traffic barriers/guard rails. The thick support structures would be done by the model. The fragment shader can expand or shrink the tiling of a texture showing the pattern, and also control a curve in height. Discarding the quads or other parts if not needed, based on the bridge definition, is possible.
- The bigger & more complex bridges will likely have a Terrasync custom model. This scheme may not need to be too detailed unless it can just reuse techniques developed for other things. Numerous boutique or historical regional bridges may best be done by selecting from a list of explicitly modeled types.

F). Just a possibility. It's possible for the vertex shader to identify which segments aren't needed based on per-instance data in A). Then the shader could collapse them onto a point. The vertices for the bridge end part will be positioned after the last drawn segment, derived from A.

Instanced models in D-F don't need to be created by OSM2City. Options G-H need OSM2City to output meshes & vertex attributes.

Conceptually speaking, whether it's worthwhile creating a special scheme(s) depends on the hit to average space used and the hit to average performance (including any performance spikes in areas with a lot of bridges). I don't know the tradeoffs well enough to comment. The reasoning for the level and sophistication of any of the different approaches used to describe buildings would also involve the same considerations and a similar range of options (as far as I can see..).

vanosten wrote in Sat Apr 14, 2018 4:45 pm:Not only is it necessary to know the road type, but also whether or not it is lit during night.

Yep. It would be part of the per instance data. Might also include streetlamp placing & colour (lamp type), but that could come from a separate data structure like regional definitions.

Kind regards,
vnts
