
Boeing 777 Seattle - A sneak preview into our Hangar...


Postby I-NEMO » Thu Dec 10, 2015 6:02 am

Hallo FG Pilots,

As I reported (quite a long time ago, to be honest), I have been really busy working on the NEW external model for our Boeing 777 Seattle.
As Hyde and I love realism, I completely remodelled the fuselage, the Nose Landing Gear (NLG), the Main Landing Gear (MLG), the engine (a GE90 turbofan, inspectable on the ground, with internal parts modelled and textured), and the wings (I'm actually working on the 777-200, to start with). All the work is based - as usual - on Boeing manuals and reference pictures, carefully respecting dimensions and proportions.

These are some preview shots of the brand new 777 Seattle, modelled and rendered in Blender.
Textures are mine, as well as the materials.
The NLG is already completely animated (in Blender); I still have to animate the MLG (a tough job!), fine-tune the modelling of the Flight Control Surfaces (NACA profiles), and animate all of them.
Once finished, I'll pass the .ac model to Hyde for the final build-up and testing.

So, in due time (some months!) the external model will be ready.

One major issue is to check if FG can properly render the materials used in Blender (Cycles): Blender has evolved a lot, and a lot of new stuff is coming in about the material and rendering side....we will see. I really hope to be able to have the new Seattle in FG.

As you see, we are preparing a new Hangar for the Seattle, where its further development will take place: I plan (not at the moment, though) to make some new ground vehicles; the Goldhofer AST-1 Tractor is an example (modelled on actual blueprints, with some free "adaptations"), as well as the jacks!
I'm crazy enough to dream that we could make some 'maintenance' in the future, like mounting different Engines in the Hangar ("...Oh, my God!....when will I finish it?" :oops: ).



[Nine preview images: the new 777 Seattle fuselage, landing gear, GE90 engine and Hangar, modelled and rendered in Blender]

Since the new Seattle won't be ready any time soon, I assume that by then the increased 'weight' of the model will be easily handled by the latest GPUs. Anyway, this time I made sure to use the lowest possible poly count while ensuring a realistic look for the parts even at close distance. It will not be a 'lightweight' model, but we'll test it and optimize it...
Once finished, I will get back to work on the Cockpit, so as to reduce its poly count as promised and make use of the new materials in Blender (hoping to be able to replicate them in FG).

Well, that's it. Hope you like the preview ... ! :D

Regards,

I-NEMO
I-NEMO
 
Posts: 102
Joined: Thu Sep 29, 2011 3:09 am
Location: Italy
Callsign: I-NEMO
Version: 2017.2.1
OS: Windows 7 64 bit

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby biloute974 » Thu Dec 10, 2015 7:53 am

Oh my god, you are my hero... I thought the Seattle project was abandoned. This is very, very good news :)
Your project seems very promising.
Intel I7 7700 - 16Gb DDR4 - Nvidia GTX970 - FG 2017.4.0 from D&C
biloute974
 
Posts: 193
Joined: Mon Feb 23, 2015 9:49 am
Callsign: U974
Version: 2016.1.0
OS: Mint 17.2

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby Thorsten » Thu Dec 10, 2015 7:59 am

One major issue is to check if FG can properly render the materials used in Blender (Cycles): Blender has evolved a lot, and a lot of new stuff is coming in about the material and rendering side....we will see.


All the information FG uses from 3d models is ambient (rgb), diffuse (rgb), specular (rgb), emissive (rgb), shininess (s) and transparency (t), in addition to an (optional) texture (rgba) declared in the model file.
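As an aside, those are exactly the fields an AC3D (.ac) MATERIAL line carries. Here is a toy sketch of reading them - a hypothetical parser for illustration only, not FG's actual loader; the field names follow the plain-text .ac format:

```python
# Toy parser for an AC3D MATERIAL line (illustration only, not FG's loader).
# rgb = diffuse, amb = ambient, emis = emissive, spec = specular,
# shi = shininess, trans = transparency -- the values listed above.
def parse_ac_material(line):
    tokens = line.split()
    assert tokens[0] == "MATERIAL"
    mat = {"name": tokens[1].strip('"')}
    i = 2
    while i < len(tokens):
        key = tokens[i]
        if key in ("rgb", "amb", "emis", "spec"):      # three floats each
            mat[key] = tuple(float(v) for v in tokens[i + 1:i + 4])
            i += 4
        else:                                          # shi / trans: one scalar
            mat[key] = float(tokens[i + 1])
            i += 2
    return mat

line = ('MATERIAL "fuselage" rgb 0.8 0.8 0.8 amb 0.2 0.2 0.2 '
        'emis 0 0 0 spec 0.5 0.5 0.5 shi 32 trans 0')
mat = parse_ac_material(line)
```

Everything the exporter can hand to FG per material is in that one line - which is the whole point of the discussion that follows.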

To the degree that Blender delivers you different visuals, they're done by Blender-side shaders. FG has its own set of shaders (for good reason: a Blender-side shader would not know how to light and fog properly when inserted into an FG scene). Looking at the amount of reflection in some of the shots, it seems Blender is also using raytracing techniques to render - which are not nearly fast enough to run in a real-time context.

So if you want to use reflection mapping, dirt mapping, environment reflections, normal maps,.... that kind of thing, you have to develop it FG-side using the tools our effect framework provides.

Or to cut it short - how the rendering framework of Blender has evolved is completely irrelevant for rendering anything in FG; all that matters is how the FG rendering framework has evolved. Fancy effects aren't part of the model, they're part of Blender.
Thorsten
 
Posts: 12490
Joined: Mon Nov 02, 2009 9:33 am

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby I-NEMO » Thu Dec 10, 2015 11:04 am

Thorsten,

thank you for your note. I do of course know how to pass rendering info to FG.
I did not mean to criticize FG's render framework, nor your more than excellent work for clouds and fog.
FG is probably the best simulation for weather-related issues, but the rendering of 3d models could - perhaps - be enhanced/developed in order to give the model a more accurate look. Some simulation platforms are proposing - do not ask me how, I'm not a programmer - interesting developments on this particular issue (Aerofly, Prepar3D and some others).

What I am saying is that - in my humble opinion - there's no way at the moment to pass 'material' settings to FG by fine-tuning parameters which affect the ambient, diffuse, specular, emissive, shininess, and transparency renderings. Most 3d modelling tools (3ds Max, Blender, Maya and so forth) allow the modeller to fine-tune reflection, for instance, by applying 'nodes' to the overall material settings. Those 'nodes' are of course to be considered 'side-shaders', but their effects on the behaviour of simulated light on a surface greatly enhance the final result.
These efforts and enhancements should not be considered 'fancy effects': using reflection mapping, dirt mapping, environment reflections and normal maps is extremely time-consuming when you have to model a large aircraft composed of thousands of parts. A modeller concentrates on modelling; texturing today means producing only one accurate image, on which the various ambient, diffuse, specular, emissive, shininess, and transparency settings are quickly tuned up through 'nodes'.
In Blender, as in most 3d software, the modeller makes only one quick UV mapping for every part, and then produces various materials always using that same UV map. The 'classic' workflow (before the advent of 'node' materials) is of course to produce one reflection map, one dirt map, one environment map and one normal map for every part; then the relevant UVs have to be scaled and arranged onto those maps. That takes time, which is OK when you have 100 parts, and a nightmare when you have 2000 parts.
The 'fancy effects' you see in these preview shots are produced by 4-5 texture files (colour and normal map) to which 'node' materials are applied for each part of the aircraft. Should I need to increase the basic 'glossiness' of a specific part, I just have to change a number, without going back to retouch/repaint the texture itself. This kind of workflow lets the modeller make better use of his time while producing the 3d model.
Consider that my current job on the external fuselage took me about a year and a half of modelling: any tool which shortens the time required by the 'classical' workflow is welcome.

Anyway, I'm going off-topic: once the final model for the Seattle is ready, ... well, we will have more stuff on the table!

Thank you again: best regards,

I-NEMO
I-NEMO
 
Posts: 102
Joined: Thu Sep 29, 2011 3:09 am
Location: Italy
Callsign: I-NEMO
Version: 2017.2.1
OS: Windows 7 64 bit

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby Thorsten » Thu Dec 10, 2015 2:00 pm

I do of course know how to pass rendering info to FG.


Okay - that's good.

The main point of my post was rather to warn you not to invest lots of work Blender-side which you might need to redo FG-side, and to explain why this doesn't necessarily work the same way.

A modeller concentrates on modelling; texturing today means producing only one accurate image, on which the various ambient, diffuse, specular, emissive, shininess, and transparency settings are quickly tuned up through 'nodes'.


I'm not sure what you're talking about here - as I mentioned above, all of these /are/ in fact written to the model and used by FG.

The 'fancy effects' you see in these preview shots are produced by 4-5 texture files (colour and normal map) to which 'node' materials are applied for each part of the aircraft.


If the tool of your choice has the texture as one file (or a few), isn't it able to write a reflection map (dirt map, ...) into textures in the same format as well? I.e. there should be an easy conceptual 1:1 translation from nodes into textures - and if the modelling tool doesn't exploit that, maybe you should request the feature there?
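That 1:1 translation is essentially "baking": evaluate the node procedure once per UV sample, offline, and store the results as an image the runtime can read. A minimal sketch - `gloss_node` is an invented stand-in for a shader node, not a real Blender API:

```python
def gloss_node(u, v):
    # invented procedural "node": gloss increases towards the top of the UV map
    return v

def bake(node, size):
    # sample the procedure on a size x size UV grid -> rows of pixel values,
    # i.e. exactly the lookup table a real-time renderer reads as a texture
    return [[node((x + 0.5) / size, (y + 0.5) / size) for x in range(size)]
            for y in range(size)]

texture = bake(gloss_node, 4)   # 4x4 "image" of baked gloss values
```

Anything the node computes purely from the UV coordinate can be frozen this way; what cannot be baked is anything depending on the rest of the scene, which is the next point.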

But in fact you only get the reflection of a truck in the fuselage if you do something like ray-tracing. It has nothing to do with any effects we can reasonably hope to generate in real time, and there's really nothing you could hand over to the FG rendering framework which would produce it.

The 'classic' workflow (before the advent of 'node' materials) is of course to produce one reflection map, one dirt map, one environment map and one normal map for every part; then the relevant UVs have to be scaled and arranged onto those maps.


I've never done a classic workflow, but I can confidently state that this would never have worked in FG at any point - all effect textures (normal map, lightmap, reflection map, dirt map) are assumed to share the UV mapping of the base texture layer, i.e. if you remap them in any way, you don't get them rendered correctly.

That's from the model-combined fragment shader

Code:
    vec4 texel      = texture2D(BaseTex, gl_TexCoord[0].st);
    vec4 nmap       = texture2D(NormalTex, gl_TexCoord[0].st * nmap_tile);
    vec4 reflmap    = texture2D(ReflMapTex, gl_TexCoord[0].st);
    vec4 noisevec   = texture3D(ReflNoiseTex, rawpos.xyz);
    vec4 lightmapTexel = texture2D(LightMapTex, gl_TexCoord[0].st);


as you can see they all reference gl_TexCoord[0] which is the base texture uv-mapping.
Thorsten
 
Posts: 12490
Joined: Mon Nov 02, 2009 9:33 am

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby legoboyvdlp » Thu Dec 10, 2015 2:49 pm

Amazing!

But are those shots FlightGear? O_O
If they are not, and you have some time, HOW on earth did you get that hangar?
legoboyvdlp
 
Posts: 7981
Joined: Sat Jul 26, 2014 2:28 am
Location: Northern Ireland
Callsign: G-LEGO
Version: next
OS: Windows 10 HP

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby I-NEMO » Thu Dec 10, 2015 3:26 pm

Thorsten,

I see that I have not managed to explain my points properly.

Basically, the 'new' trend in 3d authoring software (at least in the professional 3d world, the world in which I work) is to offer the modeller a quick way to work with textures, achieving realistic renderings by using 'nodes' (Blender's Cycles renderer is an example).

I'll try to summarize the very basic workflow:

A) The UV mapping is done independently of any texture; the vertex data are stored in 'datablocks', which the material 'nodes' will refer to.

B) Inside the Node Editor, the modeller just sets up the required effect using a concatenation of simple (but very powerful and effective) nodes. For instance, the fuselage effect is achieved by regulating an 'Anisotropic' node through five values (Colour, Roughness, Anisotropy, Rotation, and a Vector (Normal or Tangent)), to which is added a 'Glossy' node regulated by two values (Colour, Roughness) and a Vector (Normal).
The two nodes are then just 'mixed' before being output to the surface shader (the modeller may regulate the mixing amount of the two nodes).

C) The node set-up works fine even without a texture, just using the relevant Colour inputs of the (later mixed) nodes.

D) Now, IF ... IF a texture (any texture: colour, normal, reflection, ambient occlusion, displacement, or whatever) is used as an input (inside the Node Editor) for the two Colour slots (Anisotropic and Glossy), ... then the affected surface will use that texture's data (colour or not) on that surface.

That's it. The fuselage effect that you see in the images above is WITHOUT a texture applied: the reflections are given by ambient occlusion from the surrounding environment (the hangar structure, or the surrounding sky).
Should I need to fine-tune the effect, Blender offers a variety of more sophisticated shaders/nodes for the purpose.
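Numerically, the mix in B) boils down to a per-channel blend. A toy sketch of the idea - the colours and the 0.25 factor are invented for illustration, and this is the concept of a Mix node, not Blender's implementation:

```python
def mix(col_a, col_b, fac):
    # linear per-channel blend; fac = 0 gives col_a, fac = 1 gives col_b
    return tuple((1 - fac) * a + fac * b for a, b in zip(col_a, col_b))

anisotropic = (0.60, 0.62, 0.65)   # brushed-metal highlight colour (made up)
glossy      = (0.90, 0.90, 0.90)   # sharp specular colour (made up)
surface = mix(anisotropic, glossy, 0.25)   # what goes to the surface output
```

Tweaking the look then means changing `fac` or one input colour - a single number - rather than repainting a texture, which is the workflow advantage being described.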

The final outcome is this: I work with far fewer texture files and - absolutely important in the modeller's workflow - I can forget about placing the actual UVs, scaling them, regulating their colour tone, brightness and/or contrast and so forth (going back and forth in Photoshop). You create the textures you need, and then you 'work' with them inside a single tool.

This is a major trend in ALL professional 3d software, which of course implement this 'node' approach with some variations.
The advantage is clear: time and fine-tuning.

Now, this is Blender, and I'm not pretending that FG 'should' follow this trend; but, as a pro modeller, I would love to see the simulation platform take it into account in time.

Why?

Ease of use, fewer textures, much more time for proper modelling, and a realistic look.
The 'node' shaders in Blender are just Python scripts crunching numbers. And it's not my field of interest.
Might someone one day be interested in exploring, testing and developing this approach, so as to give FG models a more realistic look? ... (of course keeping a compass on performance and on what is feasible in FG and what is not).

I dont' know.

The reason why I'm 'wasting time' on Blender materials is that I can quickly check for modelling errors inside Blender (holes, smooth creasing of curved surfaces, appropriate light bounces and so forth): a major pain in the neck is to actually export the model to an .ac file, then check it for conversion errors introduced by the .ac plugin (it happens sometimes, and you're certainly aware of how boring it is to go through an .ac file made of 600 parts looking for the culprit), then go back into Blender to tune things up and/or add other parts, then back to the exporter... :evil:
The application of anisotropy and proper test lighting ensures the modeller a comfortable workbench for his long task...

Regarding this:

I'm not sure what you're talking about here - as I mentioned abvove, all these /are/ in fact written to the model and used by FG.


I clarify: in FG (in the .ac file) you can only use the rgb values for the general effect; BUT:

1) we cannot instruct FG about the mixing rate of those effects (at least not easily);
2) and - more importantly - we do not have at our disposal the 'fancy tools' widely used today in the professional 3d world (Anisotropy, Translucency, Fresnel, Volumetric emission, Layer Weight, Scaling, Colour control, Exposure ... just to name a few).

I will of course try to use ALS settings with the new Seattle (once ready!) ... but I fear that true, natural reflection won't be possible in FG at the moment.
Which is a pity. Ray tracing - you are right - might be behind this; but who knows... perhaps someone much more competent than me will devise something useful for that...
Actually, FG models lack realistic reflection, and this is something which we all miss a lot in our beautiful platform.
BTW, I've seen your Space Shuttle project: just great! ... Should you like to actually see and check how the model would look once properly "materialized" (with proper natural lighting, on the ground and in space), I'll be glad to help.

Last, but not least, I concur about the effect textures sharing an identical UV mapping.
But I do hope to have better specified what I meant in my previous post.

Thank you, Thorsten; it's always a pleasure to read you.

Best regards,

I-NEMO
I-NEMO
 
Posts: 102
Joined: Thu Sep 29, 2011 3:09 am
Location: Italy
Callsign: I-NEMO
Version: 2017.2.1
OS: Windows 7 64 bit

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby I-NEMO » Thu Dec 10, 2015 3:39 pm

legoboyvdlp,

No, the shots are rendered in Blender (Cycles), with my textures AND materials. I use and know Blender quite well by now.
As you may read above, I doubt that the model will be rendered like this once exported to FG.
But I trust miracles ... ! :D

Regarding the Hangar: there are quite a lot of free basic 3d models available on the Web.
I took one of those as a 'skeleton' structure, worked hard on it in SolidWorks, then imported it into Blender, changing the components and dimensions as I needed (the Seattle is a huge 'bird'!); then I added several enhancements (like walls, doors, panels and columns from scratch), and - again - applied proper lighting and materials for the purpose. A month and a half of work.
The Hangar will receive some major enhancements as well, before the final release.

Regards,

I-NEMO
I-NEMO
 
Posts: 102
Joined: Thu Sep 29, 2011 3:09 am
Location: Italy
Callsign: I-NEMO
Version: 2017.2.1
OS: Windows 7 64 bit

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby Thorsten » Thu Dec 10, 2015 5:20 pm

C) The node set-up works fine even without a texture, just using the relevant Colour inputs of the (later mixed) nodes.


Basically that's procedural texturing. Your 'nodes' seem to be individual procedures, concatenated into each other. Needless to say, I'm a fan and have introduced lots of procedural techniques into terrain texturing in FG, but...

The 'node' shaders in Blender are just Python scripts crunching numbers. And it's not my field of interest.


... here lies the rub: unlike Blender, we are highly performance-critical. We ideally have 1/60 of a second to render some two million pixels, and then we need the next frame. We can't possibly run anything as slow as Python in the process. It's GLSL only, and it has to be optimized GLSL.
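The arithmetic behind that budget is sobering:

```python
frame_time = 1 / 60              # seconds available per frame at 60 fps
pixels = 2_000_000               # roughly a 1920x1080 screen
ns_per_pixel = frame_time / pixels * 1e9
# about 8.3 ns per pixel -- a handful of clock cycles, shared by
# terrain, clouds, the aircraft and every effect in the scene
```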

If you want to render, say, a movie, then we could use many more techniques, but a real time environment is something different.

but I fear that true, natural reflection won't be possible in FG at the moment.


No - because that requires you to do raytracing. Which is about a factor of 100 to 1000 or so (depending on how many reflections you follow up and how many test rays you use) too slow to run in real time. If you can spare a second to render a scene you can start doing it (and we actually need to render tens of thousands of square kilometres of terrain in addition to the plane).
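A rough ray count shows where a factor like that comes from; the per-pixel sample and bounce settings below are assumed, illustrative offline-quality numbers, not anything FG or Blender prescribes:

```python
pixels = 2_000_000
samples_per_pixel = 64      # assumed number of test rays fired per pixel
max_bounces = 4             # assumed reflection bounces followed per ray
raytraced = pixels * samples_per_pixel * max_bounces
rasterized = pixels         # one eye-vertex-light evaluation per pixel
factor = raytraced // rasterized   # 256x the rays, before counting the
                                   # scene-intersection search each ray needs
```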

A real-time constraint is in fact quite severe. It restricts rendering techniques quite drastically, to what is mostly an illusion game.

(If you asked Blender to render a complete terrain mesh with a couple of million vertices, fully textured and fogged, along with clouds, trees, a sky, etc., and then your plane moving in the middle of it, you'd see the difference...)
Thorsten
 
Posts: 12490
Joined: Mon Nov 02, 2009 9:33 am

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby legoboyvdlp » Thu Dec 10, 2015 5:24 pm

Very nice, I-NEMO!
legoboyvdlp
 
Posts: 7981
Joined: Sat Jul 26, 2014 2:28 am
Location: Northern Ireland
Callsign: G-LEGO
Version: next
OS: Windows 10 HP

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby MSA-S23 » Thu Dec 10, 2015 5:56 pm

WOW! One of my favorite aircraft in FG, ever!!!

Any chance this will be followed by the 777-8 and -9, AKA the 777X? (Hasn't Boeing dropped the X, though?)

I am super excited! :D

Cheers,

Ash
MSA-S23
 
Posts: 477
Joined: Tue Nov 25, 2014 7:45 pm
Location: Flying high...in the sky...
Callsign: UpAndAway

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby Thorsten » Thu Dec 10, 2015 6:46 pm

Basically, the 'new' trend in 3d authoring software (at least in the professional 3d world, the world in which I work) is to offer the modeller a quick way to work with textures, achieving realistic renderings by using 'nodes' (Blender's Cycles renderer is an example).


Or, let me express the basic difference in a catchy way:

Blender's concern is to make the modeling process and effect assignment easy for you.

FG's concern (probably shared by all real-time 3d rendering environments) is to make rendering fast, and since in many cases it's actually the bottleneck, this is the overarching concern. So the strategy is to do whatever can be done before entering the rendering pipeline actually before the rendering pipeline: we pre-compute whatever we can and dump the results into tables which we just read at runtime. They're called textures :-)

Except when the computation can be done faster than a table lookup - which actually can happen. But you need to know some technicalities to know what to use when.
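A minimal illustration of that pre-compute-and-read strategy, using gamma correction as a stand-in for a real effect: the expensive pow() runs once per table entry at build time, and the runtime path is pure lookup:

```python
GAMMA = 2.2
# built once, offline -- the "table dumped into a texture" of the analogy
LUT = [round((i / 255) ** (1 / GAMMA) * 255) for i in range(256)]

def correct(pixel_values):
    # runtime work is just table lookups, no pow() per pixel
    return [LUT[v] for v in pixel_values]

out = correct([0, 128, 255])
```

Whether the lookup actually beats recomputing is the technicality mentioned above: on a GPU a texture fetch has its own cost, so sometimes the arithmetic wins.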

That's the basic reason we won't have such a node concept anytime soon - we don't aim for an easy modeller workflow, we aim for performance and performance only. There's no performance to spare to simplify your task - instead, we'd make your task yet more complicated if it bought us another 20% performance.
Thorsten
 
Posts: 12490
Joined: Mon Nov 02, 2009 9:33 am

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby I-NEMO » Thu Dec 10, 2015 6:51 pm

Thorsten,

I perfectly follow your points about real-time engine rendering.
As far as I know, ray-traced bounces for reflection are limited in Blender to the first bounce (the modeller can of course set a higher value for very refined rendering scenes).
Blender currently uses OSL 1.5.1, and the shader scripts are all GNU GPL.

Is this good news? ... I'm totally ignorant about Python, but I'm wondering whether those scripts could be a source of info/inspiration for eventually porting them to C++ ... or am I saying something silly?

Best regards,

I-NEMO
I-NEMO
 
Posts: 102
Joined: Thu Sep 29, 2011 3:09 am
Location: Italy
Callsign: I-NEMO
Version: 2017.2.1
OS: Windows 7 64 bit

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby bugman » Thu Dec 10, 2015 6:58 pm

It might be quite a while until ray tracing hardware is up to the task and we'll find RPUs (ray processing units) integrated into GPUs ;)

Regards,
Edward
bugman
Moderator
 
Posts: 1808
Joined: Thu Mar 19, 2015 10:01 am
Version: next

Re: Boeing 777 Seattle - A sneak preview into our Hangar...

Postby Thorsten » Thu Dec 10, 2015 7:04 pm

Is this good news? ... I'm totally ignorant about Python, but I'm wondering whether those scripts could be a source of info/inspiration for eventually porting them to C++ ... or am I saying something silly?


It doesn't make a lot of difference. C++ is still too slow - rendering happens on the graphics card, which achieves the required speed by massively parallel computation. Except that this very architecture precludes raytracing, because parallel processing requires that you rely on local information only, whereas raytracing needs non-local information.

Picture the eye E, a vertex V and the light L.

In real-time rendering, you follow the ray E V L - i.e. you only need a vertex. The positions of the eye and the light are fixed, so you know up-front where all rays are pointing, and you need to know no other vertex in the scene - which means you can process them all at once; no ordering is required.

In raytracing, you allow bounces, so you can have E V V L rays (or E V V V V V L, depending on how many you allow). After hitting the first vertex, your ray is going to hit a second one - but where is it? You need to follow the ray through the whole scene to find the intersection - which means this is no longer a local problem: you can't process vertices all at once, you need to know the whole scene to process a single vertex.

Or, in other words, if you compute a reflection, you need to make sure you have already computed the lighting of whatever you see in the reflection. So there is now a hierarchy of which vertices need to be processed first - but you don't know up-front what it is.

Graphics cards aren't made to solve this kind of problem - they simply can't do it. They're fast because certain operations are hard-wired into their chips and don't need to be emulated in software, but they're only fast at doing those things.
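The locality argument can be caricatured with a one-dimensional "scene" (all geometry and numbers invented for illustration): local shading maps over vertices independently, while finding the next hit of a bounced ray needs the entire scene:

```python
scene = [2.0, 5.0, 9.0]        # surface positions along a ray (toy 1-D scene)

def shade_local(vertex):
    # E V L: per-vertex work, no other scene knowledge needed
    return vertex * 0.1        # stand-in lighting term

def first_hit(origin, surfaces):
    # E V V L: must inspect every surface to find the nearest second hit
    hits = [s for s in surfaces if s > origin]
    return min(hits) if hits else None

shaded = [shade_local(v) for v in scene]   # embarrassingly parallel
bounce = first_hit(2.0, scene)             # needs the whole scene list
```

The first loop is what GPU shader cores parallelize; the second is the non-local search that the shader model (as of this discussion) gives you no way to express.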

The toolset you have available in offline rendering (raytracing, radiosity, volumetric integrals, ...) simply doesn't run on a graphics card. So short of NVIDIA coming up with a completely novel architecture, it can't be done. It's not FG - it's a more fundamental thing. I couldn't code raytracing on the graphics card even if I were willing to sacrifice the performance - there's no provision for accessing other vertices of the scene inside a vertex shader.
Thorsten
 
Posts: 12490
Joined: Mon Nov 02, 2009 9:33 am
