http://wiki.flightgear.org/FlightGear_N ... _F-JJTH.29
http://wiki.flightgear.org/TerraGear_sc ... ild_server

F-JJTH wrote:But maybe we could create a deb/apt package so that we can easily install this inside the VM?
Setting up the required environment is really simple:
1) Have a Debian-based OS
2) Install the TerraGear toolchain (I have been providing a download_and_compile_tg.sh script for some months now, and it works really well)
3) Install a web server
That's all; it takes less than 10 minutes to set up the required environment.
I like your original tg build script, but I'd prefer to get the cmake files extended accordingly. That would be in line with James' superbuild/fgmeta efforts.
In general, build scripts should no longer be OS-specific. If we could extend the superbuild/fgmeta stuff to also build TG, we could then also use cpack to create/update the Linux packages, which would greatly simplify TG deployment. I am sure that saiarcot895 could help with the specifics here, so that we can get rid of any custom scripts and just integrate everything with superbuild/fgmeta - this stuff could then possibly even be run on the build server...
For the web service, I would prefer to also create a package, so that future updates can be easily shipped to all people running it.
At least, that should be a long-term goal - manually having to download, compile and install things is too fragile and doesn't scale - see the number of fgms servers we have for example ...
F-JJTH wrote:Well, after some IRC discussion I have my answer:
- nobody is working on HGT files (TIFF format coming from http://www.viewfinderpanoramas.org)
- our SHP files are already really complex, and we don't want people adding more complexity to them. (We can imagine some funny guys adding every swimming pool in their village!! We surely don't want that.)
That's why we don't need to give our users "easy access" to the ogr-decode/hgtchop/tg-construct tools.
However, we do want detailed airports! So it makes sense to give easy access to genapts850.
Because this part is already implemented, I will simply improve it and see if the scenemodels.flightgear.org guys are ready to host the project.
psadro_gm already mentioned that some tools may have mutex issues when running concurrently - so if that is still a problem, we may need to use cron jobs to schedule things sequentially, where each finished/terminated job would re-schedule itself or the next job. Once that is no longer a bottleneck, we could also run multiple jobs in parallel and merely re-nice each job to a lower priority - though I'm not sure whether RAM may become an issue?
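To make the "schedule sequentially via cron, re-nice each job" idea concrete, here is a minimal Python sketch. The lock path, and the choice of flock plus os.nice, are my assumptions for illustration, not anything TerraGear ships today; a cron entry would simply invoke this wrapper, and overlapping invocations would skip their turn instead of running concurrently:

```python
import fcntl
import os
import subprocess

# Hypothetical lock location; any path writable by the cron user works.
LOCK_PATH = "/tmp/terragear-jobs.lock"

def run_exclusive(cmd):
    """Run cmd only if no other TerraGear job holds the lock.

    Returns the job's exit code, or None if another job is running
    (the next cron invocation will simply try again)."""
    lock = open(LOCK_PATH, "w")
    try:
        # Non-blocking: a concurrent cron invocation skips this round.
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        return None
    try:
        # Re-nice the child so interactive use of the box is not hurt.
        result = subprocess.run(cmd, preexec_fn=lambda: os.nice(19))
        return result.returncode
    finally:
        lock.close()  # closing the file releases the flock
```

A crontab line like `*/5 * * * * python3 /opt/tg/run_job.py` would then give roughly the "each finished job re-schedules the next" behaviour without any daemon.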
I think it would be a good idea to prepare support for multiple back-ends/TG servers.
My proof-of-concept is not oriented that way; in fact I'm increasingly thinking of limiting the tool to genapts850. Why?
Because I'm not sure we want people improving our shapefiles: they are already really complex (we have seen a lot of reports about Scenery 2.0 eating too much RAM), and I'm also not sure we have a lot of people interested in improving the HGT (altitude) files.
So finally, I think our users are mostly interested in improving their airports.
Any opinion about _only_ providing a genapts850-web-generator ?
It clearly is better than anything we have now - but personally, I'd try to keep it scalable - i.e. by coming up with a CLI tools wrapper that turns CLI profiles (program path, name, settings, files) into a web service (API). That way, we could also support other tools in the future, without having to rewrite tons of things.
That's basically the approach I experimented with when modifying Gijs' Qt GUI.
And power users would be able to create new "profiles" and chain them together with other programs.
When I played with this a few weeks ago, I even found a few tools that would create ncurses-based GUIs by using a profile for each tool - and there's another python lib for wrapping CLI tools and exposing them as a web service, which also handles uploads/downloads.
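The "profile" idea could look roughly like the following Python sketch. The profile fields and the genapts850 flags shown are illustrative assumptions, not an existing TerraGear format; the point is only that a web front-end (or an ncurses GUI) would need nothing tool-specific beyond such a profile:

```python
import shutil
import subprocess

# A hypothetical "profile": enough metadata to expose one CLI tool
# generically. Field names are made up for this example.
GENAPTS_PROFILE = {
    "name": "genapts850",
    "program": "genapts850",           # resolved via $PATH
    "settings": {"--input": "apt.dat", "--work": "./work"},
}

def profile_to_argv(profile, overrides=None):
    """Turn a profile plus per-job overrides into an argv list."""
    settings = dict(profile["settings"])
    settings.update(overrides or {})
    argv = [profile["program"]]
    for flag, value in sorted(settings.items()):
        argv.append(f"{flag}={value}")
    return argv

def run_profile(profile, overrides=None):
    """Run the tool described by the profile; a web service would call
    this after validating the uploaded files."""
    argv = profile_to_argv(profile, overrides)
    if shutil.which(argv[0]) is None:
        raise FileNotFoundError(f"{argv[0]} is not installed")
    return subprocess.run(argv, capture_output=True, text=True)
```

Supporting another tool (or letting a power user chain tools together) then means adding another profile dict, not rewriting the service.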
Having/supporting just a single server may inevitably become a bottleneck sooner or later.
For those reasons, I'd prefer to 1) support multiple servers (i.e. by running jobs over SSH), and 2) use a simple cron-based scheduler - or maybe even using Linux "batch" to schedule things directly based on overall system load.
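A sketch of what 1) and 2) might look like together, assuming passwordless SSH to each back-end and the standard Linux `batch` utility (which holds jobs until the load average drops). The host names and the round-robin placement are placeholders for illustration:

```python
import shlex

# Hypothetical back-end list; in practice this would live in a config file.
BUILD_SERVERS = ["tgbuild1.example.org", "tgbuild2.example.org"]

def remote_batch_command(host, job_argv):
    """Build the argv that submits job_argv on a remote host via ssh,
    handing it to `batch` so it only starts when the remote load
    average permits."""
    remote = " ".join(shlex.quote(a) for a in job_argv)
    # `batch` reads the job from stdin, so wrap it in a small shell pipe.
    return ["ssh", host, f"echo {shlex.quote(remote)} | batch"]

def pick_server(job_id):
    """Trivial round-robin placement; a real scheduler would ask each
    host for its current load first."""
    return BUILD_SERVERS[job_id % len(BUILD_SERVERS)]
```

Submitting a job would then be `subprocess.run(remote_batch_command(pick_server(job_id), argv))`; the local side never needs to track remote load itself.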
But maybe this is too sophisticated for now - let's better postpone this discussion until we have someone showing up and willing to take over maintenance.
So what is involved framework-wise here, i.e. dependencies (PHP, Perl, etc.)?