https://devblogs.nvidia.com/gpudirect-storage/
https://blocksandfiles.com/2019/08/06/n ... e-storage/
GPUDirect Storage: A Direct Path Between Storage and GPU Memory
Keeping GPUs Busy
As AI and HPC datasets continue to increase in size, the time spent loading data for a given application begins to place a strain on the total application’s performance. When considering end-to-end application performance, fast GPUs are increasingly starved by slow I/O.
I/O, the process of loading data from storage to GPUs for processing, has historically been controlled by the CPU. As computation shifts from slower CPUs to faster GPUs, I/O becomes more of a bottleneck to overall application performance.
Just as GPUDirect RDMA (Remote Direct Memory Access) improved bandwidth and latency when moving data directly between a network interface card (NIC) and GPU memory, a new technology called GPUDirect Storage enables a direct data path between local or remote storage, such as NVMe or NVMe over Fabrics (NVMe-oF), and GPU memory. Both GPUDirect RDMA and GPUDirect Storage avoid extra copies through a bounce buffer in the CPU's memory and enable a direct memory access (DMA) engine near the NIC or storage to move data on a direct path into or out of GPU memory, all without burdening the CPU or GPU. This is illustrated in Figure 1. For GPUDirect Storage, the storage location doesn't matter; it could be inside an enclosure, within the rack, or connected over the network. Whereas the bandwidth from CPU system memory (SysMem) to GPUs in an NVIDIA DGX-2 is limited to 50 GB/s, the bandwidth from SysMem, from many local drives, and from many NICs can be combined to achieve an upper bandwidth limit of nearly 200 GB/s in a DGX-2.
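The bandwidth argument above can be sketched as simple arithmetic: with direct DMA paths, per-device bandwidths add up in parallel instead of all funnelling through the single CPU-memory path. The per-drive and per-NIC figures below are illustrative assumptions, not published DGX-2 specifications; only the 50 GB/s and ~200 GB/s endpoints come from the article.

```python
# Illustrative bandwidth budget for a DGX-2-style system.
# Assumed values: per-NVMe-drive read bandwidth, drive count,
# per-NIC bandwidth (100 Gb/s ~= 12.5 GB/s), and NIC count.

SYSMEM_PATH_GBPS = 50    # single CPU system-memory path (from the article)
NVME_DRIVE_GBPS = 3.2    # assumed read bandwidth per NVMe drive
NUM_DRIVES = 16          # assumed local drive count
NIC_GBPS = 12.5          # assumed: one 100 Gb/s NIC = 12.5 GB/s
NUM_NICS = 8             # assumed NIC count

# With direct DMA paths the sources aggregate rather than serialize
# through CPU memory:
direct_path_gbps = (SYSMEM_PATH_GBPS
                    + NVME_DRIVE_GBPS * NUM_DRIVES
                    + NIC_GBPS * NUM_NICS)

print(direct_path_gbps)  # 201.2 -> close to the ~200 GB/s cited upper bound
```

Under these assumed numbers the aggregate lands near the ~200 GB/s figure quoted for the DGX-2, roughly four times what the CPU system-memory path alone can deliver.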
It will be interesting to see if, when, and how FlightGear will be able to use, let alone saturate, such a GPU, given that the multicore (r)evolution has happened without any major architectural changes to address the "new" situation. Keep in mind that back then (~2006) this was a controversial topic on the devel list[1][2], but these days the remaining core developers apparently agree that FlightGear isn't making proper use of the multi-core systems that are commonplace today, and the core development community is hoping to change that in the future.
So it seems that the project is, once again, at a crossroads of its history: adapting its design to a new reality created by enormous progress in hardware, where it is no longer just CPUs, but also GPUs, that are changing drastically.
[1] New Architecture for Flightgear
[2] Suggestion to make FlightGear multiplayer compliant with HLA