Depends on how you look at it. It's 16 PCIe lanes multiplexed between three PCIe x4 NVMe SSDs (12 lanes total) and a graphics card with potentially 16 lanes. Of course the bandwidth at any given instant is still limited to 16 lanes. But since I'm not doing any GPU-bandwidth-intensive tasks, the GPU will use those 16 lanes only rarely and only for very short periods (if at all). So if you look at it time-multiplexed, there is definitely (except for a few short exceptions) full bandwidth available for the NVMe SSDs.
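For a rough sense of the numbers involved: a quick back-of-the-envelope sketch, assuming PCIe 3.0 (8 GT/s per lane with 128b/130b encoding; real-world throughput is a bit lower due to protocol overhead).

```python
# Rough per-direction PCIe 3.0 bandwidth estimate.
# Assumes 8 GT/s per lane and 128b/130b line encoding;
# packet/protocol overhead is not accounted for.
GT_PER_S = 8.0          # giga-transfers per second, PCIe 3.0
ENCODING = 128 / 130    # 128b/130b encoding efficiency

per_lane_gbs = GT_PER_S * ENCODING / 8   # GB/s per lane (8 bits per byte)
x4 = 4 * per_lane_gbs                    # one NVMe SSD (x4 link)
x16 = 16 * per_lane_gbs                  # full GPU slot

print(f"x4  ≈ {x4:.2f} GB/s")   # ≈ 3.94 GB/s per NVMe drive
print(f"x16 ≈ {x16:.2f} GB/s")  # ≈ 15.75 GB/s for the full 16 lanes
```

So three x4 drives going flat out need ~12 GB/s of the ~15.75 GB/s the 16 lanes can carry, which is why the time-multiplexing argument mostly works out.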
As for saturating those SSDs, I have a very unusual use case. One component is highly IO-intensive database access, so it's more about maximum IOPS at high queue depth than about maximum sequential transfer rate. And I have other components accessing the NVMe drives which I can't have competing with each other for SSD access; that's why I'm using separate SSDs rather than RAID. And with that scenario, it's even more important to have the SSDs on direct CPU-attached lanes.
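If anyone wants to reproduce that kind of workload, something like the following fio job file is roughly what I mean by high-queue-depth random IO (the device path and the exact depth/job counts are just placeholders for illustration; careful, writing to a raw device is destructive):

```ini
; high-queue-depth 4K random read test (hypothetical device path)
[highqd-randread]
filename=/dev/nvme0n1
rw=randread
bs=4k
iodepth=32
numjobs=4
ioengine=libaio
direct=1
runtime=60
time_based
```

Run with `fio highqd.fio` and compare the reported IOPS rather than the MB/s figure.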
As for the cooling, I'm not quite sure whether high-IOPS access strains the SSD's controller even more than sequential access does (resulting in even more heat output). But as far as I know, using the kryoM.2 evo adapters (
https://www.amazon.com/Aquacomputer-KryoM-2-Adapter-Passive-Heatsink/dp/B0742LW4WB/ ) plus enough airflow should do the trick. I will replace the stock thermal pads with 14 W/mK pads (
https://www.amazon.com/Alphacool-Eisschicht-Thermal-120mm-2-Pack/dp/B00ZC9SSWQ/ ), though.
HiVizMan said he is trying to get a few details while I'm still waiting for my CPU to be shipped. If even he can't get any details in time, I will probably have to buy the board to check out the VRM myself and hope for the best.