Most "audiophile-grade" sound devices are rated for 24-bit/192 kHz. That's not a formal technical standard, just a de facto one that's become popular through common usage.
The general consensus is that higher bit depths and sample rates are basically meaningless because the differences are imperceptible to the human ear, although some audiophiles argue (rather vehemently) that finer and faster digitization provides higher resolution, so it must logically always be better. (And, of course, some audiophile purists still insist that nothing digital is *ever* actually audiophile quality, lol.)
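To put some numbers on the bit-depth side of that argument, here's a back-of-envelope sketch using the standard signal-to-quantization-noise formula for an ideal N-bit converter (roughly 6.02·N + 1.76 dB). The function name is just illustrative:

```python
# Sketch: theoretical dynamic range of an ideal N-bit quantizer,
# using the standard SQNR approximation: ~6.02*N + 1.76 dB.

def dynamic_range_db(bits: int) -> float:
    """Approximate signal-to-quantization-noise ratio in dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24, 32):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")

# 16-bit already gives ~98 dB of dynamic range; human hearing spans
# roughly 120 dB from threshold to pain, and a real listening room
# (with its ambient noise floor) offers far less usable range.
```

Which is why the extra bits of 24-bit recordings matter for production headroom but are hard to justify as audibly better on playback.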
From an engineering standpoint, unnecessary rescaling and transcoding will never (by themselves) improve the playback fidelity of recorded audio, but they can introduce digital artifacts and electrical noise that actually diminish it.
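A minimal sketch of why rescaling can't help: once a sample has been quantized at 16-bit, requantizing it on a finer 24-bit grid just reproduces the same value more precisely. The error from the first quantization is baked in. (The sample value and helper below are purely illustrative.)

```python
# Sketch: "upscaling" a 16-bit sample to 24-bit cannot recover
# information lost in the original 16-bit quantization.

def quantize(x: float, bits: int) -> float:
    """Round a sample in [-1.0, 1.0] to the nearest step of a
    signed integer grid with the given bit depth."""
    steps = 2 ** (bits - 1) - 1   # e.g. 32767 for 16-bit
    return round(x * steps) / steps

original = 0.123456789            # hypothetical "analog" sample value
as16 = quantize(original, 16)     # recorded at 16-bit
as24 = quantize(as16, 24)         # later transcoded to 24-bit

err16 = abs(original - as16)
err24 = abs(original - as24)

# The 24-bit copy is no closer to the original than the 16-bit one:
print(err16, err24)
```

The two errors come out essentially identical, because the 24-bit pass only sees the already-quantized 16-bit value, not the original waveform.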
From a practical standpoint, I'd use 24-bit/192 kHz as a baseline, then experiment with other settings and keep whichever ones actually sound best.
And there's no need to upscale audio beyond the resolution of the recorded source. Most digital audio files in games, movies, and music (even the uncompressed or "lossless" ones) were encoded at 16-bit/44.1 or 48 kHz, and at most 24-bit/192 kHz; playing them back at, say, 32-bit/384 kHz simply can't and won't improve how they sound.
"All opinions are not equal. Some are a very great deal more robust, sophisticated and well supported in logic and argument than others." - Douglas Adams
[/Korth]