It is way too early in the game to make any specific hardware recommendations. We need to keep an eye on price, but given that there is an option that is essentially free (just using Home), I don’t feel like that needs to be the primary deciding factor.
Given the history of computing, if we push out a stack of software next spring that runs on a system costing $X, we can fairly expect it to run on a system costing half that within 18 months. And remember, the software itself is also progressing very rapidly. The DeepSpeech memory requirement dropped from something like 6 GB to a few hundred MB, for example. That was an exceptional case, but I’m certain there will be other significant software optimizations.
This is also part of the “does it have to be NVIDIA” discussion. Today TensorFlow’s GPU acceleration only works with certain (NVIDIA) GPUs, but I’m certain that is going to change over the next 6 months. A year ago there was no way to run TensorFlow on a Raspberry Pi, but now we have TensorFlow Lite. Is it exactly as capable? No. But many things can be done with it with only minor modifications.