
If I remember correctly, about 80% of a modern FPGA's silicon is used for interconnect. FPGAs have their uses, and very often a big part of their value is the field programmability. If that is not required, there is no good reason another solution (ASIC, GPU, etc.) couldn't beat the FPGA in theory. In practice there are some niches where this is not absolutely true, but I agree with GP that I see challenges for deep learning.



An ASIC will always have better performance than an FPGA, but it will have an acceptable cost only if it is produced in large enough numbers. You will always want an ASIC, but only seldom will you be able to afford it.

So the ASIC vs. FPGA decision is straightforward: it is always based on the estimated price of the ASIC, which in turn depends on the number of units that would be needed.
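That volume-driven decision can be sketched as a simple break-even calculation, amortizing the ASIC's one-time NRE (masks, tooling) over units. All figures below are hypothetical, for illustration only:

```python
import math

def asic_break_even(nre_cost, asic_unit_cost, fpga_unit_cost):
    """Smallest volume at which total ASIC cost drops below total FPGA cost.

    Assumes the FPGA route has no NRE and per-unit costs are constant.
    """
    if fpga_unit_cost <= asic_unit_cost:
        return None  # the ASIC never pays off on unit cost alone
    return math.ceil(nre_cost / (fpga_unit_cost - asic_unit_cost))

# e.g. $2M NRE, $5/unit ASIC vs. $80/unit FPGA (made-up numbers):
units = asic_break_even(2_000_000, 5, 80)
print(units)  # -> 26667
```

Below that volume the FPGA wins on cost despite its performance handicap; above it, the ASIC's NRE is paid off and every further unit widens the gap.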

The decision between off-the-shelf components, i.e. GPUs and FPGAs, is made based on performance per dollar and performance per watt, and it depends very strongly on the intended application. If the application must compute many operations on wider number formats, e.g. FP32 or FP16, then it is unlikely that an FPGA can compete with a GPU. When arithmetic does not form the bulk of an algorithm, an FPGA may be competitive, but a detailed analysis must be made for any specific application.
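The workload dependence of that comparison can be shown with two made-up scenarios (none of these figures are benchmarks of real parts): on dense FP16 arithmetic a GPU runs near peak throughput, while on irregular bit-level processing its utilization collapses and a custom FPGA pipeline holds its own:

```python
def perf_scores(throughput, price_usd, power_w):
    """Return (throughput per dollar, throughput per watt) for a device."""
    return throughput / price_usd, throughput / power_w

# Workload A: dense FP16 arithmetic -- the GPU runs near peak.
gpu_a  = perf_scores(100e12, 10_000, 300)   # hypothetical GPU figures
fpga_a = perf_scores(5e12,    8_000,  75)   # hypothetical FPGA figures

# Workload B: irregular bit-level processing -- GPU utilization
# collapses, while the FPGA's custom pipeline keeps its throughput.
gpu_b  = perf_scores(2e12, 10_000, 300)
fpga_b = perf_scores(5e12,  8_000,  75)

print(gpu_a > fpga_a)   # GPU wins both metrics on workload A -> True
print(fpga_b > gpu_b)   # FPGA wins both metrics on workload B -> True
```

The same two devices flip winners purely because of how much of each device's peak the workload can actually use, which is why the analysis has to be redone per application.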



