It’s an amazing time for data scientists everywhere, as hardware that can crunch numbers like never before comes online. It wasn’t long ago that there were only two major CPU manufacturers in the world, Intel and AMD, iterating on essentially the same architecture for a decade. GPUs were only for video games, and it wasn’t until AlexNet smashed the previous ImageNet records using GPUs that AI researchers turned to their parallel-processing might for matrix calculations.
In this article, Paul Mooney, who holds a PhD in Molecular and Cellular Life Sciences, runs experiments on CPUs, GPUs, and TPUs to break down where you get the most bang for your buck.
GPUs can really crank on deep learning workloads, and TPUs can speed them up even more dramatically. Still, the question remains: is it worth the extra cost for those custom chips from AI powerhouse Google?
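To get a feel for the kind of comparison the article runs, here is a minimal timing sketch, not the author’s actual benchmark. It assumes TensorFlow 2.x; the device strings, matrix size, and repeat count are illustrative, and a TPU run would additionally require a Cloud TPU or Colab TPU runtime.

```python
import time
import tensorflow as tf

def benchmark_matmul(device, n=4000, repeats=10):
    """Average the wall-clock time of an n x n matrix multiply on a device."""
    with tf.device(device):
        a = tf.random.normal((n, n))
        b = tf.random.normal((n, n))
        tf.matmul(a, b)  # warm-up so kernel setup isn't timed
        start = time.perf_counter()
        for _ in range(repeats):
            c = tf.matmul(a, b)
        _ = c.numpy()  # force execution to finish before stopping the clock
        return (time.perf_counter() - start) / repeats

for device in ["/CPU:0", "/GPU:0"]:
    try:
        print(f"{device}: {benchmark_matmul(device):.4f} s per matmul")
    except RuntimeError as err:
        print(f"{device}: unavailable ({err})")
```

On a machine with a GPU, the gap between the two timings gives a rough sense of the speedup the article quantifies more carefully.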