Stanford Team Develops 11 Billion Parameter Deep Learning System Using COTS GPU Hardware

Adam Coates and colleagues at Stanford have developed a deep learning system with over 11 billion learnable parameters. The ability to scale these algorithms up has been one of the key drivers of progress in deep learning. Andrew Ng's team at Google had previously reported a system that required 16,000 CPU cores to train a model with 1 billion parameters. The Stanford result shows that it is possible to build massive deep learning systems using only COTS (commercial off-the-shelf) GPU hardware, hopefully making such systems available to significantly more groups.