Facebook plans to open-source Big Sur, its latest Open Rack-compatible hardware designed for large-scale AI computing, the company announced in an official blog post. The company will submit the design materials to the Open Compute Project (OCP).
In collaboration with partners, Facebook built Big Sur to incorporate eight high-performance GPUs of up to 300 watts each, with the flexibility to switch between multiple PCI-e topologies. “Leveraging NVIDIA’s Tesla Accelerated Computing Platform, Big Sur is twice as fast as our previous generation, which means we can train twice as fast and explore networks twice as large,” said the company in the blog post. “Distributing training across eight GPUs allows us to scale the size and speed of our networks by another factor of two,” it added.
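The scaling Facebook describes is data-parallel training: each GPU holds a replica of the model and processes a different slice of every batch, and the gradients are synchronized across devices so the effective batch (and training throughput) grows with the GPU count. Facebook's post does not include code, so the sketch below is purely illustrative: it uses PyTorch's DistributedDataParallel with a toy model and placeholder hyperparameters, not Facebook's actual training stack.

```python
# Illustrative sketch only: a minimal data-parallel training loop, launched with
# one process per GPU (e.g. 8 processes on an 8-GPU, Big Sur-class machine).
# Model, data, and hyperparameters here are placeholders, not Facebook's.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy network standing in for a real model; each GPU holds a full replica.
    model = nn.Sequential(
        nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)
    ).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        # Each process trains on its own shard of data; gradients are
        # all-reduced across the GPUs, so the effective batch is N_GPUS times larger.
        x = torch.randn(32, 1024, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()   # gradient all-reduce happens during backward
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched as, for example, `torchrun --nproc_per_node=8 train.py`, the same script runs once per GPU, which is the pattern the quoted "distributing training across eight GPUs" refers to.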
Facebook uses the server to run its machine learning programs, a type of AI software that “learns” and gets better at tasks over time. The company set up the Open Compute Project to let companies share designs for new hardware.