Google has signalled it intends to design its own server-grade systems-on-a-chip (SoCs), because it sees the technology as its next-generation compute platform.
A missive by Amin Vahdat, a Google fellow and veep of systems infrastructure, said the web ad giant has hired Uri Frank to head its server chip engineering team and make it happen.
Frank led next-gen Core processor development at Intel from 2016 to 2020 and served as vice president and director of product development of Chipzilla’s Platform Engineering Group. In that role, according to his LinkedIn profile, Frank’s duties included “managing multiple SoC teams from definition to production.”
Vahdat said Frank and new SoCs are needed because while Google has built mighty computing facilities to run its many services, “custom chips are one way to boost performance and efficiency now that Moore’s Law no longer provides rapid improvements for everyone.”
“Compute at Google is at an important inflection point,” Vahdat said. “To date, the motherboard has been our integration point, where we compose CPUs, networking, storage devices, custom accelerators, memory, all from different vendors, into an optimized system.
“But that’s no longer sufficient: to gain higher performance and to use less power, our workloads demand even deeper integration into the underlying hardware.
“Instead of integrating components on a motherboard where they are separated by inches of wires, we are turning to ‘Systems on Chip’ (SoC) designs where multiple functions sit on the same chip, or on multiple chips inside one package. In other words, the SoC is the new motherboard.”
The post makes no mention of which architecture Google fancies for its future SoC, a choice with enormous consequences because Google buys huge quantities of hardware for its own operations.
Arm-based server SoCs have already proven popular with the likes of AWS and Equinix. Vahdat also mentioned low power consumption, a factor that is often Arm’s trump card. We also note that Google is a founding member of RISC-V International, and has experimented with OpenPower (of which it is also a founding member).
However, Frank’s Intel experience suggests that Google adopting a non-x86 architecture for a future server-level SoC can’t be assumed. Intel is known to do custom-ish cuts of Xeons for big customers. x86 SoCs are also already a big market: AMD makes them by the tens of millions for the Xbox and PlayStation.
Vahdat’s post signs off by saying that Google will work “with our global ecosystem of partners… to innovate at the leading edge of compute infrastructure, delivering the next generation of capabilities that are not available elsewhere.” Google is big enough to partner with most of the big SoC players: AMD and Intel are in its data centers, while Arm and Qualcomm are neck-deep in Android.
Whatever Google builds, the goliath is known to be committed to containers – its in-house Borg tech was spun out as Kubernetes. Borg continues to evolve and now works alongside an autoscaling tool called Autopilot that sometimes assigns workloads with different requirements to a single physical server. That arrangement suggests any Google SoC may need broad capabilities, and probably also many cores, as 1:1 container:core ratios are now common. ®