Intel, Google, Microsoft, Meta, Cisco, and other tech giants have announced the formation of the Ultra Accelerator Link (UALink) Promoter Group, a strategic move aimed at curbing Nvidia’s dominance in the AI accelerator market.
The group, which also includes AMD, Hewlett Packard Enterprise, and Broadcom, seeks to develop a new industry standard for high-speed, low-latency communication for scale-up AI systems in data centers, directly competing with Nvidia’s NVLink.
The group’s proposal, UALink 1.0, will enable the connection of up to 1,024 AI accelerators within a computing pod, allowing direct memory loads and stores between accelerators like GPUs. The UALink Consortium, expected to be incorporated in Q3 2024, will oversee the development. UALink 1.0 is expected to be available around the same time, with a higher-bandwidth update, UALink 1.1, set for Q4 2024.
Nvidia under attack
Sachin Katti, SVP & GM, Network and Edge Group, Intel Corporation, said, “UALink is an important milestone for the advancement of Artificial Intelligence computing. Intel is proud to co-lead this new technology and bring our expertise in creating an open, dynamic AI ecosystem. As a founding member of this new consortium, we look forward to a new wave of industry innovation and customer value delivered through the UALink standard.”
Gartner estimates that AI accelerators used in servers will total $21 billion this year, growing to $33 billion by 2028, while AI chip revenue from compute electronics is projected to hit $33.4 billion by 2025. Microsoft, Meta, and Google have already invested billions in Nvidia hardware for their clouds and AI models, and are understandably seeking to reduce their dependence on a company that controls an estimated 70% to 95% of the AI accelerator market.
Notably absent from this initiative is Nvidia, for understandable reasons. The company is naturally reluctant to support a rival standard that could challenge its proprietary NVLink technology and potentially dilute its considerable market influence.
Forrest Norrod, AMD’s GM of Data Center Solutions, said, “The work being done by the companies in UALink to create an open, high performance and scalable accelerator fabric is critical for the future of AI. Together, we bring extensive experience in creating large scale AI and high-performance computing solutions that are based on open-standards, efficiency and robust ecosystem support. AMD is committed to contributing our expertise, technologies and capabilities to the group as well as other open industry efforts to advance all aspects of AI technology and solidify an open AI ecosystem.”