Too little, too late? Bertha Language Processing Unit joins Groq's ultrafast LPU as challenge to Nvidia's formidable GPU firepower grows
Date:
Mon, 21 Oct 2024 12:31:00 +0000
Description:
HyperAccel's Bertha LPU aims to challenge Nvidia's GPU stranglehold on AI
FULL STORY ======================================================================
South Korean AI startup HyperAccel partnered with platform-based SoC and ASIC designer SEMIFIVE back in January 2024 to develop the Bertha LPU.
Tailored for LLM inference, Bertha offers low cost, low latency, and domain-specific features, with the aim of replacing high-cost,
low-efficiency GPUs. SEMIFIVE reports that work has now concluded, and the processor, designed using 4nm technology, is slated for mass production by early 2026.
HyperAccel claims Bertha can deliver up to double the performance and a 19
times better price-to-performance ratio than a typical supercomputer, but it faces tough competition in a market where Nvidia's GPUs are deeply entrenched.
Facing challenges
"We are delighted to work with SEMIFIVE, a leading provider of SoC platforms and comprehensive ASIC design solutions, for the development of Bertha to be mass-produced," said Joo-Young Kim, CEO of HyperAccel. "By collaborating with SEMIFIVE, we are excited to offer customers AI semiconductors that provide more cost-effective and power-efficient LLM features than GPU platforms. This advancement will significantly reduce the operational expenses of data
centers and expand our business scope to other industries that require LLMs."
Groq, an AI challenger headquartered in Silicon Valley and led by ex-Google engineer and CEO Jonathan Ross, has already made strides with its own LPU product, focusing on high-speed AI inference.
Groq's technology, which provides cloud and on-prem inference at scale for AI applications, has already found a sizable audience, with over 525K developers using the LPU since it launched in February. Bertha's late entry might put it at a disadvantage.
Brandon Cho, CEO and co-founder of SEMIFIVE, is more upbeat about Bertha's chances. He said, "HyperAccel is a company with the most efficient and
scalable LPU technology for LLMs. As the demand for LLM computation is skyrocketing, HyperAccel has the potential to become a new powerhouse in the global processor infrastructure."
Bertha's focus on efficiency could attract enterprises looking for alternatives to reduce operational costs, but with Nvidia's dominance unmatched, HyperAccel's product may find itself fighting for a niche in an already crowded space, rather than becoming an AI leader.
More from TechRadar Pro
- These are the best AI tools around today
- Groq's ultrafast LPU could well be the first LLM-native processor
- AI is becoming increasingly vital in software development
======================================================================
Link to news story:
https://www.techradar.com/pro/too-little-too-late-bertha-language-processing-unit-joins-groqs-ultrafast-lpu-as-challenge-to-nvidias-formidable-gpu-firepower-grows
--- Mystic BBS v1.12 A47 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)