New EE Times article: “Solving AI’s Memory Bottleneck,” written by Sally Ward-Foxton in the AI & Big Data Designline.
- “Among the criticisms aimed by skeptics at current AI technology is that the memory bottleneck – caused by the inability to accelerate data movement between processor and memory – is holding back useful real-world applications.”
- “Emerging from stealth mode at Hot Chips, Esperanto offered yet another take on the memory bottleneck problem. The company’s 1000-core RISC-V AI accelerator targets hyper-scaler recommendation model inference rather than AI training workloads mentioned above.”
- “Comparing Esperanto’s approach to other data center inference accelerators, Ditzel said others focus on a single giant processor consuming the entire power budget. Esperanto’s approach – multiple low-power processors mounted on dual M.2 accelerator cards – better enables use of off-chip memory, the startup insists. Single-chip competitors “have a very limited number of pins, and so they have to go to things like HBM to get very high bandwidth on a small number of pins, but HBM is really expensive, and hard to get, and high power,” he said.”