Esperanto in RISC-V Community News

RISC-V Foundation Community News Blog:
"Accelerating ML Recommendation with Over 1,000 RISC-V/Tensor Processors on Esperanto’s ET-SoC-1 Chip."
With Dave Ditzel, Esperanto Technologies.



The Genius of RISC-V Microprocessors – ACCU 2022

Watch the video: "The Genius of RISC-V Microprocessors."
With Erik Engheim at the ACCU Conference 2022.
Esperanto Technologies' ET-SoC-1 is featured at 15:00.



RISC-V in Electronic Design

Article in electronicdesign.com: “RISC-V Serves Up Open-Source Possibilities for the Future.”



IEEE Micro: Accelerating ML Recommendation

In IEEE Micro with Dave Ditzel: "Accelerating ML Recommendation With Over 1,000 RISC-V/Tensor Processors on Esperanto's ET-SoC-1 Chip."



Stanford Video: Accelerating ML Recommendation with over a Thousand RISC-V/Tensor Processors

Stanford Video with Dave Ditzel: "Accelerating ML Recommendation with over a Thousand RISC-V/Tensor Processors on a 7nm Chip."



ITNEXT.io: RISC-V Is Actually a Good Design

ITNEXT.io article by Erik Engheim: "Yeah, RISC-V Is Actually a Good Design"

  • "Well known people in the industry such as Dave Jaggar, Jim B. Keller and Dave Ditzel give RISC-V the thumbs up."



Hackster.io: Esperanto Technologies Begins Trialing 1,093-Core ET-SoC-1 RISC-V ML Accelerator

Initial results from the company's early access program are promising — and you can apply as a member now to receive a test chip of your own.

by Gareth Halfacree in Machine Learning & AI




RISC-V Startup Esperanto Technologies Samples First AI Silicon

In Forbes, Enterprise Tech, by Karl Freund: "RISC-V Startup Esperanto Technologies Samples First AI Silicon."

Karl wrote:

  • "I had the chance to watch a demo of the platform and came away quite impressed with the performance and power efficiency of the RISC-V based platform."
  • "I was also pleased to see that the Esperanto device is not a one-trick pony, as the team demonstrated Resnet50, DLRM, and the Transformer network underlying BERT."



Esperanto Technologies’ Massively Parallel RISC-V AI Inferencing Solution Now in Initial Evaluations

Delivering Industry-Leading Energy Efficiency, Esperanto’s ML Inference Accelerator Is Designed to Be the Highest-Performance Commercial RISC-V AI Chip

MOUNTAIN VIEW, Calif., April 20, 2022 – Esperanto Technologies™, the leading developer of high performance, energy-efficient artificial intelligence (AI) inference accelerators based on the RISC-V instruction set, today announced that initial evaluations for its ET-SoC-1 AI inference accelerator are underway with lead customers. Additional slots are available to qualified customers who have an interest in AI inferencing accelerators for datacenter applications.

To inquire about the evaluation program, please visit esperanto.ai/technology/#eap.

“Our data science team was very impressed with the initial evaluation of Esperanto’s AI acceleration solution,” said Dr. Patrick Bangert, vice president of Artificial Intelligence at Samsung SDS. “It was fast, performant and overall easy to use. In addition, the SoC demonstrated near-linear performance scaling across different configurations of AI compute clusters. This is a capability that is quite unique, and one we have yet to see consistently delivered by established companies offering alternative solutions to Esperanto.”

Esperanto’s evaluation program enables users to obtain performance data from running a variety of off-the-shelf AI models including recommendation, transformer and visual networks on the ET-SoC-1 AI Inference Accelerator. Users can set options including model and dataset selection, data type, batch size and compute configuration of up to 32 clusters containing over 1,000 RISC-V cores with ML-optimized tensor units. Customers can run many inference jobs, with the results provided in detailed histogram reports as well as fine-grain visibility into silicon performance.
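The options described above could be captured in a simple run configuration. The following is a hypothetical sketch for illustration only; the field names and values are assumptions and do not reflect Esperanto's actual evaluation tooling or API.

```python
# Hypothetical sketch of an ET-SoC-1 evaluation run configuration.
# All field names and values are illustrative assumptions; they do not
# represent Esperanto's actual tooling.
run_config = {
    "model": "dlrm",          # recommendation, transformer, or visual network
    "dataset": "sample-set",  # dataset selection
    "dtype": "int8",          # data type
    "batch_size": 256,        # batch size
    "clusters": 32,           # compute configuration: up to 32 clusters
}

# Sanity-check against the documented maximum of 32 compute clusters.
assert 1 <= run_config["clusters"] <= 32
print(run_config["model"], run_config["clusters"])
```

Each run would then report results as the detailed histograms and fine-grain silicon performance data described above.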

“Esperanto has made very impressive progress and is now providing customers evaluation access to their RISC-V hardware and software running off-the-shelf AI models with strong performance and efficiency. This really shows the company’s confidence in their first multi-core solution,” said Karl Freund, founder and principal analyst at Cambrian-AI Research. “In addition, because Esperanto’s chip is RISC-V-based, it has the programming tools and software stack to more easily adapt to new AI workloads, alongside non-AI workloads, all running on the same silicon. This step forward is another very strong indicator of the bright future of RISC-V.”

“Esperanto has achieved an industry first by demonstrating its massively parallel RISC-V silicon running a variety of real-world AI workloads,” said Richard Wawrzyniak, principal analyst at Semico Research. “It was exciting for me to see the company put the chip through its paces across a variety of scenarios including different models, data types, batch sizes and compute cluster combinations – all showing competitive results. This is another positive step forward for the RISC-V industry in the AI space as this new market continues to grow even faster than we had previously forecasted.”

“Harnessing the power of over 1,000 RISC-V processors is a major accomplishment, and we are very pleased with the results which validate our initial projections of performance and efficiency,” said Art Swift, president and CEO of Esperanto Technologies. “We look forward to extending access to a broader range of qualified companies, as we accelerate our RISC-V roadmap efforts with a growing number of strategic partners for applications spanning from Cloud to Edge.”

Esperanto Technologies is the AI RISC-V leader, offering massively parallel 64-bit RISC-V-based tensor compute cores currently delivered in the form of a single chip with 1,088 ET-Minion compute cores and a shared high-performance memory architecture. Designed to meet the performance, power and total cost of ownership (TCO) requirements of large-scale datacenter customers, Esperanto’s inference chip is a general-purpose, parallel-processing solution that can accelerate many parallelizable workloads. It is designed to run any machine learning (ML) workload well, and to excel at ML recommendation models, one of the most important types of AI workloads in many large datacenters.

About Esperanto Technologies:
Esperanto Technologies develops massively parallel, high-performance, energy-efficient computing solutions for Artificial Intelligence / Machine Learning based on the open standard RISC-V instruction set architecture. Esperanto is headquartered in Mountain View, California with additional engineering sites in Portland, Oregon; Austin, Texas; Barcelona, Spain; and Belgrade, Serbia. For more information, please visit https://www.esperanto.ai/


RISC-V AI Chips Are Joining GPU Race for AI Processing: CDO Trends

In CDO Trends, see Paul Mah's article on how "RISC-V AI Chips Are Joining GPU Race for AI Processing."

He writes:

"The future has RISC-V in it

One such startup, Esperanto Technologies, utilized a modified RISC-V design with 1,092 cores into a system-on-a-chip (SoC) half the size of the popular A100 GPU from Nvidia.

As reported on IEEE Spectrum, the team created their own vector instructions to complement RISC-V’s efficient 47 instructions (A typical Intel desktop processor has close to a thousand instructions) to support machine learning math such as matrix multiplication.

The ET-SoC-1 from Esperanto is envisioned to accelerate AI in power-constrained data centers through expansion boards that fit into a standard peripheral component interconnect express (PCIe) slot. According to the report, each board can deliver 800 trillion operations per second.

What sets Esperanto’s solution apart is how each board uses multiple low-power SoC chips instead of a giant SoC. According to the AI chip maker, each ET-SoC-1 chip consumes 20 watts when performing a recommender-system benchmark neural network, or less than one-tenth of what the A100 GPU draws.

This allowed the team to place six chips for over 6,000 cores on a single AI accelerator card and still stay at around 120 watts.

And according to a report on All About Circuits last year, Esperanto claims an ET-SoC-1 outperforms the Nvidia A100 in both relative performance and energy efficiency running the MLPerf Deep Learning Recommendation Model benchmark."
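The per-card figures in the quoted article are straightforward to check: six 20-watt chips give roughly 120 watts per card, and six chips at 1,092 cores each give well over 6,000 cores. A quick back-of-the-envelope sketch using the article's numbers:

```python
# Back-of-the-envelope check of the per-card figures quoted above.
watts_per_chip = 20    # ET-SoC-1 power on the recommender benchmark
cores_per_chip = 1092  # RISC-V cores per chip, per the article
chips_per_card = 6     # chips on one AI accelerator card

card_watts = chips_per_card * watts_per_chip   # total card power
card_cores = chips_per_card * cores_per_chip   # total cores per card

print(card_watts, card_cores)  # 120 6552
```

This matches the article's "around 120 watts" and "over 6,000 cores" per accelerator card.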



© Esperanto Technologies, Inc.
All trademarks or registered trademarks are the property of Esperanto Technologies or their respective holders.
