Ampere Will Acquire OnSpecta to Accelerate AI Inference in Cloud-Native Applications
Talha Tamboli | July 29, 2021
Ampere® Computing announced today that it has agreed to acquire the AI technology firm OnSpecta, a move that will enhance the AI inference performance of Ampere® Altra® processors. OnSpecta's Deep Learning Software (DLS) AI optimization engine can significantly outperform commonly used CPU-based machine learning (ML) frameworks.
The two companies have already collaborated, demonstrating over 4x acceleration on Ampere-based instances running typical AI inference workloads. The acquisition also includes an optimized model zoo covering object detection, video processing, and recommendation engines. Terms of the transaction were not disclosed; it is expected to close in August, subject to standard closing conditions.
"We are thrilled to welcome the amazing OnSpecta team to Ampere," said Renee James, founder, chairman, and CEO of Ampere Computing. "With the addition of deep learning capabilities, Ampere will be able to deliver a more robust platform for inference processing with lower power, higher performance, and better predictability than ever before. This acquisition demonstrates our commitment to providing our customers with a truly distinctive cloud-native computing platform for both cloud and edge deployments."
According to IDC Research, the AI server market will be worth more than $26 billion by 2024, growing at 13.7 percent annually. Ampere customers are looking for ways to manage the costs and growing demands of AI inference workloads across both centralized and edge infrastructure. DLS is a seamless binary drop-in library for many common AI frameworks that dramatically accelerates inference on Ampere Altra. It enables use of the Altra-native FP16 data format, which can deliver twice the performance of FP32 without significant accuracy loss or model retraining.
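DLS itself is proprietary, but the FP16-versus-FP32 trade-off described above can be sketched in plain NumPy. This is an illustrative stand-in, not OnSpecta's code: the layer shapes and tolerances are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "layer": an FP32 weight matrix and an FP32 input batch.
w = rng.standard_normal((64, 32)).astype(np.float32)
x = rng.standard_normal((8, 64)).astype(np.float32)

ref = x @ w                                           # FP32 inference
out = x.astype(np.float16) @ w.astype(np.float16)     # FP16 inference: same math, half-width data

# Each FP16 value occupies half the bytes of an FP32 value,
# which halves memory traffic per element...
assert out.dtype == np.float16
assert out.itemsize * 2 == ref.itemsize

# ...while the results stay close to the FP32 reference,
# with no retraining of the weights required.
assert np.allclose(ref, out.astype(np.float32), rtol=1e-2, atol=0.1)
```

The same principle underlies hardware FP16 support: narrower operands mean more values per vector register and per cache line, so throughput roughly doubles when accuracy requirements allow it.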
"This is a natural continuation of our existing strong partnership with Ampere," said OnSpecta co-founder and CEO Indra Mohan. "Being part of Ampere will immensely benefit our team as we help build on the great success of Ampere Altra and provide vital support to customers as they deploy the Altra product line across a wide variety of AI inference use cases."
"We have already seen the powerful performance and ease of use of Ampere Altra and OnSpecta on the Oracle OCI Ampere A1 instance," said Clay Magouyrk, executive vice president of Oracle Cloud Infrastructure. "With DLS compatibility across all major open source AI frameworks, including TensorFlow, PyTorch, and ONNX, as well as the predictable performance of Ampere Altra, we anticipate continued innovation on OCI Ampere A1 shapes for AI inference workloads."
With the world's first cloud-native processor, Ampere is shaping the future of hyperscale cloud and edge computing. Built for the cloud on a modern 64-bit Arm server architecture, Ampere enables customers to accelerate the delivery of all cloud computing applications. Ampere processors are designed for the continued growth of cloud and edge computing, with industry-leading cloud performance, power efficiency, and scalability.
OnSpecta's DLS software substantially accelerates AI inference workloads in the cloud and at the edge. The company is based in Redwood City, California, and is led by its founders, Indra Mohan (CEO) and Victor Jakubiuk (CTO). Its investors include Sage Hill Capital Partners, WestWave Capital, BMNT, and GoingVC Partners.