Deci | September 26, 2022
Deci, the deep learning company harnessing AI to build AI, today announced a new set of industry-leading semantic segmentation models, dubbed DeciSeg. Deci’s proprietary Automated Neural Architecture Construction (AutoNAC) technology automatically generated semantic segmentation models that significantly outperform the most powerful publicly available models, such as Apple’s MobileViT and Google’s DeepLab family. Deci’s models deliver more than 2x lower latency as well as 3-7% higher accuracy.
Semantic segmentation is one of the most widely used computer vision tasks across many business verticals, including automotive, smart cities, healthcare, and consumer applications, and is often required for many edge AI applications. However, significant barriers exist to running semantic segmentation models directly on edge devices, such as high latency and the inability to deploy those models due to their size.
With DeciSeg models, semantic segmentation tasks that previously could not be carried out at the edge because they were too resource intensive are now possible. This allows companies to develop new use cases and applications on edge devices, reduce inference costs (since AI practitioners will no longer need to run these tasks in expensive cloud environments), open new markets, and shorten development times.
“DeciSeg models are an example of the power of Deci’s AutoNAC engine to generate custom, hardware-aware deep learning models with unparalleled performance on any hardware. AI teams can easily use DeciSeg models or leverage Deci’s AutoNAC engine to build and deploy custom models that run real-time computer vision tasks on their edge devices,” said Yonatan Geifman, PhD, co-founder and CEO of Deci.
Deci’s platform has a proven track record of enabling AI at the edge and empowering AI teams to build and deploy production-grade deep learning models. Earlier this year, Deci announced the discovery of DeciNets for CPUs, which halved the gap between a model’s inference performance on a GPU versus a CPU without sacrificing the model’s accuracy, enabling AI to run on lower-cost, resource-constrained hardware.
“In the world of automated deep neural network design and construction, Deci’s AutoNAC technology is a game changer. It uses deep learning to search vast spaces of neural networks for the model most appropriate for a particular task and a particular AI chip. In this case, AutoNAC was applied to the Pascal VOC semantic segmentation task on NVIDIA’s Jetson Xavier NX™ chip, and we are very pleased with the results,” said Ran El-Yaniv, co-founder and Chief Scientist of Deci and Professor of Computer Science at the Technion – Israel Institute of Technology.
Deci’s platform serves customers across industries in various production environments, including edge, mobile, data center, and cloud. To learn more about how leading AI teams leverage Deci’s platform to build production-grade models and accelerate inference performance, visit Deci’s website.
Deci enables deep learning to live up to its true potential by using AI to build better AI. With the company's deep learning development platform, AI developers can build, optimize, and deploy faster and more accurate models for any environment, including cloud, edge, and mobile, allowing them to revolutionize industries with innovative products. The platform is powered by Deci's proprietary Automated Neural Architecture Construction (AutoNAC) technology, which automatically generates and optimizes the architecture of deep learning models, allowing teams to accelerate inference performance, enable new use cases on limited hardware, shorten development cycles, and reduce computing costs. Founded by Yonatan Geifman, Jonathan Elial, and Professor Ran El-Yaniv, Deci's team of deep learning engineers and scientists is dedicated to eliminating production-related bottlenecks across the AI lifecycle.
Nextech AR | September 23, 2022
Nextech AR Solutions Corp., a Metaverse company and leading provider of augmented reality (“AR”) experience technologies and 3D model services, is pleased to announce it has launched its groundbreaking Toggle3D, a new AI-powered SaaS platform that enables the creation, design, configuration, and deployment of 3D models at scale. The Company sees this launch as a major milestone on its way to becoming the dominant 3D model platform and looks forward to Toggle3D becoming a new high-margin engine of growth.
Toggle3D is a standalone web application that enables product designers, 3D artists, marketing professionals, and eCommerce site owners to create, customize, and publish high-quality 3D models and experiences with no technical or 3D design knowledge required. The Company believes that Toggle3D is the first platform of its kind, and this breakthrough SaaS product is a potential game changer for the manufacturing and design industry, as it provides a viable solution for converting large CAD files into lightweight 3D models affordably and at scale.
CAD is a function of product engineering. Industrial designers working for product manufacturers use CAD software such as AutoCAD and SolidWorks to design many of the products in the modern world. The Toggle3D platform leverages AI so that raw CAD files can be converted into photo-realistic, fully textured 3D models at scale. Toggle3D technology creates optimized 3D meshes suitable for 3D and AR applications.
The use of CAD files is ubiquitous across manufacturing verticals, including automotive, aerospace, industrial machinery, civil and construction, electrical and electronics, pharmaceutical, healthcare, consumer goods, and others. According to BIS Research, the CAD market, quantified by the amount spent on the creation of CAD files, is projected to reach $11 billion by 2023, and the Company sees substantial use cases.
Toggle3D uses Nextech AR’s patent-pending AI technology (which enables the conversion of CAD files into 3D models at scale) as well as the Company’s ARitize Configurator product. Creators can easily transform their CAD files into 3D models or bring their existing 3D models into the platform. Within the platform, creators will be able to conduct three types of projects, all templatized for a streamlined user experience: 3D Product Configurators, Virtual Photography, and Product Demos, which are being rolled out over the next few weeks.
To rapidly gain early adopters, Toggle3D will be available through both a free trial and a pro SaaS license. A free license gives users just enough functionality to try out the platform and experiment with the technology. By upgrading to a pro plan, users are granted full access to the entire platform, including unlimited projects, more materials, larger file uploads, more storage, and other advanced editing tools. It is a completely self-serve platform that contains an extensive pre-built library of more than 1,000 high-quality PBR materials. Toggle3D makes things easy, with a friendly user interface that works for the user and makes the entire 3D journey seamless and predictable. The user remains fully in control of their design output.
Yesterday, CEO Evan Gappelberg joined the Wall Street Reporter’s NEXT SUPER STOCK for a livestream event, where he discussed the new Toggle3D product and provided a live demo of the platform.
About ARitize Configurator
ARitize Configurator was previously only available as a managed service but now is also available as a self-serve product through the Toggle3D platform.
To use the product configurator within Toggle3D, a user is required to bring a CAD file or an existing 3D model. Once the 3D model is created or uploaded, the configurator tool makes it easy and seamless to change the colors, materials, and individual parts all in engaging real-time 3D.
About Nextech AR
Nextech AR Solutions is the engine accelerating the growth of the Metaverse. Using breakthrough AI, Nextech AR is able to quickly, easily, and affordably ARitize (transform) vast quantities and varieties of existing assets at scale, making products, people, and places ready for interactive 3D use, giving creators at every level all the essential tools they need to build out their digital AR vision in the Metaverse. Our platform-agnostic tools allow brands, educators, students, manufacturers, creators, and technologists to create the most immersive, interactive, and photo-realistic 3D assets and digital environments, compose AR experiences, and publish them omnichannel. With a full suite of end-to-end AR solutions in 3D Commerce, Education, Events, and Industrial Manufacturing, Nextech AR is in a unique position to meet the needs of the world’s biggest brands and all Metaverse contributors.
Run:ai | September 22, 2022
Run:ai, the leader in compute orchestration for AI workloads, today announced that its Atlas Platform is certified to run NVIDIA AI Enterprise, an end-to-end, cloud-native suite of AI and data analytics software that is optimized to enable any organization to use AI.
"The certification of Run:ai Atlas for NVIDIA AI Enterprise will help data scientists run their AI workloads most efficiently. Our mission is to speed up AI and get more models into production, and NVIDIA has been working closely with us to help achieve that goal."
Omri Geller, CEO and co-founder of Run:ai
With many companies now operating advanced machine learning technology and running bigger models on more hardware, demand for AI computing chips continues to grow. GPUs are indispensable for running AI applications, and companies are turning to software to reap the most benefit from their AI infrastructure and get models to market faster.
The Run:ai Atlas Platform uses a smart Kubernetes scheduler and software-based fractional GPU technology to give AI practitioners seamless access to multiple GPUs, multiple GPU nodes, or fractions of a single GPU. This enables teams to match the right amount of computing power to the needs of every AI workload, so they can get more done on the same chips. With these capabilities, Run:ai's Atlas Platform lets enterprises maximize the efficiency of their infrastructure, avoiding a scenario where GPUs sit idle or run at only a fraction of their capacity.
"Enterprises across industries are turning to AI to power the breakthroughs that will help improve customer service, boost sales and optimize operations," said Justin Boitano, vice president of enterprise and edge computing at NVIDIA. "Run:ai's certification for NVIDIA AI Enterprise provides customers with an integrated, cloud-native platform for deploying AI workflows with MLOps management capabilities."
Run:ai creates fractional GPUs as virtual ones within available GPU framebuffer memory and compute space. These fractional GPUs can be accessed by containers, enabling different workloads to run in these containers — in parallel and on the same GPU. Run:ai works well on VMware vSphere and bare metal servers, and supports various distributions of Kubernetes.
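To illustrate the idea behind fractional GPU allocation, the sketch below packs fractional workload requests onto whole GPUs using a simple first-fit strategy. This is a minimal, hypothetical illustration of the general technique only; the function and its names are assumptions for this example and do not reflect Run:ai's actual API or scheduling algorithm.

```python
# Illustrative sketch: first-fit packing of fractional GPU requests onto
# whole GPUs, in the spirit of the fractional-GPU scheduling described above.
# Hypothetical code only -- not Run:ai's actual implementation.

def allocate(requests, num_gpus):
    """Assign each fractional request (0 < r <= 1.0) to the first GPU with
    enough free capacity; return {request_index: gpu_index or None}."""
    free = [1.0] * num_gpus          # each GPU starts fully available
    placement = {}
    for i, r in enumerate(requests):
        for g in range(num_gpus):
            if free[g] >= r - 1e-9:  # tolerance for float arithmetic
                free[g] -= r         # carve the fraction out of this GPU
                placement[i] = g
                break
        else:
            placement[i] = None      # no GPU can fit this request; would queue
    return placement

# Four half-GPU workloads share two GPUs instead of requiring four.
print(allocate([0.5, 0.5, 0.5, 0.5], num_gpus=2))
# -> {0: 0, 1: 0, 2: 1, 3: 1}
```

In a real orchestrator the "free capacity" would correspond to GPU framebuffer memory and compute space carved out per container, and unplaced workloads would be queued by the scheduler rather than dropped.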
This certification is the latest in a series of Run:ai's collaborations with NVIDIA. In March, Run:ai completed a proof of concept which enabled multi-cloud GPU flexibility for companies using NVIDIA GPUs in the cloud. This was followed by the company fully integrating NVIDIA Triton Inference Server. And in June, Run:ai worked with Weights & Biases and NVIDIA to gain access to NVIDIA-accelerated computing resources orchestrated by Run:ai's Atlas Platform.
Run:ai's Atlas Platform brings cloud-like simplicity to AI resource management - providing researchers with on-demand access to pooled resources for any AI workload. An innovative cloud-native operating system - which includes a workload-aware scheduler and an abstraction layer - helps IT simplify AI implementation, increase team productivity, and gain full utilization of expensive GPUs. Using Run:ai, companies streamline development, management, and scaling of AI applications across any infrastructure, including on-premises, edge and cloud.