San Francisco-based AI infrastructure company Anyscale has unveiled a new service, Anyscale Endpoints, at Ray Summit 2023. The service enables application developers to seamlessly integrate open-source Large Language Models (LLMs) into their projects using popular LLM APIs. Anyscale claims that Endpoints is significantly more cost-effective than proprietary solutions, with costs up to 10 times lower for specific tasks.
Traditionally, developers faced challenges such as complex infrastructure, high compute costs, and time-consuming model development when working with open-source LLMs. Anyscale's Endpoints simplifies this process by offering easy API access to powerful GPUs at a competitive price, allowing developers to harness open-source LLM capabilities without the traditional complexity.
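To illustrate what "easy API access" via a popular LLM API typically looks like, the sketch below builds an OpenAI-style chat completion request using only the Python standard library. The base URL, model name, and request shape are assumptions for illustration, not details confirmed in the announcement; the request is constructed but not sent.

```python
import json
import urllib.request


def build_chat_request(base_url, api_key, model, messages):
    """Construct (but do not send) an OpenAI-style chat completion request."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Hypothetical endpoint URL and open-source model name, for illustration only.
req = build_chat_request(
    "https://api.endpoints.anyscale.com/v1",
    "YOUR_API_KEY",
    "meta-llama/Llama-2-70b-chat-hf",
    [{"role": "user", "content": "Hello"}],
)
```

Because the request follows the widely adopted OpenAI wire format, existing client code can often be pointed at an open-source-model endpoint by changing only the base URL and model name.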
Robert Nishihara, Co-Founder and CEO of Anyscale, noted that obstacles such as infrastructure complexity, compute resources, and cost have historically limited AI application developers' use of open-source LLMs.
The demand for generative AI and high-quality LLM applications is rapidly rising, with the generative AI market projected to grow from $40 billion in 2022 to $1.3 trillion over the next decade, according to Bloomberg Intelligence. Gartner notes the advantages of open-source models, including customizability, better deployment control, enhanced privacy and security, and the ability to leverage collaborative development.
Anyscale offers Endpoints at a competitive rate of $1 per million tokens for state-of-the-art open-source LLMs, making LLM services more accessible to application developers. Additionally, Anyscale can quickly add new models, ensuring users have access to the latest innovations from the open-source community.
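A quick back-of-the-envelope calculation shows what the quoted rate implies. This sketch assumes a flat $1-per-million-token price as stated; actual billing may vary by model or by input versus output tokens.

```python
PRICE_PER_MILLION_TOKENS = 1.00  # USD, the rate quoted for Anyscale Endpoints


def cost_usd(tokens: int) -> float:
    """Cost in USD to process a given number of tokens at the quoted flat rate."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS


# Example workloads at the quoted rate (assumed flat pricing):
one_million = cost_usd(1_000_000)        # 1.0 USD
large_workload = cost_usd(250_000_000)   # 250.0 USD
```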
Nishihara emphasized the significance of Endpoints, stating,
With seamless access via a simple API to powerful GPUs at a market-leading price, Endpoints lets developers take advantage of open-source LLMs without the complexity of traditional ML infrastructure. As AI innovation continues to accelerate, Endpoints enables developers to harvest the latest developments of the open-source community and stay focused on what matters—building the next generation of AI applications.
[Source: Globe Newswire]
Furthermore, Anyscale offers the option to run and use the Endpoints service within the customer's existing AWS or GCP cloud accounts, improving security and enabling the reuse of security controls and policies. Customers can also upgrade to the full Anyscale AI Application Platform for more customization and control over their data, models, and app architecture.
Anyscale Endpoints integrates seamlessly with popular Python and machine learning libraries and frameworks, facilitating various use cases across different cloud platforms as AI applications evolve.
Early users of Anyscale Endpoints have reported significant benefits, such as faster service deployment and cost advantages over proprietary alternatives. Anyscale's new service aims to empower developers to leverage open-source LLMs for their AI applications while reducing complexity and costs.
Anyscale is a leading AI application platform founded by the creators of Ray, an open-source framework for scalable computing. Based in San Francisco, California, the company empowers developers of all skill levels to build, run, and scale AI applications with ease, from individual laptops to extensive data centers. Anyscale's mission is to simplify AI development by eliminating the need for distributed-systems expertise, ensuring that every developer and team can succeed with AI. The company has gained traction in the industry, with organizations like Uber, OpenAI, Shopify, and Amazon using Ray for their machine learning platforms.