PR Newswire | August 25, 2023
Sendbird, the communications API platform powering 4,000 apps and 300 million monthly users, today announced that its enterprise Salesforce Connector has entered General Availability (GA), delivering efficiency-driving enhancements for customer service teams. This release adds a host of new features, including Sendbird SmartAssistant, a first-of-its-kind conversational AI solution. As part of Sendbird's leading communications platform, Salesforce Connector empowers any business to extend Salesforce Service Cloud capabilities and deliver an exceptional live chat support experience to customers. Now, with Sendbird SmartAssistant, a customizable, no-code generative AI chatbot is readily available to integrate into companies' support workflows, making conversations richer and more rewarding directly within an organization's mobile app.
Salesforce Connector provides superior chat capabilities, such as rich media attachments, image moderation, webhooks, and a customizable end-user experience that can be tailored to companies' specific needs. With integration of the AI chatbot, SmartAssistant, Sendbird now provides even more high-quality responses to support queries. Whether it's answering frequently asked questions or troubleshooting product issues, SmartAssistant for Salesforce Connector ensures helpful and human-like responses throughout the customer support journey.
Sendbird's cutting-edge generative AI solution enables users to leverage high-value first-party data for support interactions. Users can now easily create AI knowledge chatbots without the need for OpenAI credentials. This can provide personalized and human-like chatbot conversations to quickly meet customer needs while improving agent efficiency.
The new integration builds upon and extends Salesforce Connector's Summarize feature. What was tested conceptually through the beta period is now an essential, valuable part of the Salesforce Connector tool. Summarize is powered by ChatGPT and enables agents to get a comprehensive summary of an entire support chat conversation with a customer in an instant; agents no longer need to read the conversation from the very start to provide the best, immediate support for their users.
In addition to SmartAssistant and Summarize, Sendbird also incorporated new Moderation capabilities. Organizations can moderate message content and users from the convenience of a single dashboard. With the supported filters and moderation methods, they can determine the level of suitability of language, images, and other content for their applications using Salesforce Connector. Users also benefit from Auto-Translation, in which Salesforce Connector translates received messages into an agent's preferred language within the case chat.
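The press release does not detail how the Moderation filters are configured, but the idea of filtering message content against a suitability policy can be sketched in a few lines. The block list, function name, and verdict structure below are illustrative assumptions, not Sendbird's actual configuration or API:

```python
import re

# Hypothetical keyword-based message filter, illustrating the kind of
# language-suitability check a moderation dashboard applies. The block
# list here is a placeholder, not Sendbird's actual filter set.
BLOCKED_TERMS = {"badword", "slur"}

def moderate_message(text: str) -> dict:
    """Return a verdict, the matched terms, and a masked copy of the text."""
    tokens = re.findall(r"\w+", text.lower())
    hits = sorted(BLOCKED_TERMS.intersection(tokens))
    masked = text
    for term in hits:
        masked = re.sub(term, "*" * len(term), masked, flags=re.IGNORECASE)
    return {"allowed": not hits, "violations": hits, "masked_text": masked}
```

In a real deployment the same check could run on both message text and image captions, with the verdict deciding whether a message is delivered, masked, or held for review.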
"Salesforce Connector changes the game for customer support," said Sendbird CEO and Co-founder John S. Kim. "Agents have all the tools and answers they need right at their fingertips to solve even the most complex problems with ease. And with SmartAssistant, agents can help more customers than ever before with new levels of personalization and efficiency."
With Sendbird Salesforce Connector, organizations get an out-of-the-box solution with seamless, near-instant integration compared to other chat solutions that can take weeks. This translates to faster time to value. Additionally, as a cloud-based solution, the tool is persistently updated with no interruption to the user experience and no management demands on teams.
Sendbird, a member company of Born2Global Centre, believes conversations are at the heart of building relationships and getting things done. The company's global conversations platform powers over 7 billion mobile messages and interactions monthly. Industry leaders like Carousell, Traveloka, RedDoorz, Tiket, Rakuten, Viki, AirAsia, TADA, RuangGuru, Ralali, Reddit, and Paytm build with Sendbird chat, voice, video, and livestream APIs to create a differentiated user experience that improves customer retention, conversion, and satisfaction.
Globenewswire | July 04, 2023
At the 2023 Collision Conference, Mattermost, Inc., the secure collaboration platform for technical teams, announced the launch of “OpenOps”, an open-source approach to accelerating the responsible evaluation of AI-enhanced workflows and usage policies while maintaining data control and avoiding vendor lock-in.
OpenOps emerges at the intersection of the race to leverage AI for competitive advantage and the urgent need to run trustworthy operations, including the development of usage and oversight policies and ensuring regulatory and contractually-obligated data controls.
It aims to help clear key bottlenecks between these critical concerns by enabling developers and organizations to self-host a “sandbox” environment with full data control to responsibly evaluate the benefits and risks of different AI models and usage policies on real-world, multi-user chat collaboration workflows.
The system can be used to evaluate self-hosted LLMs listed on Hugging Face, including Falcon LLM and GPT4All, when usage is optimized for data control, as well as hyperscaled, vendor-hosted models from the Azure AI platform, OpenAI ChatGPT and Anthropic Claude when usage is optimized for performance.
The first release of the OpenOps platform enables evaluation of a range of AI-augmented use cases including:
Automated Question and Answer: During collaborative and individual work, users can ask questions of generative AI models, either self-hosted or vendor-hosted, to learn about the subject matters the model supports.
Discussion Summarization: AI-generated summaries can be created from self-hosted, chat-based discussions to accelerate information flows and decision-making while reducing the time and cost required for organizations to stay up-to-date.
Contextual Interrogation: Users can ask follow-up questions to thread summaries generated by AI bots to learn more about the underlying information without going into the raw data. For example, a discussion summary from an AI bot about a certain individual making a series of requests about troubleshooting issues could be interrogated via the AI bot for more context on why the individual made the requests and how they intended to use the information.
Sentiment Analysis: AI bots can analyze the sentiment of messages, which can be used to recommend and deliver emoji reactions on those messages on a user’s behalf. For example, after detecting a celebratory sentiment an AI bot may add a “fire” emoji reaction indicating excitement.
Reinforcement Learning from Human Feedback (RLHF) Collection: To help evaluate and train AI models, the system can collect feedback from users on responses from different prompts and models by recording the “thumbs up/thumbs down” signals end users select. The data can later be used both to fine-tune existing models and to provide input for evaluating alternate models on past user prompts.
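The sentiment-to-emoji use case above can be sketched minimally. A production OpenOps bot would call an LLM backend for classification; the word lists and the emoji mapping below are illustrative assumptions:

```python
# Minimal lexicon-based sketch of sentiment detection driving an emoji
# reaction, mirroring the Sentiment Analysis use case. The lexicons and
# the sentiment-to-emoji mapping are placeholder assumptions.
POSITIVE = {"great", "shipped", "congrats", "awesome", "fixed"}
NEGATIVE = {"broken", "outage", "failed", "blocked", "bug"}

def classify_sentiment(message: str) -> str:
    words = set(message.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def suggest_reaction(message: str):
    """Map detected sentiment to an emoji reaction name, or None."""
    return {"positive": "fire", "negative": "eyes"}.get(classify_sentiment(message))
```

Swapping `classify_sentiment` for a call to any of the interchangeable AI backends would leave the reaction logic unchanged, which is the point of keeping the two steps separate.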
This open source, self-hosted framework offers a "Customer-Controlled Operations and AI Architecture," providing an operational hub for coordination and automation with AI bots connected to interchangeable, self-hosted Generative AI and LLM backends from services like Hugging Face that can scale up to private cloud and data center architectures, as well as scale down to run on a developer’s laptop for research and exploration. At the same time, it can also connect to hyperscaled, vendor-hosted models from the Azure AI platform as well as OpenAI.
“Every organization is in a race to define how AI accelerates their competitive advantage,” says Mattermost CEO Ian Tien. “We created OpenOps to help organizations responsibly unlock their potential by evaluating how a broad range of usage policies and AI models, working in concert, can accelerate in-house workflows.”
The OpenOps framework recommends a four-phase approach to developing AI augmentations:
1 - Self-Hosted Sandbox - Have technical teams set up a self-hosted “sandbox” environment as a safe space with data control and auditability to explore and demonstrate Generative AI technologies. The OpenOps sandbox can include just web-based multi-user chat collaboration, or be extended to include desktop and mobile applications, integrations from different in-house tools to simulate a production environment, as well as integration with other collaboration environments, such as specific Microsoft Teams channels.
2 - Data Control Framework - Technical teams conduct an initial evaluation of different AI models on in-house use cases and set a starting point for usage policies covering data control issues with different models, based on whether models are self-hosted or vendor-hosted and, for vendor-hosted models, on their different data-handling assurances. For example, data control policies could range from completely blocking vendor-hosted AIs, to blocking the suspected use of sensitive data such as credit card numbers or private keys, to custom policies that can be encoded into the environment.
3 - Trust, Safety and Compliance Framework - Trust, safety and compliance teams are invited into the sandbox environment to observe and interact with initial AI-enhanced use cases and work with technical teams to develop usage and oversight policies in addition to data control. For example, setting guidelines on whether AI can be used to help managers write performance evaluations for their teams, or whether AI can be used to research techniques for developing malicious software.
4 - Pilot and Production - Once a baseline for usage policies and initial AI enhancements are available, a group of pilot users can be added to the sandbox environment to assess the benefits of the augmentations. Technical teams can iterate on adding workflow augmentations using different AI models while Trust, Safety and Compliance teams monitor usage with full auditability and iterate on usage policies and their implementations. As the pilot system matures, the full set of enhancements can be deployed to production environments running a production-ready version of the OpenOps framework.
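The data-control policies described in phase 2 amount to scanning a prompt for sensitive patterns before it leaves the organization. The sketch below shows the shape of such a check; the two patterns (card-like digit runs and PEM private-key headers) and the function name are illustrative assumptions, since real policies would be encoded per organization:

```python
import re

# Illustrative data-control gate: before a prompt is forwarded to a
# vendor-hosted model, scan it for patterns suggesting sensitive data.
# Self-hosted models skip the scan, since the data never leaves the org.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def check_prompt(prompt: str, vendor_hosted: bool):
    """Return (allowed, violations) for a prompt under the policy."""
    if not vendor_hosted:
        return True, []
    violations = [name for name, pattern in SENSITIVE_PATTERNS.items()
                  if pattern.search(prompt)]
    return not violations, violations
```

A stricter policy could block vendor-hosted models outright by returning `False` whenever `vendor_hosted` is true, matching the "completely blocking" end of the range described above.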
The OpenOps framework includes the following capabilities:
Self-Hosted Operational Hub: OpenOps allows for self-hosted operational workflows on a real-time messaging platform across web, mobile, and desktop from the Mattermost open-source project. Integrations with in-house systems and popular developer tools help enrich AI backends with critical, contextual data. Workflow automation accelerates response times while reducing error rates and risk.
AI Bots with Interchangeable AI Backends: OpenOps enables AI bots to be integrated into operations while connected to an interchangeable array of AI platforms. For maximum data control, work with self-hosted, open-source LLMs including GPT4All and Falcon LLM from services like Hugging Face. For maximum performance, tap into third-party AI frameworks including OpenAI ChatGPT, the Azure AI Platform, and Anthropic Claude.
Full Data Control: OpenOps enables organizations to self-host, control, and monitor all data, IP, and network traffic using their existing security and compliance infrastructure. This allows organizations to develop a rich corpus of real-world training data for future AI backend evaluation and fine-tuning.
Free and Open Source: Available under the MIT and Apache 2 licenses, OpenOps is a free, open-source system, enabling enterprises to easily deploy and run the complete architecture.
Scalability: OpenOps offers the flexibility to deploy on private clouds, data centers, or even a standard laptop. The system also removes the need for specialized hardware such as GPUs, broadening the number of developers who can explore self-hosted AI models.
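The "interchangeable AI backends" capability above is essentially a matter of having bots talk to one minimal completion interface, so a self-hosted model and a vendor-hosted API can be swapped without changing bot code. The class and method names below are assumptions for illustration, not OpenOps' actual interfaces:

```python
from abc import ABC, abstractmethod

# Sketch of interchangeable backends: bots depend only on the abstract
# interface, so implementations can be swapped freely.
class CompletionBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class SelfHostedBackend(CompletionBackend):
    """Stand-in for a local LLM (e.g. GPT4All); echoes for illustration."""
    def complete(self, prompt: str) -> str:
        return f"[self-hosted] {prompt}"

class VendorHostedBackend(CompletionBackend):
    """Stand-in for a hosted API such as OpenAI; echoes for illustration."""
    def complete(self, prompt: str) -> str:
        return f"[vendor] {prompt}"

def summarize_thread(messages: list, backend: CompletionBackend) -> str:
    """A bot task (Discussion Summarization) written against the interface."""
    prompt = "Summarize: " + " | ".join(messages)
    return backend.complete(prompt)
```

Because `summarize_thread` never names a concrete backend, the same bot can run against a laptop-hosted model during evaluation and a hyperscaled API in production.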
The OpenOps framework is currently experimental and can be downloaded from openops.mattermost.com.
Mattermost provides a secure, extensible hub for technical and operational teams that need to meet nation-state-level security and trust requirements. We serve technology, public sector, and national defense industries with customers ranging from tech giants to the U.S. Department of Defense to governmental agencies around the world.
Our self-hosted and cloud offerings provide a robust platform for technical communication across web, desktop, and mobile, supporting operational workflows, incident collaboration, integration with Dev/Sec/Ops and in-house toolchains, and connections to a broad range of unified communications platforms.
We run on an open-source platform vetted and deployed by the world’s most secure and mission-critical organizations. The platform is co-built with over 4,000 open-source project contributors who have provided over 30,000 code improvements toward our shared product vision, and it is translated into 20 languages.
Prnewswire | July 25, 2023
Eccentex, an industry-leading No-code / Low-code development platform and process automation software company, announced today the EAP availability of new Generative AI capabilities that will shape the future of Case Management.
"These new AI capabilities will fundamentally change the way companies manage customer-related cases and tasks today," said Tibor Vass, CMO of Eccentex.
Eccentex introduced 20+ new AI features as part of its new HyperAutomation Cloud platform and its recently trademarked HyperAutomation as a Service™ (HaaS) offering, which creates a new category in the Business Automation market.
With this new platform and innovative offering, Eccentex is moving from Low-Code development to a No-Code development method that further accelerates agility and widens the horizon of all Business Automation capabilities.
With the help of Eccentex AI Services, companies can create new business processes from simple text inputs, such as "Create a new customer onboarding process for a large size Retail Bank" >> [Invent], and ingest the auto-generated application model directly into the HyperAutomation Cloud engine without writing a single line of code or using a visual process editor.
Responding to a customer email – even one that is complex, contains multiple requests, and requires interaction history and contextual awareness – is now easier than ever. Eccentex customers can leverage the AI-powered Auto Responder feature to create a fully personalized answer to any email or text input, in any language, in only a few seconds. Back-office agents can handle more requests, answer more questions, and resolve complex issues faster and with fewer errors.
The platform-level Eccentex AI services can be embedded in many different use cases: case management, sentiment and intent analytics, knowledge management, interaction personalization, sensitive personal information masking, key data extraction, agent assistance, fraud prevention, and more.
Eccentex HyperAutomation Cloud is a platform that can be easily integrated with 3rd party applications or customer-owned / homegrown systems through platform-level open APIs. HyperAutomation Cloud is equipped with out-of-the-box, easily customizable adapters that can help speed up and simplify integrations.
Eccentex delivers cloud-based Business Automation capabilities to customers of all sizes across all industries for customer service modernization, journey orchestration and back-office automation. Eccentex's flexible and unified HyperAutomation Cloud platform empowers businesses to rapidly deploy, easily extend and effortlessly change business applications to meet their strategic goals and keep up with the ever-changing customer needs.
Throughout its history, Eccentex has delivered award-winning capabilities in Dynamic Case Management and Business Process Automation to help the world's leading brands and government entities to achieve breakthrough results in the short term. With Eccentex, businesses can achieve their Digital Transformation goals without sacrificing human centricity. Customers will enjoy the new automated process and self-service experiences while brand employees will be better motivated and freed up from boring, repetitive tasks.