AI Tech, General AI, Software

Mattermost Introduces “OpenOps” to Speed Responsible Evaluation of Generative AI Applied to Workflows

At the 2023 Collision Conference, Mattermost, Inc., the secure collaboration platform for technical teams, announced the launch of “OpenOps”, an open-source approach to accelerating the responsible evaluation of AI-enhanced workflows and usage policies while maintaining data control and avoiding vendor lock-in.

OpenOps emerges at the intersection of the race to leverage AI for competitive advantage and the urgent need to run trustworthy operations, including developing usage and oversight policies and ensuring regulatory and contractually obligated data controls.

It aims to help clear key bottlenecks between these critical concerns by enabling developers and organizations to self-host a “sandbox” environment with full data control to responsibly evaluate the benefits and risks of different AI models and usage policies on real-world, multi-user chat collaboration workflows.

The system can be used to evaluate self-hosted LLMs listed on Hugging Face, including Falcon LLM and GPT4All, when usage is optimized for data control, as well as hyperscaled, vendor-hosted models from the Azure AI platform, OpenAI ChatGPT and Anthropic Claude when usage is optimized for performance.

The first release of the OpenOps platform enables evaluation of a range of AI-augmented use cases including:

  • Automated Question and Answer: During collaborative or individual work, users can ask questions of generative AI models, either self-hosted or vendor-hosted, to learn about the subjects the model supports.
  • Discussion Summarization: AI-generated summaries can be created from self-hosted, chat-based discussions to accelerate information flows and decision-making while reducing the time and cost required for organizations to stay up-to-date.
  • Contextual Interrogation: Users can ask follow-up questions to thread summaries generated by AI bots to learn more about the underlying information without going into the raw data. For example, a discussion summary from an AI bot about a certain individual making a series of requests about troubleshooting issues could be interrogated via the AI bot for more context on why the individual made the requests and how they intended to use the information.
  • Sentiment Analysis: AI bots can analyze the sentiment of messages, which can be used to recommend and deliver emoji reactions on those messages on a user’s behalf. For example, after detecting a celebratory sentiment an AI bot may add a “fire” emoji reaction indicating excitement.
  • Reinforcement Learning from Human Feedback (RLHF) Collection: To help evaluate and train AI models, the system can collect feedback from users on responses from different prompts and models by recording the “thumbs up/thumbs down” signals end users select. The data can later be used both to fine-tune existing models and to evaluate alternate models against past user prompts.
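The RLHF collection step above amounts to storing a structured record per thumbs-up/thumbs-down click. A minimal sketch of what such a record and its export might look like (the field names and `record_feedback`/`export_jsonl` helpers are illustrative assumptions, not the actual OpenOps schema):

```python
from dataclasses import dataclass, asdict
import json
import time


@dataclass
class FeedbackRecord:
    """One thumbs-up/thumbs-down signal tied to a prompt/response pair."""
    user_id: str
    model: str       # e.g. "gpt4all" or a vendor-hosted model identifier
    prompt: str
    response: str
    signal: int      # +1 = thumbs up, -1 = thumbs down
    timestamp: float


def record_feedback(store: list, user_id: str, model: str,
                    prompt: str, response: str, thumbs_up: bool) -> FeedbackRecord:
    """Append one feedback signal to an in-memory store."""
    rec = FeedbackRecord(user_id, model, prompt, response,
                         1 if thumbs_up else -1, time.time())
    store.append(rec)
    return rec


def export_jsonl(store: list) -> str:
    """Serialize collected signals as JSON Lines for later fine-tuning
    or for replaying past prompts against an alternate model."""
    return "\n".join(json.dumps(asdict(r)) for r in store)
```

The JSONL export keeps the prompt/response pairs alongside the signal, which is the shape most fine-tuning and evaluation pipelines expect.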

This open source, self-hosted framework offers a "Customer-Controlled Operations and AI Architecture," providing an operational hub for coordination and automation with AI bots connected to interchangeable, self-hosted Generative AI and LLM backends from services like Hugging Face that can scale up to private cloud and data center architectures, as well as scale down to run on a developer’s laptop for research and exploration. At the same time, it can also connect to hyperscaled, vendor-hosted models from the Azure AI platform as well as OpenAI.
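The "interchangeable backends" idea above boils down to bots targeting a single completion interface, behind which a self-hosted or vendor-hosted model can be swapped. A minimal sketch under that assumption (the class and function names are hypothetical; real backends would call a local model server or a hosted API rather than echo):

```python
from typing import Protocol


class CompletionBackend(Protocol):
    """Minimal interface a chat bot could target, keeping self-hosted
    and vendor-hosted models interchangeable."""
    def complete(self, prompt: str) -> str: ...


class SelfHostedBackend:
    """Stand-in for a locally running model, e.g. GPT4All behind an HTTP API."""
    def complete(self, prompt: str) -> str:
        return f"[self-hosted] echo: {prompt}"


class VendorBackend:
    """Stand-in for a hosted API such as OpenAI or the Azure AI platform."""
    def complete(self, prompt: str) -> str:
        return f"[vendor] echo: {prompt}"


def summarize_thread(backend: CompletionBackend, messages: list) -> str:
    # The calling workflow never needs to know which backend is in use.
    prompt = "Summarize:\n" + "\n".join(messages)
    return backend.complete(prompt)
```

Because the workflow code depends only on the `complete` interface, moving a use case between a laptop-hosted model and a hyperscaled vendor API is a one-line configuration change rather than a rewrite.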

“Every organization is in a race to define how AI accelerates their competitive advantage,” says Mattermost CEO Ian Tien. “We created OpenOps to help organizations responsibly unlock their potential, with the ability to evaluate a broad range of usage policies and AI models, in concert, for their ability to accelerate in-house workflows.”

The OpenOps framework recommends a four-phase approach to developing AI augmentations:

1 - Self-Hosted Sandbox - Have technical teams set up a self-hosted “sandbox” environment as a safe space with data control and auditability to explore and demonstrate Generative AI technologies. The OpenOps sandbox can include just web-based multi-user chat collaboration, or be extended to include desktop and mobile applications, integrations from different in-house tools to simulate a production environment, as well as integration with other collaboration environments, such as specific Microsoft Teams channels.
2 - Data Control Framework - Technical teams conduct an initial evaluation of different AI models on in-house use cases and set a starting point for usage policies covering data control issues, based on whether models are self-hosted or vendor-hosted and, for vendor-hosted models, on their different data handling assurances. For example, data control policies could range from completely blocking vendor-hosted AIs, to blocking the suspected use of sensitive data such as credit card numbers or private keys, to custom policies encoded into the environment.
3 - Trust, Safety and Compliance Framework - Trust, safety and compliance teams are invited into the sandbox environment to observe and interact with initial AI-enhanced use cases and work with technical teams to develop usage and oversight policies in addition to data control. For example, setting guidelines on whether AI can be used to help managers write performance evaluations for their teams, or whether AI can be used to research techniques for developing malicious software.
4 - Pilot and Production - Once a baseline for usage policies and initial AI enhancements are available, a group of pilot users can be added to the sandbox environment to assess the benefits of the augmentations. Technical teams can iterate on adding workflow augmentations using different AI models while trust, safety and compliance teams monitor usage with full auditability and iterate on usage policies and their implementations. As the pilot system matures, the full set of enhancements can be deployed to production environments running a productionized version of the OpenOps framework.
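The data control policies described in phase 2 can be thought of as a gate each outbound prompt passes through before reaching a model. A minimal sketch, assuming regex-based detection (the two patterns and the `check_prompt` helper are illustrative only; a real deployment would rely on a vetted data-loss-prevention ruleset):

```python
import re

# Hypothetical patterns for sensitive content. Real policies would use a
# vetted DLP ruleset rather than these two illustrative regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}


def check_prompt(prompt: str, vendor_hosted: bool,
                 block_vendor: bool = False):
    """Return (allowed, reason). Mirrors the policy range in the text:
    from blocking vendor-hosted models entirely, to blocking suspected
    sensitive data, to custom rules added to SENSITIVE_PATTERNS."""
    if vendor_hosted and block_vendor:
        return False, "vendor-hosted models blocked by policy"
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            return False, f"suspected {name} in prompt"
    return True, "ok"
```

Because the check runs before any network call, a blocked prompt never leaves the self-hosted environment, and every decision can be logged for the auditability that phases 3 and 4 depend on.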

The OpenOps framework includes the following capabilities:

Self-Hosted Operational Hub: OpenOps allows for self-hosted operational workflows on a real-time messaging platform across web, mobile and desktop from the Mattermost open-source project. Integrations with in-house systems and popular developer tools help enrich AI backends with critical, contextual data, while workflow automation accelerates response times and reduces error rates and risk.

AI Bots with Interchangeable AI Backends: OpenOps enables AI bots to be integrated into operations while connected to an interchangeable array of AI platforms. For maximum data control, work with self-hosted, open-source LLMs such as GPT4All and Falcon LLM from services like Hugging Face. For maximum performance, tap into third-party AI frameworks, including OpenAI ChatGPT, the Azure AI Platform and Anthropic Claude.

Full Data Control: OpenOps enables organizations to self-host, control, and monitor all data, IP, and network traffic using their existing security and compliance infrastructure. This allows organizations to develop a rich corpus of real-world training data for future AI backend evaluation and fine-tuning.

Free and Open Source: Available under the MIT and Apache 2 licenses, OpenOps is a free, open-source system, enabling enterprises to easily deploy and run the complete architecture.

Scalability: OpenOps offers the flexibility to deploy on private clouds, data centers, or even a standard laptop. The system also removes the need for specialized hardware such as GPUs, broadening the number of developers who can explore self-hosted AI models.

The OpenOps framework is currently experimental and can be downloaded from openops.mattermost.com.

About Mattermost

Mattermost provides a secure, extensible hub for technical and operational teams that need to meet nation-state-level security and trust requirements. We serve technology, public sector, and national defense industries with customers ranging from tech giants to the U.S. Department of Defense to governmental agencies around the world.

Our self-hosted and cloud offerings provide a robust platform for technical communication across web, desktop and mobile supporting operational workflow, incident collaboration, integration with Dev/Sec/Ops and in-house toolchains and connecting with a broad range of unified communications platforms.

We run on an open-source platform vetted and deployed by the world’s most secure and mission-critical organizations, co-built with over 4,000 open-source project contributors who have provided over 30,000 code improvements toward our shared product vision, which is translated into 20 languages.
