AI In Incident Management: A Privacy-Centric Approach

JJ Tang is the CEO and Co-founder of Rootly, an enterprise-grade incident management platform.


Imagine an alien, fluent in your language, quietly joining your family discussions during a conflict. Its presence, though unobtrusive, is keenly felt as it learns from the intimate details being shared. This analogy reflects the current landscape of large language models (LLMs) like OpenAI’s GPT, which understand and process human language, offering useful insights in an accessible format. Although LLMs are transforming our work environment by reducing daily cognitive loads, their deployment demands careful consideration of privacy and the necessity of an opt-out mechanism.

My team at Rootly embraced this challenge head-on when integrating AI capabilities into our Slack-based incident management solution. Given our product’s role in customer infrastructure—managing outages, performance issues and bugs—privacy and compliance were our primary design considerations. This piece aims to convey the lessons learned from developing an AI solution with user privacy in mind.

The Enterprise Privacy Challenge
Privacy concerns aren’t new; they’ve evolved from fears of totalitarian surveillance in the 20th century to the modern threats social media poses to global democracies. And these concerns go beyond the fear of exposure: the mere knowledge of being observed can fundamentally change behavior.

This phenomenon isn’t limited to personal life. In the corporate world, attempts to track and improve developer productivity often lead to unintended consequences. Developers, aware of the metrics being used to judge their performance, may game the system to their advantage, undermining the very goal of the assessment.

Privacy extends into the realm of data management, encompassing how data is stored, processed and shared, with broad implications for legal compliance, security and corporate governance. When incorporating AI technologies, it’s imperative that their privacy standards align with your organizational policies.

AI, Incident Management And Privacy
The impact of AI in incident management can be huge. AI can consolidate diverse data streams—the team’s findings, error traces or cluster metrics—enabling people to quickly understand and address issues. AI’s capacity to digest complex datasets and offer actionable insights, suggest resolutions and even draft post-incident retrospectives is opening a new horizon in the industry.

Yet, the sensitive nature of such data necessitates a cautious approach. Some incidents are so sensitive that you classify them as private, so that most of your own engineers don’t even know they exist. The integration of AI into these processes thus poses unique privacy challenges: employing AI models involves sharing data with external parties for processing, raising valid concerns about data privacy and control.

Committing To Privacy By Design
Acknowledging that privacy requirements vary widely across industries and regulatory environments, it’s essential to design an AI solution that makes customers confident about how their data is used. Based on feedback from users and industry research, we built a privacy framework that lets customers determine which incidents and specific Slack messages are subjected to AI analysis.
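In practice, such a framework boils down to an explicit, per-incident opt-in check before anything reaches an AI pipeline. The sketch below is illustrative only; the field names and structure are assumptions for this example, not Rootly's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Message:
    channel: str
    text: str
    ai_opt_in: bool  # set per incident channel by the customer, default off

def messages_for_ai(messages):
    """Return only the messages the customer has explicitly opted in to AI analysis."""
    return [m for m in messages if m.ai_opt_in]

msgs = [
    Message("inc-123", "DB latency spiking on us-east", True),
    Message("inc-999-private", "Legal is involved, keep quiet", False),
]
print([m.text for m in messages_for_ai(msgs)])
```

The key design choice is that exclusion is the default: a message is only eligible for AI analysis when the customer has affirmatively flagged it.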

Additionally, given that incidents happen under pressure, there are inherent risks associated with handling sensitive production data. Thus, as a safety net, it’s important to scrub personally identifiable information (PII), service secrets and other sensitive details before data is sent for AI processing.

Navigating The Shared Knowledge Landscape
LLMs build their knowledge from vast training datasets and can refine their responses based on user input. Although improving an AI model’s performance through data and feedback is beneficial, there’s a legitimate concern about inadvertently aiding competitors with your proprietary data.

To address this, you’ll want to partner with your LLM provider to ensure that customer data is neither stored nor utilized for training. For additional control and peace of mind, you can also consider letting customers connect their own LLM accounts, such as an OpenAI account they manage directly.
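A bring-your-own-account setup usually reduces to a credential-resolution step: if a tenant has configured their own key, it takes precedence over the platform's. The sketch below is a hypothetical illustration; the config field names and environment variable are assumptions, not any vendor's actual schema.

```python
import os

def resolve_llm_credentials(customer_config: dict) -> dict:
    """Prefer a customer-supplied LLM key (bring-your-own account) over the
    platform default, so the customer's data stays under their own agreement
    with the LLM provider.

    `customer_config` is a hypothetical per-tenant settings dict.
    """
    if customer_config.get("own_llm_api_key"):
        return {
            "api_key": customer_config["own_llm_api_key"],
            "billed_to": "customer",
        }
    return {
        "api_key": os.environ.get("PLATFORM_OPENAI_KEY", ""),
        "billed_to": "platform",
    }

print(resolve_llm_credentials({"own_llm_api_key": "sk-cust-example"})["billed_to"])
```

Beyond billing, the practical benefit is contractual: requests made with the customer's own key fall under the customer's own data-processing agreement with the LLM provider.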

Emphasizing User Control And Choice
The metaphor of the alien observer underscores the utility and potential discomfort of AI in personal spaces. Similarly, in the professional realm, the ability to opt in or out of AI features at will is crucial for maintaining operational flexibility without compromising privacy.

Incorporating AI into your operational toolkit shouldn’t come at the expense of privacy. By choosing partners that prioritize these values, organizations can leverage AI to enhance their incident management processes while safeguarding the privacy and trust of their users.

Author: JJ Tang, Forbes Councils Member
Source: Forbes – Innovation
