Enhanced productivity without compromising data privacy
Technology | IT-Security | Natural Language Processing
Slack built secure, private AI features for summarizing conversations and answering questions inside the platform.
Slack’s AI can summarize chats and answer questions without peeking into data it shouldn’t.
Slack AI uses the existing access control lists (ACLs) that govern what each user can see in Slack. When a user requests an AI-powered summary or search, the system first checks which channels, messages, and files that user is permitted to access. The AI can only retrieve and process content from those sources; it cannot access private channels, DMs, or files that the user is not a member of or does not have permission to view.
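As a rough sketch of what such a permission gate could look like, the snippet below (names and data structures are illustrative, not Slack's actual internals) filters retrieval candidates against a user's channel memberships before anything is passed to a model:

```python
from dataclasses import dataclass

@dataclass
class Message:
    channel_id: str
    text: str

# Illustrative in-memory ACL: channel ID -> set of member user IDs.
CHANNEL_MEMBERS = {
    "C_GENERAL": {"U_ALICE", "U_BOB"},
    "C_PRIVATE_FINANCE": {"U_BOB"},
}

def accessible_channels(user_id: str) -> set[str]:
    """Channels this user may read, per the same ACLs the client enforces."""
    return {cid for cid, members in CHANNEL_MEMBERS.items() if user_id in members}

def permission_scoped(user_id: str, candidates: list[Message]) -> list[Message]:
    """Drop anything the user could not already open in Slack; private
    channels and DMs they are not a member of never reach the model."""
    allowed = accessible_channels(user_id)
    return [m for m in candidates if m.channel_id in allowed]

candidates = [
    Message("C_GENERAL", "Launch is moved to Thursday."),
    Message("C_PRIVATE_FINANCE", "Draft Q3 numbers attached."),
]
# Alice is not in the private finance channel, so that message is filtered out.
print([m.text for m in permission_scoped("U_ALICE", candidates)])
```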
This is possible because the platform was architected from day one around strict data-handling principles: customer data must never leave Slack’s trust boundary; the models must not be trained on customer data; and the AI should only operate on content each user already has permission to see.
To fulfil the first principle, Slack deploys closed-source large language models (LLMs) inside a virtual private cloud (VPC) it controls on Amazon Web Services, so that external model providers never ingest or retain customer data.
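One plausible shape for this setup, sketched below under the assumption that the closed model sits behind an inference endpoint inside the company-controlled AWS VPC (the endpoint name and payload shape are invented for illustration): the application calls its own endpoint rather than a third-party model API, so prompts and responses stay inside that boundary.

```python
import json
import boto3

# Hypothetical endpoint for a closed-source LLM hosted inside the VPC;
# requests and responses never leave the company-controlled AWS boundary.
ENDPOINT_NAME = "llm-inference-endpoint"  # illustrative, not a real Slack endpoint

runtime = boto3.client("sagemaker-runtime")

def generate(prompt: str, max_new_tokens: int = 512) -> str:
    """Invoke the self-hosted model; the JSON payload shape is an assumption."""
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"inputs": prompt,
                         "parameters": {"max_new_tokens": max_new_tokens}}),
    )
    return json.loads(response["Body"].read())["generated_text"]
```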
For the second principle, Slack uses off-the-shelf models rather than fine-tuning them on customer content, relying instead on Retrieval Augmented Generation (RAG): when a user asks for a summary or a search answer, Slack fetches only the permission-scoped messages that user can already see, feeds that context to the LLM at inference time, and never stores or reuses the data for model training.
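Continuing the illustrative helpers from the sketches above (not Slack's actual code), the inference-time flow might look like this: only permission-scoped messages go into the prompt as context, the model is called statelessly, and nothing is written back for training.

```python
def answer_with_rag(user_id: str, question: str, candidates: list[Message]) -> str:
    """Permission-scoped Retrieval Augmented Generation (illustrative):
    1) keep only messages the user could already read,
    2) place them in the prompt as context at inference time,
    3) return the answer; nothing is stored or reused for model training."""
    context = "\n".join(f"- {m.text}" for m in permission_scoped(user_id, candidates))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)  # stateless call to the VPC-hosted model above
```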
On the permissions side, Slack ensures that AI outputs reflect only what the user could already view through standard search, so the AI never surfaces content from private channels the user cannot access. The same access-control logic that powers Slack’s search and channels is applied when fetching data for AI prompts.
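One way to picture that design choice, again using the hypothetical helpers from the earlier sketches: the search path and the AI retrieval path both delegate to the same permission filter, so there is a single source of truth for what a given user may see.

```python
def _visible_matches(user_id: str, query: str, candidates: list[Message]) -> list[Message]:
    """Single source of truth: match on the query, then apply the ACL filter."""
    matches = [m for m in candidates if query.lower() in m.text.lower()]
    return permission_scoped(user_id, matches)

def search_results(user_id: str, query: str, candidates: list[Message]) -> list[Message]:
    """Standard search path."""
    return _visible_matches(user_id, query, candidates)

def ai_prompt_context(user_id: str, query: str, candidates: list[Message]) -> list[Message]:
    """AI retrieval path: reuses the exact filter search uses, so the model
    is never prompted with content the user's own search could not return."""
    return _visible_matches(user_id, query, candidates)
```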
Operationally, Slack reports that pilot customers saved an average of roughly 97 minutes per user per week using Slack AI’s summarization and search features.
Key insight: the success of this approach stems from aligning AI capability with enterprise-grade security and privacy controls. By enforcing data locality, using stateless prompt-based inference (RAG) instead of model fine-tuning, and sticking to existing access permission boundaries, Slack builds trust and usability simultaneously.
Why It Worked: The tight integration of AI within the product workflow, combined with rigorous data governance, allowed Slack to deliver value (faster answers, summaries) without triggering major enterprise objections around data risk.
It’s like having a super-helpful assistant who remembers everything you said, but locks it all in a vault only you can open.
Rating: 4/5
Secure, privacy-preserving AI summarization is technically advanced and meets strict enterprise compliance needs.
Timeline: 10 months
Cost: $1,350,000
Headcount: 8