Grapevine is a multi-tenant, real-time unified knowledge store that connects to your company’s data sources and makes them searchable through natural language queries and AI-powered exploration.

What is Grapevine?

Grapevine indexes content from various sources across your organization. Check out the Reference page for a full list of connectors and more details on each one.

How it works

  1. Connect - Integrate your data sources through secure OAuth connections and API keys
  2. Ingest - Using a combination of real-time webhook processing and periodic API syncs, Grapevine ingests your data into a unified knowledge base
  3. Index - Grapevine indexes your data into a number of underlying data structures that give agents rich context to work with
  4. Search - Build an agent using our Search Tools, or use our built-in agent

API Design

Grapevine takes its design cues from tools like Cline and Claude Code, which popularized the idea of Agentic Navigation. Grapevine exposes a number of small, fast tools that can be called in sequence to explore the knowledge space and gather the context needed to answer complex queries. Instead of focusing purely on code, we’ve designed Grapevine to help agents navigate all the unstructured data your organization produces: decisions made in meetings or Slack, technical pitfalls acknowledged in PR review, design directions that were rejected in tickets, and so on. To keep things simple for agents, our API mirrors the tools that applications like Claude Code and Cline use to navigate a filesystem.

Documents

A Document in Grapevine represents a unit of knowledge from your data sources. A Document is roughly equivalent to a file in a filesystem, with its exact contents depending on the source system.

Chunks

A Document can be broken down into one or more Chunks. The exact chunking algorithm is source-specific (e.g. for Slack, we chunk messages by thread, whereas for Notion we chunk by semantic sections). Each Chunk is embedded using OpenAI’s text-embedding-3-large model and stored in a vector database to power the semantic_search tool.
A full reference of the structure of each source’s Documents is in progress. In the meantime, try checking them via the API!
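
As a rough sketch of the flow described above (illustrative only, not Grapevine’s internal pipeline; the thread-grouping logic and the vector store write are stand-ins), chunking and embedding a Slack Document might look like:

from openai import OpenAI

client = OpenAI()

def chunk_slack_messages(messages):
    # Simplified stand-in for the real algorithm: one chunk per top-level
    # message, with any thread replies inlined after it.
    chunks = []
    for msg in messages:
        replies = "".join("\n> " + r["text"] for r in msg.get("replies", []))
        chunks.append(msg["text"] + replies)
    return chunks

def embed_chunks(chunks):
    # Each chunk is embedded with text-embedding-3-large, as described above.
    response = client.embeddings.create(model="text-embedding-3-large", input=chunks)
    return [item.embedding for item in response.data]

The resulting vectors are then stored in a vector database, where they back the semantic_search tool.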

Examples

  • SlackChannelDocument - Collection of messages from a channel for a specific date
    • ID format: slack:C12345:2025-01-15
    • Metadata: {channel_id, channel_name, date, message_count}
    • Chunks: One chunk per message with thread replies inlined
  • GitHubPRDocument - Pull request with comments and reviews
    • ID format: github_prs:owner/repo:123
    • Metadata: {repository, pr_number, author, state, merged_at}
    • Chunks: PR description, code diff, comments
  • LinearIssueDocument - Issue with comments and activity
    • ID format: linear:TEAM-123
    • Metadata: {team_name, issue_number, title, state, assignee}
    • Chunks: Issue description, comments
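
To make the shape of a Document concrete, here is a hypothetical get_document payload for the LinearIssueDocument above. Only the ID format and the metadata keys come from the list; every other field name and all of the example values are illustrative assumptions, so check the API for the real structure:

{
  "id": "linear:TEAM-123",
  "source": "linear",
  "metadata": {
    "team_name": "Platform",
    "issue_number": 123,
    "title": "Login fails after session timeout",
    "state": "In Progress",
    "assignee": "alice"
  },
  "chunks": [
    {"type": "description", "text": "Users are logged out and cannot log back in until..."},
    {"type": "comment", "text": "alice: looks like the session token is being invalidated twice..."}
  ]
}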

How the MCP Server Works

The MCP Server exposes tools that enable agentic exploration:

  1. Navigation Tools (Find relevant documents):
     • semantic_search - Conceptual similarity search using embeddings
     • keyword_search - Exact keyword matching with Boolean operators
  2. Fetching Tools (Retrieve full documents):
     • get_document - Retrieve full content by ID
     • get_document_metadata - Retrieve metadata only (faster)
  3. Agent Tool (End-to-end Q&A):
     • ask_agent / ask_agent_streaming - Wraps the full agentic loop
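
If you just want an answer without orchestrating the loop yourself, the Agent Tool can be called through any MCP client. Here is a minimal Python sketch, assuming an already-initialized MCP ClientSession connected to the Grapevine MCP Server; the tool name comes from the list above, but the "query" argument name is an assumption (inspect the real schema with session.list_tools()):

from mcp import ClientSession

async def ask_grapevine(session: ClientSession, question: str) -> str:
    # Call the end-to-end ask_agent tool; it runs the full agentic loop server-side.
    result = await session.call_tool("ask_agent", arguments={"query": question})
    # MCP tool results arrive as a list of content blocks; join the text parts.
    return "\n".join(block.text for block in result.content if hasattr(block, "text"))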

Example Agentic Navigation Loop

User: "What did Alice say about the login bug?"

Agent decides to call tools:

1. semantic_search(query="login bug", filters={sources: [SLACK, LINEAR]})
   → Returns: Slack messages + Linear issues mentioning login bugs

2. get_document(document_id="linear:TEAM-123")
   → Returns: Full Linear issue with detailed context

3. semantic_search(query="Alice login", filters={sources: [SLACK]})
   → Returns: Alice's Slack messages about login

Agent synthesizes answer with citations:
"Alice mentioned in #engineering on 2025-01-15 that the login bug
was caused by... [Link to Slack message] [Link to Linear issue]"
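
The same three steps can be expressed as MCP tool calls. This sketch reuses the session from the earlier example; the exact argument and filter schemas are assumptions, so inspect them with session.list_tools():

hits = await session.call_tool(
    "semantic_search",
    arguments={"query": "login bug", "filters": {"sources": ["SLACK", "LINEAR"]}},
)
issue = await session.call_tool(
    "get_document",
    arguments={"document_id": "linear:TEAM-123"},
)
alice_msgs = await session.call_tool(
    "semantic_search",
    arguments={"query": "Alice login", "filters": {"sources": ["SLACK"]}},
)
# An agent would feed each result back into the model, which decides whether
# to keep exploring or to synthesize the cited answer shown above.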

Why Agentic Navigation?

Agentic Navigation has only recently become possible thanks to advancements in tool calling and reasoning models. We believe this is the future of AI-powered search, and we’re excited to be at the forefront of it. As reasoning models get more powerful, we believe they’ll be able to navigate increasingly effectively using tools, provided those tools are well designed and available for everyone to use. Specifically, agents built on Grapevine can:
  • Explore iteratively - Make multiple tool calls to gather context
  • Self-direct - Decide which tools to call based on intermediate results
  • Curate their own context - Fetch full documents when needed, not just search results
  • Cite sources - Link every answer back to the original documents
This allows users to ask complex questions without knowing which data sources contain the answer or how to formulate precise queries - the agent handles the exploration.