Vilix
Get started

Set up Vilix in your AI tool.

Free for 3 days. Works with Claude, ChatGPT, Cursor, Perplexity — anything that speaks MCP.

Vilix uses OAuth.

You don't need to generate or paste any tokens. Paste the MCP URL into your AI tool and approve the connection; the setup is the same three steps in every tool.

1

Copy the Vilix MCP URL

The same URL works for every AI tool that supports MCP.

MCP URL
https://api.getvilix.com/mcp/sse
2

Add Vilix to your AI tool

Each tool follows the same flow: paste the URL, approve OAuth, done. No tokens, and no config files except for Cursor.

Claude (Desktop or claude.ai)
  1. Settings → Connectors → Add custom connector
  2. Paste the MCP URL above
  3. Approve the OAuth prompt — done
ChatGPT
  1. Settings → Connectors → Add MCP server
  2. Paste the MCP URL above
  3. Approve OAuth, then enable Vilix inside the chat — done
Cursor
  1. Open ~/.cursor/mcp.json (or the project-level .cursor/mcp.json) and add the snippet below
  2. Restart Cursor; it will trigger the OAuth flow automatically
  3. Sign in and approve — Cursor stores the credentials internally, no manual token
cursor mcp.json
{
  "mcpServers": {
    "vilix": {
      "url": "https://api.getvilix.com/mcp/sse"
    }
  }
}
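If you already have other MCP servers in `mcp.json`, add the `vilix` entry alongside them rather than replacing the file. A minimal sketch of the merge (the existing `other` server is illustrative, not part of Vilix):

```python
import json

# Illustrative existing config with another MCP server already present.
existing = {"mcpServers": {"other": {"url": "https://example.com/mcp"}}}

# Add Vilix without clobbering the other entries.
existing["mcpServers"]["vilix"] = {"url": "https://api.getvilix.com/mcp/sse"}

print(json.dumps(existing, indent=2))
```

The resulting object keeps both servers; Cursor picks up all entries under `mcpServers` on restart.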
Perplexity
  1. Settings → Connectors → Add MCP server
  2. Paste the MCP URL above
  3. Approve OAuth — done
3

Paste this system prompt

Add this to your AI's system prompt / custom instructions so it actually uses Vilix instead of just listing the tools.

System prompt
I want you to use Vilix MCP as the mandatory memory layer for every message I send unless I explicitly ask for stateless mode.

For EVERY message I send, always follow this exact order:

1. Call:
   get_context(user_prompt=<my latest message>)

2. Use the returned context:
   - recent_messages
   - user_memories
   - related_conversations
   - instruction

3. Generate your final response internally.
   Do NOT send it yet.

4. Before replying to me, call:
   save_turn(
     user_message=<my latest message>,
     assistant_message=<exact final response>,
     source=<agent/app name>,
     chat_id=<existing chat_id if available>
   )

5. Return the exact same saved response to me.

STRICT RULES

- Always follow: get_context → generate_answer → save_turn → return_answer
- Apply this to EVERY message I send.
- Never skip memory steps based on importance.
- Never answer before get_context.
- Never return an answer before save_turn.
- The saved response MUST exactly match the final returned response.
- Never fake or claim a successful save if it failed.
- Reuse chat_id for the same conversation.
- Do not use lower-level memory tools if get_context/save_turn exist.

If Vilix/MCP tools are unavailable or broken:
- Do NOT silently continue.
- Tell me the memory tools are unavailable.
- Ask whether I want to continue without memory.
- Be explicit that continuity and persistence may not work.
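The contract the prompt enforces can be sketched as a simple turn loop. This is not Vilix's implementation; `get_context` and `save_turn` here are hypothetical stubs standing in for the real MCP tool calls, and `generate_answer` stands in for the model:

```python
# Sketch of the per-turn contract: fetch memory first, generate,
# persist the turn, and only then return the saved answer.

def get_context(user_prompt):
    # Stub for the real MCP call, which returns recent_messages,
    # user_memories, related_conversations, and an instruction.
    return {"recent_messages": [], "user_memories": [],
            "related_conversations": [], "instruction": ""}

def save_turn(user_message, assistant_message, source, chat_id=None):
    # Stub for the real MCP call; returns the chat_id to reuse
    # for subsequent turns of the same conversation.
    return chat_id or "chat-123"

def handle_turn(user_message, generate_answer, chat_id=None):
    context = get_context(user_prompt=user_message)   # 1. memory before anything
    answer = generate_answer(user_message, context)   # 2-3. generate, hold reply
    chat_id = save_turn(user_message, answer,         # 4. persist before replying
                        source="example-agent", chat_id=chat_id)
    return answer, chat_id                            # 5. return the saved answer

reply, chat_id = handle_turn("hello", lambda msg, ctx: f"echo: {msg}")
```

Reusing the returned `chat_id` on the next call is what keeps a multi-turn conversation grouped together, per the "Reuse chat_id" rule above.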
4

You’re done

Start a new conversation in your AI tool. It should call get_context at the start of each turn and save_turn before replying. Switch tools mid-thought — your context comes with you.

Want a dashboard view of what's been saved? Sign in at app.getvilix.com.

Stuck? Check the docs or email support@getvilix.com for setup, ChatGPT / Claude / Cursor connection issues, or billing and account help.