Install Jintel in your AI tools

Copy-paste install instructions for Claude, Cursor, ChatGPT, Gemini, and any OpenAI-compatible tool-calling LLM.

Jintel ships two MCP surfaces:

  • Local stdio: @yojinhq/jintel-mcp runs in your client's process. Lowest latency, no network hop.
  • Remote (Streamable HTTP): https://jintel.ai/mcp runs on Jintel's infra. No install step; works in clients that don't ship an npm runtime.

Pick whichever your client supports. Both speak the same tool catalog and accept the same jk_live_… API key.
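MCP is JSON-RPC under the hood, so both transports carry identical messages; only the framing differs (newline-delimited JSON on stdio vs. HTTP POST bodies). As a sketch, the request a client sends to enumerate the catalog looks like this (method name from the MCP spec):

```python
import json

# The same tools/list request works over stdio and Streamable HTTP.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
print(json.dumps(list_tools))
```

Your client sends this for you; it's shown only to make clear that switching transports never changes which tools you get.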

Claude Desktop

Local stdio (recommended on desktop):

{
  "mcpServers": {
    "jintel": {
      "command": "npx",
      "args": ["-y", "@yojinhq/jintel-mcp"],
      "env": { "JINTEL_API_KEY": "jk_live_your_key_here" }
    }
  }
}

Or remote (Streamable HTTP):

{
  "mcpServers": {
    "jintel": {
      "transport": "streamable-http",
      "url": "https://jintel.ai/mcp",
      "headers": { "Authorization": "Bearer jk_live_your_key_here" }
    }
  }
}

Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows), then restart Claude Desktop.
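A stray comma or missing brace in that file can keep the server from loading with no visible error, so it's worth validating the JSON before restarting. One way, sketched with Python's standard library:

```python
import json

# Paste your edited config between the triple quotes; json.loads raises
# JSONDecodeError on any syntax error, naming the line and column.
config = json.loads("""
{
  "mcpServers": {
    "jintel": {
      "command": "npx",
      "args": ["-y", "@yojinhq/jintel-mcp"],
      "env": { "JINTEL_API_KEY": "jk_live_your_key_here" }
    }
  }
}
""")
print("servers:", list(config["mcpServers"]))
```

Equivalently, run `python3 -m json.tool <path-to-config>` from a shell.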

Claude Code

Install the full plugin (MCP server + skill + slash commands):

claude plugin marketplace add YojinHQ/jintel-sdk
claude plugin install jintel@YojinHQ
export JINTEL_API_KEY=jk_live_your_key_here

Local stdio MCP only:

claude mcp add jintel -- npx -y @yojinhq/jintel-mcp

Remote MCP:

claude mcp add jintel --transport http https://jintel.ai/mcp \
  --header "Authorization: Bearer jk_live_your_key_here"

Cursor

Add to ~/.cursor/mcp.json (global) or <project>/.cursor/mcp.json (per-project):

{
  "mcpServers": {
    "jintel": {
      "command": "npx",
      "args": ["-y", "@yojinhq/jintel-mcp"],
      "env": { "JINTEL_API_KEY": "jk_live_your_key_here" }
    }
  }
}

Restart Cursor; the Jintel tools appear in the agent's tool picker.

ChatGPT (Custom GPT)

Custom GPTs ingest a public OpenAPI spec — paste this URL into the Actions panel:

https://jintel.ai/openapi.json

Authentication: API Key, type Bearer, value jk_live_your_key_here. The spec already declares the bearerAuth scheme, so the GPT builder picks it up automatically.
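For reference, the scheme the builder detects is the standard OpenAPI `http`/`bearer` security scheme; a minimal sketch of the relevant fragment (illustrative, not copied verbatim from Jintel's spec):

```json
{
  "components": {
    "securitySchemes": {
      "bearerAuth": { "type": "http", "scheme": "bearer" }
    }
  },
  "security": [{ "bearerAuth": [] }]
}
```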

Gemini

Gemini Extensions / Gems submission flow changes frequently. Point the extension at the same OpenAPI URL (https://jintel.ai/openapi.json) and follow Google's current submission instructions: see ai.google.dev for the live process.

Continue

Continue (the open-source coding assistant for VS Code and JetBrains) supports MCP servers. Add to ~/.continue/config.yaml:

mcpServers:
  - name: jintel
    command: npx
    args: ["-y", "@yojinhq/jintel-mcp"]
    env:
      JINTEL_API_KEY: jk_live_your_key_here

Or remote Streamable HTTP:

mcpServers:
  - name: jintel
    type: streamableHttp
    url: https://jintel.ai/mcp
    headers:
      Authorization: Bearer jk_live_your_key_here

Cline

Cline (autonomous coding agent in VS Code) reads MCP servers from ~/Library/Application Support/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json (macOS):

{
  "mcpServers": {
    "jintel": {
      "command": "npx",
      "args": ["-y", "@yojinhq/jintel-mcp"],
      "env": { "JINTEL_API_KEY": "jk_live_your_key_here" }
    }
  }
}

Zed

Zed's assistants panel speaks MCP. Add to ~/.config/zed/settings.json:

{
  "context_servers": {
    "jintel": {
      "command": {
        "path": "npx",
        "args": ["-y", "@yojinhq/jintel-mcp"],
        "env": { "JINTEL_API_KEY": "jk_live_your_key_here" }
      }
    }
  }
}

Windsurf

Windsurf (Codeium's agentic IDE) supports MCP via the same Claude-Desktop-style config. Edit ~/.codeium/windsurf/mcp_config.json:

{
  "mcpServers": {
    "jintel": {
      "command": "npx",
      "args": ["-y", "@yojinhq/jintel-mcp"],
      "env": { "JINTEL_API_KEY": "jk_live_your_key_here" }
    }
  }
}

Raycast AI

Raycast's MCP integration accepts the same Claude-Desktop config shape under Extensions → AI → MCP Servers:

{
  "jintel": {
    "command": "npx",
    "args": ["-y", "@yojinhq/jintel-mcp"],
    "env": { "JINTEL_API_KEY": "jk_live_your_key_here" }
  }
}

OpenAI Responses API

The Responses API can attach a remote MCP server as a hosted tool. Pass Jintel's Streamable HTTP endpoint when constructing the response:

import OpenAI from 'openai';
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const res = await openai.responses.create({
  model: 'gpt-4.1',
  tools: [{ type: 'mcp', server_label: 'jintel', server_url: 'https://jintel.ai/mcp', authorization: 'Bearer jk_live_your_key_here' }],
  input: 'Latest quote and 30-day RSI for NVDA',
});

LangChain / LlamaIndex

Both frameworks ship MCP client adapters that consume the same Streamable HTTP endpoint:

# LangChain
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "jintel": {
        "transport": "streamable_http",
        "url": "https://jintel.ai/mcp",
        "headers": {"Authorization": "Bearer jk_live_your_key_here"},
    },
})
tools = await client.get_tools()

# LlamaIndex
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

# Header-based auth support varies by llama-index-tools-mcp version;
# check its docs for how to pass the Authorization header.
client = BasicMCPClient("https://jintel.ai/mcp")
tools = McpToolSpec(client=client).to_tool_list()

Generic OpenAI-compatible tool-calling LLM

Any LLM that consumes OpenAPI 3.1 specs can call Jintel directly:

Spec: https://jintel.ai/openapi.json
Auth: Authorization: Bearer jk_live_your_key_here
Tools: POST https://jintel.ai/tools/<name> (typed JSON body per tool)
GraphQL: POST https://jintel.ai/api/graphql (free-form fan-out)
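The typed-tool endpoint is plain HTTPS, so no SDK is required. A minimal sketch with Python's standard library; the tool name `quote` and the `{"symbol": ...}` body are hypothetical stand-ins, the real names and schemas live in openapi.json:

```python
import json
import urllib.request

# Hypothetical tool name and body shape; see openapi.json for the real catalog.
req = urllib.request.Request(
    "https://jintel.ai/tools/quote",
    data=json.dumps({"symbol": "NVDA"}).encode(),
    headers={
        "Authorization": "Bearer jk_live_your_key_here",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it and return the JSON response.
```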

Skip the API key entirely by paying per-query in USDC on Base (x402): drop the Authorization header and make the call; the server returns a 402 with a payment quote. Sign the quote and repeat the call with a PAYMENT-SIGNATURE header to settle it.