ALPHAGENT Docs v2.0

Responses API

The Responses API is fully compatible with OpenAI SDKs. Change only the base_url and api_key to integrate Alphagent into your existing OpenAI workflow.

OpenAI Compatible

Drop-in replacement for OpenAI SDKs. Same interface, different endpoint.

Multi-Agent Models

Access specialized financial agents with deep research capabilities.

Streaming Support

Real-time SSE streaming with tool call visibility.

# Endpoint

POST https://api.alphagent.co/v1/responses

# Quick Start

Install the OpenAI SDK and configure it to use the Alphagent API endpoint.

bash
$ pip install openai
Python
from openai import OpenAI

# Initialize the client with Alphagent endpoint
client = OpenAI(
    api_key="your-alphagent-api-key",
    base_url="https://api.alphagent.co/v1"
)

# Make a request
response = client.responses.create(
    model="alphagent-smart",
    input="What is AAPL trading at?"
)

print(response.output_text)

# Available Models

Choose the model that best fits your use case. Single-agent models are faster, while deep research models provide comprehensive analysis.

| Model | Description | Type |
|---|---|---|
| alphagent-fast | Quick responses using Gemini Flash | Single-agent |
| alphagent-smart | Balanced performance with Gemini Pro | Single-agent |
| alphagent-pro | High reasoning effort with Gemini Pro | Single-agent |
| alphagent-deep-research-fast | Multi-agent with Gemini Flash | 6-agent |
| alphagent-deep-research-pro | Multi-agent with high reasoning | 6-agent |

Multi-Agent Architecture

Deep research models use a 6-agent architecture with specialized agents for market data, fundamentals, options/risk, research, and verification. Multi-domain queries are processed in parallel for comprehensive analysis.
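The fan-out happens server-side, but the parallel pattern it describes can be sketched client-side with `concurrent.futures`. The domain names and sub-queries below are illustrative, not Alphagent's actual internal agents:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical domain-specific sub-queries; the real split is done server-side.
SUB_QUERIES = {
    "market_data": "Current price and volume for AAPL",
    "fundamentals": "AAPL revenue and margin trends",
    "options_risk": "AAPL implied volatility summary",
}

def run_agent(domain, query):
    # Placeholder for a per-domain model call (e.g. client.responses.create).
    return f"[{domain}] answer to: {query}"

def deep_research(sub_queries):
    # Fan out one task per domain, then gather all results for synthesis.
    with ThreadPoolExecutor(max_workers=len(sub_queries)) as pool:
        futures = {d: pool.submit(run_agent, d, q) for d, q in sub_queries.items()}
        return {d: f.result() for d, f in futures.items()}

results = deep_research(SUB_QUERIES)
```

Each domain runs concurrently, so total latency is bounded by the slowest agent rather than the sum of all of them.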

# Streaming Responses

Enable streaming to receive responses in real-time as they are generated.

Python
from openai import OpenAI

client = OpenAI(
    api_key="your-alphagent-api-key",
    base_url="https://api.alphagent.co/v1"
)

# Stream the response
stream = client.responses.create(
    model="alphagent-smart",
    input="Analyze TSLA fundamentals",
    stream=True
)

for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta, end="", flush=True)
    elif event.type == "response.completed":
        print("\nDone!")

# Conversations

Group related requests into conversations by providing a conversation_id. This enables history tracking and retrieval.

Python
import uuid
from openai import OpenAI

client = OpenAI(
    api_key="your-alphagent-api-key",
    base_url="https://api.alphagent.co/v1"
)

# Start a new conversation
conversation_id = str(uuid.uuid4())

# First message
response = client.responses.create(
    model="alphagent-smart",
    input="What is AAPL trading at?",
    extra_body={"conversation_id": conversation_id}
)

# Follow-up in same conversation
response = client.responses.create(
    model="alphagent-smart",
    input="What about its P/E ratio?",
    extra_body={"conversation_id": conversation_id}
)

# Code Execution

Enable server-side code execution for complex calculations, data analysis, and visualizations. Code runs in a secure sandboxed environment with Python 3.11, pandas, numpy, and matplotlib.

Python
from openai import OpenAI

client = OpenAI(
    api_key="your-alphagent-api-key",
    base_url="https://api.alphagent.co/v1"
)

# Enable code execution
response = client.responses.create(
    model="alphagent-smart",
    input="Calculate correlation between AAPL and MSFT returns",
    tools=[{"type": "code_execution"}]
)

# Get container_id for reuse (may be absent if metadata is missing)
container_id = None
if response.metadata:
    container_id = response.metadata.get("container_id")
    outputs = response.metadata.get("code_execution_outputs", [])

# Reuse container in follow-up
response = client.responses.create(
    model="alphagent-smart",
    input="Now plot the results",
    tools=[{"type": "code_execution"}],
    extra_body={"container_id": container_id}
)

Request Parameters

| Parameter | Type | Description |
|---|---|---|
| model | string | Required. Model ID to use. |
| input | string \| array | Required. The input prompt or conversation messages. |
| stream | boolean | Enable SSE streaming. Default: false |
| conversation_id | string | UUID v4 to group requests into a conversation. |
| instructions | string | Custom system prompt that replaces the default. |
| tools | array | Tools available for the model to use. |
| max_output_tokens | integer | Maximum tokens in the response. |
| temperature | number | Sampling temperature (0-2). |
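These parameters combine into a plain JSON body. A minimal sketch of building one locally, leaving out unset optional fields (no request is sent; the helper name is illustrative, the parameter names come from the table above):

```python
import json
import uuid

def build_request(model, input, *, stream=False, conversation_id=None,
                  instructions=None, tools=None, max_output_tokens=None,
                  temperature=None):
    # Always include the required fields, then only the options actually set.
    body = {"model": model, "input": input, "stream": stream}
    optional = {
        "conversation_id": conversation_id,
        "instructions": instructions,
        "tools": tools,
        "max_output_tokens": max_output_tokens,
        "temperature": temperature,
    }
    body.update({k: v for k, v in optional.items() if v is not None})
    return body

payload = build_request(
    "alphagent-smart",
    "What is AAPL trading at?",
    conversation_id=str(uuid.uuid4()),
    temperature=0.2,
)
print(json.dumps(payload, indent=2))
```

Omitting unset parameters keeps the body small and lets the server apply its own defaults.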

# Response Format

The response follows the OpenAI Responses API format with additional metadata for verification results.

JSON Response
{
  "id": "resp_abc123",
  "object": "response",
  "model": "alphagent-smart",
  "status": "completed",
  "created_at": 1736360000,
  "output": [
    {
      "id": "msg_1",
      "type": "message",
      "role": "assistant",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "text": "AAPL is trading at $187.12.",
          "annotations": []
        }
      ]
    }
  ],
  "output_text": "AAPL is trading at $187.12.",
  "usage": {
    "input_tokens": 18,
    "output_tokens": 26,
    "total_tokens": 44
  }
}
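When the API is consumed as raw JSON (e.g. via an HTTP client instead of the SDK), the fields shown above can be read directly. A sketch using an abbreviated copy of the sample body, preferring the `output_text` convenience field and falling back to the nested message content:

```python
import json

# Abbreviated version of the sample response body above.
sample = """{
  "id": "resp_abc123",
  "object": "response",
  "model": "alphagent-smart",
  "status": "completed",
  "output_text": "AAPL is trading at $187.12.",
  "usage": {"input_tokens": 18, "output_tokens": 26, "total_tokens": 44}
}"""

resp = json.loads(sample)

# Prefer the top-level convenience field; otherwise join the message parts.
text = resp.get("output_text") or "".join(
    part["text"]
    for item in resp.get("output", [])
    if item.get("type") == "message"
    for part in item.get("content", [])
    if part.get("type") == "output_text"
)
total = resp["usage"]["total_tokens"]
```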

# Claim Verification

Deep research models include a Verifier agent that checks claims against tool outputs. The verification report is included in the response metadata.

Python
# Use a deep research model for verification
response = client.responses.create(
    model="alphagent-deep-research-pro",
    input="Analyze AAPL's current valuation"
)

# Access verification report
if response.metadata:
    verification = response.metadata.get("verification", {})
    print("Confidence:", verification.get("confidence"))
    print("Verified claims:", verification.get("verified_claims"))
    print("Flagged claims:", verification.get("flagged_claims"))

Verification Report Fields

  • verified_claims - Claims matched to supporting tool output
  • flagged_claims - Claims with issues (contradictions or unsourced)
  • confidence - Overall confidence: "high", "medium", or "low"
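One way to act on these fields is a simple acceptance gate. The thresholding policy below is an example, not part of the API; the report dict mirrors the fields listed above:

```python
def accept_response(verification, min_confidence="medium"):
    # Rank the reported confidence and reject anything with flagged claims.
    order = {"low": 0, "medium": 1, "high": 2}
    confidence = verification.get("confidence", "low")
    flagged = verification.get("flagged_claims", [])
    return order.get(confidence, 0) >= order[min_confidence] and not flagged

# Example report shaped like the fields above (values are illustrative).
report = {
    "verified_claims": ["AAPL P/E sourced from fundamentals tool output"],
    "flagged_claims": [],
    "confidence": "high",
}
```

An application might surface low-confidence or flagged responses to a human reviewer instead of rejecting them outright.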

# TypeScript / Node.js

The same OpenAI SDK compatibility applies to the Node.js/TypeScript SDK.

bash
$ npm install openai
TypeScript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-alphagent-api-key',
  baseURL: 'https://api.alphagent.co/v1',
});

async function main() {
  // Non-streaming request
  const response = await client.responses.create({
    model: 'alphagent-smart',
    input: 'What is AAPL trading at?',
  });

  console.log(response.output_text);

  // Streaming request
  const stream = await client.responses.create({
    model: 'alphagent-smart',
    input: 'Analyze TSLA fundamentals',
    stream: true,
  });

  for await (const event of stream) {
    if (event.type === 'response.output_text.delta') {
      process.stdout.write(event.delta);
    }
  }
}

main();

API Key Security

Never commit your API keys to version control. Use environment variables like ALPHAGENT_API_KEY to store secrets securely.
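A small loader that fails fast when the variable is missing; reading from an injectable mapping keeps it easy to test (the helper name is illustrative):

```python
import os

def load_api_key(env=None):
    # Read the key from the environment rather than hard-coding it in source.
    env = os.environ if env is None else env
    key = env.get("ALPHAGENT_API_KEY")
    if not key:
        raise RuntimeError("ALPHAGENT_API_KEY is not set")
    return key
```

The result can then be passed as `api_key=load_api_key()` when constructing the client, so no secret ever appears in the codebase.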