Now in Early Access

The API Platform for AI Agents

Transform APIs instantly. Aggregate services, reshape data, and deploy API logic — with native performance and zero cold starts.

Start free. No credit card required.

MCP & A2A

Turn APIs into AI Tools with MCP & A2A

Expose your transformations as tools for LLMs via the Model Context Protocol (MCP)—no extra code. Or build the platform: host MCP servers and Agent-to-Agent (A2A) interfaces so any company can publish their APIs for AI and autonomous agents.

Instant MCP Server

Every project gets a dedicated MCP-compliant SSE endpoint. Use it in Claude, Cursor, or other MCP clients—or host MCP for your customers with no extra infrastructure.

Agent-to-Agent (A2A)

Standardize how agents talk to each other. Publish A2A interfaces so autonomous agents can discover and execute tasks against your APIs.

Secure & Governed

AI and agent traffic runs through your transformed endpoints. Control access with authentication, rate limiting, and audit logging.

claude_desktop_config.json
{
  "mcpServers": {
    "xform-project": {
      "url": "https://api.xform.sh/mcp/sse...",
      "type": "sse"
    }
  }
}

Works with Claude, Cursor, and other MCP clients

0.04ms
Average Latency
1000x
Faster than FaaS
0ms
Cold Start Time
Concurrent Transformers

Built for production

Enterprise-Grade Platform Features

Reliability, observability, and safe rollout—so your team can ship faster without breaking things.

Multiple API Support

Combine data from multiple sources. Parallel execution or sequential chaining with smart mock matching.

✓ Parallel & sequential execution
✓ Data chaining between calls
✓ Smart URL matching

Testing & Simulation

Contract verification and manual simulation before deployment. Test with custom inputs and mocks.

✓ Automated contract verification
✓ Manual simulation mode
✓ Debug panel for multi-API
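
To illustrate the idea behind automated contract verification, here is a minimal sketch: compare a live response against a recorded contract of expected field types. The `Contract` format and function names are assumptions for illustration, not the platform's actual API.

```typescript
// Hypothetical contract format: field name -> expected typeof result.
type Contract = Record<string, "string" | "number" | "boolean">;

// Returns a list of violations; an empty list means the response
// still satisfies the contract.
function verifyContract(
  response: Record<string, unknown>,
  contract: Contract
): string[] {
  const violations: string[] = [];
  for (const [field, expected] of Object.entries(contract)) {
    const actual = typeof response[field];
    if (actual !== expected) {
      violations.push(`${field}: expected ${expected}, got ${actual}`);
    }
  }
  return violations;
}
```

Real contract checkers also handle nested objects and optional fields; this sketch only shows the core shape comparison.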

Canary Releases

Deploy with confidence using percentage-based traffic splitting. Instant rollback if issues arise.

✓ Semantic versioning
✓ Traffic splitting (10%, 25%, 50%)
✓ One-click rollback
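
The routing idea behind percentage-based splitting can be sketched as a deterministic hash bucket: a given request ID always lands in the same bucket, so a caller sees a consistent version during a canary. The FNV-1a hash and bucket math here are illustrative assumptions, not the platform's actual routing code.

```typescript
// Deterministically assign a request to the canary or stable version
// based on a traffic percentage (e.g. 10, 25, 50).
function canaryBucket(
  requestId: string,
  canaryPercent: number
): "canary" | "stable" {
  // FNV-1a hash gives a stable, well-spread 32-bit value per ID
  let hash = 2166136261;
  for (let i = 0; i < requestId.length; i++) {
    hash ^= requestId.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  const bucket = (hash >>> 0) % 100; // map to 0..99
  return bucket < canaryPercent ? "canary" : "stable";
}
```

Rollback then amounts to setting the canary percentage back to 0 — no requests hash into the canary bucket.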

Live Debugging

Structured logging with color-coded output. Environment-aware verbosity and external service integration.

✓ DEBUG, INFO, WARN, ERROR levels
✓ Stack trace capture
✓ Sentry/Datadog ready
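
A minimal sketch of what a leveled, environment-aware logger looks like — messages below the configured verbosity threshold are dropped. The names (`makeLogger`, the level ordering) are illustrative, not the platform's actual logging API.

```typescript
type Level = "DEBUG" | "INFO" | "WARN" | "ERROR";
const ORDER: Level[] = ["DEBUG", "INFO", "WARN", "ERROR"];

// Returns a log function that formats messages at or above minLevel
// and silently drops anything below it (e.g. DEBUG in production).
function makeLogger(minLevel: Level) {
  const threshold = ORDER.indexOf(minLevel);
  return (level: Level, msg: string): string | null => {
    if (ORDER.indexOf(level) < threshold) return null; // below verbosity
    return `[${level}] ${msg}`;
  };
}
```

In production you would typically construct the logger with `WARN` or `ERROR`, and with `DEBUG` in staging.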

Live Analytics Dashboard

Real-time monitoring with 12 key metrics. Interactive charts, auto-refresh, and latency breakdown.

✓ QPS, P50/P95/P99 latency
✓ Service vs proxy overhead
✓ Interactive chart filtering

Mock API System

Create instant mock endpoints with AI-generated responses. Configurable latency, collections, and cURL import.

✓ AI-powered response generation
✓ 0-60s latency simulation
✓ Instant serve endpoints
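
The 0–60s latency simulation can be sketched as a delayed mock responder; `clampLatency` and `mockRespond` are hypothetical names used here for illustration only.

```typescript
// Keep configured delays within the supported 0-60s window.
function clampLatency(delayMs: number): number {
  return Math.min(Math.max(delayMs, 0), 60_000);
}

// Serve a canned mock body after the (clamped) simulated latency.
async function mockRespond<T>(body: T, delayMs: number): Promise<T> {
  await new Promise((resolve) => setTimeout(resolve, clampLatency(delayMs)));
  return body;
}
```

This is useful for testing how clients behave against a slow upstream before the real API exists.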
New

AI On-Call

When external APIs fail or drift, get AI-generated root cause analysis and suggested code patches. Alerts on 5xx and schema drift, with configurable sampling and PII masking.

Learn more
New

gRPC for All Transformers

Call every transformer via HTTP or gRPC. Strongly-typed contracts, lower latency, and a downloadable transformer.proto. Live debug and staging support gRPC too.

Learn more

Code-as-Infrastructure & Zero Vendor Lock-in

Sync projects natively with private GitHub repositories for full GitOps version control and auditability. Develop locally with the xform-cli. Avoid cloud vendor lock-in entirely: download your compiled .so native Rust binaries and run the transformation logic on-premise or inside your own Docker containers.

From cURL to live API in 3 steps

Paste, generate, deploy — then watch it execute at scale with sub-millisecond latency overhead

1

Paste your cURL command

Start with any API cURL command. Our wizard parses it and extracts the structure.

Input
curl -X POST https://api.vendor.com/v1/data \
  -H "Authorization: Bearer key" \
  -H "Content-Type: application/json" \
  -d '{ "user_id": "123", "amount": 200 }'

Parsed in milliseconds

2

AI generates TypeScript → Compiles to native Rust

Generated TypeScript ✓ AI
export async function transform(event) {
  const data = await event.json();

  return {
    id: data.user_id,
    fee: data.amount * 0.007,
    timestamp: Date.now()
  };
}

Compiling...

TypeScript → Rust

Native binary

Compiled Rust ✓ Native
pub fn transform(event: Request) -> Response {
  let data: Value = parse_json(event);

  json!({
    "id": data["user_id"],
    "fee": data["amount"] * 0.007,
    "timestamp": now()
  })
}

Under the hood

TypeScript is developer-friendly, but Rust is performance-native. We compile to .so/.dylib libraries that are loaded into memory with zero cold starts and 0.04ms execution time. No JS runtime, no V8, no WASM overhead.

3

Deployed & live in production

Your endpoint is instantly live with a unique URL. Hot-loaded into the runtime — ready to handle production traffic.

Production Endpoint
POST https://api.xform.sh/v1/proxy/abc-123
Request:
{ "user_id": "123", "amount": 200 }
Response (0.04ms):
{
  "id": "123",
  "fee": 1.4,
  "timestamp": 1733483081
}
Live • No cold starts • Sub-millisecond latency

What happens now

  • Compiled binary loaded into memory
  • Hot-reload enabled (update without redeploy)
  • Version history tracked automatically
  • Metrics & logs streaming in real-time

Performance

0.04ms
Execution
0ms
Cold Start
Concurrent

Total time: ~10 minutes

From pasting cURL to production-ready endpoint with native Rust performance

Where xform.sh sits in your stack

The logic layer between your services and the outside world — no backend redeploys for transformation changes.

Clients
Web · Mobile · APIs
API Gateway
Auth · Rate limit
Backend Services
Your microservices
xform.sh
Logic layer
External world
APIs · DBs · Partners
Your infrastructure · xform.sh (logic) · External

Real-world transformation examples

See how xform.sh handles single API proxying, parallel aggregation, and sequential workflows

1

Single API Proxy

Payment fee calculation with custom routing logic

Input Request

POST /transform
{
  "user_id": "usr_123",
  "amount": 10000,
  "currency": "USD",
  "payment_method": "card",
  "region": "US"
}

Transformation Logic

Written by AI
const fee = amount * 0.029;
const tax = fee * 0.08;
const total = amount + fee + tax;

// Dynamic routing
const processor =
  region === "US" ? "stripe" : "adyen";

return {
  ...data,
  fee, tax, total,
  processor
};

Output Response

0.04ms
{
  "user_id": "usr_123",
  "amount": 10000,
  "currency": "USD",
  "fee": 290,
  "tax": 23.2,
  "total": 10313.2,
  "processor": "stripe",
  "timestamp": 1704067200
}
Use case: Fee calculation logic that changes frequently based on business rules. Update fee percentages, routing logic, or add promo codes without redeploying payment services.
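
The fee logic above can be run standalone as a quick sanity check. This sketch inlines the transformation as a pure function (the platform's event plumbing is omitted); the type and function names are illustrative.

```typescript
type PaymentInput = {
  user_id: string;
  amount: number;
  currency: string;
  region: string;
};

// Pure fee/tax/routing logic from the example above.
function applyFees(data: PaymentInput) {
  const fee = data.amount * 0.029; // 2.9% processing fee
  const tax = fee * 0.08;          // 8% tax on the fee
  const processor = data.region === "US" ? "stripe" : "adyen";
  return { ...data, fee, tax, total: data.amount + fee + tax, processor };
}
```

Because the logic is a pure function, changing a fee percentage or adding a promo-code branch is a transformer redeploy, not a payment-service release.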
2

Multiple APIs - Parallel Aggregation

User dashboard with data from 3 independent services

Source APIs (called in parallel)

API 1: User Service

GET /api/users/{id}
{ "name": "John", "email": "..." }

API 2: Orders Service

GET /api/orders?user={id}
{ "orders": [...], "total": 25 }

API 3: Analytics Service

GET /api/analytics/{id}
{ "views": 142, "clicks": 38 }

Transformation Logic

Parallel Execution

Written by AI
// Fetch all APIs simultaneously
const [user, orders, analytics] =
  await Promise.all([
    fetch(userAPI),
    fetch(ordersAPI),
    fetch(analyticsAPI)
  ]);

// Aggregate into single response
return {
  user: {
    name: user.name,
    email: user.email
  },
  stats: {
    orders: orders.total,
    views: analytics.views,
    engagement: analytics.clicks / analytics.views
  }
};

Aggregated Response

3 APIs in parallel · Total: 0.12ms
{
  "user": {
    "name": "John Doe",
    "email": "john@example.com"
  },
  "stats": {
    "orders": 25,
    "views": 142,
    "engagement": 0.268
  },
  "generatedAt": 1704067200
}
Use case: Backend-for-Frontend (BFF) pattern for mobile/web dashboards. Combine multiple independent services into a single optimized response. Reduce mobile app network calls from 3 to 1.
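
A runnable sketch of the parallel aggregation above, with the three services stubbed as async functions (real code would call fetch). The pure reshaping step is split into `aggregate()` so it can be tested without any network; all names here are illustrative.

```typescript
type User = { name: string; email: string };
type Orders = { orders: unknown[]; total: number };
type Analytics = { views: number; clicks: number };

// Pure reshaping: combine three service payloads into one response.
function aggregate(user: User, orders: Orders, analytics: Analytics) {
  return {
    user: { name: user.name, email: user.email },
    stats: {
      orders: orders.total,
      views: analytics.views,
      engagement: analytics.clicks / analytics.views
    }
  };
}

async function dashboard(
  getUser: () => Promise<User>,
  getOrders: () => Promise<Orders>,
  getAnalytics: () => Promise<Analytics>
) {
  // All three calls start at once; total latency tracks the slowest call
  const [user, orders, analytics] = await Promise.all([
    getUser(),
    getOrders(),
    getAnalytics()
  ]);
  return aggregate(user, orders, analytics);
}
```

Because `Promise.all` starts all three requests concurrently, the mobile client pays for one round trip to xform.sh instead of three to separate services.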
3

Multiple APIs - Sequential Workflow

Order processing with dependent API calls

Sequential API Calls

1

Authenticate User

POST /api/auth/login
→ { "token": "eyJ...", "userId": "123" }
2

Fetch User Cart (using token)

GET /api/cart
Headers: { Authorization: token }
→ { "items": [...], "total": 299.99 }
3

Create Order (using cart)

POST /api/orders
Body: { userId, items, total }
→ { "orderId": "ord_456", "status": "pending" }

Transformation Logic

Sequential Execution

Written by AI
// Step 1: Authenticate
const auth = await fetch(authAPI, { method: 'POST', ... });

// Step 2: Get cart using token
const cart = await fetch(cartAPI, {
  headers: { Authorization: auth.token }
});

// Step 3: Create order
const order = await fetch(orderAPI, {
  method: 'POST',
  body: {
    userId: auth.userId,
    items: cart.items,
    total: cart.total
  }
});

return {
  success: true,
  orderId: order.orderId,
  items: cart.items.length,
  total: cart.total
};

Final Response

3 APIs sequential · Total: 0.15ms
{
  "success": true,
  "orderId": "ord_456",
  "items": 3,
  "total": 299.99,
  "status": "pending",
  "processedAt": 1704067200
}
Use case: Complex workflows where each step depends on the previous. Common in checkout flows, multi-step authentication, or data enrichment pipelines. Each step uses data from previous calls.
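
The sequential chaining above can be sketched as a function that threads each step's output into the next. The step functions are injected here so the control flow is visible without real HTTP calls; all type and function names are illustrative.

```typescript
type Auth = { token: string; userId: string };
type Cart = { items: unknown[]; total: number };
type Order = { orderId: string; status: string };

// Each await depends on data from the previous step, so the calls
// must run sequentially, not in parallel.
async function checkout(
  login: () => Promise<Auth>,
  getCart: (token: string) => Promise<Cart>,
  createOrder: (userId: string, cart: Cart) => Promise<Order>
) {
  const auth = await login();                         // step 1: get token
  const cart = await getCart(auth.token);             // step 2: needs token
  const order = await createOrder(auth.userId, cart); // step 3: needs cart
  return {
    success: true,
    orderId: order.orderId,
    items: cart.items.length,
    total: cart.total
  };
}
```

Injecting the steps also makes the workflow trivial to simulate with mocks before pointing it at live APIs.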

Stop waiting for backend deployments.

Ship logic instantly with sub-millisecond performance.