Now in Early Access

Transform APIs. Instantly.

Aggregate services, reshape data, and deploy API logic instantly — with native Rust performance and zero cold starts.

Rust runtime for dynamic backend logic

→ Paste a cURL → Get a production endpoint in 10 minutes

Start free. No credit card required.

For teams that update backend logic frequently

Aggregators · Payments · FinTech · POS Integrations · Super Apps · Platform Teams · B2B SaaS · Webhook Handlers · Legacy Modernization · Mobile BFF

If you update API mappings, routing rules, fee logic, risk logic, or data shaping — xform.sh eliminates deployment cycles.

Before and after xform.sh

See how modern teams ship faster with complete visibility

Before

  • PR reviews + CI/CD + staging + redeploy
  • Logic changes require backend team
  • FaaS latency (100ms+)
  • Downtime risk with deployments
  • No visibility into transformations
  • Manual rollback procedures
  • Waiting for backend mocks

After

  • Paste cURL → live API in 30 seconds
  • Anyone (or an AI) can update logic
  • Native 0.04ms execution
  • Zero-downtime canary releases
  • 12 metrics + live analytics dashboard
  • One-click rollback with version history
  • AI-powered mock APIs instantly

Stop deploying backend logic.

No more PR reviews, CI/CD pipelines, microservices, staging environments, or container spin-ups just to change a mapping, fee rule, transformation, routing rule, or data shape.

With xform.sh, logic is dynamic — and motion is instant.

0.04ms Average Latency · 1000x Faster than FaaS · 0ms Cold Start Time · Concurrent Transformers

Enterprise-Grade Platform Features

Production-ready features for teams that need reliability, observability, and velocity.

Multiple API Support

Combine data from multiple sources. Parallel execution or sequential chaining with smart mock matching.

✓ Parallel & sequential execution
✓ Data chaining between calls
✓ Smart URL matching

Testing & Simulation

Contract verification and manual simulation before deployment. Test with custom inputs and mocks.

✓ Automated contract verification
✓ Manual simulation mode
✓ Debug panel for multi-API

Canary Releases

Deploy with confidence using percentage-based traffic splitting. Instant rollback if issues arise.

✓ Semantic versioning
✓ Traffic splitting (10%, 25%, 50%)
✓ One-click rollback
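Percentage-based splitting is typically done with a stable hash so the same caller always lands on the same version for the duration of a canary. A minimal sketch of the idea — `bucketFor` and `pickVersion` are hypothetical names, not part of the xform.sh API:

```typescript
// Hypothetical sketch: deterministic percentage-based traffic splitting.
// A stable hash of the request ID maps each caller to a 0-99 bucket, so
// the same caller consistently hits the same version during a rollout.
function bucketFor(requestId: string): number {
  let hash = 0;
  for (const ch of requestId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep as unsigned 32-bit
  }
  return hash % 100;
}

// canaryPercent of 10, 25, or 50 routes that share of traffic to the canary.
function pickVersion(requestId: string, canaryPercent: number): "canary" | "stable" {
  return bucketFor(requestId) < canaryPercent ? "canary" : "stable";
}

// Same request ID always resolves to the same version:
console.log(pickVersion("req_abc", 25) === pickVersion("req_abc", 25)); // true
```

Because the split is deterministic rather than random per request, a one-click rollback simply sets the canary share back to 0%.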

Live Debugging

Structured logging with color-coded output. Environment-aware verbosity and external service integration.

✓ DEBUG, INFO, WARN, ERROR levels
✓ Stack trace capture
✓ Sentry/Datadog ready

Live Analytics Dashboard

Real-time monitoring with 12 key metrics. Interactive charts, auto-refresh, and latency breakdown.

✓ QPS, P50/P95/P99 latency
✓ Service vs proxy overhead
✓ Interactive chart filtering
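For readers unfamiliar with P50/P95/P99: a percentile is read off a sorted window of latency samples. A minimal sketch of the idea (real dashboards usually use streaming estimators; `percentile` here is an illustrative helper, not the xform.sh API):

```typescript
// Hypothetical sketch: P50/P95/P99 latency from a window of samples.
// Sort the recorded latencies, then read the value at each percentile rank.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1; // nearest-rank method
  return sorted[Math.min(sorted.length - 1, Math.max(0, idx))];
}

// One slow outlier barely moves P50 but dominates P95/P99:
const latenciesMs = [0.03, 0.04, 0.04, 0.05, 0.04, 0.9, 0.04, 0.05, 0.03, 0.04];
console.log({
  p50: percentile(latenciesMs, 50),
  p95: percentile(latenciesMs, 95),
  p99: percentile(latenciesMs, 99),
});
```

This is why tail percentiles, not averages, are the metric to watch when comparing proxy overhead against service time.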

Mock API System

Create instant mock endpoints with AI-generated responses. Configurable latency, collections, and cURL import.

✓ AI-powered response generation
✓ 0-60s latency simulation
✓ Instant serve endpoints

New: AI Integration
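Conceptually, latency simulation just delays a canned response by a configured amount before serving it. A minimal sketch under that assumption — `MockConfig` and `serveMock` are illustrative names, not the xform.sh API:

```typescript
// Hypothetical sketch: a mock endpoint with configurable latency,
// mirroring the 0-60s latency-simulation feature.
type MockConfig = {
  response: unknown;  // the canned payload to return
  latencyMs: number;  // simulated upstream delay
};

async function serveMock(config: MockConfig): Promise<unknown> {
  // Clamp to the 0-60s range the feature advertises.
  const delay = Math.min(Math.max(config.latencyMs, 0), 60_000);
  await new Promise((resolve) => setTimeout(resolve, delay));
  return config.response;
}

// Simulate a slow upstream returning a canned payload after 150ms:
serveMock({ response: { status: "ok" }, latencyMs: 150 }).then((r) =>
  console.log(JSON.stringify(r))
);
```

Pointing a transformer at a mock like this lets frontend work proceed before the real backend exists, and lets you test timeout handling against worst-case delays.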

Turn APIs into AI Tools with MCP

Automatically expose your transformations as tools for LLMs using the Model Context Protocol (MCP). No extra code required — just deploy your transformer and it's ready for your AI agents.

Instant MCP Server

Every project gets a dedicated MCP-compliant SSE endpoint that lists your transformers as executable tools.

Claude Desktop Integration

Connect directly to Claude Desktop. Let Claude call your APIs to fetch data, execute logic, or trigger workflows.

Secure & Controlled

AI agents run through your transformed endpoints, ensuring they respect your logic, validation, and security rules.

claude_desktop_config.json

{
  "mcpServers": {
    "xform-project": {
      "url": "https://api.xform.sh/mcp/sse...",
      "type": "sse"
    }
  }
}

Works with Claude, Cursor, and other MCP clients
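For illustration, an MCP client discovers tools by calling the protocol's `tools/list` method. Assuming transformers map one-to-one onto tools, a listing might look like the sketch below — the field names follow the MCP `tools/list` result shape, while the tool name and schema are hypothetical:

```json
{
  "tools": [
    {
      "name": "fee_calculator",
      "description": "Payment fee calculation with custom routing logic",
      "inputSchema": {
        "type": "object",
        "properties": {
          "user_id": { "type": "string" },
          "amount": { "type": "number" }
        },
        "required": ["user_id", "amount"]
      }
    }
  ]
}
```

The agent then invokes the tool, and the call executes through your transformed endpoint like any other request.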

From cURL to live API in 3 steps

Paste, generate, deploy — then watch it execute at native Rust speeds

1

Paste your cURL command

Start with any API cURL command. Our wizard parses it and extracts the structure.

Input
curl -X POST https://api.vendor.com/v1/data \
  -H "Authorization: Bearer key" \
  -H "Content-Type: application/json" \
  -d '{ "user_id": "123", "amount": 200 }'

Parsed in milliseconds

2

AI generates TypeScript → Compiles to native Rust

Generated TypeScript · ✓ AI

export async function transform(event) {
  const data = await event.json();

  return {
    id: data.user_id,
    fee: data.amount * 0.007,
    timestamp: Date.now()
  };
}

Compiling: TypeScript → Rust → native binary

Compiled Rust · ✓ Native

pub fn transform(event: Request) -> Response {
    let data: Value = parse_json(event);

    json!({
        "id": data["user_id"],
        "fee": data["amount"] * 0.007,
        "timestamp": now()
    })
}

Under the hood

TypeScript is developer-friendly, but Rust is performance-native. We compile to .so/.dylib libraries that are loaded into memory with zero cold starts and 0.04ms execution time. No JS runtime, no V8, no WASM overhead.

3

Deployed & live in production

Your endpoint is instantly live with a unique URL. Hot-loaded into the runtime — ready to handle production traffic.

Production Endpoint
POST https://api.xform.sh/v1/proxy/abc-123
Request:
{ "user_id": "123", "amount": 200 }
Response (0.04ms):
{
  "id": "123",
  "fee": 1.4,
  "timestamp": 1733483081
}
Live • No cold starts • Sub-millisecond latency

What happens now

  • Compiled binary loaded into memory
  • Hot-reload enabled (update without redeploy)
  • Version history tracked automatically
  • Metrics & logs streaming in real-time

Performance

0.04ms Execution · 0ms Cold Start · Concurrent

Total time: ~10 minutes

From pasting cURL to production-ready endpoint with native Rust performance

Where xform.sh sits in your stack

API Gateway → Backend Services → xform.sh → External APIs / Databases / Partners / Clients

xform.sh does not replace microservices — it decouples the logic layer (transformations, routing, fees, personalization) so backend deployments are not required for logic updates.

Or use it as a Backend-for-Frontend (BFF): Since xform.sh endpoints can be public, you can aggregate multiple internal APIs into a single optimized response for your web & mobile apps.

Real-world transformation examples

See how xform.sh handles single API proxying, parallel aggregation, and sequential workflows

1

Single API Proxy

Payment fee calculation with custom routing logic

Input Request

POST /transform
{
  "user_id": "usr_123",
  "amount": 10000,
  "currency": "USD",
  "payment_method": "card",
  "region": "US"
}

Transformation Logic

Written by AI
const fee = amount * 0.029;
const tax = fee * 0.08;

// Dynamic routing
const processor = region === "US" ? "stripe" : "adyen";

return { ...data, fee, tax, processor };

Output Response

0.04ms
{
  "user_id": "usr_123",
  "amount": 10000,
  "currency": "USD",
  "fee": 290,
  "tax": 23.2,
  "total": 10313.2,
  "processor": "stripe",
  "timestamp": 1704067200
}

Use case: Fee calculation logic that changes frequently based on business rules. Update fee percentages, routing logic, or add promo codes without redeploying payment services.

2

Multiple APIs - Parallel Aggregation

User dashboard with data from 3 independent services

Source APIs (called in parallel)

API 1: User Service

GET /api/users/{id}
{ "name": "John", "email": "..." }

API 2: Orders Service

GET /api/orders?user={id}
{ "orders": [...], "total": 25 }

API 3: Analytics Service

GET /api/analytics/{id}
{ "views": 142, "clicks": 38 }

Transformation Logic

Parallel Execution

Written by AI
// Fetch all APIs simultaneously
const [user, orders, analytics] = await Promise.all([
  fetch(userAPI),
  fetch(ordersAPI),
  fetch(analyticsAPI)
]);

// Aggregate into single response
return {
  user: {
    name: user.name,
    email: user.email
  },
  stats: {
    orders: orders.total,
    views: analytics.views,
    engagement: analytics.clicks / analytics.views
  }
};

Aggregated Response

3 APIs in parallel · Total: 0.12ms
{
  "user": {
    "name": "John Doe",
    "email": "john@example.com"
  },
  "stats": {
    "orders": 25,
    "views": 142,
    "engagement": 0.268
  },
  "generatedAt": 1704067200
}

Use case: Backend-for-Frontend (BFF) pattern for mobile/web dashboards. Combine multiple independent services into a single optimized response. Reduce mobile app network calls from 3 to 1.

3

Multiple APIs - Sequential Workflow

Order processing with dependent API calls

Sequential API Calls

1

Authenticate User

POST /api/auth/login
→ { "token": "eyJ...", "userId": "123" }
2

Fetch User Cart (using token)

GET /api/cart
Headers: { Authorization: token }
→ { "items": [...], "total": 299.99 }
3

Create Order (using cart)

POST /api/orders
Body: { userId, items, total }
→ { "orderId": "ord_456", "status": "pending" }

Transformation Logic

Sequential Execution

Written by AI
// Step 1: Authenticate
const auth = await fetch(authAPI, { method: 'POST', ... });

// Step 2: Get cart using token
const cart = await fetch(cartAPI, {
  headers: { Authorization: auth.token }
});

// Step 3: Create order
const order = await fetch(orderAPI, {
  method: 'POST',
  body: {
    userId: auth.userId,
    items: cart.items,
    total: cart.total
  }
});

return {
  success: true,
  orderId: order.orderId,
  items: cart.items.length,
  total: cart.total
};

Final Response

3 APIs sequential · Total: 0.15ms
{
  "success": true,
  "orderId": "ord_456",
  "items": 3,
  "total": 299.99,
  "status": "pending",
  "processedAt": 1704067200
}

Use case: Complex workflows where each step depends on the previous. Common in checkout flows, multi-step authentication, or data enrichment pipelines. Each step uses data from previous calls.

All executed with native Rust performance

No deploys. No CI/CD. No cold starts. Update logic instantly without touching your backend services.

Single API
Perfect for fee calculations, routing logic, data shaping
Parallel APIs
Ideal for dashboards, BFF patterns, data aggregation
Sequential APIs
Great for workflows, pipelines, dependent operations

Note: Logic is authored in TypeScript for ergonomics, but executes as compiled native Rust — not JavaScript or WASM — ensuring 0.04ms execution time.

Stop waiting for backend deployments.

Ship logic instantly with sub-millisecond performance.

Build your first transformer in under 15 minutes — deploy to production without touching your microservices.