
Overview

The Log Ingestion API receives structured log entries from your application. Logs are stored in ClickHouse for high-throughput ingestion and fast querying, and can be correlated with errors via trace IDs.

Authentication

Log ingestion authenticates with an API key sent in the request payload: include your project API key as the api_key field in the request body. See Authentication for details.

Batch Log Ingestion

Send multiple log entries in a single request. This is the only log ingestion endpoint; single-log ingestion is not supported.
curl -X POST https://api.proliferate.com/api/v1/logs/batch \
  -H "Content-Type: application/json" \
  -d '{
    "api_key": "pk_abc123def456ghi789",
    "environment": "production",
    "release": "v1.2.3",
    "logs": [
      {
        "timestamp": 1736938200.123,
        "level": "info",
        "message": "User logged in successfully",
        "template": "User {user_id} logged in successfully",
        "attributes": {
          "user_id": "user_123",
          "ip_address": "192.168.1.1"
        },
        "trace_id": "trace_abc123def456",
        "span_id": "span_xyz789",
        "user": {
          "id": "user_123",
          "email": "[email protected]"
        },
        "account": {
          "id": "acct_456",
          "name": "Acme Corp"
        },
        "logger_name": "auth.service",
        "file_path": "src/auth/login.py",
        "line_number": 42,
        "function_name": "handle_login"
      },
      {
        "timestamp": 1736938205.456,
        "level": "error",
        "message": "Database connection failed: timeout after 5000ms",
        "template": "Database connection failed: {error}",
        "attributes": {
          "error": "timeout after 5000ms",
          "retry_count": 3
        },
        "trace_id": "trace_abc123def456",
        "logger_name": "db.pool"
      }
    ]
  }'

Endpoint

POST /api/v1/logs/batch

Request Schema

api_key (string, required)
  Your project API key (format: pk_...)
environment (string, optional)
  Environment (e.g., production, staging). Default: production
release (string, optional)
  Release version or git commit SHA
logs (array, required)
  Array of log entries (max 100 per batch)
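
For reference, the request body maps to roughly the following shape. This is a TypeScript sketch inferred from the example above; field optionality beyond the required markers is an assumption, and official SDK types may differ.

interface LogEntry {
  timestamp: number;                      // Unix epoch seconds, fractional for millisecond precision
  level: "trace" | "debug" | "info" | "warn" | "error" | "fatal";
  message: string;
  template?: string;                      // parameterized form of message, e.g. "User {user_id} logged in"
  attributes?: Record<string, unknown>;   // structured fields for filtering and aggregation
  trace_id?: string;                      // correlates this log with errors
  span_id?: string;
  user?: { id: string; email?: string };
  account?: { id: string; name?: string };
  logger_name?: string;
  file_path?: string;                     // source location (server-side SDKs)
  line_number?: number;
  function_name?: string;
}

interface LogBatchRequest {
  api_key: string;                        // project API key (pk_...)
  environment?: string;                   // defaults to "production"
  release?: string;                       // release version or git commit SHA
  logs: LogEntry[];                       // max 100 entries per batch
}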

Response

Success (202 Accepted):
{
  "accepted": 2,
  "dropped": 0
}
Error (401 Unauthorized):
{
  "detail": "Invalid API key"
}
Error (503 Service Unavailable):
{
  "detail": "Log storage temporarily unavailable"
}

Log Levels

Use appropriate log levels to categorize log entries:
Level   Use Case                         Examples
trace   Very detailed diagnostic info    Function entry/exit, variable values
debug   Detailed debugging info          Query execution, cache hits/misses
info    General informational messages   User actions, successful operations
warn    Warning conditions               Deprecated API usage, slow queries
error   Error conditions                 Failed operations, exceptions
fatal   Critical failures                System crashes, data corruption
Use the minLevel SDK option to filter logs before sending them (e.g., only send info and above in production).

Template Logging

Templates enable log aggregation by extracting the structure from variable messages.

Without Templates

logger.info(`User ${userId} logged in from ${ipAddress}`);
// Generates unique log entries:
// "User user_123 logged in from 192.168.1.1"
// "User user_456 logged in from 10.0.0.1"
// Hard to aggregate!

With Templates

logger.fmt`User ${userId} logged in from ${ipAddress}`;
// Template: "User {userId} logged in from {ipAddress}"
// All similar logs group together for analysis
The template field is extracted automatically when using template literals with the SDK’s .fmt method.
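
For intuition, here is a minimal sketch of how a tagged template literal can split a log call into a message and a template. It uses positional placeholders because variable names are not available at runtime; the SDK's actual fmt implementation may differ.

function fmt(strings: TemplateStringsArray, ...values: unknown[]) {
  // Rebuild the template with positional placeholders ("{0}", "{1}", ...)
  const template = strings.raw.reduce(
    (acc, chunk, i) => acc + chunk + (i < values.length ? `{${i}}` : ""),
    ""
  );
  // Rebuild the fully interpolated message
  const message = strings.raw.reduce(
    (acc, chunk, i) => acc + chunk + (i < values.length ? String(values[i]) : ""),
    ""
  );
  return { message, template, values };
}

// fmt`User ${userId} logged in from ${ipAddress}`
// => { message:  "User user_123 logged in from 192.168.1.1",
//      template: "User {0} logged in from {1}",
//      values:   ["user_123", "192.168.1.1"] }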

Trace ID Correlation

The killer feature: Correlate logs with errors using trace IDs.
1. SDK Generates Trace ID
   The SDK generates a unique trace ID per page load or request.
2. Attach to Logs and Errors
   Both logs and errors include the same trace_id.
3. View in Dashboard
   When viewing an error, see all logs with the same trace ID (±5 minutes) to understand what led to the error.

Example Flow

// Page load - trace ID generated
const traceId = Proliferate.getTraceId();
// => "trace_abc123def456"

// User actions logged with trace ID
logger.info('User clicked button', { button: 'submit' });
logger.info('Validating form data');
logger.warn('Missing optional field: phone');

// Error occurs
throw new Error('Validation failed');

// In dashboard: view the error and see all 3 logs leading up to it

Storage and Performance

Logs are stored in ClickHouse, a high-performance columnar database optimized for:
  • High-throughput ingestion: Millions of logs per second
  • Fast querying: Sub-second queries on billions of rows
  • Time-based partitioning: Efficient queries with time ranges
  • Compression: 10x-100x compression ratios

Time-Window Optimization

When correlating logs with errors, Proliferate uses a ±5 minute time window around the error:
SELECT * FROM log_entries
WHERE project_id = 'proj_123'
  AND trace_id = 'trace_abc123def456'
  AND timestamp BETWEEN error_time - INTERVAL 5 MINUTE AND error_time + INTERVAL 5 MINUTE
ORDER BY timestamp ASC
This hits the ClickHouse primary key index (project_id, timestamp) for optimal performance.

Best Practices

Batch Logs Efficiently

The SDK batches logs over a configurable interval (default: 5 seconds) to reduce network overhead. If you batch logs manually, follow these guidelines (see the sketch after this list):
  • Batch 10-100 logs per request
  • Flush every 5-10 seconds
  • Don’t exceed 100 logs per batch
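
If you are not using the SDK, a manual batcher following these guidelines might look like this sketch. The class name and structure are illustrative; only the endpoint and payload shape come from the batch API above.

class LogBatcher {
  private buffer: object[] = [];

  constructor(private apiKey: string, private flushIntervalMs = 5000) {
    // Flush on a fixed interval (5-10 seconds is a reasonable default)
    setInterval(() => this.flush(), this.flushIntervalMs);
  }

  log(entry: object) {
    this.buffer.push(entry);
    // Never let a batch exceed the 100-log limit
    if (this.buffer.length >= 100) void this.flush();
  }

  async flush() {
    if (this.buffer.length === 0) return;
    const logs = this.buffer.splice(0, 100);
    await fetch("https://api.proliferate.com/api/v1/logs/batch", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ api_key: this.apiKey, environment: "production", logs }),
    });
  }
}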

Use Structured Attributes

Instead of:
logger.info(`Processing order ${orderId} for user ${userId} with total ${total}`);
Use:
logger.info('Processing order', {
  order_id: orderId,
  user_id: userId,
  total: total,
  currency: 'USD',
});
This enables filtering and aggregation by specific fields.

Set Appropriate Min Level

Don’t send debug and trace logs to production:
Proliferate.init({
  logs: {
    enabled: true,
    minLevel: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
  },
});

Include Source Location

For server-side logs, include source location:
import proliferate

proliferate.logger.info(
    "Order processed",
    order_id=order_id,
    # Source location auto-captured by Python SDK
)
This helps identify where logs originated in your codebase.

Use Template Logging

Templates enable log grouping and analysis:
// Without template - hard to aggregate
logger.info(`Failed to process ${count} items`);

// With template - groups all similar logs
logger.fmt`Failed to process ${count} items`;

Querying Logs

Via Dashboard

Use the Logs page to query logs with filters:
  • Time range
  • Log levels
  • Trace ID
  • Search in message
  • User/account context

Via API

Query logs programmatically:
GET /api/v1/projects/{project_id}/logs?
  levels=error,warn&
  search=database&
  start_time=2025-01-15T00:00:00Z&
  end_time=2025-01-15T23:59:59Z&
  limit=100
Query Parameters:
levels (string)
  Comma-separated log levels (e.g., error,warn)
search (string)
  Full-text search in message
trace_id (string)
  Filter by trace ID (for error correlation)
start_time (string, ISO 8601)
  Start of time range
end_time (string, ISO 8601)
  End of time range
limit (integer, default: 100)
  Max logs to return (1-1000)
offset (integer, default: 0)
  Pagination offset
Response:
{
  "logs": [
    {
      "timestamp": "2025-01-15T10:30:00.123Z",
      "level": "error",
      "message": "Database connection failed",
      "template": "Database connection failed: {error}",
      "attributes": { "error": "timeout" },
      "trace_id": "trace_abc123",
      "span_id": "span_xyz789",
      "user_id": "user_123",
      "account_id": "acct_456",
      "logger_name": "db.pool",
      "file_path": "src/db/pool.py",
      "line_number": 42,
      "function_name": "connect"
    }
  ],
  "total": 1,
  "has_more": false
}
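
As a rough sketch, the same query can be issued from TypeScript as shown below. The project ID and bearer token are placeholders, and the Authorization header is an assumption; see Authentication for the scheme your project actually uses.

const PROJECT_ID = "YOUR_PROJECT_ID";   // placeholder
const API_TOKEN = "YOUR_API_TOKEN";     // placeholder; auth scheme is assumed, see Authentication

const params = new URLSearchParams({
  levels: "error,warn",
  search: "database",
  start_time: "2025-01-15T00:00:00Z",
  end_time: "2025-01-15T23:59:59Z",
  limit: "100",
});

const res = await fetch(
  `https://api.proliferate.com/api/v1/projects/${PROJECT_ID}/logs?${params}`,
  { headers: { Authorization: `Bearer ${API_TOKEN}` } }
);
const { logs, total, has_more } = await res.json();
console.log(`Fetched ${logs.length} of ${total} logs (has_more: ${has_more})`);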

Logs for an Error

Get logs correlated with a specific error:
GET /api/v1/events/{event_id}/logs
This returns all logs with the same trace ID within ±5 minutes of the error, ordered by timestamp.

Testing

Test log ingestion with a simple cURL command (the $(date +%s.%3N) timestamp requires GNU date):
curl -X POST https://api.proliferate.com/api/v1/logs/batch \
  -H "Content-Type: application/json" \
  -d '{
    "api_key": "YOUR_API_KEY",
    "environment": "development",
    "logs": [
      {
        "timestamp": '$(date +%s.%3N)',
        "level": "info",
        "message": "Test log from cURL"
      }
    ]
  }'
Check the Logs page in the dashboard to verify the log appears.