Overview
The Log Ingestion API receives structured log entries from your application. Logs are stored in ClickHouse for high-throughput ingestion and fast querying, and can be correlated with errors via trace IDs.

Authentication
Log ingestion uses API-key-in-payload authentication: include your project API key in the request body. See Authentication for details.

Batch Log Ingestion
Send multiple log entries in a single request. This is the only log ingestion endpoint; single-log ingestion is not supported.

Endpoint
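As a sketch, a batch request might look like the following. The endpoint path (`/api/v1/logs/batch`), the base URL, and the field names are assumptions, not confirmed by this page:

```typescript
// Hypothetical batch ingestion call; the path and field names
// (api_key, environment, release, logs) are assumptions.
const res = await fetch("https://api.proliferate.example/api/v1/logs/batch", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    api_key: "pk_your_project_key", // project API key (pk_...)
    environment: "production",      // defaults to "production"
    release: "1.4.2",               // release version or git commit SHA
    logs: [
      {
        level: "info",
        message: "User signed in",
        timestamp: new Date().toISOString(),
        trace_id: "abc123",         // enables error correlation
      },
    ],
  }),
});

console.log(res.status); // 202 Accepted on success
```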
Request Schema
- API key: your project API key (format: `pk_...`)
- Environment: e.g., `production`, `staging`. Default: `production`
- Release: release version or git commit SHA
- Logs: array of log entries (max 100 per batch)
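As a sketch, an individual log entry might carry fields like these. Only `trace_id` and `template` are named elsewhere on this page; the remaining field names are assumptions:

```typescript
// Hypothetical log entry shape (field names other than trace_id and
// template are assumptions).
interface LogEntry {
  level: "trace" | "debug" | "info" | "warn" | "error" | "fatal";
  message: string;
  timestamp?: string;                   // ISO 8601
  trace_id?: string;                    // correlates the entry with errors
  template?: string;                    // extracted by the SDK's .fmt helper
  attributes?: Record<string, unknown>; // structured attributes
}
```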
Response
Success (202 Accepted):
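The body shape is not shown on this page; a minimal sketch of an acknowledgement payload, with the field name assumed:

```typescript
// Hypothetical 202 response body.
interface BatchIngestResponse {
  accepted: number; // number of log entries queued for ingestion
}
```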
Log Levels

Use appropriate log levels to categorize log entries:

| Level | Use Case | Examples |
|---|---|---|
| `trace` | Very detailed diagnostic info | Function entry/exit, variable values |
| `debug` | Detailed debugging info | Query execution, cache hits/misses |
| `info` | General informational messages | User actions, successful operations |
| `warn` | Warning conditions | Deprecated API usage, slow queries |
| `error` | Error conditions | Failed operations, exceptions |
| `fatal` | Critical failures | System crashes, data corruption |
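For instance, with a minimal stand-in for an SDK logger (the real SDK's method names are not confirmed by this page):

```typescript
// Stand-in logger with one method per level (hypothetical API shape).
const logger = {
  debug: (msg: string, attrs?: object) => console.debug(msg, attrs),
  info:  (msg: string, attrs?: object) => console.info(msg, attrs),
  warn:  (msg: string, attrs?: object) => console.warn(msg, attrs),
  error: (msg: string, attrs?: object) => console.error(msg, attrs),
};

logger.debug("Cache miss", { key: "user:42" });           // detailed debugging info
logger.info("Checkout completed", { order_id: "ord_9" }); // successful operation
logger.warn("Slow query", { duration_ms: 1830 });         // warning condition
logger.error("Payment failed", { provider: "stripe" });   // error condition
```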
Template Logging
Templates enable log aggregation by extracting the structure from variable messages.

Without Templates
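Interpolating values straight into the message makes every occurrence unique:

```typescript
// Without templates every message is a distinct string, so
// "User alice purchased 3 items" and "User bob purchased 1 items"
// land in separate groups and cannot be aggregated.
const userId = "alice";
const itemCount = 3;
console.log(`User ${userId} purchased ${itemCount} items`);
```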
With Templates
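A sketch of the idea: the docs confirm the SDK has a `.fmt` method used with template literals, but its exact call shape and the template syntax below are assumptions, so this stand-in only illustrates the mechanism:

```typescript
// Minimal stand-in: a tagged-template helper that records both the
// rendered message and its template, so occurrences group together.
const logger = {
  fmt(strings: TemplateStringsArray, ...values: unknown[]) {
    return {
      message: strings.reduce(
        (out, s, i) => out + s + (i < values.length ? String(values[i]) : ""),
        ""
      ),
      template: strings.join("{}"), // e.g. "User {} purchased {} items"
    };
  },
  info(entry: { message: string; template: string }) {
    console.log(entry); // the real SDK would send both fields to the API
  },
};

const userId = "alice";
const itemCount = 3;
logger.info(logger.fmt`User ${userId} purchased ${itemCount} items`);
// => { message: "User alice purchased 3 items",
//      template: "User {} purchased {} items" }
```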
The `template` field is extracted automatically when using template literals with the SDK's `.fmt` method.
Trace ID Correlation
The killer feature: correlate logs with errors using trace IDs.

1. **SDK Generates Trace ID**: The SDK generates a unique trace ID per page load or request.
2. **Attach to Logs and Errors**: Both logs and errors include the same `trace_id`.
3. **View in Dashboard**: When viewing an error, see all logs with the same trace ID (±5 minutes) to understand what led to the error.
Example Flow
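A sketch of the flow with hypothetical helper names (the real SDK's API may differ):

```typescript
// One trace ID shared by all logs and the eventual error.
import { randomUUID } from "node:crypto";

const traceId = randomUUID(); // 1. generated once per page load / request

function log(level: string, message: string) {
  send({ level, message, trace_id: traceId }); // 2. every log carries it
}

try {
  log("info", "Checkout started");
  log("debug", "Cart validated");
  throw new Error("Payment provider timeout");
} catch (err) {
  // 2. the captured error carries the same trace_id...
  send({ level: "error", message: String(err), trace_id: traceId });
  // 3. ...so the dashboard can show the surrounding logs (±5 minutes).
}

function send(entry: object) {
  console.log(entry); // stand-in for the real ingestion call
}
```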
Storage and Performance
Logs are stored in ClickHouse, a high-performance columnar database optimized for:

- High-throughput ingestion: Millions of logs per second
- Fast querying: Sub-second queries on billions of rows
- Time-based partitioning: Efficient queries with time ranges
- Compression: 10x-100x compression ratios
Time-Window Optimization
When correlating logs with errors, Proliferate uses a ±5 minute time window around the error:
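Illustratively (a sketch of the window arithmetic, not the service's actual query):

```typescript
// The ±5 minute window bounds the scan to a narrow time range, which is
// cheap in a time-partitioned ClickHouse table.
const WINDOW_MS = 5 * 60 * 1000;

function correlationWindow(errorTimestamp: Date) {
  return {
    from: new Date(errorTimestamp.getTime() - WINDOW_MS),
    to: new Date(errorTimestamp.getTime() + WINDOW_MS),
  };
}

// Example: an error at 12:00:00 pulls logs from 11:55:00 through 12:05:00.
console.log(correlationWindow(new Date("2025-01-01T12:00:00Z")));
```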
Best Practices

Batch Logs Efficiently
The SDK batches logs over a configurable interval (default: 5 seconds). This reduces network overhead. Manual batching guidelines (see the sketch after this list):

- Batch 10-100 logs per request
- Flush every 5-10 seconds
- Don’t exceed 100 logs per batch
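A minimal manual batcher honoring those guidelines; `sendBatch` is a hypothetical stand-in for the batch ingestion call described above:

```typescript
// Buffers log entries and flushes every 5 seconds or at 100 entries,
// whichever comes first.
const MAX_BATCH = 100;
const FLUSH_INTERVAL_MS = 5_000;

let buffer: object[] = [];

function enqueue(entry: object) {
  buffer.push(entry);
  if (buffer.length >= MAX_BATCH) flush(); // never exceed 100 per batch
}

function flush() {
  if (buffer.length === 0) return;
  const batch = buffer;
  buffer = [];
  sendBatch(batch);
}

setInterval(flush, FLUSH_INTERVAL_MS); // flush every 5-10 seconds

function sendBatch(logs: object[]) {
  console.log(`sending ${logs.length} logs`); // stand-in for the API call
}
```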
Use Structured Attributes
Put variable values in structured attributes instead of embedding them in the message string:
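A sketch with a hypothetical logger; the message stays constant while the details live in attributes:

```typescript
// Stand-in for the SDK logger (hypothetical method signature).
const logger = {
  info: (message: string, attributes?: Record<string, unknown>) =>
    console.log(message, attributes),
};

// Instead of baking values into the message (every line is unique,
// so entries are hard to group, filter, or aggregate):
logger.info("Order ord_9 failed after 3 retries");

// ...keep the message constant and move the values into attributes:
logger.info("Order failed", { order_id: "ord_9", retries: 3 });
```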
Set Appropriate Min Level

Don’t send `debug` and `trace` logs to production:
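A sketch of a minimum-level gate; the real SDK's configuration option is not named on this page, so this only illustrates the behavior:

```typescript
// Severity order used to gate entries below the configured minimum.
const LEVELS = ["trace", "debug", "info", "warn", "error", "fatal"] as const;
type Level = (typeof LEVELS)[number];

// Hypothetical setting: "info" in production drops trace/debug entirely.
const minLevel: Level = "info";

function shouldSend(level: Level): boolean {
  return LEVELS.indexOf(level) >= LEVELS.indexOf(minLevel);
}

console.log(shouldSend("debug")); // false: not sent to production
console.log(shouldSend("error")); // true
```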
Include Source Location
For server-side logs, include source location:
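For example, as structured attributes; the attribute names `file`, `line`, and `function` are assumptions:

```typescript
// Stand-in for the SDK logger (hypothetical method signature).
const logger = {
  warn: (msg: string, attrs?: Record<string, unknown>) => console.warn(msg, attrs),
};

// Hypothetical source-location attributes on a server-side log entry.
logger.warn("Slow query", {
  file: "src/db/orders.ts",
  line: 112,
  function: "loadOrders",
});
```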
Use Template Logging

Templates enable log grouping and analysis:
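Reusing the `logger` stand-in sketched under Template Logging above (call shape assumed), the template rather than the rendered message becomes the grouping key:

```typescript
// All occurrences share the template "Payment of {} failed for {}",
// so they aggregate into one group regardless of the values.
logger.info(logger.fmt`Payment of ${amount} failed for ${userId}`);
```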
Querying Logs

Via Dashboard
Use the Logs page to query logs with filters:

- Time range
- Log levels
- Trace ID
- Search in message
- User/account context
Via API
Query logs programmatically. Supported filters:

- Levels: comma-separated log levels (e.g., `error,warn`)
- Search: full-text search in message
- Trace ID: filter by trace ID (for error correlation)
- Start: start of time range
- End: end of time range
- Limit: max logs to return (1-1000)
- Offset: pagination offset
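A sketch of a programmatic query; the path, parameter names (`levels`, `trace_id`, `start`, `end`, `limit`, `offset`), and auth scheme are all assumptions:

```typescript
// Hypothetical logs query: errors and warnings for one trace,
// newest 100 within an explicit time range.
const params = new URLSearchParams({
  levels: "error,warn",
  trace_id: "abc123",
  start: "2025-01-01T11:55:00Z",
  end: "2025-01-01T12:05:00Z",
  limit: "100",
  offset: "0",
});

const res = await fetch(`https://api.proliferate.example/api/v1/logs?${params}`, {
  headers: { Authorization: "Bearer pk_your_project_key" }, // auth scheme assumed
});
const logs = await res.json();
console.log(logs);
```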

