Errors & Rate Limits
How to handle error responses and work within rate limits for reliable integration with the Galen API.
Error response format
All API errors return a consistent JSON structure. Every error response includes an error message and a machine-readable code.
| Field | Type | Description |
|---|---|---|
| error | string | Human-readable error message |
| code | string | Machine-readable error code for programmatic handling |
| detail | string? | Additional context (may be null) |
| degraded | boolean | True if the response contains partial results due to backing service unavailability |
| unavailable_sources | string[] | List of data sources that were unavailable (empty if all healthy) |
Graceful degradation: When a backing database (e.g., ChEMBL, cBioPortal) is temporarily unavailable, the API may return partial results with degraded: true and the affected sources listed in unavailable_sources. Check these fields to handle partial data gracefully.
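As a minimal sketch of handling partial data, the helper below inspects the degradation fields documented above on a parsed 200 response body. The function name and the warning behavior are illustrative choices, not part of the API:

```python
def check_degradation(payload):
    """Return the parsed response body, warning if backing sources were down.

    Assumes the degradation fields documented above: `degraded` (boolean)
    and `unavailable_sources` (list of source names).
    """
    if payload.get("degraded"):
        missing = ", ".join(payload.get("unavailable_sources", []))
        print(f"Warning: partial results; unavailable sources: {missing}")
    return payload

# Example with a hypothetical partial response body:
result = check_degradation({
    "degraded": True,
    "unavailable_sources": ["chembl"],
})
```

Depending on your use case, you might instead raise, log, or re-queue the request for later when `degraded` is true.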
HTTP status codes
The API uses standard HTTP status codes. Successful requests return 200. Errors return the appropriate 4xx or 5xx status code with a JSON error body.
400 Bad Request

```json
{
  "error": "Bad Request",
  "code": "bad_request",
  "detail": "Missing required parameter: entity_id",
  "degraded": false,
  "unavailable_sources": []
}
```

401 Unauthorized

```json
{
  "error": "Unauthorized",
  "code": "unauthorized",
  "detail": "API key required. Pass via X-API-Key header.",
  "degraded": false,
  "unavailable_sources": []
}
```

403 Forbidden

```json
{
  "error": "Forbidden",
  "code": "forbidden",
  "detail": "Endpoint requires higher tier. Your tier: free",
  "degraded": false,
  "unavailable_sources": []
}
```

404 Not Found

```json
{
  "error": "Not Found",
  "code": "not_found",
  "detail": "Entity not found: gene:FAKEGENE",
  "degraded": false,
  "unavailable_sources": []
}
```

429 Too Many Requests

```json
{
  "error": "Too Many Requests",
  "code": "rate_limited",
  "detail": "Rate limit exceeded: 30/min for free tier",
  "degraded": false,
  "unavailable_sources": []
}
```

500 Internal Server Error

```json
{
  "error": "Internal Server Error",
  "code": "internal_error",
  "detail": null,
  "degraded": false,
  "unavailable_sources": []
}
```

503 Service Unavailable

```json
{
  "error": "Service Unavailable",
  "code": "service_unavailable",
  "detail": "Database temporarily unavailable",
  "degraded": true,
  "unavailable_sources": [
    "chembl",
    "depmap"
  ]
}
```

Rate limits
Rate limits are enforced per API key using a sliding window. Limits are applied at three levels: per minute, per hour, and per day.
| Tier | Per minute | Per hour | Per day |
|---|---|---|---|
| Explorer (Free) | 30 | 500 | 1,000 |
| Researcher ($49/mo) | 60 | 1,000 | 5,000 |
| Pro ($199/mo) | 300 | 10,000 | 50,000 |
| Enterprise (From $999/mo) | Custom | Custom | Unlimited |
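Because the server enforces a sliding window, a client that throttles itself the same way will rarely see a 429. The sketch below is one way to mirror a single per-minute window client-side; the class name and structure are illustrative, and the 30/min figure comes from the Explorer tier above:

```python
import time
from collections import deque


class SlidingWindowThrottle:
    """Client-side throttle mirroring a sliding-window rate limit.

    Allows at most `limit` calls per `window` seconds; call `acquire()`
    before each request and it blocks just long enough to stay under.
    """

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.timestamps = deque()  # monotonic times of recent calls

    def acquire(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.limit:
            # Sleep until the oldest call exits the window, then discard it.
            time.sleep(self.window - (now - self.timestamps[0]))
            self.timestamps.popleft()
        self.timestamps.append(time.monotonic())


throttle = SlidingWindowThrottle(limit=30, window=60)  # Explorer tier: 30/min
```

A production client would track all three windows (minute, hour, day); this single-window version keeps the idea visible.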
Rate limit headers
Every authenticated response includes rate limit headers so you can track your usage:
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
| Retry-After | Seconds to wait before retrying (only on 429) |
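To make use of these headers, parse them from each response and slow down as `X-RateLimit-Remaining` approaches zero. A minimal sketch, assuming the header names in the table above (the dataclass and threshold are illustrative):

```python
from dataclasses import dataclass


@dataclass
class RateLimitInfo:
    limit: int      # X-RateLimit-Limit
    remaining: int  # X-RateLimit-Remaining
    reset: int      # X-RateLimit-Reset (Unix timestamp)


def parse_rate_limit(headers):
    """Build a RateLimitInfo from any mapping of header name to string value,
    e.g. response.headers from httpx."""
    return RateLimitInfo(
        limit=int(headers["X-RateLimit-Limit"]),
        remaining=int(headers["X-RateLimit-Remaining"]),
        reset=int(headers["X-RateLimit-Reset"]),
    )


info = parse_rate_limit({
    "X-RateLimit-Limit": "30",
    "X-RateLimit-Remaining": "4",
    "X-RateLimit-Reset": "1735689600",
})
if info.remaining < 5:  # illustrative threshold
    print(f"Close to the limit; window resets at {info.reset}")
```

Pausing proactively when `remaining` runs low is cheaper than waiting for a 429 and retrying.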
Retry strategy
When you receive a 429 or 503 response, use exponential backoff with jitter. Always respect the Retry-After header when present.
```python
import random
import time

import httpx

BASE_URL = "https://research.usegalen.com/api/v1"


def request_with_retry(path, api_key, max_retries=3):
    headers = {"X-API-Key": api_key}
    for attempt in range(max_retries):
        response = httpx.get(f"{BASE_URL}{path}", headers=headers)
        if response.status_code == 200:
            return response.json()
        if response.status_code == 429:
            # Respect Retry-After when present; otherwise back off exponentially.
            retry_after = int(response.headers.get("Retry-After", 2 ** attempt))
            print(f"Rate limited. Retrying in {retry_after}s...")
            time.sleep(retry_after + random.uniform(0, 1))  # add jitter
            continue
        if response.status_code >= 500:
            # Exponential backoff with jitter for server errors.
            time.sleep(2 ** attempt + random.uniform(0, 1))
            continue
        # 4xx client error -- don't retry
        error = response.json()
        # detail may be null, so fall back to the error message.
        raise RuntimeError(f"{error['code']}: {error.get('detail') or error['error']}")
    raise RuntimeError("Max retries exceeded")
```

Tier access levels
Each API key tier has access to a different set of endpoint groups. Attempting to access an endpoint above your tier returns a 403.
| Tier | Available Endpoints |
|---|---|
| Explorer | Knowledge Graph, All Databases, Hypotheses, System |
| Researcher | Everything in Explorer + Causal Inference (incl. do-calculus), Predictions |
| Pro | Everything in Researcher + Counterfactual, Dynamic Simulation, Patient Interpretation, Bulk Export |
| Enterprise | Everything + dedicated infrastructure, SLA, priority support |