All Systems Operational

Compute capacity Operational
ap-northeast-1 Operational
ap-northeast-2 Operational
ap-south-1 Operational
ap-southeast-1 Operational
ap-southeast-2 Operational
ca-central-1 Operational
eu-central-1 Operational
eu-central-2 Operational
eu-north-1 Operational
eu-west-1 Operational
eu-west-2 Operational
eu-west-3 Operational
sa-east-1 Operational
us-east-1 Operational
us-east-2 Operational
us-west-1 Operational
Analytics Operational (99.96 % uptime over the past 90 days)
API Gateway Operational (99.82 % uptime over the past 90 days)
Auth Operational (99.49 % uptime over the past 90 days)
Connection Pooler Operational (100.0 % uptime over the past 90 days)
Dashboard Operational (99.84 % uptime over the past 90 days)
Database Operational (100.0 % uptime over the past 90 days)
Edge Functions Operational (99.82 % uptime over the past 90 days)
Management API Operational (99.82 % uptime over the past 90 days)
Realtime Operational (99.82 % uptime over the past 90 days)
Storage Operational (99.82 % uptime over the past 90 days)
REST API Latency charts: Singapore, North Virginia, Frankfurt (live chart data not captured)
Jan 12, 2026

No incidents reported today.

Jan 11, 2026

No incidents reported.

Jan 10, 2026

No incidents reported.

Jan 9, 2026

No incidents reported.

Jan 8, 2026

No incidents reported.

Jan 7, 2026
Resolved - esm.sh has resolved its issue. All function deploy actions should now be working.
Jan 7, 19:23 UTC
Monitoring - To address function deploy issues, replace esm.sh imports with npm: or jsr: specifiers. This workaround is confirmed to work.
Jan 7, 18:54 UTC
Identified - esm.sh is experiencing issues. Affected users can replace esm.sh imports with npm: or jsr: specifiers to get function deploys working.
Jan 7, 18:40 UTC
Investigating - We are seeing increased timeouts when deploying edge functions and are currently investigating. Already deployed functions are not affected.
Jan 7, 18:14 UTC
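The import swap suggested in the updates above might look like this in a Deno-based Edge Function. This is a minimal sketch: `dayjs` is just an illustrative package, not one named in the incident, and the version pins are examples.

```typescript
// Before: resolving the dependency through esm.sh (affected by the incident)
// import dayjs from "https://esm.sh/dayjs@1.11.13";

// After: using Deno's built-in npm: specifier, which bypasses esm.sh entirely
import dayjs from "npm:dayjs@1.11.13";

// Packages published to JSR can use the jsr: specifier the same way, e.g.:
// import { assert } from "jsr:@std/assert";

// Minimal handler so the function still deploys and responds as before
Deno.serve(() => new Response(dayjs().toISOString()));
```

Only the import specifier changes; the function body and deploy workflow stay the same, and the npm:/jsr: form keeps working after the esm.sh incident is resolved.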
Jan 6, 2026
Resolved - This incident has been resolved.
Jan 6, 18:50 UTC
Monitoring - Log ingestion has been fully restored across all services. We will continue to monitor the system to ensure stability.
Jan 6, 18:23 UTC
Update - We are continuing to work on restoring full log ingestion for Postgres, PostgREST, and Auth services.
Jan 6, 17:59 UTC
Update - We have stabilized the ingestion servers. Error rates are back to normal and the Logflare dashboard is functioning.

Our team is now working to restore log ingestion for Postgres, PostgREST, and Auth services.

Jan 6, 16:56 UTC
Update - We continue to see degraded log ingestion across all regions. The Logflare dashboard may also be temporarily unavailable while we take steps to mitigate the issue and restore normal service.

Our engineering team is actively working on a resolution. We will provide further updates as we make progress.

Jan 6, 16:30 UTC
Identified - We are seeing degradation in log ingestion in all regions; our engineering team is investigating this issue.
Jan 6, 15:31 UTC
Jan 5, 2026

No incidents reported.

Jan 4, 2026

No incidents reported.

Jan 3, 2026

No incidents reported.

Jan 2, 2026

No incidents reported.

Jan 1, 2026

No incidents reported.

Dec 31, 2025

No incidents reported.

Dec 30, 2025

No incidents reported.

Dec 29, 2025

No incidents reported.