All Systems Operational

Compute capacity Operational
ap-northeast-1 Operational
ap-northeast-2 Operational
ap-south-1 Operational
ap-southeast-1 Operational
ap-southeast-2 Operational
ca-central-1 Operational
eu-central-1 Operational
eu-central-2 Operational
eu-north-1 Operational
eu-west-1 Operational
eu-west-2 Operational
eu-west-3 Operational
sa-east-1 Operational
us-east-1 Operational
us-east-2 Operational
us-west-1 Operational
Analytics Operational (99.76 % uptime over the past 90 days)
API Gateway Operational (99.81 % uptime over the past 90 days)
Auth Operational (99.47 % uptime over the past 90 days)
Connection Pooler Operational (99.82 % uptime over the past 90 days)
Dashboard Operational (99.79 % uptime over the past 90 days)
Database Operational (99.82 % uptime over the past 90 days)
Edge Functions Operational (99.8 % uptime over the past 90 days)
Management API Operational (99.77 % uptime over the past 90 days)
Realtime Operational (99.81 % uptime over the past 90 days)
Storage Operational (99.81 % uptime over the past 90 days)
REST API Latency charts: Singapore, North Virginia, Frankfurt
Feb 21, 2026

No incidents reported today.

Feb 20, 2026
Resolved - Services have remained stable, and we are confident the issue is now resolved.

The impact of this event was limited to log visibility and log retention. Some logs between 20:20 UTC and 23:23 UTC on Friday, Feb 20, 2026 may not be available.

Feb 20, 23:56 UTC
Monitoring - The configuration update has brought error rates and stability back to normal; all logging and observability data should now be accessible.

The impact of this event was limited to log visibility and log retention. Some logs between 20:20 UTC and 23:23 UTC on Friday, Feb 20, 2026 may not be available.

We will continue to monitor to ensure things continue to look good.

Feb 20, 23:30 UTC
Update - The team is currently pushing a configuration change we hope will finish stabilizing the analytics services. We will continue to update as we have more information.

Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the information mentioned above.

Feb 20, 23:16 UTC
Update - The analytics service continues to be periodically degraded, which means some users may still intermittently have trouble viewing logging and observability information. We have added additional resources to the logging service to increase stability.

The team is continuing their work to stabilize all analytics functionality.

Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the information mentioned above.

Feb 20, 22:23 UTC
Update - Logging and observability services are continuing to stabilize, but some users may still experience issues. The team is continuing to work on full stabilization efforts.

Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the information mentioned above.

Feb 20, 22:02 UTC
Update - Logging and observability services are continuing to stabilize, but some users may still experience issues. The team is continuing to work on full stabilization efforts.

Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the information mentioned above.

Feb 20, 21:42 UTC
Update - We've implemented a fix, and we are seeing the affected services begin to stabilize, but access to logs, observability metrics, and edge function invocation information may still be spotty.

The team is continuing to work on full stabilization efforts.

Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the information mentioned above.

Feb 20, 21:24 UTC
Identified - We have identified an issue resulting in missing log, observability, and edge function information.

Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the information mentioned above.

We will have another update within 20 minutes.

Feb 20, 21:02 UTC
Update - We are investigating errors across our logs and observability services and will provide an update soon.
Feb 20, 20:41 UTC
Investigating - We are investigating errors with Edge Functions and will provide an update soon.
Feb 20, 20:37 UTC
Feb 19, 2026

No incidents reported.

Feb 18, 2026

No incidents reported.

Feb 17, 2026
Resolved - Performance has returned to normal levels. This incident has been resolved.
Feb 17, 00:29 UTC
Monitoring - We've identified degraded performance in a Supavisor cluster that resulted in elevated connection latency and increased p99 response times. The problematic node has been replaced, and performance is returning to normal levels. We will continue to monitor.
Feb 17, 00:16 UTC
Investigating - Starting from Feb 16, 22:55 UTC, degraded performance in one of our Supavisor clusters resulted in elevated connection latency and increased query p99 response times.
Feb 16, 23:48 UTC
Feb 16, 2026
Resolved - Degraded performance in a Supavisor cluster resulted in elevated connection latency and increased p99 response times between 2026-02-15 22:00 and 2026-02-16 01:00. The problematic node was replaced and performance returned to normal levels.
Feb 16, 01:00 UTC
Feb 15, 2026

No incidents reported.

Feb 14, 2026

No incidents reported.

Feb 13, 2026
Postmortem - Read details
Feb 14, 03:27 UTC
Resolved - Service has been fully restored. All impacted jobs have been requeued and are currently processing normally. We will be publishing a public post-mortem with additional details about this incident.
Feb 13, 01:53 UTC
Monitoring - The revert of the change helped, and most of the metrics are back to pre-incident levels. We are requeuing failed jobs and monitoring to make sure the issue doesn't come back.
Feb 13, 01:26 UTC
Identified - We identified an internal networking configuration change that may have caused the incident. We have since reverted that change, and services appear to be recovering.
Feb 13, 01:04 UTC
Update - We are still investigating the root cause of this incident. The us-east-2 region isn't receiving any network traffic at this point. We are also seeing some API request errors in other US regions, but not at the levels seen in us-east-2.
Feb 12, 23:58 UTC
Update - We continue to see increased levels of 500 errors across US-West and US-East regions. Our engineering team is investigating the issue.
Feb 12, 22:57 UTC
Update - We have identified the issue as a problem in US-West, with some impact in US-East; the impact appears to be primarily on reads rather than writes.
Feb 12, 22:37 UTC
Investigating - We have identified increasing 500 errors in some US regions and are actively investigating the cause.
Feb 12, 21:32 UTC
Feb 12, 2026
Feb 11, 2026

No incidents reported.

Feb 10, 2026
Resolved - This issue is now resolved.
Feb 10, 21:28 UTC
Update - We continue to work with network vendors to mitigate this issue. In the interim, using a VPN will give you access to your Supabase project.
Feb 10, 15:24 UTC
Update - We are actively working with network vendors to mitigate this issue.
Feb 4, 18:08 UTC
Identified - We have noticed increased connection failures to supabase.co domains from connections originating in Yemen.

Projects are up and running; this only impacts connections from this region. We are working to resolve this issue with the appropriate parties and will provide an update soon.

We have specifically received reports of connection issues via these ISPs:

Yemen Mobile
Sabafon
Y-Telecom
Spacetel

Feb 3, 21:38 UTC
Feb 9, 2026
Resolved - This incident has been resolved.
Feb 9, 19:34 UTC
Update - The team noticed that some connection pools had workers stuck as a consequence of the previous issue, which could cause query failures. The stuck workers have now been restarted.
Feb 9, 18:37 UTC
Monitoring - We've removed the problematic cluster node, and latency has returned to normal levels. We are now monitoring.
Feb 9, 17:56 UTC
Investigating - We’re investigating high latency in us-west-1 affecting some connections to databases via our shared connection pooler.
Feb 9, 17:39 UTC
Feb 8, 2026

No incidents reported.

Feb 7, 2026

No incidents reported.