Update - Affected projects are continuing to recover. We are monitoring the recovery of the AZ and will provide regular updates until full recovery.
May 08, 2026 - 10:52 UTC
Update - Affected projects are continuing to recover. We are monitoring the recovery of the AZ and will provide regular updates until full recovery.
May 08, 2026 - 07:59 UTC
Update - Affected projects are continuing to recover. We are monitoring the recovery of the AZ and will provide regular updates until full recovery.
May 08, 2026 - 06:44 UTC
Update - We're seeing a number of affected projects starting to recover. We're still working with our upstream providers on a full recovery of all affected projects.
May 08, 2026 - 05:31 UTC
Update - We are continuing to work on a fix for this issue.
May 08, 2026 - 05:30 UTC
Update - We are continuing to work on a fix for this issue.
May 08, 2026 - 04:21 UTC
Update - We've disabled pausing projects in us-east-1. We recommend not changing project configurations at this time. Users cannot access their databases or run any authentication-dependent services. We are continuing to monitor this situation with our upstream provider.
May 08, 2026 - 03:11 UTC
Update - We have disabled project creation in us-east-1. We will provide another update in an hour, if not sooner.
May 08, 2026 - 02:30 UTC
Update - We are waiting on a resolution from our upstream provider. We will provide an update in an hour.
May 08, 2026 - 02:05 UTC
Identified - We have identified the root cause of the connection issues as an outage with an upstream provider affecting a single availability zone (us-east-1a). All affected projects are located in this zone.
May 08, 2026 - 01:06 UTC
Investigating - Increased connection times for Supavisor since about 00:10 UTC are affecting a limited number of projects in us-east-1.
May 08, 2026 - 00:50 UTC
Compute capacity Operational
ap-northeast-1 Operational
ap-northeast-2 Operational
ap-south-1 Operational
ap-southeast-1 Operational
ap-southeast-2 Operational
ca-central-1 Operational
eu-central-1 Operational
eu-central-2 Operational
eu-north-1 Operational
eu-west-1 Operational
eu-west-2 Operational
eu-west-3 Operational
sa-east-1 Operational
us-east-1 Operational
us-east-2 Operational
us-west-1 Operational
Analytics Operational (99.78 % uptime, past 90 days)
API Gateway Operational (99.73 % uptime, past 90 days)
Auth Operational (99.66 % uptime, past 90 days)
Connection Pooler Operational (99.81 % uptime, past 90 days)
Dashboard Operational (99.66 % uptime, past 90 days)
Database Operational (99.62 % uptime, past 90 days)
Edge Functions Operational (99.68 % uptime, past 90 days)
Management API Operational (99.75 % uptime, past 90 days)
Realtime Operational (99.75 % uptime, past 90 days)
Storage Operational (99.75 % uptime, past 90 days)

Scheduled Maintenance

Shared Pooler scheduled maintenance in ap-southeast-1 May 13, 2026 07:00-09:00 UTC

There will be scheduled maintenance on May 13 from 07:00 to 09:00 UTC (15:00-17:00 SGT) on the Shared Pooler (V1) for the ap-southeast-1 region.

This maintenance upgrades the Shared Pooler to a new version (V2) that provides better scalability and uptime.

The Shared Pooler will be unavailable during this time period for anyone connecting to their projects using it. Your projects remain available and connections via the Dedicated Pooler and Direct Connections will continue to work.

How to determine whether you are affected:

Only projects connecting to V1 of the Shared Pooler are affected. If your connection string uses the host aws-0-ap-southeast-1.pooler.supabase.com, you can expect to see errors during the maintenance.

Connection strings using the host aws-1-ap-southeast-1.pooler.supabase.com are not affected by this maintenance.
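For those who want to check programmatically, the host portion of the connection string is enough to tell the two pooler versions apart. A minimal shell sketch, using a placeholder connection string rather than real credentials:

```shell
# Placeholder connection string; substitute your project's own.
conn="postgresql://user:pass@aws-0-ap-southeast-1.pooler.supabase.com:6543/postgres"

# Hosts beginning with aws-0- are the V1 Shared Pooler (affected by
# this maintenance); hosts beginning with aws-1- are V2 (not affected).
case "$conn" in
  *aws-0-ap-southeast-1.pooler.supabase.com*) verdict="affected (V1 Shared Pooler)" ;;
  *aws-1-ap-southeast-1.pooler.supabase.com*) verdict="not affected (V2 Shared Pooler)" ;;
  *) verdict="not using the ap-southeast-1 Shared Pooler" ;;
esac
echo "$verdict"
```

The same check works for the sa-east-1 maintenance below by swapping the region in the hostnames.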

Please follow the status page for updates.

Posted on May 07, 2026 - 22:13 UTC

Shared Pooler scheduled maintenance in sa-east-1 May 14, 2026 18:00-20:00 UTC

There will be scheduled maintenance on May 14 from 18:00 to 20:00 UTC (15:00-17:00 BRT) on the Shared Pooler (V1) for the sa-east-1 region.

This maintenance upgrades the Shared Pooler to a new version (V2) that provides better scalability and uptime.

The Shared Pooler will be unavailable during this time period for anyone connecting to their projects using it. Your projects remain available and connections via the Dedicated Pooler and Direct Connections will continue to work.

How to determine whether you are affected:

Only projects connecting to V1 of the Shared Pooler are affected. If your connection string uses the host aws-0-sa-east-1.pooler.supabase.com, you can expect to see errors during the maintenance.

Connection strings using the host aws-1-sa-east-1.pooler.supabase.com are not affected by this maintenance.

Please follow the status page for updates.

Posted on May 07, 2026 - 22:16 UTC
REST API Latency charts: Singapore, North Virginia, Frankfurt
May 8, 2026

Unresolved incident: Network connectivity in us-east-1-az4.

May 7, 2026

No incidents reported.

May 6, 2026

No incidents reported.

May 5, 2026
Completed - The scheduled maintenance has now been completed.
May 5, 14:56 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 5, 14:00 UTC
Update - We have postponed this maintenance to Tuesday, May 5, 14:00-15:00 UTC.
Apr 29, 13:27 UTC
Scheduled - We will be carrying out scheduled database migrations on our Management API on Wednesday, April 29, 2026 between 14:00 and 15:00 UTC.

Existing customer projects will not be affected and will continue to operate normally. During the maintenance period, all write activity from our API, Dashboard and CLI, such as project creation or configuration updates, will be delayed and may time out. You can retry on timeout. Read activity across our Management API will remain unaffected.

Apr 28, 09:27 UTC
May 4, 2026
Completed - The scheduled maintenance has been completed.
May 4, 05:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 4, 04:00 UTC
Scheduled - Upstream provider regular scheduled maintenance on instances in us-east-1 and eu-central-1. We do not expect downtime or impact to Edge Functions invocations or deploys during this window. We will monitor the maintenance and provide updates on its progress.
May 1, 17:18 UTC
May 3, 2026
Resolved - From 15:30 to 07:00 UTC, a Supavisor node serving projects in eu-central-1 experienced connectivity issues and was unable to connect to customer databases, causing checkout timeouts and "auth query failed" errors.

This has been resolved and Supavisor is fully available.

May 3, 22:30 UTC
May 2, 2026

No incidents reported.

May 1, 2026

No incidents reported.

Apr 30, 2026

No incidents reported.

Apr 29, 2026

No incidents reported.

Apr 28, 2026
Resolved - This is now resolved. Thank you for your patience while we worked to resolve this issue.
Apr 28, 23:31 UTC
Monitoring - The exceptions fix has been rolled out, and we are monitoring for continued stability.
Apr 28, 23:03 UTC
Update - We are continuing to work on a fix. We’ll share further updates as progress is made.
Apr 28, 22:24 UTC
Update - We are continuing to work on a fix for this issue.
Apr 28, 21:40 UTC
Identified - The issue has been identified and we are working on a fix.
Apr 28, 21:36 UTC
Investigating - We are investigating 403 errors for PostgREST requests across multiple regions.
Apr 28, 21:26 UTC
Apr 27, 2026
Resolved - All projects are patched and this is now resolved. Thank you for your patience while we ensured all projects were updated.
Apr 27, 20:05 UTC
Monitoring - The fix has been rolled out, and we are monitoring for continued stability. Any remaining impacted projects are being manually patched.
Apr 27, 17:31 UTC
Update - The permanent fix has now been rolled out across the fleet. We are continuing to manually patch any remaining impacted projects to restore service.
Apr 27, 16:05 UTC
Identified - We’ve identified additional impacted projects and are currently working to manually patch and restore them.
We are also continuing to roll out a fix across the fleet to prevent further impact.

Apr 27, 14:21 UTC
Monitoring - All impacted projects with open support tickets have now been manually patched and restored.
We are currently rolling out a fix across the fleet to prevent further impact.
We will continue to monitor progress and provide updates.

Apr 27, 13:12 UTC
Update - Our team is actively working on a fix and is manually patching impacted projects to restore service as quickly as possible.

If you are running a version of Postgres older than 15.1.1.57, please avoid restarting your project until this issue is resolved.

Apr 27, 11:40 UTC
Update - We’ve identified that a recent change to the Postgres systemd service has a dependency on a prestart script.
This script is not present on some older projects, and when those projects restart, Postgres may fail to start successfully.
This can result in projects remaining offline.
We are working on a fix to restore service.

If you are running a version of Postgres older than 15.1.1.57, please avoid restarting your project until this issue is resolved.
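The failure mode described above can be pictured with a generic systemd unit sketch. This is purely illustrative; the script path and unit contents are hypothetical, not Supabase's actual service file:

```ini
[Service]
# If the script named in ExecStartPre= is missing from the host,
# systemd fails the unit before ExecStart= ever runs, so Postgres
# stays down after a restart.
ExecStartPre=/usr/local/bin/postgres-prestart.sh
ExecStart=/usr/lib/postgresql/bin/postgres -D /var/lib/postgresql/data
```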

Apr 27, 11:11 UTC
Identified - We’ve identified that a recent change to the Postgres systemd service has a dependency on a prestart script.
This script is not present on some older projects, and when those projects restart, Postgres may fail to start successfully.
This can result in projects remaining offline. We are working on a fix to restore service.

Apr 27, 10:54 UTC
Resolved - This incident has been resolved.
Apr 27, 19:04 UTC
Monitoring - We believe the issue has been resolved for all affected users. The team will keep an eye on these error rates to catch any cases that didn't originally surface and to ensure no new issues arise.

We appreciate your patience as we worked through this issue.

Apr 27, 18:34 UTC
Update - The team is continuing their mitigation efforts. We've fixed and restarted most of the affected projects, and we're continuing to look for any other affected projects to make sure they are all fixed.
Apr 27, 16:22 UTC
Update - The team is still working to bring these projects back online and has a fix under way.

Many users can also resolve this on their own via a project restart. This can be performed from the dashboard for your own projects at any time. But for those who are still seeing issues after a restart, we will be pushing a fix soon.

Apr 27, 15:00 UTC
Update - We have identified this issue across multiple regions, not just eu-west-3 as originally suspected. We are expanding the scope of efforts to bring affected projects back online.

Users can also resolve this, in most cases, on their own via a project restart. This can be performed from the dashboard for your own projects at any time.

Apr 27, 13:38 UTC
Update - The team is continuing to work through affected projects; however, a project restart is also effective. This can be performed from the dashboard for your own projects at any time.
Apr 27, 13:31 UTC
Identified - We are seeing an increase in projects unavailable in eu-west-3 following an upstream issue with EC2 instances in the region. The team is working on restoring access to these projects.
Apr 27, 12:39 UTC
Apr 26, 2026
Resolved - A modification to existing projects was deployed due to the recently communicated Data API and pg_graphql changes. https://github.com/orgs/supabase/discussions/45329

This modification was supposed to disable pg_graphql for projects that had not seen usage in the last 30 days. Due to a misconfiguration, this targeted more projects than intended.

We are deeply sorry for the inconvenience and have since addressed the issue.

For those using pg_graphql actively, you may have seen "pg_graphql extension is not enabled" in the logs for your project, and you can safely re-enable it.

If you have been impacted, or you are unable to re-enable pg_graphql, please contact success@supabase.io

Apr 26, 17:52 UTC
Apr 25, 2026
Resolved - This issue has now been resolved and projects have returned to normal.
Apr 25, 05:47 UTC
Monitoring - Projects have recovered and capacity has been restored. All previously affected regions have been re-enabled, and we will continue to monitor to ensure stability.
Apr 25, 05:14 UTC
Update - Capacity has been freed, and the regions have been re-enabled. The team is currently working on resolving any failed project starts, resizes, restarts, or other configuration changes during this event.
Apr 25, 04:38 UTC
Update - US-East-2 and AP-Northeast-1 have been disabled for project creation and configuration change actions, and the team is currently working to free additional capacity in these regions.
Apr 25, 04:26 UTC
Identified - We are seeing failures of new project creations, resize requests, and project restarts due to capacity issues in us-east-1. We are disabling project creation, project resize actions, and project restarts in the affected regions.

We have already reached out to our provider for additional capacity, and will update here as we have additional information.

Apr 25, 04:16 UTC
Apr 24, 2026
Resolved - The issue has been resolved and DNS resolution is now operating normally.
Apr 24, 14:36 UTC
Monitoring - A fix has been implemented, and we are seeing improvements in DNS resolution.
We’re closely monitoring to ensure the issue is fully resolved and services remain stable.

Apr 24, 12:43 UTC
Identified - We are currently experiencing an issue propagating new DNS records for our zone. We are working closely with our upstream network provider's technical team to implement a fix and will share updates as we learn more.
Apr 24, 11:56 UTC
Investigating - We are currently investigating reports that newly created projects may be unreachable. Initial findings indicate DNS resolution failures. We will provide an update as soon as more information is available.
Apr 24, 11:21 UTC
Resolved - Incident Summary
A health check designed to prevent Out-of-Memory (OOM) conditions began closing some incoming connections, which led to 503 errors for a subset of users. These errors were initially surfaced under a generic SUPABASE_EDGE_RUNTIME_ERROR code.

Timeline & Actions
Sunday, Apr 26: Issue detected as user reports of 503 errors increased. Investigation into the root cause began.
Monday, Apr 27: Introduced a more accurate error code (SUPABASE_EDGE_RUNTIME_SERVICE_DEGRADED) and implemented infrastructure changes to mitigate impact.
Tuesday, Apr 28: Adjusted the conditions for returning 503 responses to be less aggressive. Prepared a "retry-on-degraded" mechanism (not yet deployed).

Current Status
Mitigations are in place and improvements to error handling are ongoing. Further resilience enhancements will be deployed shortly.

Apr 24, 10:00 UTC