API is degraded
Resolved
Jul 27, 2025 at 03:07pm UTC
We are back!
Affected services
Updated
Jul 27, 2025 at 02:38pm UTC
We need to update our Kubernetes node pools to restore service. To speed this up, we are strategically reducing the number of pods Firecrawl uses, which reduces the number of nodes that need updating. This means Firecrawl will run at reduced capacity for a short period, but full functionality will be restored sooner. Thank you for your patience.
Affected services
Updated
Jul 27, 2025 at 12:56pm UTC
We are seeing some intermittent timeout failures. We are investigating.
Affected services
Updated
Jul 27, 2025 at 03:39am UTC
All endpoints have recovered. We are continuing to investigate the root cause.
Affected services
Updated
Jul 27, 2025 at 01:01am UTC
Scrape has recovered; other endpoints are still affected. We are observing large-scale dropouts in TCP traffic on long-lived connections between our workers and our Redis instance. The cause is not yet known.
Affected services
Updated
Jul 27, 2025 at 12:24am UTC
The issue has recurred.
Affected services
Updated
Jul 27, 2025 at 12:13am UTC
The issue is resolved. We are still monitoring the situation.
Affected services
Updated
Jul 27, 2025 at 12:06am UTC
The issue has recurred. We are still working to clear the queue and resolve it.
Affected services
Updated
Jul 26, 2025 at 11:44pm UTC
The issue is resolved.
Affected services
Created
Jul 26, 2025 at 11:29pm UTC
We are currently investigating an issue where some scrapes are timing out.
Affected services