
Why Your Laravel Jobs Might Retry Forever After an OOM

TL;DR: Laravel's queue worker retries OOM-killed jobs forever because the $maxExceptions counter is only incremented inside the catch block, which never executes on a fatal error. Tested end-to-end on Laravel 13.4.0.

In December 2019, I deployed an app on Laravel Vapor and received a bill for approximately $140 USD after one week. I contacted support. The response was that the issue was on my end.

In August 2023, it happened again. $218.90 USD in 7 days. The support emails referenced queued jobs as the cause, but the exact root cause was never identified. Same response from support.

Both times I accepted it and moved on. I didn’t have the tools or the time to investigate what actually went wrong.

In March 2026, while reviewing the Laravel org repos and codebase, I came across an issue that described a bug that could contribute to exactly this kind of problem. I can’t say for certain it caused my specific billing incidents, but the symptoms match: runaway queued jobs with no circuit breaker.

The Bug

Issue #58207 on the Laravel framework repository, filed in December 2025, describes the problem: jobs with $maxExceptions retry infinitely when killed by an out-of-memory error.

The reason is straightforward. Laravel’s exception counter for maxExceptions is only incremented inside markJobAsFailedIfWillExceedMaxExceptions(), which is called from handleJobException(), which is called from the catch (Throwable) block in Worker::process().

When PHP hits its memory limit, the process is killed by a fatal error. Fatal errors are not catchable exceptions. The catch block never executes. The counter is never incremented. The job goes back to the queue with a counter of zero. The next worker picks it up and the cycle repeats.

Worker starts -> fire() -> OOM -> process killed
Worker restarts -> same job -> counter still 0 -> fire() -> OOM
Worker restarts -> counter still 0 -> fire() -> OOM
... forever

On serverless platforms where you pay per invocation, each retry can cost money. As far as I’m aware, there is no circuit breaker for this scenario.

The Evidence

I verified this against the actual source code in Laravel Framework 13.4.0:

  • markJobAsFailedIfWillExceedMaxExceptions() is the only place the job-exceptions:{uuid} counter is incremented (Worker.php, line 636)
  • It is called from handleJobException() (line 538)
  • handleJobException() is called from the catch block (line 505)
  • There is no pre-fire check on the exception counter anywhere in the codebase

I then ran a subprocess test that allocates memory until it hits a 32M memory limit and OOMs. The result:

SHUTDOWN FUNCTION: OOM detected
CATCH BLOCK EXECUTED: NO

PHP’s register_shutdown_function runs after OOM. The catch block does not.
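The subprocess test can be sketched as a single self-contained script. This is an illustrative reconstruction, not the repo's reproduce_oom_infinite_retry.php: it writes a small child script to a temp file, runs it under a 32M memory_limit, and reports which handler ran.

```php
<?php
// Child script: allocates until PHP kills it with a fatal OOM error.
// The shutdown function still runs; the catch block never does.
$child = <<<'PHP'
<?php
register_shutdown_function(function () {
    $e = error_get_last();
    if ($e !== null && str_contains($e['message'], 'Allowed memory size')) {
        fwrite(STDOUT, "SHUTDOWN FUNCTION: OOM detected\n");
    }
});
try {
    $chunks = [];
    while (true) {
        $chunks[] = str_repeat('x', 1 << 20); // 1 MB per iteration
    }
} catch (\Throwable $t) {
    // Never reached: an OOM is a fatal error, not a catchable exception.
    fwrite(STDOUT, "CATCH BLOCK EXECUTED: YES\n");
}
PHP;

$file = tempnam(sys_get_temp_dir(), 'oom');
file_put_contents($file, $child);

// Run the child with a 32M limit; route the fatal error message to stderr.
$out = (string) shell_exec(
    PHP_BINARY . ' -d memory_limit=32M -d display_errors=stderr '
    . escapeshellarg($file) . ' 2>/dev/null'
);
unlink($file);

echo $out;
echo str_contains($out, 'CATCH BLOCK EXECUTED: YES')
    ? "CATCH BLOCK EXECUTED: YES\n"
    : "CATCH BLOCK EXECUTED: NO\n";
```

One subtlety makes the shutdown-function trick reliable: the fatal error fires when a single allocation would exceed the limit, so actual usage stays just under it, leaving enough headroom for error_get_last() and a small write.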

End-to-End Reproduction

To prove this isn’t theoretical, I built a full end-to-end test:

  1. Created a fresh Laravel 13.4.0 app with SQLite database queue
  2. Dispatched an OomJob with $tries = 0 and $maxExceptions = 3
  3. Ran php artisan queue:work --once with a 64M memory limit, 10 times
  4. Checked the job state and exception counter after each run
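For reference, a job matching step 2 might look like this. This is an illustrative sketch, not the exact OomJob from the oom-retry-test/ app, and it depends on the Laravel framework being present:

```php
<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;

class OomJob implements ShouldQueue
{
    use Queueable;

    public $tries = 0;          // 0 = unlimited attempts; rely on maxExceptions
    public $maxExceptions = 3;  // should fail the job after 3 exceptions

    public function handle(): void
    {
        $chunks = [];
        while (true) {
            $chunks[] = str_repeat('x', 1 << 20); // allocate until OOM
        }
    }
}
```

With $tries set to 0, the exception counter is the only circuit breaker, which is exactly the guard that the OOM bypasses.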

Stock Laravel 13.4.0

Run | Pending | Failed | Exception Counter | DB Attempts
----|---------|--------|-------------------|------------
 1  |    1    |   0    | not set           |  1
 2  |    1    |   0    | not set           |  2
 3  |    1    |   0    | not set           |  3
 …  |    1    |   0    | not set           |  …
 10 |    1    |   0    | not set           | 10

The job retried 10 times. The exception counter was never set. The job would continue retrying forever.

With Fix Applied

Run | Pending | Failed | Exception Counter | DB Attempts
----|---------|--------|-------------------|------------
 1  |    1    |   0    | 1                 | 1
 2  |    1    |   0    | 2                 | 2
 3  |    1    |   0    | 3                 | 3
 4  |    0    |   1    | cleared           | n/a

The job was correctly failed on the fourth run with a MaxExceptionsExceededException, and the workers were freed.

The Fix

PR #59329 uses an optimistic increment pattern:

  1. Increment the exception counter before $job->fire()
  2. Decrement it after successful completion
  3. Add a pre-fire check that fails the job if the counter already meets maxExceptions

If the worker dies during fire(), the increment persists in cache. On the next pickup, the pre-fire check catches it.
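The pattern can be modeled without the framework. In this simplified sketch, an array stands in for Laravel's cache repository and an uncaught exception stands in for a worker process killed mid-fire():

```php
<?php
// Framework-free model of the optimistic-increment pattern (assumption:
// simplified sketch, not the code from PR #59329).

class Worker
{
    /** @var array<string, int> stand-in for the job-exceptions:{uuid} cache */
    public array $cache = [];

    public function process(string $uuid, int $maxExceptions, callable $fire): string
    {
        $key = "job-exceptions:$uuid";

        // Pre-fire check: a worker that died mid-job left its increment behind.
        if (($this->cache[$key] ?? 0) >= $maxExceptions) {
            unset($this->cache[$key]);
            return 'MaxExceptionsExceededException';
        }

        // Optimistic increment BEFORE fire(), so a fatal error cannot skip it.
        $this->cache[$key] = ($this->cache[$key] ?? 0) + 1;

        $fire();

        // Decrement after success, so healthy jobs never reach the limit.
        $this->cache[$key]--;

        return 'completed';
    }
}

$worker = new Worker();
$oom = function (): void {
    throw new RuntimeException('stand-in for the OOM kill');
};

$results = [];
for ($run = 1; $run <= 4; $run++) {
    try {
        $results[] = $worker->process('abc-123', 3, $oom);
    } catch (RuntimeException $e) {
        $results[] = 'worker died'; // the increment persists in the cache
    }
}

print_r($results);
// Runs 1-3 die with the counter at 1, 2, 3; run 4 fails the job up front,
// matching the "With Fix Applied" table above.
```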

This is the same approach that community members independently built as a job middleware and validated in production, as discussed in the issue thread.

A companion fix (PR #59330) addresses the upstream cause: Pipeline retains references to completed job data between jobs, inflating worker memory and increasing the likelihood of OOM in the first place. Together, one reduces the chance of OOM and the other ensures graceful failure when it does happen.
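The retention effect is easy to demonstrate in isolation. This is a generic sketch, not Illuminate\Pipeline's actual code; the $release flag plays the role of PR #59415's nulling of $passable:

```php
<?php
// Generic illustration of how holding a reference to the completed job's
// payload keeps its memory alive between jobs (assumption: simplified model).

class RetainingPipeline
{
    public mixed $passable = null;

    public function send(mixed $passable): static
    {
        $this->passable = $passable;
        return $this;
    }

    public function then(callable $destination, bool $release = false): mixed
    {
        $result = $destination($this->passable);
        if ($release) {
            $this->passable = null; // drop the reference once the job is done
        }
        return $result;
    }
}

$pipeline = new RetainingPipeline();

// Without release: the ~20 MB payload stays referenced after completion.
$pipeline->send(str_repeat('x', 20 * (1 << 20)))->then(fn ($p) => strlen($p));
$retained = memory_get_usage();

// With release: the reference is dropped and the payload is freed.
$pipeline->send(str_repeat('x', 20 * (1 << 20)))->then(fn ($p) => strlen($p), release: true);
$released = memory_get_usage();

echo $retained > $released
    ? "releasing frees the payload\n"
    : "no difference\n";
```

In a long-lived worker, that retained payload raises the baseline for the next job, which is how the Pipeline behavior increases the likelihood of hitting the memory limit in the first place.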

A Note on Responsibility

No framework can guardrail against every possible runaway job caused by application code. There will always be edge cases where a developer’s code consumes more resources than expected. That’s the nature of software.

But the framework, hosting platform, and application code should all have seatbelts in place to reduce the likelihood of infinite retries in serverless environments. The maxExceptions feature exists specifically for this purpose. It just doesn’t work when the failure mode is OOM, because of where the counter lives in the code. This fix doesn’t solve every possible cause of runaway bills, but it closes one gap that the framework can reasonably address.

Current Status

The issue was filed in December 2025. It has 31 comments from multiple production users discussing the bug and independently developing workarounds.

PR #59329 was closed with the reason: “To preserve our ability to adequately maintain the framework, we need to be very careful regarding the amount of code we include.”

PR #59330 (the companion Pipeline memory fix) was closed as a breaking change, because calls to getResolvedJob() would return null after fire(). A follow-up, PR #59415, removed the breaking change by nulling only $passable and $pipes. It was closed with no maintainer response.

The bug still exists in Laravel Framework 13.4.0.

Reproducibility

All benchmark scripts, reproduction code, and the end-to-end test app are publicly available:

github.com/JoshSalway/laravel-memory-benchmarks

The repository includes:

  • reproduce_oom_infinite_retry.php — three-step proof of the bug
  • oom-retry-test/ — full end-to-end test app with setup instructions
  • test_oom_128.php — proves OOM occurs with realistic workloads at 128M
  • test_oom_prevention.php — proves Pipeline memory retention triggers OOM
  • test_memory_real.php — real-world queue workload benchmarks
  • test_memory_usecases.php — PDF, image, CSV, and API sync benchmarks

Anyone can clone the repo, follow the instructions, and verify the results independently.

This fix has been tested end-to-end but has not been through a formal code review process. The reproduction scripts are publicly available for independent verification.