Celery PENDING State Explained: Why It Doesn't Mean What You Think
You call AsyncResult(task_id).state and get back 'PENDING'. Your first instinct is that the task is sitting in the queue, waiting for a worker to pick it up. That instinct is wrong — and the mismatch between what PENDING sounds like and what it actually means is one of the most common sources of confusion in the Celery ecosystem.
PENDING doesn't mean "waiting in queue." It means "I have no information about this task." The Celery docs themselves admit the name is misleading — they say it would be "better named 'unknown.'" Once you internalize that distinction, a whole category of debugging headaches starts to make sense.
What PENDING Actually Means
There are three completely different scenarios that all produce the same 'PENDING' response from AsyncResult, and they range from benign to deeply broken.
1. The task hasn't been dispatched yet. You created a task ID or an AsyncResult object, but the task was never actually sent to the broker. Maybe you're building the ID ahead of time, or maybe your .delay() call silently failed because the broker was unreachable.
2. The task was dispatched, but no state events have been recorded. This is the most insidious case. The task might be sitting in a broker queue, actively executing on a worker, or even finished — but if Celery's event system isn't configured to report state transitions, the result backend has nothing to show you. So it falls back to the default: PENDING.
3. The task ID doesn't exist — and never did. This is the one that catches people off guard. You can ask Celery about a completely fabricated task ID, and it will calmly report 'PENDING' rather than raising an error.
Here's the proof:
```python
from myapp import app

# This returns PENDING — but the task doesn't exist
result = app.AsyncResult("00000000-0000-0000-0000-000000000000")
print(result.state)    # 'PENDING'
print(result.ready())  # False
```
No exception. No TaskNotFound error. Just a serene 'PENDING' for a task that was never created. If you're polling AsyncResult in a loop waiting for a task to finish, and the task ID has a typo in it, you'll wait forever.
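One way to defend against that infinite wait is to poll with a hard deadline. Here's a minimal sketch: `wait_for_task` takes any zero-argument callable that returns the current state string (for example, `lambda: app.AsyncResult(task_id).state` — that wiring is assumed, not shown), and raises instead of spinning forever on a task that may never have existed.

```python
import time

def wait_for_task(fetch_state, timeout=60.0, poll_interval=0.5):
    """Poll a task's state until it reaches a terminal state or a
    hard deadline passes. fetch_state is a zero-argument callable
    returning a Celery state string. Raising on timeout beats
    waiting forever on a PENDING task that may not exist."""
    terminal = {"SUCCESS", "FAILURE", "REVOKED"}
    deadline = time.monotonic() + timeout
    while True:
        state = fetch_state()
        if state in terminal:
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"task still {state!r} after {timeout}s; it may be "
                "queued, running, or may never have been sent"
            )
        time.sleep(poll_interval)
```

A timeout here doesn't tell you *why* the task hasn't finished, but it converts "silently wait forever" into an explicit failure you can alert on.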
Why Celery Works This Way
Celery uses what you might call an "optimistic" state model. Rather than tracking every task from creation to completion, it only records state when something happens — when a worker receives the task, starts executing it, finishes it, or fails. If none of those events have been recorded, Celery doesn't distinguish between "nothing has happened yet" and "nothing will ever happen" — both look like PENDING.
This is a deliberate design choice, not a bug. Celery was built for high-throughput environments where tracking every state transition for every task would impose significant overhead. The system optimizes for the common case (tasks complete successfully and quickly) at the cost of visibility into edge cases (tasks that are stuck, lost, or never existed).
The result backend — whether it's Redis, a database, or something else — only stores states that have been explicitly written to it. And by default, Celery doesn't write most of the intermediate states you'd want for monitoring. The task-sent event, the task-received event, the task-started event — all of them are turned off out of the box.
The Three Config Flags You Probably Haven't Set
This is where the practical fix lives. Celery ships with three configuration flags that control task state visibility, and all three default to False:
```python
# settings.py or celeryconfig.py
app.conf.update(
    worker_send_task_events=True,  # Default: False
    task_send_sent_event=True,     # Default: False
    task_track_started=True,       # Default: False
)
```
Each one unlocks a different piece of the puzzle:
worker_send_task_events is the master switch for Celery's event system. When this is off, workers don't emit any task lifecycle events at all — no task-received, no task-started, no task-succeeded. Any monitoring tool, whether it's Flower, Sluice, or a custom event consumer, is effectively blind without this flag. Turn it on first.
task_send_sent_event emits a task-sent event when the client publishes a task to the broker. This is the event that lets you distinguish between "this task was sent to the broker and is waiting for a worker" and "this task was never sent at all." Without it, there's a gap between calling .delay() and a worker picking up the task where the state is — you guessed it — PENDING.
task_track_started stores a STARTED state in the result backend when a worker begins executing a task. This is separate from the event system — it lets AsyncResult.state report 'STARTED' for tasks that are actively running. Without it, AsyncResult jumps straight from PENDING to SUCCESS or FAILURE, and you can never tell from the result backend alone whether a task is still queued or actively executing. (Note: the task-started event is emitted by the event system when worker_send_task_events is enabled — task_track_started controls the result backend state, not event emission.)
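To keep the state names straight when debugging, it can help to have a small lookup that spells out what each state actually implies. This is just an illustrative helper (the descriptions are mine, not Celery API), but it encodes the core lesson: PENDING is "no information," and STARTED only appears if you've enabled `task_track_started`.

```python
def describe_state(state: str) -> str:
    """Translate a Celery task state into what it actually tells you.
    PENDING is deliberately described as 'unknown', per the Celery docs."""
    descriptions = {
        "PENDING": "unknown: not yet received, result expired, or task never existed",
        "STARTED": "executing on a worker (only visible with task_track_started=True)",
        "RETRY":   "failed and scheduled for retry",
        "SUCCESS": "finished successfully",
        "FAILURE": "raised an exception",
        "REVOKED": "cancelled",
    }
    return descriptions.get(state, "unrecognized state")
```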
Celery's PENDING state is the default return value for any task ID — including task IDs that have never existed. It means "no information," not "waiting in queue." These three flags are the minimum configuration for making task state meaningful.
The Result Backend TTL Trap
Even after enabling those flags, there's another gotcha waiting for you. Celery's result_expires setting defaults to 86400 seconds — 24 hours. After that window, completed task results are purged from the backend, and any subsequent AsyncResult lookup on those task IDs returns... PENDING.
```python
from myapp import app

# This task completed successfully 25 hours ago
result = app.AsyncResult("a-real-task-that-definitely-ran")
print(result.state)   # 'PENDING' — result expired
print(result.result)  # None
```
The task ran. It succeeded. But because the result backend cleaned it up, Celery reports it as PENDING — indistinguishable from a task that never existed. If you have workflows that check on tasks hours or days after they ran, this will bite you.
You can increase result_expires or set it to None to keep results indefinitely, but that trades one problem for another — your result backend will grow unbounded, and Redis in particular doesn't appreciate that.
```python
# Increase TTL to 7 days
app.conf.result_expires = 604800

# Or disable expiry entirely (use with caution)
app.conf.result_expires = None
```
After result_expires (default 24 hours), completed Celery tasks revert to PENDING state in the result backend. The better approach is to track task state outside the result backend — either in your own database or with a dedicated monitoring tool that persists task history independently.
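What "track state outside the result backend" looks like in practice: a small append-only table you write to from Celery's `task_postrun` / `task_failure` signal handlers. The sketch below uses an in-memory SQLite table and leaves the signal wiring as a comment; the table schema and function name are illustrative, not a prescribed design.

```python
import sqlite3
import time

def record_task_state(db, task_id, state):
    """Append a task state transition to your own store, independent
    of Celery's result backend and its TTL. In a real app this would
    be called from Celery signal handlers, e.g.:
        @task_postrun.connect
        def on_done(task_id=None, state=None, **kw):
            record_task_state(db, task_id, state)
    (wiring sketched, not shown running here)."""
    db.execute(
        "INSERT INTO task_history (task_id, state, recorded_at) VALUES (?, ?, ?)",
        (task_id, state, time.time()),
    )
    db.commit()

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE task_history (task_id TEXT, state TEXT, recorded_at REAL)"
)
record_task_state(db, "abc-123", "SUCCESS")
```

Because this history is append-only and under your control, a lookup on an old task ID returns its real final state instead of reverting to PENDING.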
How to Actually Check Task State
Given everything above, here's the pragmatic playbook for task state visibility:
Enable all three config flags. This is the minimum. Without worker_send_task_events, task_send_sent_event, and task_track_started, you're operating with partial information at best.
Don't trust AsyncResult as your sole source of truth. It reads from the result backend, which is ephemeral by design. It's useful for checking if a task just completed, but it's not a reliable audit log.
Use the event system for real-time state. Celery's event protocol (task-sent, task-received, task-started, task-succeeded, task-failed, task-retried, task-revoked, task-rejected) provides a real-time stream of state transitions. Subscribing to these events gives you a much more complete picture than polling AsyncResult.
Monitor the broker directly. If you want to know whether tasks are actually sitting in a queue (as opposed to PENDING-meaning-unknown), check queue lengths in Redis or RabbitMQ directly. A queue with 10,000 messages tells you something that AsyncResult never will.
Use a monitoring layer that persists state independently. Tools like Sluice consume task events and store them separately from Celery's result backend, which means task history survives result expiry and you can distinguish between "sent and waiting," "actively running," and "never seen" — the three scenarios that PENDING conflates into one.
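A minimal event consumer along the lines of points 3 and 5 above might look like this. The handler itself is plain Python and just records the latest lifecycle event per task; the commented-out wiring uses Celery's documented `app.events.Receiver` API, but needs a live broker and an `app` instance, so it's shown as a sketch only.

```python
# In-memory map of task id -> most recent lifecycle event type.
latest_state = {}

def handle_event(event):
    """Record the latest event per task. Events in Celery's protocol
    are dicts carrying a 'type' such as 'task-sent', 'task-started',
    or 'task-succeeded', plus a 'uuid' identifying the task."""
    if event.get("type", "").startswith("task-"):
        latest_state[event["uuid"]] = event["type"]

# Wiring into Celery (sketch; requires a running broker):
# with app.connection() as connection:
#     receiver = app.events.Receiver(
#         connection, handlers={"*": handle_event}
#     )
#     receiver.capture(limit=None, timeout=None, wakeup=True)
```

Persist `latest_state` to durable storage and you have exactly what the result backend doesn't give you: a task history that survives `result_expires` and distinguishes "sent," "running," and "never seen."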
Common PENDING Debugging Scenarios
When you're staring at a wall of PENDING tasks, the cause usually falls into one of a few patterns:
| Symptom | Likely Cause | Fix |
|---|---|---|
| All tasks show PENDING | No result backend configured, or events disabled (for monitoring tools) | Configure a result backend for AsyncResult; set worker_send_task_events=True for Flower/Sluice |
| Task sent but stays PENDING | No workers consuming the queue | Verify workers are subscribed to the correct queue with celery inspect active_queues |
| Old completed tasks show PENDING | Result backend TTL expired | Increase result_expires or use persistent monitoring |
| Random tasks show PENDING | Worker crashed mid-execution | Enable task_track_started, check worker logs for OOM kills or segfaults |
| Tasks PENDING after broker restart | Messages lost in non-persistent queue | Use persistent/durable queues; for RabbitMQ, ensure delivery_mode=persistent |
FAQ
Is PENDING an error state?
No. PENDING is the absence of state information, not an indication that something went wrong. Celery has explicit error states — FAILURE for exceptions, REVOKED for cancelled tasks, RETRY for tasks awaiting retry. If you're seeing PENDING, the task either hasn't progressed far enough to report state, or the state reporting infrastructure isn't configured.
How do I tell if a PENDING task is actually lost?
You can't — not from AsyncResult alone. If a task has been PENDING for longer than your expected execution time, check the broker queue (is the message still there?), check worker logs (did it crash?), and check whether task_track_started is enabled (would you even know if it started?). A monitoring tool that tracks the task-sent event can tell you whether the task was at least delivered to the broker, which narrows the failure window significantly.
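Checking the broker queue directly is a one-liner with the standard CLI tools. These commands assume default settings, including Celery's default queue name `celery` (Redis stores each queue as a list under that name); adjust for your own queue names.

```shell
# Redis broker: length of the default "celery" queue (a Redis list)
redis-cli llen celery

# RabbitMQ broker: message counts per queue
rabbitmqctl list_queues name messages
```

A nonzero count means the message is still waiting for a worker; an empty queue plus a PENDING state means the task either ran (and its result expired or was never recorded) or was never sent.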
Should I set result_expires to None?
Only if you have a strategy for managing backend growth. With Redis as your result backend, unbounded result storage will eventually consume all available memory. A better approach is to set a reasonable TTL (7 days, 30 days) based on your operational needs, and use a separate persistence layer for long-term task history.
What's the performance impact of enabling events?
Measurable but modest. Each event is a small message published to the broker's event exchange. For most workloads — even high-throughput ones pushing thousands of tasks per second — the overhead is in the low single-digit percentage range. The visibility you gain is almost always worth it. If you're running at extreme scale and worried about event volume, you can selectively enable events on specific queues or workers.
How does Sluice handle the PENDING problem?
Sluice maps Celery's PENDING state to an unknown unified state — making it visually and semantically clear that "no information" is different from "queued" or "running." By consuming task-sent events, Sluice can distinguish tasks that were dispatched to the broker from tasks that were never seen at all, closing the ambiguity gap that AsyncResult leaves wide open.
Three Celery configuration flags must be enabled for task state visibility: worker_send_task_events, task_send_sent_event, and task_track_started. All three are False by default. If you take one thing from this post, make it this: go check your Celery config and turn them on. You'll be amazed at how much more you can see.
Related reading: Sluice vs. Flower | Why monitor Celery at all? | Debug Celery task failures | Why we built Sluice | Celery worker OOM detection