What’s new in Celery 5.2 (Dawn Chorus)¶
Omer Katz (omer.drow at gmail.com)
Celery is a simple, flexible, and reliable distributed programming framework for processing vast amounts of messages, while providing operations with the tools required to maintain a distributed system with Python.
It’s a task queue with focus on real-time processing, while also supporting task scheduling.
Following the problems with Freenode, we migrated our IRC channel to Libera Chat as most projects did. You can also join us using Gitter.
We’re sometimes there to answer questions. We welcome you to join.
To read more about Celery, you should read the introduction.
While this version is mostly backward compatible with previous versions, it's important that you read the following section, as this release contains breaking changes.
This version is officially supported on CPython 3.7, 3.8 and 3.9 and is also supported on PyPy3.
Table of Contents
Make sure you read the important notes before upgrading to this version.
This release contains fixes for two (potentially severe) memory leaks. We encourage our users to upgrade to this release as soon as possible.
The 5.2.0 release is a new minor release for Celery.
From now on we only support Python 3.7 and above. We will maintain compatibility with Python 3.7 until its EOL in June 2023.
— Omer Katz
We no longer support Celery 4.x as we don’t have the resources to do so. If you’d like to help us, all contributions are welcome.
Celery 5.x is not an LTS release. We will support it until the release of Celery 6.x.
We’re in the process of defining our Long Term Support policy. Watch the next “What’s New” document for updates.
This wall was automatically generated from git history, so sadly it doesn't include the people who help with more important things like answering mailing-list questions.
Celery 5.0 introduces a new CLI implementation which isn’t completely backwards compatible.
The global options can no longer be positioned after the sub-command. Instead, they must be positioned as an option for the celery command like so:
celery --app path.to.app worker
If you were using our Daemonization guide to deploy Celery in production, you should revisit it for updates.
If you haven’t already updated your configuration when you migrated to Celery 4.0, please do so now.
We elected to extend the deprecation period until 6.0 since we did not loudly warn about using these deprecated settings.
Please refer to the migration guide for instructions.
Make sure you are not affected by any of the important upgrade notes mentioned in the following section.
You should verify that none of the breaking changes in the CLI affect you. Please refer to New Command Line Interface for details.
Celery 5.x only supports Python 3. Therefore, you must ensure your code is compatible with Python 3.
If you haven’t ported your code to Python 3, you must do so before upgrading.
After the migration is done, run your test suite with Celery 4 to ensure nothing has been broken.
At this point you can upgrade your workers and clients with the new version.
The supported Python versions are:

CPython 3.7
CPython 3.8
CPython 3.9
PyPy3.7 7.3
Celery supports these Python versions provisionally as they are not production ready yet:
CPython 3.10 (currently in RC2)
Two severe memory leaks have been fixed in this version:
celery.result.ResultSet no longer holds a circular reference to itself.
The prefork pool no longer keeps messages in its cache forever when the master process disconnects from the broker.
The first memory leak occurs when you use celery.result.ResultSet. Each instance held a promise which provided that instance as an argument to the promise's callable. This caused a circular reference which kept the ResultSet instance in memory forever, since the GC couldn't evict it. The provided argument is now a weakref.proxy() of the ResultSet instance.
The memory leak mainly occurs when you use celery.result.GroupResult, since it inherits from celery.result.ResultSet, which is seldom used directly.
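The fix can be sketched in plain Python. This is an illustrative model only, not Celery's actual code: the Promise class and all names here are hypothetical stand-ins for the objects described above.

```python
import weakref

class Promise:
    # Hypothetical stand-in for the promise object described above.
    def __init__(self, callback, arg):
        self.callback = callback
        self.arg = arg

class ResultSet:
    # Sketch of the fixed pattern: the promise receives a weak proxy of
    # the instance instead of a strong `self` reference, so no
    # ResultSet -> Promise -> ResultSet reference cycle is created.
    def __init__(self, on_ready):
        self.promise = Promise(on_ready, weakref.proxy(self))

rs = ResultSet(on_ready=lambda proxy: None)
ref = weakref.ref(rs)
del rs
# With the weak proxy, the instance is freed as soon as the last strong
# reference is dropped; no reference cycle keeps it alive.
assert ref() is None
```

Had the promise held `self` directly, the instance would only go away once the cycle collector ran over it, and a live promise would pin it in memory indefinitely.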
The second memory leak exists since the inception of the project. The prefork pool maintains a cache of the jobs it executes. When they are complete, they are evicted from the cache. However, when Celery disconnects from the broker, we flush the pool and discard the jobs, expecting that they’ll be cleared later once the worker acknowledges them but that has never been the case. Instead, these jobs remain forever in memory. We now discard those jobs immediately while flushing.
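The behavioral change can be modeled with a toy cache. This is purely illustrative; Celery's prefork pool is far more involved and these names are hypothetical.

```python
class JobCache:
    # Toy model of the prefork pool's job cache.
    def __init__(self):
        self.jobs = {}

    def add(self, job_id, job):
        self.jobs[job_id] = job

    def acknowledge(self, job_id):
        # Normal path: a completed, acknowledged job is evicted.
        self.jobs.pop(job_id, None)

    def flush(self):
        # Old behavior: jobs stayed in the cache waiting for acks that
        # never arrived after a broker disconnect, leaking memory.
        # New behavior: discard them immediately while flushing.
        self.jobs.clear()

cache = JobCache()
cache.add('job-1', object())
cache.flush()  # broker disconnected
assert cache.jobs == {}  # nothing lingers in memory
```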
Celery now requires Python 3.7 and above.
Python 3.6 will reach EOL in December, 2021. In order to focus our efforts we have dropped support for Python 3.6 in this version.
If you still need to run Celery using Python 3.6, you can still use Celery 5.1. However, we encourage you to upgrade to a supported Python version since no further security patches will be applied for Python 3.6 after the 23rd of December, 2021.
When replacing a task with another task, we now give an indication of the replacement nesting level through the replaced_task_nesting header.

A task which was never replaced has a replaced_task_nesting value of 0.
Starting from v5.2, the minimum required version is Kombu 5.2.0.
Now all orphaned worker processes are killed automatically when the main process exits.
You can now terminate running revoked tasks while using the Eventlet Workers Pool.
We introduced a custom handler called before_start which will be executed before the task is started.
See App-wide usage for more details.
Dropped support for Python 2.7 & 3.5¶
Celery now requires Python 3.6 and above.
Python 2.7 has reached EOL in January 2020. In order to focus our efforts we have dropped support for Python 2.7 in this version.
In addition, Python 3.5 has reached EOL in September 2020. Therefore, we are also dropping support for Python 3.5.
If you still need to run Celery using Python 2.7 or Python 3.5, you can still use Celery 4.x. However, we encourage you to upgrade to a supported Python version since no further security patches will be applied for Python 2.7 or Python 3.5.
Eventlet Workers Pool¶
Due to eventlet/eventlet#526 the minimum required version is eventlet 0.26.1.
Gevent Workers Pool¶
Starting from v5.0, the minimum required version is gevent 1.0.0.
Couchbase Result Backend¶
The Couchbase result backend now uses the V3 Couchbase SDK.
As a result, we no longer support Couchbase Server 5.x.
Also, starting from v5.0, the minimum required version for the database client is couchbase 3.0.0.
To verify that your Couchbase Server is compatible with the V3 SDK, please refer to their documentation.
Riak Result Backend¶
The Riak result backend has been removed as the database is no longer maintained.
The Python client only supports Python 3.6 and below, which prevents us from supporting it, and it is also unmaintained.
If you are still using Riak, refrain from upgrading to Celery 5.0 while you migrate your application to a different database.
We apologize for the lack of notice in advance but we feel that the chance you’ll be affected by this breaking change is minimal which is why we did it.
AMQP Result Backend¶
The AMQP result backend has been removed as it was deprecated in version 4.0.
Removed Deprecated Modules¶
The celery.utils.encoding and celery.task modules were deprecated in version 4.0 and have therefore been removed in 5.0.
If you were using the celery.utils.encoding module before, you should import kombu.utils.encoding instead.
If you were using the celery.task module before, you should import directly from the celery module instead.
Given the SDK changes between 0.50.0 and 7.0.0, Kombu deprecates support for older azure-servicebus versions.
Bug: Pymongo 3.12.1 is not compatible with Celery 5.2¶
For now we are limiting the Pymongo version, only allowing versions between 3.3.0 and 3.12.0.
This will be fixed in the next patch.
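If you manage dependencies yourself, a requirements pin along these lines (the bounds are taken from the note above) keeps installations on a compatible Pymongo until the fix lands:

```
pymongo>=3.3.0,<=3.12.0
```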
Previously, if you attempted to publish a chord while providing, as the chord's body, a signature which wasn't registered in the Celery app publishing the chord, an exception would be raised.

From now on, you can publish these sorts of chords and they will be executed correctly:
# movies.task.publish_movie is registered in the current app
movie_task = celery_app.signature(
    'movies.task.publish_movie',
    task_id=str(uuid.uuid4()),
    immutable=True,
)

# news.task.publish_news is *not* registered in the current app
news_task = celery_app.signature(
    'news.task.publish_news',
    task_id=str(uuid.uuid4()),
    immutable=True,
)

my_chord = chain(
    movie_task,
    group(
        movie_task.set(task_id=str(uuid.uuid4())),
        movie_task.set(task_id=str(uuid.uuid4())),
    ),
    news_task,
)
my_chord.apply_async()  # <-- No longer raises an exception
We now create a new client per request to Consul to avoid a bug in the Consul client.
The Consul result backend now accepts a new one_client option. You can opt out of this behavior by setting one_client to True.

Please refer to the documentation of the backend if you're using the Consul backend to find out which behavior suits you.
We now clean up expired task results when using the filesystem result backend, as most result backends do.
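For example, a configuration sketch; the path below is a placeholder, and result_expires controls how old a result must be before the cleanup removes it:

```python
# celeryconfig.py (sketch; the path is a placeholder)
result_backend = 'file:///var/run/celery/results'

# Results older than this many seconds are removed by the cleanup.
result_expires = 3600
```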
You can now check the validity of the CA certificate while making a TLS connection to the ArangoDB result backend.

If you'd like to do so, set the verify key in the arangodb_backend_settings dictionary to True.
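For example, a configuration sketch; the host and port values are placeholders, and only the verify key relates to the new behavior:

```python
# celeryconfig.py (sketch; connection details are placeholders)
arangodb_backend_settings = {
    'host': 'arangodb.example.com',
    'port': '8529',
    # New in 5.2: verify the CA certificate when connecting over TLS.
    'verify': True,
}
```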