
celery.backends.redis

Redis result store backend.

class celery.backends.redis.RedisBackend(host=None, port=None, db=None, password=None, max_connections=None, url=None, connection_pool=None, **kwargs)[source]

Redis task result store.

It makes use of the following commands: GET, MGET, DEL, INCRBY, EXPIRE, SET, SETEX.
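
A minimal sketch of wiring an application to this backend (the Redis host, port, and database number below are placeholders):

    from celery import Celery

    # Use Redis database 0 on localhost for both the broker and the
    # result backend; adjust the URLs to your deployment.
    app = Celery('tasks',
                 broker='redis://localhost:6379/0',
                 backend='redis://localhost:6379/0')

    @app.task
    def add(x, y):
        return x + y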

property ConnectionPool
class ResultConsumer(*args, **kwargs)
cancel_for(task_id)
consume_from(task_id)
drain_events(timeout=None)
on_after_fork()
on_state_change(meta, message)
on_wait_for_pending(result, **kwargs)
reconnect_on_error()
start(initial_task_id, **kwargs)
stop()
add_to_chord(group_id, result)[source]
apply_chord(header_result_args, body, **kwargs)[source]
client
connection_class_ssl = None
delete(key)[source]
ensure(fun, args, **policy)[source]
expire(key, value)[source]
forget(task_id)[source]
get(key)[source]
incr(key)[source]
max_connections = None

Maximum number of connections in the pool.
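In a typical setup this value comes from the redis_max_connections setting; a sketch (the limit of 20 is arbitrary):

    # Cap the connection pool used for sending and retrieving results.
    app.conf.redis_max_connections = 20
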

mget(keys)[source]
on_chord_part_return(request, state, result, propagate=None, **kwargs)[source]
on_connection_error(max_retries, exc, intervals, retries)[source]
on_task_call(producer, task_id)[source]
redis = None

Redis client module.

retry_policy
set(key, value, **retry_policy)[source]
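Writes are retried according to retry_policy, whose keys follow kombu's retry_over_time() (max_retries, interval_start, interval_step, interval_max). A hedged sketch of overriding the defaults through result_backend_transport_options:

    app.conf.result_backend_transport_options = {
        'retry_policy': {
            'max_retries': 5,      # give up after five attempts
            'interval_start': 0,   # first retry immediately
            'interval_step': 0.5,  # back off by 0.5s per retry
            'interval_max': 3,     # never wait more than 3s between retries
        },
    }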
set_chord_size(group_id, chord_size)[source]
supports_autoexpire = True

If true, the backend must automatically expire results. The daily backend_cleanup periodic task won't be triggered in this case.
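
For this backend the expiry is enforced by Redis itself (via SETEX/EXPIRE), with the time-to-live taken from the result_expires setting; for example:

    # Results are deleted by Redis one hour after being stored.
    app.conf.result_expires = 3600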

supports_native_join = True

If true, the backend must implement get_many().
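
This lets a group of results be joined natively, fetching many results per round trip (backed by mget()) instead of polling each one; a sketch reusing the hypothetical add() task from above:

    from celery import group

    job = group(add.s(i, i) for i in range(10)).apply_async()
    print(job.join_native(timeout=10))  # [0, 2, 4, ..., 18]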

class celery.backends.redis.SentinelBackend(*args, **kwargs)[source]

Redis sentinel task result store.
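
A sketch of pointing the result backend at a Sentinel-managed Redis (the host names and master name are placeholders):

    # List every Sentinel node in the URL, separated by semicolons.
    app.conf.result_backend = (
        'sentinel://localhost:26379/0;'
        'sentinel://localhost:26380/0'
    )
    # Sentinel resolves the current master for this service name.
    app.conf.result_backend_transport_options = {'master_name': 'mymaster'}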

as_uri(include_password=False)[source]

Return the server addresses as URIs, optionally sanitizing the password.
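
For instance (the URL and password are hypothetical):

    backend = SentinelBackend(app=app, url='sentinel://:s3cret@localhost:26379/0')
    # With include_password=False (the default) the password is masked,
    # e.g. 'sentinel://:**@localhost:26379/0'.
    print(backend.as_uri())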

connection_class_ssl = None
sentinel = None