We are almost at the end of deploying Indico in Kubernetes, but it has been quite a long journey. At the moment we are constantly seeing the following warnings in the logs of the Indico and Celery pods:
*
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 364600 bytes (356 KB) for 4 cores
*** Operational MODE: preforking ***
/opt/indico/.venv/lib/python3.12/site-packages/indico/core/config.py:218: UserWarning: Ignoring unknown config key OS
config = _sanitize_data(_parse_config(path))
/opt/indico/.venv/lib/python3.12/site-packages/indico/core/config.py:218: UserWarning: Ignoring unknown config key AST
config = _sanitize_data(_parse_config(path))
/opt/indico/.venv/lib/python3.12/site-packages/indico/web/flask/app.py:417: UserWarning: Logging config file not found; using defaults. Copy /opt/indico/.venv/lib/python3.12/site-packages/indico/logging.yaml.sample to /opt/indico/etc/log to get rid of this warning.
Logger.init(app)
/opt/indico/.venv/lib/python3.12/site-packages/sentry_sdk/_compat.py:201: Warning: IMPORTANT: We detected the use of uWSGI in preforking mode without thread support. This might lead to crashing workers. Please run uWSGI with both "--enable-threads" and "--py-call-uwsgi-fork-hooks" for full support.
warn(
Fontconfig error: No writable cache directories
Fontconfig error: No writable cache directories
Fontconfig error: No writable cache directories
...cut..
Fontconfig error: No writable cache directories
Fontconfig error: No writable cache directories
WSGI app 0 (mountpoint='') ready in 6 seconds on interpreter 0x7f664c07ec10 pid: 11 (default app)
spawned uWSGI master process (pid: 11)
spawned uWSGI worker 1 (pid: 92, cores: 1*
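From reading the warnings, most of them seem to suggest their own fix: the sentry_sdk message asks for two uWSGI flags, the logging warning asks us to copy logging.yaml.sample, and the "Ignoring unknown config key OS/AST" lines presumably just come from the `import os` / `import ast` statements at the top of our indico.conf. Is something like the following the right direction for the container spec? (A sketch only; container name, image and paths are placeholders, and the flags are taken from the messages above.)

```yaml
# Sketch of the relevant parts of the Indico container spec.
# Container name and paths are placeholders.
containers:
  - name: indico
    args:
      - uwsgi
      - --ini
      - /opt/indico/etc/uwsgi.ini
      - --enable-threads            # both flags requested by the
      - --py-call-uwsgi-fork-hooks  # sentry_sdk warning above
    env:
      # "Fontconfig error: No writable cache directories" usually means the
      # process user cannot write its cache dir; pointing XDG_CACHE_HOME at
      # a writable location such as an emptyDir mount works around it.
      - name: XDG_CACHE_HOME
        value: /tmp/cache
    volumeMounts:
      - name: cache
        mountPath: /tmp/cache
volumes:
  - name: cache
    emptyDir: {}
```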
Second issue:
Logging in to the web application displays the warning below:
"! The Celery task scheduler does not seem to be running. This means that email sending and periodic tasks such as event reminders do not work."
As a consequence of the above, email sending fails when `SMTP_USE_CELERY = True`.
Could you please give us some hints on how to resolve this?
It’s hard to debug such issues, but for the celery one it sounds like the celery worker process is not actually running. Anything useful in the output of that container?
Hi,
thanks for the quick response. You are correct: the celery pod crashes, or rather the celery scheduler gets killed at some point. We kept changing the resources because sometimes the hint from the logs was out of memory.
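For reference, the resource settings we ended up with on the celery container look roughly like this (the values are illustrative, not a recommendation):

```yaml
# Sketch of the celery container resources that stopped the OOM kills;
# tune the numbers to your own workload.
resources:
  requests:
    memory: 512Mi
    cpu: 250m
  limits:
    memory: 2Gi
```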
With those settings I actually managed to make it run; however, the same warning still appears in the browser, and only when logged in as an admin user:
`! The Celery task scheduler does not seem to be running. This means that email sending and periodic tasks such as event reminders do not work.`
Are you running both celery and celery beat? The latter triggers regular tasks, and the “heartbeat” task that prevents this message from appearing is such a task.
I suspect celery beat may not be running; it is hard to tell, though, and I'm not sure how to fix it for a pod. Any idea how this can be done via the pod config or some other way? Would something like the sketch below be the right direction?
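A minimal sketch of what I have in mind, based on the `indico celery worker -B` form from the Indico docs, where the `-B` flag embeds the beat scheduler in the worker (container name and image are placeholders):

```yaml
# Sketch: run the celery worker with the beat scheduler embedded (-B),
# so periodic tasks -- including the heartbeat task that clears the
# admin warning -- actually get scheduled. Name and image are placeholders.
containers:
  - name: indico-celery
    image: my-registry/indico:latest
    command: ["indico", "celery", "worker", "-B"]
```

Alternatively, I suppose a separate container running `indico celery beat` next to a plain worker should achieve the same thing.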
Should I copy the log output of the celery pod here again?
I much appreciate your responses.