So I thought logging in Python would be easy.
You start by logging to the command line, then when that gets too much you log to a file. Part of me still feels that logging to a file in the app directory is the easiest setup. But various articles on the Internet say file logging can cause problems when you containerise the app or run it as a service. A better suggested option is to log to journald alongside other systemd services.
Logging to journald for viewing with journalctl
There is a fair bit of information online on viewing and filtering journald logs using the journalctl command line program. Logging to journald from Python is fairly simple, but the online information is a bit patchy and contradictory. As of 2022, here is a recipe that works.
First, install systemd development libraries:
```bash
sudo apt install libsystemd-dev
```
Then pip install the Python systemd library:

```bash
pip install systemd-python
```
In a high-level module's `__init__.py` file I then have the following logging setup:
```python
import logging
import os

from systemd.journal import JournalHandler

# Define module logger
logger = logging.getLogger(__name__)

# Initially set to log all - change this in production
logger.setLevel(logging.DEBUG)

# Create journal handler
journalHandler = JournalHandler(SYSLOG_IDENTIFIER='my_app_name')

# Create formatter - can also use %(lineno)d -
# see https://stackoverflow.com/questions/533048/how-to-log-source-file-name-and-line-number-in-python/44401529
formatter = logging.Formatter(
    '%(asctime)s.%(msecs)03d - %(levelname)s - %(message)s | %(filename)s > %(module)s > %(funcName)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)

# Add formatter to the journal handler
journalHandler.setFormatter(formatter)

# Add the journal handler to the logger
logger.addHandler(journalHandler)

# Extra bit to get the logging level from an environment variable
DEBUG_MODE = (os.environ.get('DEBUG_MODE', 'False') == 'True')
if DEBUG_MODE:
    logger.setLevel(logging.DEBUG)
else:
    logger.setLevel(logging.INFO)
```
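As a quick sanity check of the setup, here is a sketch of the same pattern with a StreamHandler standing in for the JournalHandler, so it runs on any machine without systemd (the "my_app_name" logger name is just for illustration):

```python
import logging

# Stand-in for the journal setup above: same formatter, but a
# StreamHandler so the example runs without systemd installed.
logger = logging.getLogger("my_app_name")
logger.setLevel(logging.INFO)

formatter = logging.Formatter(
    '%(asctime)s.%(msecs)03d - %(levelname)s - %(message)s | %(filename)s > %(module)s > %(funcName)s',
    datefmt='%Y-%m-%d %H:%M:%S'
)

handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)

logger.info("service started")   # emitted, since INFO >= the logger level
logger.debug("noisy detail")     # suppressed, since DEBUG < INFO
```

On a systemd machine, swapping the StreamHandler for the JournalHandler gives the same formatted messages in journald instead of the console.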
Viewing with journalctl
To view the journald logs we have two options:
- command line viewing via journalctl (good for ssh access to remote servers)
- simple GUI via QJournalctl
As an example, you can use journalctl to filter based on the SYSLOG_IDENTIFIER set in the handler above. To view streaming logs, just add the follow (-f) flag:
```bash
journalctl SYSLOG_IDENTIFIER=my_app_name -f
```
To filter out logs below a certain level, use the priority (-p) flag. Level 6 is INFO and level 7 is DEBUG, so the following will filter out DEBUG messages:
```bash
journalctl SYSLOG_IDENTIFIER=my_app_name -p 6
```
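For reference, Python logging levels map onto the standard syslog priority numbers that journald stores in its PRIORITY field, which is what -p filters on. A small sketch of that mapping (the numbers follow the syslog convention, not lifted from the systemd source):

```python
import logging

# Standard syslog priority numbers; journalctl -p N keeps
# priorities 0..N, with lower numbers being more severe.
LEVEL_TO_PRIORITY = {
    logging.CRITICAL: 2,  # crit
    logging.ERROR: 3,     # err
    logging.WARNING: 4,   # warning
    logging.INFO: 6,      # info
    logging.DEBUG: 7,     # debug
}

# journalctl -p 6 keeps priorities 0..6, so INFO passes but DEBUG is dropped:
assert LEVEL_TO_PRIORITY[logging.INFO] <= 6
assert LEVEL_TO_PRIORITY[logging.DEBUG] > 6
```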
Logging with Celery
I use Celery to manage a backend task queue. Celery uses its own logging arrangements. I found it useful to also direct these to journald so I could see everything in the same log views. I followed the approach helpfully described here.
In your celery module, use the after_setup_logger signal to tack your configured logging setup onto Celery's logger:
```python
from celery.signals import after_setup_logger

from app import logger as app_logger, journalHandler

# ... other celery app configuration ...

# Setup logging by augmenting the celery logger as per:
# https://www.distributedpython.com/2018/08/28/celery-logging/
@after_setup_logger.connect
def setup_loggers(logger, *args, **kwargs):
    """Setup logging for celery."""
    # Add the journal handler to the celery logger
    logger.addHandler(journalHandler)
```
The celery app then logged to journald as well.