
SlashDB Logging

SlashDB has 3 sources of logs:

  • SlashDB application
  • WSGI server
  • Reverse proxy

By default, logging for the SlashDB application and the WSGI server is configured in the INI file located in /etc/slashdb/slashdb.ini, while reverse proxy logging is configured in /etc/slashdb/nginx.conf.

The format for each source has been adjusted to make browsing all logs together easier. Each log entry starts with:
date - time - timestamp - source name ([sdbapp], [uwsgi ] or [nginx ]) - pid

2022-05-30 12:07:13 1653912433101 [sdbapp] [pid: 782 ...
2022-05-30 12:07:57 1653912477717 [uwsgi ] [pid: 782 ...
2022-05-30 12:07:57 1653912477720 [nginx ] [pid: 23 ...
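
Because every source shares this prefix, all logs can be followed together, for example (assuming the default file locations described below):

sudo tail -f /var/log/slashdb/slashdb.log /var/log/nginx/*.log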

SlashDB Application Logs

The application logs are configured in the INI file between the lines # Logging Config and # End of Logging Config. You can find more about Python 3 logging in the Logging HOWTO. The logging config allows you to customize what logs are collected, where they are sent and in what format.

The default logging configuration is explained below.
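
For orientation, the default configuration declares the following loggers, handlers and formatters (a sketch of just the key lists; the exact contents of your slashdb.ini may differ slightly):

# Logging Config
[loggers]
keys = root, slashdb, exc_logger, sqlalchemy

[handlers]
keys = console, console_slashdb, exc_handler

[formatters]
keys = generic, slashdb, exc_formatter
# End of Logging Config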

Loggers

There are 4 loggers enabled, which are the sources of log entries:

Logger root

[logger_root]
level = INFO
handlers = console
  • is a special logger that collects log entries from all other loggers
  • is level INFO
  • logs are passed to handler console, which uses the generic formatter and sends the logs to the standard error output.
  • the output is collected by the uWSGI server and saved to the file defined in option logto in section [uwsgi] of the INI file; by default the logs are saved in /var/log/slashdb/slashdb.log.

Logger slashdb

[logger_slashdb]
level = INFO
handlers = console_slashdb
qualname = slashdb
propagate = 0
  • is a general SlashDB logger
  • is level INFO
  • logs are sent to handler console_slashdb, which uses the slashdb formatter; it differs from the formatter used by logger root in that it adds the authenticated SlashDB user_id to each log entry (see the sketch after this list).
  • the output is collected by the uWSGI server and saved to the file defined in option logto in section [uwsgi] of the INI file; by default the logs are saved in /var/log/slashdb/slashdb.log.
  • propagate = 0 prevents the logs from also being sent to the root logger, so they are not duplicated
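
SlashDB populates user_id internally; the snippet below is only a minimal, hypothetical illustration of how a custom attribute such as user_id can end up on a log record using the standard extra parameter of Python logging (the request path and user name are made up):

import logging

# Illustrative only - SlashDB wires its formatters up in slashdb.ini, not in code.
logging.basicConfig(format="%(asctime)s [%(user_id)s] %(message)s", level=logging.INFO)

logger = logging.getLogger("slashdb")  # matches qualname = slashdb

# The extra dict turns user_id into a log record attribute,
# which a formatter can then reference as %(user_id)s.
logger.info("GET /db/Chinook/Customer.json", extra={"user_id": "admin"})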

Logger exc_logger

[logger_exc_logger]
level = ERROR
handlers = exc_handler
qualname = exc_logger
propagate = 1
  • is a logger for errors not handled by application
  • is level ERROR
  • logs are passed to the exc_handler handler, formatted with exc_formatter and saved to the rotating files /var/log/slashdb/exceptions.log. By default the current file plus up to 5 backups of 2 MB each are kept; when a new rotation is needed, the oldest backup is removed. More about that can be found in the documentation on RotatingFileHandler.
  • propagate = 1 means the errors will also be collected and handled by the root logger and eventually saved in /var/log/slashdb/slashdb.log.
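
How SlashDB feeds this logger is internal to the application; purely as an illustration, a logger of this kind is typically called with logger.exception(), which logs at level ERROR and appends the traceback to the message:

import logging

exc_logger = logging.getLogger("exc_logger")  # matches qualname = exc_logger

try:
    1 / 0
except ZeroDivisionError:
    # exception() logs at ERROR level and includes the full traceback,
    # which the exc_handler writes to /var/log/slashdb/exceptions.log.
    exc_logger.exception("Unhandled error while processing request")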

Logger sqlalchemy

[logger_sqlalchemy]
level = WARNING
handlers =
qualname = sqlalchemy
propagate = 1
  • is a general logger for communication with databases; provides information about connections and the database reflection process
  • is level WARNING
  • the logger does not have a specific handler defined, but the logs are propagated (propagate = 1), which means they will also be collected and handled by the root logger and eventually saved in /var/log/slashdb/slashdb.log.

Handlers

Handlers receive log entries from loggers, format them using the assigned formatter and send them to the proper output, such as standard output or a file; many more options are described in the Python 3 logging handlers documentation.

Handler console

It's a StreamHandler that uses the generic formatter to prepare the log line and sends it to stderr, the standard error stream.

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

Handler console_slashdb

Like handler console, it's a StreamHandler that sends log entries to stderr, the standard error stream, but it uses a different formatter - slashdb. That formatter uses an additional attribute to show the authenticated SlashDB user in the logs.

[handler_console_slashdb]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = slashdb

Handler exc_handler

It's a RotatingFileHandler that uses exc_formatter and saves log entries in rotating files of 2 MB each, keeping up to 5 backups. The oldest backup is removed to make space for newer logs.

[handler_exc_handler]
class = handlers.RotatingFileHandler
args = ('/var/log/slashdb/exceptions.log','a', 2*1024*1024, 5, 'utf-8')
level = ERROR
formatter = exc_formatter

The args are in order:

  • /var/log/slashdb/exceptions.log is the location and name of the file where logs are stored
  • a is the file mode for appending data to the file
  • 2*1024*1024 is the maximum file size in bytes, evaluated automatically to 2 MB
  • 5 is the number of backup files to keep; the oldest backup is deleted when a new rotation is needed
  • utf-8 determines the encoding in which the file is opened
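
For reference, the same handler created directly in Python would look roughly like this; the constructor arguments map one to one to the args tuple above:

from logging.handlers import RotatingFileHandler

# Equivalent of args = ('/var/log/slashdb/exceptions.log','a', 2*1024*1024, 5, 'utf-8')
exc_handler = RotatingFileHandler(
    "/var/log/slashdb/exceptions.log",  # the directory must exist and be writable
    mode="a",                   # append to the file
    maxBytes=2 * 1024 * 1024,   # rotate after 2 MB
    backupCount=5,              # keep at most 5 rotated backups
    encoding="utf-8",
)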

Formatters

Formatters define how log entries look. Known log record attributes can be used to provide more information than just the log message, for example the attribute %(asctime)s adds a human readable date and time to the log.
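
As a quick, standalone illustration (unrelated to the SlashDB INI file), the snippet below uses two such attributes in a format string:

import logging

# %(asctime)s and %(levelname)s are standard log record attributes.
logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s")

# Prints e.g. "2022-05-30 12:07:13,101 WARNING disk space is running low"
logging.warning("disk space is running low")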

Formatter generic

It's the most general formatter and uses only standard log record attributes.

An empty [] is placed before %(message)s to keep the log template aligned with the formatter slashdb, which puts [%(user_id)s] in that position. That keeps the logs consistent and easier to parse.

[formatter_generic]
format = %(asctime)-.19s %(created)d%(msecs)03d [sdbapp] [pid: %(process)d %(processName)s %(threadName)s] %(levelname)-5.5s %(name)s %(module)s:%(funcName)s:%(lineno)d [] %(message)s

Formatter slashdb

It's like the generic formatter but also uses a custom log record attribute %(user_id)s, which is the authenticated SlashDB user id.

Although loggers root and slashdb are very similar, they have to be kept separate because of the user_id attribute. The formatter fails if the attribute is not available, and only logger slashdb provides it, hence a separate handler and formatter must be used.

[formatter_slashdb]
format = %(asctime)-.19s %(created)d%(msecs)03d [sdbapp] [pid: %(process)d %(processName)s %(threadName)s] %(levelname)-5.5s %(name)s %(module)s:%(funcName)s:%(lineno)d [%(user_id)s] %(message)s

Formatter exc_formatter

Formatter for exceptions with only creation time and message.

[formatter_exc_formatter]
format = %(asctime)s %(message)s

Debugging

Extended log details

There are 5 levels of event severity:

  • DEBUG
  • INFO
  • WARNING
  • ERROR
  • CRITICAL

DEBUG is the most detailed level and CRITICAL shows only the most important log records.

In case more detailed logs are needed for debugging, the easiest approach is to change the logger levels:

  • for logger root and slashdb to DEBUG
  • for logger sqlalchemy to INFO

That will send much more information to /var/log/slashdb/slashdb.log.
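
After such a change the relevant logger sections would look like this:

[logger_root]
level = DEBUG
handlers = console

[logger_slashdb]
level = DEBUG
handlers = console_slashdb
qualname = slashdb
propagate = 0

[logger_sqlalchemy]
level = INFO
handlers =
qualname = sqlalchemy
propagate = 1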

Limiting SQLAlchemy loggers for clarity

Because logger sqlalchemy collects logs from all the various SQLAlchemy modules, the output may be too cluttered at level INFO. In such a case it's better to pick a more specific logger, for example sqlalchemy.engine, and keep level INFO. That logger provides useful information such as the SQL statements sent to the databases.

The sqlalchemy.engine logger is already present in the INI configuration file and only has to be enabled. Just replace the logger key sqlalchemy with sqlalchemy_engine in option keys in section [loggers] of the INI file /etc/slashdb/slashdb.ini. After the change the section should read:

[loggers]
keys = root, slashdb, exc_logger, sqlalchemy_engine
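
The exact section shipped in the INI file may differ, but a sqlalchemy.engine logger configured as described here would look roughly like this (note the qualname and the INFO level):

[logger_sqlalchemy_engine]
level = INFO
handlers =
qualname = sqlalchemy.engine
propagate = 1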

WSGI Server Logs

SlashDB is distributed with the uWSGI server. By default the server logs are saved together with the application logs in /var/log/slashdb/slashdb.log.

The file location is configurable with option logto in section [uwsgi] of the INI file:

[uwsgi]
...
logto = /var/log/slashdb/slashdb.log
...

NGINX Logs

All HTTP requests are also logged by the reverse proxy, NGINX. Log files are saved in /var/log/nginx.

Saving logs in S3

Assuming the AWS CLI is configured for root on the SlashDB instance, you can run a daily cron job that will copy the log files to S3.

Below is a cron job definition that every day at 2 AM looks for files in SDB_LOGS_DIR that are not older than 1 day and sends them to the S3 bucket defined in SDB_LOGS_S3.

The same is done for NGINX logs from folder NGX_LOGS_DIR to bucket NGX_LOGS_S3.

To edit cron:

sudo crontab -e

then paste the following, making sure SDB_LOGS_S3 and NGX_LOGS_S3 point to the S3 buckets and folders where you want to save the logs:

SDB_LOGS_DIR=/var/log/slashdb
SDB_LOGS_S3=s3://slashdb-logs/instance1/slashdb

NGX_LOGS_DIR=/var/log/nginx
NGX_LOGS_S3=s3://slashdb-logs/instance1/nginx


0  2  *  *  *  find ${SDB_LOGS_DIR} -type f -mtime -1 | xargs -L 1 -I {} bash -c 'aws s3 cp {} ${SDB_LOGS_S3}'
0  2  *  *  *  find ${NGX_LOGS_DIR} -type f -mtime -1 | xargs -L 1 -I {} bash -c 'aws s3 cp {} ${NGX_LOGS_S3}'

Sending logs to CloudWatch

The easiest way is to use the amazon-cloudwatch-agent, which allows sending any logs without the need to install additional Python libraries.

To start quickly, follow the blog post Monitoring NGINX with CloudWatch. It has straightforward instructions on how to set up the amazon-cloudwatch-agent.

To dive deeper, check the official Amazon documentation on the agent.

The logs in SlashDB are located in the following folders:

  • /var/log/slashdb
  • /var/log/nginx