In software development, understanding how your application behaves in real time is crucial. Python logging provides a robust framework for achieving this. From diagnosing issues to monitoring performance, logging offers valuable insights. In this article, we'll explore Python logging from basics to advanced techniques, empowering you to leverage its full potential for better software development.
What is Logging?
Logging is the process of recording events, actions, and messages that occur during the execution of a program. These logs provide invaluable insights into the runtime behavior of the application, aiding developers in debugging, performance optimization, and system monitoring.
Why Use Python Logging?
Python logging is not just a debugging technique; it is also a form of communication. It tells you about the state of the application at runtime. Here are some benefits of using logging in Python:
Error Reporting: While developing an application, errors are inevitable. Logging helps you understand the problem by providing the context in which the error occurred.
Application Monitoring: Logging allows you to monitor the behavior of the application. It helps you track the user activity and system interaction.
Performance Analysis: By logging the time taken by different parts of your application, you can analyze the performance and identify bottlenecks.
Debugging: Logs provide detailed insight into the flow of the application and help in debugging issues.
Auditing: Logging can serve as a means of auditing system usage and tracking system changes.
Logs can be incredibly helpful for debugging issues, but they also provide a wealth of information about the performance of the application. By analyzing logs, you can identify patterns, track user activity, and even predict future behavior. In the next section, we’ll discuss how to use the logging module in Python.
Python Logging Levels
The Python Logging levels are an indication of the severity of the events. They are used to categorize the log messages.
Here’s what each level means:
DEBUG: Typically of interest only when diagnosing problems.
INFO: Confirmation that things are working as expected.
WARNING: An indication that something unexpected happened, or there may be some problem shortly (e.g., ‘disk space low’). The software is still working as expected.
ERROR: Due to a more serious problem, the software has not been able to perform some functions.
CRITICAL: A very serious error, indicating that the program itself may be unable to continue running.
DEBUG:
This is the lowest level of logging. As the name suggests, DEBUG logs are primarily used for diagnosing problems. They provide detailed information about the application’s execution process. This level is typically used during development to understand the flow of the program and identify any potential issues.
logger.debug('This is a debug message')
INFO:
INFO logs are used to confirm that things are working as expected. They provide general insights about the application’s state and confirm the normal functioning of your application.
logger.info('This is an info message')
WARNING:
WARNING logs indicate that something unexpected happened, or there may be some problem shortly (e.g., ‘disk space low’). However, the software is still working as expected. It’s used to alert the user of something that needs attention.
logger.warning('This is a warning message')
ERROR:
ERROR logs are used when the software has not been able to perform some function due to a more serious problem. It indicates that an error occurred that prevented the application from performing a function.
logger.error('This is an error message')
CRITICAL:
CRITICAL logs represent very serious errors. This level indicates that the program itself may be unable to continue running. It’s the highest level of severity.
logger.critical('This is a critical message')
Python Logging Components
The Python logging framework comprises three primary components:
Loggers
Handlers
Formatters
Loggers:
Loggers are instances of the Logger class from the logging module. They serve as the entry point for logging operations and can be organized hierarchically to facilitate modular logging configurations.
Logger objects in Python have a crucial threefold job:
Logging Methods: They expose methods to application code for logging messages at runtime.
Log Filtering: Loggers determine which log messages to handle based on severity or custom filters.
Passing Messages: They pass relevant log messages to all interested log handlers.
The commonly used configuration methods for logger objects include:
Logger.setLevel(): Specifies the lowest-severity log message a logger will handle.
Logger.addHandler() and Logger.removeHandler(): Add and remove handler objects from the logger object.
Logger.addFilter() and Logger.removeFilter(): Add and remove filter objects from the logger object.
Logger objects also offer methods for creating log messages:
Logger.debug(), Logger.info(), Logger.warning(), Logger.error(), and Logger.critical(): Create log records with messages and corresponding severity levels.
Logger.exception(): Similar to Logger.error(), but also includes a stack trace.
Logger.log(): Logs messages at custom log levels.
Additionally, getLogger() returns a reference to a logger instance with the specified name or root if not provided. Loggers can inherit levels from their parent loggers, making it easy to control logging behavior.
import logging
# Create a logger instance
logger = logging.getLogger('my_logger')
# Set logging level (optional, default is WARNING)
logger.setLevel(logging.DEBUG)
# Log messages
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
logger.critical('This is a critical message')
In the code above:
We create a logger named 'my_logger'.
Optionally, we set the logging level to DEBUG, which means all messages at or above this level will be logged.
We log messages using different logging levels, such as DEBUG, INFO, WARNING, ERROR, and CRITICAL.
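Two of the other methods mentioned above deserve a quick illustration: Logger.exception() is meant to be called from inside an except block, and Logger.log() takes an explicit numeric level. Here is a minimal sketch; basicConfig() is used only to get console output for the demo, and the custom VERBOSE level name is an arbitrary choice for illustration:
import logging
# Simple console output for the demo
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger('my_logger')
# Logger.exception() logs at ERROR level and appends the current traceback;
# it should be called from an exception handler.
try:
    1 / 0
except ZeroDivisionError:
    logger.exception('Division failed')
# Logger.log() accepts any numeric level; here we register a custom level
# between DEBUG (10) and INFO (20) purely for illustration.
VERBOSE = 15
logging.addLevelName(VERBOSE, 'VERBOSE')
logger.log(VERBOSE, 'This is a verbose message')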
Handlers:
Handler objects dispatch log messages to specified destinations based on severity levels. Logger objects can add multiple handler objects to themselves. Common configuration methods for handlers include setLevel(), setFormatter(), addFilter(), and removeFilter().
The standard library includes a variety of handler types, such as StreamHandler (console output), FileHandler (writing to a file), and, in logging.handlers, RotatingFileHandler for size-based file rotation; a rotation sketch follows the console example below.
import logging
# Create a logger instance
logger = logging.getLogger('my_logger')
# Allow DEBUG records through the logger itself so the handler receives them
logger.setLevel(logging.DEBUG)
# Create a handler for console output
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
# Create a formatter
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console_handler.setFormatter(formatter)
# Add the handler to the logger
logger.addHandler(console_handler)
# Log messages
logger.debug('This is a debug message')
logger.info('This is an info message')
In the code above:
We create a StreamHandler instance for logging messages to the console.
We set the logging level for the handler to DEBUG.
We create a formatter to specify the format of log messages.
We add the handler to the logger instance.
Finally, we log messages, and they will be routed to the console through the handler.
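As mentioned above, the standard library also ships rotation-aware handlers in logging.handlers. Here is a small sketch using RotatingFileHandler; the file name and size limits are arbitrary choices for illustration:
import logging
from logging.handlers import RotatingFileHandler
# Create a logger instance
logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)
# Keep at most 3 backup files of roughly 1 MB each
rotating_handler = RotatingFileHandler('app.log', maxBytes=1_000_000, backupCount=3)
rotating_handler.setLevel(logging.INFO)
rotating_handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
# Add the handler to the logger
logger.addHandler(rotating_handler)
logger.info('This message is written to a size-rotated log file')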
Formatters:
Formatter objects configure the structure and contents of log messages. Application code can instantiate a Formatter with optional message and date format strings. By default, format strings use %-style substitution, but str.format()-style ({) and string.Template-style ($) placeholders can be selected with the style argument.
If no date format is given, timestamps are rendered as %Y-%m-%d %H:%M:%S with milliseconds appended (e.g., 2024-01-01 12:00:00,123); if no message format is given, only the raw message ('%(message)s') is used.
import logging
# Create a logger instance
logger = logging.getLogger('my_logger')
# Allow INFO records through the logger so the handler receives them
logger.setLevel(logging.INFO)
# Create a handler for console output
console_handler = logging.StreamHandler()
# Create a formatter with custom format
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
# Set formatter for the handler
console_handler.setFormatter(formatter)
# Add the handler to the logger
logger.addHandler(console_handler)
# Log messages
logger.info('This is an info message')
In the code above:
We create a formatter with a custom format specifying the timestamp, log level, and message.
We set this formatter for the console handler.
When we log a message, it will be formatted according to the specified format before being outputted to the console.
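If you prefer str.format()-style placeholders, the same example can be written with style='{'. This sketch is just a variation of the code above:
import logging
# Create a logger instance
logger = logging.getLogger('my_logger')
logger.setLevel(logging.INFO)
# Create a handler for console output
console_handler = logging.StreamHandler()
# style='{' switches the format string to str.format()-style fields
formatter = logging.Formatter('{asctime} | {levelname:<8} | {message}', style='{')
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)
logger.info('Formatted with str.format()-style placeholders')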
The flow of log event information through loggers and handlers is outlined below.
Logger Flow
Logging call in user code: The logging process starts with a logging call in the user's code, such as logger.info("This is an info message").
Is the logger enabled for the level of the call?: The logger checks whether it is enabled for the level of the logging call. For example, if the call is logger.info, the record is processed only when the logger's effective level is INFO or lower (DEBUG or NOTSET); if the effective level is WARNING or higher, the call is dropped immediately.
Create LogRecord: If the logger is enabled for the level of the call, a LogRecord object is created. The LogRecord object contains information about the logging event, such as the message, level, timestamp, and name of the logger.
Does a filter attached to the logger reject the record?: The logger checks if any filters attached to it reject the LogRecord. If a filter rejects the record, the logging process stops.
Pass to handlers of current logger: If no filters reject the record, it is passed to the handlers of the current logger. Handlers are responsible for formatting and delivering the log messages to their destinations, such as files, the console, or a network server.
Handler Flow
Handler enabled for level of LogRecord?: The handler checks if it is enabled for the level of the LogRecord. Similar to loggers, handlers can also have different levels.
Does a filter attached to the handler reject the record?: The handler checks if any filters attached to it reject the LogRecord.
Emit (includes formatting): If the handler is enabled for the level of the LogRecord and no filters reject it, the handler emits the log message. This typically involves formatting the message according to the handler's configuration and delivering it to its destination.
Set current logger to parent: If the propagate attribute of the current logger is set to True, the LogRecord is passed to its parent logger. This allows messages to be propagated up the logger hierarchy.
Is there a parent logger?: If the current logger has a parent, the record is passed to the parent's handlers and the handler flow above repeats. Note that during propagation only handler levels and filters are checked; the parent logger's own level is not re-evaluated.
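To see propagation in action, here is a minimal sketch with a parent/child logger pair; the names 'app' and 'app.db' are arbitrary:
import logging
# The parent logger owns the only handler
parent = logging.getLogger('app')
parent.setLevel(logging.DEBUG)
parent.addHandler(logging.StreamHandler())
# The child has no handler of its own; its records propagate to 'app'
child = logging.getLogger('app.db')
child.info('Handled by the parent logger via propagation')
# Setting propagate to False stops records from reaching the parent's handlers
child.propagate = False
child.info('This message is not emitted anywhere')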
How to Use Python Logging?
Using Python Logging is essential for recording events, debugging, and monitoring the behavior of your application. Here's a step-by-step guide on how to use Python logging effectively:
Import the Logging Module:
Python's logging module is part of the standard library, so you don't need to install anything extra to use it. It provides functionality for recording log messages at various levels of severity.
import logging
Create a Logger:
A logger is an object responsible for capturing and processing log messages. It acts as an entry point to the logging system. You can create a logger by calling getLogger(name) with a name of your choice. If no name is provided, the root logger is returned.
logger = logging.getLogger('my_logger')
Here, 'my_logger' is an arbitrary name chosen for the logger. You can use any name that suits your application.
Set the Log Level:
The log level determines the severity of the messages that will be recorded by the logger. You can set the log level using the setLevel(level) method. The available log levels, in increasing order of severity, are DEBUG, INFO, WARNING, ERROR, and CRITICAL.
logger.setLevel(logging.DEBUG)
In this example, we've set the log level to DEBUG, which means the logger will record all events at level DEBUG and above.
Create a Log Message:
Once the logger is set up, you can create log messages using various methods such as debug(msg), info(msg), warning(msg), error(msg), or critical(msg). The msg argument is the actual message you want to log.
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
logger.critical('This is a critical message')
Each of these methods corresponds to a different log level. The severity of the message determines whether it will be logged based on the current log level setting.
Add a Handler:
By default, when no handler is configured, messages of severity WARNING and above are written to the console. If you want to send messages to a file or any other destination, you need to add a handler to the logger. In this case, we'll add a FileHandler to send log messages to a file named my_log.log.
handler = logging.FileHandler('my_log.log')
logger.addHandler(handler)
This handler will ensure that log messages are written to the specified file.
Complete Code:
Putting it all together, the complete code looks like this:
import logging
# Create a logger
logger = logging.getLogger('my_logger')
# Set the log level
logger.setLevel(logging.DEBUG)
# Add a file handler
handler = logging.FileHandler('my_log.log')
logger.addHandler(handler)
# Log some messages
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
logger.critical('This is a critical message')
When you run this code, it will create a log file named my_log.log in the same directory, containing the logged messages. This file will provide valuable insights into the behavior of your application during runtime. 😊
Advanced Python Logging Techniques
Here are some of the advanced Python Logging Techniques:
Logging to Multiple Destinations:
Sometimes, it's necessary to send log messages to multiple destinations simultaneously. For instance, you might want to log errors to a file for archival purposes while also displaying them on the console for immediate visibility.
Here's how you can achieve this in Python logging technique using multiple handlers:
import logging
# Create a logger
logger = logging.getLogger('multi_destination_logger')
logger.setLevel(logging.DEBUG)
# Create console handler and set level to INFO
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)
logger.addHandler(console_handler)
# Create file handler and set level to DEBUG
file_handler = logging.FileHandler('multi_destinations.log')
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)
# Log messages
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
logger.critical('This is a critical message')
In this code:
We create a logger named 'multi_destination_logger'.
We set the logging level of the logger to DEBUG.
We create a StreamHandler to log messages to the console and set its level to INFO.
We create a FileHandler to log messages to a file named 'multi_destinations.log' and set its level to DEBUG.
We add both handlers to the logger.
Finally, we log messages at different levels, and they will be routed to both the console and the file.
Creating Custom Log Handlers:
Sometimes, the built-in log handlers may not fulfill specific requirements. In such cases, you can create custom log handlers tailored to your needs.
Here's a basic example of a custom log handler that sends log messages to an external service (e.g., an API endpoint):
import logging
import requests
class APILogHandler(logging.Handler):
    def __init__(self, url):
        super().__init__()
        self.url = url
    def emit(self, record):
        # Format the record and POST it to the external service
        log_entry = self.format(record)
        requests.post(self.url, data=log_entry)
# Create a logger
logger = logging.getLogger('custom_handler_logger')
logger.setLevel(logging.DEBUG)
# Create custom handler and set level to DEBUG
api_handler = APILogHandler(url='http://example.com/log')
api_handler.setLevel(logging.DEBUG)
logger.addHandler(api_handler)
# Log messages
logger.debug('This is a debug message sent to the API')
logger.info('This is an info message sent to the API')
In this code:
We define a custom log handler named APILogHandler that inherits from logging.Handler.
The emit() method is overridden to send log messages to an external service via an HTTP POST request.
We create an instance of APILogHandler with the URL of the external service and add it to the logger.
Log messages sent to this logger will be dispatched to the external service.
Contextual Logging with Filters:
Filters allow you to selectively process log records based on specific criteria. This enables contextual logging where different log records can be handled differently based on their attributes.
Here's a simple example of how to use filters for contextual logging:
import logging
class CustomFilter(logging.Filter):
    def filter(self, record):
        return record.msg.startswith('important')
# Create a logger
logger = logging.getLogger('filter_logger')
logger.setLevel(logging.DEBUG)
# Create console handler
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)
# Apply the filter to the handler
console_handler.addFilter(CustomFilter())
# Add the handler to the logger
logger.addHandler(console_handler)
# Log messages
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
logger.critical('This is a critical message')
logger.info('important message: This is a special message')
In this code:
We define a custom filter CustomFilter that filters log records based on whether the message starts with 'important'.
We create a console handler and add the filter to it.
Only log messages starting with 'important' will be outputted to the console.
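Filters can also enrich records instead of rejecting them, which is another way to achieve contextual logging. The sketch below attaches a request_id attribute to every record so the formatter can include it; the attribute name and value are illustrative choices:
import logging
class RequestContextFilter(logging.Filter):
    def __init__(self, request_id):
        super().__init__()
        self.request_id = request_id
    def filter(self, record):
        # Attach contextual data to the record; never reject it
        record.request_id = self.request_id
        return True
# Create a logger
logger = logging.getLogger('context_logger')
logger.setLevel(logging.INFO)
# Create a console handler whose format references the injected attribute
console_handler = logging.StreamHandler()
console_handler.setFormatter(logging.Formatter('%(asctime)s - %(request_id)s - %(message)s'))
console_handler.addFilter(RequestContextFilter(request_id='req-42'))
logger.addHandler(console_handler)
# Log a message; it carries the contextual request id
logger.info('This record carries contextual data')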
Asynchronous Logging:
Asynchronous logging involves performing logging operations in a separate thread or process to avoid blocking the main application's execution. This can improve performance, especially in high-throughput applications.
Here's a basic example using Python's built-in QueueHandler and QueueListener for asynchronous logging:
import logging
import logging.handlers
import queue
# Create a queue for log messages
log_queue = queue.Queue()
# Create a handler to put log records into the queue
queue_handler = logging.handlers.QueueHandler(log_queue)
# Create a logger
logger = logging.getLogger('async_logger')
logger.setLevel(logging.DEBUG)
logger.addHandler(queue_handler)
# Create a listener to process log records from the queue
listener = logging.handlers.QueueListener(log_queue, logging.StreamHandler())
listener.start()
# Log messages
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
# Stop the listener
listener.stop()
In this code:
We create a QueueHandler to put log records into a queue.
We create a logger and add the QueueHandler to it.
We create a QueueListener to process log records from the queue and add a StreamHandler to it to output logs to the console.
Log messages are put into the queue by the logger.
The listener processes log records from the queue and outputs them to the console.
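One detail worth knowing: by default, a QueueListener ignores the levels set on its handlers. Passing respect_handler_level=True (available since Python 3.5) makes the listener honor them, as in this small variation of the example above:
import logging
import logging.handlers
import queue
# Create a queue and a console handler that only accepts WARNING and above
log_queue = queue.Queue()
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)
# respect_handler_level=True makes the listener check each handler's level before emitting
listener = logging.handlers.QueueListener(log_queue, console_handler, respect_handler_level=True)
listener.start()
# Create a logger that puts records into the queue
logger = logging.getLogger('async_logger_filtered')
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.handlers.QueueHandler(log_queue))
# The INFO message is dropped by the console handler; the WARNING is emitted
logger.info('Dropped by the console handler')
logger.warning('Emitted by the console handler')
# Stop the listener
listener.stop()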
Python Logging Libraries
Here are some popular Python logging libraries:
Standard Library Logging Module: The built-in logging module is designed to meet the needs of beginners as well as enterprise teams. Because most third-party Python libraries use it, you can integrate your log messages with theirs to produce a homogeneous log for your application.
Loguru: Loguru is a third-party library that aims to bring enjoyable Python logging. It’s very user-friendly and requires minimal setup. It provides a simpler and more powerful syntax for logging.
Structlog: Structlog makes Python logging less painful and more powerful by adding structure to your log entries. It's easy to configure and works well with the standard library's logging module.
LogBook: LogBook is a logging system in Python that replaces the standard library’s logging module. It was designed with both complex and simple logging needs in mind.
Picologging: Picologging is a minimalistic logging library for Python. It’s lightweight and easy to use.
Standard Library Logging Module
The standard library logging module in Python is a ready-to-use and powerful module that is designed to meet the needs of beginners as well as enterprise teams. It’s used by most third-party Python libraries, so you can integrate your log messages with the ones from those libraries to produce a homogeneous log for your application.
The logging module is part of the standard Python library and provides tracking for events that occur while the software runs. You can add logging calls to your code to indicate what events have happened.
import logging
# Create a logger
logger = logging.getLogger('my_logger')
# Set the log level
logger.setLevel(logging.DEBUG)
# Log some messages
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
logger.critical('This is a critical message')
In this example, a logger named ‘my_logger’ is created. The log level is set to DEBUG, which means that all messages of level DEBUG and above will be tracked. Then, several messages are logged at different severity levels.
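For simple scripts, the module-level basicConfig() call configures the root logger in a single step; here is a minimal sketch (the file name is an arbitrary choice):
import logging
# One-shot configuration of the root logger: level, message format, and a file target
logging.basicConfig(
    filename='app.log',
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
)
logging.info('Configured with basicConfig')
logging.warning('Warnings are included as well')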
Pros:
Versatility: The logging module is very versatile. It can log to several destinations at once, format log messages in any way you want, and filter log messages based on severity.
Integration: Since it’s used by most third-party Python libraries, you can integrate your log messages with the ones from those libraries.
Flexibility: It provides a lot of flexibility and control over what gets logged, where it gets logged, and how.
Cons:
Complexity: The logging module can be a bit complex to set up for more advanced use cases.
Performance: For applications that require high-speed logging, the logging module might not be the best choice as it can slow down your application.
Loguru
Loguru is a third-party logging library in Python that aims to bring enjoyable logging. It’s designed to make logging in Python easier and more pleasant. Loguru provides a more concise API and additional features, making log recording painless.
from loguru import logger
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
logger.critical('This is a critical message')
In this example, a logger is imported from the Loguru module and then used to log messages at different severity levels.
Loguru also simplifies file logging with rotation, retention, and compression:
logger.add("file_{time}.log", rotation="500 MB")
# Automatically rotate too big file
logger.add("file.log", retention="10 days")
# Cleanup after some time
logger.add("file.log", compression="zip")
# Compress log at closure
Pros:
Simplicity: Loguru is pre-configured with a lot of useful functionality, allowing you to do common tasks without spending a lot of time messing with configurations.
Ease of Use: Loguru provides a simpler and more powerful syntax for logging.
Flexibility: You can format logs, filter, or specify destinations to send logs using only a single add() function.
Cons:
Dependency: As a third-party library, Loguru is an additional dependency for your project.
Compatibility: While Loguru is entirely compatible with the standard logging, some existing libraries or systems that are built around the standard logging module might not work seamlessly with Loguru.
Structlog
Structlog is an open-source logging library for Python known for its simple API, performance, and quality of life features. It’s designed to make Python logging less painful and more powerful by adding structure to your log entries. It’s up to you whether you want Structlog to take care of the output of your log entries or whether you prefer to forward them to an existing logging system like the standard library’s logging module.
from structlog import get_logger
logger = get_logger()
logger.info("This is an info message", key="value", key2="value2")
In this example, a logger is imported from the Structlog module and then used to log a message at the INFO level. The message is accompanied by some structured key-value pairs.
Pros:
Simplicity: Structlog comes with sensible defaults, so you can produce structured, key-value log output without spending a lot of time on configuration.
Ease of Use: Binding key-value context to a logger is straightforward, and the resulting entries are easy for both humans and machines to parse.
Flexibility: Output is shaped through a configurable chain of processors, and entries can be rendered as plain text or JSON or forwarded to the standard library's logging module.
Cons:
Dependency: As a third-party library, Structlog is an additional dependency for your project.
Compatibility: While Structlog is entirely compatible with the standard logging, some existing libraries or systems that are built around the standard logging module might not work seamlessly with Structlog.
LogBook
LogBook is a logging system for Python that replaces the standard library’s logging module. It was designed with both complex and simple applications in mind. This library is still under heavy development and the API is not fully finalized yet.
from logbook import Logger, StreamHandler
import sys
# Set up a default handler
StreamHandler(sys.stdout).push_application()
# Create a logger
log = Logger('A Fancy Name')
# Log some messages
log.debug('This is a debug message')
log.info('This is an info message')
log.warning('This is a warning message')
log.error('This is an error message')
log.critical('This is a critical message')
In this example, a logger named ‘A Fancy Name’ is created. Then, several messages are logged at different severity levels.
Pros:
Ease of Use: LogBook provides a simpler and more powerful syntax for logging.
Flexibility: LogBook is flexible and can be used for both simple and complex applications.
Compatibility: LogBook can work alongside the standard library’s logging module.
Cons:
Dependency: As a third-party library, LogBook is an additional dependency for your project.
Under Development: The library is still under heavy development and the API is not fully finalized yet.
Picologging
Picologging is a high-performance logging library for Python. It is designed to be used as a drop-in replacement for applications that already use the standard library’s logging module, and supports the same API as the logging module. Picologging is 4-10x faster than the logging module in the standard library.
import picologging as logging
logging.basicConfig()
logger = logging.getLogger()
logger.info("A log message!")
logger.warning("A log message with %s", "arguments")
In this example, Picologging is imported under the name logging, so the rest of the code keeps using the familiar logging-style API while Picologging's faster loggers and formatters do the work. Then, several messages are logged at different severity levels.
Pros:
Performance: Picologging is 4-10x faster than the logging module in the standard library.
Compatibility: Picologging is designed to be used as a drop-in replacement for applications that already use logging, and supports the same API as the logging module.
Cons:
Dependency: As a third-party library, Picologging is an additional dependency for your project.
Under Development: The library is still under heavy development and the API is not fully finalized yet.
Conclusion
To conclude, Python logging is an indispensable tool for software development. By mastering its concepts and techniques, you can improve the reliability and maintainability of your projects. From log levels to handlers and advanced techniques, effective logging practices will streamline your development workflow. Embrace Python logging as a powerful ally in your toolkit for building resilient and efficient applications.
Happy coding!