
Logging in TorchServe

In this document we will go through the logging mechanism in TorchServe and how to modify the logging behavior of the model server. Logging in TorchServe also covers metrics, since metrics are logged into a file. To further understand how to customize metrics or define custom logging layouts, refer to the metrics document.

Pre-requisites

Before getting into this tutorial, you must familiarize yourself with the log4j configuration properties. Refer to this online document on how to configure the log4j parameters. Similarly, familiarize yourself with the default log4j.properties used by TorchServe.

Types of logs

TorchServe currently provides two types of logs.

  1. Access Logs.

  2. TorchServe Logs.

Access Logs:

These logs collect the access pattern to TorchServe. The configuration pertaining to access logs is as follows:

log4j.logger.ACCESS_LOG = INFO, access_log


log4j.appender.access_log = org.apache.log4j.RollingFileAppender
log4j.appender.access_log.File = ${LOG_LOCATION}/access_log.log
log4j.appender.access_log.MaxFileSize = 100MB
log4j.appender.access_log.MaxBackupIndex = 5
log4j.appender.access_log.layout = org.apache.log4j.PatternLayout
log4j.appender.access_log.layout.ConversionPattern = %d{ISO8601} - %m%n

As defined in the properties file, the access logs are collected in the ${LOG_LOCATION}/access_log.log file. When we load TorchServe with a model and run inference against the server, the following logs are collected into access_log.log:

2018-10-15 13:56:18,976 [INFO ] BackendWorker-9000 ACCESS_LOG - /127.0.0.1:64003 "POST /predictions/resnet-18 HTTP/1.1" 200 118

The above log tells us that a successful POST call to /predictions/resnet-18 was made by remote host 127.0.0.1:64003, and that it took 118 ms to complete the request.
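
For example, assuming a model named resnet-18 is already registered and the default inference port 8080 is in use (both assumptions for illustration), a request such as the following should produce an access log entry similar to the one above, where kitten.jpg stands in for any valid input file:

$ curl -X POST http://127.0.0.1:8080/predictions/resnet-18 -T kitten.jpg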

These logs are useful for determining the current performance of the model server, as well as for understanding the requests it receives.

TorchServe Logs

These logs collect all the logs from TorchServe and from the backend workers (the custom model code). The default configuration pertaining to TorchServe logs is as follows:

log4j.logger.com.amazonaws.ml.ts = DEBUG, ts_log


log4j.appender.ts_log = org.apache.log4j.RollingFileAppender
log4j.appender.ts_log.File = ${LOG_LOCATION}/ts_log.log
log4j.appender.ts_log.MaxFileSize = 100MB
log4j.appender.ts_log.MaxBackupIndex = 5
log4j.appender.ts_log.layout = org.apache.log4j.PatternLayout
log4j.appender.ts_log.layout.ConversionPattern = %d{ISO8601} [%-5p] %t %c - %m%n

By default, this configuration dumps all logs at DEBUG level and above.

Generating and logging custom logs

As a user of TorchServe, you might want to write custom logs into the log files, whether for debugging or to record errors. To accomplish this, simply print the required messages to stdout/stderr; TorchServe captures the output generated by the backend workers and writes it into the log file. Some examples of such logs are as follows (a minimal handler sketch that produces this kind of output follows the examples):

  1. Messages printed to stderr

    2018-10-14 16:46:51,656 [WARN ] W-9000-stderr org.pytorch.serve.wlm.WorkerLifeCycle - [16:46:51] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v0.8.0. Attempting to upgrade...
    2018-10-14 16:46:51,657 [WARN ] W-9000-stderr org.pytorch.serve.wlm.WorkerLifeCycle - [16:46:51] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
    
  2. Messages printed to stdout

    2018-10-14 16:59:59,926 [INFO ] W-9000-stdout org.pytorch.serve.wlm.WorkerLifeCycle - preprocess time: 3.60
    2018-10-14 16:59:59,926 [INFO ] W-9000-stdout org.pytorch.serve.wlm.WorkerLifeCycle - inference time: 117.31
    2018-10-14 16:59:59,926 [INFO ] W-9000-stdout org.pytorch.serve.wlm.WorkerLifeCycle - postprocess time: 8.52
    
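As a hedged sketch of where such messages might originate, consider the handler below. The file and class names are hypothetical and not part of TorchServe's API; the point is simply that anything the backend worker writes to stdout or stderr is captured by TorchServe and written into the log file under the W-<port>-stdout and W-<port>-stderr loggers shown above.

# custom_handler.py -- hypothetical file and class names, used only to illustrate logging.
import sys
import time


class MyHandler:
    """Illustrative handler; a real handler would also load the model and run inference."""

    def handle(self, data, context):
        start = time.time()
        # Anything written to stdout is captured under the worker's W-<port>-stdout logger.
        print("received batch of size: {}".format(len(data)))

        predictions = ["OK"] * len(data)  # placeholder for real inference output

        # Anything written to stderr is captured under the worker's W-<port>-stderr logger.
        print("inference time: {:.2f} ms".format((time.time() - start) * 1000), file=sys.stderr)
        return predictions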

Modifying the behavior of the logs

To modify the default logging behavior, you can define a custom log4j.properties file. There are two ways to start TorchServe with custom logs.

Once you define a custom log4j.properties, the first option is to reference it in the config.properties file as follows:

vmargs=-Dlog4j.configuration=file:///path/to/custom/log4j.properties

Then start TorchServe as follows:

$ torchserve --start --ts-config /path/to/config.properties

Alternatively, you can start TorchServe with the following command:

$ torchserve --start --log-config /path/to/custom/log4j.properties
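
For reference, a custom log4j.properties usually starts from the default file and overrides only what is needed. The sketch below is illustrative (the level and rolling-file sizes are example values, not defaults): it raises the TorchServe logger from DEBUG to INFO and keeps smaller rolling files, while reusing the appender and layout shown earlier.

log4j.logger.com.amazonaws.ml.ts = INFO, ts_log

log4j.appender.ts_log = org.apache.log4j.RollingFileAppender
log4j.appender.ts_log.File = ${LOG_LOCATION}/ts_log.log
log4j.appender.ts_log.MaxFileSize = 10MB
log4j.appender.ts_log.MaxBackupIndex = 3
log4j.appender.ts_log.layout = org.apache.log4j.PatternLayout
log4j.appender.ts_log.layout.ConversionPattern = %d{ISO8601} [%-5p] %t %c - %m%n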

Enable asynchronous logging

If your model is lightweight and you are seeking high throughput, consider enabling asynchronous logging. Note that log output may be delayed, and the most recent logs might be lost if TorchServe is terminated unexpectedly. Asynchronous logging is disabled by default. To enable it, add the following property in config.properties:

async_logging=true
