Logging
1. Simulation and logging
Detailed log files help to monitor the evolution of a model’s internal state during the simulation period. Subsequent analysis of the log files generated by the model uncovers the decisions made during simulation and thus helps to explain the reasoning behind the observed model behavior.
Here are a few examples of raw data that can be logged when running a Supply Chain model:
- The current status and position of each truck at every moment when a decision is made about which truck should run the next transportation task.
- The number of items waiting to be transported from/to each storage area (i.e., transportation queue sizes).
- The total time en route for each truck at the start of each day.
With this data, we can find answers to the following questions:
- Why was Truck A preferred over Truck B at a particular moment in time?
- Are there any overloaded storage areas?
- Are all vehicles used evenly?
Ultimately, logging should provide some thought-provoking data and facilitate insights on why the model works one way or another. This understanding is particularly advantageous for debugging and is indispensable when matching the simulated results back to the real world.
But why can’t we just keep the simulation event data in memory?
The first reason is data size. The memory footprint of a running model can already be quite large: consider a model that occupies half of a computer’s RAM. Any additional data the model keeps in memory raises the requirements further and may make it impossible to run the simulation on some hardware. The storage space available for log files on an HDD/SSD, on the other hand, typically exceeds the size of RAM.
Another reason is the desirable separation between obtaining the simulation data and processing that data to get insights. With this separation in action, you can run the model, save its log files, and process them later. You can also run the model with different input data sets and compare the log data collected from several model executions to see how changes to the input data affect the model behavior.
2. The Logger class
The Amalgama Platform contains the Logger class with the following features:
- Logging is session-based: a logging session is started first, then log messages are added, and finally the session is closed. There may be at most one active logging session.
- A log message is an instance of a properly annotated Java record.
- All messages of the same type (i.e., instances of the same Java record) are written to the same log file, so there is only one file per message topic.
- Internally, a log file is a tab-separated text file with the ".log" extension.
- One instance of the Logger class can be used concurrently by several message sources (such as simulation models running at the same time).
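Taken together, a typical logging lifecycle looks like the sketch below. The folder path is illustrative, and DeliveryLogRecord is the message type defined in the next section; the individual steps are covered in detail in sections 4–6.
Logger logger = new Logger();   // no active session yet, so logging is disabled
logger.openSession("d:/logs");  // start a session; old *.log files in the folder are removed
logger.log(new DeliveryLogRecord("delivery-1", "loc-32", "loc-89"));  // written to Deliveries.log
logger.closeSession();          // flush accepted messages to disk and close the files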
3. Declare a message body type
A message body type is a public Java record annotated with @LogName, whose fields may optionally be annotated with @LogColumnName.
The @LogName annotation defines the message topic and also sets the name of the log file.
Here is an example:
@LogName("Deliveries")
public record DeliveryLogRecord(
    @LogColumnName("Delivery Id") String id,
    @LogColumnName("Source Location Id") String sourceLocId,
    @LogColumnName("Dest Location Id") String destLocId
) {}
Messages of this message body type will be stored in the 'Deliveries.log' file. The file will have three message body columns ("Delivery Id", "Source Location Id", and "Dest Location Id") and a header line with the column names. The column header names are set by the @LogColumnName annotation; if a record field is missing this annotation, the field name is used instead (e.g., "id", "sourceLocId", and "destLocId").
Column order in the log file is the same as in its defining Java Record.
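For illustration, a message body type that omits the @LogColumnName annotations entirely (a hypothetical CancellationLogRecord, not part of the example above) would fall back to the field names as column headers:
@LogName("Cancellations")
public record CancellationLogRecord(
    String deliveryId,  // column header: "deliveryId"
    String reason       // column header: "reason"
) {}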
4. Create a logger and start a logging session
A newly created Logger instance has no active logging session, so logging is initially disabled. To start a new logging session, use the openSession(String) method.
This method:
- Closes the current logging session (if any);
- Prepares the passed logging folder by removing all old *.log files in it;
- Creates a new logging session and enables logging in this Logger instance.
Example:
Logger logger = new Logger();
logger.openSession("d:/logs");
A new logging session is started. All old *.log files in the supplied folder are deleted. The logger is ready to accept log messages.
5. Write a simple log message
To write a log message, call the Logger.log() method:
logger.log(new DeliveryLogRecord("delivery-1", "loc-32", "loc-89"));
A new log file (named "Deliveries.log", as defined by the @LogName annotation of the DeliveryLogRecord) is created. A header line and the passed message are written to it. The header line and each message in this log file contain three columns.
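Assuming only the call above has been made, the resulting Deliveries.log would look roughly like this (columns are separated by tab characters; the exact formatting may differ):
Delivery Id	Source Location Id	Dest Location Id
delivery-1	loc-32	loc-89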
6. Close the logging session
This method closes the current logging session, if any:
logger.closeSession();
The logging session is closed. Any messages that have been accepted by the logger instance are flushed to disk, and the output log files are closed.
Any attempt to write a message beyond this point will be silently ignored, unless a new logging session is started.
Upon application exit, all active logging sessions are closed automatically.
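If the application keeps running after the model finishes (for example, to process the log files later), a simple pattern, using only the API shown above, is to close the session in a finally block so that it runs even if the model code throws:
Logger logger = new Logger();
logger.openSession("d:/logs");
try {
    // run the model and write log messages here
    logger.log(new DeliveryLogRecord("delivery-1", "loc-32", "loc-89"));
} finally {
    // flush accepted messages to disk and close the output files
    logger.closeSession();
}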
7. Write a log message with a message source key
It is possible to write messages related to the same topic from several threads simultaneously. Concurrent logging is particularly useful when a single application instance runs a batch of simulation experiments on a multi-core computer: each model instance created inside that application runs in its own thread but writes to a shared set of log files. In that case we need to record which simulation experiment produced each log message.
To do so, define a public Java record whose instances will serve as message source descriptors. Here is an example:
public record ExperimentDescriptor(
    @LogColumnName("Scenario file name") String scenarioFileName,
    @LogColumnName("Scenario name") String scenarioName,
    @LogColumnName("Seed") int randomSeed
) {}
The ExperimentDescriptor is a message source key type, i.e., a public Java record whose fields may optionally be annotated with @LogColumnName.
When logging a message, use the overloaded Logger.log(Record, Record) method (remember to start a logging session first):
logger.log(
    new DeliveryLogRecord("delivery-1", "loc-32", "loc-89"),
    new ExperimentDescriptor("scenario-1.xlsx", "master scenario", 12)
);
The output log file "Deliveries.log" will contain 6 columns:
- "Scenario file name"
- "Scenario name"
- "Seed"
- "Delivery Id"
- "Source Location Id"
- "Dest Location Id"
Columns 1-3 come from the ExperimentDescriptor, and columns 4-6 are the message body columns from the DeliveryLogRecord.
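For the single call above, the combined Deliveries.log would therefore look roughly like this (tab-separated; exact formatting may differ):
Scenario file name	Scenario name	Seed	Delivery Id	Source Location Id	Dest Location Id
scenario-1.xlsx	master scenario	12	delivery-1	loc-32	loc-89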
Now we can run hundreds of simulation experiments and collect all the data in a "one file per topic" fashion.
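As an illustration, a hypothetical batch runner could share one Logger instance across worker threads as sketched below. The ExperimentBatch class, the scenario list, and the runSimulation placeholder are assumptions for this example and not part of the platform API; only openSession, log(Record, Record), and closeSession come from the Logger class described above (its import is omitted because the package name is not shown here).
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExperimentBatch {

    public static void main(String[] args) throws InterruptedException {
        Logger logger = new Logger();
        logger.openSession("d:/logs");

        List<ExperimentDescriptor> experiments = List.of(
                new ExperimentDescriptor("scenario-1.xlsx", "master scenario", 12),
                new ExperimentDescriptor("scenario-1.xlsx", "master scenario", 13),
                new ExperimentDescriptor("scenario-2.xlsx", "stress test", 12));

        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            for (ExperimentDescriptor experiment : experiments) {
                // Each experiment runs in its own thread but writes to the shared log files.
                pool.submit(() -> runSimulation(logger, experiment));
            }
        } finally {
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            logger.closeSession();
        }
    }

    // Placeholder for a single simulation run; the real model code goes here.
    private static void runSimulation(Logger logger, ExperimentDescriptor experiment) {
        logger.log(new DeliveryLogRecord("delivery-1", "loc-32", "loc-89"), experiment);
    }
}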