You are looking at draft pre-release documentation for the next release of Flows for APEX. Product features documented here may not be released, or may behave differently from what is documented here. You should not make any purchase decisions based on this draft documentation.

Logging Philosophy and Lifecycle

Why Logging Exists

Logging in Flows for APEX is designed to support three different needs at the same time:

  • operators need enough information to understand what a process instance is doing now
  • administrators need enough information to investigate failures and unexpected behavior
  • organizations may need an audit trail that can be retained beyond the life of the run-time records

These needs overlap, but they are not identical. Good logging balances visibility, performance, storage cost, and retention obligations.

The Core Philosophy

The logging philosophy in Flows for APEX is simple:

  • capture enough information to operate safely
  • add more detail only when you need it
  • archive what you must retain
  • purge what you no longer need in the run-time system

This is why Flows for APEX provides flexible levels rather than treating every process instance as a full forensic capture.

The Logging Lifecycle

The normal lifecycle of logging is:

  1. Capture: Run-time events are written while the process instance is executing.
  2. Observe: Operators and administrators use Flow Monitor and related views to understand current and recent behavior.
  3. Archive: After the process instance completes or terminates, an instance archive JSON document can be created for long-term retention.
  4. Purge: Raw run-time log records can be deleted after they are no longer needed operationally.
  5. Analyze: Archived records and aggregated statistics can be used for audit, trend analysis, and process improvement.
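The five steps above can be sketched as a single pass over one process instance. This is an illustrative sketch only: the function and field names are hypothetical placeholders, not the Flows for APEX API.

```python
import json

# Hypothetical sketch of the logging lifecycle for one process instance.
# None of these names are part of the real Flows for APEX API.

def run_lifecycle(instance_id, logs, archive_store):
    # 1. Capture: events are appended while the instance executes.
    logs.append({"instance": instance_id, "event": "step_completed"})

    # 2. Observe: operators query recent events (Flow Monitor's role).
    recent = [e for e in logs if e["instance"] == instance_id]

    # 3. Archive: after completion, bundle the events into a JSON document.
    archive_store[instance_id] = json.dumps(recent)

    # 4. Purge: raw run-time rows can now be deleted.
    logs[:] = [e for e in logs if e["instance"] != instance_id]

    # 5. Analyze: the archive remains available for audit and trends.
    return json.loads(archive_store[instance_id])
```

Note the ordering: the archive document is written before any raw rows are removed, so nothing is lost between steps 3 and 4.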

Why Retention Should Differ by Layer

Run-time logs and long-term archives solve different problems.

  • raw run-time logs are optimized for immediate investigation and Flow Monitor visibility
  • archive documents are optimized for retention and later retrieval
  • summary statistics are optimized for trend analysis rather than case-by-case forensics

Keeping all raw logs forever is usually unnecessary and expensive. Archiving first, then purging raw logs, is the normal operational pattern.

Environment Strategy

Different environments should normally use different logging strategies.

Development

Development systems often benefit from higher logging levels because developers are actively testing process behavior, expressions, and task routing.

Test and UAT

Test and UAT systems usually benefit from enough logging to diagnose failed test runs without creating unnecessary noise.

Production

Production systems should normally use a moderate default logging level and then temporarily raise logging only for the instance that needs investigation.
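This production pattern, a moderate default with a temporary per-instance override, can be sketched as a resolver that prefers an instance-specific setting when one exists. The level names and the override mechanism here are assumptions for illustration, not the product's configuration API.

```python
# Hypothetical logging-level resolver: a moderate system default,
# raised per instance only while an investigation is active.
SYSTEM_DEFAULT_LEVEL = "standard"

def effective_level(instance_id, overrides):
    """Return the instance-specific override if set, else the default."""
    return overrides.get(instance_id, SYSTEM_DEFAULT_LEVEL)
```

During an investigation you would set `overrides[instance_id] = "verbose"`, then delete the key when finished so that the instance returns to the moderate baseline along with everything else.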

Audit, Privacy, and Discipline

Logging is powerful because it records behavior that might otherwise be invisible. That also means it should be managed intentionally.

Consider:

  • how much user-identifying information should be stored
  • how long raw logs should remain in the database
  • where archive documents should be stored
  • which teams are allowed to increase logging levels on live instances

The best operational posture is deliberate, not maximal.
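One way to stay deliberate is to record the answer to each of these questions explicitly, for example as a small policy document reviewed alongside the deployment. The keys and values below are illustrative assumptions, not a product configuration format.

```python
# Illustrative governance record: each retention and access decision
# made explicit rather than left implicit. Keys and values are
# assumptions, not a Flows for APEX configuration format.
logging_policy = {
    "store_user_identifiers": "username only, no client IP addresses",
    "raw_log_retention_days": 30,
    "archive_location": "write-once object storage",
    "roles_allowed_to_raise_logging": ["workflow-administrators"],
}
```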

This page explains the mental model. For implementation details, see the admin pages.
