You are looking at draft pre-release documentation for the next release of Flows for APEX. Product features documented here may not be released, or may behave differently from what is described. You should not make any purchase decisions based on this draft documentation.

Introduction

Flows for APEX includes a flexible event logging system that can be used for day-to-day operations, incident investigation, audit and compliance, and long-term analysis of process behavior.

You do not need to enable logging to run workflows. However, if logging is disabled, or if the raw log records have already been purged, the Flow Monitor cannot show detailed execution history for a process instance.

Logging in Flows for APEX serves four main purposes:

  • runtime monitoring of currently running process instances
  • audit and forensic review of completed process instances
  • developer and administrator debugging
  • longer term analysis of process behavior and performance

Like any audit trail, logging consumes CPU, storage, and operational attention. The right approach is to capture the level of detail you actually need, retain it for as long as you need it, and archive it before purging raw run-time records.

For the broader design principles behind this approach, see Logging Philosophy and Lifecycle.

What Gets Logged

Flows for APEX can log four different categories of information.

Diagram Events

Diagram logging records design-time changes to BPMN models. This is typically used to provide governance over production models.

Diagram logging can record:

  • changes to diagram metadata such as version or status
  • when a diagram is promoted to released
  • a copy of the BPMN model each time a released model is changed or archived

Diagram logging is controlled separately from run-time instance logging.

Instance Events

Instance events record major process-level activity such as creation, start, completion, termination, reset, and error handling.

These events are especially useful when something happens outside the end user’s immediate page flow, for example:

  • timer-driven execution
  • background processing
  • script task errors
  • variable expression failures
  • administrator intervention such as suspend, resume, rewind, or restart

When an instance enters error status, the instance log can include additional detail in error_info, which is often the fastest place to start an investigation.

Step Events

Step events record what happened at task and subflow level. These records show how work progressed through the process.

Typical step details include:

  • when a step became current
  • when work started
  • when work completed
  • reservation and due-date changes
  • process state when the step completed
  • subflow and sub-process context

Process Variable Events

Variable event logging records the new value each time a process variable is created or changed.

This is the most detailed run-time logging and is primarily intended for deep troubleshooting. It is very valuable during debugging, but it also creates the most data volume.

Logging Levels

The operator-visible logging levels are:

Level   Name                 Typical use
0       none                 Disable run-time logging for low-risk or high-volume scenarios
1       abnormal events      Capture errors, warnings, restarts, and unusual interventions
2       major events         Normal production default for lifecycle visibility
4       routine              Adds more operational detail for diagnosis and monitoring
7       AI intent auditing   Enterprise Edition AI rationale, recommendation, and intent/audit logging without turning on full variable tracing
8       full / debug         Includes process-variable logging; use selectively

In practice:

  • use 2 for most production systems
  • use 4 when you want richer operational detail on an active system
  • use 7 when investigating AI-assisted behavior and you need to understand why the engine recommended or initiated actions
  • use 8 temporarily on a problematic instance when you need full debug detail

Level 7 is intended for AI explainability and intent auditing, particularly for Enterprise Edition AI-assisted ad-hoc sub-processes. It sits between routine operational logging and full debug logging, so that AI rationale and recommendation outcomes can be captured without necessarily logging every process-variable change.

How the Effective Logging Level Is Chosen

Logging can be influenced at three points:

  • the system default configured in the Flows for APEX application
  • the diagram minimum logging level set on the BPMN process object
  • the logging level requested for an individual process instance

When a new process instance is created, Flows for APEX calculates the initial instance logging level from these sources. In effect, the instance starts at the higher of:

  • the requested instance logging level, and
  • the minimum level coming from the diagram or, if no diagram minimum is set, the system default

This means diagram and system settings establish a floor for new instances. From Flows for APEX v26.1 onwards, an administrator can also change the logging level for a running instance directly. See Managing Instance Logging Levels.
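The floor behavior described above can be sketched in a few lines of Python. This is an illustration of the documented rule only; the function and parameter names are invented for this sketch, not the engine's actual identifiers.

```python
# Sketch of the initial-logging-level calculation described above.
# Level numbers follow the documented scale (0, 1, 2, 4, 7, 8);
# names here are illustrative, not the Flows for APEX API.

def initial_logging_level(requested, diagram_minimum=None, system_default=2):
    """Return the level a new process instance starts at.

    The diagram minimum, or the system default when no diagram
    minimum is set, acts as a floor under the requested level.
    """
    floor = diagram_minimum if diagram_minimum is not None else system_default
    return max(requested, floor)

# A higher requested level always wins over a lower floor...
print(initial_logging_level(requested=8, diagram_minimum=2))  # 8
# ...and the floor wins over a lower request.
print(initial_logging_level(requested=0, diagram_minimum=4))  # 4
# With no diagram minimum, the system default is the floor.
print(initial_logging_level(requested=1))                     # 2
```

Note that the floor only applies when the instance is created; from v26.1 an administrator can later override the level on a running instance.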

How Logging Is Used Operationally

The most common operator workflow is:

  1. Run most production instances at a moderate level such as 2.
  2. If one instance starts behaving unexpectedly, raise that one instance to 4, 7, or 8 depending on the type of diagnosis required.
  3. Reproduce or observe the problem.
  4. Review the instance and step logs in Flow Monitor.
  5. Reduce the instance logging level again once the issue is understood.

As a rule of thumb:

  • choose 4 for general operational diagnosis
  • choose 7 for AI intent and rationale auditing
  • choose 8 when you also need full variable-level debug detail

This approach keeps storage and background processing overhead under control while still allowing deep diagnosis when needed.

Configuration and Retention

Logging requires configuration so that administrators can decide:

  • whether run-time logging is enabled by default
  • how much detail should be captured
  • whether user identifiers should be hidden in logs
  • whether diagram archives and instance archives should be created
  • where archives should be stored
  • how long raw log records should be retained before purging

Configuration details are documented in Configuration Parameters.

Archiving and Purging

Flows for APEX separates short-term operational logging from longer-term retention.

The normal lifecycle is:

  1. log run-time events while the instance is active
  2. create an instance archive document after completion or termination
  3. keep the archive in a long-term store such as OCI Object Storage or a database table
  4. purge raw run-time log records when they are no longer needed

The instance archive is a single JSON document containing the logged execution history for one process instance. What appears in that archive depends on the logging level that was active while the instance ran.
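As an illustration only, a minimal instance archive might look like the fragment below. The field names here are invented for this sketch; the real archive schema is defined by Flows for APEX, and the events actually present depend on the logging level in force while the instance ran.

```json
{
  "process_id": 1234,
  "diagram_name": "Order Fulfilment",
  "status": "completed",
  "instance_events": [
    { "event": "created",   "timestamp": "2025-01-10T09:00:00Z" },
    { "event": "completed", "timestamp": "2025-01-10T09:02:17Z" }
  ],
  "step_events": [],
  "variable_events": []
}
```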

For archive and purge operations, see Instance Archiving and Purging.

APEX Automations

Archiving, purging, and statistics collection are typically run using APEX Automations installed with the Flows for APEX application.

After installation, these automations are present but disabled. You should review and enable them in APEX Application Builder under Shared Components > Automations.

If enabling an automation raises ORA-01031: insufficient privileges, the Flows for APEX workspace parsing schema is missing the CREATE JOB privilege required by DBMS_SCHEDULER.
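The fix is a standard Oracle grant, run as a suitably privileged user. MY_PARSING_SCHEMA below is a placeholder; substitute your workspace's actual parsing schema.

```sql
-- Run as SYS or another user able to grant system privileges.
-- MY_PARSING_SCHEMA is a placeholder for the workspace parsing schema.
GRANT CREATE JOB TO my_parsing_schema;
```

After granting the privilege, re-enable the automation in Shared Components > Automations.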
