TensorFlow oneDNN Custom Operations Warning

Problem

When importing TensorFlow, you may encounter this informational warning:

bash
tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.

This indicates that oneDNN (Intel's oneAPI Deep Neural Network Library) is enabled by default in recent TensorFlow versions. It improves CPU performance, but floating-point operations may execute in different orders, producing minute numerical differences (generally negligible for most applications).

Common causes of persistence:

  • Environment variable set too late (after TensorFlow import)
  • Incorrect variable setting method
  • Terminal, IDE, or system not restarted after setting the variable (common on Windows)

Solutions

1. Set Environment Variable Before Import (Python)

python
import os
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'  # MUST be before TensorFlow import
import tensorflow as tf
  • Critical: The variable must be set before TensorFlow is imported; once TensorFlow initializes, changing it has no effect
  • Works across operating systems and resolves the warning in the vast majority of cases
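To see why the ordering matters, here is a minimal, self-contained sketch. It uses a hypothetical `fake_lib` module (not TensorFlow itself) that, like TensorFlow, reads the environment variable once at import time:

```python
import os
import sys
import types

# Start from a clean slate for the demonstration.
os.environ.pop("TF_ENABLE_ONEDNN_OPTS", None)

# Hypothetical stand-in for a library that reads an environment
# variable once, when it is first imported.
fake_lib = types.ModuleType("fake_lib")
fake_lib.ONEDNN_ON = os.environ.get("TF_ENABLE_ONEDNN_OPTS", "1") != "0"
sys.modules["fake_lib"] = fake_lib

# Setting the variable *after* import has no effect on the
# already-initialized module state:
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"
import fake_lib  # returns the cached module from sys.modules

print(fake_lib.ONEDNN_ON)  # still True: the value was read at import time
```

The same mechanism explains why the warning persists if you export the variable in one terminal but run your script from another, or set it after `import tensorflow` has already executed.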

2. Terminal/Shell Configuration (Linux/macOS/WSL)

bash
# For the current shell session only
export TF_ENABLE_ONEDNN_OPTS=0
python your_script.py

# Or scoped to a single command
TF_ENABLE_ONEDNN_OPTS=0 python your_script.py

# Permanent solution (add to ~/.bashrc or ~/.zshrc)
echo 'export TF_ENABLE_ONEDNN_OPTS=0' >> ~/.bashrc
source ~/.bashrc

3. Windows System Configuration

  1. Open Start Menu → Search "Edit environment variables"
  2. Under "System variables", click New
  3. Variable name: TF_ENABLE_ONEDNN_OPTS
  4. Variable value: 0
  5. Restart your terminal/IDE for changes to apply (a full reboot is rarely needed, but ensures every process picks up the change)
  6. Verify with:
    python
    import os
    print(os.environ.get('TF_ENABLE_ONEDNN_OPTS'))  # Should output '0'

Why This Warning Appears

  • Performance optimization: oneDNN accelerates CPU computations
  • Numerical differences: oneDNN may reorder floating-point operations, and floating-point arithmetic is not associative, so results can differ slightly
  • TensorFlow defaults: Enabled by default in recent CPU builds (e.g. the tensorflow-intel package on Windows)
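The numerical differences come from floating-point non-associativity, which a few lines of plain Python demonstrate (no TensorFlow required):

```python
# Floating-point addition is not associative: summing the same
# numbers in a different order can give slightly different results.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)      # False
print(abs(left - right))  # tiny, on the order of 1e-16
```

This is the same effect, at miniature scale, that oneDNN's reordered computations produce across a whole model.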

Should You Disable oneDNN?

  • Keep enabled for: Performance-critical CPU tasks, training/inference speed
  • Disable for:
    • Reproducing exact numerical results from older TensorFlow versions
    • Debugging minute numerical discrepancies
    • When warnings interfere with your workflow

Best Practices

  1. Set early: Configure environment variables before any TensorFlow import
  2. Consistency: Use the same oneDNN setting across all environments
  3. Test: Verify differences are negligible for your specific use case
  4. Log suppression: Combine with:
    python
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # '2' hides INFO and WARNING messages
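The practices above can be combined into one short header at the very top of your script (a sketch; whether to disable oneDNN is your call, per the trade-offs discussed earlier):

```python
# Set both variables before any TensorFlow import.
import os

os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'  # disable oneDNN custom operations
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'   # hide INFO and WARNING messages

# import tensorflow as tf  # import only after the variables are set

# Sanity check: confirm the values TensorFlow will see
print(os.environ['TF_ENABLE_ONEDNN_OPTS'])
print(os.environ['TF_CPP_MIN_LOG_LEVEL'])
```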

Alternative: Full Warning Suppression

Hide all log output below FATAL (use cautiously, since genuine errors are hidden too):

python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # '3' hides INFO, WARNING, and ERROR (FATAL still shown)
import tensorflow as tf