Decision Tree

The UQ Decision Framework

This section presents a decision tree for selecting an appropriate uncertainty quantification (UQ) method based on your computational budget and the type of uncertainty you need to capture.

Decision Tree Structure

Start: Do you need uncertainty quantification?
|
+-- No -> Use standard deterministic model
|
+-- Yes -> What are your computational constraints?
    |
    +-- Tight budget (minimal overhead)
    |   +-- Post-hoc calibration methods
    |       - Temperature scaling
    |       - Platt scaling
    |       - Isotonic regression
    |
    +-- Moderate budget (2-5x inference cost)
    |   +-- Consider:
    |       - Test-time augmentation
    |       - Monte Carlo Dropout
    |       - Small ensembles (3-5 models)
    |
    +-- Flexible budget (>5x inference cost)
        +-- What type of uncertainty is most important?
            |
            +-- Epistemic (model uncertainty)
            |   - Deep ensembles
            |   - Bayesian neural networks
            |   - Variational inference
            |
            +-- Aleatoric (data uncertainty)
            |   - Heteroscedastic neural networks
            |   - Mixture density networks
            |
            +-- Both
                - Full Bayesian treatment
                - Ensemble of heteroscedastic models
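
The branching logic above can also be written as a small helper for quick lookups. This is an illustrative sketch in Python: the recommend_uq_method name and the exact budget thresholds are assumptions, not part of the framework.

    def recommend_uq_method(needs_uq, budget_multiplier, uncertainty_type="epistemic"):
        """Encode the decision tree above.

        budget_multiplier: acceptable inference cost relative to a single
            deterministic forward pass (e.g. 1.0, 3.0, 10.0).
        uncertainty_type: "epistemic", "aleatoric", or "both".
        """
        if not needs_uq:
            return ["standard deterministic model"]
        if budget_multiplier <= 1.5:       # tight budget: post-hoc calibration
            return ["temperature scaling", "Platt scaling", "isotonic regression"]
        if budget_multiplier <= 5:         # moderate budget
            return ["test-time augmentation", "Monte Carlo Dropout",
                    "small ensemble (3-5 models)"]
        # flexible budget: choose by the dominant type of uncertainty
        if uncertainty_type == "epistemic":
            return ["deep ensembles", "Bayesian neural networks", "variational inference"]
        if uncertainty_type == "aleatoric":
            return ["heteroscedastic neural networks", "mixture density networks"]
        return ["full Bayesian treatment", "ensemble of heteroscedastic models"]

    print(recommend_uq_method(True, 3.0))  # -> the moderate-budget options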

Detailed Method Selection

When to Use Post-hoc Calibration

Best for:

  • Existing deployed models that need calibration

  • Very tight computational budgets

  • Quick improvements to confidence calibration

Limitations:

  • Does not capture epistemic uncertainty

  • Calibration fitted on in-distribution data often transfers poorly to out-of-distribution inputs
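
A minimal temperature-scaling sketch in PyTorch is shown below. It assumes you already have logits and labels from a held-out validation set; fit_temperature and the variable names are placeholders.

    import torch
    import torch.nn.functional as F

    def fit_temperature(val_logits, val_labels, max_iter=200):
        """Fit a single temperature T on held-out logits by minimizing NLL."""
        log_t = torch.zeros(1, requires_grad=True)        # optimize log(T) so T stays positive
        optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)

        def closure():
            optimizer.zero_grad()
            loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
            loss.backward()
            return loss

        optimizer.step(closure)
        return log_t.exp().item()

    # Usage (val_logits: [N, C] float tensor, val_labels: [N] tensor of class ids):
    # T = fit_temperature(val_logits, val_labels)
    # calibrated_probs = F.softmax(test_logits / T, dim=1)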

When to Use Monte Carlo Dropout

Best for:

  • Moderate computational budgets

  • Existing models with dropout layers

  • Need for epistemic uncertainty estimates

Limitations:

  • May underestimate uncertainty, especially far from the training distribution

  • Requires careful tuning of dropout rates
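
A minimal MC Dropout sketch in PyTorch, assuming the model already contains nn.Dropout layers; the pass count is an arbitrary choice.

    import torch

    def mc_dropout_predict(model, x, n_passes=30):
        """Average n stochastic forward passes with dropout left on.

        Returns the mean prediction and the variance across passes,
        a common proxy for epistemic uncertainty.
        """
        model.train()   # keeps nn.Dropout stochastic; note this also affects BatchNorm,
                        # so enable only the Dropout modules if the model uses BatchNorm
        with torch.no_grad():
            preds = torch.stack([model(x) for _ in range(n_passes)])  # [n_passes, N, C]
        model.eval()
        return preds.mean(dim=0), preds.var(dim=0)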

When to Use Ensembles

Best for:

  • High-stakes applications requiring robust uncertainty

  • Projects with sufficient computational resources

  • Need for both epistemic and aleatoric uncertainty (if each member also predicts a variance)

Limitations:

  • Higher training and inference costs

  • Increased model storage requirements
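
A deep-ensemble sketch in PyTorch: train several independently initialized copies of the same architecture and combine their predictions at test time. make_model and train_one_model are hypothetical helpers standing in for your own training code.

    import torch

    def ensemble_predict(models, x):
        """Average class probabilities over ensemble members.

        The mean is the predictive distribution; the variance across
        members is a simple epistemic-uncertainty signal.
        """
        with torch.no_grad():
            probs = torch.stack([torch.softmax(m(x), dim=1) for m in models])  # [M, N, C]
        return probs.mean(dim=0), probs.var(dim=0)

    # Training sketch (make_model and train_one_model are your own helpers):
    # models = []
    # for seed in range(5):
    #     torch.manual_seed(seed)       # different random init per member
    #     model = make_model()
    #     train_one_model(model)        # ordinary supervised training loop
    #     models.append(model)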

When to Use Bayesian Methods

Best for:

  • Small datasets where epistemic uncertainty is crucial

  • Research projects with computational resources

  • Applications requiring principled uncertainty quantification

Limitations:

  • Significant computational overhead

  • Implementation complexity

  • Potential approximation errors
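
A compact mean-field variational layer (Bayes-by-Backprop style) illustrates what a Bayesian treatment involves. This is a sketch under standard mean-field assumptions, not a full training recipe, and VariationalLinear is a name introduced here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VariationalLinear(nn.Module):
        """Linear layer with a factorized Gaussian posterior over weights."""

        def __init__(self, in_features, out_features):
            super().__init__()
            self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
            self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
            self.b = nn.Parameter(torch.zeros(out_features))

        def forward(self, x):
            w_sigma = F.softplus(self.w_rho)                      # positive std dev
            w = self.w_mu + w_sigma * torch.randn_like(w_sigma)   # reparameterization trick
            return F.linear(x, w, self.b)

        def kl(self, prior_sigma=1.0):
            """KL from the weight posterior to a N(0, prior_sigma^2) prior."""
            w_sigma = F.softplus(self.w_rho)
            return (torch.log(prior_sigma / w_sigma)
                    + (w_sigma ** 2 + self.w_mu ** 2) / (2 * prior_sigma ** 2)
                    - 0.5).sum()

    # Training minimizes task_loss + kl_weight * sum of kl() over variational layers;
    # prediction averages several stochastic forward passes, as with MC Dropout.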

Implementation Considerations

Practical Tips

  1. Start simple: Begin with post-hoc calibration before more complex methods

  2. Validate calibration: Use calibration plots and reliability diagrams (see the ECE sketch after this list)

  3. Test on OOD data: Evaluate uncertainty on out-of-distribution samples

  4. Monitor computational costs: Track training and inference time

  5. Consider hybrid approaches: Combine multiple methods when appropriate
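
For tip 2, a simple expected calibration error (ECE) check is sketched below; the bin count and input shapes are assumptions.

    import numpy as np

    def expected_calibration_error(probs, labels, n_bins=15):
        """ECE with equal-width confidence bins.

        probs:  [N, C] array of predicted class probabilities.
        labels: [N] array of true class ids.
        """
        confidences = probs.max(axis=1)
        predictions = probs.argmax(axis=1)
        accuracies = (predictions == labels).astype(float)

        ece = 0.0
        bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
                ece += in_bin.mean() * gap              # weight by bin frequency
        return ece

    # A reliability diagram plots the same per-bin accuracy against per-bin
    # confidence; large gaps from the diagonal indicate miscalibration.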

Common Pitfalls

  • Over-trusting uncalibrated softmax probabilities

  • Ignoring distribution shift in evaluation

  • Using uncertainty methods without proper validation

  • Neglecting computational constraints in production