Configure Probes with Precision

Learn how configuration files, flags, variables, and tuning settings shape probe behavior across different test scenarios.


Configuration Structure

API Probe configurations are typically organized as layered settings: a base file for shared defaults, environment variables for deployment-specific values, and command-line flags for run-time overrides. This structure keeps probe behavior predictable while still allowing targeted changes for different environments, workloads, and reporting needs.
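The layering described above can be sketched as a small merge routine. This is an illustrative sketch only; the setting names (`timeout_s`, `retries`, `PROBE_TIMEOUT_S`, and so on) are assumptions, not part of any real probe tool.

```python
# Hypothetical sketch of layered probe configuration:
# base-file defaults < environment variables < command-line flags.
BASE_DEFAULTS = {"timeout_s": 5, "retries": 2, "output": "summary"}

def resolve_config(env=None, flags=None):
    """Merge settings with increasing precedence: base < env < flags."""
    env = env or {}
    flags = flags or {}
    config = dict(BASE_DEFAULTS)
    # Environment variables refine the shared defaults.
    if "PROBE_TIMEOUT_S" in env:
        config["timeout_s"] = int(env["PROBE_TIMEOUT_S"])
    if "PROBE_RETRIES" in env:
        config["retries"] = int(env["PROBE_RETRIES"])
    # Flags apply the final, run-specific override.
    config.update({k: v for k, v in flags.items() if v is not None})
    return config

print(resolve_config(env={"PROBE_TIMEOUT_S": "10"}, flags={"retries": 0}))
# timeout_s comes from the environment, retries from the flag override
```

Keeping the merge order fixed, rather than conditional, is what makes the resulting behavior predictable across environments.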

Core Configuration Areas

Timeouts

Set request and overall probe timeouts to control how long checks can wait before being marked incomplete. Use shorter values for fast validation and longer values for slow or variable networks.
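The interaction between the two timeout levels can be sketched as follows, assuming a hypothetical probe runner where each request's wait is bounded both by its own timeout and by the time left on the overall probe deadline.

```python
# Illustrative sketch: a per-request timeout bounded by an overall probe
# deadline. Names and behavior are assumptions, not a real probe API.
import time

def run_probe(checks, request_timeout_s=2.0, probe_timeout_s=10.0):
    deadline = time.monotonic() + probe_timeout_s
    results = []
    for check in checks:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            # Overall probe timeout reached; remaining checks are incomplete.
            results.append(("skipped", "probe deadline reached"))
            continue
        # Each request waits at most the smaller of the two limits.
        budget = min(request_timeout_s, remaining)
        results.append(check(budget))
    return results

fast_check = lambda budget: ("ok", round(budget, 1))
print(run_probe([fast_check], request_timeout_s=2.0, probe_timeout_s=10.0))
```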

Retries

Define retry count, backoff behavior, and retry intervals to reduce false negatives from transient conditions. Keep retries conservative when you need strict latency measurement.
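As a sketch of how those three settings interact, the wait before each retry attempt can be derived from the count, the base interval, and a backoff factor. The parameter names here are illustrative assumptions.

```python
# Sketch of retry settings: count, base interval, and exponential backoff.
def backoff_schedule(retries=3, interval_s=0.5, factor=2.0):
    """Return the wait (in seconds) before each retry attempt."""
    return [interval_s * factor**i for i in range(retries)]

print(backoff_schedule())  # [0.5, 1.0, 2.0]
print(backoff_schedule(retries=1))  # [0.5] -- conservative, for strict latency runs
```

Note how a single conservative retry keeps the added worst-case delay bounded, which matters when the run itself is measuring latency.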

Thresholds

Configure acceptable limits for latency, success rate, or other probe metrics. Thresholds turn raw results into actionable pass or warn conditions for each run.
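The evaluation step can be sketched as a small comparison of run metrics against configured limits. The metric and threshold field names are assumptions chosen for illustration.

```python
# Sketch: turning raw probe metrics into actionable run conditions
# by comparing them against configured thresholds.
def evaluate(metrics, thresholds):
    if metrics["success_rate"] < thresholds["min_success_rate"]:
        return "fail"
    if metrics["p95_latency_ms"] > thresholds["max_p95_latency_ms"]:
        return "warn"
    return "pass"

run = {"success_rate": 0.99, "p95_latency_ms": 420}
limits = {"min_success_rate": 0.95, "max_p95_latency_ms": 300}
print(evaluate(run, limits))  # warn: success rate is fine, latency is over the limit
```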

Output and Reporting

Choose output formats, report destinations, and summary detail levels to match your workflow. These settings help standardize results for local review, CI pipelines, or team reporting.
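A minimal sketch of format selection, assuming two hypothetical output modes: compact JSON for pipelines and a readable text summary for local review.

```python
# Sketch: rendering one probe result for different audiences.
import json

def render(result, fmt="summary"):
    if fmt == "json":
        # Compact, machine-readable output for CI pipelines.
        return json.dumps(result, separators=(",", ":"))
    # Readable key/value summary for local review.
    return "\n".join(f"{k}: {v}" for k, v in result.items())

result = {"status": "pass", "checks": 12, "failures": 0}
print(render(result, fmt="json"))
print(render(result))
```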

Environment Variables and Flags

Use environment variables to keep sensitive or environment-specific values out of the base file, and use flags to override settings for a single execution. This separation makes automation easier to maintain.
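This split can be sketched with standard-library tools. The `PROBE_TARGET` variable and `--target` flag are hypothetical names used only for illustration.

```python
# Sketch: a flag overrides the environment variable, which overrides
# the built-in default.
import argparse
import os

def target_url(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--target", default=None)
    args = parser.parse_args(argv)
    # Flag wins if given; otherwise fall back to the environment, then default.
    return args.target or os.environ.get("PROBE_TARGET", "http://localhost:8080")

print(target_url(["--target", "https://staging.example.com/health"]))
```

Because the flag value lives only in the command line, the override disappears after the run, while the environment variable persists for the whole deployment or pipeline context.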

Scenario-Specific Tuning

Adjust configuration profiles for smoke checks, regression runs, load-sensitive probes, or long-running monitoring jobs. Scenario-based tuning lets one probe definition support multiple testing goals.
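One way to express this is a set of named profiles layered over a shared probe definition. The profile names and values below are illustrative assumptions, not recommended limits.

```python
# Sketch: scenario profiles layered over one shared probe definition.
PROFILES = {
    "smoke":      {"timeout_s": 2,  "retries": 0, "max_p95_latency_ms": 200},
    "regression": {"timeout_s": 5,  "retries": 2, "max_p95_latency_ms": 400},
    "monitoring": {"timeout_s": 15, "retries": 3, "max_p95_latency_ms": 800},
}

def for_scenario(name, base=None):
    """Apply a scenario profile on top of the shared probe definition."""
    config = dict(base or {"target": "/health", "output": "summary"})
    config.update(PROFILES[name])
    return config

print(for_scenario("smoke"))
```

The shared definition stays untouched; each scenario only overrides the handful of settings that express its tolerance for delay and variance.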

How Settings Work Together

Probe behavior follows precedence rules: shared defaults establish the baseline, environment-specific values refine it, and flags apply the final override at execution time. In practice, timeouts, retries, and thresholds should be tuned as a group so they reflect the same tolerance for delay and variance. For example, stricter thresholds often pair well with lower retries in fast feedback runs, while broader thresholds and longer timeouts are better suited to stable monitoring or higher-latency scenarios. Output settings should also match the audience, with concise summaries for automation and more detailed reports for analysis.
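Tuning these settings as a group has a concrete arithmetic behind it: the worst-case wall time of one check is the request timeout spent on every attempt plus the backoff waits between attempts. The sketch below makes that budget explicit (parameter names are illustrative).

```python
# Sketch: why timeouts and retries should be tuned together.
def worst_case_wait_s(timeout_s, retries, interval_s=0.5, factor=2.0):
    attempts = retries + 1
    backoff = sum(interval_s * factor**i for i in range(retries))
    return attempts * timeout_s + backoff

# A strict fast-feedback profile vs. a tolerant monitoring profile:
print(worst_case_wait_s(timeout_s=2.0, retries=0))   # 2.0
print(worst_case_wait_s(timeout_s=10.0, retries=3))  # 43.5
```

A threshold that treats anything over a few seconds as a failure is meaningless if retries alone can stretch a check past that limit, which is why strict thresholds pair with low retries and broad thresholds with longer timeouts.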

Configuration Questions

Should I keep all probe settings in one file?

A single base file works well for shared defaults, but environment variables and flags are better for values that change by environment or run. This keeps the core configuration readable and easier to reuse.

When should I use flags instead of environment variables?

Use flags for temporary, run-specific overrides and environment variables for values that should persist within a deployment or pipeline context. That separation keeps execution intent clear.

How do I choose timeout and retry values?

Start with the expected response profile of the system under test, then balance responsiveness against tolerance for variance. Faster validation usually needs tighter values, while broader testing benefits from more room for delay.

Do thresholds need to match every scenario?

No. Thresholds should reflect the purpose of the run. Smoke checks, regression runs, and ongoing monitoring often need different limits even when they use the same probe definition.

What is the best way to handle reporting output?

Match the output format to how results will be consumed. Automated pipelines usually need compact machine-readable output, while review-focused runs may benefit from more detailed summaries.