kdevops includes comprehensive storage performance testing through the fio-tests workflow, providing flexible I/O benchmarking with configurable test matrices, A/B testing capabilities, and advanced graphing and visualization support.
The fio-tests workflow in kdevops is adapted from the original fio-tests framework, which was designed for systematic storage performance testing with dynamic test generation and comprehensive analysis. The kdevops implementation integrates these capabilities into the kdevops ecosystem, supporting virtualization, cloud providers, and bare-metal testing.
The fio-tests workflow enables comprehensive storage device performance testing by generating configurable test matrices across multiple dimensions:
- Block sizes: 4K, 8K, 16K, 32K, 64K, 128K
- I/O depths: 1, 4, 8, 16, 32, 64
- Job counts: 1, 2, 4, 8, 16 concurrent fio jobs
- Workload patterns: Random/sequential read/write, mixed workloads
- A/B testing: Baseline vs development configuration comparison
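As a rough sketch of how such a matrix expands, the configured dimensions form a Cartesian product, with one fio job generated per combination. The naming below mirrors the job files found in the results directory (e.g. randread_bs4k_iodepth1_jobs1.ini), though the helper itself is illustrative and not part of the workflow's code:

```python
# Illustrative expansion of a fio-tests-style test matrix; the job
# naming mirrors generated files like randread_bs4k_iodepth1_jobs1.ini.
from itertools import product

block_sizes = ["4k", "8k", "16k", "32k", "64k", "128k"]
io_depths = [1, 4, 8, 16, 32, 64]
job_counts = [1, 2, 4, 8, 16]
patterns = ["randread", "randwrite", "read", "write"]

def job_names(patterns, block_sizes, io_depths, job_counts):
    """Expand the configured dimensions into one job name per test."""
    return [
        f"{p}_bs{bs}_iodepth{d}_jobs{j}"
        for p, bs, d, j in product(patterns, block_sizes, io_depths, job_counts)
    ]

names = job_names(patterns, block_sizes, io_depths, job_counts)
print(len(names))   # 4 * 6 * 6 * 5 = 720 test combinations
print(names[0])     # randread_bs4k_iodepth1_jobs1
```

Trimming dimensions in menuconfig shrinks this product directly, which is why the CI configuration (one block size, one depth, one job, one pattern) runs in minutes rather than hours.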
Configure fio-tests for quick testing:
make defconfig-fio-tests-ci # Use minimal CI configuration
make menuconfig # Or configure interactively
make bringup # Provision test environment
make fio-tests # Run performance tests
For full performance analysis:
make menuconfig # Select fio-tests dedicated workflow
# Configure test matrix, block sizes, IO depths, patterns
make bringup # Provision baseline and dev nodes
make fio-tests # Run comprehensive test suite
make fio-tests-graph # Generate performance graphs
make fio-tests-compare # Compare baseline vs dev results
The workflow supports multiple test types optimized for different analysis goals:
- Performance analysis: Comprehensive testing across all configured parameters
- Latency analysis: Focus on latency characteristics and tail latency
- Throughput scaling: Optimize for maximum throughput analysis
- Mixed workloads: Real-world application pattern simulation
Configure the test matrix through menuconfig:
Block size configuration →
[*] 4K block size tests
[*] 8K block size tests
[*] 16K block size tests
[ ] 32K block size tests
[ ] 64K block size tests
[ ] 128K block size tests
IO depth configuration →
[*] IO depth 1
[*] IO depth 4
[*] IO depth 8
[*] IO depth 16
[ ] IO depth 32
[ ] IO depth 64
Thread/job configuration →
[*] Single job
[*] 2 jobs
[*] 4 jobs
[ ] 8 jobs
[ ] 16 jobs
Workload patterns →
[*] Random read
[*] Random write
[*] Sequential read
[*] Sequential write
[ ] Mixed 75% read / 25% write
[ ] Mixed 50% read / 50% write
Advanced settings for fine-tuning:
- I/O engine: io_uring (recommended), libaio, psync, sync
- Direct I/O: Bypass page cache for accurate device testing
- Test duration: Runtime per test job (default: 60 seconds)
- Ramp time: Warm-up period before measurements (default: 10 seconds)
- Results directory: Storage location for test results and logs
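Taken together, the advanced settings above map onto standard fio job-file options. A generated job file might look roughly like the following sketch; the device path and section name are illustrative, and the exact layout the workflow produces may differ:

```ini
[global]
ioengine=io_uring
direct=1
time_based=1
runtime=60
ramp_time=10
filename=/dev/disk/by-id/virtio-kdevops1

[randread_bs4k_iodepth1_jobs1]
rw=randread
bs=4k
iodepth=1
numjobs=1
```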
The workflow automatically selects appropriate storage devices based on your infrastructure configuration:
- NVMe: /dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops1
- VirtIO: /dev/disk/by-id/virtio-kdevops1
- IDE: /dev/disk/by-id/ata-QEMU_HARDDISK_kdevops1
- SCSI: /dev/sdc
- AWS: /dev/nvme2n1 (instance store)
- GCE: /dev/nvme1n1
- Azure: /dev/sdd
- OCI: Configurable sparse volume device
- /dev/null: For configuration validation and CI testing
The fio-tests workflow supports comprehensive A/B testing through the
KDEVOPS_BASELINE_AND_DEV configuration, which provisions separate
nodes for baseline and development testing.
make fio-tests # Run tests on both baseline and dev
make fio-tests-baseline # Save current results as baseline
make fio-tests-compare # Generate A/B comparison analysis
This creates comprehensive comparison reports including:
- Side-by-side performance metrics
- Percentage improvement/regression analysis
- Statistical summaries
- Visual comparison charts
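The percentage improvement/regression figures can be sketched as a simple relative delta per metric. The helper below is hypothetical, not the workflow's actual comparison script; metric dicts map metric names to values:

```python
# Hypothetical helper mirroring the percentage delta reported by
# an A/B comparison: positive means dev is higher than baseline.
def percent_delta(baseline: dict, dev: dict) -> dict:
    """Return % change of dev relative to baseline for shared metrics."""
    deltas = {}
    for metric, base in baseline.items():
        if metric in dev and base:
            deltas[metric] = 100.0 * (dev[metric] - base) / base
    return deltas

baseline = {"bw_kbps": 500000, "iops": 125000, "lat_mean_us": 80.0}
dev = {"bw_kbps": 550000, "iops": 137500, "lat_mean_us": 72.0}
print(percent_delta(baseline, dev))
# bw_kbps and iops up 10%, mean latency down 10%
```

Note that for latency a negative delta is an improvement, while for bandwidth and IOPS a positive delta is; comparison reports should be read per metric.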
The fio-tests workflow includes comprehensive graphing capabilities through Python scripts with matplotlib, pandas, and seaborn.
# In menuconfig:
Advanced configuration →
[*] Enable graphing and visualization
Graph output format (png) --->
(300) Graph resolution (DPI)
(default) Matplotlib theme
make fio-tests-graph
Generates:
- Bandwidth heatmaps: Performance across block sizes and I/O depths
- IOPS scaling: Scaling behavior with increasing I/O depth
- Latency distributions: Read/write latency characteristics
- Pattern comparisons: Performance across different workload patterns
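As an illustration of what a bandwidth heatmap consumes, the sketch below (a hypothetical helper, not part of the workflow's scripts) pivots per-test results into the block-size by I/O-depth grid that matplotlib would then render:

```python
# Build the block-size x io-depth grid a bandwidth heatmap is drawn
# from; results is a list of (block_size, io_depth, bandwidth_kbps)
# tuples such as those collected from fio output.
def bandwidth_grid(results, block_sizes, io_depths):
    grid = [[None] * len(io_depths) for _ in block_sizes]
    for bs, depth, bw in results:
        grid[block_sizes.index(bs)][io_depths.index(depth)] = bw
    return grid

results = [("4k", 1, 90000), ("4k", 4, 310000),
           ("8k", 1, 160000), ("8k", 4, 540000)]
grid = bandwidth_grid(results, ["4k", "8k"], [1, 4])
print(grid)   # [[90000, 310000], [160000, 540000]]
```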
make fio-tests-compare
Creates:
- Comparison bar charts: Side-by-side baseline vs development
- Performance delta analysis: Percentage improvements across metrics
- Summary reports: Detailed statistical analysis
make fio-tests-trend-analysis
Provides:
- Block size trends: Performance scaling with block size
- I/O depth scaling: Efficiency analysis across patterns
- Latency percentiles: P95, P99 latency analysis
- Correlation matrices: Relationships between test parameters
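For intuition, a P95 or P99 value is simply a rank in the sorted latency samples. fio reports these percentiles directly in its JSON output, but a minimal nearest-rank sketch (illustrative only) looks like this:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    # ceil(len * pct / 100) without importing math; rank is 1-based
    rank = max(1, -(-len(ordered) * pct // 100))
    return ordered[int(rank) - 1]

lat_us = list(range(1, 101))   # 1..100 microseconds
print(percentile(lat_us, 95), percentile(lat_us, 99))   # 95 99
```

Tail percentiles like P99 need many samples to be stable, which is one reason longer test durations produce more trustworthy latency analysis.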
Configure graph output through Kconfig:
- Format: PNG (default), SVG, PDF, JPG
- Resolution: 150 DPI (CI), 300 DPI (standard), 600 DPI (high quality)
- Theme: default, seaborn, dark_background, ggplot, bmh
The fio-tests workflow provides several make targets:
- make fio-tests: Run the configured test matrix
- make fio-tests-baseline: Establish performance baseline
- make fio-tests-results: Collect and summarize test results
- make fio-tests-graph: Generate performance graphs
- make fio-tests-compare: Compare baseline vs development results
- make fio-tests-trend-analysis: Analyze performance trends
- make fio-tests-help-menu: Display available fio-tests targets
Results are organized in the configured results directory (default: /data/fio-tests):
/data/fio-tests/
├── jobs/ # Generated fio job files
│ ├── randread_bs4k_iodepth1_jobs1.ini
│ └── ...
├── results_*.json # JSON format results
├── results_*.txt # Human-readable results
├── bw_*, iops_*, lat_* # Performance logs
├── graphs/ # Generated visualizations
│ ├── performance_bandwidth_heatmap.png
│ ├── performance_iops_scaling.png
│ └── ...
├── analysis/ # Trend analysis
│ ├── block_size_trends.png
│ └── correlation_heatmap.png
└── baseline/ # Baseline results
└── baseline_*.txt
Each test produces detailed JSON output with:
- Bandwidth metrics (KB/s)
- IOPS measurements
- Latency statistics (mean, stddev, percentiles)
- Job-specific performance data
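fio's JSON output (fio --output-format=json) places these metrics under a "jobs" array; recent fio versions report completion latency in nanoseconds under clat_ns, while older versions used microsecond fields. A minimal, heavily truncated example of reading such a result:

```python
import json

# Minimal fio JSON fragment; the field layout follows fio's JSON
# output, but only a few of the real result's fields are shown.
raw = '''{
  "jobs": [{
    "jobname": "randread_bs4k_iodepth1_jobs1",
    "read": {
      "bw": 91234,
      "iops": 22808.5,
      "clat_ns": {"mean": 41230.7, "stddev": 9120.3}
    }
  }]
}'''

job = json.loads(raw)["jobs"][0]
read = job["read"]
print(job["jobname"])
print(read["bw"], read["iops"])          # bandwidth (KiB/s) and IOPS
print(read["clat_ns"]["mean"] / 1000.0)  # mean completion latency in usec
```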
Detailed time-series logs for:
- Bandwidth over time
- IOPS over time
- Latency over time
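Each per-test log (bw_*, iops_*, lat_*) is a CSV-like series; in common fio versions a line is roughly "msec, value, data_direction, block_size[, offset]", though the exact columns vary across versions. A hedged parsing sketch:

```python
def parse_fio_log(lines):
    """Parse (time_ms, value) pairs from fio bw/iops/lat logs.

    Assumes the common "msec, value, direction, blocksize[, offset]"
    layout; column meaning can differ between fio versions.
    """
    series = []
    for line in lines:
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 2:
            series.append((int(fields[0]), int(fields[1])))
    return series

log = ["1000, 204800, 0, 4096, 0",
       "2000, 198656, 0, 4096, 0"]
print(parse_fio_log(log))   # [(1000, 204800), (2000, 198656)]
```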
The fio-tests workflow includes CI-optimized configuration:
make defconfig-fio-tests-ci
CI-specific optimizations:
- Uses /dev/null as target device
- Minimal test matrix (4K block size, IO depth 1, single job)
- Short test duration (10 seconds) and ramp time (2 seconds)
- Lower DPI (150) for faster graph generation
- Essential workload patterns only (random read)
# Ensure graphing dependencies are installed
# This is handled automatically when FIO_TESTS_ENABLE_GRAPHING=y
- Verify device permissions and accessibility
- Check fio installation: fio --version
- Examine fio job files in results directory
- Verify Python dependencies: matplotlib, pandas, seaborn
- Check results directory contains JSON output files
- Ensure sufficient disk space for graph files
Enable verbose output:
make V=1 fio-tests # Verbose build output
ANSIBLE_VERBOSITY=2 make fio-tests # Ansible verbose output
- Short tests (10-60 seconds): Quick validation, less accurate
- Medium tests (5-10 minutes): Balanced accuracy and time
- Long tests (30+ minutes): High accuracy, comprehensive analysis
- CPU: Scales with job count and I/O depth
- Memory: Minimal for fio, moderate for graphing (pandas/matplotlib)
- Storage: Depends on test duration and logging configuration
- Network: Minimal except for result collection
- Use dedicated storage for results directory
- Enable direct I/O for accurate device testing
- Configure appropriate test matrix for your analysis goals
- Use A/B testing for meaningful performance comparisons
The fio-tests workflow integrates seamlessly with other kdevops workflows:
- Run fio-tests alongside fstests for comprehensive filesystem analysis
- Use with sysbench for database vs raw storage performance comparison
- Combine with blktests for block layer and device-level testing
- Use KDEVOPS_WORKFLOW_ENABLE_SSD_STEADY_STATE for SSD conditioning
- Run steady state before fio-tests for consistent results
- Start with CI configuration for validation
- Gradually expand test matrix based on analysis needs
- Use A/B testing for meaningful comparisons
- Enable graphing for visual analysis
- Establish baseline before configuration changes
- Run multiple iterations for statistical significance
- Use appropriate test duration for your workload
- Document test conditions and configuration
- Focus on relevant metrics for your use case
- Use trend analysis to identify optimal configurations
- Compare against baseline for regression detection
- Share graphs and summaries for team collaboration
The fio-tests workflow follows kdevops development practices:
- Use atomic commits with DCO sign-off
- Include "Generated-by: Claude AI" for AI-assisted contributions
- Test changes with CI configuration
- Update documentation for new features
- Follow existing code style and patterns
For more information about contributing to kdevops, see the main project documentation and CLAUDE.md for AI development guidelines.