39 changes: 37 additions & 2 deletions README.md
@@ -64,22 +64,57 @@ Please look at the [Steps to Run](#steps-to-run) section for Docker instructions
```

### Run the Script inside the Container

#### Option 1: Streamlit Web Interface (Recommended) ![New](https://img.shields.io/badge/-New-842E5B)
Run the interactive web interface:
```sh
streamlit run streamlit_app.py
```

The Streamlit interface provides:
- 🖼️ **Image Upload/Selection**: Choose from sample images or upload your own
- ⚙️ **Easy Configuration**: Select inference mode, adjust top-K predictions via UI
- 📊 **Real-time Results**: View predictions and benchmark metrics interactively
- 📈 **Visual Feedback**: See benchmark results in an organized table format

#### Option 2: Command Line Interface
```sh
python main.py [--mode all]
```

### Arguments
### Arguments (CLI)
- `--image_path`: (Optional) Specifies the path to the image you want to predict.
- `--topk`: (Optional) Specifies the number of top predictions to show. Defaults to 5 if not provided.
- `--mode`: (Optional) Specifies the model's mode for exporting and running. Choices are: `onnx`, `ov`, `cpu`, `cuda`, `tensorrt`, and `all`. If not provided, it defaults to `all`.

### Example Command
### Example Command (CLI)
```sh
python main.py --topk 3 --mode=all --image_path="./inference/cat3.jpg"
```

This command runs predictions on the chosen image (`./inference/cat3.jpg`), shows the top 3 predictions, and runs all available models. Note: the comparison plot is created only for `--mode=all`; the results are plotted and saved to `./inference/plot.png`.
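A minimal sketch of how the CLI flags documented above could be wired up with `argparse`. This is illustrative only; the actual parser in `main.py` may differ, and the function name `build_parser` is hypothetical:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Mirrors the documented flags: --image_path, --topk (default 5),
    # and --mode (default "all") with the listed backend choices.
    parser = argparse.ArgumentParser(description="Run inference benchmarks.")
    parser.add_argument("--image_path", default=None,
                        help="Optional path to the image to predict.")
    parser.add_argument("--topk", type=int, default=5,
                        help="Number of top predictions to show.")
    parser.add_argument("--mode", default="all",
                        choices=["onnx", "ov", "cpu", "cuda", "tensorrt", "all"],
                        help="Model mode for exporting and running.")
    return parser


# Parse the example command's arguments explicitly (no sys.argv needed).
args = build_parser().parse_args(["--topk", "3", "--mode", "all"])
print(args.topk, args.mode)  # → 3 all
```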

## Streamlit Interface ![New](https://img.shields.io/badge/-New-842E5B)
The project now includes a user-friendly Streamlit web interface for running benchmarks interactively.

### Interface Preview
<img src="https://github.com/user-attachments/assets/eaa57e73-97d9-4319-b120-f5a3324f21b7" width="100%">

### Features
- **Interactive Image Selection**: Choose from sample images or upload your own
- **Flexible Configuration**: Select inference modes (ONNX, OpenVINO, PyTorch CPU/CUDA, TensorRT)
- **Real-time Benchmarking**: Run benchmarks and see results instantly
- **Visual Results**: View predictions and performance metrics in an organized format
- **System Information**: Check available hardware (CPU/GPU) and capabilities

### Benchmark Results Display
<img src="https://github.com/user-attachments/assets/82314f1e-ac3c-495b-8b86-fc9dc6379aa4" width="100%">

The interface displays:
- Top-K predictions with confidence scores
- Benchmark metrics (average inference time, throughput)
- Clear visual organization of results
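The two benchmark metrics are related: for a batch size of 1, throughput is just the reciprocal of the average inference time. A small illustrative calculation (not taken from the app's code):

```python
def throughput_from_avg_ms(avg_ms: float, batch_size: int = 1) -> float:
    """Images per second, given average inference time in milliseconds."""
    return batch_size * 1000.0 / avg_ms


# e.g. an 8 ms average inference time corresponds to 125 images/sec
print(throughput_from_avg_ms(8.0))  # → 125.0
```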

## Results
### Example Input
Here is an example of the input image to run predictions and benchmarks on:
4 changes: 3 additions & 1 deletion pyproject.toml
@@ -10,7 +10,7 @@ readme = "README.md"
requires-python = ">=3.12"
license = {file = "LICENSE"}
authors = [
{name = "DimaBir", email = ""}
{name = "DimaBir"}
]
keywords = ["pytorch", "tensorrt", "onnx", "openvino", "inference", "deep-learning"]
classifiers = [
@@ -30,9 +30,11 @@ dependencies = [
"numpy>=1.26.0",
"onnx>=1.16.0",
"onnxruntime>=1.18.0",
"onnxscript>=0.5.0",
"openvino>=2024.5.0",
"seaborn>=0.13.0",
"matplotlib>=3.8.0",
"streamlit>=1.41.0",
]

[project.optional-dependencies]
2 changes: 2 additions & 0 deletions requirements.txt
@@ -5,8 +5,10 @@ Pillow>=10.0.0
numpy>=1.26.0
onnx>=1.16.0
onnxruntime>=1.18.0
onnxscript>=0.5.0
openvino>=2024.5.0
seaborn>=0.13.0
matplotlib>=3.8.0
pytest>=8.0.0
pytest-cov>=4.1.0
streamlit>=1.41.0