# RelayInsights

RelayInsights is an AI-driven audio transcription and analysis platform. It turns raw recordings and uploaded audio files into actionable insights using OpenAI models. Whether you need meeting minutes, sales coaching, or sentiment analysis, RelayInsights draws deeper clarity from your audio data.

## Features
- Multi-Modal Input: Upload audio files (MP3, WAV, M4A) or record directly within the browser.
- High-Fidelity Transcription: Powered by OpenAI Whisper for industry-leading accuracy.
- Intelligent Templates: Choose from 15+ pre-configured AI templates, including:
  - Executive Summaries
  - Action Item Extraction
  - Sentiment Analysis
  - Meeting Minutes
  - SWOT Analysis
  - Blog Post Conversion
- Interactive Workbench: View, edit, and download both transcripts and AI-generated insights.
- Secure & Private: Input your own OpenAI API key for direct control over your data and costs.
## Tech Stack

- Frontend: Streamlit
- Transcription: OpenAI Whisper
- Insight Engine: OpenAI GPT-4o
- Core Logic: Python 3.8+
## Prerequisites

- Python 3.8 or higher
- An OpenAI API key (create one at https://platform.openai.com/api-keys)
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/alphatechlogics/RelayInsights.git
   cd RelayInsights
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv env
   .\env\Scripts\activate      # Windows
   # source env/bin/activate   # macOS/Linux
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Create a `.env` file in the root directory (optional; the key can also be entered directly in the UI; a sketch of how the app might read it follows these steps):

   ```
   OPENAI_API_KEY=your_api_key_here
   ```

5. Launch the app:

   ```bash
   streamlit run main.py
   ```
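How the key is actually read is internal to the app; the following is only a minimal sketch, assuming a `python-dotenv`-style loader in which a key typed into the sidebar takes precedence over the `.env` entry. The function name and the `python-dotenv` dependency are illustrative assumptions:

```python
import os
from typing import Optional

from dotenv import load_dotenv  # assumed dependency: python-dotenv


def resolve_api_key(sidebar_value: Optional[str] = None) -> Optional[str]:
    """Illustrative: prefer the key entered in the UI, else fall back to .env."""
    load_dotenv()  # loads OPENAI_API_KEY from a local .env file, if present
    return sidebar_value or os.getenv("OPENAI_API_KEY")
```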
## Usage

1. Setup: Enter your OpenAI API key in the sidebar.
2. Upload/Record: Drag and drop an audio file or use the built-in microphone.
3. Transcribe: The app automatically generates a transcript.
4. Analyze: Select an Insight Template from the dropdown and click "Generate Insights" (a rough sketch of this pipeline is shown below).
5. Export: Use the download buttons to save your results as `.txt` or `.md` files.
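Under the hood, steps 3 and 4 amount to two OpenAI API calls: one to Whisper for the transcript, one to GPT-4o with the transcript injected into the chosen template. Here is a simplified, standalone sketch using the official `openai` Python SDK; the file name and prompt wording are placeholders, not the app's actual code:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: transcribe the audio with Whisper.
with open("meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    ).text

# Step 2: inject the transcript into an insight template and query GPT-4o.
prompt = f"Extract the key decisions and action items:\n\nTranscript:\n{transcript}"
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```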
## Project Structure

```
RelayInsights/
├── src/
│ ├── audio_processor.py # Whisper API integration
│ ├── llm_engine.py # GPT analysis logic
│ └── utils.py # File management & cleanup
├── config/
│ └── prompts.json # AI prompt templates
├── assets/
│ └── styles.css # Custom UI styling
├── main.py # Streamlit application entry point
└── requirements.txt       # Project dependencies
```
## Architecture

- AudioProcessor (`src/audio_processor.py`): Handles communication with the OpenAI Whisper API, managing file reading and transcription requests.
- LLMEngine (`src/llm_engine.py`): Manages prompting and responses from the GPT models, injecting transcripts into the predefined templates from `prompts.json`.
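To make the division of labor concrete, here is a hypothetical sketch of the two components. The class shapes, method names, and the assumption that `prompts.json` holds a JSON array are illustrative, not the project's actual interfaces:

```python
import json

from openai import OpenAI


class AudioProcessor:
    """Illustrative wrapper around the Whisper transcription endpoint."""

    def __init__(self, client: OpenAI):
        self.client = client

    def transcribe(self, path: str) -> str:
        with open(path, "rb") as audio_file:
            result = self.client.audio.transcriptions.create(
                model="whisper-1", file=audio_file
            )
        return result.text


class LLMEngine:
    """Illustrative template-driven analysis over a transcript."""

    def __init__(self, client: OpenAI, prompts_path: str = "config/prompts.json"):
        self.client = client
        with open(prompts_path, encoding="utf-8") as f:
            # Assumes each entry carries "id", "name", and a "template"
            # containing a {transcript} placeholder (see Customization below).
            self.templates = {t["id"]: t for t in json.load(f)}

    def analyze(self, template_id: str, transcript: str) -> str:
        prompt = self.templates[template_id]["template"].format(transcript=transcript)
        response = self.client.chat.completions.create(
            model="gpt-4o", messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content
```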
## Customization

To add a new analysis type, append a new JSON object to `config/prompts.json`:

```json
{
"id": "my_new_template",
"name": "Custom Analysis",
"template": "Analyze the following transcript for [X]:\n\nTranscript:\n{transcript}"
}
```
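A quick way to sanity-check a new entry, assuming `prompts.json` is a JSON array as sketched above, is to load it and format the placeholder by hand:

```python
import json

with open("config/prompts.json", encoding="utf-8") as f:
    templates = json.load(f)

# Confirm the new entry parses and its {transcript} placeholder formats cleanly.
new = next(t for t in templates if t["id"] == "my_new_template")
print(new["template"].format(transcript="(sample transcript)"))
```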
## License

This project is licensed under the MIT License; see the LICENSE file for details.

Built with ❤️ by Alphatech Works