I specialize in building high-performance data pipelines and custom deep learning infrastructure, with a core focus on architecting real-time systems for live financial market data.
Real-Time Data Ingestion Service
A resilient backend service that maintains persistent WebSocket connections to Binance for live market data ingestion. It features a custom "self-healing" mechanism that detects sequence gaps caused by network instability and asynchronously fetches the missing historical fragments, ensuring continuous data coverage. Vectorized stationarity transformations are applied via NumPy in real time.
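The gap-detection-and-backfill idea can be sketched as follows. This is a minimal illustration, not the service's actual code: `heal_sequence` and `fetch_range` are hypothetical names, and it assumes each update carries a monotonically increasing `"seq"` identifier (as Binance depth updates do).

```python
from typing import Callable

def heal_sequence(updates: list[dict],
                  fetch_range: Callable[[int, int], list[dict]]) -> list[dict]:
    """Detect gaps in a stream of sequence-numbered updates and backfill them.

    `updates` carry a monotonically increasing "seq" key; `fetch_range(a, b)`
    stands in for a REST call returning the missing updates for seq a..b.
    """
    healed: list[dict] = []
    prev_seq = None
    for update in updates:
        seq = update["seq"]
        if prev_seq is not None and seq > prev_seq + 1:
            # Gap detected: fetch the missing fragment before appending.
            healed.extend(fetch_range(prev_seq + 1, seq - 1))
        healed.append(update)
        prev_seq = seq
    return healed
```

In the real service the backfill call would be awaited asynchronously so the live stream keeps flowing while the historical fragment is fetched.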
High-Concurrency Data Delivery Layer
A high-performance REST API built with FastAPI and async SQLAlchemy. It acts as the bridge between the raw data engine and downstream ML models, serving normalized, scale-invariant financial features with low latency, and is designed for asynchronous load handling to support multiple concurrent trading agents.
Deep Learning Research (PyTorch)
A research-grade implementation of a multi-target time-series Transformer built from scratch in PyTorch. Unlike standard single-target models, the architecture uses dual-output decoders to simultaneously solve a regression task (log returns) and a classification task (volatility regimes). It is engineered to handle chaotic, non-stationary financial distributions without look-ahead bias.
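The dual-output idea can be sketched as a shared encoder feeding two heads. This is an illustrative sketch, not the project's architecture: all dimensions and layer counts are assumptions, and the causal mask is one standard way to avoid look-ahead bias inside the model.

```python
import torch
import torch.nn as nn

class DualTargetTransformer(nn.Module):
    """Shared Transformer encoder with a regression head (log returns)
    and a classification head (volatility regimes). Sizes are illustrative."""

    def __init__(self, n_features: int = 8, d_model: int = 32,
                 n_heads: int = 4, n_layers: int = 2, n_regimes: int = 3):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.reg_head = nn.Linear(d_model, 1)          # log-return forecast
        self.cls_head = nn.Linear(d_model, n_regimes)  # regime logits

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # x: (batch, seq_len, n_features). The causal mask stops each step
        # from attending to future steps (no look-ahead bias).
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.encoder(self.input_proj(x), mask=mask)
        last = h[:, -1]  # predict from the final time step
        return self.reg_head(last), self.cls_head(last)
```

Training would combine an MSE loss on the regression output with a cross-entropy loss on the regime logits, sharing gradients through the encoder.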
Engineered ML Dataset
A curated, high-frequency dataset of 1-minute Bitcoin candles, pre-processed specifically for deep learning stability. Unlike raw OHLCV data, this dataset features rigorous stationarity engineering using robust scaling and arcsinh transformations to mitigate distribution shift, making it ready for immediate training of gradient-sensitive models.
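The combination of robust scaling and the arcsinh transform can be sketched in a few lines of NumPy. This is a minimal illustration of the general technique, not the dataset's exact pipeline; the function name and tolerance choices are my own:

```python
import numpy as np

def robust_arcsinh(x: np.ndarray) -> np.ndarray:
    """Robust-scale a feature (center on the median, divide by the IQR),
    then apply arcsinh, which is roughly linear near zero but compresses
    the heavy tails typical of financial returns."""
    median = np.median(x)
    q75, q25 = np.percentile(x, [75, 25])
    scaled = (x - median) / (q75 - q25)
    return np.arcsinh(scaled)
```

Using the median and IQR instead of mean and standard deviation keeps single extreme candles from dominating the scale, and arcsinh (unlike log) handles zero and negative returns without shifting.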
- Email: starlitvienna@starlitvienna.com
- Website: starlitvienna.com
- Kaggle: kaggle.com/evelynartoria
