
Christian Tannahill

Software Engineer | Cloud Infrastructure & Applied AI

I am a software engineer focused on building production-grade, data-heavy systems. I specialize in bridging the gap between high-performance backends and modern web applications, with an emphasis on serverless architectures, event-driven data pipelines, and pragmatic AI integration.

Currently building enterprise tools and local inference APIs using TypeScript, C# / .NET 9, and Python.

Email | LinkedIn | Portfolio


🛠️ Core Stack

  • Cloud & DevOps: AWS (Lambda, EventBridge, DynamoDB, API Gateway, S3), Docker, GitHub Actions
  • Backend & Data: C# / .NET 8/9, Node.js, Python (FastAPI), PostgreSQL (Neon/Supabase)
  • Frontend: TypeScript, React, Next.js, Tailwind CSS
  • Applied AI: AWS Bedrock, Hugging Face, Local Inference (llama.cpp, ONNX), RAG architectures

📂 Featured Architecture & Engineering

TrendDev (Job Market Analyzer)

  • Context: Architected and deployed a multi-service AWS serverless platform in a 7-day sprint.
  • Tech: AWS (Lambda, EventBridge, DynamoDB), Bedrock Nova, TypeScript, React.
  • Impact: Processes thousands of postings daily through a custom ETL pipeline, using LLM enrichment to compute real-time market demand and AI-driven resume gap analysis.
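The market-demand calculation above could, in broad strokes, reduce to aggregating LLM-extracted skills across postings. A minimal sketch (the `EnrichedPosting` shape and `computeSkillDemand` name are invented for illustration, not taken from the project):

```typescript
// Hypothetical shape of a posting after the LLM enrichment step.
interface EnrichedPosting {
  id: string;
  skills: string[]; // skill names extracted by the LLM
}

// Count how many postings mention each skill, as a proxy for demand.
function computeSkillDemand(postings: EnrichedPosting[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const posting of postings) {
    // De-duplicate within a posting so one listing counts a skill once.
    for (const skill of new Set(posting.skills)) {
      counts.set(skill, (counts.get(skill) ?? 0) + 1);
    }
  }
  return counts;
}
```

In the real pipeline this aggregation would run downstream of the Lambda/EventBridge ETL stages rather than in memory over a full array.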

LegisTrack

  • Context: Federal legislation tracker designed around a heavily normalized relational database.
  • Tech: Next.js 15, Prisma, PostgreSQL, multi-provider LLM routing.
  • Impact: Built automated ingestion and summarization workflows with strict rate-limit handling, producing plain-English legislative summaries at a sub-cent cost per bill.
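Strict rate-limit handling in an ingestion workflow typically means retrying with exponential backoff and jitter. A hedged sketch of that pattern (function names are illustrative, not the project's actual code):

```typescript
// Exponential backoff with "equal jitter": half deterministic, half random,
// capped so repeated failures never wait longer than capMs.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return exp / 2 + Math.random() * (exp / 2);
}

// Retry an async operation (e.g. a rate-limited API call) up to maxAttempts.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
}
```

A production version would also inspect HTTP 429 `Retry-After` headers rather than relying on backoff alone.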

LocalChat

  • Context: High-performance local inference API template optimized for autonomous AI modification.
  • Tech: C#, .NET 9 Web API, Local Inference Tooling.
  • Impact: Refactored complex domain logic into a modular, generalized foundation, designed so that AI coding agents (such as Claude Code) can autonomously modify and scale the API architecture.

Schemantic

  • Context: NPM package built to eliminate manual API typing work.
  • Tech: TypeScript, Node.js, OpenAPI.
  • Impact: Automatically generates fully typed TypeScript clients from OpenAPI schemas (ideal for FastAPI integration), reducing integration time and preventing runtime errors.

📌 Pinned Repositories

  1. LocalChat

    Local-first .NET 9 backend for agent orchestration with multi-provider LLM routing, memory/retrieval pipelines, SSE streaming, and operations-ready APIs.

    C#

  2. job-market-analyzer

    This repository contains a small set of Node.js Lambda functions and a React frontend used to analyze job postings and extract skills. The project is organized to be friendly for CI/CD and producti…

    TypeScript

  3. legistrack

    A robust, scalable web application that automatically tracks, categorizes, and summarizes U.S. federal legislation in plain, understandable language.

    TypeScript

  4. ai-production-chat

    A production-focused chat workspace built with Next.js 16, React 19, and the Vercel AI SDK.

    TypeScript

  5. LocalInference

    A high-performance, modular General Inference API compatible with OpenAI's API specification. Built for local LLM inference with advanced context management, sliding window token optimization, and …

    C#

  6. quantization-toolkit

    YAML-driven scripts for quantizing Hugging Face LLMs and VLMs with llmcompressor, then evaluating quality drift with lm-eval. Quantizes causal language models to INT4 (W4A16) using AWQ, GPTQ …

    Python
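The "sliding window token optimization" mentioned in LocalInference's description generally means keeping the newest messages that fit the model's context budget while always retaining the system prompt. A rough sketch of the idea (the repository's actual implementation is in C# and is not shown here; the `Message` shape and `slideWindow` name are invented):

```typescript
// A chat message with a pre-counted token length.
interface Message {
  role: "system" | "user" | "assistant";
  tokens: number;
}

// Keep system messages, then walk newest-to-oldest through the rest,
// retaining messages while they still fit in the remaining token budget.
function slideWindow(messages: Message[], maxTokens: number): Message[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  let budget = maxTokens - system.reduce((sum, m) => sum + m.tokens, 0);
  const kept: Message[] = [];
  for (let i = rest.length - 1; i >= 0; i--) {
    if (rest[i].tokens > budget) break; // oldest overflow: stop the window here
    budget -= rest[i].tokens;
    kept.unshift(rest[i]);
  }
  return [...system, ...kept];
}
```

Real implementations often summarize or compress evicted turns instead of dropping them outright, which is where the "memory/retrieval pipelines" from LocalChat come in.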