Hey, I’m Abhay Ravi Kumar, a backend and platform engineer who enjoys building systems that actually survive production.
I’ve worked as a software engineer at Hewlett Packard Enterprise (HPE) and Tata Elxsi, where I built and optimized large-scale enterprise integrations and OTT backend platforms used in real production environments.
Over time, I gravitated toward distributed systems, cloud infrastructure, and AI-adjacent backend systems, focusing on scalability, performance, and operational reliability.
I’m currently pursuing my MS in Computer Science at Stony Brook University (Dec 2025) and spend most of my time designing scalable APIs, cloud-native platforms, and RAG pipelines with a strong emphasis on correctness, latency, and system design tradeoffs.
If you like well-designed backend systems, boring infrastructure, and thoughtful engineering decisions, you’ll probably like my work.
**Microservice Cloud DevOps Pipeline**
A complete GitOps workflow and automated CI/CD pipeline for deploying containerized microservices to AWS.
- Tech Stack: AWS (EKS, VPC, IAM), Kubernetes, Terraform, GitHub Actions, Docker, Helm, Prometheus/Grafana
- Key Highlight: Architected a production-grade AWS environment via modular Terraform (IaC) and established a zero-downtime deployment pipeline with multi-layered monitoring via CloudWatch and Prometheus.
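The zero-downtime deployment mentioned above hinges on gating each rollout step on live monitoring signals. A minimal sketch of that idea, in plain Python: the function names, metric keys, and thresholds here are illustrative stand-ins, not the project's actual pipeline code.

```python
# Hypothetical promotion gate for a progressive (canary-style) rollout:
# traffic steps are promoted one at a time, and the rollout is aborted
# the moment monitoring reports metrics outside the error/latency budget.

def should_promote(metrics: dict, max_error_rate: float = 0.01,
                   max_p99_latency_ms: float = 500.0) -> bool:
    """Gate a rollout step on error rate and tail latency."""
    return (metrics["error_rate"] <= max_error_rate
            and metrics["p99_latency_ms"] <= max_p99_latency_ms)

def rollout(steps, fetch_metrics):
    """Promote traffic steps in order; roll back on the first bad step.

    steps:         ordered traffic fractions, e.g. ["10%", "50%", "100%"]
    fetch_metrics: callable returning the monitoring snapshot for a step
    """
    promoted = []
    for step in steps:
        metrics = fetch_metrics(step)
        if not should_promote(metrics):
            return promoted, "rolled_back"
        promoted.append(step)
    return promoted, "promoted"
```

In a real pipeline the `fetch_metrics` callable would be backed by Prometheus or CloudWatch queries; the point of the sketch is only the gating logic, which is what makes the deployment "zero-downtime" in practice.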
**LLM Pipeline Evaluation**
A comparative analysis system testing Direct API, RAG, and Agentic AI workflows using the SQuAD 2.0 dataset.
- Tech Stack: Python, LangChain, Pinecone (Vector DB), OpenAI GPT-4
- Key Highlight: Evaluated cost vs. reliability trade-offs across AI architectures, showing that while RAG/Agentic pipelines consume ~3x more tokens than direct API calls, they establish a "performance floor" that sharply reduces hallucinations.
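The ~3x token overhead above comes from the retrieved context a RAG prompt carries on top of the question itself. A toy illustration of where that multiplier comes from: the whitespace "tokenizer", prompt templates, and passages below are simplified stand-ins, not the project's actual evaluation harness.

```python
# Illustrative comparison of input-token cost for a direct prompt vs. a
# RAG prompt that prepends retrieved context. A real tokenizer (e.g. the
# model's own BPE) would give different absolute counts, but the same
# multiplier effect.

def count_tokens(text: str) -> int:
    """Crude proxy for a tokenizer: whitespace-delimited words."""
    return len(text.split())

def direct_prompt(question: str) -> str:
    return f"Answer the question. If unsure, say so.\nQ: {question}"

def rag_prompt(question: str, retrieved_chunks: list) -> str:
    context = "\n".join(retrieved_chunks)
    return (f"Answer using ONLY the context below; otherwise say 'unknown'.\n"
            f"Context:\n{context}\nQ: {question}")

q = "When was Stony Brook University founded?"
# Two hypothetical retrieved passages standing in for vector-DB results.
chunks = ["Stony Brook University was founded in 1957 as a college "
          "for the preparation of secondary school teachers."] * 2

direct_cost = count_tokens(direct_prompt(q))
rag_cost = count_tokens(rag_prompt(q, chunks))
# rag_cost is several times direct_cost; the grounding instruction plus
# retrieved context is exactly what buys the "performance floor".
```

The trade-off in the highlight falls out directly: every retrieved chunk adds tokens (cost), but also constrains the model to answer from evidence (reliability).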
