I fell in love with programming in college and never stopped. What started as Java evolved into Go, Scala, and Python — not because the market demanded it, but because every new language changed how I think about problems. I don't identify with one stack. I identify with the craft of building things that work under pressure.
Over 17 years, I've moved across five industries — not because I was restless, but because I was curious. Each domain taught me a constraint I couldn't learn any other way: fintech taught me precision, automotive taught me real-time, telecom taught me scale, and cybersecurity taught me that nothing is ever truly secure.
What I love most is the blank whiteboard. The moment before a single line of code exists, when the architecture is still a question. Which language? Which message broker? Monolith or microservices? Those decisions shape everything that follows, and I take them seriously.
At 37, I made a bet. I went back to school for an M.Sc. in AI/ML because I believed the next decade of engineering would look nothing like the last. Two years later, I graduated with Distinction, specialising in Generative AI & Agentic AI, with a thesis under review for Springer.
That bet is paying off. I now see AI not as a separate discipline but as an architecture problem — choosing the right model, designing cost-efficient evaluation pipelines, building systems where the retrieval layer matters as much as the model. The same instincts that helped me design backend systems for 17 years now help me design AI systems.
I write about what I learn. I build tools that scratch my own itch. I use AI coding assistants daily — not as a crutch, but with workflows that keep the code honest. I'm not just an engineer who learned AI, and I'm not just an AI student who can code. I'm the bridge — and that's where the most interesting problems live.
Java, Go, Scala, Python — production code across all four. Spring Boot, Play Framework, Kafka, Elasticsearch. I don't have a comfort zone; I have a toolkit.
AI-CodeMedic: LLM-powered AIOps debugging engine — auto-scans logs, diagnoses bugs, generates PRs. HackWeek 2nd place, actively being evaluated for production. Java 21 + Spring Boot 3.x + OpenAI APIs.
RAG semantic search (LangChain, FAISS, ChromaDB), gesture recognition CNNs (94% accuracy), NLP recommendation systems, melanoma detection.
17 years of production engineering + 2 years of formal AI education. Building toward production AI systems — from thesis research to AIOps tools to daily AI-augmented development.
Java, Go, Scala, Python — production systems across five industries. I've seen what breaks at scale and know how to build so it doesn't.
Designed systems in cybersecurity, mobile security, fintech, automotive IoT, and telecom. Each domain taught a different constraint — latency, compliance, real-time processing, scale. I bring that cross-industry lens to every whiteboard session.
Started writing code at TCS. Led teams at Globant. Architected platforms at Lookout and F-Secure. At every stage, I shipped on deadline — 45-day sprints delivered early, zero SLA breaches, teams of 5 to 15 unblocked and aligned.
Not a weekend course. Two years of formal education in GenAI, Agentic AI, deep learning, and NLP. Thesis under review for Springer. I didn't just learn AI — I researched it.
My thesis optimised LLMs for $51 instead of $2,000+. I bring the same instinct to every AI decision — what's the cheapest way to get the best result without cutting corners?
Most AI engineers lack production experience. Most production engineers lack AI education. I have both — and the enthusiasm to bring them together at scale.
Architecting migration into unified platform. Dual auth protocols (PKCE + HMAC-SHA256). Cloud costs down 20%.
Real-time threat monitoring on Kafka. 99.9% availability. 50% more volume, 10% less cost. 40% faster alerts.
AI-CodeMedic: LLM-powered AIOps engine. HackWeek 2nd place. Being evaluated for production adoption.
Breach reporting platform. 25% faster retrieval. 50% fewer disruptions. 5–15 engineers led. Delivered ahead of schedule.
The part I love most is the blank whiteboard. Evaluating which language fits the problem — Java for enterprise reliability, Go for concurrency, Python for rapid ML prototyping. Choosing between Kafka and RabbitMQ based on throughput needs. Deciding whether a monolith serves better than microservices for the current scale. Every architectural choice is a bet on the future, and I take those bets seriously.
Evaluated 40+ services to decide whether to retain, integrate, or overhaul each one. Designed migration frameworks in collaboration with product leads and engineering managers. Each service got a different technology decision based on its role in the unified platform.
Designed two competing auth protocols: session-based PKCE with Redis caching and stateless HMAC-SHA256 with hybrid salt protection. Documented attack vectors (replay, timing, reverse engineering) with mitigation strategies.
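For illustration, here is roughly what the stateless HMAC-SHA256 variant looks like in miniature: a shared secret, a nonce and timestamp to resist replay, and a constant-time digest comparison to resist timing attacks. The header names and canonicalisation are a simplified sketch, not the production protocol, and Python stands in for the actual stack; nonce-replay tracking and the hybrid salt scheme are omitted.

```python
import hashlib
import hmac
import os
import time

def sign_request(secret: bytes, method: str, path: str, body: bytes) -> dict:
    """Produce illustrative auth headers: a random nonce plus a timestamp
    resist replay; the HMAC-SHA256 over the canonical request binds both
    to the payload. Header names here are invented for the sketch."""
    nonce = os.urandom(16).hex()
    timestamp = str(int(time.time()))
    canonical = b"\n".join([method.encode(), path.encode(),
                            nonce.encode(), timestamp.encode(), body])
    signature = hmac.new(secret, canonical, hashlib.sha256).hexdigest()
    return {"X-Nonce": nonce, "X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(secret: bytes, method: str, path: str, body: bytes,
                   headers: dict, max_skew: int = 300) -> bool:
    """Reject stale timestamps (replay mitigation) and compare digests with
    hmac.compare_digest, which runs in constant time (timing mitigation)."""
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False
    canonical = b"\n".join([method.encode(), path.encode(),
                            headers["X-Nonce"].encode(),
                            headers["X-Timestamp"].encode(), body])
    expected = hmac.new(secret, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])
```

The key design point is that any tampering with the method, path, nonce, timestamp, or body invalidates the signature, while the server stays stateless: no session store, just the shared secret.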
Architected from scratch: threat intelligence APIs, Kafka for streaming, webhooks for alerting. Chose Kafka over RabbitMQ for throughput at scale. 40% faster detection-to-notification. 50+ client customisations in 45 days.
Built a subscription service delivering real-time breach reports at 99.95% uptime. Integrated with a headless CMS for content management. Optimised the database schema, cutting retrieval times by 25% and supporting 30% user growth. Failure-recovery improvements reduced disruptions by 50%. Delivered 5 days ahead of deadline.
Applied architectural thinking to AI. Two-tier model strategy: a cheap model for exploration (52,844 evaluations), an expensive model for validation of the top candidates. Five-stage progressive filtering eliminated 95%+ of weak candidates early. The architecture decision itself is what made $51 work.
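The shape of that pipeline can be sketched as follows. The scoring functions stand in for the cheap and expensive models, and the stage thresholds are illustrative placeholders, not the thesis parameters; the point is the cost structure, where the expensive scorer only ever sees the small surviving pool.

```python
from typing import Callable

def two_tier_filter(candidates: list[str],
                    cheap_score: Callable[[str], float],
                    expensive_score: Callable[[str], float],
                    stage_keep: list[float],
                    final_top_k: int = 5) -> list[str]:
    """Progressively shrink the pool with the cheap scorer, then spend
    the expensive scorer only on whatever survives the early stages."""
    pool = candidates
    for keep_frac in stage_keep:
        # cheap exploration stage: rank and keep only the top fraction
        pool = sorted(pool, key=cheap_score, reverse=True)
        pool = pool[:max(1, int(len(pool) * keep_frac))]
    # expensive validation stage: re-rank the small surviving pool
    pool = sorted(pool, key=expensive_score, reverse=True)
    return pool[:final_top_k]
```

With, say, four cheap stages each keeping half the pool, the expensive model evaluates only a few percent of the original candidates; total cost is dominated by cheap calls, which is the trade the thesis exploits.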
This is where I'm headed — bringing 17 years of system design instincts to AI/ML architecture. Choosing the right model for the task. Designing evaluation pipelines that don't burn budget. Building RAG systems where the retrieval architecture matters as much as the model. The patterns transfer. The thinking scales.
LJMU M.Sc. Thesis • 2025 • Under review for Springer publication
Five-stage automated methodology that achieved statistically significant LLM improvements (p < 0.001, Cohen’s d > 0.8) for $51.12 total — 40x cheaper than fine-tuning. Two-tier model strategy (Claude Haiku for exploration, Sonnet for validation) with progressive filtering. No specialised hardware required.
Technical articles on backend engineering, AI integration, and lessons from 17 years of shipping code.
Writing about what I learn — from optimising LLMs on a budget to designing authentication protocols that resist timing attacks. The blog is the thinking out loud.
Read on Medium →

Open-source projects, AI experiments, and the code behind the blog posts.
From sentiment-based recommendation systems to LLM tooling experiments. The repo is the proof of work.
View on GitHub →

I've spent 17 years proving I can build. I went back to school at 37 because I believed the next decade of engineering would look nothing like the last. Now I'm looking for the kind of role where engineering depth meets strategic impact — where I can architect systems today, shape engineering culture for the long term, and think well beyond the next sprint.
The right role will find its own title. Let's talk.