About me / More
I first delved into programming in 2018, when I got my hands dirty with JavaScript on Khan Academy. It started as a hobby on the side, but it quickly grew into a fascination I could never get enough of.
That fascination surged when I came across React.js in late 2020. It was thrilling at first: the idea of creating components I could update by changing a single variable changed everything for me. Soon, I found myself spending hours on end toying with side projects, just for the fun of it.
I first touched language models and machine learning in early 2024, when a wave of really good open-source language models was being released. It was mesmerising, watching these seemingly magical tools write text and talk to you; this was the first time I had encountered a truly versatile piece of intelligence that I could mould to my needs, in any workflow I could dream up. It wouldn't be until late 2024 that I learned the essence of how neural networks and transformers work, and how to train them.
I trained my first neural networks in October 2024, using random datasets I'd come across on the internet, such as anonymised healthcare data and employment trends, to perform simple field-based regression and prediction. My first transformer was a fine-tune of sentence-transformers/all-MiniLM-L6-v1: a simple embedding model tuned to accurately classify similar terms and domain-specific jargon.
These days, I spend my free time experimenting with open-source models and agentic architectures, taking inspiration from the new inventions and discoveries I come across.