Hallucination is Structural, Not Accidental
The standard framing treats LLM hallucination as a bug. Train better. Prompt better. Retrieve better. The bug will eventually go away.
There’s a class of problems in ML that most supervised learning frameworks can’t handle cleanly: you know the label for a group of examples, but not for any individual one.
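This group-versus-instance setup is usually called multiple-instance learning. As a concrete, hedged illustration, here is one standard workaround in PyTorch: score each instance, pool the scores within a bag, and supervise only the pooled result. The max-pooling assumption, the `BagClassifier` name, and all shapes are illustrative choices of mine, not from the post.

```python
# Minimal sketch of learning from group-level labels (multiple-instance
# learning). All names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn

class BagClassifier(nn.Module):
    """Scores instances individually, then pools to one bag-level logit."""

    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)  # per-instance logit

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (num_instances, dim) -> scalar logit for the whole bag
        instance_logits = self.scorer(bag).squeeze(-1)
        # Max pooling encodes the standard assumption: the bag is
        # positive iff at least one instance is positive.
        return instance_logits.max()

model = BagClassifier(dim=16)
loss_fn = nn.BCEWithLogitsLoss()

bag = torch.randn(10, 16)       # ten instances, none individually labeled
bag_label = torch.tensor(1.0)   # one label for the whole group
loss = loss_fn(model(bag), bag_label)
loss.backward()                 # gradients flow only through the pooled score
```

The design choice that carries the weight here is the pooling operator: max pooling credits a single decisive instance, while mean pooling spreads credit across the whole bag.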
When you fuse audio and text representations, the obvious approach is to encode both independently and then concatenate or cross-attend. It works. But it misses something important: the correspondence between what was said and how it was said, at the same moment in time.
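For concreteness, a sketch of that "encode independently, then fuse" baseline in PyTorch. The shapes, dimensions, and module settings are assumptions for illustration; the point is what each fusion step does and does not preserve.

```python
# Illustrative late-fusion baseline: two independently encoded streams,
# fused after the fact. All shapes here are assumptions.
import torch
import torch.nn as nn

text = torch.randn(2, 12, 256)   # (batch, text tokens, dim) from a text encoder
audio = torch.randn(2, 50, 256)  # (batch, audio frames, dim) from an audio encoder

# Variant 1: pool each stream and concatenate. Any token-to-frame timing
# is gone once each modality is summarized to a single vector.
fused_concat = torch.cat([text.mean(dim=1), audio.mean(dim=1)], dim=-1)  # (2, 512)

# Variant 2: cross-attend text queries over audio frames. Alignment is
# learned after the fact, between streams that were encoded without any
# knowledge of each other, which is the gap the excerpt points at.
xattn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)
fused_xattn, _ = xattn(query=text, key=audio, value=audio)  # (2, 12, 256)
```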
The hallucination survey went live on MetaArXiv in March. I want to write about what the process was actually like before the memory fades.
There’s a version of the story I told about myself for a while that went like this: I spent years doing applied work in industry before getting serious about research. That framing treated the industry work as a detour — something to acknowledge and move past.
Nobody teaches you how to read a paper. You’re expected to figure it out, and most people do eventually, but the path is inefficient and kind of humbling.
I had my first real research meeting with Prof. Paulik in late August. Not a class, not office hours — a working meeting about a project I was contributing to. I want to write down what it felt like before I forget.
I started at DASION in 2021 as a high school intern. By the time I enrolled at Berkeley this month, I had spent three years building ML systems that actually ran in clinical settings: models that processed real patient data, infrastructure that stayed up 99.9% of the time, pipelines that clinicians depended on. I thought that experience would translate directly to research.