“The most exciting thing I’ve done this year is reduce a model’s inference time by 400 milliseconds,” he says with a straight face. “Four hundred milliseconds. That is the difference between a human staying in a flow state or tabbing out to check Twitter.”
If you work in enterprise software, there is a decent chance you have already used a system he helped design. Known in industry circles as a "translator" between raw computational power and tangible business value, Dintakurthi has carved out a niche that most engineers avoid: the messy, beautiful, frustrating space where humans actually have to click the buttons. Dintakurthi’s philosophy is simple yet radical for a technologist of his caliber: AI should not be the hero of the story; the user should be.
“A self-driving car that makes a mistake is a headline,” he explains, leaning back in his chair. “An AI assistant that makes a decision for a CFO and gets it wrong? That’s a catastrophe. We don’t need more automation; we need better augmentation.”
Currently, he is working on a stealth project involving "Inverse Reinforcement Learning"—teaching AI to understand human values by watching what humans actually do, rather than what they say they do. It is a subtle distinction, but one that could finally bridge the gap between cold logic and human intent.
His recent work focuses on what he calls "Ambient Intelligence"—AI that doesn’t demand attention but provides context exactly when needed. While many of his peers chase the glitter of Generative AI and autonomous agents, Dintakurthi focuses on the hard problem of control.
That obsession with friction has led to a design principle now informally named after him within his team: Dintakurthi’s Threshold—the idea that any AI interaction slower than a human’s instinct to give up is a failed interaction.