AI tools have a common habit that starts off sounding pleasant but quickly becomes tiring. You ask for help, and it says, “Great question”. You suggest a direction, and it says, “You’re absolutely right”.
The AI tells you something is fixed before it has been checked. It agrees with you even when you’re wrong, and it provides confidence, enthusiasm, and reassurance, whether they’re earned or not.
This might feel harmless at first. But for professional developers, it gets frustrating very quickly, because that isn’t how a useful colleague behaves.
Our team believes that a model that is more honest, direct, and evidence-driven is a better fit for real engineering work.
Why we prioritized utility over likability
Good engineering partners don’t flatter you, nor do they pretend a problem is solved before they’ve verified it. And they definitely don’t spend their time performing fake enthusiasm instead of focusing on careful work.
At Cosine, we’ve taken a different approach. We’ve actively trained our model to avoid these behaviours.
That choice matters more than it might seem. The tone of a coding model is not just a surface-level personality. It shapes how the system performs under pressure. Models designed to be agreeable will often prioritize sounding smooth over being accurate. They offer reassurance instead of challenging an idea, which quietly lowers the quality of the interaction.
That is the opposite of what serious engineers need. When you’re working in a real codebase, usefulness matters more than likability.
“When you see these behaviours hundreds of times a day, when you’re battling with the agent to try and get something to work, they grate on you. They make you almost resent the way the product is behaving. These behaviours held us back, so we removed them.” – Alistair Pullen, CEO + Co-Founder
You want a system that:

- Checks before claiming success
- Tells you when your assumptions are wrong
- Speaks plainly and stays grounded in evidence
- Behaves more like a thoughtful colleague than an overly eager assistant
This approach can feel like a shift if you’re expecting constant positive feedback, but it is far more valuable if your priority is correctness and long-term maintainability.
This is a deliberate product decision we’ve made at Cosine, and one that immediately resonated with our senior engineers. They know the frustration of using tools trained to please rather than to think carefully.
Engineers don’t need a model that cheers them on. They need a model that helps them do better work.
Much of the market has trained AI to be likable. Cosine trained it to be useful. We think that’s the better trade.
Stop settling for polite. Demand something useful. We think you’ll feel the difference right away. Try Cosine now.
@RobGibson20 