12 Comments
Hitesh Joshi

I'm just wondering: if we simulated the development of SQLite over 26 years and gave the LLM step-by-step directions on the design decisions they made, would it still produce plausible code over correct code?

Hōrōshi バガボンド

solid followup topic!

jane madden

1000% the best read I’ve seen on the most critical problem facing agentic programming in 2026.

Hōrōshi バガボンド

thanks! appreciate it. took a lot of time to compile and dig into

Jack Timonen

Out of interest, how long did it take? :)

Hōrōshi バガボンド

Been researching this on the side for some weeks now. Then, just recently, I stumbled across some pieces that I think made it "pop".

I've been integrating LLMs into my workflows for a few months myself now, and all these little quirks and incidents you run into here and there amounted to me going on a quest to find out what's up.

Jack Timonen

Makes sense, thanks for sharing.

Ahmet Sezen

This is an amazing article, thank you.

Noah's Titanium Spine

So much great analysis to reach the wrong conclusion

> LLMs are useful.

No, they are not.

Hōrōshi バガボンド

That's a bit absolute for my taste. There's definitely lots of cool and fun stuff you can play around with on the side just by prompting here and there. But the higher up the ladder your skillset is, the more diminishing the returns you get. I agree that there's little to no point in prompting them with a fully laid-out spec to write a database when you're already a DB expert. You'd basically have to shove all your knowledge down its throat first, so it's probably easier to just go ahead and do it yourself.