
Mind the Gap: LLMs work best when you are the bottleneck (and aware of it)


There’s this theory that a group of people acts only slightly smarter than the dumbest of them.

I’m not aware of any empirical studies on this, but anecdotally, it seems to hold true. The idea behind that concept is that the intelligence in decision-making (substitute knowledge/experience/skill as needed) of a group, as a whole, is bottlenecked by its least capable (smart/experienced/skilled etc) member, but, crucially, the group will also be able to impart some of its wisdom (experience etc…) on that member.

To break it down in practical terms: If a group is to make a unanimous decision, the decision will be the smartest one that all members still understand, which will be somewhat smarter than the best decision its dumbest member could’ve come up with alone. Thus, the lower threshold will be approximately weakestMembersSkill + Math.min(transferableKnowledgeInGroup, weakestMembersCapacityForGrowth), under ideal conditions.
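That lower bound can be sketched in code. This is a toy model, not a real formula — the skill values are on an arbitrary made-up scale, and the function name is hypothetical, but the variable names match the prose above:

```javascript
// Toy sketch of the lower bound described above.
// All skill values are on an arbitrary scale; the names mirror the prose.
function groupDecisionFloor(memberSkills, transferableKnowledgeInGroup, weakestMembersCapacityForGrowth) {
  const weakestMembersSkill = Math.min(...memberSkills);
  // Under ideal conditions, the weakest member absorbs as much of the
  // group's transferable knowledge as their capacity for growth allows.
  return weakestMembersSkill +
    Math.min(transferableKnowledgeInGroup, weakestMembersCapacityForGrowth);
}

// The weakest member (skill 3) can only grow by 2, even though the
// group has 4 units of transferable knowledge: the floor is 3 + 2 = 5.
console.log(groupDecisionFloor([3, 7, 9], 4, 2)); // 5
```

The point of the `Math.min` is that the group’s teaching only helps up to the weakest member’s capacity to absorb it — extra group knowledge beyond that is wasted on the decision at hand.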

Keep that in mind, it will come in handy in life.

The same holds true for a group consisting of two participants, even if one of them is a large language model. If you ever do anything important or even of consequence using LLMs (“AI”), you should be aware of the skill hierarchy between you and the LLM at every step along the way.

If you’re trying to handle something way out of your depth, you will be unable to validate the results you’re getting. You are not going to solve problems in particle physics using ChatGPT if you’re stuck on third-grade arithmetic. More importantly: You absolutely should not have an LLM writing business-critical code for you if you wouldn’t have been able to write the same code yourself. The dangers cannot be overstated.

If you find yourself having to explain the basics of what you want to achieve to the LLM over and over again, it’s time to reconsider as well. Are you sure the time spent arguing with Claude so far hasn’t already exceeded the time of “just doing this tedious thing by hand”? Maybe you’re better off doing it yourself for now. Just do the thing. I know it can suck, but just go do it.

Ideally, you want to reach a sweet spot where you’re skirting along the upper limits of your knowledge. You know what you’re doing, you know why and you know how; it’s just that one detail you’re unaware of. Most importantly, you should be confident in your ability to identify wrong answers.

No matter if you take your advice from an LLM or a human: If you learn one new thing, great. If you learn nothing new, well, it happens. If you feel like you learned twenty new things, better be very careful when applying that knowledge!

Good examples of what to go for: You know the programming language, and want to get into a new framework for it. You know how to mix decent cocktails, and are looking for new ideas based on the bottles at hand. You’ve built your own furniture before, and are wondering what kind of joint would be best for this specific project.

Be aware of where you sit on the Dunning-Kruger graph — and be aware of where the LLM is at.
