The best output given two inputs
Once upon a time, I had an interview question that went something like this. You are trying to predict some event $E$, and you have a model $A$ with which to do so. Suppose there is some other model $B$. How should you personally value model $B$?
Here is the naive answer. Suppose $P_A$ and $P_B$ are the performances of models $A$ and $B$, respectively. (Maybe $P_A$ is the amount of money that $A$ makes you.) Then the value of model $B$ is $P_B - P_A$ if $P_B > P_A$, and otherwise $B$ is worthless to you. Basically, the value of $B$ is how much better $B$ does than $A$.
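In symbols (my notation, not part of the original question):

$$\operatorname{value}_{\text{naive}}(B) = \max(P_B - P_A,\, 0).$$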
But this is very much the wrong answer. The value of $B$ is based on the best model you can get by combining $A$ and $B$. Let me illustrate this for you with a concrete example.
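Written the same way (again, my notation), the better valuation compares the best combination against $A$ alone:

$$\operatorname{value}(B) = \max_{f}\, P_{f(A,B)} - P_A \;\ge\; \operatorname{value}_{\text{naive}}(B),$$

where $f$ ranges over ways of combining the two models. The inequality holds because "just use $B$" and "just use $A$" are both valid combinations.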
Suppose I have a model $A$ to predict yes/no events. It is correct only $2/3$ of the time. Suppose you and your friend have two models $B$ and $C$ which do the same. Individually, they are only modestly better than guessing. However, the models have an interesting relation: the majority vote of the three models is always correct. Simply put, if at least two of the models think the answer is “yes”, then it is “yes”. Likewise for “no”. Concretely, across three equally likely scenarios, the models fare like this:
- $A$: correct, correct, incorrect
- $B$: correct, incorrect, correct
- $C$: incorrect, correct, correct
Do you see how, by acquiring $B$ and $C$, models which individually do just as well as $A$, you get a combined model that does much better?
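If you want to check this yourself, here is a minimal simulation sketch in Python. The correct/incorrect pattern is the table above; the equal scenario probabilities and all the names are my own framing.

```python
import random

# The three scenarios from the table above: in each one, exactly
# two of the three models are correct. I assume the scenarios are
# equally likely, which makes each model individually 2/3 accurate.
SCENARIOS = [
    {"A": True,  "B": True,  "C": False},
    {"A": True,  "B": False, "C": True},
    {"A": False, "B": True,  "C": True},
]

def simulate(trials: int = 100_000) -> None:
    hits = {"A": 0, "B": 0, "C": 0, "majority": 0}
    for _ in range(trials):
        scenario = random.choice(SCENARIOS)
        for model, correct in scenario.items():
            hits[model] += correct
        # At least two models are right in every scenario, so the
        # majority vote is right every single time.
        hits["majority"] += sum(scenario.values()) >= 2
    for name, count in hits.items():
        print(f"{name}: {count / trials:.3f}")

simulate()
# Expected output: A, B, C each near 0.667; majority at 1.000
```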
Now this is a bit of a contrived example. But the point remains the same. This is quite literally a mathematical example of the power of teamwork.
What’s the upshot? A lot of people implicitly hold the naive view that because they are worse than someone else in their group or team, they hold no value in it.¹ Two years ago, I wrote about how I don’t believe this is the case for creative fields. Two years later, I am glad to report that this is the case even for fields with objectively right answers. That group test? Unless Alice is literally perfect, A-student Alice and B-student Bob together do better than Alice could alone, even though Alice gets better scores than Bob on every test.
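To put illustrative numbers on this (mine, not from any real test): suppose Alice answers any given question correctly with probability $0.9$ and Bob, independently, with probability $0.8$. If working together means a question is answered correctly whenever either of them knows it, the pair scores $1 - (0.1)(0.2) = 0.98$, beating Alice’s $0.9$ alone. The gap only vanishes when Alice’s $0.9$ becomes $1$.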
On a related note, there are some people who believe that generative AI will render humans completely useless. Again, this is not the case. Even if a moron with generative AI can produce code faster than a skilled software engineer working alone,² that skilled software engineer will be leagues more useful with generative AI.
1. Some people cope with this by lying to themselves and pretending it’s rare for someone to just be better than you in every way (in some area). But if you’ve ever talked to anyone smart, you know this happens all the time.↩︎
2. And based on reports of “workslop”, this is decidedly not the case in the medium to long run.↩︎