2026 Apr 21
- AI has limits, even if many AI people can't see them - “He describes some famous results from the research of psychologist Paul Meehl on medical and other decisions, which suggested that 'statistical prediction provided more accurate judgments about the future than clinical judgments' under certain conditions. But the conclusion that Ben comes to is not that statistical prediction is generally better than expert judgment. Instead, it is better only when there are clearly defined outcomes, good data, and clear reference cases that can be used for comparison. There are many situations in which this is not true, and cannot readily be made true.”
- everything is a nail, or at least it ought to be - “As he puts it (following Paul Meehl), algorithmic decision making is always going to have the evidence on its side, because once you have put the problem in terms of the kinds of things which can be measured and defined a specific success metric - once there is any standard of evidence with which to judge the results - then 'optimisation' means what it says. Anything you do differently from the output of an optimiser is … suboptimal.
But this often means that all the work is done in deciding what to measure and what the optimand should be, what counts as evidence and what as a test. Not only is that process a great way to put your thumb on the scale without leaving fingerprints, but a lot of the time things get measured because they are convenient to measure, rather than for any particularly principled reason. As I’ve constantly said in econometric contexts, the easiest way to find a valid instrument for an unobservable quantity is simply to lower your standards.”
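  The point that the choice of optimand does all the work can be made concrete with a toy sketch (the candidates and metrics here are my own invented illustration, not from the quoted post): the same options, run through the same optimiser, produce different “optimal” answers depending solely on which metric was chosen up front.

  ```python
  # Hypothetical candidates with two measurable attributes.
  candidates = {
      "A": {"accuracy": 0.92, "cost": 30},
      "B": {"accuracy": 0.88, "cost": 10},
      "C": {"accuracy": 0.95, "cost": 80},
  }

  # Optimand 1: maximise raw accuracy.
  best_by_accuracy = max(candidates, key=lambda k: candidates[k]["accuracy"])

  # Optimand 2: maximise accuracy per unit of cost.
  best_by_value = max(
      candidates, key=lambda k: candidates[k]["accuracy"] / candidates[k]["cost"]
  )

  print(best_by_accuracy)  # C
  print(best_by_value)     # B
  ```

  Each answer is genuinely optimal with respect to its own metric, so any deviation from it is, by construction, suboptimal; the substantive decision was smuggled into the line that defined the `key`.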
- LLMs - An interesting observation here: LLM code ingestion doesn’t appear to distinguish good examples from bad ones, and tends towards examples that merely work at all. But both good and bad examples have a lot to teach, as long as we keep clear which one we’re looking at.