“
Part of the issue is the characterisation of generative AI as a human replacement. This makes people treat the tool as a hyperintelligent magical being that deserves reverence. Recent research, however, shows that AI tools get the law wrong between 69 and 88 per cent of the time, producing 'legal hallucinations' when asked 'specific, verifiable questions about random federal court cases'. A human lawyer or judge with that kind of error rate would undermine public faith in justice. Automation bias means we are more likely to believe the machine than the person who questions it, but also more likely to cut it some slack when we know it has got things wrong. Automation bias's little sibling, automation complacency, means that we are also less likely to check the output of a machine than that of a human. The problem is not the technology; it is the human perception of it, which leads us to put it to utterly unsuitable uses, and that is what makes it dangerous.
”