Tyler Cowen, on Russ Roberts' podcast EconTalk in early 2023, spoke of the lack of models that might explain how exactly AI could pose an existential risk to humanity:
So, if you look, say, at COVID [coronavirus disease] or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data. So, when it comes to AGI [artificial general intelligence] and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out.
So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.' The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety. It's a bad practice in virtually any theory of risk communication.
And then, for some individuals, at the end of it all, you scream, 'The world is going to end.' Other people come away, 'Oh, the chance is 30% that the world will end.' 'The chance is 80% that the world will end.' A lot of people have come out and basically wanted to get rid of the U.S. Constitution: 'I'll get rid of free speech, get rid of provisions against unreasonable search and seizure without a warrant,' based on something that hasn't even been modeled yet.
So, their mental model is so much: 'We're the insiders, we're the experts. No one is talking us out of our fears.'
My mental model is: There's a thing, science. Try to publish this stuff in journals. Try to model it. Put it out there, we'll talk to you. I don't want to dismiss anyone's worries, but when I talk to people, say, who work in governments who are well aware of the very pessimistic arguments, they're just flat out not convinced for the most part. And, I don't think the worriers are taking seriously the fact they haven't really joined the dialogue yet.
https://www.econtalk.org/tyler-cowen-on-the-risks-and-impact-of-artificial-intelligence/#audio-highlights