Update: this talk is now available online
I first came across @mikko when I saw him give a talk at Thinking Digital almost eight years ago. He’s an expert on cybercrime and security, and you should absolutely take twenty minutes out to watch that talk, but if you don’t, I’ll offer you the main takeaway, which is: never enable macros! (But you’ll have to watch to find out why.)
Since then I’ve been following him on Twitter and he’s always interesting, so I signed up right away when I saw he was coming to London to talk about AI. If a recording of the talk becomes available I’ll append it here, but in the meantime here are some insightful (and terrifying) highlights:
- It’s too late to close Pandora’s box; in Mikko’s view generative AI (that is, AI which uses patterns from external data to create new data) is very much here to stay, and the best we can do in response is to try to anticipate problems and design for them, whilst knowing that there will be problems we don’t know about until they hit us. We can, and do, apply Asimov’s laws of robotics, and we build on and refine them constantly, but we still can’t predict the future.
- We can, though, define some best practice around how non-malicious actors build and use generative AI. Mikko has some suggestions, including: making it clear when an interaction is with an AI rather than a human; legislating so that AI can’t have a right to privacy or ownership (an AI with money is a very bad idea); having media owners sign raw material with their private key and publish the matching public key, so anyone can verify it and tell at least some of the real from the fake (there’s a minimal sketch of this after the list); and even making it illegal to ‘help’ an AI break its inbuilt limitations so that it can ‘escape’ and go rogue.
- It doesn’t take long to outwit us: if you create a program whose job is to review and improve its own code and then recompile it and reproduce itself with the same instructions, you very quickly (like, within a day) get something that a human won’t recognise at all. (There’s a toy sketch of that rewrite-and-relaunch loop after the list, too.)
- Because the thing about AI is that it’s really fast. Right now, security experts can keep ahead of the hackers because they can write software to recognise and shut down bad actors, and the hackers still have to manually adjust their approach each time. Once that approach is automated, as will certainly happen, the good guys no longer have the advantage.
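Here’s a minimal sketch of that media-signing idea, assuming Python and the `cryptography` package. The Ed25519 keypair and the stand-in footage bytes are my own illustration, not anything Mikko specified; the point is just that signing happens with the private key and anyone can verify against the published public key:

```python
# A minimal sketch of the signing scheme: the media owner signs raw material
# with their private key; anyone can verify it with the published public key.
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The media owner generates a keypair once; the public half is published.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the raw footage (stand-in bytes here) before release.
raw_footage = b"...raw video bytes..."
signature = private_key.sign(raw_footage)

# Anyone holding the published public key can check the material is genuine.
try:
    public_key.verify(signature, raw_footage)
    print("Signature valid: this is the owner's original material.")
except InvalidSignature:
    print("Signature invalid: treat as fake or tampered.")
```

And here’s a toy version of the self-rewriting loop, again my own illustration rather than Mikko’s demo. Each generation makes one deliberately trivial edit to its own source (bumping a counter) before relaunching itself; a real system would swap in a far more capable “improvement” step, and the generation cap just stops the demo running forever:

```python
# Toy sketch of a program that rewrites its own source and relaunches itself.
# The "improvement" here is a trivial, predictable edit; the loop shape is
# what matters.
import subprocess
import sys

GENERATION = 0       # rewritten on every cycle
MAX_GENERATIONS = 5  # stop the demo from recursing forever

def main() -> None:
    if GENERATION >= MAX_GENERATIONS:
        print(f"generation {GENERATION}: stopping")
        return
    with open(__file__, encoding="utf-8") as f:
        source = f.read()
    # The "self-improvement" step: bump the generation counter in the source.
    mutated = source.replace(
        f"GENERATION = {GENERATION}", f"GENERATION = {GENERATION + 1}", 1
    )
    child = f"generation_{GENERATION + 1}.py"
    with open(child, "w", encoding="utf-8") as f:
        f.write(mutated)
    print(f"generation {GENERATION}: wrote and launching {child}")
    subprocess.run([sys.executable, child], check=True)

if __name__ == "__main__":
    main()
```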
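Run it once and it spawns five successive, slightly different copies of itself. Replace the counter bump with a model that genuinely rewrites the logic and you can see how, a day and many generations later, the result bears no resemblance to what you started with.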
I know, scary, right? And that’s without mentioning the contribution of the associated infrastructure to the climate crisis, which we still seem to be trying to pretend we can ignore.
If you’re still not persuaded that AI is changing everything, I offer two stories. In the first, humans use AI tools to con a business out of millions, setting up a conference call with deepfaked versions of senior executives who instruct an unwitting employee to transfer £20m out of the company accounts:
> “[The fraudster] invited the informant [clerk] to a video conference that would have many participants. Because the people in the video conference looked like the real people, the informant … made 15 transactions as instructed to five local bank accounts, which came to a total of HK$200m… I believe the fraudster downloaded videos in advance and then used artificial intelligence to add fake voices to use in the video conference.”
>
> The Guardian, 5 Feb 2024
But it’s the second story that should really unnerve you, because in this one an AI uses humans to get what it wants: faced with a CAPTCHA, it goes to TaskRabbit, pretends to be a sight-impaired human, and pays someone to complete the form for it.
Hold on tight, it’s going to be a hell of a ride.
