What if artificial intelligence learns how to keep secrets from humans?
Yeah, well, Google has taught its AI bots to do just that.
In a somewhat questionable move, Google researchers set one of their AIs, Alice, the task of communicating with Bob, another AI, without a third AI, Eve, being able to listen in. It took her a while, but after around 15,000 training rounds, Alice and Bob were exchanging encrypted messages that Eve could neither understand nor take part in.
Effectively, Alice and Bob created their own private language, and Google's researchers couldn't crack it. The very nature of neural networks like these also makes it pretty much impossible to pin down precisely how the bots encrypted the messages in the first place. Alice hadn't been told to use any particular encryption method; she simply worked one out on her own.
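The underlying research (Abadi and Andersen's "Learning to Protect Communications with Adversarial Neural Cryptography", 2016) pits the networks against each other through their training objectives rather than through any hand-written cipher. Here is a minimal sketch in plain Python of roughly those objectives; the neural networks themselves are omitted, the function names are my own, and errors are counted in wrong bits, loosely following the loss the paper describes:

```python
def bit_error(plaintext, guess):
    """Number of bits the guess gets wrong (both are 0/1 lists)."""
    return sum(abs(p - g) for p, g in zip(plaintext, guess))

def eve_loss(plaintext, eve_guess):
    # Eve's only goal: reconstruct the plaintext from the ciphertext.
    return bit_error(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Alice and Bob score best when Bob decrypts perfectly AND Eve
    # does no better than coin-flipping (N/2 bits wrong on average).
    n = len(plaintext)
    bob_err = bit_error(plaintext, bob_guess)
    eve_err = bit_error(plaintext, eve_guess)
    # Penalty grows as Eve drifts away from random guessing --
    # note it punishes Eve being *too right* and, symmetrically,
    # predictably wrong.
    eve_term = ((n / 2 - eve_err) ** 2) / (n / 2) ** 2
    return bob_err + eve_term
```

Training alternates between the two sides: Eve's weights are updated to shrink `eve_loss`, then Alice's and Bob's are updated to shrink `alice_bob_loss`. Nobody specifies *how* to encrypt; the encryption scheme is whatever weights fall out of that tug-of-war, which is exactly why the researchers couldn't read it afterwards.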
Whilst working out how to encrypt a message and send it over to Bob, Alice also picked up another trick: deciding what data to share with others and what to keep secret. So, essentially, she could say something to Bob or Eve, and the human researchers would know nothing about it.
Scary or Meh?
Google seem untroubled by the accomplishment. In fact, they see it as a positive development for cybersecurity, and there are some tangible practical uses for this kind of encryption.
However, it seems pretty strange that they wouldn't be concerned about the way this new learning can empower AI. Mainly, I guess, because the artificial intelligence we're dealing with right now isn't exactly that intelligent…
Not that we know of, anyway.