Two of Silicon Valley’s most influential visionaries, Facebook CEO Mark Zuckerberg and Tesla CEO Elon Musk, this week engaged in a public spat over the future of artificial intelligence and whether the government should take the wheel to counter the threat this emerging technology might pose to mankind.
Musk, a longtime backer of AI ventures, has nonetheless expressed serious concerns over the years that the technology could advance faster than society can learn to manage it. He has warned that intelligent machines could pose a risk to civilization if not properly regulated.
“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” Musk told attendees at the National Governors Association meeting last week.
Zuckerberg, who spent part of Sunday fielding questions on Facebook Live while overseeing a backyard cookout, responded to a question about Musk’s views on AI by saying he was optimistic about its potential.
“People who are naysayers and try to drum up these doomsday scenarios . . . I just, I don’t understand it,” he said. “It’s negative, and I think in some ways it’s pretty irresponsible.”
When a Twitter user later asked about Zuckerberg’s comments, Musk’s reply was terse.
“I’ve talked to Mark about this,” he wrote Tuesday on Twitter. “His understanding of the subject is limited.”
AI will be one of the “paradigm-shifting developments of the next century,” Rep. John Delaney, D-Md., founder of the Artificial Intelligence Caucus, wrote earlier this month in an op-ed.
Government officials need to examine the technology’s long-term impact, particularly its effect on the U.S. industrial workforce, he said.
Musk’s concerns are not unique, noted Charles King, principal analyst at Pund-IT.
Other major figures, ranging from Google’s Peter Norvig to Stephen Hawking and Bill Gates, in 2015 signed a letter urging more study of AI’s potential impact on society.
“You could argue that the Zuck/Musk dustup is generational in nature and an expected part of the jostling between the younger and older guard,” King told TechNewsWorld.
However, Musk’s concerns should be greeted with a bit more gravity than Zuckerberg’s “just trust me on this” posture conveys, he said.
Musk, for his part, has drawn criticism as hypocritical, given that Tesla, SpaceX and his other firms depend heavily on artificial intelligence and machine learning for their success.
His recent remarks amount to fear mongering — and the claims he made serve his own interests, suggested Jim McGregor, principal analyst at Tirias Research.
“Musk promotes autonomous vehicles but trounces on AI,” he pointed out. “Autonomous vehicles are not possible without AI.”
The activities at Musk’s Neuralink, a new company that is working on a way to enhance human performance through computer implants, should raise concerns about the impact of AI on humankind, McGregor said.
“As we have indicated before, by 2025, every new platform you come in contact with will leverage local, network or cloud-based AI in some form,” he told TechNewsWorld. “This may be something as simple as the user interface or search engine, or as advanced as digital assistants and autonomous control.”
The biggest concern about AI should be its ability to replace human labor faster than the existing labor force can be retrained to do something new, McGregor suggested.
It’s easy for someone to call for greater regulation of AI, but much harder to put that talk into action, observed Paul Teich, also a principal analyst at Tirias.
“The fundamental question for proponents of AI regulation is, what are the behaviors that need regulating? How do we specify the regulations, and then how do we measure conformance to those regulations?” he asked.
“AI is becoming an emotional issue for many in the Silicon Valley leadership,” said Michael Jude, a research manager at Stratecast/Frost & Sullivan.
“The fact is, AI is still a long way from becoming truly intelligent,” he told TechNewsWorld. “In actual fact, AI is very much less capable, in most applications, than a well-trained dog.”
Unlike truly intelligent human beings, AI does not possess consciousness or self-awareness, and researchers do not yet know how to program expressions of those traits into the technology.
The concerns Musk raised involve what is known as “artificial super intelligence,” Jude said, which would be akin to the capabilities of SkyNet in the Terminator film franchise.
Zuckerberg appears to be thinking more in terms of artificial narrow intelligence (ANI) — the application-specific AI in use today — or artificial general intelligence (AGI), he said.
In the context of Facebook, AGI would have a fair amount of utility, said Jude — for example, in making connections, interpreting usage data and targeting advertising.
“Musk is worried about what happens if a computer wakes up, decides its best interests aren’t humanity’s best interests, and takes over the world,” Jude said. “I think that is unlikely . . . at least for the foreseeable future.”
David Jones is a freelance writer based in Essex County, New Jersey. He has written for Reuters, Bloomberg, Crain’s New York Business and The New York Times.