Is Google a Step Away from Developing a Computer that Can "Program Itself"?


New Scientist reports that Google may soon have a computer with "human-like learning" abilities that will "program itself":

Your smartphone is amazing, but ask it to do something it doesn’t have an app for and it just sits there. Without programmers to write apps, computers are useless.

That could soon change. DeepMind Technologies, a London-based artificial-intelligence firm acquired by Google this year, has revealed that it is designing computers that combine the way ordinary computers work with the way the human brain works. They call this hybrid device a Neural Turing Machine. The hope is it won’t need programmers, and will instead program itself.

As is typical these days, media reports on AI — and particularly AI and Silicon Valley companies like Google — offer a bedazzling welter of prophetic fiction and misinformation.

"Human-level learning" and "self-programming" (more generally: self-replication) are central memes in the latest Sci-Fi fad hyping smart machines becoming smarter and smarter, imminently overtaking mere humans. But, predictably, the scientific merit of the purported "breakthroughs" is paltry at best. Notwithstanding the fad and the hype, there’s, well, no news here. A BetaBeat story on the topic, "Google’s New Computer With Human-Like Learning Abilities Will Program Itself," is essentially a big, emotive headline with no real content.

As a hype-busting, fad-avoidance exercise, we might offer the following as genuine takeaways:

  • Google is experimenting with Artificial Neural Networks (ANNs) for performing supervised or semi-supervised learning tasks, including those that human programmers undertake when manipulating data or writing code.

  • Some variation on the decades-old ANNs showed a slight performance bump on a small, well-defined, carefully chosen, and essentially uninteresting problem involving data copy and manipulation (a sketch of that kind of copy task appears below).

  • Work on learning methods continues, as anyone even moderately active in computer science knows it will, since it’s the current dominant paradigm (and has been since the late 1990s).

  • Reading between the lines a bit: Nothing much is advancing with supervised learning methods. BetaBeat is reporting on stories that would barely qualify as exciting in university computer labs, where similar research on ANNs and other supervised learning methods continues daily, but without the gee-whiz expectations that go along with the mention of Google and its acquisitions.

  • Readers eager to believe that smart, human-like machines are imminent will chatter about and forward and post the story anyway, oblivious to its actual newsworthiness or lack thereof. Hype about AI will go on, unabated.

At least for now.
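For readers curious what the "data copy and manipulation" problem above actually looks like, here is a minimal sketch in Python. It is not DeepMind's code; the function name, sequence length, and vector width are illustrative assumptions, chosen only to show how a sequence-copy task is posed as ordinary supervised learning: the network is shown some random bit-vectors followed by a delimiter, and the training target is simply those same bit-vectors echoed back.

```python
# A minimal sketch (not DeepMind's code) of a toy sequence-copy task:
# the model sees a short sequence of random bit-vectors, then a delimiter,
# and must reproduce the sequence. Names and sizes are illustrative only.
import numpy as np

def make_copy_example(seq_len=5, width=8, rng=np.random.default_rng(0)):
    """Build one (input, target) pair for a toy copy task."""
    seq = rng.integers(0, 2, size=(seq_len, width)).astype(float)

    # Input: the sequence, a delimiter row of ones, then blank rows while
    # the model is expected to write out its answer.
    delimiter = np.ones((1, width))
    blanks = np.zeros((seq_len, width))
    x = np.vstack([seq, delimiter, blanks])

    # Target: silence during presentation, then the original sequence.
    y = np.vstack([np.zeros((seq_len + 1, width)), seq])
    return x, y

x, y = make_copy_example()
print(x.shape, y.shape)  # (11, 8) (11, 8)
```

Posed this way, the task says nothing about self-programming; it is an ordinary input-target dataset of the kind supervised learners have been trained on for decades.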

Image source: A Health Blog/Flickr.

Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and artificial intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.


Tags

Computational Sciences, Continuing Series, Mind and Technology, News, technology