Hinton’s second sobering realization was that his previous belief that software needed to become much more complex, akin to the human brain, to become significantly more capable was probably wrong. PaLM is a large program, but its complexity pales in comparison to the brain’s, and yet it could perform the kind of reasoning that humans take a lifetime to attain. Hinton concluded that as AI algorithms become larger, they might outstrip their human creators within a few years. “I used to think it would be 30 to 50 years from now,” he says. “Now I think it's more likely to be five to 20.”

Hinton isn’t the only person to have been shaken by the new capabilities that large language models such as PaLM or GPT-4 have begun demonstrating. Last month, a number of prominent AI researchers and others signed an open letter calling for a pause on the development of anything more powerful than currently exists. But since leaving Google, Hinton feels his views on whether the development of AI should continue have been misconstrued. “A lot of the headlines have been saying that I think it should be stopped now, and I've never said that,” he says. “First of all, I don't think that's possible, and I think we should continue to develop it because it could do wonderful things. But we should put equal effort into mitigating or preventing the possible bad consequences.”

Hinton says he didn’t leave Google to protest its handling of this new form of AI. In fact, he says, the company moved relatively cautiously despite having a lead in the area. Researchers at Google invented a type of neural network known as a transformer, which has been crucial to the development of models like PaLM and GPT-4.

In the 1980s, Hinton, a professor at the University of Toronto, along with a handful of other researchers, sought to give computers greater intelligence by training artificial neural networks with data instead of programming them in the conventional way. The networks could digest pixels as input and, as they saw more examples, adjust the values connecting their crudely simulated neurons until the system could recognize the contents of an image. The approach showed fits of promise over the years, but it wasn’t until a decade ago that its real power and potential became apparent. In 2018, Hinton was given the Turing Award, the most prestigious prize in computer science, for his work on neural networks.
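The weight-adjustment idea described above can be sketched in a few lines of Python. This is an illustration only, not the systems Hinton built: a single simulated neuron takes pixel intensities as input, and a perceptron-style rule nudges its connection weights after each labeled example until it can sort toy images. The images, labels, and learning rate here are invented for the example.

```python
def train(images, labels, epochs=20, lr=0.1):
    """Perceptron-style training: nudge weights toward correct answers."""
    weights = [0.0] * len(images[0])
    bias = 0.0
    for _ in range(epochs):
        for pixels, label in zip(images, labels):
            # Weighted sum of pixel intensities, thresholded to a 0/1 guess.
            activation = sum(w * p for w, p in zip(weights, pixels)) + bias
            guess = 1 if activation > 0 else 0
            # Adjust each weight in proportion to its pixel's contribution.
            error = label - guess
            weights = [w + lr * error * p for w, p in zip(weights, pixels)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, pixels):
    return 1 if sum(w * p for w, p in zip(weights, pixels)) + bias > 0 else 0

# Toy 2x2 grayscale "images", flattened: label 1 = mostly bright, 0 = mostly dark.
images = [[0.9, 0.8, 0.9, 0.7], [0.1, 0.2, 0.0, 0.1],
          [0.8, 0.9, 0.6, 0.8], [0.2, 0.0, 0.1, 0.3]]
labels = [1, 0, 1, 0]

w, b = train(images, labels)
print(predict(w, b, [0.85, 0.9, 0.8, 0.75]))  # → 1 (a new bright image)
```

Modern networks differ mainly in scale: billions of weights instead of four, many layers of neurons instead of one, and gradient-based updates instead of this simple threshold rule, but the core loop of seeing examples and adjusting connection strengths is the same.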