Last week, renowned AI pioneer Geoffrey Hinton expressed concerns about the future of artificial intelligence (AI) and its potential implications for humanity. Hinton, who has been instrumental in the development of AI technologies, recently left Google after a decade of work there on machine learning. His departure comes as unease mounts over the rapid advance of AI and its potential to lead humanity towards a machine-driven dystopia. Hinton’s warnings echo those of over 27,000 individuals, including many technology experts, who have called for a six-month pause in AI development.

The tech industry is currently experiencing a surge of excitement over generative AI, driven by chatbots, large language models (LLMs), and other technologies built on machine learning. Hinton’s departure from Google and his subsequent expression of concern highlight the growing unease about the trajectory of AI development and the power it wields in the hands of a few large corporations.

Geoffrey Hinton has a storied lineage of intellectual ancestors, including his great-great-grandfather George Boole, whose Boolean logic underpins all digital computing, and his cousin Joan Hinton, a nuclear physicist who worked on the Manhattan Project. Hinton’s own work in AI focuses on building machines that can learn, and he has been a driving force behind the development of neural networks.

In 1986, Hinton and two colleagues published a groundbreaking paper on neural networks showing how the back-propagation of errors lets a multi-layer network steadily improve as it learns from data. That line of work, which Hinton later helped popularise under the name “deep learning,” has since become the cornerstone of AI research.
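The core idea can be illustrated with a minimal sketch, written here in Python with NumPy rather than the paper’s original notation: a tiny network learns the XOR function, which no single-layer network can represent, by repeatedly propagating its output errors backwards and nudging its weights. The layer sizes, learning rate, and number of steps are illustrative choices, not anything from the paper itself.

```python
# A minimal sketch (not Hinton's original code) of the idea behind the 1986 paper:
# a small two-layer network that improves by back-propagating its errors.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, which a single-layer network cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 4-unit hidden layer and a 1-unit output.
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # output predictions

    # Backward pass: propagate the error back through each layer.
    err_out = (out - y) * out * (1 - out)        # error signal at the output
    err_hid = (err_out @ W2.T) * h * (1 - h)     # error signal at the hidden layer

    # Gradient-descent update: each pass nudges the weights to reduce the error.
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]] as training converges
```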

During his time at Google, Hinton led and inspired a team of researchers working on machine learning within the company’s Google Brain group. Although he expressed some concerns about the potential dangers of AI while at Google, his recent departure has allowed him to speak more openly about his fears.

In an interview, Hinton explained that new AI systems can learn faster and more efficiently than humans, and that what one machine learns can be transferred instantly to others.

“We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
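The shared-weights point in the quote can be illustrated with a minimal, hypothetical sketch: several copies of one model each compute what they have learned from their own data, and a single averaged update to the shared weights means every copy benefits at once. The toy linear model, the `local_gradient` helper, and the target `true_w` are illustrative assumptions, not any real system’s code; production systems do this at vastly larger scale with specialised infrastructure.

```python
# A minimal sketch of "many copies of the same set of weights": each copy learns
# from its own data, and merging their gradients updates all copies at once.
import numpy as np

rng = np.random.default_rng(1)

# One shared set of weights for a simple linear model: y = x @ w.
w_shared = np.zeros(3)
true_w = np.array([2.0, -1.0, 0.5])   # hypothetical target the copies try to learn

def local_gradient(w, n_examples=64):
    """Gradient of squared error on one copy's own batch of data."""
    X = rng.normal(size=(n_examples, 3))
    y = X @ true_w
    pred = X @ w
    return X.T @ (pred - y) / n_examples

lr = 0.1
for step in range(200):
    # In Hinton's example there would be 10,000 copies; here we simulate 8.
    grads = [local_gradient(w_shared) for _ in range(8)]

    # Averaging the gradients and applying one update to the shared weights
    # means every copy instantly "knows" what each of the others learned.
    w_shared -= lr * np.mean(grads, axis=0)

print(np.round(w_shared, 3))  # converges towards [2.0, -1.0, 0.5]
```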

He also expressed concern that such powerful technology is controlled by a small number of large corporations.

The rapid growth of AI has led many to draw parallels with the development of nuclear technology, which also carried significant risks for humanity. Like nuclear technology, AI could have far-reaching and transformative consequences, with the capacity to both benefit and harm society.

As the debate around AI’s potential dangers continues, it remains to be seen what steps will be taken to mitigate these risks. However, the voices of those like Geoffrey Hinton, who have been instrumental in shaping the technology, should not be ignored. Their concerns highlight the need for responsible development and thoughtful consideration of the impact of AI on society.