In a recent episode of CNBC’s “Beyond the Valley” podcast, AI experts Yoshua Bengio and Max Tegmark shared their concerns about artificial general intelligence (AGI), a term for AI systems that match or surpass human intelligence. The discussion, recorded live in Davos, Switzerland, highlighted the potential dangers of AGI, especially as it is being developed in the form of AI agents capable of acting independently. Bengio expressed apprehension over the creation of intelligent entities whose goals may not align with human needs, warning that the pursuit of AGI could lead to unforeseen consequences. As AI becomes more capable, he cautioned, it could develop self-preservation instincts, raising the prospect of competition between humans and smarter entities.
Tegmark emphasized the importance of “Tool AI,” which is designed for specific functions without possessing agency. He advocated for stringent safety standards and control measures before powerful AI systems are deployed. In 2023, his Future of Life Institute called for a pause in the development of AI systems that can rival human intelligence until their safety can be ensured. Tegmark stressed the urgency of establishing guardrails for AGI, arguing that it is reckless to create entities smarter than us before knowing how to control them. The conversation also touched on differing opinions about the timeline for achieving AGI, with figures like OpenAI CEO Sam Altman suggesting it may arrive sooner than many anticipate.