Elevate Magazine
August 22, 2024

This AI Scientist Runs Its Own Experiments


Researchers at the University of British Columbia (UBC), in collaboration with the University of Oxford and Sakana AI, have created an “AI scientist” capable of inventing and running its own experiments. The project aims to tap the potential of artificial intelligence systems that learn through open-ended experimentation, which could lead to remarkable new capabilities and insights.

The AI scientist is designed to generate ideas for machine learning experiments, evaluate their potential using large language models (LLMs), and then write and execute the code needed to test them. While the initial results may not be groundbreaking, they demonstrate the AI scientist’s ability to come up with novel ideas and explore them iteratively.
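In rough outline, that loop could look something like the sketch below. This is not the team’s published implementation: the `llm` helper stands in for a real model API, the idea-scoring step is stubbed with a fixed value, and all function names are illustrative.

```python
import os
import subprocess
import sys
import tempfile

def llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns canned text so the
    sketch runs on its own. A real system would query a model here."""
    if prompt.startswith("Write Python"):
        return "print('toy experiment: baseline accuracy 0.50')"
    return "compare two optimisers on a toy dataset"

def propose_idea(history: list[str]) -> str:
    # Ask the model for a new experiment, conditioned on past attempts.
    return llm("Propose a novel ML experiment. Past ideas: " + "; ".join(history))

def score_idea(idea: str) -> float:
    # Use the LLM as a judge of novelty and feasibility (stubbed score).
    llm(f"Rate this idea from 0 to 1: {idea}")
    return 0.8

def run_experiment(code: str) -> str:
    # Write the generated code to a temporary file, execute it in a
    # subprocess, and capture the output for the next iteration.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=60)
        return (result.stdout + result.stderr).strip()
    finally:
        os.unlink(path)

history: list[str] = []
for round_no in range(3):  # iterate: idea -> score -> code -> result
    idea = propose_idea(history)
    if score_idea(idea) < 0.5:
        continue  # discard ideas the LLM judge rates poorly
    outcome = run_experiment(llm(f"Write Python code to test: {idea}"))
    history.append(f"{idea} -> {outcome}")
    print(f"round {round_no}: {outcome}")
```

The key design point is the feedback loop: each round’s results are fed back into the next round’s prompt, which is what makes the exploration iterative rather than one-shot.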

Jeff Clune, the professor leading the UBC lab, emphasises that as computing power increases, open-ended learning programs, like language models before them, could become increasingly capable. “It feels like exploring a new continent or a new planet. We don’t know what we’re going to discover, but everywhere we turn, there’s something new,” he said.

However, the reliability and trustworthiness of the AI scientist’s output are currently limited, as Tom Hope, an assistant professor at the Hebrew University of Jerusalem and a research scientist at the Allen Institute for AI, points out. Hope said that while the direction is valuable, “none of the components are trustworthy right now.”

The potential of open-ended learning extends beyond scientific breakthroughs. It may be crucial in developing more capable and useful AI systems in the near future. The investment firm Air Street Capital has highlighted the potential of Clune’s work to produce more powerful and reliable AI agents, which major AI companies see as the next big thing.

Meanwhile, the UBC lab has recently launched a new project in which an AI program invents and builds AI agents. These AI-designed agents have outperformed human-designed agents on tasks such as maths and reading comprehension. However, Clune acknowledges the potential dangers of such a system and the need to prevent it from generating misbehaving agents.

“It’s potentially dangerous. We need to get it right, but I think it’s possible,” he said.
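Conceptually, a system in which one model searches for better agents might resemble the following sketch, assuming an archive-based loop where a language model writes candidate agents and a benchmark scores them. This is an illustration, not the lab’s actual method: both the model call and the evaluation are stubbed, and every name here is hypothetical.

```python
import random

def llm_write_agent(archive: list[dict]) -> str:
    """Stand-in for an LLM call that writes new agent code, conditioned
    on the best agents found so far (returns a fixed stub here)."""
    return "def agent(question): return question.upper()"

def evaluate(agent_code: str) -> float:
    # Run the candidate on a benchmark (e.g. maths or reading
    # comprehension questions) and return a score; stubbed as random.
    return random.random()

archive: list[dict] = []  # the best agents discovered so far
for generation in range(5):
    code = llm_write_agent(archive)
    archive.append({"code": code, "score": evaluate(code)})
    archive.sort(key=lambda a: a["score"], reverse=True)
    archive = archive[:3]  # keep only the top performers as context

print("best agent score:", round(archive[0]["score"], 3))
```

Keeping an archive of high-scoring agents, rather than only the single best one, is what lets such a search build on several promising designs at once; the safety concern Clune raises is that the loop, by construction, executes code the system wrote for itself.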

While challenges remain around reliability and safety, if those concerns are properly addressed, the direction could hold immense promise for the future of scientific discovery and AI development.