Google DeepMind – 800 Years of Human Experimentation in One Discovery

In a remarkable feat of technology and science, Google DeepMind’s AI system, GNoME, has discovered over 2.2 million new crystal materials, including 380,000 that are considered stable and potentially useful for future technologies.

This discovery represents an advancement equivalent to nearly 800 years’ worth of knowledge in materials science. With such a massive scientific impact, Google DeepMind’s paper was, unsurprisingly, accepted by Nature, one of the world’s most prestigious research journals.

GNoME expands the number of crystals identified as computationally stable from 20k to 421k! (source)

The traditional approach to discovering new materials has been a slow, expensive, and often inefficient process, typically involving making incremental changes to known crystals or experimenting with new element combinations.

GNoME revolutionizes this process by employing two distinct deep-learning models.

  • The first model generates structures by tweaking elements in existing materials, while
  • The second model makes predictions based solely on chemical formulas, thereby broadening the scope of potential discoveries.
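To make the first pipeline concrete, here is a minimal sketch, in plain Python, of generating candidate compositions by swapping chemically similar elements in known materials. The similarity table and function names are illustrative assumptions, not GNoME’s actual code.

```python
# Hypothetical similarity table: which elements may substitute for which.
SIMILAR = {
    "Li": ["Na", "K"],
    "O":  ["S", "Se"],
    "Ti": ["Zr", "Hf"],
}

def propose_candidates(composition: dict[str, int]) -> list[dict[str, int]]:
    """Generate new candidate compositions by single-element substitution."""
    candidates = []
    for element, count in composition.items():
        for substitute in SIMILAR.get(element, []):
            new_comp = dict(composition)
            del new_comp[element]
            new_comp[substitute] = new_comp.get(substitute, 0) + count
            candidates.append(new_comp)
    return candidates

# Example: perturbing Li2TiO3 yields Na2TiO3, Li2ZrO3, Li2TiS3, ...
print(propose_candidates({"Li": 2, "Ti": 1, "O": 3}))
```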

GNoME’s efficiency is notable: initially, only about 5% of its stability predictions turned out to be correct, but iterative learning pushed that figure above 80% for one of the models. The use of AI in this domain isn’t new; projects like the Materials Project have previously used similar techniques to discover materials and improve their stability.
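The “iterative learning” step is essentially an active-learning loop: the model screens a huge candidate pool cheaply, the most promising candidates are verified with expensive DFT calculations, and the verified labels flow back into training. Here is a hedged sketch of that loop; `model` and `dft_oracle` are placeholders, not GNoME’s actual API.

```python
def active_learning(model, pool, dft_oracle, rounds=6, batch=100):
    """Screen cheaply, verify expensively, retrain on what was verified."""
    train_set = []
    for _ in range(rounds):
        # 1. Screen the pool with the cheap learned model (best first).
        pool.sort(key=model.predict)
        picks, pool = pool[:batch], pool[batch:]
        # 2. Verify only the top picks with the expensive oracle (DFT).
        train_set += [(c, dft_oracle(c)) for c in picks]
        # 3. Retrain; the hit rate rises round by round.
        model.fit(train_set)
    return model
```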

However, GNoME’s scale and precision are unparalleled, making it a standout in the field.

One of the most exciting aspects of this development is the potential applications of these new materials.

For instance, GNoME identified 52,000 new layered compounds similar to graphene, which could revolutionize electronics with the development of superconductors. It also discovered 528 potential lithium-ion conductors (superionic materials), which could lead to more efficient batteries, a crucial component in energy storage and electric vehicles.

πŸ’‘ A superionic conductor is a type of material that conducts electric current primarily through the movement of ions, rather than electrons. These conductors exhibit high ionic conductivity, which means ions can move through them easily, especially at elevated temperatures. This unique property makes superionic conductors particularly useful in applications like solid-state batteries and fuel cells, where efficient ion transport is crucial for their performance.
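For intuition on the “elevated temperatures” point: ionic conductivity in solids typically follows an Arrhenius law, so a small activation energy translates into dramatically better conduction as temperature rises. The toy calculation below uses made-up parameter values purely for illustration.

```python
import math

KB = 8.617e-5  # Boltzmann constant in eV/K

def ionic_conductivity(T, A=1.0e5, Ea=0.3):
    """Arrhenius law: sigma(T) = (A / T) * exp(-Ea / (kB * T)).

    T in kelvin, Ea in eV; A and Ea here are example values, not
    measurements for any specific material. Result in S/cm.
    """
    return (A / T) * math.exp(-Ea / (KB * T))

for T in (300, 400, 500):
    print(f"{T} K: {ionic_conductivity(T):.3e} S/cm")
```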

In parallel, Berkeley Lab’s new autonomous laboratory, the A-Lab, has been instrumental in synthesizing these materials.

The A-Lab integrates robotics with machine learning to optimize the development of new materials, managing to synthesize 41 out of 58 proposed compounds over 17 days. This level of efficiency far surpasses that of traditional, human-led labs.
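Conceptually, an autonomous lab like the A-Lab runs a propose–synthesize–characterize–update loop. The sketch below is a heavily simplified illustration with placeholder interfaces, not the A-Lab’s real software.

```python
def autonomous_lab(targets, propose_recipe, run_robot, matches_target, update):
    """Closed-loop synthesis: each failed attempt improves the next recipe."""
    made = []
    for target in targets:                # e.g. 58 GNoME-predicted compounds
        for attempt in range(5):          # a few tries per target
            recipe = propose_recipe(target)
            product = run_robot(recipe)   # robotic mixing, heating, analysis
            update(target, recipe, product)  # learn from the outcome
            if matches_target(product, target):
                made.append(target)
                break
    return made                           # A-Lab: 41 of 58 in 17 days
```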

This breakthrough in materials science is not just a testament to the power of AI in accelerating discovery but also holds promise for significant advancements in various sectors, including energy, computing, and electronics.

The synthesis of these materials could herald a new era of technological innovation, particularly in areas critical to addressing global challenges like the climate crisis.

Let’s end this article with an insightful conclusion provided by the DeepMind researchers:

“The application of machine-learning methods for scientific discovery has traditionally suffered from the fundamental challenge that learning algorithms work under the assumption of identically distributed data at train and test times, but discovery is inherently an out-of-distribution effort. Our results on large-scale learning provide a potential step to move past this dilemma, by demonstrating that GNoME models exhibit emergent out-of-distribution capabilities at scale.” — 🌳 Nature 2023

They also showed that these improvements appear to be consistent with AI scaling laws:

“The test loss performance of GNoME models exhibit improvement as a power law with further data. These results are in line with neural scaling laws in deep learning and suggest that further discovery efforts could continue to improve generalization.” — 🌳 Nature 2023
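In practice, a power law L(D) = a · D^(−α) shows up as a straight line on a log-log plot, so the exponent can be recovered with a simple linear fit. A toy example on synthetic data (the data points below are fabricated for illustration, not GNoME’s results):

```python
import numpy as np

D = np.array([1e4, 3e4, 1e5, 3e5, 1e6])  # training-set sizes
# Synthetic losses following L(D) = 2 * D**-0.25, plus a little noise.
L = 2.0 * D ** -0.25 * (1 + np.random.default_rng(0).normal(0, 0.02, 5))

# log L = log a - alpha * log D, so a straight-line fit gives the exponent.
slope, intercept = np.polyfit(np.log(D), np.log(L), 1)
print(f"fitted exponent alpha ≈ {-slope:.3f}")  # ~0.25 for this toy data
```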

So, this is only the beginning of a series of expected scientific breakthroughs that will lift humanity to new heights.

πŸ’‘ Recommended: AI Scaling Laws – A Short Primer