Google’s AI research lab, DeepMind, has developed a new AI training method called JEST (joint example selection), which it claims is up to 13 times faster and 10 times more energy-efficient than existing techniques. Unlike traditional methods that score individual data points, JEST selects entire batches of the most learnable data. The technique uses a small pretrained reference model to grade the quality of candidate data, and those grades then guide the training of a much larger model.
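To make the core idea concrete, here is a minimal sketch of learnability-based data selection in Python. It assumes you already have per-example losses from both the learner and the reference model; the names (`learnability_scores`, `select_super_batch`, `keep_ratio`) are illustrative, and the actual JEST method scores batches jointly under a contrastive objective via an iterative sampling procedure rather than ranking examples independently as done here.

```python
import numpy as np

def learnability_scores(learner_losses, reference_losses):
    """Per-example 'learnability': high when the learner still finds an
    example hard but the well-trained reference model finds it easy."""
    return learner_losses - reference_losses

def select_super_batch(learner_losses, reference_losses, keep_ratio=0.1):
    """Score a large candidate 'super-batch' and keep only the
    top-scoring fraction for the actual gradient step."""
    scores = learnability_scores(learner_losses, reference_losses)
    k = max(1, int(len(scores) * keep_ratio))
    return np.argsort(scores)[-k:]  # indices of the most learnable examples

# Hypothetical usage: losses for a 1,024-example candidate batch.
rng = np.random.default_rng(0)
learner = rng.uniform(0.5, 3.0, size=1024)
reference = rng.uniform(0.1, 2.0, size=1024)
chosen = select_super_batch(learner, reference, keep_ratio=0.1)
print(f"training on {len(chosen)} of 1024 candidates")
```

The intuition behind the score: examples that are easy for the reference but still hard for the learner are exactly the ones the larger model has the most to gain from, while examples that are hard for both are likely noise.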
The effectiveness of the JEST method hinges on the quality of its training data. The DeepMind researchers emphasize in their paper that the ability to steer the data-selection process toward the distribution of smaller, well-curated datasets is essential to JEST’s success. Without a high-quality, human-curated dataset to bootstrap from, the method breaks down, which puts it largely out of reach for amateur AI developers.
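A toy illustration of why the curated set matters: below, a simple Gaussian density fit to a small clean set stands in for the reference model and pulls selection toward curated-looking data. This is only an analogy under assumed synthetic data, not the paper’s actual mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: "clean" examples cluster near +1, "junk" near -1.
curated = rng.normal(loc=1.0, scale=0.3, size=(64, 2))    # small, clean set
web_pool = np.concatenate([
    rng.normal(loc=1.0, scale=0.3, size=(500, 2)),        # clean-ish
    rng.normal(loc=-1.0, scale=0.3, size=(500, 2)),       # junk
])

# Stand-in "reference model": a diagonal Gaussian fit to the curated set.
mu, sigma = curated.mean(axis=0), curated.std(axis=0) + 1e-6

def reference_log_likelihood(x):
    # Log-density of each example under the curated distribution.
    return -0.5 * (((x - mu) / sigma) ** 2).sum(axis=1)

scores = reference_log_likelihood(web_pool)
keep = np.argsort(scores)[-100:]      # keep the 100 most "curated-like"
frac_clean = (keep < 500).mean()      # first 500 pool items are clean
print(f"{frac_clean:.0%} of selected examples match the curated distribution")
```

If the curated set itself were noisy, the fitted density (and, by analogy, the reference model’s scores) would steer selection toward the wrong data, which is the bootstrapping limitation the researchers describe.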
The development of JEST comes as debate over the high power demands of artificial intelligence intensifies in the tech industry and among world governments. AI workloads drew an estimated 4.3 GW of power in 2023, nearly matching the annual power consumption of the nation of Cyprus. Notably, a single ChatGPT request consumes roughly 10 times the power of a Google search, underscoring the urgency of more energy-efficient approaches like JEST.