April 19, 2024
Teaching Machines to Teach: Google's AI Mastery Leads to 40% Boost in Coding Skills
AI

In a significant breakthrough, artificial intelligence (AI) researchers at Google Research and Google DeepMind have unveiled a method for augmenting large language models (LLMs) with other language models, addressing a long-standing challenge in the field. The approach lets developers add new capabilities to existing models without building them from scratch or undergoing costly full retraining.

The Google Research team reported that augmenting an LLM with another language model not only improves performance on existing tasks but also enables the combined system to tackle tasks that were previously beyond either model's reach. The research was conducted using Google's PaLM2-S LLM, which the team describes as comparable to GPT-4, the AI behind OpenAI's ChatGPT.
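The article does not spell out how the two models are actually wired together. Purely as an illustration of the general idea, the sketch below shows one plausible composition pattern: a frozen base ("anchor") model attends to the hidden states of a smaller, frozen specialized model through a small trainable bridge, so only the bridge's parameters are learned. The class names, dimensions, and cross-attention mechanism here are assumptions for demonstration, not Google's published method.

```python
# Hedged sketch: composing a frozen anchor LLM with a smaller frozen
# augmenting model via a small trainable cross-attention "bridge".
# Everything below (names, sizes, mechanism) is illustrative only.
import torch
import torch.nn as nn

class CompositionBridge(nn.Module):
    """Trainable layer that lets the anchor model's hidden states attend to
    hidden states produced by the smaller augmenting model."""
    def __init__(self, anchor_dim: int, aug_dim: int, n_heads: int = 8):
        super().__init__()
        # Project augmenting-model states into the anchor model's hidden space.
        self.proj = nn.Linear(aug_dim, anchor_dim)
        self.cross_attn = nn.MultiheadAttention(anchor_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(anchor_dim)

    def forward(self, anchor_h: torch.Tensor, aug_h: torch.Tensor) -> torch.Tensor:
        kv = self.proj(aug_h)
        # Anchor states are the queries; augmenting states supply keys/values.
        attended, _ = self.cross_attn(anchor_h, kv, kv)
        # Residual update so the anchor's original representation is preserved.
        return self.norm(anchor_h + attended)

# Random tensors standing in for hidden states of the two frozen models.
anchor_hidden = torch.randn(2, 16, 1024)   # (batch, seq, dim) from the frozen base LLM
aug_hidden    = torch.randn(2, 16, 512)    # from the smaller specialized model

bridge = CompositionBridge(anchor_dim=1024, aug_dim=512)
combined = bridge(anchor_hidden, aug_hidden)
print(combined.shape)  # torch.Size([2, 16, 1024]); only the bridge would be trained
```

In this kind of setup, neither underlying model is modified, which is consistent with the article's claim that new capabilities can be added without retraining the base model.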

In experiments, PaLM2-S was benchmarked both in its standalone form and after augmentation with smaller, specialized language models. The hybrid model showed notable gains, particularly in translation, where it improved by up to 13% over the baseline. In coding, the augmented model achieved a 40% relative improvement over the base model on code generation and code explanation tasks, rivalling fully fine-tuned counterparts.

While the immediate implications of this research are substantial for the AI sector, particularly for challenges such as language translation, the broader impact could extend to legal concerns surrounding AI models. Developers of large language models, including the makers of popular systems like ChatGPT, have faced legal challenges over allegations that their AI systems are trained on copyrighted data.

The looming question is whether for-profit companies can legally use such data to train language models. If courts were to rule against the use of copyrighted material, the AI sector might face significant hurdles, potentially threatening the viability of services like ChatGPT given the high costs and data dependencies of training large language models.

However, Google’s approach to LLM augmentation could mitigate the scaling requirements and costs of developing or retraining models. If developed further, the method might offer a way around the challenges posed by the evolving legal landscape, providing a more sustainable and cost-effective path for the future of large language models.

Image: trustedreviews.com
