May 29, 2024
Qualcomm's Snapdragon 8 Gen 3: Unveiling the Future of AI-Powered Mobile Technology

Qualcomm, a giant in premium smartphone processors, is about to unveil the third generation of its Snapdragon 8 chip. The announcement is especially noteworthy because, according to AnTuTu benchmarks, its current flagship, the Snapdragon 8 Gen 2, is the second-most powerful smartphone chip ever, trailing only the Dimensity 9200+ from Taiwanese manufacturer MediaTek.

Information about the new chip leaked online on Monday, ahead of its official announcement on Tuesday. According to MSPoweruser, the new processor offers not only improved speed but also a considerable leap in on-device AI capabilities.

In addition to the anticipated performance improvements, including a 25% faster GPU and a 30% faster CPU, the new system-on-a-chip (SoC) contains a robust AI engine that supports high-accuracy local AI models and can run local large language models (LLMs) of over 10 billion parameters. The chip can even run the text-to-image model Stable Diffusion together with the ControlNet neural network.
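To see why a 10-billion-parameter model on a phone is notable, it helps to do the back-of-envelope memory arithmetic. The sketch below is illustrative only (the function name and figures are our own, not Qualcomm's): a model's weight footprint is roughly its parameter count times the bits stored per weight, which is why on-device inference typically relies on aggressive quantization.

```python
def model_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate memory needed just to hold a model's weights.

    num_params: parameter count (e.g. 10e9 for a 10B-parameter model)
    bits_per_weight: storage precision (16 = FP16, 8 = INT8, 4 = 4-bit)
    """
    bytes_total = num_params * bits_per_weight / 8  # bits -> bytes
    return bytes_total / 1024**3  # bytes -> GiB

# A 10B-parameter model, the scale the Snapdragon 8 Gen 3 is said to handle:
for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{model_memory_gb(10e9, bits):.1f} GiB")
```

At full FP16 precision the weights alone need roughly 18.6 GiB, well beyond a phone's RAM, while 4-bit quantization brings that down to about 4.7 GiB, which starts to fit on a flagship handset. This estimate ignores activation memory and the runtime's own overhead.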

Mobile device manufacturers are making a strong turn toward AI. Google, for instance, has updated its Tensor processors so that current Pixel phones can handle a plethora of AI features locally, without the need for cloud processing. Apple has similarly boosted the AI capabilities of its A17 Pro SoC.

Consumer electronics with AI-capable processors mark the beginning of a revolutionary new age in technology. These cutting-edge chips, built to run complex AI models locally, offer a glimpse of a future rife with opportunities, from customized chatbot assistants and incredibly lifelike games to real-time health monitoring and flexible user interfaces.

These chips are positioned as the foundation of a future in which our gadgets aren't simply smart but instinctively adapt to our wants and tastes, offering faster processing, better customization, and stronger privacy.

Kent Kersei, CEO of Invoke AI, a company at the forefront of AI image generation using Stable Diffusion, said there are several benefits to running AI locally rather than in the cloud.

“Cloud-based compute is a great option for larger enterprises, however, for individuals looking to create content, on-device AI offers a more affordable, personalized, and private alternative,” he stated.

He also stressed how Apple's strategy of shipping larger pools of unified memory could be leveraged for on-device AI, speculating about a day when a powerful local LLM assistant replaces, for instance, a cloud-based Siri.

Despite the new processor's potency, Kersei said Invoke has no plans to bring its software to Android or iOS devices.

“Our commercial solution is typically used for more intricate workflow processes,” he stated. “While we don’t have immediate plans for a local, OSS Android app, we’re keenly observing the trajectory of usage patterns for creatives and the emergence of cutting-edge technology that supports mobile deployment.”

The Draw Things app for iOS is currently the preferred choice for local image generation; Android users have no equivalent option.

Kersei sounded upbeat about the impact of Qualcomm's new technology on the broader tech ecosystem.

As the technology advances, he predicted, app developers will build on-device AI capabilities into their services. "Offering this chip for on-device AI improves the capabilities that can be deployed on mobile devices," he said.

However, every innovation brings difficulties. Kersei pointed to potential problems with distributing model weights to end devices, particularly for proprietary models like the massive, closed-source GPT-3 or GPT-4.

"This can be non-viable for a proprietary model where managing intellectual property is paramount," he stated. However, Kersei was quick to point to the potential of open-source models like Mistral 7B and Stable Diffusion as game-changers.

Mistral 7B, a lightweight model with 7 billion parameters, has gained attention in the AI community. When Decrypt evaluated its responses against larger rivals like LLaMA and Stable Beluga 2, it outperformed them. For perspective, GPT-4 reportedly has about 1.7 trillion parameters, whereas Mistral 7B has just 7 billion.

More than just a processor, Qualcomm's Snapdragon 8 Gen 3 is a sign of a mobile future focused on AI. With industry heavyweights like Qualcomm, Google, and Apple paving the way, it won't be long before your assistant can keep all of your secrets while still helping with tasks and answering queries.

Image: Wikimedia Commons
