May 29, 2024
AI-Generated Guidebooks on Amazon Raise Alarming Concerns Over Safety and Accuracy
AI

Experts warn that the proliferation of AI-generated guidebooks sold on Amazon could have fatal consequences. Human authors are cautioning readers that artificial intelligence can steer them wrong in everything from cookbooks to travel guides.

The latest cautionary tale about heedlessly following AI's recommendations comes from the otherwise quiet world of mushroom hunting. The New York Mycological Society recently warned on social media about the risks posed by questionable foraging guides believed to have been produced with generative AI tools such as ChatGPT.

“There are hundreds of poisonous fungi in North America, and several that are deadly,” stated Sigrid Jakob, president of the New York Mycological Society, in an interview with 404 Media. “They can look similar to popular edible species. A poor description in a book can mislead someone to eat a poisonous mushroom.”

A search on Amazon turned up numerous dubious titles likely attributed to non-existent authors, such as “The Ultimate Mushroom Books Field Guide of the Southwest” and “Wild Mushroom Cookbook For Beginners,” both of which have since been removed. These AI-generated books follow well-worn formulas, opening with thin fictional anecdotes about amateur hobbyists.

Detection tools such as ZeroGPT found the content riddled with errors and matching patterns typical of AI-generated language rather than reflecting genuine mycological expertise. Yet these books were marketed to foraging beginners, the readers least able to distinguish reliable sources from harmful AI-generated advice.

“Human-written books can take years to research and write,” stated Jakob.

Experts advise against relying too heavily on AI, which, left unchecked, can spread false information or harmful recommendations. A recent study found that consumers are more likely to believe incorrect information produced by AI than by humans.

Researchers used an AI text generator to create false tweets about topics such as vaccines and 5G technology. Survey participants were then asked to judge which tweets were generated by AI and which were written by real people.

Alarmingly, the general public could not reliably tell whether a tweet was written by a human or by a cutting-edge model such as GPT-3, and a tweet's accuracy had no effect on their ability to identify its source.

“As demonstrated by our results, large language models currently available can already produce text that is indistinguishable from organic text,” the researchers stated.

The problem is not confined to shady foraging guides. Another case of an AI app recommending dangerous recipes recently came to light.

A meal-planning app named “Savey Meal-Bot” from New Zealand’s Pak ‘n’ Save uses artificial intelligence to recommend dishes based on ingredients that users enter. When users jokingly entered dangerous household products, the app suggested lethal concoctions such as “Aromatic Water Mix” and “Methanol Bliss.”
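One basic guardrail is to validate user-supplied ingredients against a list of recognized foods before anything reaches the model. The Python sketch below is purely illustrative; the `KNOWN_FOODS` set and `validate_ingredients` function are assumptions for the example, not Pak ‘n’ Save’s actual code.

```python
# Hypothetical allowlist of recognized food items (illustrative only).
KNOWN_FOODS = {"rice", "chicken", "onion", "garlic", "potato", "carrot"}

def validate_ingredients(ingredients: list[str]) -> list[str]:
    """Reject anything that is not a recognized food item before it
    can reach the recipe-generating model."""
    unknown = [item for item in ingredients if item.lower() not in KNOWN_FOODS]
    if unknown:
        raise ValueError(f"Not recognized as food: {', '.join(unknown)}")
    return ingredients

# A joke input like ["bleach", "ammonia"] is stopped here instead of
# being handed to the AI, which would happily riff on it.
validate_ingredients(["rice", "chicken"])    # passes
# validate_ingredients(["bleach", "water"])  # raises ValueError
```

An allowlist is deliberately conservative: unlike a blocklist of known poisons, it fails closed when a user types something unexpected.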

This susceptibility to AI-driven misinformation is not surprising. Large language models are trained on enormous amounts of data to generate whatever continuation is most statistically plausible, not whatever is true. Because their output looks like what we expect a credible answer to look like, we are inclined to believe it. That is why LLMs produce fascinating but dangerous mushroom guides, and Midjourney renders gorgeous but unbuildable architecture.
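To make the mechanism concrete, here is a minimal toy sketch of plausibility-driven generation. The two-word contexts and probabilities are invented for illustration; real LLMs operate over vast vocabularies and billions of learned weights, but the core loop of picking the most statistically likely continuation is the same.

```python
# Toy next-token model: probabilities reflect how often words co-occur
# in training text, not whether the resulting claim is true.
NEXT_TOKEN_PROBS = {
    ("this", "mushroom"): {"is": 0.6, "looks": 0.4},
    ("mushroom", "is"): {"edible": 0.7, "poisonous": 0.3},  # plausible != safe
}

def generate(prompt: list[str], steps: int) -> list[str]:
    """Greedily extend the prompt with the most plausible next token."""
    tokens = list(prompt)
    for _ in range(steps):
        context = tuple(tokens[-2:])
        candidates = NEXT_TOKEN_PROBS.get(context)
        if not candidates:
            break
        # Pick the highest-probability continuation: fluent, confident,
        # and entirely indifferent to factual accuracy.
        tokens.append(max(candidates, key=candidates.get))
    return tokens

print(" ".join(generate(["this", "mushroom"], steps=2)))
# -> "this mushroom is edible": fluent, but nothing checked it.
```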

Although innovative algorithms can significantly improve human capabilities, society cannot afford to completely delegate its decision-making to computers. AI lacks the accountability and knowledge that come from real-world experience.

Algorithms may conjure lush, inviting forests in their virtual renderings, but without human guides who know the terrain, we risk wandering into danger.

Image: Freepik
