Andrea Colamedici invented a philosopher, presented him as an author and produced a book, generated in secret with the help of artificial intelligence, to manipulate reality in the digital age.
People were deceived. Accusations of dishonesty, bad ethics and even illegality flew.
But the man behind it all, Mr. Colamedici, insists it was not a hoax; rather, he described it as a “philosophical experiment,” saying it helps show how AI “will slowly but inevitably destroy our capacity to think.”
Mr. Colamedici is an Italian publisher who, together with two AI tools, generated “Hypnocracy: Trump, Musk, and the Architecture of Reality,” a text ostensibly written by Jianwei Xun, a nonexistent philosopher.
In December, Mr. Colamedici’s publishing house printed 70 copies of an Italian edition that he had supposedly translated. Even so, the book quickly drew attention, covered by media in Germany, Spain, Italy and France and cited by technology luminaries.
“Hypnocracy” describes how powerful people use technology to shape perception with “hypnotic narratives,” putting the public into a kind of collective trance that can be exacerbated by reliance on AI.
The book was published as schools, companies, governments and internet users around the world struggle with how to use, and how not to use, the AI tools that tech giants and startups have made widely available. (The New York Times has sued OpenAI, the creator of ChatGPT, and its partner, Microsoft, alleging copyright infringement of news content. The two companies have denied the lawsuit’s claims.)
The book, however, also turned out to be a demonstration of its own thesis, played out on unwitting readers.
The book, Mr. Colamedici said, was intended to show the dangers of the “cognitive apathy” that could develop if thinking were delegated to machines and people did not cultivate their own discernment.
“I tried to create a performance, an experience that is not just the book,” he said.
Mr. Colamedici teaches what he calls “the art of prompting,” or how to ask AI intelligent questions and give it actionable instructions, at the European Institute of Design in Rome. He said he sees two extreme, though opposite, responses to tools such as ChatGPT: many students want to rely on them exclusively, while many teachers think AI is inherently wrong. Instead, he tries to teach users how to distinguish fact from fabrication and how to interact with the tools productively.
The book is an extension of that effort, Mr. Colamedici argued. The AI tools he used helped him refine his ideas, while clues, real and invented, about the fake author, online and in the book, were intentionally planted to suggest that something was amiss and prod readers to ask questions, he said.
The first chapter discusses fake authorship, for example, and the book contains obscure references to Italian culture that would be unlikely to come from a young Hong Kong philosopher, hints that could help lead attentive readers to the true author.
Sabina Minardi, an editor at the Italian outlet L’Espresso, picked up on the clues, exposing Jianwei Xun as fake earlier this month.
Mr. Colamedici then updated the fake author’s biographical page and spoke with publications, including some that had been deceived by his work. New editions printed this month carry postscripts about the truth.
But some who had first embraced the book now reject it and question whether Mr. Colamedici acted unethically or violated a European Union law on the use of AI.
The French outlet Le Figaro wrote about “l’affaire Jianwei Xun,” explaining that the “problem” with its earlier interview with the Hong Kong philosopher was that he “does not exist.”
The Spanish newspaper El País retracted a report on the book, replacing it with a note saying the book “did not acknowledge AI’s participation in the creation of the text, a violation of the new European AI law.”
Article 50 of that law says that if one uses an AI system to generate text for the purpose of “informing the public on matters of public interest,” then it must (with limited exceptions) be disclosed that generative AI was used.
Jonathan Zittrain, a professor of law and computer science at Harvard, said, “That provision on its face seems to cover the book’s creator and perhaps anyone republishing its contents. The law does not enter into force until August 2026, but it is common in the EU for people and institutions to want to follow laws that seem morally right even when they do not yet technically apply.”
Still, Mr. Zittrain said he was more inclined to call Mr. Colamedici’s project “a work of art, or simply marketing, that involved using a pseudonym.”
Mr. Colamedici is disappointed that some early champions have denounced the experiment. But he plans to keep using AI to demonstrate the dangers it poses. “This is the moment,” he said. “We are putting cognition at risk. Use it or lose it.”
He said he plans to have Jianwei Xun, whom he now describes as a collaboration between humans and artificial intelligence, teach a course on AI next fall.