Ethereum cofounder Vitalik Buterin has voiced his discontent with the current state of artificial intelligence products, in particular OpenAI's ChatGPT. In a post on X, Buterin said he dislikes AI-written content and discussed the risks it poses for upcoming tech development.
In the post, Buterin criticized creators who feed bullet points to ChatGPT and have it generate entire articles. He argues the tactic makes for worse reading and fails to benefit readers.
If you find yourself writing a bullet point list and passing it to GPT to make a “proper” article, it’s often better to just give people the bullet points.
The GPT adds “wordcel noise” that the reader has to struggle to extract useful info from more than it adds useful context.…
— vitalik.eth (@VitalikButerin) February 10, 2025
He explained that readers often get more value from the bullet points themselves than from the expanded text. In his view, the generated prose complicates matters rather than making content easier to understand.
Buterin’s Vision for a Safer AI Future
Buterin also noted that AI-generated guidance can become outdated within about half a year, underscoring his doubts about the consistency and staying power of AI insights across today's fast-moving technologies.
Beyond text generation, Vitalik has actively warned about AI that could become smarter than humans. In an earlier blog post he put forward extreme measures to control AI development, including drastically reducing global computing capacity from its current level.
He argues this step would buy time to develop defenses against powerful AI systems that might exceed human abilities.
Buterin also suggests verifying the identity of computer users and registering computer hardware, and recommends fitting industrial-scale computers with special chips that restrict operation to approved machines.
He proposes these precautions to keep AI growth under control and avoid forfeiting human oversight once AI reaches a critical point.
Buterin Warns of AI’s Potential Threat to Humanity
Buterin's security suggestions echo other experts' discussions of AI safety strategies. He warns that badly designed AI systems could destroy humanity or permanently strip people of their power.
According to him, slowing AI progress through computing-power restrictions gives society a better chance to control its development. His statements prompt reflection on how technology growth should be managed.
While many see AI as a path to new technology, Buterin focuses on the problems that can arise without proper control. Debates about AI's growth and ethics will only intensify as the technology keeps developing.
Through his assessments, Buterin examines the potential difficulties that come with AI's growth. His strict proposals are seen as valuable for driving worldwide discussion about regulating AI technology and keeping it safe for humans.
This news is republished from another source. You can check the original article here