
AI has a trust problem. Decentralized privacy-preserving tech can fix it



Opinion: Felix Xu, co-founder of ARPA and Bella Protocol

Artificial intelligence has been a dominant narrative since 2024, yet users and companies still cannot fully trust it. Whether it involves finances, personal data or healthcare decisions, hesitancy around AI's reliability and trustworthiness remains high.

This growing AI trust deficit is one of the most significant barriers to widespread adoption. Decentralized, privacy-preserving technologies are quickly being recognized as viable solutions, offering verifiability, transparency and stronger data protection without compromising AI's growth.

AI adoption and the trust deficit

AI was the second most popular category capturing crypto mindshare in 2024, commanding more than 16% of investor interest. Startups and multinational corporations have allocated considerable resources to AI, extending the technology into people's finances, health and nearly everything else.

For example, the emerging DeFi x AI (DeFAI) sector shipped more than 7,000 projects with a peak market cap of $7 billion in early 2025 before markets crashed. DeFAI has demonstrated AI's transformative potential to make decentralized finance (DeFi) more user-friendly through natural language commands, to execute complex multistep operations and to conduct sophisticated market research.

Innovation alone, however, has not resolved AI's core vulnerabilities: hallucinations, manipulation and privacy concerns.

In November 2024, a user convinced an AI agent on Base to send $47,000 despite being programmed never to do so. While the scenario was part of a game, it raised genuine concerns: Can AI agents be trusted with autonomy over financial operations?

Audits, bug bounties and red teaming help, but they do not eliminate the risk of prompt injection, logic flaws or unauthorized data use. According to KPMG (2023), 61% of people still hesitate to trust AI, and even industry professionals share that concern. A Forrester survey cited in Harvard Business Review found that 25% of analysts named trust as the biggest obstacle to adopting AI.

Skepticism remains strong. A survey conducted at a Wall Street Journal summit found that 61% of America's top IT leaders are still only experimenting with AI agents; the rest are holding back or avoiding them altogether, citing a lack of reliability, cybersecurity risks and data privacy as their top concerns.

Industries like healthcare feel these risks most acutely. Sharing electronic health records (EHR) with LLMs to improve outcomes is promising, but it is also legally and ethically fraught without airtight privacy protections.