AI has a trust problem: decentralized, privacy-preserving technology can fix it

Opinion by: Felix Xu, co-founder of ARPA Network and Bella Protocol
AI has been the dominant narrative since 2024, but users and companies still cannot fully trust it. Whether it involves finances, personal data or healthcare decisions, hesitation about AI's reliability and integrity remains high.

This growing AI trust deficit is now one of the most significant barriers to widespread adoption. Decentralized, privacy-preserving technologies are quickly being recognized as viable solutions that offer verifiability, transparency and stronger data protection without compromising AI's growth.
AI's pervasive trust deficit
AI was the second most popular crypto category capturing investor mindshare in 2024, drawing more than 16% of investor interest. Startups and multinational companies alike have allocated significant resources to AI, expanding the technology's reach into people's finances, health and other areas.

For example, the emerging DeFi x AI (DeFAI) sector shipped more than 7,000 projects with a peak market cap of $7 billion in early 2025 before the markets crashed. DeFAI has demonstrated AI's transformational potential to make decentralized finance (DeFi) more user-friendly with natural language commands, execute complex multistep operations and conduct sophisticated market research.
Innovation alone, however, has not resolved AI's core vulnerabilities: hallucinations, manipulation and privacy concerns.

In November 2024, a user convinced an AI agent on Base to send $47,000 despite it being programmed never to do so. While the scenario was part of a game, it raised real concerns: Can AI agents be trusted with autonomy over financial operations?
Audits, bug bounties and red teaming help, but they do not eliminate the risk of prompt injection, logic flaws or unauthorized data use. According to a 2023 KPMG study, 61% of people are still hesitant to trust AI, and even industry professionals share that concern. A Forrester survey cited in Harvard Business Review found that 25% of analysts named trust as AI's biggest obstacle.

That skepticism remains strong. A survey conducted at The Wall Street Journal's CIO Network Summit found that 61% of America's top IT leaders are still experimenting with AI agents. The rest were either not experimenting or avoiding them altogether, citing lack of reliability, cybersecurity risks and data privacy as their top concerns.
Industries like healthcare feel these risks most acutely. Sharing electronic health records (EHR) with LLMs to improve outcomes is promising, but it is also legally and ethically fraught without airtight privacy protections.

For example, the healthcare industry already suffers adversely from data privacy violations. The problem compounds when hospitals share EHR data to train AI algorithms without protecting patient privacy.
Decentralized, privacy-preserving infrastructure
JM Barrie wrote in Peter Pan, "All the world is made of faith, and trust, and pixie dust." Trust is not just a nice-to-have in AI; it is foundational. AI's projected economic windfall of $15.7 trillion by 2030 may never materialize without it.

Enter decentralized cryptographic systems such as zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs). These technologies offer a new path: letting users verify AI decisions without revealing personal data or the model's internal workings.

By applying privacy-preserving cryptography to machine-learning infrastructure, AI can be auditable, trustworthy and privacy-respecting, especially in sectors like finance and healthcare.
zk-SNARKs rely on advanced cryptographic proof systems that let one party prove something is true without revealing how. For AI, this enables models to be verified as correct without disclosing their training data, input values or proprietary logic.

Imagine a decentralized AI lending agent. Instead of reviewing full financial records, it verifies encrypted credit score proofs to make autonomous loan decisions without accessing sensitive data. This protects user privacy and shields institutions from risk.
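A full zk-SNARK is far beyond a few lines of code, but the building block beneath it, a cryptographic commitment that hides a value while binding the committer to it, can be sketched simply. The snippet below is a toy illustration of that hiding/binding idea under assumed helper names (`commit`, `open_commitment` are hypothetical, not from any ZK library); a real system would add a range proof so the lender learns only "score above threshold", never the score itself.

```python
import hashlib
import secrets

def commit(value: int) -> tuple[str, bytes]:
    # Hiding: the digest reveals nothing practical about `value`.
    # Binding: the committer cannot later open it to a different value.
    salt = secrets.token_bytes(32)
    digest = hashlib.sha256(salt + value.to_bytes(8, "big")).hexdigest()
    return digest, salt

def open_commitment(digest: str, salt: bytes, claimed: int) -> bool:
    # Recompute the hash and check it matches the stored commitment.
    recomputed = hashlib.sha256(salt + claimed.to_bytes(8, "big")).hexdigest()
    return recomputed == digest

# A borrower commits to a credit score; the lender stores only the digest.
digest, salt = commit(742)
assert open_commitment(digest, salt, 742)      # honest opening verifies
assert not open_commitment(digest, salt, 800)  # a different score is rejected
```

The gap between this sketch and zk-SNARKs is the "zero-knowledge" step: proving a statement about the committed value (such as a threshold) without ever opening the commitment at all.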
ZK technology also addresses the black-box nature of LLMs. Using validity proofs, it becomes possible to verify AI outputs while protecting both data integrity and model architecture. This is a win for users and companies alike: one no longer fears data misuse, while the other safeguards its intellectual property.
Decentralizing trust in AI

We are entering a new phase of AI in which better models are not enough. Users demand transparency; companies need resilience; regulators expect accountability.

Decentralized, verifiable cryptography delivers all three.
Technologies like zk-SNARKs, threshold multiparty computation and BLS-based verification systems are not just "crypto tools"; they are becoming the foundation of trustworthy AI. Combined with blockchain's transparency, they create a powerful new stack for privacy-preserving, auditable and reliable AI.
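Of the technologies named above, threshold multiparty computation is the easiest to demonstrate in miniature. The sketch below shows additive secret sharing, the simplest MPC primitive: each party's private input is split into random shares, and only the aggregate is ever reconstructed. This is a minimal illustration, not any production protocol, and the hospital scenario and numbers are hypothetical.

```python
import secrets

PRIME = 2**61 - 1  # field modulus; a Mersenne prime, chosen for illustration

def share(value: int, n: int = 3) -> list[int]:
    """Split `value` into n additive shares mod PRIME.

    Any n-1 shares look uniformly random and reveal nothing about `value`;
    only the sum of all n shares reconstructs it.
    """
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three hospitals each hold a private patient count they will not disclose.
inputs = [120, 75, 300]
all_shares = [share(v) for v in inputs]

# Each computing party sums the one share it received from every hospital;
# no party ever sees another hospital's raw input.
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]

# Combining the partial sums reveals only the aggregate, never the inputs.
total = sum(partial_sums) % PRIME
assert total == sum(inputs)
```

Production MPC systems add authentication and robustness against misbehaving parties, but the privacy mechanism, computing on shares rather than on raw data, is exactly the one sketched here.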
Gartner has predicted that 80% of companies will be using AI by 2026. Adoption will not be driven by hype or resources alone. It will hinge on building AI that people and companies can actually trust.
And that starts with decentralization.
Opinion by: Felix Xu, co-founder of ARPA Network and Bella Protocol.
This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.