The intersection of DeFi and AI calls for transparent security

Opinion by: Jason Jiang, chief business officer of CertiK
Since its inception, the decentralized finance (DeFi) ecosystem has been defined by innovation, from decentralized exchanges to lending and borrowing protocols, stablecoins and more.
The latest development is DeFAI, or DeFi powered by artificial intelligence. Under DeFAI, autonomous bots trained on large data sets can significantly improve efficiency by executing trades, managing risk and participating in governance protocols.
As with all blockchain-based innovations, however, DeFAI may also introduce new attack vectors that the crypto community must address to improve user safety. Doing so requires a clear-eyed view of the vulnerabilities that accompany innovation in order to ensure security.
DeFAI agents are a step beyond traditional smart contracts
Within blockchain, smart contracts have traditionally operated on simple logic: for example, "If X happens, then Y will execute." Because of their inherent transparency, smart contracts can be audited and verified.
DeFAI, on the other hand, pivots away from the traditional smart contract structure, as its AI agents are inherently probabilistic. These agents make decisions based on evolving data sets, prior inputs and context. They interpret signals and adapt rather than react to a predetermined event. While some may rightly argue that this process delivers sophisticated innovation, it also creates a breeding ground for errors and exploits through that very uncertainty.
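To illustrate the contrast, here is a minimal, purely illustrative Python sketch; the function names, thresholds and toy "model" are hypothetical and not drawn from any real protocol. A contract-style rule returns the same auditable output for the same input, while an agent-style decision depends on learned context and sampling, so identical inputs can yield different actions.

import random

def deterministic_rule(price: float) -> str:
    # Smart-contract style: the same input always yields the same, auditable output.
    return "liquidate" if price < 1_000 else "hold"

def probabilistic_agent(price: float, history: list[float]) -> str:
    # Agent style: the decision depends on learned context and sampled behavior,
    # so two identical inputs may produce different actions.
    trend = (price - sum(history) / len(history)) / price
    confidence = 0.5 + trend  # toy stand-in for a model's output
    return "buy" if random.random() < confidence else "hold"

print(deterministic_rule(950))                 # always "liquidate"
print(probabilistic_agent(1_000, [980, 990]))  # may vary from run to run

The second function is precisely what makes DeFAI harder to audit: its behavior cannot be fully verified by reading the code alone.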
So far, early iterations of AI-powered trading bots in decentralized protocols have signaled the shift toward DeFAI. For example, users or decentralized autonomous organizations (DAOs) can deploy a bot to scan for specific market patterns and execute trades within seconds. As innovative as this may sound, most of these bots still run on Web2 infrastructure, importing into Web3 the weakness of a centralized point of failure.
DeFAI creates new attack surfaces
The industry should not get swept up in the excitement of integrating AI into decentralized protocols when this shift may create new attack surfaces it is not prepared for. Bad actors could exploit AI agents through model manipulation, data poisoning or adversarial input attacks.
Consider an AI agent trained to identify arbitrage opportunities between decentralized exchanges (DEXs).
Threat actors could tamper with its input data, causing the agent to execute unprofitable trades or even drain funds from a liquidity pool. Moreover, a compromised agent could mislead an entire protocol into acting on false information or serve as a launching point for larger attacks.
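A minimal sketch of that failure mode, assuming hypothetical price feeds and thresholds: a naive agent that trusts a single quote will act on a poisoned price, while even a basic cross-check against an independent reference (such as a time-weighted average price) can reject the manipulated input before any trade fires.

def naive_arbitrage(dex_a_price: float, dex_b_price: float) -> bool:
    # Trades whenever the spread looks profitable -- even if one feed is poisoned.
    return abs(dex_a_price - dex_b_price) / min(dex_a_price, dex_b_price) > 0.01

def guarded_arbitrage(dex_a_price: float, dex_b_price: float,
                      reference_price: float, max_deviation: float = 0.05) -> bool:
    # Rejects any quote that deviates too far from an independent reference
    # before evaluating the spread.
    for price in (dex_a_price, dex_b_price):
        if abs(price - reference_price) / reference_price > max_deviation:
            return False  # likely manipulated input; do not trade
    return naive_arbitrage(dex_a_price, dex_b_price)

# A poisoned quote of 1,500 against a true market price near 1,000:
print(naive_arbitrage(1_500, 1_000))           # True  -> the bot takes the bait
print(guarded_arbitrage(1_500, 1_000, 1_000))  # False -> the trade is rejected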
These risks are compounded by the fact that most AI agents are currently black boxes. Even to the developers who build them, the decision-making logic of these agents can be opaque.
These traits run counter to the Web3 ethos, which is built on transparency and verifiability.
Security is a shared responsibility
With these risks in mind, some may voice concerns about DeFAI's implications and even call for a pause in its development. DeFAI, however, is likely to keep evolving and to see greater adoption. What is needed instead is to adapt the industry's security posture accordingly. Ecosystems involving DeFAI will likely require a shared security model in which developers, users and third-party auditors determine how best to maintain security and mitigate risks.
AI agents must be treated like any other piece of onchain infrastructure: with skepticism and scrutiny. That means rigorously auditing their code logic, simulating worst-case scenarios and even using red-team exercises to expose attack vectors before malicious actors can exploit them. The industry must also develop standards for transparency, such as open-source models or documentation.
Regardless of how the industry views this shift, DeFAI raises new questions about trust in decentralized systems. When AI agents can autonomously hold assets, interact with smart contracts and vote on governance proposals, trust is no longer just about verifying logic; it is about verifying intent. This calls for exploring how users can ensure that an agent's objectives align with their own short-term and long-term goals.
Towards safe, transparent intelligence
The path forward should be one of cross-disciplinary solutions. Cryptographic techniques such as zero-knowledge proofs could help verify the integrity of AI actions, and onchain attestation frameworks could help trace the origins of decisions. Finally, AI-assisted auditing tools could evaluate agents as comprehensively as developers currently review smart contract code.
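As a rough sketch of the attestation idea, and only as an assumption about what such a framework might record rather than a reference to any existing tool, an agent could commit to a hash of each decision's inputs and outputs so that its provenance can later be checked:

import hashlib
import json
import time

def attest_decision(agent_id: str, inputs: dict, action: str) -> str:
    record = {
        "agent": agent_id,
        "inputs": inputs,          # e.g., the price quotes the agent actually saw
        "action": action,          # the trade or vote it decided to take
        "timestamp": int(time.time()),
    }
    # Canonical JSON so the same record always hashes to the same digest.
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    # In practice this digest would be published onchain; here it is simply returned.
    return digest

print(attest_decision("arb-bot-01", {"dex_a": 1_000.0, "dex_b": 1_012.5}, "swap A->B"))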
The reality, however, is that the industry is not there yet. For now, rigorous auditing, transparency and stress testing remain the best defense. Users considering participation in DeFAI protocols should verify that those protocols apply these principles to the AI logic that drives them.
Securing the future of AI innovation
DeFAI is not inherently unsafe, but it differs from most of today's Web3 infrastructure. The speed of its adoption risks outpacing the security frameworks the industry currently relies on. As the crypto industry continues to learn, often the hard way, innovation without security is a recipe for disaster.
Given that AI agents are poised to act on behalf of users, hold their assets and shape protocols, the industry must accept that every line of AI logic is still code, and every line of code can be exploited.
If DeFAI adoption is to proceed without compromising safety, it must be designed with security and transparency in mind. Anything less invites the very outcomes decentralization is meant to prevent.
Opinion by: Jason Jiang, chief business officer of CertiK.
This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.