
How zero-knowledge proofs can make AI fair

Opinion by: Rob Viglione, Co-Founder and CEO of Horizen Labs

Can you trust your AI to be unbiased? A recent research paper suggests it's a little more complicated than that. Unfortunately, bias is not just a bug; without the right cryptographic guardrails, it is a persistent feature.

A September 2024 study from Imperial College London shows how zero-knowledge proofs (ZKPs) can help companies prove that their machine learning (ML) models treat all demographic groups equally while keeping model details and user data private.

Zero-knowledge proofs are cryptographic methods that allow one party to prove to another that a statement is true without revealing any information beyond the statement's validity. When it comes to defining “fairness,” however, we open a whole new can of worms.

Machine learning bias

In machine learning models, bias shows up in many different ways. It can cause a credit scoring service to rate a person differently based on the credit scores of their friends and communities, which can be inherently discriminatory. It can also prompt AI image generators to depict the Pope and ancient Greeks as people of various races, as Google's AI tool Gemini infamously did last year.

It is easy to spot an unfair machine learning (ML) model in the wild. If the model denies people loans or credit because of who their friends are, that is discrimination. If it rewrites history or treats specific demographics differently to overcorrect in the name of equity, that is also discrimination. Both scenarios erode trust in these systems.

Consider a bank using an ML model for loan approvals. A ZKP could prove that the model is not biased against any demographic without revealing sensitive customer data or proprietary model details. With ZK and ML, banks could prove that they are not systematically discriminating against any racial group. Such proofs could be generated in real time and continuously, in contrast to today's inefficient government audits of private data.

The perfect ML model? One that does not rewrite history or treat people differently based on their background. AI must comply with anti-discrimination laws such as the American Civil Rights Act of 1964. The problem lies in baking this into AI and making it provable.

ZKPs offer a technical path to ensuring this compliance.

AI is biased (but it doesn't have to be)

When dealing with machine learning, we need to make sure that any attestation of fairness keeps the underlying ML models and training data confidential. It needs to protect intellectual property and users' privacy while providing enough assurance for users to know that the model is not discriminatory.

Not an easy task. ZKPs offer a verifiable solution.

ZKML (zero-knowledge machine learning) is how we use zero-knowledge proofs to verify that an ML model is what it says on the box. ZKML combines zero-knowledge cryptography with machine learning to create systems that can verify properties of an AI model without exposing the underlying models or data. We can also take that concept and use ZKPs to identify ML models that treat everyone equally and fairly.
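To make that concrete, here is a minimal Python sketch (not from the study) of the commit-prove-verify flow a ZKML system follows. Every name and value here is hypothetical, and the proof step is mocked: the SHA-256 commitment and in-the-clear fairness check stand in for a real zero-knowledge proof system, which would evaluate the model inside a circuit so the verifier learns nothing beyond the claim.

```python
import hashlib

# Toy audit set the prover keeps private: (group, model_decision) pairs.
AUDIT_SET = [("group_a", 1), ("group_a", 0), ("group_b", 1), ("group_b", 0)]

def commit(model_weights: bytes) -> str:
    # Binding commitment so the prover can't swap models after the fact.
    # SHA-256 stands in for the commitment scheme of a real proof system.
    return hashlib.sha256(model_weights).hexdigest()

def prove_fairness(model_weights: bytes, audit_set, max_gap: float = 0.05) -> dict:
    # Hypothetical prover. A real ZKML pipeline would evaluate the model
    # inside a zero-knowledge circuit and emit a succinct proof; here the
    # metric is computed in the clear just to show what is being attested.
    rates = {}
    for group in {g for g, _ in audit_set}:
        decisions = [d for g, d in audit_set if g == group]
        rates[group] = sum(decisions) / len(decisions)
    gap = max(rates.values()) - min(rates.values())
    return {
        "commitment": commit(model_weights),
        "claim": f"demographic parity gap <= {max_gap}",
        "holds": gap <= max_gap,
    }

def verify(proof: dict, expected_commitment: str) -> bool:
    # Hypothetical verifier: checks that the proof is bound to the committed
    # model and that the claim holds. With a real ZKP, verification would
    # reveal nothing about the weights or audit data beyond the claim itself.
    return proof["commitment"] == expected_commitment and proof["holds"]

weights = b"serialized-model-weights"
proof = prove_fairness(weights, AUDIT_SET)
print(verify(proof, commit(weights)))  # True when parity holds on the audit set
```

The key property this mock cannot deliver, and a real ZKP can, is that verification succeeds without the verifier ever seeing the model weights or the audit set.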

In the past, using ZKPs to prove AI fairness was extremely limited because they could only cover one phase of the ML pipeline. This made it possible for dishonest model providers to construct data sets that would satisfy the fairness requirements even when the model itself failed to. ZKPs also introduced unrealistic computational demands and long wait times to produce proofs of fairness.

In recent months, ZK frameworks have made it possible to scale ZKPs so they can assess the end-to-end fairness of models with tens of millions of parameters, and to do so provably securely.

The trillion-dollar question: How can we measure if an AI is fair?

Let's break down three of the most common definitions of group fairness: demographic parity, equality of opportunity and predictive equality. (A short code sketch after these definitions illustrates all three.)

Demographic parity means that the probability of a specific prediction is the same across different groups, such as race or gender. Diversity, equity and inclusion departments often use it as a measurement to try to reflect the demographics of a population within a company's workforce. It is not the ideal fairness metric for ML models because expecting every group to have the same outcomes is unrealistic.

Equality of opportunity is easy for most people to understand. It gives each group the same chance of a positive outcome, assuming its members are equally qualified. It does not optimize for outcomes; it only requires that every demographic have the same opportunity to get a job or a home loan.

Predictive equality, meanwhile, measures whether an ML model makes predictions with the same accuracy across different demographics, so no one is penalized simply for being part of a group.
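As a toy illustration (not from the study), the Python sketch below computes all three metrics on a made-up audit set. The records and field names are hypothetical, and "predictive equality" is computed as per-group accuracy, following the description above.

```python
# Hypothetical audit records: (group, true outcome y, model prediction y_hat).
records = [
    ("a", 1, 1), ("a", 0, 0), ("a", 1, 0), ("a", 0, 1),
    ("b", 1, 1), ("b", 0, 0), ("b", 1, 1), ("b", 0, 0),
]

def rates(group: str) -> dict:
    rows = [(y, p) for g, y, p in records if g == group]
    positives = [p for _, p in rows]
    qualified = [p for y, p in rows if y == 1]
    correct = [int(y == p) for y, p in rows]
    return {
        # Demographic parity: how often the group receives a positive prediction.
        "positive_rate": sum(positives) / len(positives),
        # Equality of opportunity: positive-prediction rate among the qualified.
        "opportunity_rate": sum(qualified) / len(qualified),
        # Predictive equality (as described above): per-group prediction accuracy.
        "accuracy": sum(correct) / len(correct),
    }

for g in ("a", "b"):
    print(g, rates(g))
```

In this toy data, both groups receive positive predictions at the same rate, so demographic parity holds, yet qualified members of group "a" are approved less often. That is exactly the kind of gap the other two metrics catch, and why the choice of metric matters.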

In both cases, the ML model is not putting its thumb on the scale for equity reasons; it is only ensuring that groups are not being systematically discriminated against in any way. This is an eminently reasonable standard.

Fairness is becoming the standard, one way or another

Over the past year, the US government and other countries have issued statements and mandates around AI fairness and protecting the public from ML bias. Now, with a new US administration, AI fairness will likely be approached differently, returning the focus to equality of opportunity and away from equity.

As political landscapes shift, so do the definitions of fairness in AI, moving between equity-focused and opportunity-focused paradigms. We welcome ML models that treat everyone equally without putting thumbs on the scale. Zero-knowledge proofs can serve as an airtight method to verify that ML models do this without revealing private data.

While ZKPs have faced plenty of scalability challenges over the years, the technology is finally becoming affordable for mainstream use cases. We can use ZKPs to verify the integrity of training data, protect privacy, and ensure that the models we use are what they claim to be.

As ML models become more intertwined with our daily lives, and our future job prospects, college admissions and mortgages come to depend on them, we could use a little more assurance that AI is treating us fairly. Whether we can all agree on a definition of fairness, however, is another question altogether.

Opinion by: Rob Viglione, co-founder and CEO of Horizen Labs.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.