Cointime


Zero-Knowledge AI Will Take Over Your World, and You’re Gonna Be OK With It


Does the end justify the means?

We all, without exception, have suffered from AI invading our privacy.

It’s a silent attacker, but it’s present in almost everything we do nowadays.

Sadly, and unbeknownst to many, AI controls social media, and social media controls and drives our lives, to the point that it knows us better than we know ourselves.

It knows our secret desires and our hidden personality traits, sometimes before we even acknowledge them ourselves.

For years, it has had completely free access to the deepest secrets of our lives, things not even our closest friends or family know.

Suddenly, now the world has decided that privacy must be protected at all costs.

About time!

Consequently, a series of regulations around the world is putting the progress of AI, a technology that needs data more than anything, at serious risk.

AI, without data, is basically nuttin’.

Nuttin’!

Inevitably, after a notably successful 2022, AI has arrived at a crossroads, and it needs to act quickly if it is to keep disrupting our world the way we've been promised.

The answer to its problems? Surprisingly, cryptography.

Privacy is non-negotiable

“The end justifies the means.”

— Niccolò Machiavelli

This now-immortal quote is attributed to Machiavelli, a man who influenced many of the most powerful figures in human history, Napoleon among them.

Considering that its author thought rulers should be “brutal, calculating, and, when necessary, immoral”, I feel pretty confident claiming that the end, more often than not, doesn't justify the means.

AI? Great, but privacy-centric

AI shouldn't be allowed to progress if that means trampling over our rights.

It’s time to grow AI, but not without control. However, this is not a simple matter.

AI needs your data to know you: it trains algorithms that learn from us in order to serve us, or to manipulate us, depending on the use case.

Consequently, we find ourselves in an utter contradiction: How do we allow AI to grow if we put hurdles on the element — data — that allows it to progress?

Luckily for us, the solution is now in the palm of our hands.

The power of proving without showing

Imagine you’re a research scientist trying to cure cancer.

For whatever reason, you've made an incredible discovery: with certain patient data, you could potentially predict cancer in its very early stages and thus stop it from spreading, or at least slow it down.

Sounds enticing, right? Well, unsurprisingly, the element that can allow for this is none other than data.

But there’s a problem.

Patient data is protected by very strict confidentiality clauses, so hospitals can't simply hand it out like candy. It's not surprising, then, that this data remains siloed in hospitals' data centers.

This fact, inevitably, hinders the capacity of AI to disrupt the healthcare sector, a sector that has enormous potential for AI-based use cases that would save lives around the world.

Until now, this problem was unsolvable: how do we leverage this data without compromising patients' right to privacy?

And let's not be blinded by Machiavelli's quote: people have every right not to disclose their illnesses to society.

Therefore, privacy is non-negotiable, and private data can’t be used to train the AI models that, ironically, could potentially save the lives of those patients whose privacy we’re protecting.

Death for the sake of privacy sounds like nonsense, but let's not forget that we have no certainty these models would actually work.

Thus, the more suitable dichotomy is “Surrendering privacy for the sake of life… maybe?”.

Since no evidence clearly suggests the AI models would work (over 90% of AI models fail to achieve the expected results), we can't forsake privacy in return for a “maybe”.

Naturally, this problem remains unsolvable unless we find a way to process data while preserving privacy.

Seems impossible, right?

Well, it isn’t, anymore.

The case of federated learning

At first, one might think of federated learning.

With federated learning, we can train AI models locally at each site, so the data itself never needs to be shared.

However, federated learning has several disadvantages that seriously cripple its capacity to deliver.

The consolidation of model weights and parameters still has to happen centrally, and each research team has to trust that the others are executing their training properly (teams can tamper with data to skew results, manipulate weights, and so on).

This unsurprisingly leads to discrepancies and power battles between researchers, as a trust-based working model will always make people skeptical of what others are doing.

Needless to say, researchers are tempted to tamper with their models to achieve greater success among their peers.
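To make that centralization point concrete, here is a minimal sketch of the federated-averaging step that still has to run on a central server. The names and toy weight vectors are hypothetical, and plain Python lists stand in for real model parameters:

```python
# Federated averaging (FedAvg-style): each hospital trains locally and
# only ships its weights; a central server still has to aggregate them,
# and it must trust that every client trained honestly.

def fed_avg(client_weights, client_sizes):
    """Average per-client weight vectors, weighted by dataset size."""
    total = sum(client_sizes)
    merged = [0.0] * len(client_weights[0])
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * (n / total)
    return merged

# Three hypothetical hospitals with different dataset sizes.
weights_a = [0.2, 0.4]   # trained on 100 records
weights_b = [0.6, 0.0]   # trained on 300 records
weights_c = [0.1, 0.8]   # trained on 100 records

global_model = fed_avg([weights_a, weights_b, weights_c], [100, 300, 100])
print(global_model)  # weighted average of the three local models
```

Note that nothing in this scheme stops a client from submitting fabricated weights; the server simply averages whatever it receives, which is exactly the trust gap described above.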

But what if we could guarantee uniform execution through trustless environments where teams collaborate in a decentralized manner, or, in an even more powerful scenario, share that data among parties while protecting privacy?

The answer is zero

Zero-knowledge proofs are a cryptographic primitive that lets you prove, with high certainty, that a statement is true without revealing any information beyond the fact that it is true.

In other words, it's the ability to convince another party that a certain statement is true while revealing nothing else.

A question quickly comes to mind: how can we prove something without showing why it's true?

Let’s see this with a quick example:

You have two pens, a green one and a red one. They are identical in every other respect (form, shape, touch, weight); the only way to tell them apart is their color.

You want to prove to me that, despite their countless similarities, they are actually different.

But there's a catch: I'm color-blind.

Thus, I have no way to tell the pens apart, because I can't see that they are different colors.

Consequently, the only way I could know with full certainty that they differ is if you told me they are, indeed, different colors. Naturally, you're inclined to think that telling me is the only way to convince me, right?

Well, you’re wrong. You can make me play a game.

You hand me the two pens and tell me to hold one in each hand behind my back. This way, I can switch the pens between hands without you seeing the switch.

Since I'm missing a critical piece of information, at this point I'm convinced the pens are the same and that this will simply be a guessing game for you.

I show my hands and you immediately detect that I've switched the pens. I'm intrigued, but not yet convinced.

After all, you had a 50% chance of guessing my switch, right? You immediately offer to play again.

The result?

This time I didn't switch the pens, and you still guessed right. Now I'm starting to feel annoyed: the odds of guessing right twice in a row were only 25%, yet you managed it.

We play the game five more times.

All five times you’ve guessed right, one time after the other.

Now I'm actually amazed: you've guessed correctly seven straight times, each with a 50% chance, which means that if you were merely guessing, the probability of being right all seven times was 0.5⁷, roughly 0.78%.

Therefore, although we can’t reach full certainty, you have somehow convinced me that those two pens are different, without revealing why they’re different.

And that is a zero-knowledge proof: the ability to prove a statement with extremely high certainty without revealing any information beyond the fact that the statement is, indeed, true.
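The arithmetic behind the pen game can be checked with a small simulation. This is only an illustrative sketch of the interactive game, not a real zero-knowledge protocol: a prover who can see color always detects the switch, while a blind guesser survives all n rounds with probability 0.5ⁿ, which for seven rounds is about 0.78%:

```python
import random

def play_rounds(n, can_see_color, rng):
    """Simulate n rounds of the pen game.

    Each round the verifier secretly decides whether to switch the pens.
    A prover who sees color always detects the switch; a blind prover
    can only guess. One wrong answer and the proof is rejected.
    """
    for _ in range(n):
        switched = rng.random() < 0.5      # verifier's secret choice
        if can_see_color:
            answer = switched              # certain detection
        else:
            answer = rng.random() < 0.5    # pure guess
        if answer != switched:
            return False                   # caught cheating
    return True

rng = random.Random(42)

# Soundness error: how often does a blind guesser survive 7 rounds?
trials = 100_000
passes = sum(play_rounds(7, can_see_color=False, rng=rng) for _ in range(trials))
print(f"blind prover passed 7 rounds in {passes / trials:.4%} of trials")

# Completeness: an honest prover who sees color always passes.
assert all(play_rounds(7, can_see_color=True, rng=rng) for _ in range(1000))
```

Each extra round halves a cheater's chance of slipping through, which is why repeating the game drives the verifier's confidence arbitrarily close to certainty.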

Understanding this, and knowing the current limitations of AI, how can we bridge the two concepts to get the best of both?

The options are limitless

With zk-proofs, AI suddenly has the tools to protect privacy while still training models business-as-usual.

Let’s see this with a handful of examples:

  • Zero-knowledge proofs allow multi-team projects to collaborate in a trustless environment: each team can train on its own data separately and attach a zk-proof demonstrating to the other teams, with very high certainty, that the model was trained as agreed and the results haven't been tampered with.
  • Data can be shared across silos in fully anonymized form, accompanied by zk-proofs giving high certainty that the data is real. This matters because anonymization today usually entails a heavy loss of granularity that hurts model performance. With zk-proofs, data can be shared at high granularity, anonymized, and without fear that it's false.
  • Zk-proofs also allow models to be trained while the important parameters are stored on a blockchain. Storing data on-chain is expensive, so you keep only the most critical, highest-security data on-chain and outsource execution off-chain. By including a zk-proof, you can verify that the off-chain computations were performed to the required standard without having to store and execute everything on the blockchain.
  • Zk-proofs can also be combined with Fully Homomorphic Encryption, a technique that lets data be processed and used for training while remaining encrypted, so that highly confidential data can be shared among separate teams in a safe environment.
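As a loose illustration of the on-chain/off-chain split, here is a plain hash-commitment sketch with hypothetical names. It is not a zero-knowledge proof (a real zk-proof would let the verifier check the computation without ever seeing the weights), but it shows the cost structure: only a small digest lives "on-chain", while the heavy data stays off-chain and can later be checked against it:

```python
import hashlib
import json

def commit(model_weights):
    """Off-chain: hash the trained weights; only this digest goes on-chain."""
    payload = json.dumps(model_weights, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def verify(model_weights, on_chain_digest):
    """Later, anyone can check revealed weights against the stored digest."""
    return commit(model_weights) == on_chain_digest

weights = {"layer1": [0.12, -0.98], "layer2": [1.5]}
digest = commit(weights)   # a 32-byte digest, cheap to store on-chain

assert verify(weights, digest)                        # honest reveal passes
tampered = {"layer1": [0.12, -0.97], "layer2": [1.5]}
assert not verify(tampered, digest)                   # any tampering is caught
```

The limitation is exactly what zk-proofs remove: to verify, the weights must eventually be revealed. A zk-proof would attest that the off-chain computation matched the committed digest while keeping the data itself hidden.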

This revolution will allow technology to progress with ease, dissolving the tension between ethical constraints and technological progress.
