It has been reported that Google’s Gemini AI has faced criticism after generating historically inaccurate imagery and biased results. Decentralized AI development could be the key to creating more unbiased and transparent results.
The key to more unbiased algorithms
Decentralized AI development is important for creating safer, less biased AI algorithms. While such solutions are already receiving funding, decentralized AI will eventually become the crucial element in building more transparent and unbiased systems, says Calanthia Mei, co-founder of Masa Network.
She recently told Cointelegraph:
“Decentralized AI attempts to address the existential flaws in AI, ensuring a more unbiased and safer AI.”
The founder explained that blockchain-based decentralized AI provides more transparent decision-making, enhanced data privacy, and user-owned models that enable people to exchange their data or computing resources for token incentives.
In the past, some of the widely used centralized AI models have generated significant inaccuracies and sparked social media outrage.
For instance, Google withdrew its AI image generator in February after it produced historically inaccurate and “woke” images. The incident raised concerns about the application’s decision-making process.
Mei added:
“Centralized AI amplifies pre-existing power imbalances, privacy concerns and biases at an unprecedented pace, as exemplified by the Google Gemini AI incident where the AI depicted U.S. founding fathers as people of color, possibly as an overcorrection to long-standing racial bias problems in AI.”
Blockchain-based decentralized AI protocols provide the transparency needed to check the data provenance of AI outputs, which is vital for improving algorithms, as an AI model is only as good as the data it is trained on.
Mei added that the quality, diversity and representativeness of data influence the performance and fairness of AI systems, and that biased or limited data can lead to skewed results, compromising the reliability of AI-driven decisions.