Mitigating Social Biases in AI Systems with a Groundbreaking Training Technique

A groundbreaking training methodology called 'FairDeDup' aims to tackle the prevalent issue of social bias in artificial intelligence systems. Developed by researchers at Adobe in partnership with a doctoral candidate from Oregon State University, FairDeDup offers an affordable and promising approach to reducing societal bias in AI.

The method was conceived by Eric Slyman, a doctoral candidate at the Oregon State University (OSU) College of Engineering, together with researchers at Adobe. They set out to refine an existing AI training technique called SemDeDup, combining cost savings with principles of fairness.

Slyman found that simply eliminating redundant data from the training inputs can amplify the harmful societal biases an AI learns. To counter this, the FairDeDup method was engineered to decide which data to discard while consciously preserving diversity along human-defined dimensions.

More often than not, datasets scraped from the internet carry the biases prevalent in our society; when AI models are trained on them, those biases risk being embedded in the models' responses and behavior. By understanding how deduplication affects bias in AI, the team identified ways to mitigate these negative consequences.

How does FairDeDup work?

FairDeDup builds on deduplication, the removal of redundant information from AI training data. It improves on its predecessor by retaining the savings in computing resources while also curtailing the bias that can make its way into the AI system.
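
As a rough illustration of the general idea (not the exact SemDeDup or FairDeDup implementation, and with hypothetical function names and thresholds), semantic deduplication is often done by embedding each sample and discarding near-duplicates whose embeddings are almost identical:

```python
import numpy as np

def semantic_deduplicate(embeddings: np.ndarray, threshold: float = 0.95) -> list[int]:
    """Keep one representative per group of near-duplicate samples.

    embeddings: (n_samples, dim) array of L2-normalized image/caption embeddings.
    threshold:  cosine similarity above which two samples count as duplicates.
    Returns indices of the samples to keep.
    """
    kept: list[int] = []
    for i, emb in enumerate(embeddings):
        # Keep this sample only if it is not too similar to anything already kept.
        if all(np.dot(emb, embeddings[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Example: three samples, where the second nearly duplicates the first.
vecs = np.array([[1.0, 0.0], [0.999, 0.045], [0.0, 1.0]])
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
print(semantic_deduplicate(vecs))  # -> [0, 2]
```

Plain deduplication of this kind keeps whichever duplicate it happens to see first, which is exactly where unwanted skew toward over-represented groups can slip in.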

To achieve this, FairDeDup prunes datasets of image captions collected from the web. Pruning is the act of selecting a subset of data that accurately represents the entire dataset. When executed in a content-aware manner, pruning allows informed decisions about which parts of the data to discard and which to retain. Slyman highlights that, "Our approach enables AI training that is not only cost-effective and accurate but also more fair."
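
A minimal sketch of how fairness could be folded into that pruning step, under assumptions of our own rather than FairDeDup's published algorithm: within each cluster of near-duplicate image-caption pairs, instead of keeping an arbitrary representative, keep the sample whose human-defined attribute (the `attribute` field below is a hypothetical label such as occupation, age group, or region) is least represented so far, so that pruning preserves diversity along that dimension.

```python
from collections import Counter

def fairness_aware_prune(clusters: list[list[dict]]) -> list[dict]:
    """Keep one sample per duplicate cluster, preferring under-represented attributes.

    clusters: groups of near-duplicate samples, each sample a dict with an
              'attribute' key holding a human-defined dimension of diversity.
    Returns the pruned dataset.
    """
    kept: list[dict] = []
    counts: Counter = Counter()  # how often each attribute value has been kept so far
    for cluster in clusters:
        # Choose the member whose attribute value is currently rarest in the kept set.
        choice = min(cluster, key=lambda s: counts[s["attribute"]])
        kept.append(choice)
        counts[choice["attribute"]] += 1
    return kept

# Example: two clusters of near-duplicate captions describing "a doctor".
clusters = [
    [{"caption": "a doctor at work", "attribute": "male"},
     {"caption": "a doctor at work", "attribute": "female"}],
    [{"caption": "a doctor smiling", "attribute": "male"},
     {"caption": "a doctor smiling", "attribute": "female"}],
]
print([s["attribute"] for s in fairness_aware_prune(clusters)])  # -> ['male', 'female']
```

The design choice illustrated here is that the deduplication objective itself, not a later fine-tuning stage, is where the balance across groups is enforced.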

The benefits of FairDeDup aren't only a matter of fairness, but of accuracy as well. By addressing biases at the early stage of dataset pruning, the approach paves the way for more socially just AI systems. 'Fairness,' Slyman stresses, is context-dependent, a notion reinforced in the practical application of FairDeDup.

Beyond occupation, race, and gender, biases linked to age, geography, and culture can also be addressed during training. Slyman's work marks a significant step toward more inclusive and fair AI systems.

Disclaimer: The above article was written with the assistance of AI. The original sources can be found on ScienceDaily.