Meta Launches New FACET Dataset to Address Cultural Bias in AI Tools

Meta’s looking to improve representation and fairness in AI models with the launch of a new, human-labeled dataset of 32,000 images, which will help to ensure that more types of demographic attributes are recognized and accounted for within AI systems.

[Image: Meta FACET dataset]

As the example above illustrates, Meta’s FACET (FAirness in Computer Vision EvaluaTion) dataset provides a range of images that have been assessed for various demographic attributes, including gender, skin tone, hairstyle, and more.

The idea is that this will help AI developers factor such elements into their models, ensuring better representation of historically marginalized communities.
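For a sense of what that looks like in practice, here’s a minimal sketch of how human-labeled, person-level attributes might be structured and used to slice an evaluation set. The field names and value scales below are hypothetical, illustrating the concept rather than Meta’s actual FACET schema:

```python
# A minimal sketch of how human-labeled demographic attributes might be
# attached to people in images and used to slice an evaluation set.
# The field names and value scales are hypothetical, not Meta's actual
# FACET annotation schema.
from dataclasses import dataclass

@dataclass
class PersonAnnotation:
    image_id: str
    perceived_gender: str  # e.g. "masculine" / "feminine" presentation
    skin_tone: int         # e.g. a bucket on a 1-10 perceived-tone scale
    hair_type: str         # e.g. "coily", "straight", "wavy"

annotations = [
    PersonAnnotation("img_0001", "feminine", 8, "coily"),
    PersonAnnotation("img_0002", "masculine", 3, "straight"),
    PersonAnnotation("img_0003", "feminine", 5, "wavy"),
]

# Slice by attribute so each demographic group can be evaluated separately.
darker_tone = [a for a in annotations if a.skin_tone >= 7]
coily_hair = [a for a in annotations if a.hair_type == "coily"]
print(len(darker_tone), len(coily_hair))  # 1 1
```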

As explained by Meta:

“While computer vision models allow us to accomplish tasks like image classification and semantic segmentation at unprecedented scale, we have a responsibility to ensure that our AI systems are fair and equitable. But benchmarking for fairness in computer vision is notoriously hard to do. The risk of mislabeling is real, and the people who use these AI systems may have a better or worse experience based not on the complexity of the task itself, but rather on their demographics.”

Including a broader set of demographic qualifiers can help to address this issue, which, in turn, should ensure better representation of a wider audience within the results.

“In preliminary studies using FACET, we found that state-of-the-art models tend to exhibit performance disparities across demographic groups. For example, they may struggle to detect people in images whose skin tone is darker, and that challenge can be exacerbated for people with coily rather than straight hair. By releasing FACET, our goal is to enable researchers and practitioners to perform similar benchmarking to better understand the disparities present in their own models and monitor the impact of mitigations put in place to address fairness concerns. We encourage researchers to use FACET to benchmark fairness across other vision and multimodal tasks.”
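To make that benchmarking idea concrete, here’s a minimal sketch of the kind of disparity check described: scoring a detector’s recall separately for each demographic group, then reporting the gap between the best- and worst-served groups. The group labels and outcomes below are toy values, not FACET’s real evaluation output:

```python
# A toy version of the fairness benchmarking described above: compute a
# detector's recall separately for each demographic group, then report
# the gap between the best- and worst-served groups. The group names
# and outcomes are illustrative, not real FACET evaluation results.
from collections import defaultdict

def recall_by_group(results):
    """results: iterable of (group_label, person_was_detected) pairs."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, detected in results:
        totals[group] += 1
        hits[group] += int(detected)
    return {group: hits[group] / totals[group] for group in totals}

# (group, whether the model detected the person) -- toy outcomes.
results = [
    ("lighter_skin", True), ("lighter_skin", True), ("lighter_skin", False),
    ("darker_skin", True), ("darker_skin", False), ("darker_skin", False),
]

per_group = recall_by_group(results)
gap = max(per_group.values()) - min(per_group.values())
for group, recall in per_group.items():
    print(f"{group}: recall = {recall:.2f}")
print(f"disparity gap: {gap:.2f}")  # larger gap = less equitable performance
```

In a real benchmark, the per-group slices would come from ground-truth annotations like those in FACET, and recall would be computed from matched detections, but the slicing logic is the same.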

It’s a valuable dataset, and one that could have a significant impact on AI development, helping to ensure better representation and consideration within such tools.

Meta also notes, however, that FACET is for research evaluation purposes only, and cannot be used for training.

“We’re releasing the dataset and a dataset explorer with the intention that FACET can become a standard fairness evaluation benchmark for computer vision models and help researchers evaluate fairness and robustness across a more inclusive set of demographic attributes.”

It could end up being an important release, broadening the usage and application of AI tools, and helping to reduce bias within existing data collections.

You can read more about Meta’s FACET dataset and approach here.
