Case Study: Fairalyze AI (GNEC Hackathon 2025)
The Challenge
Artificial intelligence models can inadvertently perpetuate and amplify societal biases, particularly around gender and other
protected characteristics. This project aimed to create an accessible tool that helps developers and researchers identify such
biases in their models, contributing to fairer AI systems aligned with UN SDGs 5 (Gender Equality) and 10 (Reduced Inequalities).
My Role & Approach
As part of a [Your Team Size, e.g., 3-person] team, I took on responsibilities for [Specify Your Role, e.g., UI/UX design, front-end
development, and integrating the bias detection logic]. We adopted a rapid prototyping approach due to the hackathon's time
constraints, focusing on core functionality and a clear user interface.
Research & Discovery
We conducted brief research into existing AI bias detection tools and the specific challenges related to SDGs 5 and 10. From
this, we identified common types of bias (e.g., gender bias in language models, demographic imbalances in training data) that
Fairalyze AI should aim to highlight.
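As a hypothetical illustration of the second category, a representation-imbalance check can be sketched in a few lines of Python. The column name, sample data, and 30% warning threshold below are assumptions chosen for the example, not Fairalyze AI's actual code.

```python
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          warn_below: float = 0.3) -> pd.Series:
    """Share of each group in the data, printing a warning on low shares."""
    shares = df[group_col].value_counts(normalize=True)
    for group, share in shares.items():
        if share < warn_below:
            # An underrepresented group in training data is one common
            # source of downstream model bias.
            print(f"warning: '{group}' makes up only {share:.0%} of the data")
    return shares

# Toy dataset: 70% "M", 25% "F", 5% "NB" (illustrative values only)
data = pd.DataFrame({"gender": ["M"] * 70 + ["F"] * 25 + ["NB"] * 5})
print(representation_report(data, "gender"))
```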
Design Process & Iterations
We sketched initial UI concepts with simplicity in mind: users would input model data/predictions and receive clear, actionable
bias reports. Iterations were quick and converged on a dashboard-like interface for presenting findings. Time constraints limited
formal user testing, but we gathered peer feedback throughout the hackathon.
Solution & Key Features
Fairalyze AI allows users to [describe core functionality, e.g., upload a dataset and model predictions, or connect to a model API].
It then presents a visual report highlighting potential biases based on [mention metrics or methods, e.g., disparate impact
analysis, representation disparities]. The UI was designed to be intuitive, making complex data understandable.
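For concreteness, here is a minimal sketch of what a disparate impact check along these lines can look like. The column names, toy data, and the 0.8 cutoff (the common "four-fifths rule") are illustrative assumptions rather than the project's actual implementation.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     privileged: str, unprivileged: str) -> float:
    """Ratio of favorable-outcome rates: unprivileged group over privileged."""
    def rate(group: str) -> float:
        return df.loc[df[group_col] == group, outcome_col].mean()
    return rate(unprivileged) / rate(privileged)

# Toy predictions: 1 = favorable outcome, 0 = unfavorable (illustrative only)
preds = pd.DataFrame({
    "gender":     ["F", "F", "F", "F", "M", "M", "M", "M"],
    "prediction": [ 1,   0,   0,   0,   1,   1,   1,   0 ],
})

di = disparate_impact(preds, "gender", "prediction", "M", "F")
print(f"disparate impact ratio: {di:.2f}")  # values below ~0.8 are a red flag
```

A ratio well below 1.0 means the unprivileged group receives favorable outcomes at a markedly lower rate; a dashboard like the one described above could flag such values visually.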
Tools & Technologies
Frontend: [e.g., HTML, CSS, JavaScript, React/Vue]
Backend/AI Logic: [e.g., Python, Flask, scikit-learn]
Design: [e.g., Figma]
Version Control: Git/GitHub
Outcome & Learnings
[Mention any hackathon results, e.g., Placed X, Received Y feedback]. Key learnings included rapid ideation, teamwork under
pressure, and the complexities of visualizing AI ethics issues. This project solidified my interest in responsible AI development.