The "coded gaze" refers to the way algorithms and technology reflect and perpetuate the biases present in their training data and the intentions of their creators, often leading to unequal representations and outcomes for marginalized groups. The speaker may have experienced algorithmic bias through instances where algorithms misidentified or failed to recognize their face or the faces of others belonging to underrepresented demographics, highlighting disparities in the technology's performance.
Facial recognition software often fails to detect all types of faces because its training datasets lack diversity: the software performs poorly on faces that differ from those most represented in the data, such as people of color, women, and anyone whose features are underrepresented in the dataset. Algorithms can also lead to discriminatory practices when they are trained on biased data that reflects existing social inequalities, and when decisions are automated without adequate human oversight, which entrenches the systemic biases already present in those decision-making processes.
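A simple way to make this kind of disparity concrete is to measure detection rates separately for each demographic group rather than reporting a single overall accuracy. The sketch below does this with hypothetical results (the group labels and outcomes are illustrative, not real benchmark data); a real audit would use a demographically annotated evaluation set.

```python
# Minimal sketch: compute face-detection rates per demographic group.
# The data below is hypothetical and only illustrates the bookkeeping.
from collections import defaultdict

# (group, detected) pairs: whether the detector found a face in each image.
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

totals = defaultdict(int)
hits = defaultdict(int)
for group, detected in results:
    totals[group] += 1
    hits[group] += detected  # True counts as 1, False as 0

for group in totals:
    rate = hits[group] / totals[group]
    print(f"{group}: detection rate {rate:.0%}")
```

An overall accuracy figure would average these groups together and hide the gap; breaking the numbers out per group is what makes the bias visible.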
To help stop algorithmic bias, we can ensure diverse representation in training datasets, require rigorous auditing and transparency for AI systems, and promote inclusive design practices that involve the communities affected by the technology. In my opinion, it is difficult to create algorithms that are completely free of bias, because they inherit the complexities and historical inequalities of human society; however, bias can be minimized through careful design, continuous evaluation, and a sustained commitment to fairness and accountability in technology development.
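As one example of what "strict auditing" might look like in practice, an evaluation pipeline could automatically flag any demographic group whose error rate falls too far behind the best-performing group before a model is released. The sketch below assumes hypothetical per-group error rates and an illustrative 5% gap threshold; both are assumptions, not an established standard.

```python
# Minimal sketch of an automated fairness gate in an audit pipeline.
# Error rates and the max_gap threshold are hypothetical values.
def audit_error_rates(error_rates: dict[str, float], max_gap: float = 0.05) -> list[str]:
    """Return groups whose error rate exceeds the best group's rate by more than max_gap."""
    best = min(error_rates.values())
    return [group for group, err in error_rates.items() if err - best > max_gap]

# Hypothetical per-group error rates from an evaluation run.
measured = {
    "lighter-skinned men": 0.01,
    "lighter-skinned women": 0.07,
    "darker-skinned men": 0.12,
    "darker-skinned women": 0.35,
}

flagged = audit_error_rates(measured)
if flagged:
    print("Fails fairness audit for:", ", ".join(flagged))
else:
    print("Passes fairness audit.")
```

A check like this does not remove bias by itself, but it turns "fairness and accountability" into a concrete, repeatable test that must pass before deployment.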