🌿 Note from Catalina:
This blog comes from a neurodivergent mind and an immigrant heart. It’s a mix of memories, plants, recipes, travels, and reflections—no straight lines, just stories from a brain that works differently.
I write to be the voice I once needed—for anyone who’s ever felt out of place, misunderstood, or too much. You’re not alone.
This piece was originally written for my AI Ethics class at Miami Dade College.
I decided to share it here because the questions it explores—about fairness, bias, and responsibility—do not belong only in classrooms or academic journals. They show up in everyday life. They show up in who gets access, who gets judged, who gets believed, and who gets left behind.
Artificial intelligence is often described as neutral or objective. But AI does not exist outside of society. It learns from our data, our systems, our histories, and our assumptions. When those systems are unequal, AI does not correct them—it amplifies them.
As a Latina immigrant, a woman returning to school in my 40s, and someone navigating technology from both lived experience and formal education, I approach AI ethics as something deeply personal. I have seen how systems can overlook people. I have also experienced what it means to finally have a voice and the tools to question how those systems are built.
This essay reflects that perspective. It combines ethical theory, academic research, and everyday examples—not to argue that AI is inherently harmful, but to ask a harder question: Can technology be fair if the society training it is not?
I’m sharing this work as part of an ongoing exploration. AI is becoming embedded in healthcare, education, labor, and decision-making systems that affect real lives. If we want ethical technology, we need to talk openly about fairness, power, and responsibility—together.
— Essay begins below —
Bias, Fairness, and Algorithmic Judgment: A Human Issue
Introduction
Artificial intelligence is often described as objective—a set of mathematical systems making neutral decisions. But AI is not born neutral. It learns from us, from our data, from our structures, from our moral blind spots and our failures. And if our society struggles to treat people fairly, then AI will inherit and automate that unfairness on a massive scale.
For me, AI ethics is not abstract. It's personal. It's rooted in the stories of real people who are judged without context, whose lives are reduced to data points. A few months ago, I read about a mother arrested after leaving her ten-year-old in charge of her toddler so she could go to work. Many rushed to judge her as irresponsible. But no one asked:
Why didn’t she have childcare? Why is survival criminalized when you’re poor? Why do we ignore the system that pushed her into that position?
This case stays with me because it reveals a truth at the center of AI ethics: humans judge without understanding. AI learns from those judgments. And that is why fairness matters. AI will mirror us. So the question becomes: What are we showing it?
As a Latina woman, an immigrant, and a student returning to education in my 40s, I have lived on both sides of vulnerability and privilege. I understand how systems overlook you—and I also understand what it means to have a voice and a chance to speak up. This dual perspective shapes my central claim: Artificial intelligence cannot be fair unless the society training it is fair. If we want ethical AI, we must first examine the human systems that teach it what “fairness” means.
Understanding Algorithmic Fairness (and Unfairness)
Algorithmic fairness is usually discussed mathematically: equal accuracy, equal error rates, equal opportunity. Scholars such as Narayanan and Kapoor (2021) explain that fairness is not a single formula; it is a value choice. But fairness is also emotional, cultural, and political.
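To make that concrete, here is a tiny sketch in Python with numbers I invented purely for illustration (the groups, the "loan approval" framing, and every value are hypothetical). It compares two common fairness definitions, equal selection rates ("demographic parity") and equal true-positive rates ("equal opportunity"), on the same made-up predictions, and shows they can point in different directions.

```python
# Illustrative only: hypothetical predictions from a made-up "loan approval" model.
# Each tuple is (group, true_label, predicted_label).
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0),
]

def selection_rate(group):
    # Share of people in the group who receive a positive prediction.
    preds = [p for g, _, p in records if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    # Share of truly qualified people in the group who are correctly approved.
    positives = [p for g, y, p in records if g == group and y == 1]
    return sum(positives) / len(positives)

for g in ("A", "B"):
    print(g, "selection rate:", selection_rate(g),
          "true positive rate:", true_positive_rate(g))

# Both groups are approved at the same rate (0.5), so "demographic parity" looks
# satisfied, yet qualified people in group B are approved only half as often as
# in group A (TPR 0.5 vs 1.0). Deciding which of these gaps counts as "unfair"
# is not a calculation; it is a value choice.
```

The point is not the numbers themselves but that choosing which metric to enforce is already an ethical decision made by people.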
If AI is built only by people who have never experienced poverty, displacement, language barriers, racial bias, or the struggle to rebuild a life, then their systems will reflect their worldview. Fairness collapses when the designers cannot imagine the lives of the people who will be judged by their systems.
This is where Rawls’ “veil of ignorance” becomes powerful. Rawls argues that just societies are built when people design rules without knowing whether they themselves will be privileged or vulnerable. AI desperately needs this principle. Systems should be built as if we do not know who we will become in their eyes—patient or doctor, immigrant or citizen, poor or wealthy, mother or child.
Bias in AI Systems: Learning the Wrong Lessons
AI bias often begins with biased data, but we must be honest about what that means. AI is learning from:
• policing shaped by racism
• healthcare shaped by unequal access
• education shaped by poverty
• job histories shaped by discrimination
• housing shaped by segregation
• social media shaped by popularity, not truth
So when AI predicts “risk,” “trustworthiness,” “eligibility,” or “creditworthiness,” it is not predicting the future—it is repeating history.
People become coded as:
• poor → irresponsible
• immigrant → suspicious
• single mother → negligent
• Black patient → “less in pain”
• disabled worker → unemployable
• working-class → individual failure instead of systemic failure
These patterns are not accidental. They are mathematical reflections of social prejudice.
Pérez et al. (2023) explain that bias mitigation can reduce accuracy—but fairness should not be sacrificed for convenience or efficiency. An AI system that is “efficient” for the privileged but harmful for the vulnerable is not ethical; it is incomplete.
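As a small illustration of why a single performance number can mislead, here is another Python sketch with invented data: the same predictions look reasonably accurate overall while failing one group badly, which is exactly the kind of incompleteness described above.

```python
# Illustrative only: made-up predictions for two groups of different sizes.
# Each tuple is (group, true_label, predicted_label).
records = (
    [("majority", 1, 1)] * 4 + [("majority", 0, 0)] * 3 + [("majority", 1, 0)]
    + [("minority", 1, 0), ("minority", 1, 1), ("minority", 0, 1), ("minority", 0, 0)]
)

def accuracy(rows):
    # Fraction of rows where the prediction matches the true label.
    return sum(y == p for _, y, p in rows) / len(rows)

overall = accuracy(records)
per_group = {g: accuracy([r for r in records if r[0] == g])
             for g in ("majority", "minority")}

print("overall accuracy:", round(overall, 2))   # 0.75, looks decent in aggregate
print("per-group accuracy:", per_group)         # majority ~0.88, minority 0.5

# Mitigating that gap (for example, by re-weighting underrepresented examples
# during training) may pull the aggregate number down slightly, which is the
# trade-off Pérez et al. describe. But the aggregate number alone was never
# telling the whole story.
```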
Real-World Impacts: Healthcare, Education, and What Society Chooses to See
Healthcare shows some of the most dangerous consequences of biased systems. Kuo, Tseng, and Kim (2023) found that many AI health tools perform worse for minority and underrepresented groups simply because our data is missing or misclassified. When AI fails to see us, the medical system fails to care for us.
But the problem extends far beyond medicine. It touches the core of how society organizes opportunity.
One of the clearest real-world examples of fairness done right is Finland’s public education model. In Finland, private schools are almost nonexistent. Children of doctors, millionaires, cleaners, immigrants, and government officials all attend the same public schools. The privileged do not escape the system—they improve it. They invest in it. They demand equity because their own children experience the same conditions as everyone else.
This is justice through shared vulnerability.
This is Rawls in action.
Whenever I mention Finland to people—especially mothers in the U.S.—the response is almost always:
“Oh, that’s why I homeschool. I don’t want my kids in public schools.”
But for me, even though I do not have children, that response reveals the problem. Homeschooling is often born from privilege—the ability to opt out. It isolates children from the complexity of society: poverty, diversity, inequality, human rights, different cultures, different realities.
A society cannot function if everyone retreats to their own bubble.
Opting out protects individuals but weakens the collective. It also mirrors AI development: the privileged distance themselves from public systems, and public systems decline. The same happens when privileged designers build AI without including vulnerable communities. The result is predictable: systems that serve the few and fail the many.
My own experience reinforces this. Yes, I am Latina, an immigrant, a woman navigating tech later in life—but I am still privileged in many ways. I have a voice. I have access to school. I can learn, speak up, and write this paper. Others do not have that chance.
Privilege is layered and complicated.
But naming it is part of being ethical.
Objection and Response
Some argue that fairness requirements slow technological innovation or reduce system performance. They claim that prioritizing equity introduces constraints that limit AI’s potential. However, this objection overlooks a fundamental ethical reality: a system cannot be considered performant if it only performs well for the privileged. As Rawls would argue, justice must be evaluated from the perspective of those most affected by harm. An AI that produces fast results while reinforcing inequality is not efficient—it is dangerous. True innovation must include fairness, or it risks amplifying the injustices society already struggles to overcome.
Ethical Reflection: What We Owe Each Other
Rawls teaches that fairness begins when we imagine ourselves in someone else’s position. Finland shows that fairness becomes real when privilege and vulnerability share the same systems.
AI, however, is often designed from the top down, insulated from the people it will impact.
Ethical AI requires more than metrics.
It requires empathy, humility, context, diversity, responsibility, courage.
I believe this strongly because I have lived on both sides—those with a voice and those without one. And I know that if AI only learns from the powerful, it will erase the experiences and struggles of everyone else.
Machines are mirrors.
If we want AI to be fair, we must show it a society that takes fairness seriously.
Conclusion
AI will reflect us.
So we must choose carefully what we teach it.
Do we want a world where struggling mothers are criminalized?
Where privilege decides who deserves opportunity?
Where fairness is optional for those who can afford to escape?
Where vulnerable people vanish inside biased data?
Or do we want a world where fairness is shared, where systems are built for everyone, where empathy and justice guide our decisions?
Artificial intelligence is a tool—but it will become a dangerous one if it only learns from our worst habits. Fairness is not an algorithmic challenge; it is a moral obligation. And if we build AI with the values we wish society had—not just the ones it currently reflects—we can create technology that lifts people up instead of pushing them down.
We owe this to each other.
And we owe it to the future we are shaping.
References
Kuo, T. T., Tseng, C., & Kim, H. E. (2023). Bias and fairness in artificial intelligence–driven healthcare: A systematic review of the literature. Journal of Medical Internet Research, 25, e43251. https://doi.org/10.2196/43251
Narayanan, A., & Kapoor, S. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Communications of the ACM, 64(6), 30–33. https://doi.org/10.1145/3464903
Pérez, D., Delgado-Santos, J., & Morell, C. (2023). Algorithmic fairness in AI: A systematic review of bias mitigation methods. Scientific Reports, 13, 15487.

Catyobi
