Summary: AI ethics, often perceived as a unified discourse, is in fact a blend of diverse perspectives on the implications of automated decision-making systems. Though primarily concerned with social values and justice, the term “AI ethics” covers the interests of many stakeholders and their differing approaches to ethical considerations in AI development and deployment. Its historical roots lie in machine and robot ethics, which addressed both philosophical questions and the operationalization of ethics within software engineering. Public awareness of AI ethics grew with the advent of technologies such as self-driving cars, which revived ethical thought experiments like the trolley problem. Google’s 2014 acquisition of DeepMind led to the formation of an internal ethics board, drawing attention to the ethical dimensions of advanced AI and surveillance capitalism. Despite the proliferation of “AI ethics principles” published by technology companies, these initiatives often lack transparency and enforceability, functioning more as ethics washing intended to sidestep regulation. In contrast, critiques grounded in technology studies and sociology have highlighted AI’s potential to reinforce societal biases, prompting calls for data and algorithmic justice. Efforts to address these issues include dedicated conferences and research on fairness, accountability, and transparency in algorithms, underscoring the ongoing challenge of mitigating bias in AI applications.