
September 7, 2023
Author: Ananya
Published in: Scientific American
Summary: Algorithms used for decision-making, from hiring to medical care, can produce different results depending on how humans annotated the training data. A study shows that when annotators labeled data descriptively, using terms relevant to a given rule, the resulting algorithms reached different conclusions than when annotators were asked directly to judge whether the data violated that rule. These discrepancies highlight the potential biases that labeling choices introduce during training and underscore the need for careful consideration of labeling practices in automated decision systems. The researchers argue that the burden of ensuring the safety and fairness of algorithms should rest with developers, and that transparency and regulation are essential to addressing these issues.
