Automated decision systems (ADS) promise to increase efficiency and efficacy across industries. ADS can provide real-time feedback and more accurate recommendations, helping to enhance the client experience or improve business results. These systems can also assist with a myriad of decisions currently handled by humans, for example grading student exams, determining eligibility for loans, or selecting jurors for court cases.
Many organizations have embraced ADS or are planning to do so. But how do they know whether the system they're considering complies with standards for responsible use of the technology?
A standard quality assurance measure is to test any new software before release. That isn't enough for ADS, however, because these systems are more complex than other software and produce a far wider range of possible outputs.
In the case of ADS, the tests must also include a review of the system's impact on people in a variety of ways, and of the capacity for human-on-the-loop oversight to ensure that the outputs are understandable to those who use them.
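To make this concrete, here is a minimal sketch of one such pre-release check: computing the disparate-impact ratio (the "four-fifths rule" used as a rule of thumb in US employment law) of a model's approval decisions across demographic groups on held-out data. The sample data, group labels, and 0.8 threshold are illustrative assumptions, not a complete audit.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions):
    """decisions: list of (group, approved) pairs.
    Returns min(group approval rate) / max(group approval rate)."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = [approvals[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

# Hypothetical held-out decisions from a loan-eligibility model.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)

ratio = disparate_impact_ratio(sample)
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("warning: approval rates differ enough to warrant review")
```

A check like this is only one piece of the review described above; it flags disparities in outputs but says nothing about whether those outputs are explainable to the people affected by them.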
While fairness-aware algorithms can help reduce bias, they cannot by themselves undo long-standing disparities, because they do not alter the underlying policies on housing, insurance, credit allocation, or employment practices. Supreme Court cases, as well as the Equality Act, raise questions about the legality of discrimination based on gender identity or sexual orientation in many contexts where ADS are used.
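To illustrate both what such algorithms do and where their reach ends, here is a minimal sketch of one common fairness-aware technique, reweighing (Kamiran & Calders, 2012), which assigns training-sample weights so that group membership and the positive label become statistically independent. The toy data is an illustrative assumption; note that the technique adjusts only the training data, not the policies that produced it.

```python
from collections import Counter

def reweighing_weights(samples):
    """samples: list of (group, label) pairs.
    Returns {(group, label): weight} so that, after weighting,
    group and label are statistically independent."""
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    label_counts = Counter(y for _, y in samples)
    joint_counts = Counter(samples)
    # w(g, y) = P(g) * P(y) / P(g, y), expressed in raw counts.
    return {
        (g, y): (group_counts[g] * label_counts[y]) / (n * joint_counts[(g, y)])
        for (g, y) in joint_counts
    }

# Hypothetical training labels: group B is underrepresented among positives.
data = ([("A", 1)] * 60 + [("A", 0)] * 40
        + [("B", 1)] * 30 + [("B", 0)] * 70)

for key, weight in sorted(reweighing_weights(data).items()):
    print(key, round(weight, 3))  # e.g. ("B", 1) gets upweighted to 1.5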