Employability of Predictive Analytics Using Artificial Intelligence (AI) Tools and Techniques for Forecasting Bail Decisions
Ishant Sangwan
Abstract
The integration of Artificial Intelligence (AI) into judicial systems has sparked widespread debate, particularly regarding its use in pretrial risk assessments and bail decisions. This paper critically examines whether predictive analytics in the courtroom serves the cause of justice or reinforces systemic bias. Drawing on prominent case studies such as the COMPAS tool and empirical data from jurisdictions including Kentucky and New Jersey, we explore how AI-based risk assessment tools compare with human judges in terms of accuracy, fairness, and societal trust.
References
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research (PMLR), *81*, 1–15. https://proceedings.mlr.press/v81/buolamwini18a.html
- Mishler, A., Kennedy, E. H., & Chouldechova, A. (2020). Fairness in risk assessment instruments: Post-processing to achieve counterfactual equalized odds. arXiv. https://doi.org/10.48550/arXiv.2009.02841
- Morgan, A., & Pass, R. (2017). Paradoxes in fair computer-aided decision making. arXiv. https://doi.org/10.48550/arXiv.1711.11066
- Nunes, A., et al. (2023). Machine learning in bail decisions and judges’ trustworthiness. AI & Society. https://doi.org/10.1007/s00146-023-01673-6