Problem
Study Aim:
This study addresses an underexplored aspect of offender risk assessment, reliability, by examining how consistently and accurately trained raters score the Level of Service/Case Management Inventory (LS/CMI).
Impact on System/Public:
Offender risk assessments guide decisions affecting liberty and public safety, such as sentencing and supervision. Inconsistent or inaccurate assessments can undermine trust in the justice system and lead to improper resource allocation or unjust outcomes.
Research Questions:
- How reliably do raters score the LS/CMI?
- How accurate are these scores compared to expert ratings?
Method and Analysis
Program Evaluated/Gaps Addressed:
The study evaluates the LS/CMI, a widely used offender risk and needs assessment tool that integrates case management strategies with assessment results. It addresses a gap in research on inter-rater reliability and rater accuracy, particularly for the tool's subjective and dynamic items.
Data and Sample Size:
- Data collected from four trained university students who rated nine audio-recorded offender interviews.
- Expert trainers established "gold standard" scores for comparison.
Analysis Used:
- Intraclass Correlation Coefficient (ICC) to measure inter-rater reliability.
- Deviation scores and percent agreement to assess rater accuracy against expert scores.
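The analyses above can be sketched in code. This is a minimal illustration, not the study's actual analysis: it assumes item or domain scores arranged as a subjects-by-raters matrix, uses the common ICC(2,1) formulation (two-way random effects, absolute agreement, single rater), and the function names and toy data are invented for the example.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n_subjects, n_raters) array of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    # Sums of squares from a two-way ANOVA without replication
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_total = ((ratings - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def accuracy_vs_expert(rater_scores, expert_scores):
    """Deviation scores (rater minus gold standard) and percent exact agreement."""
    rater = np.asarray(rater_scores)
    expert = np.asarray(expert_scores)
    deviation = rater - expert
    pct_agree = 100.0 * (deviation == 0).mean()
    return deviation, pct_agree

# Illustrative data: 3 interviews scored by 4 raters (not the study's data)
scores = np.array([[3, 3, 2, 3],
                   [5, 4, 5, 5],
                   [1, 1, 1, 2]], dtype=float)
print(icc_2_1(scores))
```

In this framing, the ICC captures how much of the score variance reflects true differences between offenders rather than rater disagreement, while deviation scores and percent agreement measure each rater directly against the expert "gold standard."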
Outcome
Key Findings:
- Reliability: Adequate to strong inter-rater reliability for most LS/CMI domains.
- Accuracy: Wide variation in accuracy across domains and items. The education/employment and family/marital domains showed better consistency, while the procriminal attitudes and companions domains showed lower reliability and accuracy.
- Challenges Identified: Certain abstract or subjective items (e.g., offender attitudes) were more prone to scoring discrepancies.
Implications or Recommendations:
- Improved training is crucial, with a focus on items prone to inconsistency.
- Annual refresher training and enhanced guidelines for subjective items could increase reliability and accuracy.
- Developing semi-structured interview protocols might help standardize the information collected and reduce variability.
- Agencies need to balance predictive accuracy with reliability in their evaluation of risk assessment tools.
This study emphasizes the importance of refining training methods and scoring guidelines for risk assessments to ensure fairness and reliability in criminal justice decisions.