Essex Police discloses ‘incoherent’ facial recognition assessment

Documents obtained by Big Brother Watch and shared with Computer Weekly reveal that Essex Police has not adequately considered the potentially discriminatory impacts of its live facial recognition (LFR) use.

Despite claiming in an equality impact assessment (EIA) to have carefully considered issues of bias and algorithmic injustice, the force may have failed to fulfill its public sector equality duty (PSED) to assess how its policies and practices could be discriminatory.

Big Brother Watch criticized Essex Police for relying on false comparisons to other algorithms and repeating misleading claims from the supplier about the lack of bias in the LFR system.

Essex Police stated that they would set the system’s match threshold at 0.6 or above to ensure false positive identifications were equitable across all demographics, citing National Physical Laboratory (NPL) testing of NEC’s Neoface V4 LFR algorithm, which is used by other police forces.
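
To illustrate what such a threshold means in practice, the sketch below shows a generic similarity-score check in which a candidate match only becomes an alert if its score meets or exceeds the threshold. It is a minimal, hypothetical example: the function, identities and scores are invented for illustration and are not Essex Police’s or any vendor’s actual implementation.

```python
# Minimal, hypothetical sketch of a match-threshold check.
# Not Essex Police's or Corsight's implementation; names and scores are invented.

MATCH_THRESHOLD = 0.6  # the setting cited in the EIA

def alerts_for_probe(similarity_scores: dict[str, float],
                     threshold: float = MATCH_THRESHOLD) -> list[str]:
    """Return watchlist identities whose similarity to the probe image
    meets or exceeds the threshold; everything below it is discarded."""
    return [identity for identity, score in similarity_scores.items()
            if score >= threshold]

# Scores a face-matching model might produce for one camera frame (invented)
scores = {"watchlist_A": 0.72, "watchlist_B": 0.58, "watchlist_C": 0.61}
print(alerts_for_probe(scores))  # ['watchlist_A', 'watchlist_C']: 0.58 falls below 0.6
```

Raising the threshold generally reduces false positives at the cost of missing more genuine matches, and how that trade-off plays out across demographic groups is specific to each algorithm, which is why findings about one supplier’s software do not automatically transfer to another’s.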

However, Essex Police opted to use an algorithm developed by Corsight rather than NEC, meaning the NPL’s findings on the Neoface V4 algorithm cannot support the force’s claims about the bias and fairness of the system it actually deploys.

The EIA also highlighted testing of the Corsight_003 algorithm by the US National Institute of Standards and Technology (NIST), claiming a low differential in false match rates (FMR) between demographic groups; this claim is not supported by the publicly available data on the NIST website.
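
For context, a false match rate is the proportion of comparisons between images of different people that the system nonetheless scores at or above the match threshold, and the “differential” is how much that rate varies between demographic groups. The sketch below shows how such figures are computed in principle; all of the numbers are invented and do not represent NIST’s data or Corsight’s performance.

```python
# Hypothetical sketch of computing a false match rate (FMR) per demographic
# group and the differential between groups. All numbers are invented.

def false_match_rate(impostor_scores: list[float], threshold: float) -> float:
    """FMR = share of impostor (different-person) comparisons that wrongly
    score at or above the match threshold."""
    false_matches = sum(1 for score in impostor_scores if score >= threshold)
    return false_matches / len(impostor_scores)

# Invented impostor comparison scores for two demographic groups
group_scores = {
    "group_1": [0.20, 0.30, 0.65, 0.10, 0.40],
    "group_2": [0.70, 0.30, 0.62, 0.50, 0.20],
}
fmrs = {group: false_match_rate(scores, 0.6) for group, scores in group_scores.items()}
print(fmrs)                                     # {'group_1': 0.2, 'group_2': 0.4}
print(max(fmrs.values()) / min(fmrs.values()))  # 2.0: group_2 is falsely matched twice as often
```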

Furthermore, Essex Police had not conducted detailed testing of the system itself as of January 16, 2025, raising questions about the validity of their claims regarding algorithmic bias.

Essex Police’s lax approach to assessing the dangers of a controversial and dangerous new form of surveillance has put the rights of thousands at risk

Jake Hurfurt, Big Brother Watch

Jake Hurfurt from Big Brother Watch expressed concerns about Essex Police’s compliance with equality law and the potential risks posed by their facial recognition system. He called for the immediate cessation of facial recognition surveillance by Essex Police.

The legal challenge brought against South Wales Police by Cardiff resident Ed Bridges highlighted the importance of considering the discriminatory impacts of facial recognition technology. The UK Court of Appeal ruled the force’s use of LFR unlawful, citing privacy violations, an inadequate data protection impact assessment and a failure to meet the public sector equality duty by not checking whether the technology was biased.

Academic Karen Yeung criticized Essex Police’s EIA as inadequate and incoherent, lacking a comprehensive analysis of the technology’s systemic equalities impacts. She emphasized the importance of rigorous testing and evaluation to ensure compliance with equality laws.

Essex Police responds

Essex Police emphasized their commitment to meeting their public sector equality duty and conducting thorough testing and evaluation of the LFR system. They highlighted successful deployments and arrests resulting from the use of LFR technology.

The force stated that they are working with academic partners for independent reviews of the software and algorithms, including upcoming testing by the National Physical Laboratory (NPL). They also mentioned achieving ISO 42001 certification for their processes.

However, Essex Police did not address specific questions about their reliance on testing of a different algorithm, raising further concerns about the accuracy and impartiality of their assessments.

Computer Weekly contacted Essex Police for clarification on the testing with Cambridge, but had not received a response by the time of publication.

‘Misleading’ testing claims

Despite claims by Essex Police and Corsight that the facial recognition algorithm shows little bias, there is little publicly available data to support these assertions, and analysis of the NIST testing data raises questions about the algorithm’s performance across different demographic groups.

Concerns were also raised about the discrepancy between NIST’s one-to-one verification testing and the one-to-many searches that police forces run against watchlists in live deployments, a scenario in which even a small per-comparison error rate can generate frequent false alerts. These limitations of facial recognition technology in real-world use underscore the need for rigorous operational testing and evaluation.
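
The reasoning behind that concern can be illustrated with a simple calculation: if each individual comparison carries a small chance of a false match, the chance that an innocent passer-by triggers at least one false alert grows with the size of the watchlist. The sketch below makes the simplifying assumption that comparisons are independent, and every figure in it is hypothetical rather than drawn from any vendor’s test results.

```python
# Hypothetical sketch: why one-to-one test figures do not carry over directly
# to one-to-many searching. Assumes (simplistically) independent comparisons.

def false_alert_probability(one_to_one_fmr: float, watchlist_size: int) -> float:
    """Probability of at least one false match when a probe face is compared
    against every entry on a watchlist of the given size."""
    return 1 - (1 - one_to_one_fmr) ** watchlist_size

fmr = 0.0001  # an invented 1-in-10,000 false match rate from one-to-one testing
for size in (100, 1_000, 10_000):
    print(size, round(false_alert_probability(fmr, size), 4))
# 100 -> 0.01, 1000 -> 0.0952, 10000 -> 0.6321 (approximately)
```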

Corsight defended its algorithm’s performance, citing the algorithm’s results in NIST evaluations and assessments by other bodies, but did not provide specific details to address the concerns raised about the testing methodology and algorithmic bias.

Computer Weekly’s inquiries about further testing and threshold settings were met with no response from Corsight, indicating potential gaps in transparency and accountability in their testing processes.

The lack of continuous improvement and transparency in Corsight’s testing practices raised questions about the reliability and accuracy of their facial recognition algorithm.

Homeland Security testing

The EIA mentioned testing of the Corsight algorithm by the US Department of Homeland Security (DHS) in 2022, highlighting its performance across demographic groups. However, concerns were raised that this testing focused on true positives (correctly matching people who are on a list) rather than false positives (wrongly flagging people who are not), which is the principal risk for members of the public passing an LFR camera.
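
The distinction matters because street deployments scan far more people who are not on a watchlist than people who are, so a system can report an impressive true positive rate while still producing more false alerts than genuine ones. The sketch below illustrates this with entirely hypothetical figures; it is not based on DHS, Corsight or Essex Police data.

```python
# Hypothetical illustration: a strong true positive rate can coexist with
# false alerts outnumbering genuine ones. All figures below are invented.

people_scanned = 50_000        # faces seen during a deployment
on_watchlist = 20              # of whom this many are genuinely wanted
true_positive_rate = 0.95      # the headline figure testing often highlights
false_positive_rate = 0.001    # rate at which non-watchlist faces are flagged

correct_alerts = on_watchlist * true_positive_rate
false_alerts = (people_scanned - on_watchlist) * false_positive_rate
print(correct_alerts, false_alerts)  # 19.0 correct alerts vs roughly 50 false alerts
```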

The DHS disputed Corsight’s claims of being ranked #1 in various categories, indicating discrepancies in the interpretation of test results. The need for accurate and unbiased testing of facial recognition algorithms was underscored by the conflicting claims made by Corsight.

Essex Police did not directly address the issues raised about Corsight’s testing practices, raising questions about the thoroughness and reliability of their assessments.

Key equality impacts not considered

The EIA and ethics panel meeting minutes revealed gaps in the consideration of key equality impacts related to the deployment of facial recognition technology. Concerns were raised about the lack of specific criteria for watchlist creation and the potential disproportionate impact on certain demographic groups.

Essex Police’s focus on specific crimes and policing purposes raised questions about the necessity and proportionality of their deployments. The need for clear guidelines and safeguards to prevent arbitrary decision-making and abuses of power was emphasized.

Academic experts criticized the lack of transparency and specificity in Essex Police’s approach to watchlist creation and deployment decisions. The failure to address the legal and ethical implications of facial recognition technology raised concerns about compliance with data protection and equality laws.

Essex Police defended their watchlist creation process and highlighted the successful outcomes of their deployments. However, questions remain about the potential biases and discriminatory impacts of their facial recognition practices.

Proportionality and necessity: the Southend ‘intelligence’ case

Essex Police’s reliance on an “intelligence case” to justify watchlists and deployments raised concerns about the proportionality and necessity of their actions. The force’s stated focus on catching serious offenders, and its justification of Southend deployments by reference to seasonal factors, highlighted the difficulty of showing that each use of the technology is both necessary and proportionate.

Experts criticized Essex Police for failing to provide clear guidelines and safeguards for watchlist creation and deployment decisions. The lack of transparency and accountability in their decision-making processes raised concerns about potential privacy violations and discriminatory practices.

The need for rigorous testing, evaluation, and adherence to legal requirements in facial recognition deployments was underscored by the criticisms raised against Essex Police’s approach. The call for greater transparency and accountability in the use of facial recognition technology emphasized the importance of upholding privacy rights and equality laws.
