Civil society fought a strong campaign against allowing public and private entities to use “real time” remote biometric identification (RBI) systems in publicly accessible spaces, arguing that the use of such systems in such places effectively puts everyone in a permanent suspects’ line-up, threatening the exercise of fundamental rights such as the right to demonstrate and to express one’s opinion in public, i.e., that it amounts to mass surveillance. In the end, however, the AI Act contains only an in-principle prohibition on the use of such systems in publicly accessible places for law enforcement purposes, subject to complex conditions and exceptions.
In addition to the need for tighter definitions, my legal analysis (accessible below) identifies at least five areas in which the final text of the EU AI Act on this issue could be improved from a fundamental rights perspective:
- In Article 5(4), it would make more sense to require all Member States to adopt national legal rules laying down the relevant conditions in proper detail, with reference to their own particular legal systems.
- A reference in Article 5(2) to not-yet-registered systems may relate to RBI systems that are being developed and tested “in real world conditions” in “regulatory sandboxes”, prior to deployment (cf. the definition of “AI regulatory sandbox” in Article 3(44bg) and Title V). If I understand this correctly, such “sandboxed” systems need not yet have undergone a fundamental rights impact assessment and need not yet be registered in the relevant database – but may, where “appropriate”, be operated in “real world conditions”, which presumably means applied to real-world data in “real time”. If so, this could create a loophole: it would allow the authorisation, “in duly justified cases of urgency”, of the “real time” use in the real world (outside the sandboxes) of RBI systems that have not yet been assessed for their fundamental rights impact or registered.
- In Article 5(3), the caveat appears to say that the decision of the court or administrative authority to authorise “real time” RBI in a publicly accessible place for law enforcement purposes may not itself be based on “the output of the RBI system”. But that makes no sense: prior to any authorisation, “the RBI system” in question may not be used – so how can there already be any “outputs”? Unless, of course, this refers to the use of such a system without (i.e., prior to) authorisation, “in duly justified cases of urgency”. But that suggests that this supposedly highly exceptional unauthorised use of the system may in practice be not quite so exceptional.
- Notification may only happen after the specific use of the relevant “real time” RBI has finished. In other words, if this reading is correct, the national supervisory authorities can only do a retrospective check on the deployment of “real time” RBI by law enforcement agencies – but even then without being given access to “sensitive operational data”. That does not appear to be a very effective form of oversight. Moreover, it would appear that these reports will not be published: in contrast to the report mentioned in the next bullet-point, Article 5(5) does not refer to publication.
- The Article 5(6) system of reporting only aggregate data, without even the underlying national statistics, will not ensure transparency over the real use of “real time” RBI in publicly accessible places by law enforcement agencies after the AI Act comes into application. Given the singular failure of the Member States and the Commission to provide proper, peer-reviewable information on other intrusive systems – such as mandatory e-communications data retention and the collection, analysis and use of PNR data – and on the “results” of those other intrusive activities (and their refusal even to develop proper methodologies for the collection and review of such data, or to define how the “results” of such activities are to be assessed),3 it is to be expected that reporting on the use of highly intrusive “real time” RBI in publicly accessible places by law enforcement will likewise be useless and amount to a misrepresentation of the real effects of those systems. This is a major defect of the rules as currently drafted.