Sweden’s data protection authority, the IMY, has fined the local police authority €250,000 ($300k+) for unlawful use of the controversial facial recognition software, Clearview AI, in breach of the country’s Criminal Data Act.
As part of the enforcement, the police must conduct further training and education of staff to avoid any future processing of personal data in breach of data protection rules and regulations.
The authority has also been ordered to inform people whose personal data was sent to Clearview — when confidentiality rules allow it to do so, per the IMY.
Its investigation found that the police had used the facial recognition tool on a number of occasions and that several employees had used it without prior authorization.
Earlier this month Canadian privacy authorities found Clearview had breached local laws when it collected photos of people to plug into its facial recognition database without their knowledge or permission.
“IMY concludes that the Police has not fulfilled its obligations as a data controller on a number of accounts with regards to the use of Clearview AI. The Police has failed to implement sufficient organisational measures to ensure and be able to demonstrate that the processing of personal data in this case has been carried out in compliance with the Criminal Data Act. When using Clearview AI the Police has unlawfully processed biometric data for facial recognition as well as having failed to conduct a data protection impact assessment which this case of processing would require,” the Swedish data protection authority writes in a press release.
The IMY’s full decision can be found here (in Swedish).
“There are clearly defined rules and regulations on how the Police Authority may process personal data, especially for law enforcement purposes. It is the responsibility of the Police to ensure that employees are aware of those rules,” added Elena Mazzotti Pallard, legal advisor at IMY, in a statement.
The fine (SEK2.5M in local currency) was decided on the basis of an overall assessment, per the IMY, though it falls quite a way short of the maximum possible under Swedish law for the violations in question, which the watchdog notes would be SEK10M. (The authority's decision notes that not knowing the rules or having inadequate procedures in place is not a reason to reduce a penalty fee, so it's not entirely clear why the police avoided a bigger fine.)
The data authority said it was not possible to determine what had happened to the data of the people whose photos the police authority had sent to Clearview — such as whether the company still stored the information. So it has also ordered the police to take steps to ensure Clearview deletes the data.
The IMY said it investigated the police’s use of the controversial technology following reports in local media.
Just over a year ago, US-based Clearview AI was revealed by the New York Times to have amassed a database of billions of photos of people’s faces — including by scraping public social media postings and harvesting people’s sensitive biometric data without individuals’ knowledge or consent.
European Union data protection law puts a high bar on the processing of special category data, such as biometrics.
Ad hoc use by police of a commercial facial recognition database — with seemingly zero attention paid to local data protection law — evidently does not meet that bar.
Last month it emerged that the Hamburg data protection authority had instigated proceedings against Clearview following a complaint by a German resident over consentless processing of his biometric data.
The Hamburg authority cited Article 9 (1) of the GDPR, which prohibits the processing of biometric data for the purpose of uniquely identifying a natural person, unless the individual has given explicit consent (or for a number of other narrow exceptions which it said had not been met) — thereby finding Clearview’s processing unlawful.
However the German authority only made a narrow order for the deletion of the individual complainant’s mathematical hash values (which represent the biometric profile).
It did not order deletion of the photos themselves. Nor did it issue a pan-EU order banning the collection of any European resident's photos, as it could have done and as the European privacy campaign group noyb had been pushing for.
noyb is encouraging all EU residents to use forms on Clearview AI’s website to ask the company for a copy of their data and to delete any data it has on them, as well as to object to being included in its database. It also recommends that individuals who find Clearview holds their data submit a complaint against the company with their local DPA.
European Union lawmakers are in the process of drawing up a risk-based framework to regulate applications of artificial intelligence, with draft legislation expected to be put forward this year. The Commission intends the framework to work in concert with data protections already baked into the EU’s General Data Protection Regulation (GDPR).
Earlier this month Canadian privacy authorities found the controversial facial recognition company’s practices unlawful, warning they would “pursue other actions” if the company does not follow recommendations that include stopping the collection of Canadians’ data and deleting all previously collected images.
Clearview said it had stopped providing its tech to Canadian customers last summer.
It is also facing a class action lawsuit in the U.S. citing Illinois’ biometric protection laws.
Last summer the UK and Australian data protection watchdogs announced a joint investigation into Clearview’s personal data handling practices. That probe is ongoing.