Facial recognition: ban or regulation? - 02.03.2020
Facial recognition technology is now part of our daily life: companies have deployed it to change the way we use our phones and our travel and shopping experiences, allegedly increasing our security. It has also been promoted as a fast, non-invasive security method for identifying wrongdoers and monitoring public spaces. However, it is this remote use by public authorities, combined with artificial intelligence, that has become increasingly controversial, drawing criticism around the world, from protests in Hong Kong to bans in several U.S. cities adopted over fears that it paves the way for privacy violations and mass surveillance. The unauthorized use of photographs by Clearview AI to build a giant database also sparked the debate and brought to light the dangers of the online availability of photographs of our faces. Against this background, in its White Paper on AI, the European Commission initially considered a general ban, of up to five years, on the remote use of the technology. The final document, however, dropped this temporary ban, bringing the dangers of facial recognition technology, especially when combined with AI, to the forefront of the discussion.
Though facial recognition is used extensively to strengthen security in multiple areas, the technology also has the potential to be very dangerous. In practice, it can be hacked, its databases can be breached or sold, and sometimes it is simply not effective. It is important to understand how the collection, storage and processing of sensitive biometric personal data interacts with the rigorous requirements of the GDPR and other data protection laws around the world. The remote use of facial recognition technology frequently breaches the GDPR requirement for unambiguous, freely given and fully informed consent, since people are often not even aware that they are being tracked or that their faces are being used for purposes other than those initially authorized or indicated. Using the technology with AI to identify potential criminals in public places can also lead to bias or discrimination and, as such, carries specific risks for fundamental rights. Although EU data protection rules generally prohibit the processing of biometric data for the purpose of uniquely identifying a natural person, there are exceptions that may allow a duly justified, proportionate use for reasons of substantial public interest, based on EU or national law and subject to adequate safeguards. Hence, although facial recognition is currently permitted only by exception, the Commission's AI White Paper, which is expected to lead to a legislative proposal late this year, launched a broad debate to explore with member states, businesses and other organizations whether new exceptions should be added. The Commission will take some time before deciding how to legislate on remote facial recognition, but will not prevent national initiatives from using the technology under existing rules. The Commission intends to adopt a risk-based approach, introducing new rules in some areas.
The debate is whether we should restrict facial recognition to already identified viable use cases, or whether the technology has other potentially beneficial uses that should be allowed.
TikTok: is it safe for children? - 13.02.2020
TikTok is a social network owned by a Chinese company founded in 2012. It is used to create short lip-sync, talent and other types of videos. Approximately half of its users are between the ages of 16 and 24. It was the seventh most downloaded app of the decade and is now increasingly used by brands for online advertising. However, several entities have raised concerns about the insufficiency of the security features implemented to address detected security flaws, as well as about the dangers to children of inappropriate or abusive content and of contact from potentially malicious users.
TikTok has been criticized for cybersecurity and privacy shortcomings, and investigations have emerged in both the US and the EU. The US FTC opened an investigation into security flaws discovered in the app and related concerns about collaboration with Chinese intelligence agencies, given the possibility that US Army members using the app could reveal sensitive data such as the location of military units. However, no evidence of exploitation of the vulnerabilities was found, and they were rapidly fixed. In the EU, both the UK and Italian Data Protection Authorities focused on the processing of minors' personal data. The Italian DPA even suggested coordinated action within the European Data Protection Board and the creation of a task force aimed at preventing risks to children's data. Parents have also expressed concern about the risks of using the network, such as inappropriate content and potential contact from strangers with malicious intentions. The way accounts are set up (public by default) and the type of data that can be collected (e.g. location data and other sensitive personal information) increase the risk of ill-intentioned persons directly contacting children on the app, leading to risky situations. The widespread use of the app by minors reignites the discussion on adequate mechanisms to verify a user's age and obtain parental consent. TikTok recently introduced new features, including a Family Safety Mode to help parents manage screen time, turn on a restricted mode and set restrictions on comments. The discussions around this topic may have a significant impact on all similar apps.
About #AbreuForward - Global Legal Trends
#AbreuForward - Global Legal Trends is a set of publications on hot topics in the globalized economy and the digital business world. Follow this news weekly to learn about the legal implications of these issues, explored with our expertise and insights.