AI-Based COVID-19 Apps and Data Privacy

 

by Zaheema Iqbal*    18 September 2020

The unprecedented threat from COVID-19 has pushed tech giants and public health authorities to leverage artificial intelligence and develop apps aimed at curbing the virus across the globe. These AI-based apps have helped economies trace, test, trial, and treat COVID-19 patients. Automated technology has become a centerpiece of infection control and contact tracing, and data-driven decision making and affiliated technologies are being used in a variety of ways to combat the virus.

More than a hundred apps have been introduced by various countries to curb COVID-19. Since the outbreak of the virus, Chinese authorities have deployed smart helmets, disinfecting robots, advanced facial-recognition software, and thermal-camera-equipped drones in the fight against COVID-19. They also introduced a mobile app, "Close Contact Detector," which allows individuals to check whether they have been in close contact with an infected person. Singapore's authorities developed the app "TraceTogether" to trace and control the virus, though it later raised serious privacy concerns. South Korea, meanwhile, has analyzed millions of data points from credit card transactions, cellphone geolocation data, and CCTV footage to help its citizens with contact tracing and infection control. Another example is the app developed by King's College London and Guy's & St Thomas' hospitals, which attracted more than a million downloads after being circulated on social media. The tech giants Google and Apple also joined forces to develop contact-tracing technology for Android and iOS. Similarly, in Pakistan, Aga Khan University developed the Corona-Check app, which enables citizens to evaluate COVID-19 symptoms with a home-based screening tool and guides patients on what to do next.

Despite these benefits, AI poses new risks and challenges to data privacy and security, public trust, and ethical use. The question today is not only how these AI-based apps free people from the virus, but how much data exposure is involved and how that data will be used once the pandemic is over. Tracking and public surveillance tools, such as location data stored on or generated by smartphones, facial recognition, computer-vision surveillance technologies, and the scanning of public spaces for fever detection, raise privacy-compliance issues. Many countries, including South Korea, China, Iran, Israel, Poland, Singapore, Italy, and Taiwan, are using cellphone data in applications tasked with combating the virus. Supercharged with artificial intelligence and machine learning, the acquired data is used not only for social control and monitoring but also to forecast travel patterns, identify future outbreak hot spots, project immunity, and model chains of infection. These apps can serve as vehicles for abuse and disinformation while providing a false sense of security that encourages people to resume normal life well before it is safe to do so.

It is not a fallacy to state that the implications for data privacy reach far beyond the containment of COVID-19, even though these tools were introduced as short-term fixes to an immediate threat. Widespread data-sharing, monitoring, and surveillance could become a fixture of modern public life if they are not reined in at the right time. Under the guise of shielding people from a health emergency, governments have introduced immature technologies, and in some cases obliged citizens by law to use them, setting a dangerous precedent. Sorting the wheat from the chaff is an arduous task, and governments are not fully equipped to accomplish it at this time.

It is obvious that data revealing the health and geolocation of citizens is as personal as it gets. The potential benefits weigh heavy, but so do concerns about the use and abuse of these applications. There are safeguards for data protection, perhaps the most important being the European GDPR, but governments retain the right to use such data during times of national emergency. The data-protection frameworks for the lawful and ethical use of AI, however, are less developed. If an application is developed to tackle a public health crisis, it should end up as a public application, with the data, algorithms, and all inputs and outputs held for the public good by public health agencies. The harm of technological intervention, which is increasingly inevitable in the wake of the pandemic, should thereby be reduced. There should be clear rules to head off threats to personal data privacy, backed by appropriate safeguards. Governments should recognize the privacy and data-protection implications of these apps and implement them with complete transparency, in consultation with all stakeholders, and with robust privacy protections to gain the full trust of their citizens.

 

*The author is a cybersecurity policy researcher at the National Institute of Maritime Affairs, Bahria University Islamabad. Her interests include cyber governance, data privacy, and emerging technologies. She can be reached at zaheemaeckbaull@gmail.com.
