Some commentators have raised privacy and other concerns about the Government collating a massive datastore to deal with COVID-19. There is no reason why a centralised datastore cannot be part of the solution, presumably feeding algorithms that provide predictive analytics: mapping future surges in hospital use, forecasting PPE shortages, and even identifying clusters of cases and their likely severity based on demographics.

It is true that AI is a novel technology, but there is no need for unfounded fears. It is possible to "do AI" in a privacy-protective way, and there is a plethora of recent (non-binding) guidance to assist. The EU Guidelines for Trustworthy AI, published in April 2019, contain a useful "assessment checklist" in Chapter III. A company proposing to develop such a technology (and the Government considering its deployment) would be wise to consider, for example: technical robustness and safety (resilience against cyberattacks, accuracy and reproducibility of results); privacy and data governance (measures to ensure the quality and integrity of data); transparency (explainability and clear public communication with data subjects); non-discrimination (avoidance of unfair bias); and accountability (documenting the trade-off decisions made and carrying out an impact assessment of the system).

In our experience, the best first step is to conduct an "enhanced" Data Protection Impact Assessment (DPIA) to flush out the key concerns early.