By Lauren Fisher
Published December 2020

Artificial intelligence (AI) technologies use algorithms to carry out functions that previously required human thinking. Machine learning, an increasingly popular application of AI, involves machines identifying patterns in data and making decisions accordingly.

The increasing use of AI in decision-making has been accelerated by the Covid-19 pandemic. For example, South Korea has used AI to categorise Covid-19 patients and recommend treatment accordingly. Embracing AI not only increases the efficiency of processes but also allows services to operate with reduced face-to-face contact. As a consequence, AI is likely to become increasingly important, both while the pandemic persists and beyond.

Currently, no clear legal regime exists for determining who is accountable for decisions made by AI. This uncertainty is problematic, as it may deter organisations from developing AI technologies and deter businesses from adopting them.

Implications for data protection

A particular instance where this uncertainty is problematic is in the context of data protection. For the purposes of the General Data Protection Regulation (GDPR), it is essential to establish who is accountable for decisions made that involve personal data. This is because the decision-maker is regarded as the 'controller' of the personal data, and is subject to the highest level of compliance responsibility.

Where AI uses personal data to make decisions, it is arguably unclear who (or what) may be regarded as the 'controller' under the GDPR. The European Commission has suggested that accountability for decision-making may be placed on the creator of the machine, on the operator (that is, the user of the machine), or on the machine itself. In the context of the GDPR, it may be unhelpful to hold the machine itself accountable as the 'controller', as this would leave data subjects with no means of obtaining compensation where the controller's obligations are breached.

Arguably, the creator of the technology should be regarded as the 'controller', as it ultimately determines how the machine will process and learn from personal data. However, this approach is not without difficulty: the high level of compliance responsibility imposed on controllers may deter companies from developing AI in the first place. This may hinder innovation, to society's and the economy's detriment.

The machine operator also has the ability to influence the machine's decisions, because those decisions are shaped by the data that the operator feeds into it. On this basis, the operator could be regarded as the 'controller' of personal data under the GDPR. This too may be problematic, however, because decisions are in reality still partially determined by the machine itself. It may therefore be inaccurate to regard the operator as a 'controller' within the GDPR definition.

Overall, it is clear that determining who may be held accountable for decisions made by AI is not straightforward. In the absence of a clear legal answer, allocating that accountability will be a commercial matter for the parties to a contract involving AI technologies to agree between themselves.

However, the issue of who may be regarded as the 'controller' under data protection legislation remains unresolved. Parties may agree who will be responsible for any regulatory fines in this context, but there is still a lacuna in the law which may need to be filled as AI plays an increasingly important role in society. Resolving this legal uncertainty may be crucial in ensuring that innovation of beneficial technology is encouraged, rather than stifled.
