ChatGPT - the lawyer of the future?

Felix Langrock

The Artificial Intelligence (AI) software ChatGPT gained over 100 million active users within the first two months of its release.[1] No other software has built up such a broad user base in such a short time. Moreover, ChatGPT can also solve complicated legal problems - at least, that is what a law exam at the University of Minnesota suggests.[2] According to Professor Jonathan Choi, the AI passed the exam in constitutional law, tort law, and tax law. The multiple-choice questions on the test were the same ones that human students had to solve. But what does this mean? Will this technology be able to take over central tasks of legal work?

Artificial Intelligence makes decisions but does not justify them.[3] For all its ability to solve multiple-choice questions, the Minnesota law exam showed that ChatGPT struggled with open-ended tasks. The AI is, at its core, incapable of genuine logical reasoning: so far, the software makes decisions based on patterns it systematically finds in huge amounts of data. But what happens when it finds spurious correlations and bases decisions on them? If, for example, a pattern emerges according to which an above-average proportion of people with red socks do not repay their loans on time, the conclusion could be to deny loans to people with red socks.[4] The software will draw this conclusion even if the correlation is completely unfounded. AI does not give reasons for its decisions. Yet the whole concept of legal reviewability of decisions - from objections against official decisions to appeals in court - starts with reasons. AI can therefore be helpful in analysing data sets, but human interpretation of the results is necessary to develop reasoned legal solutions.
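
The red-socks example can be made concrete with a short sketch. The following Python snippet is purely illustrative: the data, the 'red_socks' feature, and all probabilities are invented here, and this is not any real credit-scoring model. It shows how a pattern-based model will assign genuine predictive weight to a meaningless trait whenever that trait happens to correlate with defaults in its training data:

```python
# Hypothetical illustration of the "red socks" correlation described above.
# All names and numbers are invented; this is a sketch, not a real credit model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

income = rng.normal(0, 1, n)       # standardised income score (drives real risk)
red_socks = rng.integers(0, 2, n)  # irrelevant trait: does the applicant wear red socks?

# Construct a sample in which red-sock wearers happen to default more often,
# standing in for an accidental, unfounded correlation in historical data.
p_default = np.clip(0.25 - 0.10 * income + 0.20 * red_socks, 0.0, 1.0)
default = (rng.random(n) < p_default).astype(int)

model = LogisticRegression().fit(np.column_stack([income, red_socks]), default)

# The model gives the meaningless sock feature real predictive weight -
# and offers only coefficients, never reasons.
print("weight for income:   ", round(model.coef_[0][0], 2))
print("weight for red socks:", round(model.coef_[0][1], 2))
```

In this toy sample the sock feature receives a clearly positive weight even though the correlation was planted by construction; nothing in the fitted model distinguishes it from a genuine causal factor, and the model offers no reasons that a court or supervisory authority could review.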

Nevertheless, the question remains whether AI will at some point make human decision-making superfluous. Humans fundamentally think in terms of causality and try to establish connections between different concepts. Intuition, experience, morality, and law play a central role in this way of thinking. So far, this kind of thinking cannot be transferred to software: AI does not take factors such as intuition or morality into account when making decisions. Its decisions are based solely on data and examples. This divergence can lead to misunderstandings. This is exactly what Professor Kenneth D. Forbus is trying to change with his research.[5] The American computer scientist at Northwestern University in Illinois is attempting to transfer human analogical reasoning to Artificial Intelligence. If this approach succeeds, the software could in future make decisions guided by intuition or morality in a similar way to humans.

Even if research has not yet reached this stage, the question arises as to how these technical innovations should be legally regulated. A key issue is the use of personal data by AI. Here, for example, the EU's General Data Protection Regulation (GDPR) contains provisions that establish legal safeguards for data subjects.[6] These include, among others, a right to transparency, a right to explanation, and a right to contest fully automated decisions based on personal data. This shows that legislators are able to set legal limits on digital innovation rather than relying on voluntary commitments and self-regulation by companies.

Artificial Intelligence will permanently change the relationship between humans and software in legal work. AI can identify relevant correlations in straightforward factual matters from large amounts of data, and these findings can serve lawyers as a basis for decision-making. Many simple, repetitive questions can be handled well this way. In complex cases, however, small subtleties are usually decisive, and so far only humans have been able to use these subtle differences to argue conclusively in court. Nevertheless, it should not be forgotten that the technology is still at the beginning of its development. Consequently, the full extent of AI's impact on the handling of legal issues cannot yet be estimated.

[1] David Curry, 'ChatGPT Revenue and Usage Statistics (2023)' (Business of Apps, 20 February 2023) <https://www.businessofapps.com/data/chatgpt-statistics/> accessed 1 March 2023

[2] 'KI-Software: ChatGPT besteht Jura-Prüfung in Minnesota' ['AI software: ChatGPT passes law exam in Minnesota'] (ZDFheute, 15 January 2023) <https://www.zdf.de/nachrichten/panorama/chatgpt-jura-pruefung-minnesota-100.html> accessed 1 March 2023

[3] Wolfgang Janisch, 'Die Entmachtung des Menschen durch die Maschine?' ['The disempowerment of humans by the machine?'] (Süddeutsche Zeitung, 10 November 2022) <https://www.sueddeutsche.de/wissen/recht-kuenstliche-intelligenz-haftung-selbstlernende-systeme-1.5693405> accessed 1 March 2023

[4] Janisch (n 3)

[5] Universität Bielefeld, 'Maschinen beibringen, wie Menschen zu denken' ['Teaching machines to think like humans'] (Universität Bielefeld, 16 January 2023) <https://aktuell.uni-bielefeld.de/2023/01/16/maschinen-beibringen-wie-menschen-zu-denken/> accessed 1 March 2023

[6] Quest contributor, 'How has the law been pushed aside in the age of AI?' (University of Birmingham) <https://www.birmingham.ac.uk/research/quest/emerging-frontiers/ai-and-the-law.aspx> accessed 1 March 2023
