Algorithmic decision-making: Sources of error in algorithmic decision systems and starting points for future regulation
The regulation of algorithmic decision-making systems (ADM systems) is a subject of controversy in politics, academia and society. These are systems that use an algorithm to assess a person or a situation, or to predict the likelihood of an event occurring, and then take a decision on the basis of that assessment or forecast. Algorithmic decision-making systems are characterised by the fact that they are used not only to support human decision-making, but also to make autonomous decisions automatically. One example is a system that assesses creditworthiness on the basis of age, gender and annual income and then decides whether to grant a loan (credit scoring). ADM systems are often also ‘learning’ systems from the field of artificial intelligence: such a system is able to find patterns and regularities in the data provided to it, which it then uses for decision-making.
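To make the credit example concrete, the following is a minimal sketch of such a decision rule. The features, thresholds and scoring logic are purely illustrative assumptions, not an actual credit-scoring model:

```python
# Toy sketch of a hypothetical ADM credit decision.
# Feature names and thresholds are illustrative assumptions only.

def credit_decision(age: int, annual_income: float) -> bool:
    """Return True if the loan is granted under this toy rule."""
    score = 0
    if age >= 25:          # arbitrary illustrative threshold
        score += 1
    if annual_income >= 30_000:
        score += 2
    return score >= 2      # cut-off chosen arbitrarily for illustration

print(credit_decision(30, 40_000))  # -> True: loan granted under the toy rule
print(credit_decision(22, 20_000))  # -> False: loan refused
```

In a learning system, the thresholds would not be hand-written as above but derived from historical data, which is precisely where the risks discussed below arise.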
Opportunities for companies
The increasing availability of large amounts of data as a result of advancing digitalisation, together with technical progress in computing capacity, makes the use of ADM systems increasingly attractive for companies. The considerable potential of algorithmic decision-making systems lies in the gains in cost and time efficiency that automating decision-making processes can achieve. This applies in particular to decisions made on the basis of large amounts of data and rational criteria.
Significant sources of error and risks
The benefits of using ADM systems outlined above are, of course, offset by certain risks, above all the risk of an incorrect decision by the system.
One of the main sources of error is the selection of the training data from which the algorithm derives statistical correlations. The quality of the decisions depends largely on the quality of the training data used (‘you are what you eat’). First, the amount of data available may be too small for meaningful patterns to be derived from it. A dataset may also carry a discriminatory bias, which is then perpetuated when patterns are derived from it. The use of outdated data can likewise lead to incorrect results, as such data no longer accurately reflects the decision-making situation.
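The perpetuation of a discriminatory bias can be sketched in a few lines. The scenario is assumed: the historical labels encode past discriminatory decisions, and a naive learner that simply reproduces historical approval rates carries that pattern forward:

```python
# Assumed scenario: past decisions systematically rejected group "B".
# The bias sits in the data; the algorithm merely reproduces it.

from collections import defaultdict

# Historical decisions as (group, approved) pairs.
training = [("A", True), ("A", True), ("A", False),
            ("B", False), ("B", False), ("B", False)]

# "Learn" the historical approval rate per group.
rates = defaultdict(list)
for group, approved in training:
    rates[group].append(approved)

def predict(group: str) -> bool:
    # Approve only if the historical approval rate exceeds 50%.
    history = rates[group]
    return sum(history) / len(history) > 0.5

print(predict("A"))  # -> True
print(predict("B"))  # -> False: the past pattern is perpetuated
```

The point of the sketch is that no rule in the code mentions discrimination; the unequal outcome follows entirely from the training data.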
Another source of error lies in the rigid consistency of the decision-making criteria applied. On the one hand, this allows a largely objective decision-making process: unlike with human decisions, emotional factors such as a decision-maker’s mood on a given day have no influence. On the other hand, such a system shows weaknesses in atypical individual cases, which its fixed criteria cannot adequately capture.
In addition to the sources of error described above, there is the risk that the decision-making process cannot be comprehended. If a decision cannot be understood, incorrect decisions are de facto impossible to detect. This problem arises especially with learning systems, which even their programmers cannot always fully understand and which are therefore often referred to as a ‘black box’. This lack of comprehensibility, rooted in the way algorithmic decision-making systems work, is often amplified by the user of the ADM system. Correctly interpreting the results requires knowledge of computer science, mathematics (especially stochastics) and statistics. On top of often insufficient expertise, users tend not to question the results, so that no subsequent check takes place at all. One reason for this is the widespread assumption that machines make better decisions because their decisions are more objective.
Starting points for future regulation and outlook
Firstly, it is clear that not all forms of algorithmic decision-making systems should be subject to the same rules. Given the diversity of such systems, a graduated regulatory regime based on criticality is advisable. While in certain areas an absolute ban on the use of algorithmic decision-making systems appears appropriate, even for the state itself (such as in the use of weapons of war), other areas of use might be reserved to the state, made subject to official authorisation (in healthcare, for example) or conditioned on prior notification of use. Finally, there are areas of application in which the use of ADM systems can generally be considered permissible (such as personalised advertising).
In order to counter the risks arising from the training data used, approaches already known from statistics law can be drawn on (such as the neutrality and objectivity of the data). To ensure that decision-making can be verified, it makes sense to establish recording obligations. Furthermore, an independent testing body that checks algorithms for compliance with legal requirements could be established. In the case of learning algorithms, it should be noted in any event that a single check is not sufficient: since learning systems evolve continuously, ongoing checks must also be ensured. In addition, a clarification that ultimate responsibility for the machine decision lies with the user of the ADM system should be considered, in order to encourage users to scrutinise the decisions critically.
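One conceivable way of meeting a recording obligation in practice is to log every automated decision together with its inputs for later audit. The log fields and the toy decision rule below are assumptions for illustration, not a legal or technical standard:

```python
# Hedged sketch: record each automated decision with its inputs
# so that it can be reviewed afterwards. Fields are assumptions.

import datetime

audit_log = []

def decide_and_record(inputs: dict) -> bool:
    decision = inputs.get("annual_income", 0) >= 30_000  # toy rule
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": inputs,
        "decision": decision,
    })
    return decision

decide_and_record({"annual_income": 45_000})
print(audit_log[0]["decision"])  # -> True, with inputs preserved for audit
```

For a learning system, the same log would also need to capture the model version in force at the time of each decision, since ongoing checks presuppose knowing which state of the system produced a given result.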
The use of ADM systems involves not insignificant risks, whose criticality varies depending on the area of application. In view of the opportunities associated with algorithmic decision-making systems, these risks should be countered not by prohibitions, but by balanced regulation.
Any questions? Please contact: Dr. Thomas Thalhofer, Marieke Merkle
Practice Areas: Digital Business