The relationship of law and code has been the subject of an intensifying debate, in particular in the fields of legal informatics and computational law. The birth of the Internet gave rise to the insight that the use of code and the architecture of cyberspace can yield law-like effects (“code is law”). The increasing use of both rule- and data-based technological systems in the application and administration of law then shifted the analysis to technology’s complementary function in the enforcement of law, such as in legal tech applications.
With a view to blockchain technology and smart contracts as “regulatory technologies,” De Filippi and Hassan argue that we have now entered a phase beyond the enforcement of law through code, one in which code is relied upon to draft and elaborate legal rules (“law is code”).1 De Filippi and Wright dub these effects a “rule of code” and a “lex cryptographia.”2 They argue that the blockchain “could make it easier for citizens to create custom legal systems, where people are free to choose and to implement their own rules within their own techno-legal frameworks.” This hypothesis has been criticized as an unrealistic cyber-libertarian dream.3 There are indeed several constraints on the replacement of current state functions and the law as trust mechanisms by digital technologies—in particular if the technology is not backed up by regulation and adequate oversight.
At the same time, however, the increasing use of code, algorithmic and thus automated decision-making (“ADM”) and AI is likely to have profound effects on the design, structure and enforcement of the law. For example, the city of Pittsburgh has introduced AI-powered traffic light coordination as a pilot project. Such a traffic system, which uses sensors, cameras, big data, cloud computing, ADM and AI, allows pre-programmed public decisions to be replaced by situation-specific ones. It may signal an era of adaptive law and “algorithmic regulation,”4 or at a minimum of the automated and situation-specific application and enforcement of regulation. With the advance of technologies such as AI and computer vision, the granular enforcement of regulation will become more widely available.
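The shift from pre-programmed to situation-specific public decisions can be sketched in a few lines of code. The following is a purely illustrative example, not a description of the Pittsburgh system: all function names, weights and thresholds are invented assumptions.

```python
# Hypothetical sketch: an adaptive controller replaces a fixed,
# pre-programmed green-phase schedule with one derived from live sensor data.
# All names and parameters are illustrative assumptions.

def green_phase_seconds(queue_length: int, raining: bool,
                        base: float = 20.0, per_car: float = 1.5,
                        max_phase: float = 60.0) -> float:
    """Lengthen the green phase with the observed queue; add margin in rain."""
    phase = base + per_car * queue_length
    if raining:
        phase *= 1.2  # assume slower clearance in wet conditions
    return min(phase, max_phase)  # cap the phase at a hard maximum

# A fixed schedule would give every cycle the same 20s green phase;
# the adaptive rule tailors it to what the sensors observe.
print(green_phase_seconds(queue_length=8, raining=False))  # 32.0
print(green_phase_seconds(queue_length=8, raining=True))   # 38.4
```

The point of the sketch is structural: the “decision” applied to citizens is no longer fixed in advance but computed, case by case, from observed data.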
At least in such situations, digital technologies which significantly reduce coordination, monitoring and enforcement costs will not replace the state—neither in its function as a regulator, nor as a trust guarantor, nor as a service provider. But digital technologies may contribute to better-targeted regulation to coordinate behavior and reduce risks. They may in particular reduce the costs of regulation to the extent such regulation has thus far been over- and under-inclusive due to a lack of monitoring, data-crunching and predictive capabilities—whether in the areas of mobility (e.g. traffic management), financial markets (e.g. capital requirements) or environmental protection (e.g. pollution levels).
To enable the safe and fair use of digital technologies, and in particular of ADM in regulation, and to facilitate adaptive, data-driven regulation and granular enforcement, several important conditions, such as the adequate protection of privacy and cybersecurity, have to be met. This in turn may increase the need for technological and especially legal safeguards, i.e. regulation on a technical level. However, the result could be better-targeted regulation. To justify this claim, this short Essay first looks at the overall effects of ADM and AI on the “market” for the law.
1. Increasing the demand for enabling and risk regulation
As a major innovation which entails significant potential benefits, but also new risks, AI may require the adaptation of existing law or the adoption of new law both to allow for and enable its use and to account for its risks. The regulation of risks associated with AI (aside from the analysis of its potential labor market effects) indeed seems to be the field on which most legal scholarship5 and policy work has focused to date. This includes potential bias, discrimination and manipulation in ADM,6 privacy issues ranging all the way to an Orwellian dystopia, safety issues regarding (for example) autonomous driving, cybersecurity issues such as in the Internet of Things, and national security concerns regarding the use of AI in warfare.
This list of examples shows that the sheer magnitude of the issues already requires analysis beyond the “just another new technology” risk-regulation perspective.7 In addition, and most importantly, the defining feature of advanced AI is its (limited) autonomy, which may bring it into a relationship to the law similar to that of a human being: it may break or circumvent the law, but it may also learn to comply with it (“compliance by design”). One example is the use of pricing algorithms, which may come into conflict with antitrust rules but might learn not to use illegal anti-competitive strategies.
The basic feature of ADM—the delegation of decision-making to an algorithm, which may be self-learning—has triggered a general debate on responsibility and liability for the use of AI and on AI’s relationship to human actors, including proposals for a separate “legal entity” status. It is even conceivable that AI could learn to manipulate the law.8 While there may also be a need for deregulation to allow for ADM in, for example, the transport or health sector, on balance the issues mentioned are likely to increase the need for enabling and risk regulation.
2. Complementing the use and enforcement of existing law
Digital technologies transform the use and enforcement of existing law by private actors and governments. LegalTech and RegTech have already had profound effects on legal services9 and assist human beings along the legal chain. For example, consumers can use LegalTech tools to enforce their rights. Companies and regulators in the financial industry use RegTech applications to ensure compliance. In particular, lawyers can use apps which process legal texts and arguments, and use computational models of legal reasoning to generate arguments for and against certain legal outcomes and to predict a problem’s outcome.10
Many functions of the digital services mentioned as examples here are still, to some extent, complementary to the application of existing law. They lead to a partial automation of the use and enforcement of existing procedural and substantive law, but do not fundamentally alter the structure of the law. While digital contract templates are complementary in this sense, the shift to automated (AI-powered) micro-agreements, for example, enters the territory of those techno-legal innovations which change and substitute existing law.
3. Changing and substituting existing law
The arrival of “regulatory technologies,” i.e. technologies which can be applied to “define and incorporate legal or contractual provisions into code” (De Filippi and Hassan), will transform the law at least in the long run. This trend will not be confined to private law, contracts, or blockchain technology, as the introductory example of the AI-powered traffic system in Pittsburgh shows.
The combination of new technologies, data, ADM and AI dramatically reduces the costs of monitoring, prediction and coordination of human actions. This combination may allow the interpretation of the law, or the law itself, to be adapted on a continuous, real-time data basis. This development, the beginnings of which we can already observe, would extend beyond LegalTech and RegTech. The technologies may allow for adaptive contracts and, at least at lower levels of regulation, for real-time adaptation of rules. In both cases—i.e. smart contracts and public regulation11—it is the law itself that is influenced by ADM and AI. These effects will be the focus of the remainder of this Essay. The next section will attempt to identify potential trends for the structure of future law.
Institutional, informational and behavioral economics as well as empirical research over recent decades have made many inroads into the neoclassical model of human behavior in economics, drawing on insights from psychology, neuroscience, sociology and other disciplines. In a nutshell, from the point of view of informational and behavioral economics, humans are boundedly rational and biased, respectively, and the law, in particular contracts, is incomplete due to informational and cognitive limitations.
The arrival of more perfect information markets due to big data and potentially more rational decision-making thanks to AI, however, may bring both decision- and law-making more in line with the Chicago world view. Law may become granular and better-targeted in the following ways.
1. More contracts, and more complete contracts
The law and economics literature has analyzed contracts as potentially incomplete, in particular due to transaction costs, i.e. the costs of foreseeing and finding contractual solutions for contingencies.12 Those costs will decrease due to the use of predictive algorithms, big data and smart contracts. Similarly, monitoring and compliance costs will fall. The computability of terms13 will allow for more, as well as more complete, contracts. Personalized default rules will become more widely available.14
Computable contracts will have several ramifications. First, autonomous “computer-to-computer” contracting will become more widely available: algorithms will negotiate and conclude contracts.15 Second, while computer-to-computer contracting may transfer problems of negotiation power and asymmetrical information from the human to the algorithmic level, it may also reduce standard hold-up problems. Third, technology will potentially reduce internal and external transaction costs unequally and therefore shift the current trade-off between market transaction and integration into the firm.16 Fourth, the reduced transaction costs in contracting will necessitate a reassessment of (statutory) provisions which have been based on certain assumptions about the level of transaction costs, in particular of those provisions which have assumed prohibitive transaction costs in contracting.17
2. “Micro-directives”—and the end of the standards/rules dichotomy?
One important choice regarding the structure of the law that has to be reconsidered in light of ADM and AI is the choice between rules and standards. While rules (e.g. a speed limit) provide certainty by clearly stating the content of the law ex ante, they are costly to design and may be over- or under-inclusive. By contrast, standards (“drive with appropriate care”), which are adjudicated ex post, can be calibrated to a specific situation, but also generate uncertainty.18
This trade-off between certainty and calibration, and the choice between rules and standards, have to be reconsidered in view of the availability of digital technologies. We may see a trend towards fewer per se and mandatory rules and more differentiated and better-targeted rules. The AI-powered traffic light system in Pittsburgh, which may adapt the speed limit to the traffic and weather situation, is a case in point.
Casey and Niblett have gone further and predict the “death of rules and standards.”19 They envision the scenario of a micro-directive which “might provide a speed limit of 51.2 miles per hour for a particular driver with twelve years of experience on a rainy Tuesday at 3:27 pm.”20 In their vision, the law “morphs from a standard (for the legislature) to a set of complex rules (within the machine process) to a simple command (for the citizen).”21 Such granular interpretation and application of the law through machine processes would require a re-assessment of the division of tasks between legislator, regulatory bodies and courts.
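Casey and Niblett’s speed-limit example can be sketched as a function: the legislature sets a standard, the machine process holds the complex calibration, and the driver receives only a simple command. The weights below are invented assumptions for illustration, not Casey and Niblett’s actual model, and the sketch makes no attempt to reproduce their 51.2 mph figure.

```python
# Hypothetical "micro-directive": a machine process turns the legislature's
# standard ("drive with appropriate care") into a situation-specific command
# for one driver. All adjustment factors are invented assumptions.

def micro_speed_limit(base_limit: float, years_experience: int,
                      raining: bool, rush_hour: bool) -> float:
    """Derive a personalized speed limit from context and driver profile."""
    limit = base_limit
    if raining:
        limit -= 10.0                         # reduced grip and visibility
    if rush_hour:
        limit -= 5.0                          # denser traffic
    limit += min(years_experience, 15) * 0.5  # modest credit for experience
    return round(limit, 1)

# The standard morphs into a simple command for this driver, here and now:
print(micro_speed_limit(base_limit=60.0, years_experience=12,
                        raining=True, rush_hour=False))  # 56.0
```

Note how the division of labor maps onto the quote in the text: the standard lives in the statute, the complex rules live inside the function, and the citizen sees only the single number it returns.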
3. Better internalization of externalities—and the end of property/liability rules dichotomy?
Parts of the law are designed to internalize externalities that are harmful or socially perceived to be harmful. Assuming that regulation can in general be targeted more accurately with the help of digital technologies, i.e. can better avoid over- and under-inclusiveness, it is fair to assume that regulation can also better achieve the goal of internalizing externalities. Which legal instrument best protects against harmful externalities has been the subject of a long-standing debate, started by Coase and continued by Calabresi and Melamed, who introduced the distinction between property rules, which allow for injunctive relief, and liability rules, which relegate the right holder to damages (as well as inalienability rules, which will not be discussed here).
One illustrative example is the current debate on how to protect personal data against misuse. Some have discussed on a normative level whether “data subjects” (to use the parlance of the EU’s General Data Protection Regulation) should enjoy exclusive rights in their personal data.22 Others have argued on a positive level that the GDPR already provides not only for liability rules, but also for property-derived rights and remedies.23
Factors relevant to the choice between a property and a liability rule are transaction and (ex post) assessment costs.24 If digital technologies reduce both types of costs—in a similar way as discussed for the standards/rules choice—property and liability rules tend to become similarly efficient. In other words, the differences are leveled out, and from an efficiency point of view the choice becomes less relevant, just as between standards and rules. Thus, as long as no digital technologies for “data rights management” are available to reduce transaction and assessment costs, the choice remains important. But once software is able to protect the data subject’s rights according to her preferences, and breaches of those rights can be assessed in real time, a property rule and a liability rule may be equally able to protect those rights.
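The contrast between the two rule types in an automated setting can be illustrated with a short sketch. This is a hypothetical “data rights management” layer, not a description of any existing system, and the damages formula is an invented placeholder.

```python
# Illustrative sketch of automated enforcement under the two rule types:
# a property rule blocks a non-consented data use outright (injunction-like),
# while a liability rule permits it but prices the breach in real time.
# All names and the damages rate are invented assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Preferences:
    allowed_purposes: frozenset  # purposes the data subject has consented to

def property_rule(prefs: Preferences, purpose: str) -> bool:
    """Enjoin any use the data subject has not consented to."""
    return purpose in prefs.allowed_purposes

def liability_rule(prefs: Preferences, purpose: str,
                   records: int, rate: float = 0.05) -> float:
    """Permit the use, but assess damages per affected record for a breach."""
    if purpose in prefs.allowed_purposes:
        return 0.0
    return records * rate

prefs = Preferences(allowed_purposes=frozenset({"billing"}))
print(property_rule(prefs, "marketing"))         # False: the use is blocked
print(liability_rule(prefs, "marketing", 1000))  # 50.0: the use is priced
```

Once both the blocking and the pricing run automatically against the data subject’s stated preferences, the practical difference between the two rules shrinks, which is the leveling effect described in the text.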
The above overview of some technological possibilities and trends shows that the information problem which led to typification in law can now be (partly) solved through digital technologies.25 Whether we call it “algorithmic,” “adaptive,” “personalized,” “granular,” or simply “digital” law, there will be an increasing trend towards granularization, which will affect the legislature, the executive branch and the judiciary. The 20th-century insight that “to legislate means to generalize”26 may have to be adapted. Law, at least in its application, is becoming more granular and thereby accounts for an increasingly granular society. This in turn may both enable and pressure the legislature to focus on setting the objectives of legislation.27 If the law and its application are built on language, logic and social judgment, the logic may be provided, to some degree and for parts of the law, by AI. This may allow legislators, enforcers and judges to concentrate more on questions of social judgment and values.
The picture sketched above is limited by its focus on the potential efficiencies of the technologies, and it is far from complete. There are many strong countervailing forces to the potential trends outlined—on both a positive and a normative level. The challenges of privacy, (cyber)security, transparency of decision-making and the prevention of algorithmic discrimination and bias have been mentioned and are already fundamental issues today. Furthermore, adaptive regulation may be used to reduce freedom and autonomy. The flip side of the adaptivity of law is instability. Higher granularity of law may pose a challenge for equality and fairness.28 All of these issues raise fundamental questions of acceptance and legitimacy.29
A central inquiry arising from the trends on the structure of the law sketched above is what role the state will play in the future. The decrease of coordination costs in markets may allow for more private solutions to coordination problems. At the same time, the need for oversight and guarantor functions of public authorities may increase, in particular in those instances where adaptive law is provided not by public actors (as in the traffic light example), but by private actors.30 Finally, one of the most fundamental ethical debates is to what extent ADM and AI may de-humanize the law and its application.31 Susskind has therefore rightly asked the ultimate normative and political question regarding “when digital law is an appropriate way of governing human affairs and when it should be resisted.”32 Digital law may be efficient in many circumstances, but it also has to pass the test that it must not undermine social values and skills which are fundamental for society, such as respect, empathy and solidarity.