The Persistence of “Dumb” Contracts

This Article is an exploration of the similarities and differences, for lawyers, not just of language and code, but also those aspects of human thinking and interaction that will continue to be the most difficult to replicate on a machine.
by Jeffrey M. Lipshaw, Professor at Suffolk University Law School
Updated Jan 21, 2019

Abstract

“Smart contracts” are a hot topic. Presently, smart contracts are mostly evidence of property, like cryptocurrencies or mortgages, created and/or transferred using blockchain technology. This is an exploration of the theoretical possibilities of artificial intelligence in a far broader range of complex and heretofore negotiated transactions that occur over time. My goal is to understand what it means to make a contract smarter, i.e. to delegate more and more of the creation, performance, and disposition of legally binding transactions to machine thinking. Moreover, I want to do so from the perspective of one who is neither a true believer in the purported technological singularity to come nor a digital Luddite.

There are two primary themes. First, the extent to which complex transactions occurring over time can be embodied in computer programs—the ability of the contracts to be smarter rather than dumber—depends on the extent to which the subject of the transaction becomes not just a social fact, but an institutional reality. The dumb contract is merely a map of an antecedent reality, but the smart one is a real thing in itself. Second, smart rather than dumb contracts will require the translation of often fuzzy legal predicates, otherwise capable of expression in truth-functional logic, into digital proxies expressible in the non-ambiguous discrete units of code. The upshot of these two themes is that, at least until there is some better evidence that a technological singularity will occur, deciding will remain something that is fundamentally different than reasoning by way of logic or code. Hence, for the time being, dumb contracts, ones that leave open the possibility of what Karl Llewellyn called “situation sense,” will persist.


Introduction


If you search the Internet, you can very quickly find a number of outfits willing to teach you Solidity, one of the coding languages in which so-called “smart contracts” are written. “Blockgeeks” is one such firm, offering up examples of how smart contracts will revolutionize apartment leasing, supply chains, automobile insurance, and health care, all under the headline: “Smart Contracts: The Blockchain Technology That Will Replace Lawyers.”

1

Smart Contracts: The Blockchain Technology That Will Replace Lawyers, Blockgeeks, https://blockgeeks.com/guides/smart-contracts (last visited Sept. 15, 2018).

Is this mere puffing? Or are technologies like these capable of replacing the kinds of contracts law students have been studying and lawyers have been drafting for at least the 140 years or so since C.C. Langdell decided it would be a good idea to assemble a casebook on the subject?
2

Smart contracts are a hot topic. When I typed “smart contract” into Google Scholar on September 15, 2018, it returned “about” 2,390 results since the beginning of 2018, 3,710 results since the beginning of 2017, 4,920 since the beginning of 2013, and very few before that. This, I think, was an apt recent observation: “Today, smart contracts are a prototypical example of ‘Amara’s Law,’ the concept articulated by Stanford University computer scientist Roy Amara that we tend to overestimate new technology in the short run and underestimate it in the long run.” Stuart D. Levi & Alex B. Lipton, An Introduction to Smart Contracts and Their Potential and Inherent Limitations, Harv. L. Sch. F. on Corp. Governance & Fin. Reg. (May 26, 2018), https://corpgov.law.harvard.edu/2018/05/26/an-introduction-to-smart-contracts-and-their-potential-and-inherent-limitations. In addition to the foregoing, here is a small sampling of the recent scholarly literature. It does not include the myriad commercial outlets offering to train lawyers in Ethereum and other coding languages (see supra note 1). Ian Grigg, The Ricardian Contract, http://iang.org/papers/ricardian_contract.html (last visited Sept. 15, 2018); Henry Kim & Marek Laskowski, A Perspective on Blockchain Smart Contracts: Reducing Uncertainty and Complexity in Value Exchange, 26th Int’l Conf. on Computer Comm. & Networks (ICCCN) (2017), https://ssrn.com/abstract=2975770; Riccardo de Caria, Law and Autonomous Systems Series: Defining Smart Contracts - The Search for Workable Legal Categories, Oxford L. Fac. Blog (May 25, 2018), https://www.law.ox.ac.uk/business-law-blog/blog/2018/05/law-and-autonomous-systems-series-defining-smart-contracts-search; Max Raskin, The Law and Legality of Smart Contracts, 1 Geo. L. Tech. Rev. 305 (2017); Reggie O’Shields, Smart Contracts: Legal Agreements for the Blockchain, 21 N.C. Banking Inst. 177 (2017); Harry Surden, Computable Contracts, 46 U.C. Davis L. Rev. 629 (2012); Harry Surden, Machine Learning and Law, 89 Wash. L. Rev. 87 (2014).

Predicting is hard, especially about the future.

3

There seems to be some debate about whether the source of this aphorism is Yogi Berra, Samuel Goldwyn, or Niels Bohr. The perils of prediction, June 2nd, The Economist (July 15, 2007, 17:59), https://www.economist.com/blogs/theinbox/2007/07/the_perils_of_prediction_june.

This Article continues my reflection on the relationship of human lawyers and their real or contemplated digital counterparts. Ultimately, it is an exploration of the similarities and differences, for lawyers, not just of language and code, but also those aspects of human thinking and interaction that will continue to be the most difficult to replicate on a machine. I believe that everything that can be digitized will be digitized. But I have pondered whether the “halting problem”—the mathematical truth that no conceivable digital computer will be able to say for every problem it faces that an answer is computable—is meaningful in educating lawyers.
4

Jeffrey M. Lipshaw, Halting, Intuition, Heuristics, and Action: Alan Turing and the Theoretical Constraints on AI-Lawyering, 5 Savannah L. Rev. 133 (2018).

The more important questions, to me, involve whether there is anything that cannot be computed and, if so, what are the consequences? Considering those questions tends to devolve quickly into dire descriptions of technological singularities in which machines rule humans, wistful reminiscences of the days when people did not spend most of their waking hours staring at their mobile phones, or philosophical debates about the capability of computers to have human-equivalent cognition. I am going to try to avoid all of those. They are fascinating but, as a smart friend observed to me recently, it is simply too early in the game to make good calls on what it all means.
5

That was Jack Copeland’s comment to me over coffee in Boston one morning in April 2018. It was consistent with a position he took over twenty years ago on the question whether anything would ever be able to do the work of a notional “oracle machine,” i.e. a Turing machine not constrained by the halting problem. B. Jack Copeland & Richard Sylvan, Beyond the Universal Turing Machine, 77 Australasian J. Phil. 46, 63-64 (1998).

Let us simply say then that there are matters as to which computability presently is a significant problem. We can reasonably focus on those as the areas in which human beings will continue to bring something to the party.
6

Helena Haapio referred me to two essays that are consistent with my assessments here. The first is Stephen Wolfram, Computational Law, Symbolic Discourse, and the AI Constitution, Wired.com (Oct. 12, 2016), https://www.wired.com/2016/10/computational-law-symbolic-discourse-and-the-ai-constitution. We are coming at the issue from different directions. Wolfram is looking at the translation of natural language into computer code from the standpoint of a physicist and computer scientist (i.e. his attempt to build the “Wolfram Language” that can begin to replicate natural language capabilities in code), and applying it to the law of contracts. I am a lawyer experienced in dealing with complex natural language contracts and wondering what it would take to translate them into code. Similarly, see Eliza Mik, Smart contracts: terminology, technical limitations and real world complexity, 9 Law, Innovation & Tech. 269 (2017). Professor Mik’s essay is a critique of the extension of blockchain and other technologies so as to automate previously “unautomated” transactions. We come to similar conclusions about real world complexity, the relationship of contract language to the “deal,” and the problems of translating natural language into code. In particular, her recognition of the problem of having a contract right but not wanting to enforce it for extra-contractual reasons is spot on. Id. at 283. Here, I discuss why doing so is also problematic from the standpoint of coding the relationship. See infra Section II D.

That is the standpoint from which I want to explore artificial intelligence in the real-world context of making, performing, modifying, and putting aside legally binding contracts. A broad definition of “smart contracts” is that they “are simply computer code that automatically execute agreed-upon transactions.”

7

Stuart Levi, Gregory Fernicola, & Eytan Fisch, The Rise of Blockchains and Regulatory Scrutiny, Harv. L. Sch. F. on Corp. Governance & Fin. Reg. (Mar. 9, 2018), https://corpgov.law.harvard.edu/2018/03/09/the-rise-of-blockchains-and-regulatory-scrutiny.

Presently, when somebody refers to a “smart contract,” the person likely means a transaction that can be completed on blockchain technology.
8
Levi & Lipton, supra note 2.
The most publicized examples are cryptocurrencies like Bitcoin. These are barely “contracts” in the sense of the documentation of the reduction of a prior understanding to a set of words and sentences with legal effect.
9
Stephen McJohn & Ian McJohn, The Commercial Law of Bitcoin and Blockchain Transactions, 47 UCC L. J. 187, 210 (2017).
But it is not hard to imagine—and, indeed, legal technologists are hard at work on—the use of blockchain in any application where one creates a digital asset, digitally transfers ownership of a non-digital one (say, a security, a security interest in personal property, or recorded title or liens in real property), or records the movement of assets through space and time.
10

Manav Gupta, Blockchain for Dummies®, IBM Limited Edition 25-30 (2017) (listing exemplary uses of blockchain, including commercial financing approvals, trade finance, nostro/vostro accounts in cross-border transactions, insurance claims processing, governmental identity documentation, supply chain management, medical records, and health care payment authorizations).

A far more interesting challenge is to describe a smart contract in which multiple parties with potentially different models of the same transaction come to agree on a single “well-formed formal model.”

11

Kim & Laskowski, supra note 2.

At the risk of opening a philosophical can of worms, my goal is to understand what it means to make a complex and heretofore negotiated contract smarter, i.e. to delegate more and more of the creation, performance, and disposition of legally binding transactions to machine thinking.
12

This is not a critique of the uses of AI in lawyering nor a consideration of its morality. For a thorough discussion of how legal automation might incorporate human values, see Frank Pasquale, A Rule of Persons, Not Machines: The Limits of Legal Automation, Geo. Wash. L. Rev. (forthcoming 2019), https://papers.ssrn.com/abstract_id=3135549.

What would it mean, say, for a long-term lease, a joint venture buy-sell agreement, or a coal supply agreement, all of which might span years, to be smart?

That entails consideration of two attributes of smarter and dumber contracts. Part I deals with the first attribute: what we conventionally deem to be real—our community ontologies—and the extent to which those realities are reducible to digital code. Cryptocurrency programs are smart because they create realities in which there are discrete, finite, and complete states of the world—the “smart” universe. For users to accept bitcoins as valuable, they need to perceive the digital universe within which bitcoins exist as having a reality on equal standing with a physical dollar bill. That is not what most contracts do. I shy away from binary definitions, so I prefer to consider the core characteristics of prototypically smart and what I am going to call “dumb” contracts. We start with transactions and understandings that occur between humans in a continuous world with infinitely complex states—the “dumb” universe.

13

Ian R. Macneil, The Many Futures of Contracts, 47 S. Cal. L. Rev. 691, 731 (1974):

Promise, even at its transactional narrowest, always is shadowed by non-promissory accompaniments. The doctrines mentioned in the immediately preceding paragraphs are legal reflections that promises have always been accompanied by the burdens of the impurities of incompleteness of content and communication, objectivity, implication, custom, usage, and above all “ongoingness” and its accompanying clouds of imprecision and future uncertainty.

Contracts model in language those antecedent transactions and understandings. They do so by reducing the complexity of the transaction into something with far fewer bits and bytes of information than the almost infinite amount presented by the physical environment in which the transaction takes place. At the same time, they use enough bits and bytes of information to make the contractual model useful.
14
For an apt description of this process of linguistic reduction, written before the jargon of cybernetics was commonplace, see Macneil, id. at 726-29.

Yet whether the contract is its own alternative reality of the transaction or merely a model has been at the heart of doctrinal debates over parol evidence and interpretation since the early days of modern American legal education. In 1885 the Supreme Court of Minnesota asked: “But in what manner shall it be ascertained whether the parties intended to express the whole of their agreement in the writing?”

15
Thompson v. Libbey, 26 N.W. 1 (Minn. 1885).
The court’s view of the completeness of a contract to sell timber, and one party’s attempt to supplement its terms with an oral warranty not included in the document, might well anticipate the issues that would arise were someone to challenge an aspect of a bitcoin transaction. “The only criterion of the completeness of the written contract as a full expression of the agreement of the parties is the writing itself. If it imports on its face to be a complete expression of the whole agreement, — that is, contains such language as imports a complete legal obligation, — it is to be presumed that the parties have introduced into it every material item and term….”
16
Id.

More than eighty years later, in 1968, Chief Justice Roger Traynor of the Supreme Court of California wrote two opinions that are the seminal expressions of the opposing view. In Pacific Gas & Electric Co., a case involving the interpretation of an indemnity clause, he quoted Arthur Corbin’s treatise to the effect that words, like symbols in code, have no inherent meaning.

If words had absolute and constant referents, it might be possible to discover contractual intention in the words themselves and in the manner in which they were arranged. Words, however, do not have absolute and constant referents. “A word is a symbol of thought but has no arbitrary and fixed meaning like a symbol of algebra or chemistry, ...” The meaning of particular words or groups of words varies with the “... verbal context and surrounding circumstances and purposes in view of the linguistic education and experience of their users and their hearers or readers (not excluding judges). ... A word has no meaning apart from these factors; much less does it have an objective meaning, one true meaning.”

17
Pacific Gas & Elec. Co. v. G.W. Thomas Drayage & Rigging Co., 442 P.2d 641, 644-45 (Cal. 1968).

In Masterson v. Sine,

18
Masterson v. Sine, 436 P.2d 561 (Cal. 1968).
the same logic led Traynor, over a stern dissent, to permit parol evidence as to additional terms of an otherwise apparently complete promissory note. Twenty years later, Judge Alex Kozinski of the Ninth Circuit criticized the application of the Pacific Gas/Masterson logic to another promissory note, observing that “Pacific Gas casts a long shadow of uncertainty over all transactions negotiated and executed under the law of California.”
19
Trident Center v. Conn. Gen. Life Ins. Co., 847 F.2d 564, 569 (9th Cir. 1988).
Challenging a promissory note as having meaning beyond the conventional understanding of a promissory note would be as unsettling as challenging the meaning of a dollar bill.

The literature of law and economics is already replete with the concept of the perfect or “complete” contract, one that anticipates all future “state contingencies.” That literature also generally acknowledges that all contracts in the real world are necessarily incomplete. The ideal but unlikely complete contract would be the economic optimum because it would perfectly align the incentives of the contracting parties and reduce transaction costs. Hence, to one with an economic bent, “incomplete contracting” presents a problem to be solved.

20
Dylan Hadfield-Menell & Gillian K. Hadfield, Incomplete Contracting and AI-Alignment (Univ. of California, Berkeley Center for L. & Soc. Sci., Research Paper Series No. CLASS18-10, Univ. of S. California Legal Research Papers Series No. 18-10, 2018), https://ssrn.com/abstract=3165793.
If you conceive of contract law and behavior in that way, using artificial intelligence to close the gap between human capability and the optimally complete contract is a hopeful avenue for research.
21
Id. at 2.
Thoughtful scholars in this area try not to overstate things.
22
Id. at 3 (“Similarly, we suggest that perfect reward specification is routinely not possible, so AI researchers should be focused on designing optimally in the face of the irredeemable divergence between AI and human utility.”).
Nevertheless, the ultimate social scientific conception of a contract is one that depends on a reduction to regularities expressible in equations and Cartesian graphs.

Now imagine this reduction, this effort to “scientize” contracting behavior, in the context of digitally documented transactions that span the complexity gamut from one-shot commodity sales to longer-term relationships like commercial leases or shareholder agreements. There is or will be a continuum across which “smarter” contracts will do less mapping of antecedent understandings and create more generally accepted social realities than dumber ones. I see no point in trying to create mutually exclusive sets. At one end of the continuum, the smart contract is little more than a cybernetic artifact like Bitcoin, a virtual dollar bill having a social ontology and no less a fixed and timeless meaning than a physical Federal Reserve note. At the other end, it is little more than a digitized form into which someone plugs a few chunks of data and comes out with a Kindle book or a mortgage loan. Somewhere in the middle, say in connection with a program that can sort out the puts and takes of ten years’ worth of contingency in a 50,000-square-foot office lease, the contract needs to be able to create virtually a complex world of real estate business and law that either maps on or substitutes for the physical version. Dumb contracts will persist, and the extent to which individual contracts are dumb rather than smart will depend on the extent to which there is utility to smartness or dumbness in the particular circumstance.

Part II explores the second key attribute of a smart contract: that, even if it is expressed in natural language, it needs to be programmed in the language of computer code. Even if the programming language is higher order, incorporating natural language commands, everything in it must ultimately reduce to binary machine code—0s and 1s in the computer’s elementary logic gates. What makes the potential digital automation of contracts tantalizing is the fact that most contracts and contract law have a deductive “if-then” structure that can be expressed in formal first order logic. Many of the natural language predicates useful in drafting contracts are capable of inclusion in formal logic. Nevertheless, they are not sufficiently precise to be expressed in code without some translation of continuous characteristics into discrete units. There are several implications: (1) elastic or fuzzy language has utility and is expressible in code by way of fuzzy logic, but even that cannot be infinitely fuzzy; (2) there cannot be an infinite regress of judgment-making in code; at some point, whether in the higher order programming language or down to the level of the elementary logic gates, an exogenous programmer must tell the computer that, with respect to the translation of continuous concepts into discrete units, enough is enough; and (3) what Karl Llewellyn called “situation sense” in the context of making a legal (or business) judgment is capable of being coded, but not infinitely so.
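To make the translation problem concrete, here is a minimal sketch in Python (not a smart-contract language) of how a fuzzy contractual predicate such as "unreasonably late delivery" might be rendered in code. The 30-day ramp and the 0.5 threshold are my own illustrative assumptions, not terms drawn from any actual contract; they mark exactly the points at which an exogenous programmer must decide that enough is enough:

```python
def lateness_degree(days_late: float) -> float:
    """Fuzzy membership: the degree to which a delivery is 'unreasonably late'.

    0 days late maps to 0.0 (not late at all); 30 or more days maps to 1.0
    (fully late). The 30-day cutoff and the linear ramp are programmer
    choices: the translation of a continuous, fuzzy predicate into
    discrete units of code.
    """
    if days_late <= 0:
        return 0.0
    if days_late >= 30:
        return 1.0
    return days_late / 30.0


def breach(days_late: float, threshold: float = 0.5) -> bool:
    # Defuzzification: the continuous degree must ultimately collapse into
    # a discrete yes/no before the contract can "execute" a remedy. The
    # threshold is supplied from outside the logic; it cannot regress
    # infinitely into further judgment calls.
    return lateness_degree(days_late) >= threshold
```

The point of the sketch is that the fuzziness is real but bounded: somewhere in the stack, a human fixed the ramp and the threshold, which is precisely the judgment the code itself cannot supply.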

I conclude with thoughts about why lawyers will not need to be worried for some time about the Borg of smart contracts assimilating all practice into the digital body.

23

Star Trek: First Contact (Paramount Pictures 1996).

Resistance is not necessary, much less futile. Contract rights are amenable to formal logic, flow charts, and computer code. There are fruitful areas for the development of smarter contracts in areas previously consigned to bespoke drafting. But the upshot of the problems of ontology and formal coding language is that, at least until there is some better evidence that a technological singularity will occur, deciding in human brains will remain something that is fundamentally different than reasoning by way of logic or code. Whether the ultimate programmer at the asymptotic limit of the infinite regress of judgment can be a machine rather than a body is a philosophical question others have debated to a fare-thee-well. I am not going to try to answer that question. Its relevance to smart contracts, however, is that we live in a universe presenting us, for the time being, with a significant portfolio of seemingly intractable dualities, complementarities, limits, and paradoxes in transactions as much as anything else. Human-like judgment continues to have an advantage over machine-like judgment, at least at the theoretical extremes. Hence, for the time being, dumb contracts and situation sense will persist.


I. Artifacts and Social Ontologies


Everything we are discussing here is an artifact, “a discrete material object, consciously produced or transformed by human activity, under the influence of the physical and/or cultural environment.”

24

Mark C. Suchman, The Contract as Social Artifact, 37 Law & Soc’y Rev. 91, 98 (2003).

Mark Suchman has explored at length the status of contracts as artifacts. He observes that artifacts can be technical or symbolic, and that contracts can be both. A technical artifact is one, like a tool or a machine, that serves a utilitarian, productive purpose.
25
Id. at 99-100.
The key difference between a technical and a symbolic artifact is that the former needs to work and somebody needs to know how to make it work. If the artifact is a horseshoe, the rider likely does not care how it is made to work; her only concern is that it suits the intended purpose. She relies on the farrier to understand the relationship between a horse’s hoof, the shoe, and the gait. My MacBook Pro is a far more complex artifact, and I only care that it works. I am also quite sure that it would take me a lot longer to learn, down to fundamental principles, how it works than it would to gain equivalent knowledge about shoeing a horse. A contract that documents currency hedging, i.e. allocates risk of currency fluctuation between two parties, is the example par excellence of contract as technical artifact.

A symbolic artifact, on the other hand, is one that carries a cultural message. My wedding band is a purely symbolic artifact. It has no internal technical operation. It merely symbolizes something. What about the ketubah, the Jewish betrothal contract, written in Aramaic, hanging on the wall of our matrimonial home? The ketubah now operates as a symbol because we really do not care what the Aramaic sentences say. At one time, however, ketubot had technical legal significance. They may have symbolized the betrothal, but as to things like dowry and other property rights, they had to work. To be clear, even very complex modern contracts can be both technical and symbolic; I am convinced, after many years of practice, that contracts, whether for a residential real estate purchase or a corporate merger, have symbolic overtones regardless of their technical content. Moreover, good lawyers ignore the symbolic aspect at their peril.

What currently pass as “smart contracts” are artifacts akin to dollar bills. On the continuum from wedding band to MacBook Pro, they are far more like the former than the latter. They are things that have value, symbolic or otherwise, simply because there is universal consensus they are what they are. Like wedding bands or dollar bills they have a physical timelessness. Even through the passage of time, they always will be what they are. Dumb contracts, like merger agreements, are artifacts as well, but they are far more like the legal equivalent of MacBook Pros. Somebody, if not the client then at least the client’s lawyer, needs to know how they operate internally. What is clear is that they do not work if they fail to map appropriately onto a transaction that would be meaningful even if there were no contract to document it. Moreover, unlike a wedding band, for which the passage of time is irrelevant, the utility of the contract as map can change as the external circumstances change. In other words, for a dumb contract to become smart, it has to be able to operate over time without outside (read: human) intervention or debate. Interestingly enough, how well a computer can replicate that exogenous change is also the subject, within computer science, of ontology and computational complexity. In short, the progression from dumb to smart contract is along a continuum of ontology and complexity—the extent to which the contract evolves from map to thing.


A. Smart Contract as Thing


1. Blockchain and cryptocurrencies

The legal literature is now replete with good accounts of “smart contracts,” primarily in the context of cryptocurrencies like Bitcoin and their underlying technology, blockchain.

26

McJohn & McJohn, supra note 9, at 201-04; Usha Rodrigues, Law and the Blockchain, 104 Iowa L. Rev. (forthcoming 2018), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3127782.

By most accounts, the first reference to smart contracts in the sense now commonly used was in a short 1997 essay written by Nick Szabo, a lawyer and computer science expert.
27
Nick Szabo, The Idea of Smart Contracts (1997), https://web.archive.org/web/20150328060814/http://szabo.best.vwh.net/smart_contracts_idea.html.
The idea was that you could embed contractual rights within a product’s hardware and software and secure them digitally so that breaching the contract would be prohibitively expensive. The “primitive ancestor” of a smart contract would be a vending machine. The machine takes in coins, dispenses product, and makes change, all with sufficient security to protect the transactions from attackers. Szabo’s original example of a more sophisticated smart contract was easy for most of us to understand: a digital security system for an automobile that would protect against theft but would also contain a lien protocol if the owner failed to make the payment to the bank.
28
Id.
No doubt today Szabo would also have the automobile self-repossess by driving itself to a secure location.
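Szabo's "primitive ancestor" can be captured in a few lines. The Python sketch below (with an assumed price of 150 cents, my own example) makes the point that in a vending machine the logic is the contract: performance is mechanical, and there is no discretion to waive payment or haggle over change:

```python
class VendingMachine:
    """Toy model of Szabo's vending machine: the embedded logic itself
    enforces the terms of the bargain, with no human intermediary."""

    PRICE = 150  # cents; an assumed price for illustration

    def __init__(self) -> None:
        self.inserted = 0

    def insert_coin(self, cents: int) -> None:
        self.inserted += cents

    def vend(self) -> tuple[str, int]:
        # The 'contract' self-executes: no product without full payment,
        # and change is computed mechanically rather than negotiated.
        if self.inserted < self.PRICE:
            raise RuntimeError("insufficient payment; no discretion to waive")
        change = self.inserted - self.PRICE
        self.inserted = 0
        return "product", change
```

The design choice worth noticing is that breach is not adjudicated after the fact; it is made physically (here, programmatically) impossible in advance, which is the core of Szabo's idea.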

The most notable implementation of, and the most common current reference to, smart contracts is Bitcoin. In 2008, a mysterious author or authors named Satoshi Nakamoto published a paper proposing a system of electronic cash whose security depended on a peer-to-peer network of computers.

29
Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System (2008), https://bitcoin.org/bitcoin.pdf.
The computer network Nakamoto proposed for Bitcoin was a more generally applicable technology that came to be called “blockchain” or “distributed ledger” technology.
30

Hilary J. Allen, $=€=BITCOIN?, 76 Md. L. Rev. 877, 886 (2017).

Nakamoto wanted to solve the “double spending” problem in electronic payments. This occurs when somebody pays from an account, but before the payment can be confirmed, the payor spends the same funds again, i.e. commits a fraud. The problem exists in the banking system because only trusted third parties, i.e. the banks, have the ability to assure that payments are legitimate. Hence, the banks have to mediate disputes among payors and payees if there is a double payment issue. The peer-to-peer system depends instead on cryptographic proof of the chronological order of transactions.
31
Nakamoto, supra note 29, at 2.
Assume that one owns property (here, cryptocurrency) identified in the network. The owner’s identity is not public, but the owner has a public identifier (a “public key”). The network timestamps every transaction. Once a transaction has been “proved,” i.e. timestamped and verified, it becomes a permanent “block” of data added to the “chain” of previous transactions. It cannot be reversed without also reversing all of the previous transactions.
32

Proof of the transaction differs as between cryptocurrencies and other business applications of blockchain. For cryptocurrencies, the proof is also distributed. Some computer somewhere in the world has expended sufficient CPU power to “prove” the transaction (the “real” one being the one with the longest chain). The network incentivizes participants to help “prove” transactions by granting to the “prover” (or “miner”) of the block a new asset (or coin) within the network. In short, the provers (miners) use their computing power to earn currency. Id. at 2-4. Other business uses of blockchain rely on “selective endorsement” under which some non-anonymous institution verifies the transactions. Matt Lucas, The difference between Bitcoin and blockchain for business, IBM Blockchain Blog (May 9, 2017), https://www.ibm.com/blogs/blockchain/2017/05/the-difference-between-bitcoin-and-blockchain-for-business.
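The chaining mechanism itself is simple to sketch. The toy Python fragment below shows only the hash-linking (it omits proof-of-work, digital signatures, and the peer-to-peer network): because each block commits to its predecessor's hash, altering any earlier transaction changes every subsequent block's hash, which is why reversal requires reversing the whole chain:

```python
import hashlib


def block_hash(prev_hash: str, timestamp: int, payload: str) -> str:
    # Each block's hash covers the previous block's hash, so rewriting
    # any earlier transaction invalidates every block after it.
    data = f"{prev_hash}|{timestamp}|{payload}".encode()
    return hashlib.sha256(data).hexdigest()


def build_chain(transactions: list[tuple[int, str]]) -> list[str]:
    """Return the list of block hashes for (timestamp, payload) pairs."""
    chain: list[str] = []
    prev = "0" * 64  # conventional all-zero "genesis" predecessor
    for ts, payload in transactions:
        prev = block_hash(prev, ts, payload)
        chain.append(prev)
    return chain
```

Tampering with the first transaction produces a completely different hash for it, and the change propagates forward through every later block, making the alteration immediately detectable.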

2. The ontology of dollar bills and smart contracts

“Smart contract” is an ironic and unfortunate misnomer.

33
I would rather call it an “automated transaction manager” or “ATM,” but that acronym has already been taken.
As others have suggested, bitcoin transactions by way of blockchain are not that smart and probably are not contracts.
34
McJohn & McJohn, supra note 9, at 209-10.
But whether or not a bitcoin is a contract in the traditional sense, both bitcoins and traditional contracts are artifacts. The technical-symbolic distinction is not enough to understand the difference between a dumb and a smart contract. Take, as respective examples, a bespoke merger agreement and a dollar bill. Both are complex artifacts that have legal significance and accomplish technical and utilitarian ends. Both may have symbolic value. The real difference lies in the relative reality or “thingness” of the dollar bill as compared to the merger agreement. The smarter a contract is, the more “thingness” it will have. That is because the operation of the smart contract has to be as opaque to its users as the MacBook Pro is to me; indeed, the details of the complex social structure underlying the smart contract ought to be as irrelevant to its users as the legal, governmental, and social structure of a dollar bill.

The study of “thingness” in philosophy is ontology, “the study of what there is,” as well as “the most general features and relations of the entities which do exist.”

35

Thomas Hofweber, Logic and Ontology §3.1, Stan. Encyclopedia of Phil. (Oct. 11, 2017), https://plato.stanford.edu/archives/win2017/entries/logic-ontology.

John Searle’s conception of social ontology is helpful here.
36

John R. Searle, Social ontology: some basic principles, 6 Anthropological Theory 12 (2006).

He asks:

How can animals such as ourselves create a “social” reality? How can they create a reality of money, property, government, marriage and, perhaps most important of all, language? A peculiarly puzzling feature of social reality is that it exists only because we think it exists. It is an objective fact that the piece of paper in my hand is a $20 bill, or that I am a citizen of the United States, or that the Giants beat the Athletics 3–2 in yesterday’s baseball game. These are all objective facts in the sense that they are not matters of my opinion. If I believe the contrary, I am simply mistaken. But these objective facts only exist in virtue of collective acceptance or recognition or acknowledgment. What does that mean? What does “collective acceptance or recognition or acknowledgment” amount to?

37
Id. at 13.

The answer lies in features of reality that exist independently of each of us. To Searle, there are two key distinctions. The first has to do with the relationship of the thing and the observer of it. Things like mountains or muons or gravity are observer-independent. They would exist regardless of the existence of a human observer or that observer’s subjective attitudes toward it. On the other hand, social facts are observer-relative; they depend upon human beings for their existence.

38
Id.
A dollar bill has significance as a dollar bill because of the attitudes of the observers toward it; otherwise it is simply a green piece of paper.
39
Id.
But the fact that those attitudes exist is not observer-relative; “the observer-relative existence of social phenomena is created by a set of observer-independent mental phenomena, and our task is to explain the nature of that creation.”
40
Id. at 14.

The second distinction is between objectivity and subjectivity, both epistemically and ontologically. This is an epistemically objective statement: “I am typing this on a MacBook Pro.” This is an epistemically subjective statement: “Macs are easier to use than PCs.” They are both reflections of what I know; one is true or false regardless of my attitude, and one is not.

41
Id. at 15.
Ontological objectivity and subjectivity are different. The MacBook is real whether I believe it is real or not. Its reality is a matter of ontological objectivity. The pain in my back that has prompted me to see a chiropractor this afternoon is ontologically subjective. It exists only to the extent that I experience it. The importance of the distinction for Searle is this. That the piece of paper in my wallet is a $20 bill is an epistemically objective fact. But it is a social and institutional fact that arises because of a fairly universal consensus of human attitudes toward that piece of paper. Hence, Searle concludes: “observer relativity implies ontological subjectivity but ontological subjectivity does not preclude epistemic objectivity. We can have epistemically objective knowledge about money … even though the kind of facts about which one has epistemically objective knowledge are themselves all ontologically subjective, at least to a degree which we need to specify.”
42
Id. at 15 (emphasis in original).

The final step in Searle’s social ontology is the move from social fact to institutional reality. A $20 bill or a game or a smart contract each carries a meaning imposed “by collective intentionality of status functions.”

43
Id. at 16.
Collective intentionality is the attribution of that feature of mind “by which mental states are directed at or about objects and states of affairs in the world” and which “is shared by different people.”
44
Id.
Indeed, Searle defines a social fact as “any fact involving collective intentionality of two or more human or animal agents.”
45
Id. at 17.
Some things, like $20 bills, have functions “not intrinsically but only in virtue of the collective assignment.”
46
Id.
For each individual, the function is observer-relative. But the final element is necessary for the creation not just of a social fact but an institutional fact as well: “the collective intentionality assigns a certain status to that person or object and that status enables the person or object to perform a function which could not be performed without the collective acceptance of that status. An obvious example is money.”
47
Id.
And why is this important? Searle’s answer “is that status functions are the vehicles of power in society. The remarkable thing is this: we accept the status functions and in so accepting, we accept a series of obligations, rights, responsibilities, duties, entitlements, authorizations, permissions, requirements and so on.”

This is the key move when we think about smart contracts. The value of a bitcoin arises from conventions that every player of the cryptocurrency game accepts. Artifacts can take on meanings so universally shared as to constitute social facts, i.e. intangibles perhaps not as timeless and universal as Newton’s Third Law but having a generally accepted institutional reality. There is nothing in nature that requires me or anyone else to attribute a particular meaning to the piece of paper labeled “Federal Reserve Note,” but a $20 bill is a $20 bill by universal social consensus. In Searle’s words, “[i]t is this move whereby we create status functions that marks the difference between social reality in general and what I will call institutional reality.”

48
Id. at 18.
To be effective, the smart contract needs to be as opaque and universally accepted as the $20 bill. The fact that it is instantiated in code rather than paper is irrelevant. It must be a “currency robot” that blindly carries out the instructions embedded in its code; “once the smart contract is activated, the parties have no entitlements beyond those in the code. They get what they get and cannot get upset.”
49
McJohn & McJohn, supra note 9, at 210.

3. Rules, regularities, and timelessness

A fork is an artifact. So is a $20 bill or a Subaru Forester. In Searle’s concept of institutional reality, what makes the $20 bill different from the other two artifacts is that there needs to be a collective attribution of meaning to the $20 bill. In contrast, the fork is an instrument to keep one’s hands clean while eating. You could use a popsicle stick. The use of a fork or a Subaru does not require any collective understanding of the meaning of a fork or a Subaru. To drive the Subaru safely, however, does require significant understanding of institutional realities. A yellow box with cylindrical protuberances hanging over your intended route and intermittently displaying red, green, and yellow lights is also an artifact, but different from the fork or the Subaru because (a) you need to understand what the lights mean, and (b) to be effective, so does everybody else.

How the $20 bill or the traffic light obtains that significance is by way of what Searle calls “constitutive” as opposed to “regulative” rules.

50

John R. Searle, Speech Acts: An Essay in the Philosophy of Language 33-42 (1970). For other treatments of this distinction, see Frederick Schauer, Playing by the Rules: A Philosophical Examination of Rule-Based Decision-Making in Law and Life (2002); Max Black, Models and Metaphors: Studies in Language and Philosophy (1962); H.L.A. Hart, Definition and Theory in Jurisprudence, in Essays in Jurisprudence and Philosophy (1983); David Lewis, Scorekeeping in a Language Game, in Philosophical Papers, Vol. 1 (1983); Joseph Raz, Practical Reason and Norms (1975).

While regulative rules “regulate antecedently or independently existing forms of behavior,” constitutive rules “do not merely regulate, they create or define new forms of behavior.” The classic examples of constitutive rules are those of a game like football or chess. The rules create the shape of the field, the goal line, and the concept of a touchdown. But for the constitutive rules, there would be no game of football. Fred Schauer uses the example of clipping in football. It is a constitutive rule to call hitting someone from behind clipping. It is a regulative rule to make it illegal.
51

Frederick Schauer, Playing by the Rules: A Philosophical Examination of Rule-Based Decision-Making in Law and Life 6-7 (2002).

Attributing significance to red, yellow, and green lights in a traffic light now strikes me as the creation of constitutive rules in the game of driving. It is a violation of the regulative rules of driving to proceed on red or to fail to proceed on green. There is an institutional reality to traffic signals as Searle conceived it.

Searle’s goal was to understand human language because it “is the presupposition of the existence of other social institutions in a way that they are not a presupposition of language.”

52
Searle, supra note 36, at 14 (“The point can be stated precisely. Institutions such as money, property, government and marriage cannot exist without language, but language can exist without them.”)
Language operates according to constitutive rules that have risen to the level of institutional reality:

The obvious explanation for the brute regularities of language (certain human made noises tend to occur in certain states of affairs or in the presence of certain stimuli) is that the speakers of a language are engaged in a rule-governed form of intentional behavior. The rules account for the regularities in exactly the same way that the rules of football account for the regularities in a game of football, and without the rules there seems to be no accounting for the regularities.

53

Searle, Speech Acts, supra note 50, at 53.

There is a continuum of fluidity to rigidity in the constitutive rules underlying institutional reality. The rules of the language game are real, but far more fluid than football or traffic lights, and we accommodate the fluidity by changing the rules. As David Lewis pointed out, the constitutive rules in language games move and adapt themselves—they accommodate—in a way that the constitutive rules of a board game or a sport do not.

54

Lewis posits “rules of accommodation” for presuppositions and permissibility in language games. Lewis, supra note 50, at 234-35.

A $20 bill is not real in the sense of being part of nature (and hence governed by the laws of physics), but it achieves status as a social fact and its value lies in a regularity and a timelessness that makes it as describable and predictable as though it were part of nature. The meaning of the word “gay” can change over time and still have value; the same cannot be said about the meaning of a $20 bill.

I do not want to overstate this. Like many binary distinctions, the distinction between constitutive and regulative rules has clear prototypes at either end of the continuum and more difficult characterizations in the middle.

55

My views on this have evolved. Traffic rules themselves are constitutive but regulate another activity, namely, driving. Hence, they have both constitutive and regulative attributes. See Jeffrey M. Lipshaw, Models and Games: The Difference between Explanation and Understanding for Lawyers and Ethicists, 56 Clev. St. L. Rev. 613, 621-22 (2008).

My sense is that institutions are more real the less we argue about the constitutive rules that create them, and the fewer regulative rules we need to use them. A $20 bill is the product of many rules creating many institutions, many of them interlocking.
56

Searle, supra note 36, at 13-14, 18.

I cannot imagine ever arguing over the reality of a $20 bill. Similarly, in American football, a “touchdown” occurs “if any part of the ball is on, above, or behind the opponent’s goal line while legally in possession of an inbounds player, provided it is not a touchback.”
57

Official Playing Rules of the National Football League 11 (2017), https://operations.nfl.com/media/2646/2017-playing-rules.pdf.

That incorporates other clearly constitutive rules such as the layout of the field, including the boundaries and the goal lines.
58

Id. at 1-2.

There are rules that specify the attributes of the ball, but those are regulative, because they regulate an antecedent concept not otherwise created by the rules of football, namely, a “ball.”
59

Id. at 3.

We are unlikely to argue about constitutive rules establishing what a ball or a touchdown is. But we might well argue, under regulative rules, what a legal ball is.
60

Kevin A. Hassett & Stan A. Veuger, Deflating ‘Deflategate’, N.Y. Times, June 14, 2015, at SR9.

Or we might argue, for purposes of touchdowns, what it means to have possession of the ball legally.
61

Alex Kirshner, Why Zach Ertz’ key TD catch against the Patriots’ [sic] stood, but Jesse James’ didn’t, SBNation (Feb. 4, 2018), https://www.sbnation.com/nfl/2018/2/4/16972328/zach-ertz-td-eagles-patriots-super-bowl.

To see the constitutive rules as regulative, I would have to be able to argue that the previous play already is a touchdown under a reasonable interpretation of an antecedent understanding of “touchdown.” But because the concept of a touchdown is constitutive, my only recourse would be to modify the rules, for example, so that a play ending closer to the goal line than the one-yard line becomes a touchdown.

A smart contract like a bitcoin is “smart” because of a similar regularity and timelessness. The bitcoin is not merely code mapping an antecedent understanding. Its institutional reality lies in the collective attribution of status to the bitcoin by way of acceptance of the constitutive rules of blockchain as applied to currency. The individual’s subjective intentionality means as little to a bitcoin as my wanting a play that ends just short of the goal line to be a touchdown means to the game of football. Meanings so encoded are fixed in two directions. They are fixed at any moment in time across the population of meaning-interpreters. A $20 bill means the same thing to me in Cambridge, Massachusetts that it does to a Chinese entrepreneur in Guangzhou. And the meanings are fixed over time. A $20 bill may change in value relative to another currency over time, but its accepted meaning as a $20 bill is the same now as it will be in 2050. For bitcoins to have value, the same must be true. As one primer on blockchain observes, the key characteristics of the transaction system are consensus (all participants agree on its validity), provenance (the ownership and transfer of assets are completely transparent to the participants), immutability (all transactions are permanent once recorded and can be changed only by way of another valid transaction), and finality (the system provides the only means of determining the ownership of an asset or the completion of a transaction).

62

Gupta, supra note 10, at 7.
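The immutability and provenance properties in that list can be made concrete with a toy hash-chained ledger. This is a sketch under simplifying assumptions (a single party, no distributed consensus mechanism); it shows only why a recorded transaction cannot be quietly rewritten: each entry commits to the hash of the entry before it, so tampering anywhere breaks every later link.

```python
import hashlib
import json

class ToyLedger:
    """Append-only ledger where each entry commits to the prior entry's hash.

    Illustrative only: a minimal sketch of blockchain-style immutability,
    not a distributed or consensus-driven system."""
    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1][1] if self.entries else "genesis"
        payload = json.dumps(record, sort_keys=True) + prev_hash
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((record, entry_hash))
        return entry_hash

    def verify(self) -> bool:
        prev_hash = "genesis"
        for record, entry_hash in self.entries:
            payload = json.dumps(record, sort_keys=True) + prev_hash
            if hashlib.sha256(payload.encode()).hexdigest() != entry_hash:
                return False
            prev_hash = entry_hash
        return True

ledger = ToyLedger()
ledger.append({"asset": "unit 4B", "from": "A", "to": "B"})
ledger.append({"asset": "unit 4B", "from": "B", "to": "C"})
assert ledger.verify()
# Rewriting history invalidates the chain:
ledger.entries[0] = ({"asset": "unit 4B", "from": "A", "to": "Z"},
                     ledger.entries[0][1])
assert not ledger.verify()
```

The chain of hashes is also what supplies provenance: the full sequence of transfers of the asset is readable off the ledger, and no alternative history hashes to the same values.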

The term “smart contract” in this conception is, indeed, ironic. What a smart contract does, through blockchain technology, is to embody an asset, like a fork, money, or a security interest, and record the transfer of the asset from one owner to another. As an artifact, it is to contract design what a fork is to AlphaGo. It cannot adapt to changes in its environment without the significant aid of a human being. But that is the source of its value. How it operates, how it maps, what is going on inside is no more relevant to the bitcoin than it is to the fork or the $20 bill. Its institutional reality is marked by complete consensus, undebatable provenance, finality, and immutability.


B. The “Dumb Contract” as Map


1. The ontology of dumb contracts

Most contracts, from the very simple ones used to teach principles of contract law (“A agrees to sell B 100 bushels of wheat at $2.00 per bushel”) to the very complex ones used to transfer business entities or document sovereign debt, are unlike Bitcoin. There are elements of universal status attribution (e.g. it is a contract) in the same way that we would universally agree that the mix of lines and colors on the piece of paper in front of me purports to be a map of Cambridge, Massachusetts. Whether it is a good map of Cambridge or works for the user’s intended purpose is another question entirely. The same is true of most contracts toward the dumber end of the smart-to-dumb continuum. The less collective status attribution the natural language or code of the contractual artifact carries, the “dumber” it is.

Consider the element of “consensus” in a smart contract created wholly on blockchain versus one on the opposite end of the “smart-dumb” continuum. The “dumbest” of all contracts, at least in my coinage, would be one with precisely the opposite characteristics of transactions capable of being enclosed within blockchain. The contract would not create its own complete and unarguable institutional reality. It would instead be a linguistic map seeking to replicate another piece of reality, namely, an antecedent understanding. Would it carry a collective attribution of status (in Searle’s terms) or consensus (in smart contract terms) to create an institutional reality equivalent to a $20 bill? In a project finance package of documents, the promissory notes or letters of credit are pretty smart. They are contracts but akin to currency even if they are not negotiable. If you sign a promissory note, whether or not it accurately tracks an antecedent transaction is usually irrelevant. The 100-page loan agreement, full of representations, warranties, covenants, and technical and payment default rules, is not so smart. That is not to say that dumber contracts are not valuable. They simply do not do the same thing as a completely integrated “smart contract” like Bitcoin or one like a bank draft that comes close.

Let us posit an example. I own a unit in a condominium governed by a master deed. Some parts of that deed are “smarter” than others. The master deed itself creates the property, the units, as constitutive rules in the same way the rules of football create a touchdown. Nobody is going to argue that I do not have property in the form of a condominium unit. It is likely that, in the not-too-distant future, I will be able to transfer my ownership interest in the condominium unit by way of a transaction on blockchain.

But what about the arrangement is not so smart? As I suggested above, institutions are more real the less we argue about the constitutive rules that create them, and the fewer regulative rules we need to use them. The master deed also incorporates regulative rules mapping an antecedent understanding of a community interest in the appearance of the exterior of condominium units. One such regulative rule is that any new construction and any subsequent “change” to the exterior of a unit must be approved by an association review board. My house was originally approved with a steel roof, rather than cedar shake like the rest of the houses. I want to replace the leaking steel roof with another steel roof, but the review board has ruled that any new roof must be cedar shake. I do not want to spend the money on cedar shake. Is my proposed work on the roof a “change” or merely a replacement? We can argue about whether the deal was that even an identical replacement roof was to be considered a “change.”

Hence, even a contract that fixes rights in time, and does not purport to map or predict the future, is going to be dumber to the extent there are regulative rules the meaning of which is not the subject of effectively universal consensus. As a matter of social ontology, as demonstrated by the permanence of $20 bills versus the fluidity of language, constitutive rules may or may not change over time. But for many purposes, not the least of which are the utility of $20 bills or touchdowns, the subjective experience of moving through time has to be eliminated from any abstract conception of the reality that has been created. A $20 bill needs to be exchangeable for twenty singles into the foreseeable future. A first down requires gaining ten yards now and will require ten yards at least through the end of the current season.

The smarter the contract, the more closely it maps onto the actual deal. The smarter the contract, the more it relies purely on constitutive rules that are the subject of complete consensus, and the less it incorporates regulative rules about which the parties could argue. The smartest contracts, as with cryptocurrencies, are the deal. They exist and operate without need for any input from the outside or interpretation of the rules that embody them. But to the extent that the once dumb but increasingly smart contract maps rather than is the deal, it will need to fine-tune and fix at the outset the rules that would apply in every relevant future contingency.

2. Making dumb contracts smarter

a. Some examples

How smart could we make a contract modeling a complex transaction over time? The following are three modeling exercises I encountered in practice.

63

These were real deals. I have changed the names and the geographic details.

Lease amendment. The parties were Landlord and Tenant. Tenant was a major industrial corporation. Landlord developed, owned, and leased commercial office and industrial properties nationwide. Sometime earlier, Tenant executed a lease for 36,000 square feet from Landlord on a single floor in an office building in a corporate office park in a suburb of Cincinnati, Ohio (“Fleetwood”). The term of the lease for Fleetwood was ten years, and the parties were in the fourth year of the term. At that point, Tenant decided it no longer needed 36,000 square feet but did not want to lease a partial floor of a building. Tenant’s CEO also preferred to be in a building in another area of suburban Cincinnati closer to his home. Tenant was willing to extend the lease to create another ten-year term if Landlord could provide a satisfactory solution.

Landlord was interested in solving the problem and obtaining four more years on a lease commitment. It had a building and space that would perfectly fit Tenant’s need (“Riverview I”). The problem was that the space was currently occupied by another tenant (“Riverview Tenant”). Landlord was willing to extend Tenant’s lease for a new ten-year term if it could successfully negotiate an agreement with Riverview Tenant to vacate the space in Riverview I and move Tenant there from Fleetwood. Landlord was also willing to lease Tenant suitable space in a new building, yet to be constructed, adjacent to Riverview I (“Riverview II”) if Landlord were not successful in negotiating the foregoing agreement with Riverview Tenant.

For its part, Tenant wanted to move into Riverview I no later than one year following the execution of the lease extension agreement (the “Relocation Date”). If, ninety days before the Relocation Date, Landlord was still negotiating in good faith with Riverview Tenant, Tenant was willing to extend the Relocation Date by ninety days. Landlord, for its part, needed the ability to determine at some point that it would not be able to move Tenant into Riverview I, and to bind Tenant to a commitment to move into Riverview II no later than one year following the Relocation Date (the “One-Year Extended Date”). In turn, Tenant demanded that Riverview II be a building comparable to Riverview I, with a similar elevation and materially the same footprint, traffic patterns, and exterior appearance, with like finishes, all of which were to be subject to Tenant’s prior consultation.

Finally, Landlord needed the flexibility of an additional year, if necessary, to build Riverview II (the “Two-Year Extended Date”). Tenant needed to know, at least by the One-Year Extended Date, whether Landlord would be able to deliver the required space in Riverview I or Riverview II. Because it had too much space in Fleetwood, Tenant wanted Landlord’s commitment that, if Landlord could not deliver Riverview I by the Relocation Date, Tenant would get rent deductions at Fleetwood of a certain amount per month. Landlord was willing to do so, subject to a maximum cap, unless the reason it could not deliver Riverview I was on account of “Tenant Delay.” To top it off, the entire deal was to be contingent on one of Tenant’s subsidiaries entering into a lease agreement with Landlord for 130,000 square feet of space in the greater Birmingham, Alabama area, with such agreement to be executed within six months.
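A slice of the relocation contingencies just described can be encoded as nested conditionals. The names and simplifications here are hypothetical; the real agreement turned on many more facts (Tenant Delay, the rent-deduction cap, the Birmingham contingency) than this sketch captures, which is precisely the modeling difficulty taken up below.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class LeaseState:
    signed: date                       # execution of the extension agreement
    negotiating_in_good_faith: bool    # with Riverview Tenant, near the deadline
    riverview_i_available: bool        # Landlord can deliver Riverview I

def relocation_deadline(state: LeaseState) -> tuple[str, date]:
    """Toy encoding of the relocation contingencies as if-then-else rules.

    Hypothetical simplification of the deal described in the text."""
    relocation_date = state.signed + timedelta(days=365)
    if state.riverview_i_available:
        return ("Riverview I", relocation_date)
    if state.negotiating_in_good_faith:
        # Tenant agreed to a ninety-day extension if Landlord was still
        # negotiating with Riverview Tenant near the Relocation Date.
        return ("Riverview I (extended)", relocation_date + timedelta(days=90))
    # Otherwise Tenant is bound to Riverview II by the One-Year Extended Date.
    return ("Riverview II", relocation_date + timedelta(days=365))
```

Every branch added here is a prediction, fixed at signing, about which future states of the world matter; the branches the drafters fail to anticipate are exactly where the "dumb" contract leaves room for argument.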

Joint venture buy-sell. MegaCorp was a large American corporation with a diversified and multi-national portfolio. It had an automotive component business (call the component a “widget”) with a presence in North America, Europe, and Asia. A German corporation, Kleine AG, also had a widget business in Europe and Asia. MegaCorp and Kleine combined the two businesses into a global joint venture comprised of two constituent companies. One was a Delaware LLC named MegaKleine, LLC, in which MegaCorp owned two-thirds of the equity and Kleine the remainder. The other was a German company, KleineMega GmbH, in which Kleine owned two-thirds of the equity and MegaCorp the remainder.

The question was whether there should be an “exit” provision in the joint venture agreement to deal with its eventual breakup. The underlying understanding was that only Kleine and not MegaCorp would want to sell its interests in the joint venture. MegaCorp believed that, by virtue of its size, global presence, and access to capital, it would ultimately come to own all of the joint venture business and would have the leverage in any endgame negotiation with Kleine. Its preference was to have no such provision at all. Kleine, on the other hand, wanted the parties to be bound by an independent party’s determination of the price at which one joint venture partner would sell its interest to the other. MegaCorp was unwilling to have a price set other than by way of a market test.

The compromise embodied in the agreement was first to provide an exit methodology geared to Kleine’s concerns by limiting the circumstances under which it would apply. The only permitted triggers for the exit mechanism were (a) the existence of an irreconcilable deadlock between MegaCorp and Kleine (such a deadlock itself defined under a detailed set of rules), or (b) Kleine’s demonstrated inability to service out of the joint venture’s cash distributions the debt that it had incurred to enter into it in the first place. Second, the contractual methodology itself became complex on account of two factors: (a) the parties owned different percentages of the different components of the constituent joint venture entities, and there was no assurance that the entities would have equal values in the future; and (b) it was presumed that Kleine would not have the financial ability to be the acquiring party. Hence, simple (and relatively common) joint venture exit mechanisms like “I cut-You choose” or the “Texas shoot-out,” in which the party initiating the exit could be either the buyer or the seller, would not work. But the complexity of the final version of the methodology played to MegaCorp’s desire that any exit valuation occur by way of a negotiated deal and not a contractual mechanism.

Long-term coal supply. In the early 1970s, a combination of new United States environmental protection legislation and the Arab oil embargo prompted a number of major electrical utilities to secure long-term supplies of low-sulfur coal from surface mines in Wyoming and Montana.

64

Eugene T. Holmes, Negotiating, Drafting, and Enforcing Coal Supply Contracts, 9 Nat. Resources Law. 353, 356-61 (1976).

Some of these contracts extended for as long as twenty or thirty years.
65
Northern Indiana Public Service Co. v. Carbon County Coal Co., 799 F.2d 265, 267 (7th Cir. 1986).
The contracts locked in prices by way of base prices and escalation provisions designed to operate off of otherwise publicly available information (e.g. increases in tax rates, freight rates, price indices, government impositions, labor costs, etc.).
66
Id.; Holmes, supra note 64, at 364-71.
Those escalation clauses operated independently of the actual market price for coal.
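An escalation clause of this kind might be modeled as a weighted index adjustment. The weights and index values below are invented for illustration; the point is only that the adjustment is mechanical, driven by published data fixed into a formula at signing, rather than by the market price of coal.

```python
def escalated_price(base_price: float, base_index: dict, current_index: dict,
                    weights: dict) -> float:
    """Sketch of a hypothetical coal-contract escalation clause.

    The price adjusts by a weighted average of movements in published
    indices; each component's share of the base price is fixed at
    signing. Real clauses were far more detailed."""
    factor = sum(weights[k] * (current_index[k] / base_index[k]) for k in weights)
    return base_price * factor

# At signing: $10/ton, with labor 50%, freight 30%, taxes 20% of the price.
base = {"labor": 100.0, "freight": 100.0, "taxes": 100.0}
now = {"labor": 120.0, "freight": 110.0, "taxes": 100.0}
w = {"labor": 0.5, "freight": 0.3, "taxes": 0.2}
price = escalated_price(10.0, base, now, w)  # 10 * (0.5*1.2 + 0.3*1.1 + 0.2*1.0)
```

Because the formula runs off the indices and not the spot market, the contract price and the market price can diverge sharply over a twenty-year term, which is what produced the litigation cited above.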

b. Timelessness and the alignment problem

The hallmark of the foregoing exercises is the complexity of the modeling task. Each of the problems is capable of reduction to flow charts of “if-then, else” instructions. The problem is predicting at a fixed moment in time (the creation of the contract) which of the almost infinite array of real-world contingencies needs to be included in the flow chart that will constitute the spine of the contract.

Recall that the hallmarks of smart contracts expressed in computer code, in addition to consensus, include immutability and finality from the time they are created and going forward. We have already seen that a bitcoin or a $20 bill is itself immutable and final and simply moves through time. The rule about change in the condominium exterior is equally applicable in its present form no matter when in time it gets applied. Any contract that has to incorporate algorithms to account for changes in circumstances over time necessarily becomes a regulative rather than constitutive exercise in rule-making. That is because the contract seeks to map or predict in its rules antecedently or independently existing forms of behavior—namely the flow of events through time in the physical world—what is fair or foul in a game that already exists. As drafters, our job is to anticipate what rules need to apply in future states of the world. We fix the rules at the time of the making of the contract. While circumstances may change over time, the rules do not. We may anticipate a change in circumstance and a different rule that goes into effect at a particular moment in time, but even that anticipated or contingent rule gets fixed at the time of the making of the contract.

Trying to make contracts smarter, to operate like calculating machines rather than humans, is the project of making a science out of what most lawyers have previously considered an art. The problems inherent in making dumb contracts smarter, in scientizing them, become clearer if we contrast modeling the complex transactions described above with the kind of modeling done by physicists and economists. In particular, we need to focus on the role of timelessness and time in the modeling. Timelessness in the physical and social sciences means that the rules themselves do not change over time. They are timeless. In physics, the second law of thermodynamics or Schrödinger’s wave equation was the same a million years ago and will be the same in another million years. Once you identify the system being modeled, it is deterministic. If you know its initial configuration, the initial direction and speed of changes in the system, and the forces the system will be subject to as it changes in time, you can predict the future state of the system.

67

Lee Smolin, Time Reborn: From the Crisis in Physics to the Future of the Universe 43-44 (2013). I first encountered and discussed Smolin’s book in connection with a critique of Ronald Gilson’s conception of the economics of transactional lawyering. What Is It Like to Be a Beetle? The Timelessness Problem in Gilson’s Value Creation Thesis, 15 U.C. Davis Bus. L. J. 23, 31-34 (2015). I have borrowed from that discussion in what follows.

“In classical physics, the space of states is a mathematical set. The logic is Boolean, and the evolution of states over time is deterministic and reversible.”
68

Leonard Susskind & Art Friedman, Quantum Mechanics: The Theoretical Minimum 94 (2014).

The physicist Leonard Susskind calls this the “minus-first” law: the conservation of information. “The conservation of information is simply the rule that every state has one arrow in and one arrow out. It ensures that you never lose track of where you started.”
69

Leonard Susskind & George Hrabovsky, The Theoretical Minimum: What You Need to Know to Start Doing Physics 9-10 (2013). Quantum physics is also deterministic in the sense of the conservation of information and distinctions. Susskind & Friedman, supra note 68, at 94-97, 274. At least between experimental observations, “the state of a [quantum] system evolves in a perfectly definite way, according to the time-dependent Schrödinger equation.” Id. at 126. It is the act of measurement that “collapses” the wave function, permitting observation of only one of two complementary (“non-commuting”) properties, like position and momentum. Id. at 127, 137-39. In short, the unmeasured quantum system is deterministic; it is the act of measuring one quantity that “destroys any information we may have had about the other one.” Id. at 130.

This is the Newtonian paradigm. In it, mathematical expressions we use to describe and predict the flows of things are unchanging and timeless, even if in reality we move through moments in time.

70

Smolin, supra note 67, at 43-44. Somewhat paradoxically, the conception of time is timeless in the sense that it does not change over time. In quantum mechanics as well, the aspect of conservation of the system’s information over time is called unitarity. Susskind & Friedman, supra note 68, at 94-95. “In physics lingo, time evolution is unitary.” Id. at 99. In other words, even in quantum physics, time itself does not change over time.

In that mathematical model, time itself sits outside the systems being measured. It is absolute and “unitary.”
71

Susskind & Friedman, supra note 68, at 94-99.

In short, a timeless world is one in which every final configuration is simply the initial configuration acted upon by the laws of physics. Nothing novel or surprising can occur: “What the Newtonian paradigm does is replace causal processes—processes played out over time—with logical implication, which is timeless.”
72

Id. at 51. Smolin describes the paradox alternatively as follows: “If the universe is all that exists, then how can something exist outside it for it to be described by? But if we take the reality of time as evident, then there can be no mathematical equation that perfectly captures every aspect of the world, because one property of the real world not shared by any mathematical equation is that it is always some moment.” Id. at xvi.

The result of the paradigm is, however, still a model and thus an incomplete representation of physical reality. There can be no scientific determinism (a la Laplace) under which the laws of classical physics predict the future in perfect and infinite detail even with complete knowledge of all the laws and vast computing power. That is because the space of states in classical physics is continuously infinite, described in the model by real numbers and not discrete integers. Because the observer can never know the initial conditions with infinite precision, the ability to predict (or trace backwards) is limited. Moreover, even in classical physics, the choice of where to begin the measurement of initial conditions is subjective and not part of the deterministic system.
73

Susskind & Friedman, supra note 68, at 13-14; Michael S. Gazzaniga, The Consciousness Instinct: Unraveling the Mystery of How the Brain Makes the Mind 178-79 (2018).
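The limit imposed by imprecisely known initial conditions can be illustrated with a standard toy system, the logistic map. The example below is mine, not drawn from the sources cited, and the starting values are arbitrary:

```python
# A deterministic rule applied to two starting points that differ by one
# part in a billion: after a few dozen iterations the trajectories no
# longer resemble each other. Determinism without infinitely precise
# knowledge of initial conditions does not yield prediction.
r = 4.0                    # fully chaotic regime of the logistic map
x1, x2 = 0.2, 0.2 + 1e-9   # initial conditions a billionth apart
for _ in range(50):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
print(abs(x1 - x2))        # far larger than the initial 1e-9 gap
```

The rule itself is timeless and fully deterministic; only the observer's finite precision limits prediction, which is the point in the text.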

The theorists who most aspire to find physics-like regularities in contracting behavior are law and economics scholars. The reason is not hard to see. Contracts anticipate a future state of the world and provide an outcome if the conditions are met. They do so by describing transactions as a series of antecedent conditions related to legal consequences connected by rules of inference in truth-functional logic. The predominant rule of inference is modus ponens: if p, then q; p; therefore q.

74
See infra notes 103-108 and accompanying text.
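The shape of the inference is simple enough to state in code. The clause and facts below are hypothetical, chosen only to show the modus ponens structure:

```python
# Modus ponens: if p, then q; p; therefore q.
# Hypothetical clause: if delivery is late (p), the buyer may cancel (q).

def modus_ponens(p: bool, p_implies_q: bool) -> bool:
    """Given premise p and premise (if p then q), conclude q."""
    if p and p_implies_q:
        return True  # q follows deductively
    raise ValueError("the premises do not license the inference")

delivery_is_late = True                    # p
late_delivery_permits_cancellation = True  # if p then q
print(modus_ponens(delivery_is_late, late_delivery_permits_cancellation))
# True: the buyer may cancel
```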
Not surprisingly, however, the same issues of time and timelessness affecting the laws of physics affect the laws of economic behavior.
75
The physicist Smolin asserted that the utility function and single equilibrium assumptions of neoclassical economics deal with time just as physics has: they abstract it away. Id. at 261. The simplifying assumptions of the pure theory of neoclassical economics make for “a sense of inhabiting a timeless realm of pure truth, against which the time and contingencies of the real world pale.” Id. at 262-63.
Like physicists creating models of future state changes in complex real-world physical systems under timeless laws, economists reduce complex real-world contracting behavior to timeless laws (i.e. the observed regularities of economic behavior which themselves are presumed not to change over time), albeit without a “minus-first” law that conserves information. Hence, the perfect contract in economic terms would be “complete”: i.e. it would anticipate every state-contingency of the world. Like the transaction in the physical universe it was modeling, the contract would hold time evolution to be a unitary attribute of the system.
76

See Oliver Hart & John Moore, Incomplete Contracts and Renegotiation, 56 Econometrica 755, 756 (1988); Oliver Hart & John Moore, Foundations of Incomplete Contracts, 66 Rev. Econ. Stud. 115 (1999); Eric Posner, Karen Eggleston, & Richard Zeckhauser, The Design and Interpretation of Contracts: Why Complexity Matters, 95 Nw. U. L. Rev. 91, 98 (2000); Oliver Hart & John Moore, Incomplete Contracts and Ownership: Some New Thoughts, 97 Am. Econ. Rev. 182, 183 (2007); Hadfield-Menell & Hadfield, supra note 20.

Time marches on, and the rules embedded in the contract must be sufficiently complete—i.e. timeless—to incorporate all the desired state changes during the march.

To be clear, I do not suggest that any economist proposes that a “complete”—i.e. timeless—contract is possible; rather, the work is almost always in reconciling the incompleteness of contracts with a theory that would be more coherent if they were complete. As Hadfield-Menell and Hadfield observe, “economists and legal scholars have recognized that writing complete contracts is routinely impossible for a wide variety of reasons.”

77
Hadfield-Menell & Hadfield, supra note 20, at 2.
This is because states of the world may be unobservable or, if observable, unverifiable by contract enforcers, humans might not be able to predict all possible states of the world, evaluate the optimal actions in them, or determine optimal incentives. Their ability to describe states of the world and optimal actions unambiguously might be limited or too costly to write down. Finally, some of the parties’ intentions in certain instances—say a storm wipes out a crop that seller has contracted to sell to buyer—might not be specified. In short, “[b]ecause the contract does not specify the intended behavior in all contingencies it is incomplete. Contracts in human relationships are usually, and maybe necessarily, incomplete.”
78
Id.

But there is a nice synergy between the economist’s state contingencies and codability. All digital computers are Turing machines that work by creating a finite number of states constituting the relationship between the program and the inputs on which the program is running.

79
See infra notes 101-102 and accompanying text.
Smart contracts, ones that would fully map arrangements over time, would have to run on a Turing machine. They would need to be constructed out of Turing-complete code, meaning that the reality the smart contract creates would be replicable on any universal Turing machine, or in other words, a digital computer. Indeed, Hadfield-Menell and Hadfield have speculated on the extent to which artificial intelligence can advance the creation of contracts that are more complete (i.e. anticipate more state contingencies) in the economic sense.
80

Hadfield-Menell & Hadfield, supra note 20, at 12:

By recognizing and elaborating the parallels between the challenge of incomplete contracting in the human principal-agent setting and the challenge of misspecification in robot reward functions, this paper provides AI researchers with a different framework for the alignment problem. … Our most important claim is that aligning robots with humans will inevitably require building the technical tools to allow AI to do what human agents do naturally: import into their assessment of rewards the costs associated with taking actions tagged as wrongful by human communities. These are the lessons learned by economists and legal scholars over the past several decades in the context of incomplete contracting.
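The sense in which a digital computer moves among a finite set of discrete states can be made concrete with a minimal Turing machine simulator. The machine below, which appends a “1” to a unary string, is my own illustrative example:

```python
# A minimal Turing machine: a finite table mapping (state, symbol) to
# (new state, symbol to write, head move). This one scans right over a
# unary string and appends a "1", i.e., it increments a unary number.
def run_turing_machine(table, tape, state="start"):
    tape = dict(enumerate(tape))  # sparse tape; blank cells read as "_"
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")
        state, write, move = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

increment = {
    ("start", "1"): ("start", "1", "R"),  # scan right over the 1s
    ("start", "_"): ("halt", "1", "R"),   # write one more 1, then halt
}
print(run_turing_machine(increment, "111"))  # prints 1111
```

Everything the machine ever does is one of a finite number of (state, symbol) configurations; a smart contract running on a digital computer is confined to the same kind of discrete state space.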

Because smart contracts are marked by complete consensus, undebatable provenance, finality, and immutability, even when they are maps of antecedent and independently existing arrangements, the rules from which they will be built will be as timeless and unvarying as the laws of physics. This is what Hadfield-Menell and Hadfield refer to as the “AI-alignment problem”:

[It] arises because of differences between the specified reward function and what relevant humans (the designer, the user, others affected by the agent’s behavior) actually value. AI researchers intend for their reward functions to give the correct rewards in all states of the world so as to achieve the objectives of relevant humans. But often AI reward functions are—unintentionally and unavoidably—misspecified. They may accurately reflect human rewards in the circumstances that the designer thought about but fail to accurately specify how humans value all state and action combinations.

81
Id. at 1.

In addition, if we are going to try to map human affairs by way of timeless laws, the ability of that mapping to conserve all information going forward is going to be limited by the precision of the initial state (i.e. how closely did the contract map on exactly the conditions affecting the parties at the time it was drafted) and the subjectivity of the contract drafter, as discussed earlier in connection with physics.

82

See supra notes 67-72 and accompanying text.

The alignment and initial state problems do not only exist for AI-contract designers. Even the most sophisticated digital computer program does no more than replicate (except much, much faster) a hypothetical human computer using a pencil and paper.

83

Alan M. Turing, Computing Machinery and Intelligence, 59 Mind 433 (1950), in The Essential Turing 441, 444 (B. Jack Copeland ed., 2004).

Every lawyer who drafts a more-than-passingly complex contract confronts potential alignment problems of precisely the same nature. The alignment issue, not surprisingly, reared its head in each of the examples. The lease amendment failed to anticipate that Tenant would be acquired by another corporation and that any space for a world headquarters was redundant. The joint venture exit methodology failed to anticipate that it was MegaCorp, not Kleine, that first decided to exit the widget business. The long-term coal supply agreement failed to anticipate a collapse in world energy prices in the early 1980s, causing the automatic price escalators to create a significant gap between the market price and the contract price. In one case, the price of the coal had escalated from $24 per ton in 1978 to $44 per ton in 1985 while the market price of equivalent coal had collapsed to a point that the coal cost more to mine than it was worth on the market.
84
799 F.2d at 267, 279. In the NIPSCO case, the utilities simply walked away from the contract, incurring a $181 million damage award in favor of Carbon County and causing the mine to shut down. Id. at 268.

c. Social ontology, computer ontologies, and institutional realities

For all of the example contracts to be “smart,” in the sense of immutability and finality, they would have had to do something more than merely create the cyber version of a $20 bill. We will assume that each could do “smart” things to be self-executing, like tap into bank accounts, record liens and mortgages, engage contractors, and do everything that a human contract administrator might do. More significantly, they would have needed to anticipate and map a flow chart extending from the date of execution through various contingencies to a final resolution. In short, unlike a fork or a $20 bill or a bitcoin that is timeless in the sense that it merely moves through time, these contracts would have needed to incorporate time into the very understanding between the parties. If they were truly smart contracts, they would have been timeless in the sense that, once created, they would be immutable and final. Unlike the fluid constitutive rules of language but like the fixed constitutive rules of football, the contract rules could not be permitted to change once the deals were sealed.

In computer science, an ontology is a formal definition of concepts and their relationships, related to a domain of interest.

85

Johannes Busse, et al., Actually, What Does “Ontology” Mean? A Term Coined by Philosophy in the Light of Different Scientific Disciplines, 23 J. Computing & Info. Tech. 29, 31 (2015).

There is at least a family resemblance in the usages of the term across the disciplines of philosophy and computer science having to do with an intense interest in how to describe and categorize various states of being. The difference, I think, is that philosophers think about what “is” in terms of what the physical, metaphysical, and social worlds bring to the observer; in computer science, on the other hand, an ontology is a reality the programmer creates because it is helpful in achieving a particular end. An example would be programming a search engine to help a traveler find a suitable vacation spot. When the user asks to find a pet-friendly hotel on a New England lake that permits motorized watercraft, the program needs to be able to place all those terms within an alternative “tourism” reality. That is a “tourism” ontology.
86
Id. at 38.
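A toy version of such a “tourism” ontology can be sketched as data plus a query. Everything here, the categories, the hotel names, and the attributes, is hypothetical:

```python
# A toy "tourism" ontology: concepts (hotel, lake, region) and properties
# (pet-friendly, motorboats allowed) defined for one narrow purpose.
hotels = [
    {"name": "Loon Lodge", "lake": "Lake Winnipesaukee",
     "region": "New England", "pets": True, "motorboats": True},
    {"name": "Quiet Cove Inn", "lake": "Walden Pond",
     "region": "New England", "pets": False, "motorboats": False},
]

def find_hotels(region, pets, motorboats):
    """Answer the traveler's query within the ontology's own categories."""
    return [h["name"] for h in hotels
            if h["region"] == region
            and h["pets"] == pets
            and h["motorboats"] == motorboats]

print(find_hotels("New England", pets=True, motorboats=True))
# ['Loon Lodge']
```

The program can answer only questions posed in the categories the programmer created; anything outside the ontology simply does not exist for it.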

There is a similar ontology of contract documentation. In my first-year contract law class, we turn to legal issues concerning the scope of the contract at the beginning of the second semester, after we have spent the first semester learning about contract enforceability (consideration and related doctrines) and formation (offer and acceptance and related doctrines). At this point in the course, I ask the students to assume that the parties have formed an enforceable contract. The issues will then be what they agreed to, particularly when they reduced their agreement to writing. This will involve understanding the status of language that they used coming to an agreement but which did not make it into the written document (the parol evidence rule), the meaning of the language they actually included in the document (interpretation), and the possibility that there was language they never shared, either orally or in writing, that should nevertheless be considered part of the written contract (implied terms).

I suggest to the students that all of these issues relate to the process of getting the essential point of the understanding across using fewer bits and bytes of information than we would use to describe the entirety of our relationship. By doing so we create an objective record, capable of being read by a third party, of what was previously merely a second-person, inter-subjective communication. That is, the contract is a model of our relationship, but it is not the whole of our relationship. For example, A in Los Angeles and B in New York sign a letter of intent that they intend to execute a contract to sell a business. They set a deadline date for the execution of the agreement. They even agree in writing that the seller will prepare the first draft. They realize that the definitive agreement is going to take multiple negotiation sessions over the course of several weeks. Whose office? They may well not put that in the agreement, even though they could. Indeed, no matter how complex the written agreement is, even if it is a 100-page merger agreement, the real world is always more complex because the real world is all of reality and not just what was reduced to writing.

Moreover, we can never code our entire relationship (i.e. make what the economists call a complete contract), but we could code more of it. Yet at what cost? There are other examples of reduction. Sometimes the reduction furthers our ends and sometimes it does not. For example, we could think of a live tennis game as a metaphor for our relationship and a computer game version of tennis as a metaphor for the contract. We could agree that, instead of playing tennis, we would use a model for tennis. I then suggest there are alternative computer models for tennis. My reference to the old Atari Pong (one square dot and two rectangular paddles) usually gets a laugh. EA Sports, on the other hand, makes a version of tennis for the Nintendo Wii machine. If we want to model real tennis, the advantage of Pong is that it is cheaper to create and has relatively few rules to administer. But it bears only the slightest relationship to actual tennis. The advantage of the EA Sports version is that it is far more like real tennis, even though it is not real tennis. The website proclaims, “EA Sports Grand Slam Tennis puts the racket in the palm of your hand and offers the deepest tennis experience ever for the Wii.”

87

EA Sports, Grand Slam® Tennis, https://www.ea.com/games/tennis/grand-slam-tennis (last visited Sept. 15, 2018).

Its disadvantage is that it is far more complex, far costlier to create, and has far more rules to adjudicate. Without seeking to stretch the metaphor too far, I claim the contract is a model of reality in the same way that Pong or EA Sports Grand Slam® Tennis can be a model of reality.

The question is how much of the physical world gets replicated in the cyber world. That is the issue of computer ontology. A computer ontology that permits a relationship like the lease amendment, the joint venture exit, or the coal contract to be smart (or at least smarter) is going to have to be substantially more developed and incorporate far more of the real world than the word processing programs used to write their dumber counterparts. The viability of a smarter contract that incorporates changes in circumstances over time will depend on human willingness to submit to its ontology.

I suspect dumb contracts will persist because consensus on a shared ontology is unlikely in the vast majority of cases, even if the parties were confident in the model’s ability to track changes in circumstances over time (see the discussion of computational complexity that follows). Just as there are no private languages (per Wittgenstein) if we are to be able to speak to each other, so too, when we talk metaphorically about the contractual “meeting of the minds,” there are no private ontologies. Rather, each of us has a subjective viewpoint on an objective world, in life, law, and transactions, including the artifacts we have created in non-private language and which we label as our “contract.” And, unlike the artifact, which is timeless, our subjectivity moves through time. What we wanted before, we may not want later. In short, we will continue to use non-private languages to translate our private and subjective desires into intersubjective understandings. We will create objective and timeless artifacts of those understandings in the same non-private language. Those artifacts may or may not become obsolescent in the real world over time. They may or may not continue to be congruent with subjective desires. That will continue to be true, notwithstanding the work being done in the automation of routine contract drafting.

d. Computational complexity

Another response to the economist’s complete contract or a programmer’s complete smart contract ontology lies at the intersection of legal and computational complexity. Eric Kades’ 1997 article anticipated the issue, even if computer processing technology has advanced significantly in the twenty-one years since.

88

Eric Kades, The Laws of Complexity and the Complexity of Laws: The Implications of Computational Complexity Theory for the Law, 49 Rutgers L. Rev. 403 (1997).

Legal complexity arises from the sheer number of rules and the infinite variety of circumstances to which they apply.
89
Id. at 407-21.
Computational complexity is, in contrast, a mathematical concept whose application to computing is the ability “to produce quantifiable measures of how long a program will take to process input of various sizes.”
90
Id. at 427.
Professor Kades’ focus was on the extent to which computer technology could make previously unsolvable problems manageable, on the one hand, and on the limitations computational complexity imposes on such efforts, on the other, in a number of areas: debtor-creditor priorities, tax liens, bankruptcy, corporate voting, criminal conspiracies, and complex contracts.
91
Id. at 444 et seq.
In short, some problems become so complex because of the number of variables as to be intractable, and at that point, even programmers need to resort to heuristics over deductive or computational logic.
92

Id. at 482.

Again, even if the parties were willing to submit to a contract capable of a complete ontology effective over time, there are significant questions whether that ontology would require a computational complexity beyond the limits of practical programming.
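The arithmetic behind intractability is stark. A contract that assigned an outcome to every combination of n independent yes-or-no contingencies would need a clause for each of 2^n states of the world, as this illustrative calculation shows:

```python
# A "complete" contract over n independent binary contingencies must
# address 2**n states of the world; the table explodes long before n
# reaches the number of contingencies in a real transaction.
for n in (10, 30, 50):
    states = 2 ** n
    print(f"{n} contingencies -> {states:,} states")
# 10 contingencies -> 1,024 states
# 30 contingencies -> 1,073,741,824 states
# 50 contingencies -> 1,125,899,906,842,624 states
```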


II. Logic and Computability in Smart and Dumb Contracts


The tantalizing possibility of the “smart contract” idea extending beyond thing-like cryptocurrencies to map-like contracts arises from the fact that contracts and contract law can, and indeed often do, have a formally logical structure. Yet many concepts that are capable of expression and manipulation in formal logic cannot be expressed in computer code without creating proxies in the discrete symbols of binary code for the continuous concepts otherwise expressible in language or logic. If you posit a smart universe in which smart contracts operate, its common language would be computer code. The reality the smart contract creates is capable of being replicated on any universal Turing machine, or in other words, a digital computer. Hence the language of the smart contract is obliged to be not only expressible in a formal logic, but it also has to be computable. In the smart universe, all of reality consists of a set of states between which you move in discrete steps.

The hallmarks of a formal coding language are (a) strict syntax, (b) that the expressions in the language have no inherent meaning, and (c) near or complete non-ambiguity. The distinction between formally logical and computable is critical here. Consider all of the contractual standards lawyers regularly employ that could be quantified on a scale, say of one to ten, but which are in reality polar continuums capable of clear examples on either end but significantly gray the closer to the middle the example lies. A partnership requires only a majority vote of the partners for approval of ordinary matters, but a unanimous vote for extraordinary ones. The president of a corporation has actual express and implied authority to bind the corporation in connection with its “day-to-day operations,” but needs board approval otherwise. A carpenter is obliged to build the frame of a house in a workmanlike manner. Article 2 of the Uniform Commercial Code is notorious for being replete with these vague standards expressed in wiggle words like “reasonable,” “seasonable,” “satisfactory,” “fit,” or “merchantable.”

One can re-create the operation of the contract terms or the governing law in first-order predicate logic that satisfies (a) and (b) of the coding language parameters. But first-order predicate logic only requires that the language have a formal syntax and grammar; it does not require complete non-ambiguity. Even with ambiguous predicates, classical logic can take us deductively from a set of premises to a valid conclusion that must be true assuming the premises are true, as long as we correctly employ the appropriate rules of inference.

93

Stewart Shapiro & Teresa Kouri Kissel, Classical Logic, Stan. Encyclopedia of Phil. (Mar. 11, 2018), https://plato.stanford.edu/entries/logic-classical.

The problem with the wiggle words in computer code rather than mere predicate logic is (c): to be computable, everything must be expressible by discrete rather than continuous mathematics. For a more widely understood example, take the judging of competitive figure skating. A “lutz” is a toepick-assisted jump with an entrance from a back outside edge and landing on the back outside edge of the opposite foot. Making a judge “smart” here would mean first translating the characteristics of a lutz into sentences in first order predicate logic. Some predicates would be relatively objective criteria like “the jump was high,” “the jump was long,” “the jump was fast.” But there would also have to be sentences covering relatively subjective criteria like “the jump was pretty,” or “the jump was original.”
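Turning those predicates into code forces exactly the line-drawing just described. The thresholds below are arbitrary stipulations of my own, which is the point: someone must draw the line before the continuous becomes computable.

```python
# Discrete proxies for continuous skating predicates. The cutoffs
# (0.6 meters; 7.0 out of 10) are stipulated, not derived: converting
# a continuous standard into code requires a line in the sand.
def is_high(jump_height_m: float) -> bool:
    return jump_height_m >= 0.6          # proxy for "the jump was high"

def is_pretty(judge_score_0_to_10: float) -> bool:
    return judge_score_0_to_10 >= 7.0    # proxy for "the jump was pretty"

lutz = {"height_m": 0.62, "prettiness": 6.9}
print(is_high(lutz["height_m"]), is_pretty(lutz["prettiness"]))
# True False
```

A jump scored 6.9 is "not pretty" and one scored 7.0 is "pretty," although nothing about prettiness itself changes between the two; the discreteness is an artifact of the coding.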

Much (if not most) of the natural language lawyers use to draft contracts is ambiguous or vague, and thus resists the complete non-ambiguity of computer code. Professor Bayless Manning captured this in his “law of the conservation of ambiguity”:

Elaboration in drafting does not result in reduced ambiguity. Each elaboration introduced to meet one problem of interpretation imports with it new problems of interpretation. Replacing one bundle of legal words with another bundle of legal words does not extinguish debate; it only shifts the terms in which the debate is conducted.

94

Bayless Manning, Hyperlexis and the Law of Conservation of Ambiguity: Thoughts on Section 385, 36 Tax L. 9, 21 (1982); see also Bayless Manning, Hyperlexis: Our National Disease, 71 Nw. U. L. Rev. 767 (1977); Andrew Stumpff, The Law is a Fractal: The Attempt to Anticipate Everything, 44 Loyola U. Chi. L. J. 649 (2013).

The downside of such ambiguity or vagueness is what I have previously called “lexical opportunism,” the ability to use it to further ends that likely were never in the contemplation of either party when they executed the contract.

95

Jeffrey M. Lipshaw, Lexical Opportunism and the Limits of Contract Theory, 84 U. Cin. L. Rev. 217 (2016).

If contracts can be logical, can they also achieve the precision of which computer code is capable? The thesis here is that it is possible, even likely, but not without the aid of human-like intelligence. The reasons to expect the persistence of dumb contracts go beyond ontology. The very language in which one would need to express the smart contract, that of a digital computer, requires someone or something, even at an almost inconceivably deep level of granularity, to draw a line in the sand, thus converting the continuous into the discrete, from merely logical to wholly computable. That someone or something cannot be the computer itself. One commonly cited provision of Article 2, §2-314, the implied warranty of merchantability, demonstrates this.


A. The Formal Logic of UCC Article 2


Let us consider §2-314 by way of the formal syntax of first-order predicate logic. Article 2’s wiggle words like “reasonable,” “seasonable,” “satisfactory,” “fit,” or “merchantable” work just fine in this system. As with prettiness and originality in skating, they are objective, conventional, but continuous standards; nevertheless, they can only be assessed from somebody’s subjective standpoint. When two skating judges have different assessments of prettiness, they can average them together for a final score. When two parties to a sales contract have a different view of the fitness of a good for its ordinary purpose under §2-314 of the U.C.C., we have a legal dispute to be resolved. Moreover, we can set that predicate of fitness standard into a formal proof by which an award of damages is inferable deductively from the truth of the fitness predicate. But, as we will see in the next section, to make application of legal rules with continuous, subjective predicates not only logical but computable (and thus “smart”), we are going to need to translate them into discrete ones.

Let us take the example of a buyer’s right to recover damages if there is a breach of the warranty of merchantability. The long-term coal supply contract referred to above would be governed by Article 2 of the UCC. The utility uses it in the boilers of electric power plants (the coal gets burned, heating water to steam which drives a turbine that generates electricity). It turns out the coal has so much moisture content that it would not work to generate sufficient steam in any currently operating boiler. Can we demonstrate the buyer’s right to damages under Article 2 as a formally logical implication of that fact?

Accepting Fred Schauer’s characterization of legal rules as entrenched generalizations,

96

Schauer, supra note 51, at 17.

I am going to precede each logical statement of a rule by the universal quantifier ∀. The warranty of merchantability is UCC §2-314(1). It translates into formal logic, as follows, and will be a premise in our deduction of the right to recover.
97

I am using natural deduction symbols and rules of inference found in Patrick J. Hurley & Lori Watson, A Concise Introduction to Logic (13th ed. 2017). For another reduction of natural language legal statement to natural logic, see Layman E. Allen, Symbolic Logic: A Razor-Edged Tool for Drafting and Interpreting Legal Documents, 66 Yale L. J. 833 (1957).

Natural language

Formal logic

2-314(1): a warranty that the goods shall be merchantable is implied in a contract for their sale if the seller is a merchant with respect to goods of that kind.

∀x∀y[(Gx & Sy & My & Tyx) →Wx], where

    G=good

    S=seller

    M=merchant

    T=sells

    W=is warranted to be merchantable

In natural language, the logical sentence on the right reads “For every x and every y, if x is a good and y is a seller and a merchant and sells the good, then x is warranted to be merchantable.” Or, if I am a merchant and I sell a good, it will be merchantable or I am in trouble.

What does it mean to be merchantable? To simplify things, I am going to use only one definition of merchantable in §2-314(2), namely sub (c), which provides that merchantable goods “must be at least such as … are fit for the ordinary purposes for which such goods are used.”

98

The reason for the simplification is that §2-314(2) has six acceptable properties for “merchantable.” If I were to call those six properties Q, R, T, U, V, and Y, the correct logical statement would be ∀x((Qx v Rx v Tx v Ux v Vx v Yx)↔Sx). I do not need to get that detailed to make the point.

In logical notation, it would be something like:

Natural language

Formal logic

2-314(2)(c): merchantable goods “must be at least such as … are fit for the ordinary purposes for which such goods are used.” (For simplicity, I am only using one definition of merchantable. To be complete I would use a long disjunctive sentence.)

∀x(Fx ⇔ Vx), where

    F=fit for the ordinary purpose

    V=merchantable

In natural language, it is the biconditional “for every x, it is merchantable if and only if it is fit for ordinary purposes.” It must be a biconditional because there cannot be a case where something is merchantable but is not either fit for ordinary purposes or one of the other indicia of merchantability. I am leaving “good” as a property of the thing out, because it is not necessary to the equivalence.

UCC §2-714 establishes the right of a buyer to be compensated in damages if the buyer accepts a non-conforming tender and gives the appropriate notice.

Natural language

Formal logic

2-714: Where the buyer has accepted goods [and given notification (subsection (3) of Section 2-607)] he may recover as damages for any non-conformity of tender the loss resulting in the ordinary course of events from the seller’s breach as determined in any manner which is reasonable.

∀x∀y∀z[(Gx & Bz & Sy & Azx & Nzy & -Cx)→Dzy], where

    G=is a good

    S=is a seller

    C=is conforming

    B=is a buyer

    A=accepts

    N=notifies

    D=recovers damages from

In natural language, the logical sentence reads: “For every x, if x is a good and is not conforming, then for every y and z, if y is a buyer and z is a seller, and buyer accepted the good and notified seller, y recovers damages from z.”

The condition we are most concerned about is -Cx, “is not conforming.” Referring to UCC §2-607(3), we can find a connection between a breach and the predicate “non-conforming.” It seems reasonable to say that a breach of the warranty of merchantability occurs when a good that is warranted to be merchantable (Wx) is not merchantable (-Vx), and that such a good does not conform. Hence:

Natural language

Formal logic

2-607(3): Where a tender has been accepted(a) the buyer must within a reasonable time after he discovers or should have discovered any breach notify the seller of breach or be barred from any remedy.

∀x((Wx & -Vx) → -Cx), where

    W=warranted to be merchantable

    -V=not merchantable

For every x, if x is warranted and not merchantable, it is not conforming.

We need to apply the rules to a particular situation, which means that the coal at issue here needs to be represented logically not by a variable, but by a logical constant. We will represent the circumstances as follows:

Natural language

Formal logic

A good

Gg

A seller who is a merchant sold the good

Ss & Ms & Tsg

There is a buyer

Bb

The good is not fit for ordinary purposes

-Fg

The buyer accepted the good

Abg

The buyer notified the seller

Nbs

If I now begin with the premises that a specific good has been sold, that it has been accepted, that the buyer has notified the seller, and the good is not fit, I can prove formally that the buyer is to be compensated in damages.


Step

Justification

1

Gg

Premise

2

Ss & Ms & Tsg

Premise

3

Bb

Premise

4

-Fg

Premise

5

Abg

Premise

6

Nbs

Premise

7

∀x∀y[(Gx & Sy & My & Tyx) →Wx]

Premise

8

∀x(Fx ⇔ Vx)

Premise

9

∀x∀y∀z[(Gx & Bz & Sy & Azx & Nzy & -Cx)→Dzy]

Premise

10

∀x((Wx & -Vx) → -Cx)

Premise

11

(Gg & Ss & Ms & Tsg) →Wg

Universal instantiation (UI) 7

12

Fg ⇔ Vg

UI 8

13

(Gg & Bb & Ss & Abg & Nbs & -Cg) → Dbs

UI 9

14

(Wg & -Vg) → -Cg

UI 10

15

Gg & Ss & Ms & Tsg

Conjunction (CONJ) 1, 2

16

Wg

Modus ponens (MP) 11, 15

17

(Fg → Vg) & (Vg → Fg)

Material equivalence 12

18

Vg → Fg

Simplification 17

19

-Vg v Fg

Implication 18

20

-Vg

Disjunctive syllogism 4, 19

21

Wg & -Vg

CONJ 16, 20

22

-Cg

MP 14, 21

23

Gg & Bb & Ss & Abg & Nbs & -Cg

CONJ 1, 3, 2, 5, 6, 22 (Ss by simplification from 2)

24

Dbs

MP 13, 23



Q.E.D.

That is the formal logic of Article 2. In this form, it is logically impeccable but not smart.
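The derivation above can be carried out mechanically. Here is a minimal sketch (my own illustration, not from the Article): the instantiated premises become propositional facts, the universally instantiated conditionals become rules, and forward chaining by modus ponens produces Dbs, the conclusion that the buyer recovers damages from the seller. The biconditional Fg ⇔ Vg is used in its contraposed direction (-Fg → -Vg), mirroring steps 17 through 20 of the proof.

```python
# A sketch of the Article 2 proof as forward chaining over propositional
# facts. The tokens ("Gg", "not Fg", etc.) mirror the instantiated
# predicates in the proof above; the rule set is shorthand for
# premises 7 through 10.

facts = {"Gg", "Ss", "Ms", "Tsg", "Bb", "not Fg", "Abg", "Nbs"}  # premises 1-6

rules = [
    ({"Gg", "Ss", "Ms", "Tsg"}, "Wg"),                    # 2-314(1): warranty arises
    ({"not Fg"}, "not Vg"),                               # 2-314(2)(c): unfit, hence unmerchantable
    ({"Wg", "not Vg"}, "not Cg"),                         # warranted and unmerchantable: non-conforming
    ({"Gg", "Bb", "Ss", "Abg", "Nbs", "not Cg"}, "Dbs"),  # 2-714: buyer recovers damages
]

changed = True
while changed:  # keep applying modus ponens until nothing new follows
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print("Dbs" in facts)  # prints True: Q.E.D., mechanically
```

The point of the sketch is the same as the proof's: given discrete, unambiguous premises, the conclusion follows without judgment.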


B. Why These Concepts are Difficult to Code—Discrete and Continuous


Can it be coded as is? The answer is no, because one of the essential elements of the proof, namely the property F, or fitness, is a continuous and not a discrete function. To make this program smart, we need to do more work, namely creating either a deductive or inductive proxy for fitness.

When the predicates from formal logic are continuous properties, then on a hypothetical measurement scale between 0 and 10, the actual performance is capable of any of an infinite number of values within the finite interval. That is, the reality of the thing being measured is continuous and not discrete. For any two measurements, say of height, there are always infinitely more precise measurements capable of expression. We simply agree that (a) the measurement will use a finite and discrete set of numbers within the interval, and (b) a certain degree of precision is appropriate (say, rounding the height to the nearest quarter of an inch). The digitization of objective criteria is something we do all the time, and whether a figure skating jump was 36 or 37 inches high will not depend on the national fervor of a given judge.

But in figure skating, subjective criteria like “the jump was pretty” are also continuous and capable of infinite values within the interval. The predicate “pretty” is perfectly capable of inclusion in a formally logical structure. It cannot be coded unless somehow it turns into something measurable, even if deep within the program, on a discrete scale. The reason for this has to do with the mathematics at the very core of the operation of a computer. “Natural numbers” are the counting numbers like 1 or 2. “Rational numbers” are those which can be expressed as ratios of natural numbers. A fraction like 2/5 that has a finite number of digits when expressed as a decimal (.4) is a rational number. So is a fraction like 1/3 whose decimal expression is infinite (.3333…). “Irrational numbers” are those which are not rational. Examples are the square root of 2, pi, and e, the base of the natural logarithm. A number, whether rational or irrational, is computable if some finite set of instructions can continue producing its decimal expansion to ever more places. The real numbers divide into the rational and the irrational, and not all real numbers are computable. One headache-inducing upshot of this is that however infinitely large the set of computable numbers may be, it still does not include the infinitely many more real numbers that sit between the computable numbers. The set of all real numbers is continuous, and a single real number is best described as a “slice” or “cut” that divides all the higher numbers from all the lower numbers.

99

Bernard Linsky, Logical Constructions, Stan. Encyclopedia of Phil. (Sept. 10, 2014), https://plato.stanford.edu/archives/win2016/entries/logical-construction; Erich Reck, Dedekind’s Contributions to the Foundations of Mathematics, Stan. Encyclopedia of Phil. (Oct. 28, 2016), https://plato.stanford.edu/archives/win2016/entries/dedekind-foundations.

And to make things even more confusing, even though the set of natural numbers is infinite and the set of real numbers is infinite, there are still more real numbers than natural numbers.
100

The 19th century mathematician, Georg Cantor, proved this by way of his diagonal method. See Roger Penrose, The Emperor’s New Mind 108–13 (1999).

Computers operate on natural numbers expressed in base two rather than our usual base ten. The advantage of base two is that rather than having ten symbols for numbers as in base ten (0 through 9), there are only two symbols, 0 and 1. When a computer computes, it operates only on 0s and 1s. This goes back to the theoretical genesis of computing. In order to prove a thesis in number theory,

101
That was the “halting problem.” See generally Lipshaw, supra note 4.
Alan Turing imagined a computing machine (now referred to as a “Turing machine” even though it is a thought experiment, not a physical machine). A Turing machine consists of a scanner that contains a finite set of “states.” States are discrete sets of instructions for the scanner. An imaginary and infinitely long tape runs past the scanner. The tape is divided into squares. Each square of the tape may contain a symbol, usually either 0 or 1, or may be blank. The scanner “reads” only one square at a time. The machine works by being in a particular “state,” “seeing” what symbol is or is not in the square, and then acting on the state instructions that tell the machine what to do based on what it “sees” in the square.
102

A.M. Turing, On Computable Numbers, With an Application to the Entscheidungs-problem (1936) [hereinafter Turing, On Computable Numbers], in The Essential Turing, supra note 83, at 59–60; Melanie Mitchell, Complexity: A Guided Tour 61-62 (2009).
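The description above can be made concrete with a short sketch. This is my own illustration, not a program from Turing’s paper: the machine and its instruction table are hypothetical, and the example program simply flips every 0 to a 1 (and vice versa) until it reaches a blank square and halts.

```python
# A minimal Turing machine sketch: a scanner in one of a finite set of
# "states" reads one square of tape at a time and acts on an instruction
# keyed to the pair (state, symbol seen).

def run(tape, program, state="start"):
    tape = dict(enumerate(tape))  # sparse tape; missing squares are blank
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, " ")               # "see" the scanned square
        write, move, state = program[(state, symbol)]
        tape[pos] = write                          # act on the state instruction
        pos += 1 if move == "R" else -1            # move the tape one square
    return "".join(tape[i] for i in sorted(tape)).strip()

# (state, symbol seen) -> (symbol to write, head move, next state)
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),   # blank square: halt
}

print(run("0110", flipper))  # prints 1001
```

However elaborate the program, every operation reduces to this primitive loop of reading a discrete symbol and following a discrete instruction.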

Few people, even computer programmers, ever think about it, but anything that can be done on any modern digital computer must reduce to the primitive operation of a hypothetical Turing machine. In modern computer science, the reading of a 0 or 1 in a scanned square in the Turing machine translates into the operation of an elementary logic gate.

103

Noam Nisan & Shimon Schocken, The Elements of Computing Systems: Building a Modern Computer from First Principles 7 (2008).

To work in the logic gates, all higher-order languages (Python, C++, or Ethereum’s Solidity, for example) and every physical operation (for example, hitting a key and having a letter appear on the screen) have to reduce to machine code that is expressed in nothing but 0s and 1s. The elementary logic gates in turn “are physical implementations of Boolean functions.”
104
Id. at 8.
Boolean functions are based on Boolean or binary values “that are typically labeled true/false, 1/0, yes/no, on/off, and so forth.”
105
Id.
That is, a Boolean function is one that “operates on binary inputs and returns binary outputs.”
106
Id.

Computer software at the level of the processor’s machine code thus expresses everything as binary values of 1/0, and computer hardware is designed to manipulate those binary values through the logic gates. All Boolean functions can be expressed using three operators: And, Or, and Not. In turn, those three basic operators can be built from the even more basic single Boolean operator “Nand” meaning “Not and.” That is, assume two binary inputs, x and y, both of which can be either 0 or 1. If the inputs are not both 1, the output will be 1. If the inputs are both 1, then the output will be 0.

107

In a truth table, it looks like this:

x  y  x Nand y
0  0  1
0  1  1
1  0  1
1  1  0

For example, the operator Or can be constructed logically from Nand:

[x Or y = (x Nand x) Nand (y Nand y)].

This can also be confirmed in a truth table as follows:

x  y  x Nand y  x Nand x  y Nand y  (x Nand x) Nand (y Nand y)  x Or y
0  0  1         1         1         0                            0
0  1  1         1         0         1                            1
1  0  1         0         1         1                            1
1  1  0         0         0         1                            1







The practical importance of this is that, if you have a physical device that can implement the Nand function, you can build an entire computer from it. And that is what happens.
108
Id. at 9-14.
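The construction just described can be sketched in a few lines. This is a minimal illustration of the identity above, not code from the cited text: Nand is taken as the sole primitive, and Not, And, and Or are each built from it alone.

```python
# Nand as the single primitive: output is 0 only when both inputs are 1.
def nand(x, y):
    return 0 if (x == 1 and y == 1) else 1

# The three basic Boolean operators, each built only from Nand.
def not_(x):
    return nand(x, x)

def and_(x, y):
    return nand(nand(x, y), nand(x, y))

def or_(x, y):
    return nand(nand(x, x), nand(y, y))  # x Or y = (x Nand x) Nand (y Nand y)

# Confirm the truth tables set out in note 107.
for x in (0, 1):
    for y in (0, 1):
        assert or_(x, y) == (1 if x == 1 or y == 1 else 0)
        assert and_(x, y) == (1 if x == 1 and y == 1 else 0)
        assert not_(x) == 1 - x
```

Everything a computer does is, at bottom, a composition of functions like these, which is why nothing ambiguous can survive at this level.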

To be represented on a computer, anything that is continuous in nature, like color or sound, which are carried physically through space by waves, or continuous in concept, like prettiness, originality, or fitness, must be translated into code. Consider a range of musical pitch. In theory at least, between any two notes we ought to be able to strike another one, and so on infinitely to an asymptotic limit. That range would be represented on an analog basis by a range of real numbers. To put it into code, we need to use computer languages of higher and lower order that ultimately translate into the 0s and 1s of elementary logic gates in machine code. So how would we make the predicate “fit for ordinary purpose” in UCC §2-314 smart?

First, there would need to be a deductive, algorithmic foundation. A computer creates the illusion of continuous color first by instructing each pixel how many units of red, green, and blue to add (like the hardware store mixing your paint) and then creating higher and higher resolution by way of more and more pixels (hence 1080 resolution is higher than 480—as if George Seurat added more and more dots to his paintings).

109
The predominant technology for creating a pixel, a dot of light on a screen, is called bitmap. Bitmap involves an instruction from the processor to display dots according to their coordinates on typical x and y axes. Id. at 257.
A computer creates the illusion of continuous waves of sound by increasing the rate of sampling, the number of packets of digital information representing pieces of the analog sound. Similarly, we would need to develop digital units of fitness.
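Sampling can be sketched as follows. This is my own illustration; the sampling rate and bit depth are arbitrary choices for the example, not audio standards. A continuous signal is reduced to a finite number of discrete instants, and each amplitude is rounded to the nearest of a finite number of discrete levels.

```python
import math

# Quantize a continuous signal: finitely many instants in time, and at
# each instant, the nearest of finitely many amplitude levels.
def sample(signal, rate, depth_bits, duration=1.0):
    levels = 2 ** depth_bits          # finite, discrete amplitude levels
    n = int(rate * duration)          # finite number of sampled instants
    out = []
    for i in range(n):
        t = i / rate                  # discrete instants replace continuous time
        amp = signal(t)               # a real number in [-1, 1]
        out.append(round((amp + 1) / 2 * (levels - 1)))  # nearest discrete level
    return out

wave = lambda t: math.sin(2 * math.pi * t)   # an idealized continuous tone
print(sample(wave, rate=8, depth_bits=4))    # eight samples, sixteen levels each
```

Raising the rate or the bit depth improves the illusion of continuity, but the representation is always, irreducibly, a finite list of discrete values.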

Second, having reduced “fitness” to a binary foundation of fitness units, what would the units measure? The problem is defining the discrete units, whether of prettiness, originality, or fitness. In figure skating, even if the performance occurs objectively, each judge makes a subjective assessment of prettiness or originality and assigns a discrete mark to each one. The replacement of the continuous with the discrete is all the more glaring here, not because the reality of the subjective criteria is any more continuous than the objective, but because we are far less comfortable, conventionally, with taking each judge’s subjective measurement as “truth” when we cannot independently verify it. The translation of a continuous and subjective property into a discrete scale is a far more uncomfortable one. If you are a merchant who sold me a good that is not working, your application of the fitness standard to the good might not match up to mine.

Whether with prettiness or originality in figure skating, or fitness for ordinary purposes in the sale of goods, assuming we could come to an agreement on the units, we could probably write an algorithm for the AI “smart judge” with “big data” access that would be able to assign a discrete measurement of “pretty” or “original” or “fit.” That is no longer a formal deductive system. In an AI deep learning program, what the programmer is doing, effectively, is replacing deductive inferences with inductive inferences. In first-order predicate logic (which is truth-functional, meaning that the conclusion must be true if the premises are true), the “if-then” connector says only that if x is true, then y must also be true. It makes no inferences at all if x is false. That is, in formal logic, y may be true even if x is false. In inductive logic, which is not “truth-functional,” the “if-then” connector really means “because,” where even the truth of the antecedent x does not guarantee the truth of the consequent y.

110

Gary M. Hardegree, Symbolic Logic: A First Course 36-38 (3d ed. 1999).

The truth of the antecedent x only makes it more likely that the consequent y will be true. In short, because the standard is applied inductively and not deductively, somebody has to tell the computer when and when not to draw the appropriate inference that the jump is pretty or that the good is fit.
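The truth-functional “if-then” can be sketched directly (a minimal illustration of the point, not of any particular system): the material conditional “x → y” is equivalent to “(not x) or y,” so it is true whenever the antecedent is false, whatever the consequent.

```python
# The material conditional of truth-functional logic: x -> y is
# equivalent to (not x) or y. It is falsified by exactly one row.
def implies(x, y):
    return (not x) or y

print(implies(True, True))    # prints True: antecedent and consequent true
print(implies(True, False))   # prints False: the only falsifying row
print(implies(False, True))   # prints True: nothing follows from a false antecedent
print(implies(False, False))  # prints True: y's value does not matter
```

The inductive “if-then” of a deep learning program has no such truth table; it only adjusts the likelihood of the consequent, which is why a human has to decide when the inference is warranted.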


C. Elastic Language and the Finite Regress of Code


1. The utility of vagueness in natural language and the law

Natural languages tolerate ambiguity and vagueness, and those characteristics are often useful to speakers. The linguist Grace Q. Zhang studied the use of vague language and concluded that “the ability to use vague language is as important as, if not more important than, the ability to use other types of language (e.g. precise language).”

111

Grace Q. Zhang, Elastic Language: How and Why We Stretch Our Words xiii (2015).

She coined the term “elastic language” so as to avoid the pejorative implications of vagueness, and to highlight its indispensability in human communication.
112
Id. at xiv.

Words are slingshots with a rubber band, and speakers “stretch” their words to achieve communicative purposes … Language is inherently vague … but this is not a deficiency. It occurs even in so-called “precise” contexts such as mathematical language … and legal language … Strategic use of language is essential in successful communication, and being vague is one of many strategies.

113
Id. at 1.

Professor Zhang suggests a typology for elastic language that includes (1) “approximate stretchers” (e.g. words like “about,” “many,” “some”); (2) “general stretchers” (e.g. “things,” “stuff,” “somebody”); (3) “scalar stretchers” (e.g. “very,” “a bit,” “kind of”), and (4) “epistemic stretchers” (e.g. “maybe,” “I think,” “possibly”).

114
Id. at 35-37.
The pragmatic functions of such language include avoidance, deliberate withholding of information, emphasis or de-emphasis, maintaining friendliness, mitigation, politeness, face-saving, and self-distancing.
115
Id. at 37-45.
Not surprisingly, studies demonstrate the use of elastic language in classrooms, academic conferences, courtrooms, and business negotiations.
116
Id. at 45-47.

But vagueness and ambiguity likely have far more utility when transactors govern themselves by custom rather than law. For example, in Lisa Bernstein’s study of the cotton industry, otherwise unresolved disputes get litigated in private arbitration governed by detailed trade association rules and individual contracts.

117

Lisa Bernstein, Private Commercial Law in the Cotton Industry: Creating Cooperation through Rules, Norms, and Institutions, 99 Mich. L. Rev. 1724, 1726-37 (2001).

When the parties have to resort to such litigation, the cases get decided formalistically according to the contract and the trade rules and not according to custom or the background of the deal.
118

Id. at 1737.

In contrast to the cotton industry rules, Article 2 of the Uniform Commercial Code reflects the desire of Karl Llewellyn, Article 2’s primary drafter and one of the great legal realists, to have commercial law incorporate “immanent business norms” into after-the-fact legal dispute resolutions, whether by way of interpretation of the parties’ agreement or gap-filling default terms. Hence, Article 2’s default rules themselves use the fuzzier standards by which the drafters believed business people governed themselves—e.g., to act reasonably, to perform seasonably, to sell goods that are fit for ordinary purposes or are without objection in the trade, to abide by usages of trade, and so on.
119

Id. at 1735; Lisa Bernstein, The Questionable Empirical Basis of Article 2’s Incorporation Strategy, 66 U. Chi. L. Rev. 710, 712 (1999).

Here is the problem with incorporation of elastic language into the law. Legal rules generally respect the law of the excluded middle that prevails for truth-functional logic. You either have a right or you do not. Business norms, on the other hand, have no law of the excluded middle. What made sense when we wrote the contract might not make sense now. In the cotton industry, for example, the legal rule is merely one of several possible governing norms. Another check on behavior is one’s commercial reputation, the determinants of which include the willingness to live up to contract commitments yet “to be flexible in work-a-day transactions, and [willing] to renegotiate commitments when circumstances change or adverse events occur.”

120
Bernstein, supra note 117, at 1749 (footnotes omitted).
To have a legal right but, in light of other business and social realities, to choose not to enforce it is to go beyond the law and its truth-functional logic. Indeed, it was not unusual for one cotton merchant to accept good reasons from another not to enforce contract rights: “There is suggestive evidence that cotton transactors may view themselves as conducting their everyday interactions according to a set of flexible understandings that requires them to make many adjustments, and ignore minor deviations in ways not required by their contract’s written provisions, yet preserves their unfettered right to insist on strict performance of their contract when they think their contracting partner is behaving badly.”
121
Id. at 1781.
But, ironically, when it comes to resolution of disputes by way of contract, “merchants do not want adjudicators to look to usage to decide cases in any but the narrowest of circumstances. They are consistent with other more generalized studies of transactor preferences and contracting choices which strongly suggest that business transactors have a strong preference for formalistic adjudication.”
122

Lisa Bernstein, The Myth of Trade Usages: A Talk, 23 Barry L. Rev. 119, 125 (2018).

In short, such merchants may want their relationships to be governed primarily by non-legal norms under which the same antecedents could generate different outcomes. Our natural language, with all its elasticity, allows for that flexibility. But they still do not want their contracts to replicate the entirety of a complex business relationship. They want a relatively simple backup set of unambiguous rights to which they can turn formalistically when the relationship breaks down.

2. The finite regress of code

The question becomes: could a smart contract allow these cotton merchants to have their cake and eat it too; that is, to have a formal mechanism that both obeys and does not obey the law of the excluded middle? The answer is: to a point. Natural language can be vague or elastic, predicates in formal logic can be ambiguous or vague, and human beings can perceive an infinite regress in judgment-making. Computer programs can approximate all of these capabilities but, in the end, cannot be built from a foundation that tolerates ambiguity. For digital computers, the regress of judgment is finite. In the smart universe, there is a supernatural Author at the end of the regress, and we know who it is. When we drill down to the most elementary logic gates, somebody (whether human or futuristic conscious machine) needs to have established an archimedean fulcrum of meaning, even if it is as basic as saying that 1 means “not both” and 0 means “both” in the Nand operator.

123

I borrow the term “archimedean” from Ronald Dworkin, who used it to describe those who “purport to stand outside a whole body of belief, and to judge it as a whole from premises or attitudes that owe nothing to it.” Ronald Dworkin, Objectivity and Truth: You Better Believe It, 25 Phil. & Pub. Aff. 87, 88 (1996).

In the dumb universe of human judgment-making, what lies at the end of the regress, if there is one, is not so clear.

In the debates over artificial intelligence, one of the iconic images and most powerful arguments is John Searle’s Chinese Room.

124

John R. Searle, Minds, brains, and programs, 3 Behav. & Brain Sci. 417 (1980).

Searle meant to refute “strong AI, specifically the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition.”
125
Id. at 417.
Searle’s thought experiment imagines him locked in a sealed room, knowing English but not knowing any Chinese. A first batch of Chinese symbols gets fed into the room. Then a second batch of Chinese symbols gets fed into the room along with a set of instructions in English for correlating the second batch with the first batch. Then a third batch of Chinese symbols gets fed into the room, again with a set of English instructions on how (a) to correlate the third batch with the first two and (b) to feed back out of the room certain Chinese symbols (“answers”) in response to symbols fed in with the third batch (“questions”). I have simplified the picture of the Chinese Room in Figure 1.

Figure 1.

Then assume that the people feeding in the symbols feed in stories and questions about them in English as well as those same stories and questions encoded in Chinese symbols. Searle hypothesizes that he gets so good at manipulating the Chinese symbols according to the instructions that a person outside the room would believe the Chinese answers were as good as those in English. Searle’s conclusion is that there is a difference in cognition as between his English and Chinese “answers.” “In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements.”

126
Id. at 417-18.

The fact is that, in the cognition debates, Searle’s image of the Chinese Room is powerful. The objections to it tend to accept it as a fair characterization of a computer’s processor, but then debate whether it is a fair analogy to human cognition.

127

Id. at 419-22. For a summary of the debates, see the chapter entitled “The curious case of the Chinese room,” in B. Jack Copeland, Artificial Intelligence: A Philosophical Introduction 121-37 (1993).

The most coherent objection is called the “systems reply.” The gist of it is that, even if Searle, locked alone in the room, does not understand the Chinese symbols, a wider system, for example, one that includes the rules, the pencils and paper used to translate, and so on, does understand.
128

Copeland, supra note 127, at 126-30.

For my purposes, I do not care whether a blockchain system or the automobile security/lien system has human-like cognition or subjective consciousness. If we draw the line in the sand at the most elementary logic gates, then indeed smart contracts, even if run with advanced AI, are the legal equivalent of Searle’s Chinese Room, operating to generate legal results without an identifiable subjective viewpoint. If we accept the systems reply, then it means that there is a wider system, somewhere outside the programming of the smart contract itself, giving semantic meaning to the syntax of the code, whether at the level of a higher order language or in the 0s and 1s of the machine code. By the very nature of binary code, that must be so. The systems reply effectively acknowledges that the regress for the computer itself ends at the elementary logic gates, and merely kicks the can farther down the road as to the ultimate “Author” of human subjective cognition.


D. The Infinite Regress of “Situation Sense”


This infinite regress of human judgment is sometimes phrased as “there is no rule for the application of a rule.” In order to apply a rule (or an algorithm or a model) to a particular situation, we have to choose the rule. If you try to set a rule for determining which circumstances fit within a rule, that rule merely leads to another rule, and another, and another, all the way down. In short, there is no final answer for the selection of the rule; it is an infinite regress.

129

Kant observed the regress. Immanuel Kant, Critique of Pure Reason 268–69 (Paul Guyer & Allen W. Wood trans. 1999) (1787). Wittgenstein did, as well. Ludwig Wittgenstein, Philosophical Investigations §201, at 69e (1953) (3d ed., G.E.M. Anscombe, trans. 2001). See also Dennis M. Patterson, Law’s Pragmatism: Law as Practice and Narrative, 76 Va. L. Rev. 937 (1990); Linda Ross Meyer, Between Reason and Power: Experiencing Legal Truth, 67 U. Cin. L. Rev. 727, 744-45 (1999).

Given that a computer’s “thinking” is necessarily a finite regress, at least as long as computers are constructed from elementary logic gates, could a “dumb” contract ever become wholly smart?

I am still agnostic on the question of how humans came to have the capability even to perceive the infinite regress of judgment. Because we are discussing contracts, I do want to reflect on one way of characterizing judgment, what Karl Llewellyn, the primary drafter of Article 2 of the Uniform Commercial Code,

130

Bernstein, supra note 119, at 712.

called “situation sense.” Llewellyn coined the term in his book The Common Law Tradition.
131

Karl N. Llewellyn, The Common Law Tradition: Deciding Appeals (1960).

His subject was how appellate judges used good judgment to mediate between “the felt duty to justice which twins with duty to the law.”
132

Id. at 121.

In other words, how do judges go about applying rules laid down in prior decisions to the facts of the situation at hand? Legal scholars have since puzzled over a passage of seeming metaphysical import that Llewellyn, the great legal realist, quoted with approval:

Every fact-pattern of common life, so far as the legal order can take it in, carries within itself its appropriate, natural rules, its right law. This is a natural law which is real, not imaginary; it is not a creature of mere reason, but rests on the solid foundation of what reason can recognize in the nature of man and of the life conditions of the time and place; it is thus not eternal nor changeless nor everywhere the same, but is indwelling in the very circumstances of life. The highest task of law-giving consists in uncovering and implementing this immanent law.

133

Id. at 122, quoting Levin Goldschmidt, Preface to Kritik des Entwurfs eines Handelsgesetzbuchs, Krit. Zeitschr. f.d. ges. Rechtswissenschaft, Vol. 4, No. 4.

That is an astoundingly non-legal realist observation.

134

See Kenneth M. Casebeer, Escape from Liberalism: Fact and Value in Karl Llewellyn, 1977 Duke L. J. 671, 680.

Indeed, it is Kantian, in the sense that every person’s choice of a moral outcome in every particular set of circumstances ought to be governed by the rule that would constitute the universal law for that set of circumstances.
135

Immanuel Kant, Fundamental Principles of the Metaphysics of Morals (Thomas K. Abbott, trans. 1785) in Basic Writings of Kant 161 (Allen W. Wood, ed. 2001).

The profundity of Llewellyn’s observation goes far beyond appellate judging. It is a recognition that human judgment is more than mere deduction or induction. It includes something characterized as abductive reasoning: the process by which a human being looks at a jumble of circumstance and decides which of a number of possible algorithms might apply to the situation.

136

Jeffrey M. Lipshaw, Beyond Legal Reasoning: A Critique of Pure Lawyering 34-40 (2017).

Llewellyn observed that “the sizing up of ‘the case’ into some pattern is of the essence of getting to the case at all, and the shape it starts to take calls up familiar, more general patterns to fit it into or to piece it out or to set it against for comparison.”
137

Llewellyn, supra note 131, at 268.

Llewellyn himself rejected the idea that one could reason one’s way to the application of situation sense, seeing it instead as a kind of professional know-how or intuition: “It is quite independent … of any philosophy as to the proper sources of ‘Right Reason’ which may be held by any ‘Natural Law’ philosopher…. It answers instead to current life, and it answers to the craft.”
138

Id. at 422-23.

So, could we program the smart contract to operate with situation sense, i.e. good human judgment? The answer is yes … but approximately and then only up to a point. Return to the sale of goods contract whose logical structure we previously derived. Let us assume again the smart contract is for the sale of widgets. The predicate “fitness” is vague, but we might be able to code some proxy for “fitness” in the Chinese Room. Perhaps the smart contract would have access to “big data” and include algorithms that would allow it to conclude, based on that data, whether or not the widget conformed. The smart contract program would be connected to the buyer’s operations and would be designed to spit out the conclusion of “no breach” if it sensed a conforming widget or “breach” if the widget were defective. And the metaphoric Chinese Room works perfectly if the question that comes in is whether the widget conforms to the contract. If the answer, according to the instructions, is “yes,” the answer “No Breach” comes out. If the answer to the question is “no,” the answer “Breach!” comes out. Law, like a computer, cannot work if it has a choice of either “breach” or “no breach” for the same input. It has to be one or the other. Hence, section 2-314 of the Uniform Commercial Code simply cannot work logically, much less computably, if a non-conforming product can result in either breach or no breach.

139
In the logical denotation used above, and as shown in Figure 3, the expression would be ∀x(Fx ⇔ (Vx v -Vx)). In natural language, it would read “for every x, if and only if it is fit for ordinary purposes, then it is merchantable or it is not merchantable.” That, of course, is meaningless nonsense.

But here is the problem with trying to incorporate business situation sense into the law of Article 2. By business situation sense I mean the possibility that a rule ought not be applied even though the antecedent conditions call for its application. What if the appropriate business judgment is like those often made by the cotton merchants in Lisa Bernstein’s study? In a particular situation, the buyer who is a party to this smart contract does not want to declare a breach even if the program says that the widget does not conform. Could the smart contract accommodate this? Yes, but…. In the computer’s logic gates, there can only be one output for any given input, as shown in Figure 1. What we cannot do is code in a way that creates a contradiction in the program by making it possible for the same input to generate two different outputs, as shown in Figure 2.

Figure 2.

Assuming (a) the parties’ willingness to accept the computer ontology as the institutional reality of their relationship and (b) problems of computational complexity were overcome, that situation would need another Chinese Room inside the Chinese Room, with its own instructions on when not to enforce the legal right in the main program. This is where the mathematics of computer judgment butts up against the heretofore unresolved mysteries of human judgment. To be human, the computer needs Chinese Rooms inside Chinese Rooms “all the way down.” (See Figure 3.) To make the contract smarter, you have to get beyond the finite regress of logic gates, and thus make it capable of the infinite regress of human judgment. Ironically, in my coinage, that means “dumber” because it cannot be wholly expressed in computer code.

Figure 3.

If all smart contracts do is law, even those that embody transactions in coded documents so real as to create institutional realities are still limited to a finite computational complexity. And they still need to be expressed in language and rules that have a finite regress. On the other hand, situation sense, particularly that which employs elastic language and incorporates business judgment, invokes the regress of judgment applied to an infinite variety of circumstances.

There is a formal logic that does not abide by the law of the excluded middle and can accommodate the vagueness or elasticity of natural language. Fuzzy logic developed out of fuzzy set theory in which the elements of the set’s universe have a degree of membership represented by a real number between 0 and 1.

140
Id.
For example, assume the set is “all tenured faculty members at Suffolk Law School.” An individual is either a member of the set (and representable by a binary digit, 1) or is not a member (and representable by a binary digit, 0). Fuzzy sets, on the other hand, can accommodate degrees of membership and therefore non-binary boundaries.
141

Ying Bai & Dali Wang, Fundamentals of Fuzzy Logic Control—Fuzzy Sets, Fuzzy Rules and Defuzzifications, in Advanced Fuzzy Logic Tech. in Indus. Applications (“Bai & Wang”) 22 (Ying Bai, Hanqi Zhuang, & Dali Wang, eds., 2006).

An example would be the division of temperature into sets of “hot,” “medium,” and “cool.” A classical set would require each temperature to be a member of only one set. Thus, any temperature above 80°F would count only as “hot,” and could only be a member of that set. In fuzzy sets, that same temperature could have a degree of membership in both the “hot” and “medium” sets. By defining each temperature’s degree of membership in each set, fuzzy sets blur the boundaries between the three classifications.
142

Id. at 22-23.

So, for example, the temperature 60°F might be a .8 member of the “medium” set and a .1 member of the “hot” set.
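The temperature example can be sketched with triangular membership functions. The breakpoints below are purely illustrative assumptions (the text does not specify any), chosen so that 60°F comes out roughly as the degrees mentioned above.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function: 0 outside [a, c], peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed breakpoints (degrees Fahrenheit) for the three fuzzy sets.
def mu_cool(t: float) -> float:   return triangular(t, 20.0, 40.0, 65.0)
def mu_medium(t: float) -> float: return triangular(t, 40.0, 65.0, 85.0)
def mu_hot(t: float) -> float:    return triangular(t, 55.0, 85.0, 110.0)

# With these choices, 60°F is a 0.8 member of "medium" and a small
# (~0.17) member of "hot" -- a degree of membership in both sets at once,
# which a classical (crisp) set could not allow.
```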

Fuzzy logic permits formal deduction from premises to truth-functional conclusions by allowing propositions to be increasingly true on a similar scale from 0 to 1. Thus, it permits one to “model logical reasoning with vague or imprecise statements like ‘Petr is young (rich, tall, hungry, etc.)’.”

143

Petr Cintula, Christian G. Fermüller, & Carles Noguera, Fuzzy Logic (Jul. 18, 2017), Stan. Encyclopedia of Phil., https://plato.stanford.edu/archives/fall2017/entries/logic-fuzzy.

In classical logic, modus ponens looks like this:

If P, then Q.

P.

Therefore Q.

P and Q have only two possible values, true or false, 0 or 1, on or off, etc. But fuzzy logic looks like this:

If to a degree of P, then to a degree of Q.

A degree of P.

Therefore, a degree of Q.

144

Id.:

The standard set of truth degrees for fuzzy logics is the real unit interval [0,1], with its natural ordering ≤, ranging from total falsity (represented by 0) to total truth (represented by 1) through a continuum of intermediate truth degrees. The most fundamental assumption of (mainstream) mathematical fuzzy logic is that connectives are to be interpreted truth-functionally over the set of truth-degrees. Such truth-functions are assumed to behave classically on the extremal values 0 and 1. A very natural behavior of conjunction and disjunction is achieved by imposing x ∧ y = min{x, y} and x ∨ y = max{x, y} for each x, y ∈ [0,1].
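The min/max truth functions quoted in the footnote, together with a graded modus ponens, can be written out directly. The min-based inference rule shown here is one standard choice in mathematical fuzzy logic, not the only one.

```python
# Min/max semantics for fuzzy conjunction and disjunction, per the quoted
# footnote, plus a min-based graded modus ponens.
def f_and(x: float, y: float) -> float:
    return min(x, y)  # conjunction: x AND y = min{x, y}

def f_or(x: float, y: float) -> float:
    return max(x, y)  # disjunction: x OR y = max{x, y}

def fuzzy_modus_ponens(deg_p: float, deg_p_implies_q: float) -> float:
    """From P true to degree deg_p and (P -> Q) true to degree
    deg_p_implies_q, infer Q to (at least) the lesser of the two degrees."""
    return min(deg_p, deg_p_implies_q)
```

Note that when both degrees are the classical extremes 0 or 1, the rule collapses back into ordinary modus ponens.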

Computer control systems based on fuzzy logic have significant practical utility. They are used to control continuously variable inputs and outputs in all sorts of systems, including manufacturing, heating controls, and cement kilns. For example, a classical furnace turns on when the temperature is below a certain level and turns off when the temperature exceeds it. An HVAC system with fuzzy controls instead assesses the degree to which the outside temperature is hot or cold and adjusts its degree of operation accordingly.

145
Bai & Wang, supra note 141, at 24-29.
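A fuzzy HVAC controller of the kind just described can be sketched as follows. The membership breakpoints and the rule outputs (hot runs the fan at full power, medium at half, cool off) are illustrative assumptions; the weighted-average defuzzification shown is one common method among several.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function: 0 outside [a, c], peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed fuzzy sets for outside temperature (degrees Fahrenheit).
def mu_cool(t: float) -> float:   return triangular(t, 20.0, 40.0, 65.0)
def mu_medium(t: float) -> float: return triangular(t, 40.0, 65.0, 85.0)
def mu_hot(t: float) -> float:    return triangular(t, 55.0, 85.0, 110.0)

def fan_power(temp_f: float) -> float:
    """Defuzzify three assumed rules ('cool -> off', 'medium -> half power',
    'hot -> full power') by a weighted average of rule outputs."""
    weights = [mu_cool(temp_f), mu_medium(temp_f), mu_hot(temp_f)]
    outputs = [0.0, 0.5, 1.0]  # fan power prescribed by each rule
    total = sum(weights)
    if total == 0.0:
        return 0.0  # temperature outside all defined sets: default to off
    return sum(w * o for w, o in zip(weights, outputs)) / total
```

The continuous output still has to be realized by discrete hardware in the end, which is the point the next paragraph takes up.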

Could fuzzy logic replicate situation sense? It is still code and therefore limited by code’s finite regress. At some point, the computer code needs to translate the continuous and fuzzy concept into binary code capable of working in an elementary logic gate. While humans can conceive of the degree as a continuous range of real numbers between 0 and 1, the computer begins with two discrete natural numbers, 0 and 1, and works everything from there. In other words, “vague answers can only be created and implemented by human beings, but not machines…. Computers can only understand either ‘0’ or ‘1’, and ‘HIGH’ and ‘LOW’.”

146

Id. at 17.

What fuzzy control systems do is allow the computer to approximate degrees on a continuous range at higher and higher resolution, à la color or sound.

Hence, in the same manner as the HVAC system, the UCC Article 2 programmer might well give the smart contract the ability to make nuanced decisions about whether the non-conformance of the goods is serious, not serious, or somewhere in between. It might be able to decide where performance falls on a scale running from completely reasonable to completely unreasonable. Perhaps it could tap into relevant databases that would provide empirical benchmarks for “good, better, and best.” But in every case, at the end of the computer’s finite regress, a human being (or an entity with the human’s infinite regress of judgment) would be setting the parameters.
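The paragraph's closing point, that a human ultimately sets the parameters, can be made concrete in a short sketch. Every name and number below is an illustrative assumption, not anything Article 2 or the article itself specifies: the program grades non-conformance on a continuous scale, but a human-chosen threshold is what finally collapses the grade back into the binary legal output.

```python
def seriousness(defect_rate: float) -> float:
    """Map an observed defect rate (0.0 to 1.0) to a graded 'seriousness'
    score. Here it is just a clamp; a real system might consult the kind of
    empirical benchmarks the text imagines."""
    return min(1.0, max(0.0, defect_rate))

def legal_output(defect_rate: float, human_set_threshold: float = 0.1) -> str:
    # The continuous grade must finally collapse into one of two discrete
    # answers, and a human (not the code) picks the threshold that does
    # the collapsing.
    return "breach" if seriousness(defect_rate) > human_set_threshold else "no breach"
```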


Conclusion: “Deciding” Somewhere Between Smart and Dumb


Presently “smart contracts” are mostly limited to the creation of intangible assets, the very nature of which is expressible in code, like bitcoins, security interests in personal property, or recorded title to real estate, and executed on blockchains. My goal here has been to address the intuitive sense that a broader range of contracts (and the law) could map antecedent understandings in code so thoroughly that contracts do achieve the complete status ascribed by the Minnesota Supreme Court in 1885. Not only would the coded contract constitute entirely the unambiguous and full expression of the agreement of the parties, but it would be self-executing as well. The truth-functional logic at the core of legal reasoning suggests that we ought to be able to make contracts smart by restating that logic into code. That ought to be possible not just for cryptocurrencies or the tracing of security interests. We already reduce complex future contingencies to relatively simpler and logical structures, whether in business transactions, pre-nuptial understandings, estate planning, or any other matter that involves expressing expectations by way of inter-subjective communication. Code is just the last step. That intuition seems to be the source, on one hand, of the futurists’ dream of wholly digitized lawyering and, on the other, the fears of high technology Luddites and the digitally-challenged when contemplating the same thing.

One effort presently underway to make such heretofore dumb contracts “smarter” is James Hazard’s CommonAccord project, described in a paper he and Finnish law professor Helena Haapio presented to the 2017 International Legal Informatics Symposium.

147

James Hazard & Helena Haapio, Wise Contracts: Smart Contracts that Work for People and Machines, in Erich Schweighofer, et al., Trends & Communities of Legal Informatics. Proc. of the 20th Int’l Legal Informatics Symp. IRIS 2017, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2925871.

Their goal is to use open source collaboration and code to optimize what people and machines can bring to the contract drafting party. Hence, a “wise contract” is one that is smart, i.e. automatable and enforceable, but permits human input about the contracts’ business and legal objectives.
148
Id. at 1.
The problem in creating wise contracts that can operate on machines but are human-friendly is the relationship between the natural language prose in which contracts are written, on one hand, and machine code, on the other.
149
Id. at 2.
As Hazard and Haapio acknowledge, it is an ontological problem, albeit in the computer science sense. The number of things contracts are capable of touching in the real world is boundless; the question is whether “it is possible to make a robust, extensible templating system from only a few <<classes>> of things.”
150
Id. at 5.

The goal, then, is to use open-sourced code to create document assembly systems consisting of standardized and searchable structures and “prose objects,” namely pieces of contract language humans can read but which have been “codified” at the appropriately useful level of granularity—phrases, sentences, paragraphs, or issues.

151
Id. at 6-7.
Hazard and Haapio observe, “The system of prose objects is not <<intelligent>>. Prose objects are static; they do not reason; they are nouns, not verbs. But these unintelligent prose objects provide anchors for many forms of intelligence. First is human intelligence, such as commercial or legal expertise.”
152
Id. at 7.
Machines will aid more and more in the assembly of prose objects into contracts, but people will still be the assemblers, because people are the ones who do the ultimate deciding. No doubt this would make the assembly of contracts more efficient. To put the effort in the context of the themes I have expressed here, the long-term goal would be, even in the context of complex bespoke agreements (like the ones described above), to push more and more of the granular chunks of prose along the continuum from mere mapping to universal collective status attribution. Opportunistic contracting parties might still debate meaning, but the amount of text subject to that debate would shrink. How that might play out over time is beyond me, but I am aware of studies of the evolution, even without this kind of automation, of common terms in certain specific kinds of contracts, such as public company acquisition agreements.
153

John C. Coates IV, Why Have M&A Contracts Grown? Evidence from Twenty Years of Deals (Eur. Corp. Governance Inst. Working Paper Series in Law, No. 333/2016, 2016), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2862019.

The stumbling block to complete smartness, however, and the reason for the persistence of dumb contracts, is the gap between two different worlds. On one hand, there is an objective, observer-independent, and deterministic “smart contract” universe, the components of which can be represented in an ever more elementary logical relationship, but whose regress is finite. On the other hand, without beings who have a subjective and self-aware point of view on the objective world (for the time being, humans, but perhaps someday self-replicating automatons) and who have subjective desires they want documented in those contracts, there would be no need for the smart contracts. One hallmark of those beings is the capability of perceptual or theoretical judgments of what is or will be or what ought to be, both descriptively and normatively. Trying to find the Archimedean fulcrum of that subjective judgment leads one down an infinite regress.

When it comes to smart contracts, push will come to shove when the regress gets down to the level of the elementary logic gates, if not before. Each logic gate inside the digital machine that operates the increasingly smart digital contract will still only be able to release “yes/no” outputs for each set of inputs. As long as subjective, self-aware agents are employing the smart contracts, at least at the theoretical extreme, one or more of them is going to have to keep selecting the algorithms that allow the discrete and finite world of computation to map, objectively and deterministically, onto the continuous, infinite, and time-dynamic real world. Moreover, to the extent that norms other than those rules embedded in the logic still operate, those agents will be able to exercise extra-legal judgment, namely, deciding the “ought” rather than the “is” of any non-trivial matter, particularly when the “ought” involves not enforcing a contract right that the logic gates of the smart contract say exists.

Permit me just a few more paragraphs of speculation here at the end about those agents who would be doing that deciding, informed by recent speculations of the renowned neuroscientist, Michael Gazzaniga,

154

Gazzaniga, supra note 73. He is the director of the SAGE Center for the Study of the Mind at UC Santa Barbara, the president of the Cognitive Neuroscience Institute, and the founding director of the MacArthur Foundation’s Law and Neuroscience Project. He is widely cited in the legal academic literature on the subject of criminal responsibility.

on subjective and self-referential consciousness as a product of physical neurons constituting the human brain. Here is his thesis. While we still do not understand how the brain’s neurons create our sense of personal consciousness, the answer is not going to lie in more and more granular reduction of biological processes to the deterministic assumptions of classical or quantum physics. Rather, the idea that needs to be borrowed from quantum physics is complementarity: some things “have complementary properties that cannot be measured, and thus known, at the same time.”
155
Id. at 171.

For living matter to be conscious, first it had to have the ability to self-replicate and evolve over time.

156
Id. at 181.
For a system to self-replicate, something meaningful as the “self” had to have developed. The replicating system needed to have symbols to describe what “it” was, a means to translate those symbols into a plan for replication, and the means to undertake the actual replication. Additionally, the system needed to be able to describe the parts of itself that were doing the describing, the translating, and the replicating.
157
Id. at 193-94.
That is in contrast to the study of systems in physics, where the observer’s act of measuring and recording the system stands outside the system. There, what the observer does to identify data within the system, to measure it, and to interpret it, is not governed by the deterministic and timeless laws of physics. The act of measuring, of identifying the initial state, is itself arbitrary and irreversible.
158
Id. at 183.

To achieve subjective “selfness,” biological systems evolved to use certain molecules (the patterns of nucleotides in DNA) as the carriers of arbitrary symbols for replication.

159
Id. at 184-85.
Moreover, the meaning of those arbitrary symbols as code or map or pattern for building amino acids and then proteins could never be understood merely by reducing the physical molecules to their component atoms, particles, and sub-particles.
160
Id. at 186-87.
In short, the relationship between genotype (the DNA symbols carrying the instruction) and the phenotype (the physical characteristics of the living matter) came to be governed by a set of rules—the code. In living matter, it is a “semiotic system … a triad of signs, meanings, and code that are all produced by the same agent, i.e., by the same codemaker.”
161
Id. at 187.
A computer, even one capable of learning through neural nets, is not a semiotic system because the ultimate codemaker is a programmer who is not part of the system.
162
Id. at 188.
Cells, on the other hand, while not self-aware, are semiotic systems because they contain the components by which they program (and reprogram) their own replication via the code.
163

Id. at 190, quoting Marcello Barbieri, Biosemiotics: A New Understanding of Life, 95 Naturwissenschaften 579 (2008).

The upshot is that cell replication and evolution, while following the deterministic laws of physics at the molecular level and down, came to be governed from the cell level up by a system “of symbolic information (the nucleotide sequences) controlling material function (the action of enzymes), linked by a rule-governed code.”
164
Id. at 192.
That system, unlike the underlying components of the cell molecules, was and is not reversible and deterministic as required by the physics models used to understand them.
165
Id. at 190-91.

“Semiotic closure” was the key step toward the evolution of consciousness. “The closing of the semiotic loop, the physical bonding of the molecules, is what defines the limits of the ‘self,’ the subject, in ‘self-replication.’”

166
Id. at 193-94.
Gazzaniga says, “I am not suggesting that single cells are conscious. I am suggesting that they may have some type of processing that is necessary or similar to the processing that results in conscious experience.”
167
Id. at 197.
Rather, the biological development of self-awareness, of consciousness, the study of which is still in its infancy, needs to be explored not as a matter of the deterministic laws of physics but as a higher-order matter of biological systems that use symbolic information to describe themselves.

So, what, according to Gazzaniga, are the biological sources of consciousness in the brain? Consciousness, unlike other mental or physical functions, does not arise from a particular area within the brain.

168
Id. at 201-02.
It must, however, be the result of a physical process within the brain. “There is no ghost in the system, and the physical structure must constrain all the lawful dynamic processes of construction following Newton’s laws.”
169
Id. at 194.
Gazzaniga proposes “that what we call consciousness is a feeling forming a backdrop to, or attached to, a current mental event or instinct…. It is the result of a process embedded in an architecture [of the brain].”
170
Id. at 106.
The critical aspects of that architecture are (1) all brains, from those of worms to humans, are modular in that some areas serve some functions and some serve others;
171
Id. at 83-96.
(2) the connections between the modules are complex;
172
Id. at 93-95.
(3) there is no specific “consciousness module” because people retain consciousness even when some or many modules are lost by injury or stroke;
173
Id. at 103-06.
(4) the brain has functional “layers,” both within the sub-cortex and the cortex, that have evolved over time;
174
Id. at 118-22.
(5) the sub-cortical brain areas arose earlier in the evolutionary process, are common to all mammals, and include the functions of the subcortical limbic system that drive animals to engage in survival-like behaviors (e.g., food, shelter, mates, safety, self-protection);
175
Id. at 145-46.
and (6) “the endless fluctuations of our cognitive life, which are managed by our cortex, ride on a sea of emotional states, which are constantly being adjusted by our sub-cortical brain.”
176
Id. at 135.

All of this leads Gazzaniga to the final link in his speculation. Instinct, per William James, is “the faculty of acting in such a way as to produce certain ends, without foresight of the ends, and without previous education in the performance.”

177

Id. at 232, quoting William James, What is an Instinct? 1 Scribner’s Mag. 355 (1887).

We should think of consciousness as we do the emotional instincts, like “anger, shyness, affection, jealousy, envy, rivalry, sociability, and so on” we share with other animals.
178
Id. at 231-36.
Higher-level mental states that arise in the cerebral cortex interact with instincts arising in the sub-cortex to produce complex behaviors.
179
Id. at 232-35.
All of that together produces what we perceive as consciousness. Moreover, Gazzaniga combines his neuroscientific viewpoint with William James’s insights for a compatibilist view of free will:

To [James], a complex behavioral state can be produced by varying the combinations of simple independent modules…. James’s stance is clearly stated: “My first act of free will shall be to believe in free will.” This proclamation is consistent with the idea that beliefs, ideas, and thoughts can be part of the mental system. The symbolic representations within this system, with all their flexibility and arbitrariness, are very much tied to the physical mechanisms of the brain. Ideas do have consequences, even in the physically constrained brain. No despair called for: mental states can influence physical action in the top-down way!

180
Id. at 235.

At the risk of revealing my own confirmation bias, Gazzaniga’s thesis lines up with two of my own long-held beliefs as applied to smarter and dumber contracts (among other things). First, there is something fundamental and irreducible about the subjective-objective dichotomy. To incorporate elastic business language (which may deliberately be vague or elastic) into the law, much less into a computer program that anticipates both business judgment and legal outcomes, is to try to bridge the irreconcilable gap between the subjective and objective. The idea of objective law achieving subjective harmony is an oxymoron or a paradox. Law works because it is objective. There is no “intersubjective reality,” no “meeting of the minds,” except to the extent there is no disagreement about what the artifact, whether dollar bill, contract, or law, means. When the agreement is universal, the artifact is a social fact. Searching for an immanent intersubjective reality when the two parties disagree about the artifact is a fool’s errand. There is subjectivity, there is objectivity, and never the twain shall meet. In contract law, when there is a disagreement between two parties, a third party may decide who wins. If the parties can work it out intersubjectively, they will. If not, they have surrendered the right to have the final say in the matter.

Second, it follows that, if consciousness is an instinct, then so is deciding. For the time being, silicon brains cannot deal with paradox, quantum complementarity, or the subjective-objective dichotomy, unless one of us human brains programs an approximation of it. Resolving the infinite regress of judgment by deciding is more like acting than thinking. It is thus beyond expression in either the logic of natural deduction or the even more precise syntax of computer code.

181

Jeffrey M. Lipshaw, Dissecting the Two-Handed Lawyer: Thinking Versus Action in Business Lawyering, 10 Berkeley Bus. L.J. 231 (2013); Lipshaw, supra note 136, at 124-37.

“In the end, the moment of judgment, of decision and choice, is not a matter of scientific reduction. I am not sure what it is. All I can reach for are metaphors like my leaping out of an airplane, taking the plunge, or, more classically, Kierkegaard’s leap of faith…. Evaluation is a thought process. But not to decide is to decide, not to choose is to choose, and not to act is to act.”
182
Id. at 130-33.

That seems consistent with Gazzaniga’s conception of the roles of the cortical and sub-cortical regions of the brain in producing consciousness. My contemplation occurs in the cortex, and perhaps that is capable of being modeled in computer code. But the will to act is mammalian if not reptilian. It is sub-cortical and instinctual. Deciding, particularly when situation sense tells us to ignore the deductive output either of biological or silicon brains, is hard, but it is still what we humans can do better than machines. It is another reason why dumb contracts will persist.
