Cite as: Leith P., “The rise and fall of the legal expert system”, in European Journal of Law and Technology, Vol 1, Issue 1, 2010.
"To build expert systems is to attempt to capture rare or important expertise and embody it in computer programs. It is done by talking to the people who have that expertise. In one sense building expert systems is a form of intellectual cloning. Expert system builders, the knowledge engineers, find out from experts what they know and how they use their knowledge to solve problems. Once this debriefing is done, the expert system builders incorporate the knowledge and expertise in computer programs, making the knowledge and expertise easily replicated, readily distributed, and essentially immortal." [2]
Those very few of us who were critical of the rise of legal expert systems in the early 1980s probably wonder, in idle moments, whether there is any possibility of rejuvenating an approach which was once multifarious and is now obscure and esoteric. Is it possible that, after rising and falling, the legal expert system research programme could rise again? What were the conditions which gave impetus to the field, and could they be repeated? In this article I want to return, with a personal viewpoint, to the rise of expert systems and to why - despite their failure - the appeal of commoditising legal expertise continues to allure the unwary.
The 1980s was the decade of the legal expert system and also the decade in which my research career began. The idea of such systems developed from work in other fields (particularly medicine, as with Mycin [3]) into a whole host of legally oriented projects, some of which became publicly accessible (Susskind and Capper's The Latent Damage System [4]) and many of which were described but never became available for public testing. By the 1990s projects were still underway, texts were being published to describe various other systems (Shyster by James Popple, for example [5]), and many of the earliest researchers in computers and law had moved into the expert system approach. The realisation had grown, though, that law was proving more difficult to mould into an 'expert system' format than the early researchers had believed.
For much of the 1980s lawyers appeared to be keen supporters of the idea of legal expert systems. For example, Capper, who co-developed the Latent Damage System - though having been a programmer - was a well-respected substantive lawyer. Others developed such systems in law schools. It seems likely that lawyers' philosophical support for the notion of a legal expert system encouraged other, more technical researchers to believe that such systems would work. By 2000, though, most had given up on the term 'legal expert system' and revised their expectations downwards under the descriptor 'decision support system'; but - of course - some continue to believe that we can produce systems which encapsulate the knowledge and tactical nous of a lawyer in a CPU.
The enthusiasm for the new area - particularly in the early 1980s - was substantial, and this author was a reasonably early convert to the idea, starting a PhD project in around 1981 in a department of computer science. Quite quickly, though, critical insight developed and my position became much more sceptical of the value of expert systems in law. Such scepticism was not at a premium during the 1980s - in a kind of gold rush of research funding for any project which had the requisite buzz words and made sufficiently grandiose claims, criticism was seen to be letting the side down.
One underlying objection I had to the expert system concept was to the notion of being able to formalise knowledge through some logical or semi-logical formalism. A formalism is, of course, essential in computing (the basic foundation of the subject being "program = algorithm + data"), and the difficulty of formalising legal knowledge suggested to me that it could not be handled as simply as proponents argued. My dissertation basically argued that point but, being undertaken within a computer science department, it also required a program to be produced.
Dissertation completed but not examined, I moved to a law school, encouraged by Colin M. Campbell who was an early advocate of the use of computing in law. Campbell had a relatively radical jurisprudence group around him - e.g. Ray Geary, Stephen Livingstone, Bob Moles, John Morison, and Malcolm Woods - and their sociological perspective mirrored my own. They were all critics of Herbert Hart and his The Concept of Law, which I too had found fault with in my PhD studies. [6] However, what became clear to me through discussion with my new colleagues was why lawyers of the time were so keen on expert systems in law: they had been brought up in the context of a simplistic rule-oriented view of law of the sort promoted by Hart, and such a perspective neatly dovetailed with the approach used in rule based expert systems. In effect, there was a culture in law of denying the complexity of law. Those who teach in law schools today may be surprised that, 25 years or so ago, most law staff were not socio-legally or contextually inclined. They often taught part-time, did little research, and what research they did do was of the case-notes format. Hart, to this group, seemed perfectly fine in terms of legal philosophy.
The examination of my dissertation did not go well and I was required to make amendments, the most major of which was to deal more fully with logic programming in law, which I had dismissed in a page or two. A school of programming had developed around Robert Kowalski at Imperial College based on the language Prolog (programming in logic), which was being heavily promoted as 'the language of the future', and many appeared to be persuaded. Prolog became extremely influential in the UK and was seen as offering a core methodology for expert systems, particularly legal expert systems, through the group's work on formalising the British Nationality Act. It was, in effect, perceived in the UK as the de facto standard for this kind of legal application and was heavily supported and funded by Alvey.
One of the examiners of my dissertation, it unfortunately turned out, was keen on Prolog and believed that I had dismissed it too lightly, and the examiners required amendments. Armed with my newly expanded legal understanding, I produced the extra work, and also published it as Fundamental Errors in Legal Logic Programming. [7] The paper was a strident critique of logic programming and became relatively high profile, with discussion of its views in the IT section of The Guardian newspaper. [8] As an indication of how heated the argument was, The Computer Journal, which had published the paper, felt the need to distance itself from it. In part this was because there was so much at stake: the artificial intelligence movement had been devastated in funding terms by the Lighthill report in the 1970s. [9] By the 1980s, the situation was reversed, with astronomical amounts of funding available to AI researchers via the Alvey Programme. Little wonder criticism of funded projects was such a sensitive matter. But it was also that - to me - Prolog had become the tool of an ideological movement, and the tone of my paper was perhaps less measured than it might otherwise have been. In his hagiographic portrait of Kowalski, Marek Sergot described the view from within the group of my attacks - basically suggesting that I was wrong on most or all counts - but what Sergot's portrait lacks is an honest account of what happened to logic programming after the encounter. [10] My view is that the intellectual and funding bubbles were burst and logic programming was no longer seen as a formalism capable of living up to its hype, as Kowalski at last appeared to accept by 2001:
In its heyday, many of us thought that LP [logic programming] would provide new foundations for all of Computing. It would reconcile declarative and procedural representations of knowledge, unify programs and databases, include programs and program specifications, and encompass sequential, parallel and concurrent models of computation. … But we failed to live up to some of our most ambitious promises. We failed, in particular, to convince programmers that LP languages should replace conventional programming languages. To some extent, standardisation on languages such as C++ and Java can be held partly to blame. But standardisation doesn't explain everything. If LP were as good as we believed, it would have occupied the position that Java occupies today. [11]
It had not just been on a technical level that the logic programmers had grandiose expectations; they believed that information itself was best processed by logic (and thus logic programming):
There is only one language suitable for representing information - whether declarative or procedural - and that is first-order predicate logic. There is only one intelligent way to process information - and that is by applying deductive inference methods. [12]
Unfortunately for the development of the field of legal technology as a whole, for much of the 1980s research funders believed this nonsense and flung money at the Imperial College team and their imitators.
By 1990 my interests were moving elsewhere - partly because it was difficult to do any other kind of technical project within a law school when funders were only interested in a technology which I considered had failed. The book Formalism in AI and Computer Science was, in effect, my farewell to discussion of properly technical issues (until I recently returned to discuss invention in software patenting [13]) as I began - by a process of osmosis - to become more interested in law subjects than technical developments, yet always propelled by the underlying interest in legal information (e.g. The Barrister's World was initiated from wondering what legal research lawyers actually do [14]). My work provided a clear and early opposition to the idea of legal expert systems, yet it has almost disappeared from view. It was rarely cited, even by those who had read it and followed in my path (including one of the examiners of my dissertation [15]), but that, of course, is the reward for most published work.
This short article is something of a re-run of the old arguments from the 1980s. Ideas always have a tendency to return with a vengeance, and there are certainly indications that the idea of a legal expert system has not disappeared entirely.
A relatively simple idea underpins the notion of a legal expert system: that one can take rules of law, mould them into a computer-based formal system, and advice will come out the other end. It was not uncommon to hear funders of research projects in the 1980s assert that to build a legal expert system, one had two basic and essentially simple options:
translate legislation ("the law") into some formalism and add a software interpreting mechanism as a front end for the user;
take a group of experts off for a few days and get them to lay out the relevant rules of law, which can then be moulded into a formalism by a non-expert and, once again, add the interpreting user interface (a minimal sketch of the rule-plus-interpreter arrangement both options assume follows this list).
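To make plain how thin this model is, here is a sketch of the kind of 'rules plus interpreting mechanism' such proposals envisaged: a toy rule base of hand-translated conditions and a naive backward-chaining interpreter which asks the user for any fact it cannot derive. The rule, the predicate names and the dialogue are entirely my own invention, offered only as an illustration of the genre rather than as any actual system of the period.

```prolog
% A toy 'legal rule' expressed as a conclusion and its conditions.
% The content is invented purely for illustration.
:- dynamic known/2.

rule(entitled_to_damages, [duty_of_care_owed, duty_breached, loss_suffered]).

% Backward-chaining interpreter: derive a goal from the rule base,
% or ask the user when no rule concludes it.
prove(Goal) :-
    rule(Goal, Conditions),
    prove_all(Conditions).
prove(Goal) :-
    \+ rule(Goal, _),
    ask(Goal).

prove_all([]).
prove_all([Condition|Rest]) :-
    prove(Condition),
    prove_all(Rest).

% The 'interpreting user interface': ask once, remember the answer.
ask(Fact) :-
    known(Fact, Answer), !,
    Answer == yes.
ask(Fact) :-
    format("Is it true that ~w? (yes./no.) ", [Fact]),
    read(Answer),
    assertz(known(Fact, Answer)),
    Answer == yes.

% ?- prove(entitled_to_damages).
```

Everything difficult about law - what counts as a duty of care, how a breach is established, whose account of the facts prevails - is simply pushed into the unanalysed questions put to the user, which is much of what the rest of this article is concerned with.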
It is as if Occam's Razor has been applied to the whole confusing business of 'what is law' and we are left with an elegant core notion which can be implemented by technicians. The model is thus of a core of rules and a logical interpreter which parallels legal advice giving. This, I argue, was partly hubristic, but it is also a relatively accurate description of the non-critical perspectives around law schools during that decade. Indeed, such a perspective still holds an attraction for the technician and the research funder. [16]
The promise being made in the 1980s was that cheap, good quality advice would allow us to discard the need for expensive experts, or to leverage their productivity further than the traditional 'fee earner' basis could. Law could be made democratically available to all, and hence the research goal - it seemed to me - was almost advocated as something that was socially valuable (rather than a waste of funding on seeking impossible goals). Of course, commodification was in the background, with the notion that once expertise was removed from the expert, the expert was essentially discardable - and the product of their collaboration with the technicians could be sold and resold to a wide audience. This not only appealed to the commodifier of the expert's knowledge; some legal experts also perceived that they could increase their value through technology.
There are two current issues which relate to the expert systems project:
First, there are arguments that expert systems in law were not the failure that I suggest and that they do indeed work. My own view is that this is inaccurate, and that any basic critical analysis of a system which is proposed as a 'working legal expert system' will quickly find that the system is not used, is over-promoted, and is generally not much different from a list of boxes to tick.
Second, could the field be rejuvenated and attract funding in large quantities again? Possibly, but my feeling is that the hubris of the 1980s was linked to a view of law which has gone, both from law schools and, to an extent, from the profession. Without that philosophical, tacit and express support to technicians which was available in the 1980s, funders would find it difficult to spend such large amounts on what is now seen as a relatively bizarre view of law. But that does not mean that no-one believes a revival is possible, only that exactly the same mistakes may not be made again (though that may be my natural optimism).
In the next section, a speedy review of the elements which comprised the failure of the expert systems project is given. Finally, I will discuss whether a revival is conceivable and what might be the framework for this.
There can be little doubt that the optimism of the field which we saw in the early to mid 1980s disappeared at some point between then and now. But rather than trying to find a date, those looking back at the history of the field should perhaps ask: why was there optimism, was there ever any success, and - if, as I suggest, there was none - why was there such a huge extravaganza of funding for expert systems research in a field (legal technology) which has been practically starved of funding in every decade outwith the 1980s? I tried to answer these questions in my Formalism in AI and Computer Science, suggesting that the focus on the machine rather than the user had led technicians into fields which they little understood, and I still believe that was the underlying reason for the decade. Some of the points raised in that text are outlined in the next sections.
The primary reason why the expert systems project failed was that the ambitions were so difficult to achieve. What was being proposed was really the robotisation of lawyers - that their skills and knowledge could be easily formalised, and that advising was at heart a quite simple operation: if you knew the rules, then you could give advice. [18] This, unfortunately, proved wrong. It was a view which developed from the perceived success of early AI expert system programs, where the argument ran that if they worked in such complex domains as medicine or mineral exploration, they should also work in the 'easier' field of law.
There was indeed laboratory success with programs produced in the late 1970s such as Dendral, Mycin and Prospector. In the laboratory it is clear that some of the programs being designed could be used in a predictive manner. They did, as the terminology of the decade suggested, 'reason like experts' - albeit in a very limited and constrained manner. Certainly Mycin was the most heavily discussed of the early programs and there is no reason to doubt that - in the very small area in which it worked - it could help decide which antimicrobial drug should be given. Many other programs were less well analysed. Indeed the zeitgeist seemed to be (as can be seen from the papers at AI conferences) that the researcher had to describe what his program was going to do and how intelligent and useful it would be even before it was programmed. Programs were also frequently heavily discussed without, it appears, any link to reality. For example, Duda, a member of the Prospector team (whose system was never actually used in practice), wrote:
The widespread interest in expert systems has led to numerous published articles that cite impressive accomplishments - practical results that justify much of this interest. Having been primary contributors to the PROSPECTOR project, we are particularly sensitive to comments about that program, such as the one that appeared in a recent book review in this journal that referred to ". . . PROSPECTOR's hundred million dollar success story". Unfortunately, this particular statement, which is similar to others we have encountered elsewhere, has no factual basis. [19]
Dendral's team even suggested that the lack of feedback they got from users was an indication of successful use (rather than - as most producers of programs would realise - a lack of use of the program) by writing:
Many persons have used DENDRAL programs . . . in an experimental mode. Some chemists have used programs on the SUMEX machine, others have requested help by mail, and a few have imported programs to their own computers . . . . Users do not always tell us about the problems they solve using the DENDRAL programs. To some extent this is one sign of a successful application. [20]
And MYCIN, too, when it was moved onto the hospital wards, went unused.
It is important to concentrate clearly upon this history. Many hundreds of programs have been produced as 'expert systems', growing from the optimism created by these three programs, Mycin, Dendral and Prospector. Though early, they were important in defining the mood and direction of much work in the 1980s. But did these programs make the move from the laboratory to the world of the user? My own understanding is that they did not, and others, working through the lists of supposedly working expert systems, have found little truth in the claims of 'real-life' use. For example, other researchers have faced the same problem of trying to establish how successful a program has been in application, and how far it remained tied to the laboratory:
… bore little resemblance to the success stories reported in high-priced insider newsletters. As of this date, we find that there are very few operational systems in use worldwide. ... The literature, including the expensive and supposedly insightful expert systems newsletters, has consistently overstated the degree of expert systems penetration into the workplace. However, even the outright failures have not dissuaded organizations from pursuing the technology; they have simply categorized previous efforts as learning experiences. [21]
The lack of critical assessment of early programs; the optimism that AI had finally found the techniques to confound their many critics; and a marketplace which foresaw huge profits arising from these new systems all led to a context in which developments appeared to have moved out of the experimental mode and into production mode. However, this was far from the case - very few programs ever were tested in any meaningful way or even made available for scientific assessment by others.
Logic was central to the European vision of legal expert systems, in large part due to the influence of the Imperial College school based around Robert Kowalski and the programming language Prolog. [22] This was a school which evidenced a rather gung-ho attitude to the usage of logic and to the complexity of applying logic to law. They held, to my view, a number of serious misunderstandings about logic itself, chief among which was their belief that counterfactuals could represent legal causation. A counterfactual here is simply the notion of logical implication, which was, to the Prolog team, a simple if … then … statement. In logic programming, legal expertise could be represented by this format:
x is a British citizen
 if x was born in the U.K.
 and x was born on date y
 and y is after or on commencement
 and z is a parent of x
 and z is a British citizen on date y
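Rendered in Prolog - the Imperial College group's vehicle - such a clause looks something like the sketch below. The predicate names, the date representation and the sample facts are my own illustrative inventions, not the group's actual British Nationality Act encoding:

```prolog
% Illustrative Prolog rendering of the clause above; predicates,
% date handling and facts are hypothetical, not the original encoding.
british_citizen(X) :-
    born_in_uk(X),
    born_on(X, Date),
    after_or_on_commencement(Date),
    parent_of(Z, X),
    british_citizen_on(Z, Date).

% Sample facts for one hypothetical person.
born_in_uk(alex).
born_on(alex, date(1983, 6, 1)).
after_or_on_commencement(date(1983, 6, 1)).   % commencement taken as 1 Jan 1983
parent_of(pat, alex).
british_citizen_on(pat, date(1983, 6, 1)).

% ?- british_citizen(alex).
% true.
```

Note how everything contentious - what it is to be 'born in the U.K.', how parentage or the parent's citizenship is established - survives only as an unanalysed predicate to be matched against asserted facts.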
Setting aside whether this is really a representation of how legislation is interpreted by the judiciary, we should concentrate upon the logicity of this. It asserts that a simplistic procedural/algorithmic process can be used to reason about law and to arrive at conclusions. When we look at the logical core of this kind of statement, we find that what appears to be a very simple form of logic is in fact one of the most difficult to resolve. In her Philosophy of Logics, Haack points to the confusion over interpretation of seemingly simple connectives. She writes:
Of the readings 'not' (of '-'), 'and' (of '&'), 'or' (of 'v') and 'if … then -' (of '→'), Strawson has remarked that 'the first two are the least misleading' and the remainder 'definitely wrong'. [23]
Logic programmers may well believe that this is a minor problem (that their use of 'if … then' is perceived as a definitely wrong interpretation of the logical construction they are using), but it indicates to the critic a lack of self-awareness about the tools being used. In effect, we have a piece of legislation which must be interpreted in the real world in a social context being gutted and represented by a formalism which other logicians believe to be incorrectly interpreted, without any discussion by the logic programmers of why they have chosen these tools and why their logical interpretation is correct.
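For the non-logician it is worth spelling out the textbook reason behind Strawson's verdict (a general point about classical logic, not anything specific to the Imperial College work): the conditional of formal logic is material implication, which is true whenever its antecedent is false,

$$p \rightarrow q \;\equiv\; \lnot p \lor q$$

so that, read materially, 'if x was born on Mars then x is a British citizen' comes out true for every x not born on Mars - a vacuous truth of a kind no legislative draftsman intends by 'if … then'.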
What appears to me to have really been at the heart of logic programming was not logic per se, but a kind of cognitive psychology of reasoning following the Newell and Simon path. Indeed, Kowalski has recently written on the interpretation of conditionals but, once again, a psychological rather than a logical approach is taken. [24] Logic programming was psychology dressed up as logic - psychology dressed up as a means to handle information. Little wonder that there was little real interest in understanding the logical limitations of the approach.
Logicians have frequently used law as a test bed for their various systems, and it seems to me that logic programming was just such a usage - there was little attempt to really understand how law operated or was interpreted, and the use of logic programming was more an attempt to promote a specific view of logic (as 'reasoning') than to solve real problems in producing systems to support lawyers' working lives, which I suggest should have been the real goal of the IT and law applications movement in the 1980s.
Edison is frequently reported as having said that he would never invent a device unless there was a market for it, a position that arose after the failure of his patented voting machine. [25] Such an attitude implies that one has an understanding of the needs of the users who will make up the market. Was there ever a marketable need for expert systems? It is not clear that there ever was - there was certainly an early belief that non-experts would be able to utilise systems which gave them legal advice, for example, but that quickly evaporated, partly due to liability issues which scared developers off selling legal and other advisory systems to the public. Systems such as Mycin were clearly meant to be used on the ward by physicians: doctors would improve their drug administration (and lower the use of incorrect antibiotics) rather than prescribe without knowledge. Legal expert systems, too, were aimed at the expert user - for example, the Latent Damage System [26] covered six pages of dense legislation, and CASE (below) was directed at magistrates.
What was missing was proper analysis of user needs - vital in any other area of computer implementation, but apparently viewed as unnecessary by the AI community. This produced a mismatch between what the experimenter believed the user wanted and what the user would actually use. Shortliffe discovered that Mycin's failure to move into full use was in large part due to the ability of experts to decide how they wished to carry out their tasks - if the tool was not directly helpful to how they wished to work, then it would simply not be used. [27] This mirrors legal usage - why would a magistrate want to use a formalised system which told them what to do when, no doubt, they feel they have the personal insight to use their creative and judicial abilities? Ministries of Justice have always had problems in controlling the judiciary - particularly in sentencing, for example - so the expectation that judges would follow a computer-derived sentencing structure was optimistic. Yet Bainbridge's view as the designer of a system for magistrates was:
A computer system can help by handling the rules in a rational structured way. Without more, this alone should improve the quality of the decision-making, especially if other information concerning the rules, their scope and application is available to the sentencer as and when he needs them. For example, if the sentencer is using the rules applying to a probation order, he should, then and there, be able to inspect the relevant statutory provisions and subsidiary information such as summaries of appellate cases relating to probation orders. Structuring the rules and the information to back up those rules in this way frees the sentencer from the need to remember the authorities. This simplifies the process from the sentencer's viewpoint and makes it possible for him to concentrate on those areas which, rightly, fall within his discretion, such as the aims of sentencing. [28]
Bainbridge believed that his system offered flexibility and could handle the discretion of the sentencer, yet the system never moved from the demonstration stage to actual use, which makes one wonder whether his representation of the needs of Magistrates was accurate. Of course, in the early 1980s when the expert systems movement began there was a scarcity of insights into exactly what lawyers did with law, but there was certainly a good understanding of what Magistrates did and what comprised the difficult parts of their work. [29] The difficult parts did not appear to be technical law, and indeed Carlen's classic study of the Magistrates court appears to suggest that the real purpose of these courts is not the formal handling of legal points, but social control.
Since that period our understanding of how lawyers work has developed much further, along with a growing interest in how judges use law, and so we can more fully understand what it is that they need. This research consistently indicates that their needs are not met by a system which simply lists rules and indicates the order in which they were triggered.
One of the assumptions about law which was evidenced by the expert systems movement was that there was a core meaning to law - something that could be found and formalised. Until the 1980s it was a not uncommon view in UK law schools that there was indeed a core to law which could be found and agreed upon - and once agreed it could be put into the expert system's rule base. Indeed, Herbert Hart built a successful intellectual career upon positing that most law is well agreed (excepting the 'penumbra of doubt'). This was a perspective on law which was frequently used by expert system developers to assert the fundamental solidity of their research programmes. [30]
However, when we actually get out and investigate how lawyers in practice view this commonality, we see a different picture. Lawyers see that substantive law is only one small part of the tool kit which they require in order to act for clients. In research for The Barrister's World and the nature of law, [31] John Morison and I found that procedural knowledge was viewed as more useful, and thus better regarded, than substantive law - because procedural knowledge told you how to do something, which means that clients' wishes can be moved along. Further, the difficulty of knowing and presenting facts was never far from the mindset of the barrister:
You can never ignore substantive law issues ... you have to keep a weather eye on the facts ... but however fancy your arguments on promissory estoppel, if the guy is lying ... You always have to keep a firm foot on the ground. You need to know the substantive law ... but it's tailoring your approach to what is happening on the ground. ... If you are going to court every day with five or six authorities, the judges are going to get pretty damn sick and pretty soon the solicitors aren't going to want to know you because you are not getting to grips with things as quickly as they .. When you go down to the County Court here and you see three days work listed for one day, the people who are going to get on, and ultimately satisfy their clients, are the people who are going to be brief but to the point ... [383] [32]
And where law is important is where there are two sides prepared to argue, rather than agree the law:
... the one area where I do refer to cases is in licensing and I do that a lot ... I never ever go to court without Paterson … never ever ... because there are a hell of a lot of authorities ... its all statute law anyway and every single word of the statute has been defined in various authorities ... it is the ultimate cut-throat world, licensing ... because it is all commercial, there is no emotion involved in it, it is all to do with people making money somehow and therefore they are going to take every tiny little point they can ... they will pay the silks an awful lot of money to argue a point ... and things nearly always go to appeal if it goes against a brewery, so you have got to get it right at the beginning ... [389]
And, of course, there is always the problem of the judge:
Obviously there are some cases that are so clear-cut that it doesn't make any difference, you are going to lose the case or win the case no matter who the judge is. But there are very, very few cases like this... The success of almost every case is dependent on the judge... there are certain judges who if you knew in advance were going to hear your case you could virtually determine what is going to happen to it. [446]
Putting this all together, we see an explanation for why so many cases settle at the door of the court - the confidence that one might have had in one's case weakens as the advocate points out the strengths of the other side's case and the weaknesses in your own. It also implies that the expert knowledge which lawyers have is of a different sort to that which the expert systems designers were promoting as the knowledge base of their practice-oriented systems.
Law is agonistic - a position develops from confrontation with another position, usually one body of evidence against another but, when money is at stake, one view of law against another interpretation of that same law. Of course, as the American Realists pointed out (and others have followed), the conflicting empirical truth of the flexibility of law has existed at the same time as law is discussed as though it were unchanging and concrete. Proponents of legal expert systems have picked up on the ideology of law as unchanging and missed the observable fact that law is ever changing and constantly being interpreted. I do not mean only that law changes gradually over time - it certainly does that - but that it also depends largely upon the context in which it is used and how it is used to produce a narrative from the interpreted facts of a situation (and this evidences continual micro-changes, both developmental and revisionary). Thus the weakness of the expert system's formalising process is that it is static and conservative, while real law is dynamic. Can one really imagine two users of expert systems going into the courtroom and waving their respective printouts at the judge, each claiming that theirs states that they should win? This is a ludicrous picture, but it is essentially what was on offer from the legal expert systems movement.
The law schools of the 80s certainly contained many teachers who would have viewed law as relatively clear and consistent, and indeed one reviewer of The Barrister's World was somewhat shocked to discover that procedural knowledge was of more value than the subject he had taught for many years. Things have changed, though, with a much more critical and realistic (in philosophical terms) perspective, and it would be difficult to imagine the same kind of support for developments in legal expert systems today. In the early 1980s, lawyers could barely use a word processor and were much in awe of the technocrat. Twenty or thirty years later, they can use word processors and now think they know all there is to know about technology and its control by law. Legal expert systems developers would, one hopes, be much more critically assessed now than they were at the height of the movement.
Logic programming and expert systems generally offer a good illustration of the concepts of 'bandwagon' and 'bubble'. The psychology of the academic in search of funding upon which to build a career is clearly relevant to the whole expert systems episode, but my point is relatively simple and is played out over and over again: academics veer towards funding sources and will fit their research applications to suit the funder's specification. We see this as the AI community now rewrites its AI projects to suit funders who are less keen on the AI approach - XML technology being one such funding source.
There is little happening in the research field of legal expert systems at present. The projects which do appear tend more to demonstrate the poor technical quality of legal expert systems than to show new paths which might allow a move towards success. For example, Openexpert.org is a freely available expert system shell with demonstrations available. Unfortunately, these tend to be on the humorous side rather than the serious, which encourages the visitor to the site to see the project as frivolous. Most of the firms which were selling "expert systems" appear no longer to be doing so, and even the most ardent salesperson from the 1990s appears to have gone quiet - the term 'expert system' is no longer bait which catches clients. One does come across examples of programs which evidence something of the nature of an expert system - web-based programs, for example, which take the user through a series of yes/no questions, such as those at NHS Direct, the UK health advisory site:
Please select an answer. Is your child:
finding it extremely difficult to breathe,
making a grunting noise with each breath,
turning blue or pale around the fingernails, toes or lips, or
unable to talk to you normally?
This is essentially a simple 'decision tree' where each node offers a simple binary (perhaps ternary) option. It has been prevalent in many fields - such as management - for many years and certainly predates expert systems work by a large margin. It is, in any case, a distant relative of the idea of an 'expert system', unless one believes that expertise in a complex domain can be reduced to such a simplistic formalism.
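For comparison, the whole of such a triage tree can be written down in a few lines; the sketch below is a hypothetical rendering in Prolog (to stay with the formalism discussed earlier) and bears no relation to NHS Direct's actual implementation:

```prolog
% A toy yes/no decision tree of the kind quoted above. The symptom
% facts are invented; a real interface would gather them from the
% user's answers rather than hard-code them.
symptom(sam, grunting_with_each_breath).

seek_emergency_help(Child) :-
    ( symptom(Child, extreme_difficulty_breathing)
    ; symptom(Child, grunting_with_each_breath)
    ; symptom(Child, blue_or_pale_extremities)
    ; symptom(Child, unable_to_talk_normally)
    ).

% ?- seek_emergency_help(sam).
% true.
```

There is no 'expertise' here beyond the fixed branches the designer chose to write down.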
However, even though expert systems have all but vanished from the salesman's basket, there are still those in the artificial intelligence field who believe that the goal of commodified expertise is possible. One should never underestimate the power of an attractive idea - that we can anthropomorphise the machine: have it 'reason', 'think' or whatnot. That was what really underlay the notion of the expert system - that 'intelligence' was formalisable, and that the techniques to make intelligence digital were ready, available and easy to use:
The concept of an expert system is attractive. It has connotations of helping those who cannot afford legal advice; of improving the legal process; and of making valuable products available. The combination of democratic goal and economic product is indeed powerful - and these were certainly among the ideas behind the Alvey Programme which supported the expert systems movement. [33] Alvey devoured £200 million of public funds and an almost equivalent amount of commercial funding. [34] It was adjudged to have been a successful programme [35] in terms of organisation, but even its primary advocates felt that it did not succeed in moving UK IT industrialisation along. Brian Oakley, the programme director, chastised the government for its failure to encourage industry to take the developments from the programme and turn them into commodified products for the marketplace. [36] However, observing legal expert systems rather than the various fields of the whole programme, we can see why industry was less than excited about producing products from Alvey-funded research - it did not answer the needs of the marketplace. If the failure found in this one small section was repeated across all the Alvey fields - as I believe it was - then industry was sensible in its dismissal of the Alvey pathway. None of the fields which Alvey was so supportive of are currently core ideas in the world of computing, even some 25 years later (e.g. formally proving programs correct, logic programming, natural language processing), so there must be considerable doubt as to the correctness of the Alvey strategy.
The huge support for the expert systems movement in the UK arose partly because the idea of producing such systems was attractive to the academic IT community, partly because there was a lack of critical perspective on the nature of law within the academic legal community, but also because there was a massive influx of funding from a programme which was to be the saviour of UK computing. The technologies of shipbuilding and engineering had already gone East, and a government worried that IT, too, was about to go in the same direction was keen to ensure that it didn't. Put together, these produced a context in which optimism matched the available funding as everyone rushed to the trough. The trough required those feeding at it to utilise approved techniques and approaches, which - in my view - meant that research was being damaged rather than enhanced. To use another metaphor, the eggs were all placed in one basket.
It is difficult - though not impossible - to see a similar rerun. The notion that government can shape technical development is surely not in fashion - a government which cannot run its own computerisation projects in an efficient and effective manner ought to be more hesitant about its understanding of technology. And, as already suggested, both the law school and legal practitioners would be more hesitant to accept the technician's view of law in future.
Where could we have been instead if that £200 million in public money had been spent on encouraging a wider approach to computer projects? I suspect that work would have been carried out in a much more detailed manner on the needs of the legal marketplace - developments such as legal search engines, citation systems and the like, which had been seen in the 1970s (Abdul Paliwala, editor of this edition, being one innovative researcher in the field) but disappeared in the 1980s and 1990s and have only been returning relatively recently, would have advanced much further. Alvey and its support for the expert systems movement effectively killed off any other approach to computer application to law, and we are still suffering the consequences of that hubris.
[1] School of Law, Queen's University of Belfast, Belfast BT7 1NN, p.leith@qub.ac.uk
[2] Davis, R., 1984, "Amplifying expertise with expert systems", p 18, in Winston P.H., & Prendergast K.A., (eds), The AI Business: commercial uses of artificial intelligence, MIT Press, Cambridge, Mass. My emphasis.
[3] See Buchanan B.G. and Shortliffe E.H. (eds), 1984, Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison Wesley, Reading, MA. Available online at http://www.u.arizona.edu/~shortlif/Buchanan-Shortliffe-1984/MYCIN%20Book.htm
[4] Capper P.N. and Susskind R.E., 1988, Latent Damage Law - The Expert System, Butterworths, London.
[5] Popple J., 1993, SHYSTER: A Pragmatic Legal Expert System, Dartmouth Publishing Company.
[6] See Leith P. & Ingram P., 1988, The Jurisprudence of Orthodoxy: Queen's University Essays on H.L.A. Hart, Routledge. My own contribution to the collection dealt with the rule based framework which Hart offered to the AI/logic community.
[7] P. Leith, 1986, "Fundamental Errors in Legal Logic Programming", The Computer Journal, Vol. 29, No. 6, 545 - 552. I was later told that the Computer Journal only published because one AI reviewer (no friend of logic programming) argued strongly for its usefulness.
[8] "Logic and rules of law", by B. Bloomfield,The Guardian, 28 March 1987. "How the logic of the law is put on trial", by Robert Kowalski,The Guardian, 16 April 1987.
[9] Lighthill, J., 1973, “Artificial Intelligence: A General Survey” in Artificial Intelligence: a paper symposium, Science Research Council. See one example of the response from the AI community in John McCarthy's, Defending AI Research : a Collection of Essays and Reviews (1996).
[10] Sergot M.J., 2002, "Bob Kowalski: A Portrait", in Kakas A.C., Kowalski R.A. & Sadri F., Computational Logic: Logic Programming and Beyond: Essays in Honour of Robert A. Kowalski, Springer. I had a senior contact within Imperial College who, I fear, did not quite agree with the halcyon picture which Sergot paints of the department under Kowalski. Sergot also seems to be in denial about the claims he made about the relationship of logic to law: "Logic provides a natural base for a computer interpretable formalism to express legal rules: law treats large sets of complex rules that have long seemed suitable for logical analysis, and once the law is expressed in some appropriate subset of predicate logic, that formulation can function as a program which interprets the law." Sergot M.J. (1982) "Prospects for representing the law as logic programs", in Clark K.L. & Tärnlund S.A. (eds), Logic Programming, Academic Press, London.
[11] Kowalski R., 2001, “Logic Programming and the Real World”, Logic Programming Newsletter, January.
[12] Kowalski R., 1980, Response to Questionnaire published in SIGART Newsletter, 70, February, Special issue on Knowledge Representation, P44.
[13] Leith P., 2007, Software and Patents in Europe, Cambridge University Press.
[14] Much less than one would imagine - procedural knowledge was prized much more highly, we found.
[15] Stamper R., 1991 “Expert Systems - Lawyers Beware!” in Law, Decision-Making and Microcomputers, Nagel S. (ed), Quorum Books, New York. Stamper had examined my dissertation in 1985/6.
[16] The European JURIX community has continued to publish in this research spirit.
[17] This section is based upon Leith P., "The Judge and the Computer: How Best 'Decision Support'?", in Artificial Intelligence and Law, Volume 6, Numbers 2-4, 1998, pp. 289-309.
[18] Greenleaf G., in 1989, was beginning to demonstrate the legal community's move away from the notion of robotised lawyers, but still felt "[s]een from this perspective, the task of developing legal expert systems is feasible, useful and still just as challenging." “Legal Expert Systems - Robot Lawyers? An introduction to knowledge-based applications to law.” Online at http://austlii.edu.au/cal/papers/robots89/
[19] Duda R.O., Hart P.E. & Reboh R., [1985] “Letter to the Editor”, in Artificial Intelligence, Vol 26, No 3.
[20] Buchanan B.G. & Feigenbaum E.A., 1978, Dendral and meta-Dendral: Their Applications Dimension, Stanford Heuristic Programming Project, Report No. STAN-CS-78-649, at §3.4
[21] Ostberg O., Whitaker R. & Amick III B., [1988] The Automated Expert: technical, human and organizational considerations in expert systems applications, Teldok, Sweden.
[22] See his own history of logic programming - Kowalski R.A. “The Early Years of Logic Programming”, in Communications of the ACM 31(1): 38-43, 1988. See Marek Sergot's, "Bob Kowalski: A Portrait", in Computational Logic: Logic Programming and Beyond 2002: 5-25 which notes that logic programming and Kowalski were "so inextricably linked that it is impossible to disentangle them."
[23] Haack S., 1978, Philosophy of Logics, Cambridge University Press, Cambridge, pp 35.
[24] Kowalski, R., "Reasoning with Conditionals in Artificial Intelligence", in Oaksford M. The Psychology of Conditionals, Oxford University Press, in press. Also available at http://www.doc.ic.ac.uk/~rak/papers/conditionals.pdf
[25] The patent was US 90,646.
[26] P. Capper, and R. Susskind, above.
[27] Shortliffe E.H., 1981, "Consultation systems for physicians: The role of artificial intelligence techniques", in Webber B.L. & Nilsson N.J. (eds), Readings in Artificial Intelligence, Tioga Publishing, Palo Alto.
[28] Bainbridge D. 1990, “CASE Computer Assisted Sentencing in Magistrates' Courts”, in Proceedings of 5th BILETA Conference, British and Irish Legal Technology Association.
[29] Carlen, P., 1976, Magistrates' Justice, Martin Robertson, London.
[30] For a more detailed critique see my contribution in Leith P & Ingram P, 1988, The Jurisprudence of Orthodoxy: Queen's University Essays on H.L.A. Hart, Routledge.
[31] Leith and Morison, above.
[32] Paragraph numbering refers to the online version at www.bailii.org
[33] The report is available at http://www.chilton-computing.org.uk/inf/alvey/overview.htm
[34] A rough equivalent in today's (2009) terms would be £500 million of public funding.
[35] Guy, K. & Georghiou, L., Evaluation of the Alvey Programme for Advanced Information Technology, London, HMSO, 1991.
[36] Interviewed by Mehta A., New Scientist, Issue 1724, 7 July 1990.