Approximate text for Euromath Bulletin, vol. 2, no. 1, June 1996, pp. 13--16.

The Future of Symbolic Computation

Richard Fateman

I looked back at old SIGSAM Bulletins. In vol. 18, no. 2 (May 1984) I wrote a two-page paper on "My view of the future of symbolic and algebraic computation". It appeared in a collection of 18 papers invited by Bruno Buchberger from program committee members for Eurocal 85. Many of the papers seem to me to be rather off the topic of Bruno's question, but a few of them seem to stand the test of time, and might be relevant to your topic. I may be biased, but I think mine is actually pretty good.

Part I. When future is past

To calibrate your clock: the first edition of Wolfram's Mathematica book appeared in 1988, so my paper appeared some four years earlier. I predicted that workstations and personal computers, along with graphics, would put the most powerful computer algebra systems on inexpensive machines. I suggested that impressive systems would combine algebraic manipulation, numerical calculation, plotting, and possibly the communication of results between computers (e.g., a symbolic processor and a highly parallel numerical processor). I questioned whether we had any way of finding parallelism effectively in our small, common symbolic computations, and I expressed concern that parallel solutions to problems that are rare and astronomically hard are uninteresting. I pointed to continuing work on data abstraction and mathematics, and to slow progress in education, predicated on the "next generation" of academics incorporating symbolic computation into courses once the cost was low. I suggested that heuristics, pattern-matching, planning, and artificial intelligence might help with our problem solving, but that there were only a few glimmers of hope, and that further progress toward effective tools would require close cooperation with application specialists.

Here is one set of predictions, likely to offend people, and thus VERY MUCH UNSUITABLE FOR PUBLICATION.

Part II. What next?

The best prediction for the future is to find something that has actually happened in the laboratory and predict that it will spread. We can continue to expect work that was available on research workstations to become available on home computers. This is fairly obvious, because the home computers of today are far more capable machines (display resolution, memory, speed, disk capacity) than the workstations of the past, and this will continue until they are virtually indistinguishable. Indeed, some (few) of my colleagues are now working on Pentium and Macintosh computers because of the greater availability of software compared to the UNIX-based machines of the past.

While research in new algorithms will continue, and it would be nice to predict some breakthroughs, most recent results (post-1975) have been only modest incremental improvements, niche computations, or discouraging theoretical results. The discouraging results are typically lower bounds showing that computer systems are, practically speaking, depending on doubly-exponential-time algorithms. I have been skeptical in the past about the relevance of theoretical bounds because they were typically based on dense-case (unrealistic) worst cases, but some of these results are applicable to sparse cases as well. The key will be to know when NOT to use algorithms such as Groebner basis reduction, not better ways to run the general algorithms. Adaptation to solving specific problems will be critical.
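To make "doubly exponential" concrete, here is the shape of the bounds in question; the particular statements below are standard results (Dubé's upper bound and the Mayr--Meyer lower bound), added here as a gloss rather than taken from the original article. For an ideal generated by polynomials of total degree at most d in n variables, no element of a reduced Groebner basis need exceed degree

\[
  2\left(\frac{d^2}{2} + d\right)^{2^{\,n-1}},
\]

while the Mayr--Meyer construction exhibits ideals whose membership certificates require degree at least

\[
  d^{\,2^{cn}} \qquad \text{for some constant } c > 0,
\]

so the double exponential in the number of variables cannot be avoided in the worst case, sparse or not.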
I think that the more advanced systems of today will reach natural limits as they grow more complex and attempt to encompass more applications. One such barrier (Macsyma faced it first, around 1980) is how to incorporate application-data "assumptions" into computations. It is unfortunate that later system developers did not recognize the source of Macsyma's problems; instead they built variations on the common operational outline of a computer algebra system. I expect that failure to address such problems in the kernel computations of these newer systems will pose a substantial barrier to incorporating a high level of application "understanding" into their programs. Because of this lack of vision, most new systems of today will not transcend the current designs (or even previous designs such as the Macsyma of 1982) in any serious way. It will be easier to build new systems up to the same level of competence as the most advanced systems of today, but not much further.
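To illustrate what incorporating "assumptions" means, here is a toy sketch in Lisp; it is an invented illustration, not Macsyma's actual mechanism. A global assumption database is consulted by the simplifier before it applies a rewrite that is only conditionally valid:

  ;; A toy assumption database: (variable . property) pairs that the
  ;; simplifier consults.
  (defvar *assumptions* '())

  (defun assume (var property)
    "Record an assumption, e.g. (assume 'x 'positive)."
    (pushnew (cons var property) *assumptions* :test #'equal))

  (defun assumed-p (var property)
    (member (cons var property) *assumptions* :test #'equal))

  ;; sqrt(u^2) is |u| in general; it simplifies to u only under the
  ;; assumption that u is positive.
  (defun simplify-sqrt-square (u)
    (if (assumed-p u 'positive)
        u
        (list 'abs u)))

  ;; (simplify-sqrt-square 'x)  =>  (ABS X)
  ;; (assume 'x 'positive)
  ;; (simplify-sqrt-square 'x)  =>  X

The hard part, of course, is not the database but arranging for every kernel routine to consult it consistently; that is the barrier referred to above.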
The most contentious issues, and the ones that will affect the next twelve years, are, in my opinion, based on economics, politics, academics, and personalities. For example, there are now far more computer algebra systems than it makes sense to have. The burden of supporting a system is substantial, and the number of capable support personnel is small. Academic financial support for these activities is limited, typically coming from government grants with horizons of three to five years. Systems without a transition to another support mode (unfortunately, this tends to be commercial in nature) can linger for a while in "freeware space" but usually will atrophy and die. (There are a few exceptions; see below.) Sales revenues govern the commercial enterprises, and I am not aware of evidence that additional entries in this market increase the user population; they merely divide the user population and revenues among competing systems. I do not know whether this market is "elastic" (that is, whether dropping the price will increase sales). I am not an expert on software marketing, but neither, it seems, are some of the vendors! I assume that a clever vendor interested in a highly profitable growth market could find a more lucrative area than computer algebra systems. Perhaps T-shirts or gourmet coffee beans inscribed with Klein bottles. I predict there will be a shakeout, with a few of the commercial systems surviving as viable companies and most academic projects fading. A few will survive in niche markets.

In educational applications, false economies abound. The adoption of local or free systems instead of commercial ones may cost more in support than the software license fees. The institutional view that hardware is real and must be purchased, but that software is insubstantial, inconsequential, and cannot be funded, is simply wrong; nearly all US universities routinely buy software instead of using in-house developments whenever feasible. I doubt it has ever been the case that you could save money by building ANY software that someone else was willing to sell you. Only by claiming that no other software has the appropriate functionality (or that you have fun building software systems) can duplicative, or nearly duplicative, system development be justified.

In principle, cooperation among academic systems may provide a path to salvation by assembling one "super" freeware system, supported and shared by many. There are a very few examples (e.g., Linux, GNU software, TeX), but I doubt that the academics in this field will buy into such a shared responsibility for a system. It hasn't happened in the past. (There is a free GNU-like CAS called JACAL, which is hardly ever mentioned. I have built another CAS, MockMMA, and given it out free; it is used by a few people. Few people bought into the US-government-funded "free" Macsyma, perhaps because it was under a cloud of strange licenses for so long, or perhaps because it was too complex.)

Part III. Any hope for improvement?

There is a hope that a network-based computational community will grow: that we will each provide expert solvers of some sort and depend on others' experts. Rather than trading software, we will be trading internet sockets. Rather than porting software, we will worry about common protocols. I recently put up an integration table-lookup server at Berkeley (http://http.cs.berkeley.edu/~fateman/htest.html). Other kinds of servers are possible. Not everything makes sense to do in this way: for example, I would not willingly provide you free computer time to do arbitrary computations on my facilities, and certain consequences follow from such economic considerations. I would generally be pleased to assist others in downloading code to their home computer, though this is more easily said than done.
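To make "trading internet sockets" concrete, here is a minimal sketch of such an expert server in Lisp. This is not the Berkeley server's actual interface: the three-entry table and the one-s-expression-per-connection protocol are invented for illustration, and the usocket portability library assumed here (loadable via (ql:quickload "usocket")) is a modern convenience.

  ;; A tiny "integration server": read one request per connection,
  ;; look it up in the table, and write back the result.
  (defvar *integral-table*
    '(((sin x)    . (- (cos x)))
      ((cos x)    . (sin x))
      ((expt x 2) . (/ (expt x 3) 3))))

  (defun serve-integrals (port)
    (let ((listener (usocket:socket-listen "127.0.0.1" port
                                           :reuse-address t)))
      (unwind-protect
          (loop
            (let* ((connection (usocket:socket-accept listener))
                   (stream (usocket:socket-stream connection)))
              (unwind-protect
                  (let ((request (read stream)))  ; client sends e.g. (sin x)
                    (print (or (cdr (assoc request *integral-table*
                                           :test #'equal))
                               'lookup-failed)
                           stream)
                    (force-output stream))
                (usocket:socket-close connection))))
        (usocket:socket-close listener))))

A client is symmetric: connect, print the request s-expression, read the reply. The interesting engineering question is exactly the one raised above: agreeing on the protocol, that is, a common external representation for mathematical expressions.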
I am optimistic that growth in the use of symbolic mathematical systems in engineering problem solving will make engineers and scientists more productive. There are even situations in which artificial intelligence techniques and graphics will assist. Frameworks for programming and problem solving that combine techniques are extremely important, which is one reason I am a fan of Lisp: a language which has improved in its capabilities for system building (at no cost to me), has superb facilities for supporting large and growing software in AI, graphics, and mathematics, and can serve as a scripting language for code that resides in other languages. I am discouraged by the apparent need for each group of people to come up with its own surface language, incompatible with all others, and to make such a big fuss over these appearances. I would like to think that computer algebra systems can communicate with engineering design systems at a higher level than by re-parsing each other's command strings. My work at Berkeley has relied on the synergy between Lisp-based systems written by different authors, and on the ability of Lisp to call out to programs written in C, Fortran, or other languages, so long as an application program interface is well defined (a tiny sketch of such a call-out appears at the end of this article).

I think that representing more applicable mathematics is a critical bottleneck. I consider the emphasis on category theory a path toward understanding systems, but in no way a solution to serious mathematical representation. At its current level of sophistication in computer systems, algebra does not explain analysis or physics. It is not clear that a useful path from one to the other can be built, and we may inevitably have two separated facilities.

The "applications" of computers to "pure math" generally leave me cold. Furthermore, I am concerned when I see system builders tout such applications as reasons for continued financial support. If one's project is to survive financially, it cannot be by redirecting 5% of the money from "pure math" into "computer algebra"; it must be by redirecting some of the much larger quantities of money from real applications. These might include, for example, computer-aided numerical and graphical design, software development and verification, and the like.
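Finally, the promised sketch of the Lisp-to-C call-out. The binding below uses the CFFI library (loadable via (ql:quickload "cffi")); that choice is for illustration only and postdates this article, since the Lisps of the day each had a vendor-specific foreign function interface:

  ;; Declare the application program interface once: bind the C
  ;; library's cos(3) as an ordinary Lisp function.
  (cffi:defcfun ("cos" c-cos) :double
    (x :double))

  ;; (c-cos 0.0d0)  =>  1.0d0

Once the interface is declared, the foreign function is called like any Lisp function; Fortran routines are reached the same way, modulo naming and array conventions.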