Inaugural lecture by Professor Herman Venter
Department of Computer Science and Information Systems
University of Port Elizabeth, South Africa
About five years from now, the average personal computer buyer will be able to walk into a mass retail store and buy a computer capable of comprehensive voice recognition. Furthermore, it seems quite likely that within the next twenty years we shall routinely talk to our computers as if they were human. This will become all the easier because computers as we know them today will disappear: We shall be surrounded by so many computers that they will become practically invisible to us.
It is interesting to contemplate the possible effects that ubiquitous, talking computers could have on our lives. Computer programmers will probably find that creating state-of-the-art computer programs is even more difficult than it is now. Certainly, a very different set of skills will be required of future computer programmers. I doubt whether many people have a good idea of what these skills are and how our students are to acquire them. At best we can goad our students into becoming independent thinkers and learners. Unfortunately, this is not only very difficult, but also not sufficient.
Clearly, our students (and most others) have to continue their education throughout their careers. There is at present a sharp distinction between the institutions that prepare people for careers and those that help them to continue their education during their careers. I do not think that this distinction is necessary or desirable. Removing this distinction will require universities to undergo drastic changes, but this is on the cards anyway. I shall say more about the changes I expect later on in this lecture.
Another, more radical thought is that we could be entering a future in which written languages have largely disappeared. At first glance, the idea is preposterous and even horrifying. The creation of written languages is perhaps the crowning achievement of our entire civilization. The extension of literacy to the masses characterises the modern era. It is hard to imagine an education system that does not involve reading and writing.
Nevertheless, one cannot help but notice that in our century technology has already caused a significant drop in literacy among the privileged classes. Very few of us now take the trouble to write letters to friends and family. Instead, we just pick up the telephone and talk to them. Increasingly, we lounge in front of the television instead of reading books, magazines or newspapers.
Granted, we are still surrounded by writing. Basic literacy seems to be an essential survival skill in Western societies. But we should not forget that a mere two centuries ago, the vast majority of humans were totally illiterate. There are but few uses of writing that could not ultimately be dispensed with. It is perhaps rash to assume that writing will remain in widespread use, merely because we are currently so used to it. The interesting question, really, is which uses of writing are indispensable.
For example, consider this lecture - I find it hard to imagine how I could have made it into a coherent whole without resorting to written text. Even if I could have dictated it to a computer with extensive speech editing facilities, I would have found the absence of a visual representation to be a major impediment.
Of course, if I had grown up in a world without writing, I might have managed. After all, societies without writing have built up and maintained extensive oral cultures. It seems to me, however, that the act of expressing one's thoughts in writing is of immense value: because written languages are more formal than spoken languages, writing one's thoughts down forces one to make them precise.
This should not be news to scientists. How could one possibly understand Quantum Mechanics or General Relativity without the use of equations and the immense machinery of formal notations built up over the centuries? The development of thought through the centuries is intricately linked to the development of formal notations in which we not only record our thoughts, but form them.
For me, the most exciting development in the recent history of thought is the development of programming languages. Programming languages predate computers. In the first half of this century mathematicians developed fundamental notions such as the Lambda Calculus, rewriting systems and Turing machines in order to better understand the increasingly important process of computation.
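To give a flavour of what such a notation makes possible: the Lambda Calculus expresses all computation using nothing but function definition and function application. The fragment below is a standard illustration (the Church numerals), rendered here in Python rather than in the original notation so that it can actually be run:

    # Church numerals: the number n is the function that applies f to x, n times.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))               # n + 1
    add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))   # m + n

    to_int = lambda n: n(lambda k: k + 1)(0)   # convert back, for display only
    one = succ(zero)
    two = succ(one)
    print(to_int(add(one)(two)))               # prints 3

Conditionals, data structures and recursion can all be encoded in the same way, which is why this tiny notation suffices as a model of all computation.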
In the second half of this century, the invention of computers led to the formation of a new community of people who are inventing ever more practical notations for formulating and expressing our understanding of computation. This community has remained distinct from the mathematicians who are concerned with programming languages, even though the latter now also call themselves Computer Scientists.
I find myself in the company of the non-mathematicians: For the last eight years, I have been concerned with the design and implementation of practical programming languages. I did not plan this - I was initially more interested in using programming languages than in creating them. However, by the time that I became an academic and no longer had to write only the programs that my boss told me to write, I had already become frustrated enough with the tools at hand to formulate two conjectures that I wished to prove:
1. that programming languages could be improved by integrating the Functional Data Model with a general-purpose programming language; and
2. that a high level language could be implemented more easily by systematically extending an intermediate level language than by using classical compiler construction techniques.
I had some ideas on how things could be improved and hoped to explore these, improve the tools at hand and then move on to other programming problems.
My conjectures were safe enough to make and have been proved correct not only by myself but by many others. However, my hope to jump in, improve the tools and get out, has proved to be naïve. I am still working on the problems of programming language design and implementation and expect to keep working on this for many years to come.
My first attempt at programming language design started as an attempt to integrate the Functional Data Model with the Ada programming language (to prove conjecture 1). This required extensions that were beyond the capabilities of Ada's abstraction constructs, which meant that a new Ada compiler had to be written. It did not take me long to comprehend the hopelessness of such an endeavour. I then set out to construct the smallest usable subset of Ada. One thing led to another, and soon I had designed my own language, using the Functional Data Model as a basis.
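For those unfamiliar with it, the Functional Data Model (as in Shipman's DAPLEX) views data as entities and as functions over entities, rather than as records or tables. The following rough sketch, in Python and with invented entities and functions (it is not the actual language), conveys the flavour:

    # Entities are opaque objects; everything known about them is recorded in
    # functions (modelled here as dictionaries) from entities to values.
    class Student: pass
    class Instructor: pass

    name = {}      # name    : Student or Instructor -> string
    advisor = {}   # advisor : Student -> Instructor

    ann, prof = Student(), Instructor()
    name[ann], name[prof] = "Ann", "Prof. X"
    advisor[ann] = prof

    # New functions are derived by composing existing ones.
    def advisor_name(student):
        return name[advisor[student]]

    print(advisor_name(ann))   # prints Prof. X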
I then set out to implement this language. I already had considerable experience in implementing intermediate level languages (super assemblers). Such languages are easy to implement and provide much of the functionality of high level languages. I conjectured that I needed to start with an intermediate level language and provide a mechanism to extend it. Systematically extending an intermediate level language until it becomes a high level language seemed easier than implementing the high level language using classical compiler construction techniques (conjecture 2).
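As a minimal sketch of what conjecture 2 amounts to, consider an invented three-instruction intermediate language (this is an illustration, not the actual system). The extension mechanism below is a rewriter that turns a high level WHILE construct into the labels and jumps the base language already provides:

    def expand_while(lines):
        # Rewrite WHILE <variable> ... END blocks into LABEL/JUMPIFNOT/JUMP
        # instructions; every other line passes through unchanged.
        out, stack, n = [], [], 0
        for line in lines:
            parts = line.split()
            if parts and parts[0] == "WHILE":
                top, done = f"while{n}", f"endwhile{n}"
                n += 1
                stack.append((top, done))
                out += [f"LABEL {top}", f"JUMPIFNOT {parts[1]} {done}"]
            elif parts and parts[0] == "END":
                top, done = stack.pop()
                out += [f"JUMP {top}", f"LABEL {done}"]
            else:
                out.append(line)
        return out

    print("\n".join(expand_while(["WHILE running", "  CALL step", "END"])))

Each such rewrite adds one high level construct while leaving the base language untouched; applied often enough, the intermediate level language grows into a high level one.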
By that time, however, I had come across a very high level language called Icon, and realised that I could easily transliterate a large subset of my language to Icon. I quickly constructed an Icon program to do this automatically, with the result that my language had some sort of implementation. I then resolved to use my own language for all further programming. It did not take long before I could not bear to write programs in any other language.
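The transliteration itself can be surprisingly simple when the source language and the host language are close in spirit. In the following toy example both languages are invented and the host is Python (the original translator was written in, and targeted, Icon):

    import re

    # One rewrite rule per construct of the toy source language.
    RULES = [
        (re.compile(r"^let (\w+) := (.+)$"), r"\1 = \2"),    # assignment
        (re.compile(r"^show (.+)$"), r"print(\1)"),          # output
    ]

    def transliterate(line):
        for pattern, replacement in RULES:
            if pattern.match(line):
                return pattern.sub(replacement, line)
        raise SyntaxError("no rule for: " + line)

    source = ["let x := 6 * 7", "show x"]
    exec("\n".join(transliterate(line) for line in source))  # prints 42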
However, this meant that my new language actually had to be useful. Creating a language that is actually useful is a very ambitious project, much more so than I initially realised. I soon found myself spending an inordinate amount of time tinkering with the design of my language. Along the way, my ideas on how to implement the language went through many twists and turns and became much more ambitious as well. As a result, I have been furiously programming away for the past eight years, producing a useful language and a useful way of generating compilers, but producing precious little in the way of classical research results.
I make no apology for this: it is right and proper that at least some academics should be able to put aside the norm of publishing a paper every few months and spend a decade or two on tackling really ambitious projects. I did not make a single conscious decision to undertake such high risk research, but took a small careless step down a slippery slope. Nevertheless, I do not regret that step.
I have learnt a lot in the past eight years. The most important thing, perhaps, is that I now need to start over again. Computer technology continues to improve at an astonishing rate and this has had a large impact on how we program computers. I have also learnt that my language is not quite simple enough.
I am now designing a new programming language. This time I am not going it alone, since I can now take some colleagues and post-graduate students along with me. It promises to be an exciting adventure.
I doubt if anyone here has not heard of the Internet (or World Wide Web). If you have not, then you surely have been sleeping like Rip van Winkle for the past three years or so.
I find that I use the Internet as my primary source of information. The information content of the Internet is growing exponentially. Furthermore, a number of full-text indices cover almost every document available on the Internet, which makes it particularly easy to find information. I would be but slightly inconvenienced if the library building burnt down tomorrow and all the staff decamped. If I were to lose my Internet connection, however, I would be greatly inconvenienced.
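The machinery at the heart of such a full-text index is remarkably simple: an inverted index that maps every word to the set of documents containing it, so that queries reduce to set operations. A minimal sketch, with made-up documents:

    from collections import defaultdict

    documents = {
        "lecture": "programming languages predate computers",
        "news": "the internet is growing exponentially",
    }

    index = defaultdict(set)
    for name, text in documents.items():
        for word in text.lower().split():
            index[word].add(name)   # word -> documents containing it

    print(sorted(index["computers"]))   # ['lecture']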
It will not be many years before the majority of my fellow academics find themselves in a similar situation. This has major implications for our future. The first, and most obvious problem is that we shall need ever more capacity for our Internet service. This is going to cost a lot of money and the money will have to be found by cutting back something else. This is bound to lead to furious fights to protect existing empires.
We already find that the demand for Internet usage far outstrips the University's capacity to provide it. So much so, that the service is close to unusable. The response to this crisis has been disappointing, if predictable. We are now heading down the road of accounting for every byte sent or received over the network and charging this against budgets that are unlikely to grow as rapidly as demand. This is the model that was used to control the usage of mainframe computers in the bad old days. Progress in Computer Science (and other fields) was held up enormously until the advent of computers so cheap that processor time became essentially free. Had the pay-per-use model been applied to the early adopters of the Internet, it is unlikely that many of the most dramatic and unforeseen developments on the Internet would have occurred.
While the pay-per-use policy represents an unwelcome setback that will retard the exploitation of the Internet by South African universities, it is not a serious problem because it will be temporary. It is widely expected that networks will vastly increase their capacities and that costs will plummet. South Africa has no choice but to conform to this trend, or drop out of the global economy.
A far more serious problem, for universities, is the fact that the information content of the Internet is subsuming the functions of the university library. On the one hand, this represents an indispensable reduction in costs and a welcome increase in utility. On the other hand, it means that one of the central reasons for the existence of a university has disappeared.
Eli Noam of Columbia University has written a marvellous article on this topic, in which he states the following [1]: "Scholarly activity, viewed dispassionately, consists primarily of three elements: (i) the creation of knowledge and evaluation of its validity; (ii) the preservation of information; and (iii) the transmission of this information to others. Accomplishing each of these functions is based on a set of technologies and economics. Together with history and politics, they give rise to a set of institutions. Change the technology and economics, and the institutions must change eventually."
Few people will dispute the need for universities to change. We live in an age of rapid change driven by advances in technology. So what is the problem?
Consider the following statement by Peter Denning of George Mason University [2]: "Four assumptions lie behind our historical conception of a university: the library; a community of scholars, formed around the library, drawing on each other's knowledge in different disciplines; teachers working with small groups of students; and a period of schooling that helps one to transform from adolescent to adult and grants a credential for entering work."
The problem is that technology is undermining these assumptions: the Internet is subsuming the functions of the library; scholars can now draw on each other's knowledge across the globe without gathering around a physical library; and computer technology is beginning to substitute for teachers working with small groups of students.
In short, our present day universities need to be totally reinvented. Universities that cling to the historical conception of what a university is will sooner or later collapse and disappear.
I foresee that at least two very different types of institutions will emerge in the future. Undoubtedly, there will be global institutions offering pre-packaged courses that will be created as expensively and expertly as motion pictures are now created by Hollywood. These courses will make heavy use of computer technology and will very likely provide much better and much cheaper higher education to the masses.
Computer Aided Learning has been oversold in the past, and still is, but to sit back and assume that it will never overtake and obliterate the classical academic course is to take the same attitude as many of the carriage makers took when first confronted with expensive, smelly, noisy, slow and wholly unreliable motor cars.
It is possible that many existing universities will become incorporated into these global institutions. However, this is not necessarily a desirable outcome. Global institutions of higher learning will behave like all global companies. The research and development work will be concentrated in a few centres of excellence and the rest of the company will be given over to marketing and customer service. The resulting environment might be attractive to teachers, but it will have little charm for scholars. If the McGraw-Hill or Addison-Wesley Global University of the future takes over UPE, I hope to be one of the first academics to leave.
In complete contrast to such global purveyors of pre-packaged higher education, we shall hopefully find institutions that specialize in research. In such institutions, scholars will be drawn together by the availability of expensive equipment and the presence of other scholars who have specialised in the same areas. The only students likely to be around will be post-graduate research students. Such institutions will necessarily be elitist and very expensive to run. They can only exist if both government and industry find it worthwhile to support them.
It is not difficult to dream up institutions that lie in between these two extremes. It is comforting to believe that there will be niche areas that smaller universities such as UPE can find and exploit. I fear, however, that there may be no middle ground. Eventually, UPE will have to disappear, or it will have to find a way to live by research alone.
While I have no doubt that UPE will have to strive mightily to become a post-graduate research institution, or disappear, it also seems obvious to me that we cannot hope to take a direct route. There is a great need to provide higher education to the masses as soon as possible and society expects UPE to play a role in the provision of such education. Eventually, it is very likely that a global institution will push UPE out of this role, but until that happens it is imperative that we play it as best we can.
The tricky part is to provide mass education, without receiving more resources from the state, while at the same time becoming better at research. Pulling this off will be no mean feat. I am thankful that it is not my job to lead such an effort. There are welcome signs that people in leadership positions realise that the present situation is untenable and that we need drastic changes. I am greatly concerned, however, that bodies such as Senate continue to operate as if there is no crisis and as if everything that comes our way can be handled with minor adjustments to this and that.
When I look ahead to the point where we are pushed out of the undergraduate and continuing education markets, it appears to me that the strongest position to be in will be to have a largely separate division of the University, staffed by teachers, successfully utilising computer technology and distance education models, and possibly operating on a separate campus. This division should then be sold off at a handsome price, with the proceeds becoming part of the University's endowment. What remains of the University should concentrate on research and should already be established as a world leader in a few, highly specialised areas.
To get to this point, we shall need to totally rethink the way in which we deliver education to our students. We shall also have to make many hard choices. I doubt whether all departments could be accommodated in the eventual research institution. Those that survive will first have had to become much more focused and much better at their research.
I am going to do my best to see to it that Computer Science is one of the departments that will survive. To do so, it will have to become a world leader in some significant area of research specialisation. It is with this in mind that I am persisting with the very ambitious goal of creating a world-beating programming language.
I thank the University for giving me the opportunity to take these risks and to pursue such ambitions. I hope to succeed and I hope to be a small part of the overall success of a new UPE.