Abstracts for Four Papers I'm Gonna Write Someday

Brian Harvey
University of California, Berkeley

Copyright (C) 1989 by Brian Harvey. Permission is hereby granted to anyone to reproduce this paper, provided that it is reproduced in its entirety, without editing, and including this notice.

I. A Professional Ethics Course Wouldn't Have Helped Robert Morris

In recent months everyone with an axe to grind has been using the November Internet Worm as a grindstone. Security freaks call for more security; freedom freaks call for less security; decentralists call for less reliance on computer systems; antimilitarists call for less military reliance on computers; manufacturers of security devices call for people to buy their products. Regrettably, some people at CPSR meetings are jumping on the bandwagon and using this incident to argue for professional ethics classes.

I'm all for professional ethics classes. But I don't think such a class at Cornell would have prevented this incident. Such classes should, and generally do, examine issues that are morally difficult for professionals working in the field. What would an ethics course for computer scientists be about? A major focus, for example, would probably be the extent of military funding of computer science research. This is a question of real importance not only for working professionals but for the graduate students who would be enrolled in the course, who may not like working on weapons research but who do want research assistantships. If 80% of all computer science research is funded by the DoD, this poses a problem for antiwar computer scientists.

Would a computer science ethics course deal with privacy? Probably, but I hope not at the level of simple exhortations to respect it. If I were teaching such a course, I'd begin by calling the assumptions about privacy into question. For example, let's say the police want to build a spiffy data base system to keep track of criminals. What's so bad about that? I do think it's bad, often, but I don't think it's obvious why. I think that the answer requires a lot of specific historical knowledge about the political role of the police in the United States, and the recurring real abuses of police power.

If you asked Robert Morris whether computer professionals should respect people's privacy, I bet he'd say yes, sincerely. He would then go on to say that the Internet worm wasn't an invasion of privacy, but "just a joke." I propose to take this claim seriously. I argue that the relevant ethical issue is this: Playing practical jokes on one's friends is different from playing practical jokes on strangers. It's not that one is always okay and the other never okay, but the standards are different. Practical jokes are about trust and testing trust. The degree of trust one can expect from friends is higher than the degree it's reasonable to expect from strangers. This would be a terrific issue to raise in an ethics class for 12-year-olds. (I'm not being sarcastic; when I was a 12-year-old I attended a school with required ethics classes.) It's unlikely that a teacher of graduate students would think to raise it.

I believe it is a serious problem in our society that adolescence commonly lasts into the mid-20s and beyond. The reasons have to do with a lack of serious adult values, the commercial glorification of youth, a tight economy in which adult life often truly is bleak and joyless, state-sponsored lotteries, and many other things. Professional ethics classes, though, do not address this problem.

II. Moral Dilemmas Are Not Ethics

The model for professional ethics courses is medical ethics courses. The latter often revolve around dilemmas, that is, around issues that are genuinely controversial among honest, well-motivated doctors. Abortion, euthanasia, whether to offer an honest diagnosis if you think it'll hurt the patient's health: all of these questions, in which life and death are literally at stake, are no easier for ethical philosophers than for medical practitioners.

The purpose of a medical ethics course is not to encourage doctors to be ethical. That is taken for granted, as a precondition of the course. Nor is the purpose of the course to call attention to obscure ethical questions. Every medical student knows about these questions, as does everyone who reads newspapers. The purpose of the course is to provide the students with knowledge of the range of arguments that have been made about the difficult questions, so that they do not begin their careers with one-sided views out of ignorance of alternatives.

In computer science our situation is not like that of the medical profession. Among our colleagues the very idea of social responsibility is open to question. "First, do no harm" is not controversial among doctors, but some computer programmers are perfectly comfortable building the tools for arbitrageurs and other social parasites. "Suppose your employer orders you to release a product known to have bugs because the deadline is approaching..." This is an ethical dilemma? It wouldn't be, in a profession with a sense of ethics.

The medical ethics course is useful as an adjunct to the real ethical education of medical students, which happens in hospital wards. Everyone involved understands that the course is an adjunct. Everyone understands that ethics is about empathy, human respect, and courage more than it's about intellectual resolution of moral puzzles.

In computer science, solving puzzles is central to our work. It is all too easy to see social responsibility as just another kind of puzzle, to be solved by the same techniques of formal reasoning we use with other puzzles. A dilemma-based computer ethics course too easily lets us off the hook. Instead, our ethics courses must be about ethics! That is, they must force students to confront the existence of good and evil, to choose between selfishness and community spirit. Very few computer scientists explicitly choose evil, but many prefer to pretend that there is no choice to make.

III. There Is Nothing that Everyone Needs to Know about Computers

I have been arguing for several years with people who believe that to be employable, one must be "computer literate" -- skilled in some aspect or other of computer use. In the context of social responsibility there seems to be a different argument, asserting that one cannot be an effective citizen in a democracy without a technical understanding of the political issues involving computers. How will people know which way to vote on Star Wars if they don't understand programming methodology?

This version of the "computer literacy" argument is also nonsense. It's a losing battle. Computers are not the only technology that comes to the attention of voters. Freon, oil spills, nuclear power, genetic engineering, the prime interest rate, the use of standardized tests that may or may not discriminate against some group in college admissions, research on animals, potential AIDS drugs, biochemical versus psychodynamic approaches to mental illness, teaching foreign-born students in English or in their native languages, what the Founding Fathers really meant about bearing arms: are the voters to be "literate" about all of these?

How, in fact, do I decide to believe the scientists who tell me that people evolved from animals, and not the ones who tell me that nuclear power is safe? I have no technical knowledge about either issue. Suppose I were forced to take "biology literacy" and "nuclear power literacy" courses: how would I decide whether or not to trust the teachers of those courses? The answer is that my beliefs are based on nontechnical aspects of the issues. For example, I know that there is money to be made in nuclear power, but I don't see anyone profiting from the theory of evolution. I know that the supposedly neutral Nuclear Regulatory Commission conspired with the plant owners to withhold information about the Three Mile Island failure; I don't know of any such scandal among evolution theorists. I know that the nuclear power industry got Congress to pass a law limiting its liability in civil damage suits, and I understand what this means about its own confidence in its operations. I know that the spokespeople for evolution include exemplary human beings like Stephen Jay Gould, who also finds time in his schedule to work against racism; those who speak for nuclear power are more likely to be sleazeballs who also argue for nuclear weapons.

What the voters need is "political literacy": knowing how to read the newspaper without technical knowledge of the subject under discussion. They need the intellectual weapon of class analysis. They need the commitment to remember last year's scandals to help them understand this year's. They need the sophistication to understand dialectical tension, in which two contradictory views can both be aspects of the truth, without dissolving into relativism, in which everything and nothing is true.

IV. Ethics Is Learned in the Laboratory

What is the policy about game-playing at your school's computer lab? Some students want to play computer games. Other students (perhaps the same students at another time) want to get their assigned work done. Does some adult facility manager decide the rule? (No games 8am to 11pm, let's say.) Then, do paid adult staff members police the rule? Or are students part of the process of setting the rule and enforcing it?

What happens when a student shows an interest in developing system software? Is s/he encouraged? Given access to source files? Allowed to install the new version for general use? Or informed that students can't be trusted to write software lest it be full of trapdoors?

Is the computer lab always open? Is it closed at night because there's no money for staff to prevent equipment theft? Is there a way students could organize cooperatively to staff the lab? Are they encouraged to do so?

When one student complains about another student violating the privacy of his or her files, how is the issue resolved? (What about faculty or staff violating the privacy of student files? Is that an issue?)

The computer lab is the best place to begin professional education in social responsibility. The crucial point is to build a sense of community. Faculty should be part of this community also, but decisions about things like game policy should be truly democratic. It's the students who face the consequences, and they can understand the issues.

(I guess I am arguing for Carol Gilligan's relationship-based view of moral development, as against Lawrence Kohlberg's rule-based view, which is embodied in the presentation of moral dilemmas in ethics classes.)

www.cs.berkeley.edu/~bh