agent (ay-jent) n. 1. a person who does something or instigates some activity. 2. one who acts on behalf of another. 3. something that produces an effect or change.

Mike Wooldridge and Nick Jennings give the following list [which we have annotated with relevant Part numbers] in an article titled "Intelligent Agents: Theory and Practice" from Knowledge Engineering Review, Vol. 10, No. 2, 1995:
William J. Rapaport says:
Bertrand Russell was not responsible for logical positivism. Rather, he was responsible for logical atomism, which is unrelated. He was also, with G.E. Moore, responsible for analytic (as opposed to "synthetic" or idealistic) philosophy, which ultimately led to logical positivism. Logical positivism, however, originated with the Vienna Circle (Carnap, et al.), inspired by Wittgenstein; its most well-known "popularizer" was A.J. Ayer.
Doug Edwards says:
The term "logical positivism" is vague, and has varying senses with varying levels of vagueness. In its narrowest and most precise sense, it's true that it originated in the Vienna Circle, and if it could be said to have any one founder, that person would be Rudolf Carnap--not Moore, Russell, or Wittgenstein. I do believe it's going a bit far to claim that Russell's logical atomism is "unrelated" to logical positivism; I think Russell has almost as much claim as Wittgenstein to have "inspired" logical positivism. The most important point is that in this most precise sense of "logical positivism", Wittgenstein was not a logical positivist himself, however much he may have "inspired" logical positivism. The positivists condemned "metaphysical" statements as misleading, in that such statements appeared to have meaning but were in reality utterly devoid of it. Wittgenstein's Tractatus, by contrast, held that such statements could have a "higher" meaning not expressible by propositions.
CLB: To what extent have you ever followed developments in artificial intelligence? The third program you ever wrote was a tic-tac-toe program that learned from its errors, and Stanford has been one of the leading institutions for AI research...
Knuth: Well, AI interacts a lot with Volume IV; AI researchers use the combinatorial techniques that I'm studying, so there is a lot of literature there that is quite relevant. My job is to compare the AI literature with what came out of the electrical engineering community, and other disciplines; each community has had a slightly different way of approaching the problems. I'm trying to read these things and take out the jargon and unify the ideas. The hardest applications and most challenging problems, throughout many years of computer history, have been in artificial intelligence--AI has been the most fruitful source of techniques in computer science. It led to many important advances, like data structures and list processing... artificial intelligence has been a great stimulation. Many of the best paradigms for debugging and for getting software going, all of the symbolic algebra systems that were built, early studies of computer graphics and computer vision, etc., all had very strong roots in artificial intelligence.
CLB: So you're not one of those who deprecates what was done in that area...
Knuth: No, no. What happened is that a lot of people believed that AI was going to be the panacea. It's like some company makes only a 15% profit, when the analysts were predicting 18%, and the stock drops. It was just the clash of expectations, to have inflated ideas that one paradigm would solve everything. It's probably true with all of the things that are flashy now; people will realize that they aren't the total answer. A lot of problems are so hard that we're never going to find a real great solution to them. People are disappointed when they don't find the Fountain of Youth...
CLB: If you were a soon-to-graduate college senior or Ph.D. and you didn't have any "baggage", what kind of research would you want to do? Or would you even choose research again?
Knuth: I think the most exciting computer research now is partly in robotics, and partly in applications to biochemistry. Robotics, for example, that's terrific. Making devices that actually move around and communicate with each other. Stanford has a big robotics lab now, and our plan is for a new building that will have a hundred robots walking the corridors, to stimulate the students. It'll be two or three years until we move into the building. Just seeing robots there, you'll think of neat projects. These projects also suggest a lot of good mathematical and theoretical questions. And high-level graphical tools, there's a tremendous amount of great stuff in that area too. Yeah, I'd love to do that... only one life, you know, but...
Loveland, D.W. (1970). A linear format for resolution. Symposium on Automatic Demonstration, Lecture Notes in Math. 125, Springer-Verlag, Berlin, pp. 147-162.

This is a confusion in the literature that few sources have correct. Since I think your text will be around for a long time, it is a good place to get this entered correctly. The paper you quote, Mechanical Theorem Proving by Model Elimination, introduces a procedure that is linear, but the concept (and that mode of presentation) had not yet been introduced. It is not a resolution procedure, technically. It actually is a significant paper, as it is the form used in SL-resolution (Kowalski and Kuehner) that led to Prolog. Thus it would be correct to reference it in the Prolog chapter as a key paper in the theoretical background of Prolog. Indeed, it is receiving much attention now as an extension of Prolog that can use the same (WAM) architecture yet is complete for all of first-order logic. (A twist on history, as it was part of the prehistory of Prolog.) In modern language, the elegance and power of the WAM architecture is possible because Prolog is an adaptation of linear input resolution. Model Elimination also has the linear input format, but is complete for all of first-order logic. You have captured the ideas of input and linear resolution well on page 285; they are important concepts because of Prolog, yet often are omitted or presented incorrectly in basic AI texts, even those dealing with Prolog.
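A minimal sketch may help pin down the idea of linear input resolution that makes Prolog (and the WAM) possible: the current goal clause is always resolved against an input clause from the original program, never against a derived clause. The propositional Horn-clause prover below is illustrative only; the function name sld_refute and the toy program are our own inventions, not anything from Loveland's paper or the book.

    # A minimal sketch of linear *input* resolution on propositional Horn
    # clauses -- the restriction under which Prolog's SLD resolution works.
    # Each program clause is (head, [body atoms]); facts have empty bodies.

    def sld_refute(program, goals, depth=20):
        """Return True if the goal list can be refuted (i.e. proved) by
        resolving the current goal clause only against input clauses."""
        if not goals:              # empty clause derived: proof found
            return True
        if depth == 0:             # crude guard against looping programs
            return False
        first, rest = goals[0], goals[1:]
        for head, body in program:     # only input clauses used: linear input
            if head == first and sld_refute(program, body + rest, depth - 1):
                return True
        return False

    # Toy propositionalized program: man => mortal, and the fact man.
    program = [
        ("mortal", ["man"]),
        ("man", []),
    ]
    print(sld_refute(program, ["mortal"]))   # => True

The depth bound stands in for Prolog's (unbounded, and hence incomplete) depth-first search; Model Elimination keeps this same linear input shape while remaining complete for full first-order logic.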
TD-Gammon has definitely come into its own. There is no question in my mind that its positional judgement is far better than mine ... In the more complex positions, TD has a definite edge. In particular, its judgement on bold vs. safe play decisions, which is what backgammon is really about, is nothing short of phenomenal.

I believe this is true evidence that you have accomplished what you intended to achieve with the neural network approach. Instead of a dumb machine that can calculate things much faster than humans, such as the chess-playing computers, you have built a smart machine that learns from its experiences pretty much the same way humans do.
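For readers curious about the learning rule at work, here is a minimal tabular TD(0) sketch. TD-Gammon itself used TD(lambda) with a neural-network evaluator trained by self-play, so the dictionary value table, state names, and rewards below are simplifying assumptions for illustration; only the temporal-difference error term matches the real method.

    # Tabular TD(0): move V(s) toward the bootstrapped target r + gamma*V(s').
    # TD-Gammon used TD(lambda) with a neural net; this is a toy stand-in.

    ALPHA, GAMMA = 0.1, 1.0      # learning rate; no discounting in an episodic game
    V = {}                       # state -> estimated value

    def td0_update(state, reward, next_state):
        v, v_next = V.get(state, 0.0), V.get(next_state, 0.0)
        V[state] = v + ALPHA * (reward + GAMMA * v_next - v)

    # One hypothetical self-play episode: only the final move is rewarded.
    episode = [("s0", 0.0, "s1"), ("s1", 0.0, "s2"), ("s2", 1.0, "end")]
    for s, r, s_next in episode:
        td0_update(s, r, s_next)
    print(V)

Repeated over many self-play games, the reward at the end of each game propagates backward through the value estimates, which is "learning from its experiences" in exactly the sense praised above.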
In 1972, I took a grad course in philosophy of language from John Tienson at Indiana University. In that course, he presented the sentence:

Dogs dogs dog dog dogs.

which is grammatical and meaningful, if not acceptable, with no punctuation changes, having, of course, the same syntactic structure as:

Mice cats chase eat cheese.

Finding the "-s" morpheme unaesthetic, several of us grad students sought something better.

Fish fish fish fish fish

doesn't quite hack it, since "fish" requires an indirect object: one fishes *for* something. At that point, I came up with the Buffalo sentence.
Bob Berwick gives his recollections:

Well, hard to tell about these urban legends, you know. I just recall reading about it when I was 10 or something years old--before 1972, to be sure. Then Ed Barton and I sat around discussing it in 1982, and we just thought it was part of common parlance (or urban legend) by then also. Even in the Police police police form.

And Andrew Philpot adds an anecdote:

For a very hilarious take on all this, you might want to [look at one of Carl de Marcken's] famous "friday afternoon GSB" abstracts--in his abstract, he works out the exact algebraic formula for any number of buffaloes, as a joke, etc.
I was recently explaining this exercise (22.8) in your book to a friend. He was amused, but didn't seem to think that there was much point in a language which could only talk about buffalo. Au contraire! Later on in the weekend, we went to Buffalo Bill's (a microbrewery in Hayward), decorated with pictures and skulls of (you guessed it) buffalo. As we watched the Univ. of Colorado Buffaloes football game on T.V., and contemplated heading up to Bison brewery in Berkeley, I could legitimately turn to him, gesturing forcefully, and make meaningful sentences in Buffalo^n. Too bad a bewildered Bills fan didn't walk in. That would have been priceless. BTW, a great book.
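To make the Buffalo^n language concrete, here is a toy CYK chart parser that counts parses of "buffalo" repeated n times. The CNF grammar below (city modifier A, noun N, transitive verb V, reduced relative clause RC) is one plausible reconstruction of the exercise, not the book's official solution, and all identifiers are our own.

    # Count parses of "buffalo"^n under a small CNF grammar where the one
    # word is three-ways ambiguous: city modifier (A), noun (N), verb (V).

    UNARY = {"buffalo": ["A", "N", "V"]}
    BINARY = [                    # X -> Y Z
        ("S",  "NP", "VP"),       # sentence
        ("NP", "A",  "N"),        # Buffalo(city) buffalo(animal)
        ("NP", "NP", "RC"),       # buffalo [that] buffalo buffalo
        ("RC", "NP", "V"),        # reduced relative clause
        ("VP", "V",  "NP"),       # buffalo(verb) buffalo
    ]

    def count_parses(n, start="S"):
        words = ["buffalo"] * n
        # chart[i][j][X] = number of ways X derives words[i:j]
        chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):
            cell = chart[i][i + 1]
            for x in UNARY[w]:
                cell[x] = cell.get(x, 0) + 1
            cell["NP"] = cell.get("NP", 0) + 1   # unit production NP -> N
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for x, y, z in BINARY:
                        c = chart[i][k].get(y, 0) * chart[k][j].get(z, 0)
                        if c:
                            chart[i][j][x] = chart[i][j].get(x, 0) + c
        return chart[0][n].get(start, 0)

    for n in range(3, 9):
        print(n, count_parses(n))

Counting chart entries instead of enumerating trees is what makes an "exact algebraic formula for any number of buffaloes," in de Marcken's phrase, plausible: the counts obey a fixed recurrence.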
Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., and Shmoys, D.B. (1992). The Traveling Salesman Problem. Wiley Interscience.
Etzioni, Oren (1993). Intelligence without Robots (A Reply to Brooks). AI Magazine, 14(4).