Perceptrons (book)

Publication date: 1969


Authors: Seymour Papert, Marvin Minsky


Perceptrons: An Introduction to Computational Geometry is a book written by Marvin Minsky and Seymour Papert and published in 1969. An edition with handwritten corrections and additions was released in the early 1970s. An expanded edition was published in 1987, containing a chapter intended to counter the criticisms made of the book in the 1980s.

Contents

The main subject of the book is the perceptron, an important kind of artificial neural network developed in the late 1950s and early 1960s. The main researcher on perceptrons was Frank Rosenblatt, author of the book Principles of Neurodynamics. Rosenblatt and Minsky had known each other since adolescence, having studied a year apart at the Bronx High School of Science. They became at one point central figures in a debate inside the AI research community, and are known to have engaged in heated discussions at conferences. Despite the dispute, the corrected version of the book published after Rosenblatt's death contains a dedication to him.

This book is the center of a long-standing controversy in the study of artificial intelligence. It is claimed that the pessimistic predictions made by the authors were responsible for a misguided change in the direction of AI research, concentrating efforts on so-called "symbolic" systems and contributing to the so-called AI winter. This decision supposedly proved to be unfortunate in the 1980s, when new results showed that the book's predictions were wrong.

The book contains a number of mathematical proofs regarding perceptrons, and while it highlights some of perceptrons' strengths, it also shows some previously unknown limitations. The most important one concerns the computation of certain predicates, such as the XOR function and the important connectedness predicate. The problem of connectedness is illustrated on the awkwardly colored cover of the book, intended to show how humans themselves have difficulty computing this predicate.

The XOR affair

Some critics of the book state that the authors imply that, since a single artificial neuron is incapable of implementing some functions such as the logical XOR function, larger networks also have similar limitations and should therefore be dropped. Later research on three-layered perceptrons showed how to implement such functions, saving the technique from obliteration.
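The critics' premise about a single unit is easy to check numerically. The sketch below (an illustration, not code from any of these books) brute-forces a grid of weights for one linear threshold unit and finds none that reproduces XOR; the real argument is a short four-inequality proof, but the search makes the failure concrete.

```python
import itertools

def unit(w1, w2, b, x1, x2):
    # A single linear threshold unit: fires iff the weighted sum exceeds 0.
    return int(w1 * x1 + w2 * x2 + b > 0)

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Search a grid of weight/bias values. The grid is only illustrative:
# the impossibility holds for all real weights, because XOR's four
# points are not linearly separable in the plane.
grid = [i / 4 for i in range(-20, 21)]
found = any(
    all(unit(w1, w2, b, x1, x2) == y for (x1, x2), y in XOR.items())
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print(found)  # False: no single threshold unit realizes XOR
```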

There are many mistakes in this story. Although a single neuron can in fact compute only a small number of logical predicates, it was widely known that networks of such elements can compute any possible Boolean function. This was known to Warren McCulloch and Walter Pitts, who even proposed how to build a Turing machine out of their formal neurons; it is mentioned in Rosenblatt's book, and even in Perceptrons itself. Minsky also makes extensive use of formal neurons to build simple theoretical computers in his book Computation: Finite and Infinite Machines.
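As a concrete illustration of that point (a sketch with assumed unit names, not code from any of these books), three McCulloch-Pitts-style threshold units arranged in two layers compute XOR: two hidden units detect OR and AND, and the output unit fires only when the first is on and the second is off.

```python
def step(s):
    # Heaviside threshold, the activation of a formal (McCulloch-Pitts) neuron.
    return int(s > 0)

def xor_net(x1, x2):
    # Hidden layer: one unit computes OR, the other AND, of the inputs.
    h_or  = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output unit: fires iff OR is on and AND is off, i.e. exclusive or.
    return step(h_or - h_and - 0.5)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor_net(x1, x2))
```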

What the book does prove is that in three-layered feed-forward perceptrons (with a so-called "hidden" or "intermediary" layer), some predicates cannot be computed unless at least one of the neurons in the first layer (the "intermediary" layer) is connected with a non-null weight to each and every input. This was contrary to the hope of some researchers to rely mostly on networks with a few layers of "local" neurons, each connected to only a small number of inputs. A feed-forward machine with "local" neurons is much easier to build and use than a larger, fully connected neural network, so researchers at the time concentrated on these instead of on more complicated models.
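The kind of construction the theorem points at can be sketched as follows (a hypothetical illustration, not from the book): a network computing parity in which every hidden threshold unit carries a non-null weight to every input, exactly the global connectivity the result says parity requires.

```python
def step(s):
    # Heaviside threshold activation.
    return int(s > 0)

def parity_net(bits):
    # Every hidden unit sees every input (non-null weight to all of them),
    # as the theorem demands for parity. Hidden unit k fires iff at least
    # k+1 of the inputs are on.
    n = len(bits)
    h = [step(sum(bits) - (k + 0.5)) for k in range(n)]
    # Alternating output weights make the output fire exactly when an
    # odd number of inputs is on.
    return step(sum((-1) ** k * h[k] for k in range(n)) - 0.5)

print(parity_net((1, 0, 1)))  # 0: an even number of inputs is on
print(parity_net((1, 1, 1)))  # 1: an odd number of inputs is on
```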

Some other critics, most notably Jordan Pollack, note that what was a small proof that a global property (parity) cannot be detected by local detectors was interpreted by the community as a rather successful attempt to bury the whole idea.

Analysis of the controversy

It is instructive to learn what Minsky and Papert themselves said in the 1970s about the broader implications of their book. On his website Harvey Cohen, a researcher at the MIT AI Lab from 1974, quotes Minsky and Papert in the 1971 Report of Project MAC, directed at funding agencies, on "Gamba networks": "Virtually nothing is known about the computational capabilities of this latter kind of machine. We believe that it can do little more than can a low order perceptron." On the preceding page, Minsky and Papert make clear that "Gamba networks" are networks with hidden layers.

Minsky has compared the book to the fictional Necronomicon in H. P. Lovecraft's tales, a book known to many but read by few. In the expanded edition, the authors discuss the criticism of the book that started in the 1980s, with a new wave of research symbolized by the PDP book.

How Perceptrons was explored first by one group of scientists to drive research in AI in one direction, and then later by a new group in another direction, has been the subject of a peer-reviewed sociological study of scientific development.
