Perceptrons and Pattern Recognition. Artificial Intelligence Memo no. 140. MAC-M-358. Project MAC.

Cambridge, MA: September 1967.

First edition, extremely rare pre-publication issue, of this important early work in Artificial Intelligence (AI), containing the first systematic study of parallelism in computation. It was first published in book form in 1969 as Perceptrons. An Introduction to Computational Geometry (second edition 1987). It “has remained a classical work on threshold automata networks for nearly two decades. It marked a historical turn in artificial intelligence, and it is required reading for anyone who wants to understand the connectionist counterrevolution that is going on today. Artificial-intelligence research, which for a time concentrated on the programming of Von Neumann computers, is swinging back to the idea that intelligence might emerge from the activity of networks of neuron-like entities. Minsky and Papert’s book was the first example of a mathematical analysis carried far enough to show the exact limitations of a class of computing machines that could seriously be considered as models of the brain. Now the new developments in mathematical tools, the recent interest of physicists in the theory of disordered matter, the new insights into and psychological models of how the brain works, and the evolution of fast computers that can simulate networks of automata have given Perceptrons new importance” (from the introduction to the second book edition). This pre-publication issue is extremely rare and seems to be little known. Most discussions of Minsky & Papert’s work refer only to the 1969 book edition. OCLC lists only two copies (Stanford and National Research Council Canada). There appears to be no copy at MIT, where the research was carried out and where this work was published. No copies in auction records.

Some of the earliest work in AI used networks or circuits of connected units to simulate intelligent behaviour. Examples of this kind of work, called ‘connectionism’, include Warren McCulloch and Walter Pitts’s first description of a neural network for logic, and Minsky’s work on the SNARC system. In the late 1950s, most of these approaches were abandoned when researchers began to explore symbolic reasoning as the essence of intelligence. However, one type of connectionist work continued: the study of perceptrons, invented in 1958 by Frank Rosenblatt (1928-71), who kept the field alive with his salesmanship and the sheer force of his personality. The perceptron was a simple neural network: an algorithm that learns to decide whether or not an input belongs to a given class (a ‘binary classifier’). Rosenblatt optimistically predicted that the perceptron “may eventually be able to learn, make decisions, and translate languages”. He had been a schoolmate of Minsky’s at the Bronx High School of Science. Minsky had himself toyed with neural networks (his PhD dissertation concerned them) but had dismissed their worth at the time, so the claims Rosenblatt made for the learning powers of the perceptron were viewed by Minsky with skepticism right from the start.
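
In modern terms, the perceptron’s learning procedure is simple enough to sketch in a few lines of Python. The following is a minimal illustration of the idea only, not Rosenblatt’s original formulation; the function names and the toy data are ours:

    # A perceptron as a binary classifier: a threshold unit whose weights
    # are adjusted in proportion to its classification errors.

    def predict(weights, bias, x):
        """Output 1 if the weighted sum of the inputs exceeds the threshold."""
        return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

    def train(samples, epochs=25, lr=0.1):
        """Perceptron learning rule: nudge the weights toward misclassified inputs."""
        weights, bias = [0.0] * len(samples[0][0]), 0.0
        for _ in range(epochs):
            for x, y in samples:
                error = y - predict(weights, bias, x)
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
        return weights, bias

    # Logical OR is linearly separable, so the rule converges to a correct classifier.
    or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w, b = train(or_data)
    print([predict(w, b, x) for x, _ in or_data])  # -> [0, 1, 1, 1]

For inputs whose two classes can be separated by a line (in general, a hyperplane), the perceptron convergence theorem guarantees that this procedure finds a correct set of weights in finitely many steps.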

An active research program into perceptrons was carried out throughout the 1960s but came to a sudden halt with the publication of Minsky and Papert’s book Perceptrons, which showed that there were severe limitations to what perceptrons could do and that Rosenblatt’s predictions had been grossly exaggerated. While the book highlights some of the perceptron’s strengths, it also establishes previously unknown limitations. The most important of these concern the computation of certain predicates: a single-layer perceptron can compute only linearly separable functions, and so cannot compute the XOR function (which outputs ‘true’ when exactly one of its two inputs is true); the book also proves deep limits on the important connectedness predicate (telling whether an input pattern is connected, i.e., forms ‘one piece’, or not).
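
The XOR limitation can be made concrete: a single threshold unit computes only functions whose two classes of inputs can be separated by a line, and XOR’s four constraints are mutually contradictory. The brute-force check below is our own illustration, not Minsky and Papert’s proof (which is a short algebraic argument):

    # No choice of weights lets one threshold unit compute XOR.

    def fits_xor(w1, w2, b):
        """Does the unit (w1*x1 + w2*x2 + b > 0) reproduce XOR on all four inputs?"""
        xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
        return all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
                   for (x1, x2), y in xor_table.items())

    # XOR would require b <= 0, w1 + b > 0, w2 + b > 0 and w1 + w2 + b <= 0;
    # adding the two strict inequalities contradicts the other two, so no
    # setting of (w1, w2, b) can work -- as the grid search confirms.
    grid = [i / 10 for i in range(-20, 21)]
    print(any(fits_xor(w1, w2, b)
              for w1 in grid for w2 in grid for b in grid))  # -> False

The connectedness result is deeper: Minsky and Papert proved that no perceptron of bounded order can compute that predicate, however its weights are chosen.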

The effect of the book was devastating: virtually no research at all was done in connectionism for a decade, and major funding for connectionist projects was difficult to find throughout the 1970s and early 1980s. This ‘winter’ of connectionist research came to an end in the mid-1980s, when the work of John Hopfield, David Rumelhart and others revived large-scale interest in neural networks.

“Minsky (1927-2016) was a uniquely brilliant, creative, and charismatic person, and his intellect and imagination shone through in his work. His ideas helped shape the computer revolution that has transformed modern life over the past few decades, and they can still be felt in modern efforts to build intelligent machines—one of the most exciting and important endeavors of our age.

“Minsky grew up in New York City, and he attended Harvard, where his curiosity led him to study an eclectic range of subjects, including mathematics, biology, and music. He then completed a PhD in the prestigious mathematics program at Princeton, where he mingled with scientists including the physicist Albert Einstein and the mathematician and computer pioneer John von Neumann.

“Inspired by mathematical work on logic and computation, Minsky believed that the human mind was fundamentally no different than a computer, and he chose to focus on engineering intelligent machines, first at Lincoln Lab, and then later as a professor at MIT, where he cofounded the Artificial Intelligence Lab in 1959 with another pioneer of the field, John McCarthy.

“Minsky’s early achievements include building robotic arms and grippers, computer vision systems, and the first electronic learning system, a device, which he called Snarc, that simulated the functioning of a simple neural network fed visual stimuli. Remarkably, while at Harvard in 1956, he also invented the confocal scanning microscope, an instrument that is still widely used today in medical and scientific research.

“Minsky was also central to a split in AI that is still highly relevant. In 1969, together with Seymour Papert, an expert on learning, Minsky wrote a book called Perceptrons, which pointed to key problems with nascent neural networks. The book has been blamed for directing research away from this area of research for many years.

“Today, the shift away from neural networks may seem like a mistake, since advanced neural networks, known as deep learning systems, have proven incredibly useful for all sorts of tasks.

“In fact, the picture is a little more complicated. Perceptrons highlighted important problems that needed to be overcome in order to make neural networks more useful and powerful; Minsky often argued that a purely “connectionist” neural network-focused approach would never be sufficient to imbue machines with genuine intelligence. Indeed, many modern-day AI researchers, including those who have pioneered work in deep learning, are increasingly embracing this same vision” (Knight, ‘What Marvin Minsky still means for AI,’ MIT Technology Review, January 26, 2016).

Seymour Papert (1928-2016) was one of the early pioneers of AI and a seminal thinker on computers and pedagogy for children. Born in South Africa, he went to MIT in the early 1960s, where he co-directed the AI Lab with Minsky.



4to (278 x 214 mm), pp. [viii], 26; 15; 11; 8; 10; 3; 26; 24; 19; 9; 14; 12; 8 (each of the 11 lectures is separately paginated). Stapled as issued into clear plastic covers (front cover loose at two of the staples), punched with holes for a ring binder.

Item #4332

Price: $17,500.00

See all items in Computers, Numerical Methods
See all items by Marvin Minsky, Seymour Papert