Eric's Tags - russell-kirsch


Russell Kirsch: The man who taught computers to see

Russell Kirsch admits it: Inventing square pixels was a bad idea.

Square dots made pictures fuzzy.

"I started out with a bad idea," Kirsch says, "and that bad idea survived."

Few, however, think of the Portland resident's creation of the pixel itself. Instead, they stand in awe of what those first pixels produced, 50 years ago: The world's first digital image.

Measuring 176 pixels on each side, the image was a photocopy of Kirsch's infant son, an achievement that Life chronicled in its book "100 Photographs That Changed the World."


Russell Kirsch

In 1957, Kirsch's group developed a digital image scanner, to "trace variations of intensity over the surfaces of photographs", and made the first digital scans. One of the first photographs scanned,[8] a picture of Kirsch's three-month-old son, was captured as just 30,976 pixels,[9] a 176 × 176 array, in an area 5 cm × 5 cm (2" x 2").[10] The bit depth was only one bit per pixel, stark black and white with no intermediate shades of gray, but, by combining several scans made using different scanning thresholds, grayscale information could also be acquired.[8] They used the computer to extract line drawings, count objects, recognize alphanumeric characters, and produce oscilloscope displays.[10] He also proposed the Kirsch operator for edge detection in images.[11]
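
The Kirsch operator referred to here is a compass edge detector: eight 3 × 3 masks, one per compass direction, with the edge strength at a pixel taken as the maximum response across the masks. A minimal NumPy sketch of that idea (illustrative only, not the original implementation):

```python
import numpy as np

# Base Kirsch compass mask; the other seven are rotations of its outer ring.
BASE = np.array([[ 5,  5,  5],
                 [-3,  0, -3],
                 [-3, -3, -3]])

def kirsch_masks():
    """Generate the 8 compass masks by rotating the border of the base mask."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    values = [BASE[r, c] for r, c in ring]
    masks = []
    for shift in range(8):
        m = np.zeros((3, 3), dtype=int)
        for (r, c), v in zip(ring, np.roll(values, shift)):
            m[r, c] = v
        masks.append(m)
    return masks

def kirsch_edges(img):
    """Edge magnitude: maximum response over the 8 masks at each interior pixel."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros((h, w))
    masks = kirsch_masks()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = max(np.sum(patch * m) for m in masks)
    return out

# Toy usage: a vertical step edge gives strong responses along the boundary.
step = np.zeros((8, 8))
step[:, 4:] = 1.0
print(kirsch_edges(step))
```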


Earliest Image Processing

By 1957 computers were in common use in many laboratories and commercial establishments. Originally, they were devoted exclusively to numerical, algebraic, and geometric computation. Later, the symbol manipulation capability of computers became recognized, leading to so-called business data processing in which alphanumeric processing became routine. The alphanumeric data presented an obvious problem of inputting the vast quantity of data needed for business. This created activity in developing character recognition machinery (9). It occurred to R. A. Kirsch that a general purpose computer could be used to simulate the many character recognition logics that were being proposed for construction in hardware. This would require an input device that could transform a picture (of a character) into a form suitable for storage in the memory of a computer.

A further important advantage of building such a device was that it would enable programs to be written to simulate the multifarious ways in which humans view the visible world. A tradition had been building in which simple models of human structure and function had been studied, for example, in neuroanatomy and neurophysiology (10). The emphasis on binary representations of neural functions led us to believe that binary representations of images would be suitable for computer input. This serious mistake, discussed below, was implemented in the first picture scanner built. It was connected to the SEAC in 1957 and it enabled Kirsch's group to experiment with algorithms that launched the fields of image processing and image pattern recognition (11).

The scanner used a rotating drum and a photomultiplier to sense reflections from a small image mounted on the drum. A mask interposed between the picture and the photomultiplier tessellated the image into discrete pixels.

The group experimented with several classes of algorithms. The first was homogeneous transformations. Once an image was acquired, the great speed of SEAC was used to transform it with edge enhancement filters. These have become important in recent years as highly parallel methods of processing became common in neural network simulations, for example. They also provided the basis for the large class of image enhancement methods that developed. The group also wrote algorithms to make measurements on objects in an image. By showing that these objects could have multiple connectivity and still be measured correctly, they encouraged the development of specialized machines for image analysis.
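
Counting objects in an image comes down to connected-component labeling of the foreground: a shape is counted once however many holes it encloses, because the count follows connectivity rather than outline. A rough modern illustration of the idea (a simple flood fill, not the SEAC-era algorithm):

```python
from collections import deque

def count_objects(grid):
    """Count 4-connected foreground components (1s) in a binary image."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for r in range(h):
        for c in range(w):
            if grid[r][c] == 1 and not seen[r][c]:
                count += 1
                # Flood fill the whole component so it is counted only once,
                # even if it encloses holes.
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# Two objects: a solid blob and a ring (an object with a hole in it).
image = [
    [1, 1, 0, 1, 1, 1],
    [1, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 1],
]
print(count_objects(image))  # -> 2
```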

A staticizer connected to the SEAC memory enabled a stored image to be displayed on a cathode ray oscilloscope. This made it possible for the researchers to see what the computer "saw". And when they could see binary images, they realized the limitations of binary representation. So they experimented with superimposing multiple scans at different scanning thresholds and the use of time varying thresholds for pulse density modulation to represent multiple gray levels in an image.
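
Combining several 1-bit scans made at different thresholds approximates grayscale: a point that stays white through many thresholds is bright, one that turns black early is dark. A small sketch of that reconstruction, using synthetic data rather than actual scanner output:

```python
import numpy as np

def binary_scan(image, threshold):
    """Simulate one 1-bit scan: 1 where the scene is brighter than the threshold."""
    return (image > threshold).astype(np.uint8)

def grayscale_from_scans(image, levels=8):
    """Sum several binary scans at evenly spaced thresholds to recover gray levels."""
    thresholds = [(k + 0.5) / levels for k in range(levels)]
    scans = [binary_scan(image, t) for t in thresholds]
    return sum(scans)  # value in 0..levels, proportional to brightness

# A horizontal brightness ramp: each column gets a distinct reconstructed gray level.
ramp = np.tile(np.linspace(0.0, 1.0, 9), (3, 1))
print(grayscale_from_scans(ramp, levels=8))
```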

A feel for the age and maturity of the image processing field can be seen from the fact that one of the first pictures ever scanned and redisplayed was of Kirsch's newborn son. Today, that son's face is scanned and digitally processed to appear on the nightly news, where he works as a TV reporter. Recently, he showed his own newborn daughter on the evening news.



Fiftieth Anniversary of First Digital Image Marked

May 24, 2007

The first digital image made on a computer in 1957 showed researcher Russell Kirsch's baby son.

Credit: NIST
It was a grainy image of a baby—just 5 centimeters by 5 centimeters—but it turned out to be the well from which satellite imaging, CAT scans, bar codes on packaging, desktop publishing, digital photography and a host of other imaging technologies sprang.

It was 50 years ago this spring that National Bureau of Standards (NBS, now known as the National Institute of Standards and Technology, or NIST) computer pioneer Russell Kirsch asked "What would happen if computers could look at pictures?" and helped start a revolution in information technology. Kirsch and his colleagues at NBS, who had developed the nation's first programmable computer, the Standards Eastern Automatic Computer (SEAC), created a rotating drum scanner and programming that allowed images to be fed into it. The first image scanned was a head-and-shoulders shot of Kirsch's three-month-old son Walden.

The ghostlike black-and-white photo only measured 176 pixels on a side—a far cry from today's megapixel digital snapshots—but it would become the Adam and Eve for all computer imaging to follow. In 2003, the editors of Life magazine honored Kirsch's image by naming it one of "the 100 photographs that changed the world."

Kirsch and his wife Joan, an art historian, now reside in Oregon. Together, they use computers to analyze paintings and define the artistic processes by which they were created. Son Walden—whose face helped launch the era of computerized photography—works in communications for Intel following a successful career as a television news reporter.


RUSSELL KIRSCH: PIXEL PIONEER AND THE FATHER OF DIGITAL IMAGING

THE ORIGINAL PIXEL PUSHER

This photo of Russell’s son Walden is the first digitized image. Public Domain via Wikimedia
The term ‘pixel’ is a portmanteau of ‘picture element’. Technically speaking, a pixel is the smallest addressable element of a digital image. Pixels are the building blocks for anything that can be displayed on a computer screen, so they’re kind of the first addressable blinkenlights.

In 1957, Russell brought in a picture of his son Walden, which would become the first digital image. He mounted the photo on a rotating drum scanner that had a motor on one end and a strobing disk on the other. The drum was coupled to a photo-multiplier vacuum tube that traveled along it on a lead screw. Photo-multipliers are used to detect very low levels of light.

As the drum slowly rotated, the photo-multiplier moved back and forth, scanning the image through a square viewing hole in the wall of a box. The tube digitized the picture by transmitting ones and zeros to SEAC describing what it saw through that hole — 1 for white, and 0 for black. The digital image of Walden is 176 x 176 pixels, which was the maximum allowed by SEAC.
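
In other words, each sample reduces to a single bit by comparing the photo-multiplier reading against a threshold, and a full pass yields a 176 × 176 binary array. A toy simulation of that digitization step (synthetic image and arbitrary threshold, not the actual SEAC setup):

```python
import numpy as np

WIDTH = HEIGHT = 176   # the array size used for the 1957 scan
THRESHOLD = 0.5        # arbitrary cut-off between "white" (1) and "black" (0)

def scan_to_bits(photo, size=WIDTH, threshold=THRESHOLD):
    """Sample the photo on a size x size raster and keep one bit per sample."""
    h, w = photo.shape
    rows = (np.arange(size) * h) // size      # raster positions along the drum
    cols = (np.arange(size) * w) // size      # raster positions along the lead screw
    samples = photo[np.ix_(rows, cols)]       # nearest-sample "viewing hole" readings
    return (samples > threshold).astype(np.uint8)   # 1 = white, 0 = black

# Synthetic stand-in for the photograph: a bright disc on a dark background.
y, x = np.mgrid[0:512, 0:512]
photo = ((x - 256) ** 2 + (y - 256) ** 2 < 150 ** 2).astype(float)

bits = scan_to_bits(photo)
print(bits.shape, bits.min(), bits.max())   # (176, 176) 0 1
```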



Wherein The Inventor Of The Pixel Totally Agrees With Me, Even Though I Don’t Totally Agree With Him

53 years later, the guy who invented the square pixel regrets the error.
In 1957, NIST computer expert Russell Kirsch scanned the world’s first digital image [a photo of his infant son, above] using the country’s first programmable computer. To accommodate the memory and processing capacity of the available equipment, Kirsch had the computer break the image up into a 176×176 grid and assign a binary color value, black or white, to each of the resulting 30,976 square pixels.
Apparently, it’s been eating at him ever since, because he has, at age 81, published a suggestion for increasing the “precision and accuracy in scientific imaging” by replacing uniformly square pixels with pixels of variable shapes.
I do not know enough about compression algorithms and data/information loss to know whether Kirsch’s proposed method is either necessary or superior to the state of the art. But it is most fascinating to see one of the inventors of digital imaging remain so engaged and critical of the system he helped bring forth.
And frankly, though I don’t know any of the history or the context, I don’t necessarily agree with him that the grid and the square pixel was an “unfortunate” solution. In the 50+ years since the square pixel became the irreducible unit of visual information, it has acquired its own aesthetic and cultural context.
[Looking through the NIST Museum site, it sounds like the “serious mistake” was using a binary [i.e., b/w] basis for computer scanning in the belief that it was an accurate representation of human neural activity and visual data processing. It also sounds like the NIST folks started trying to correct for it almost immediately.]
When he completely agrees with me and validates my own assumptions, however, I agree with him completely. The man is a genius and a living legend:

…we show that the usual assumption that increased precision is accomplished with higher resolution of square pixel images does not necessarily result in the increased accuracy that can be achieved with the use of variable shape pixels…



What Is a Pixel?

Since then, some graphics pioneers such as Alvy Ray Smith have made a point to express the idea that a pixel is not actually a square; it's more abstract and fluid than that from a conceptual and mathematical standpoint. And he's correct. But for most people in most modern applications, a pixel basically is a colored digital square used to build a larger image, similar to a tile in a mosaic or a stitch in needlepoint.
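
One way to see Smith's point is that a pixel is a point sample of an underlying signal, and the little square is just one crude way of reconstructing that signal from its samples; a smoother reconstruction filter uses the same samples and produces no visible tiles. A small 1-D sketch of the contrast:

```python
import numpy as np

samples = np.array([0.0, 1.0, 0.2, 0.8])   # four "pixels": point samples, not squares
positions = np.arange(len(samples))

def reconstruct_nearest(x):
    """'Little square' view: each sample owns a solid interval (nearest neighbour)."""
    idx = np.clip(np.round(x).astype(int), 0, len(samples) - 1)
    return samples[idx]

def reconstruct_linear(x):
    """Same samples, smoother reconstruction (linear interpolation between them)."""
    return np.interp(x, positions, samples)

x = np.linspace(0, 3, 7)        # query points between the samples
print(reconstruct_nearest(x))   # blocky, piecewise-constant
print(reconstruct_linear(x))    # continuous, no visible "squares"
```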



A Biography of the Pixel

The pixel as the organizing principle of all pictures, from cave paintings to Toy Story.

The Great Digital Convergence of all media types into one universal digital medium occurred, with little fanfare, at the recent turn of the millennium. The bit became the universal medium, and the pixel—a particular packaging of bits—conquered the world. Henceforward, nearly every picture in the world would be composed of pixels—cell phone pictures, app interfaces, Mars Rover transmissions, book illustrations, videogames. In A Biography of the Pixel, Pixar cofounder Alvy Ray Smith argues that the pixel is the organizing principle of most modern media, and he presents a few simple but profound ideas that unify the dazzling varieties of digital image making.

Smith's story of the pixel's development begins with Fourier waves, proceeds through Turing machines, and ends with the first digital movies from Pixar, DreamWorks, and Blue Sky. Today, almost all the pictures we encounter are digital—mediated by the pixel and irretrievably separated from their media; museums and kindergartens are two of the last outposts of the analog. Smith explains, engagingly and accessibly, how pictures composed of invisible stuff become visible—that is, how digital pixels convert to analog display elements. Taking the special case of digital movies to represent all of Digital Light (his term for pictures constructed of pixels), and drawing on his decades of work in the field, Smith approaches his subject from multiple angles—art, technology, entertainment, business, and history. A Biography of the Pixel is essential reading for anyone who has watched a video on a cell phone, played a videogame, or seen a movie. 400 pages of annotations, prepared by the author and available online, provide an invaluable resource for readers.


Russell Kirsch: The man who taught computers to see

But as Kirsch told an oral historian for the National Museum of American History in 1970, he was among a "fortunate happy few" who also had access to the computer for their own private "speculative" experiments, such as artificial intelligence and image processing.

Had he known of the Ravenna mosaicists' work, he would have borrowed their techniques instead of using square pixels. And if later computer scientists had learned from Kirsch's mistake, they would not have perpetuated an inferior imaging technique.


Computers Viewing Artists at Work

Our title suggests an Artificial Intelligence approach to the use of computers in the fine arts. We consider computers to have capabilities beyond the utilitarian ones of aiding in art making. Rather, we will investigate the possibility of computers seeing, even understanding, significant form in art. This understanding cannot arise autonomously, but must be the product of careful tutelage by artists, critics, and historians. A powerful tutorial mechanism to use for computers to learn about art is the picture grammar, which allows large classes of compositional structures to be described to a computer by the scholar who has a deep understanding of the art works. In this paper, we illustrate how a machine can be taught the compositional structure of the paintings of the contemporary artist Richard Diebenkorn. With such grammatical instruction, the computer can analyze existing paintings, generate new ones of the same style, and provide a beginning to a computational theory of style.
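
A picture grammar expresses compositional structure as rewriting rules over regions, and applying the rules generates new compositions in the described class. The following is a hypothetical, much-simplified sketch of that idea (recursive subdivision of a canvas into rectangular fields), not the grammar the authors actually built for Diebenkorn:

```python
import random

def subdivide(region, depth, rng, min_frac=0.25):
    """One toy grammar rule: a region rewrites to two sub-regions split
    horizontally or vertically, or stays terminal (a painted field)."""
    x, y, w, h = region
    if depth == 0 or rng.random() < 0.3:
        return [region]                       # terminal region
    if rng.random() < 0.5:                    # vertical split
        cut = w * rng.uniform(min_frac, 1 - min_frac)
        left  = (x, y, cut, h)
        right = (x + cut, y, w - cut, h)
        return subdivide(left, depth - 1, rng) + subdivide(right, depth - 1, rng)
    else:                                     # horizontal split
        cut = h * rng.uniform(min_frac, 1 - min_frac)
        top    = (x, y, w, cut)
        bottom = (x, y + cut, w, h - cut)
        return subdivide(top, depth - 1, rng) + subdivide(bottom, depth - 1, rng)

rng = random.Random(7)
composition = subdivide((0.0, 0.0, 1.0, 1.0), depth=4, rng=rng)
for rect in composition:
    print("region  x=%.2f y=%.2f w=%.2f h=%.2f" % rect)
```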


Precision and Accuracy in Scientific Imaging

Digital images are commonly used to represent scientific data. Typically, high resolution images with many square pixels are considered to be necessary under the assumption that the increased precision of such images yields increased accuracy to the viewer. We question this assumption by demonstrating improved accuracy in viewing digital images without requiring increased resolution: pixels with variable shapes, chosen to best represent an image, constitute a significant improvement over square pixels in enhancing the accuracy of viewing such digital images.
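
The contrast can be illustrated by comparing a block represented by one flat value (the ordinary square pixel) with a block allowed to split along one of a few candidate shapes, each part keeping its own value; when an edge crosses the block, the variable-shape version reproduces it far more accurately. A rough, hypothetical sketch of that comparison, not the construction used in the paper:

```python
import numpy as np

def square_pixel(block):
    """Represent the block by one flat value (the ordinary square pixel)."""
    return np.full_like(block, block.mean())

def variable_shape_pixel(block):
    """Try a few two-part masks (left/right, top/bottom, two diagonals) and keep
    the one that reconstructs the block with the smallest squared error."""
    n = block.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    masks = [xx < n // 2, yy < n // 2, xx + yy < n, xx >= yy]
    best, best_err = None, np.inf
    for m in masks:
        recon = np.where(m, block[m].mean(), block[~m].mean())
        err = ((recon - block) ** 2).sum()
        if err < best_err:
            best, best_err = recon, err
    return best

# A block crossed by a diagonal edge.
n = 8
yy, xx = np.mgrid[0:n, 0:n]
block = (xx + yy < n).astype(float)

for name, recon in [("square", square_pixel(block)),
                    ("variable shape", variable_shape_pixel(block))]:
    print(name, "squared error:", ((recon - block) ** 2).sum())
```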


Storing Art Images in Intelligent Computers

ABSTRACT
Images of artworks can be stored in media that preserve different characteristics of the original. Differences exist in the extent to which we can preserve color, three-dimensionality, surface texture, fine structure, tonal gradations, temporal variations and other characteristics that lend uniqueness to individual artworks. Usually, we are willing to sacrifice some of these characteristics in exchange for the permanence and recoverability offered by storage media. Thus, a color slide (diapositive), which is a common medium for storing images of artworks, compromises all of the above properties to different extents but is nevertheless considered useful for the archival properties it offers for images of artworks. Digital storage media used in conjunction with computers offer new opportunities and demand new compromises in storing art images. An unusual challenge is offered by the possibility of providing intelligence to a computer. The authors make clear the sense in which we may ascribe intelligence to the computer and how this may be used to 'perceive' the image of an artwork. The computer then uses its knowledge of the artwork with respect to a large class of such works not only for archival storage but also to achieve economy in the use of the storage medium. The authors illustrate the achievement of storage economy as much as tens of thousands of times greater than storage without intelligence. The intelligence is provided to the computer as syntactic descriptions of classes of artworks. The syntactic descriptions incorporate insight from the art historian, critic or artist who uses innovative tools like shape grammars to provide the computer with a small part of the intelligence that the educated human viewer brings to the perception of the artwork.


Post-Digital Rhetoric and the New Aesthetic

The proliferation of smart devices, digital media, and network technologies has led to everyday people experiencing everyday things increasingly on and through the screen. In fact, much of the world has become so saturated by digital mediations that many individuals have adopted digitally inflected sensibilities. This gestures not simply toward posthumanism, but more fundamentally toward an altogether post-digital condition, one in which the boundaries between the "real" and the "digital" have become blurred and technology has fundamentally reconfigured how we make sense of the world. Post-Digital Rhetoric and the New Aesthetic takes stock of these reconfigurations and their implications for rhetorical studies by taking up the New Aesthetic, a movement introduced by artist/digital futurist James Bridle that was meant to capture something of a digital way of seeing by identifying aesthetic values that could not exist without computational and digital technologies. Bringing together work in rhetoric, art, and digital media studies, Hodgson treats the New Aesthetic as a rhetorical ecology rather than simply an aesthetic movement, allowing him to provide operative guides for the knowing, doing, and making of rhetoric in a post-digital culture.


The Anatomy of Painting Style: Description with Computer Rules

Experts recognize artistic style by invoking knowledge of plastic elements and their juxtaposition. Algorithmic methods now make it possible for this knowledge to be expressed to a computer. Compared to the alternative of describing style in natural language, the computer methods offer the advantage that they can be tested for validity. The tests are accomplished by using the computer to generate new compositions in the style described. The authors demonstrate these methods with algorithmic descriptions of the styles of Richard Diebenkorn and Joan Miró and the generation of new compositions in their styles. A shape grammar algorithm for Diebenkorn is presented which accounts for the linear facture of his "Ocean Park" series. The problem of shape in Miró's work is tackled, and progress is reported on synthesizing composition in the style of his "Constellation" series. Further uses of algorithmic description of painting styles include mechanical storage, search and retrieval in art archives, attribution studies and diachronic studies of stylistic change.


Workshop on standards for image pattern recognition


Gaithersburg, MD: U.S. Dept. of Commerce, National Institute of Standards and Technology


First programmable computer turns 60

In 1950, the National Bureau of Standards, now called the National Institute of Standards and Technology (NIST), dedicated the first programmable computer in U.S. history, the Standards Eastern Automatic Computer (SEAC). The computer moved information as sound-wave pulses traveling through 64 mercury-filled glass tubes in an early data-storage system known as acoustic delay. Each tube had a quartz crystal at both ends; one was the transmitter and the other a receiver. Sound pulses were repeatedly sent and received in a closed loop, recirculating the data patterns they carried and, essentially, storing them.
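
Functionally, an acoustic delay line behaves like a recirculating queue: each pulse that arrives at the receiving crystal is re-amplified and fed back in at the transmitting end, so the data exists only as pulses in flight. A toy model of one such tube (assumed sizes, nothing SEAC-specific):

```python
from collections import deque

class DelayLine:
    """A recirculating store: bits are only available as they 'arrive' at the
    receiving end, and are immediately re-transmitted to keep them alive."""

    def __init__(self, bits):
        self.line = deque(bits)          # pulses currently travelling in the tube

    def tick(self):
        """One pulse arrives at the receiver and is re-injected at the transmitter."""
        bit = self.line.popleft()
        self.line.append(bit)
        return bit

    def read_word(self):
        """Reading a word means waiting for all of its bits to circulate past."""
        return [self.tick() for _ in range(len(self.line))]

tube = DelayLine([1, 0, 1, 1, 0, 0, 1, 0])
print(tube.read_word())   # the stored pattern, recovered one pulse at a time
print(tube.read_word())   # still there: recirculation preserves the data
```
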
The SEAC had the equivalent of 6,000 bytes of storage (compared to the approximately 11 billion bytes on a home computer’s hard drive). In the book Measures for Progress: A History of the National Bureau of Standards, R. C. Cochrane states that the SEAC “could add or subtract pairs of 11-digit numbers 1,000 times a second, or multiply and divide them 330 times a second.” He also wrote, “failure of one of its [SEAC’s] more than 100,000 connections and components, even for a millionth of a second, would result in a computer malfunction. Yet, often operating nonstop, 7 days a week, SEAC performed for 4,000 hr in the first 9 months without a malfunction.”
The computer was used to do computations for the military and the Atomic Energy Commission; calculations on electronic-circuit design and for optical lenses; statistical sorting and tabulating for Social Security and the Census Bureau; and processing of data from studies of crystal structure.

One memorable application occurred in 1957 when NBS researcher Russell Kirsch and colleagues created a rotating drum scanner and the programming that allowed images to be fed into the computer. The first image scanned was a grainy, black-and-white shot of Kirsch’s three-month-old son, Walden, a picture honored by Life magazine in 2003 as one of “the 100 photographs that changed the world.”

The computer was retired in 1964 when it was replaced by more-advanced computers.


Russell Kirsch and The History Of Digitising Photos

Further innovations in digital image processing piggybacked off Kirsch’s decision, and the majority of modern displays still use square pixels. Years later, in 2010, Kirsch said that the use of square pixels had ultimately limited the possibilities of digital imaging.


Russell Kirsch, the computer scientist who invented the pixel, has passed away

In something of an oversimplification, the drum scanner would scan and digitize the picture as it rotated on the drum, and then feed that information into SEAC's memory – the entire process took 25 seconds or less. That information was then output to an oscilloscope CRT (cathode-ray tube) display. The result was the first digital image ever created: a digital photocopy of Kirsch's infant son, which came out at 176 x 176 pixels. This marked the birth of what we now know as the pixel.

