Is life a form of computation?

IN 1994, a strange, pixelated machine came to life on a computer screen. It read a series of instructions, copied them, and built a clone of itself, just as the Hungarian-American polymath John von Neumann had predicted half a century earlier. It was a striking demonstration of a deep idea: that life, at bottom, may be computational.
Although this is rarely appreciated, von Neumann was among the first to establish a deep link between life and computation. Reproduction, like computation, he showed, could be carried out by machines following coded instructions. In his model, based on Alan Turing's universal machine, self-replicating systems read and execute instructions much as DNA does: “if the next instruction is the codon CGA, then add arginine to the protein under construction.” It is not a metaphor to call DNA a “program”: it is literally the case.
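The “if codon, then amino acid” rule really is a lookup-and-execute loop, and can be sketched in a few lines of Python. Only a handful of real entries from the genetic code are included here; the function name and the example strand are illustrative choices, not anything from von Neumann's model.

```python
# A toy illustration of the genetic "program": translation as a
# fetch-decode-execute loop over three-letter instructions (codons).
# Just a few real codon-to-amino-acid rules are listed here.
CODON_TABLE = {
    "ATG": "Met",  # start codon
    "CGA": "Arg",  # "if the codon is CGA, then add arginine"
    "GGC": "Gly",
    "TAA": None,   # stop codon: halt the program
}

def translate(dna: str) -> list[str]:
    """Read the strand three letters at a time, executing each codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid is None:  # stop codon reached
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGCGAGGCTAA"))  # → ['Met', 'Arg', 'Gly']
```

The loop mirrors a Turing machine's basic cycle: read the next symbol on the tape, look up the rule it triggers, act, and advance.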
Of course, there are significant differences between biological computers and the kind of digital computation carried out by a personal computer or your smartphone. DNA is subtle and multilayered, encompassing phenomena such as epigenetics and gene-proximity effects. Nor is cellular DNA the whole story: our bodies contain (and continually exchange) countless bacteria and viruses, each running its own code.
Biological computation is “massively parallel,” decentralized, and noisy. Your cells contain somewhere in the neighborhood of 300 quintillion ribosomes, all working at the same time. Each of these exquisite floating protein factories is, in effect, a small computer, albeit a stochastic one, meaning it is not entirely predictable.
The movements of its hinged components, the capture and release of smaller molecules, and the manipulation of chemical bonds are all individually random, reversible, and imprecise, jostled this way and that by constant thermal agitation. Only a statistical asymmetry favors one direction over another, with clever origami-like moves that tend to “lock in” certain stages so that a next step becomes likely to occur.
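How can individually random, reversible steps add up to reliable forward progress? A toy biased random walk makes the statistics concrete. This is a deliberately crude stand-in for a molecular machine, not a model of any real ribosome; the 55/45 bias is an arbitrary illustrative number.

```python
import random

# Toy model of a machine whose individual steps are random and
# reversible: each step goes forward or backward, with only a slight
# statistical bias favoring "forward." The bias stands in for the
# energy landscape that "locks in" progress.
def noisy_walk(steps: int, p_forward: float = 0.55, seed: int = 0) -> int:
    rng = random.Random(seed)
    position = 0
    for _ in range(steps):
        position += 1 if rng.random() < p_forward else -1
    return position

# Despite constant back-stepping, net progress emerges statistically:
# the expected final position is steps * (2 * p_forward - 1).
print(noisy_walk(10_000))  # close to 10_000 * 0.10 = 1000
```

No single step is trustworthy, yet the aggregate is: the same trick a ribosome exploits, and the opposite of how a logic gate earns its keep.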
This differs considerably from the operation of the “logic gates” in a computer, basic components that transform binary inputs into outputs according to fixed rules. They are irreversible, and 99.99 percent reliable and reproducible.
Biological computation is computation nonetheless. And its use of randomness is a feature, not a bug. In fact, many classic computing algorithms also require randomness (though for various reasons), which may explain why Turing insisted that the Ferranti Mark I, an early computer he helped design in 1951, include a random-number instruction. Randomness is thus a small but important conceptual extension of the original Turing machine, although any computer can simulate it by computing deterministic but random-looking, or “pseudorandom,” numbers.
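A pseudorandom generator shows how determinism can imitate chance. The sketch below is a classic linear congruential generator; the constants are a well-known textbook choice, and real systems use stronger generators, but the principle is the same.

```python
# A minimal linear congruential generator: fully deterministic, yet its
# output looks random. The multiplier and increment are the widely used
# Numerical Recipes constants; modern software uses fancier generators,
# but the idea — determinism imitating chance — is identical.
def lcg(seed: int):
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state / 2**32  # scale to a float in [0, 1)

gen = lcg(seed=42)
samples = [next(gen) for _ in range(3)]
print(samples)  # the same seed always reproduces the same "random" sequence
```

That reproducibility is exactly what distinguishes pseudorandomness from the thermal noise a ribosome runs on: rerun the program and the “random” numbers come out identical.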
Parallelism, too, is increasingly fundamental to computation today. Modern AI, for example, depends on both massive parallelism and randomness: the parallelized “stochastic gradient descent” (SGD) algorithm used to train most of today's neural networks, the “temperature” parameter used in chatbots to introduce a degree of randomness into their output, and the parallelism of the graphics processing units (GPUs) that power most AI in data centers.
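The “temperature” knob mentioned above is simple enough to sketch: a model's raw scores (logits) are divided by the temperature before being turned into probabilities, so low temperatures make the top choice dominate while high temperatures flatten the distribution. The logits and tokens below are made-up illustrative values, not output from any real model.

```python
import math
import random

# Sketch of temperature sampling. Dividing logits by the temperature
# before the softmax sharpens (T < 1) or flattens (T > 1) the
# resulting probability distribution over tokens.
def sample(logits, tokens, temperature=1.0, rng=random):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(tokens, weights=probs)[0]

logits = [2.0, 1.0, 0.1]                     # hypothetical model scores
tokens = ["the", "a", "every"]
print(sample(logits, tokens, temperature=0.1))   # almost always "the"
print(sample(logits, tokens, temperature=10.0))  # much more varied
```

At temperature near zero the chatbot becomes deterministic and repetitive; turned up, it becomes creative, then incoherent — randomness as a dial rather than a defect.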
Traditional digital computing, based on centralized, sequential execution of instructions, was a product of technological constraints. The first computers had to carry out long calculations using as few parts as possible, because originally those parts were costly and unreliable: vacuum tubes, for example, tended to burn out and needed frequent replacement by hand. The natural design was therefore a minimal “central processing unit” (CPU) operating on sequences of bits shuttled back and forth from an external memory. This became known as the “von Neumann architecture.”
Turing and von Neumann both knew that computation could be carried out by other means. Toward the end of his life, Turing explored how biological patterns such as leopard spots could arise from simple chemical rules, in a field he called morphogenesis. Turing's morphogenesis model was a massively parallel, distributed, biologically inspired form of computation. So was his earlier concept of an “unorganized machine,” a randomly connected neural net modeled on an infant's brain.
These were visions of what computing might look like without a central processor, and of what it does look like in living systems.
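Turing's morphogenesis idea — patterns emerging from two chemicals that diffuse at different rates while reacting — fits in a short simulation. The sketch below uses the Gray-Scott reaction-diffusion model in one dimension, a standard modern stand-in for Turing's equations rather than his original formulation, and the parameter values are common illustrative choices.

```python
# A one-dimensional reaction-diffusion sketch (Gray-Scott model): two
# "chemicals," U and V, diffuse at different rates and react. Starting
# from a nearly uniform state, structure can emerge on its own — the
# essence of Turing's morphogenesis. Parameters are illustrative.
def laplacian(a):
    n = len(a)
    return [a[(i - 1) % n] + a[(i + 1) % n] - 2 * a[i] for i in range(n)]

def simulate(n=120, steps=4000, du=0.16, dv=0.08, feed=0.035, kill=0.065):
    u = [1.0] * n
    v = [0.0] * n
    for i in range(n // 2 - 4, n // 2 + 4):
        v[i] = 0.5                       # seed a small local perturbation
    for _ in range(steps):
        lu, lv = laplacian(u), laplacian(v)
        for i in range(n):
            uvv = u[i] * v[i] * v[i]     # reaction: U + 2V -> 3V
            u[i] += du * lu[i] - uvv + feed * (1.0 - u[i])
            v[i] += dv * lv[i] + uvv - (feed + kill) * v[i]
    return v

pattern = simulate()
# After thousands of steps, the concentration of V along the line is no
# longer the uniform field it started as.
```

Note there is no controller anywhere in this loop: every cell applies the same local chemistry, and the pattern is a global side effect — computation without a CPU.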
Von Neumann also began to explore massively parallel approaches to computation in the 1940s. In discussions with the Polish mathematician Stanisław Ulam at Los Alamos, he conceived the idea of “cellular automata”: pixel-like grids of simple computing units, all obeying the same rule, and all updating their states based on those of their immediate neighbors. With characteristic bravura, von Neumann went on to design, on paper, the key components of a self-reproducing cellular automaton, including a horizontal “tape” of cells containing instructions and blocks of cellular “circuitry” to read, copy, and execute them.
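Von Neumann's 29-state automaton is far too intricate to reproduce here, but the core idea — a grid of cells, all obeying one shared rule, each updating from its immediate neighbors — fits in a few lines. The sketch below is Wolfram's elementary “rule 110,” a much simpler one-dimensional cellular automaton (and one that is itself known to be computationally universal), standing in for von Neumann's design.

```python
# An elementary one-dimensional cellular automaton. Every cell is 0 or 1
# and updates from its 3-cell neighborhood using one shared rule, here
# rule 110, whose binary digits encode the output for each of the 8
# possible neighborhoods.
RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        # Read the neighborhood (left, self, right) as a number 0-7...
        pattern = 4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]
        # ...and look up the corresponding bit of the rule number.
        out.append((RULE >> pattern) & 1)
    return out

cells = [0] * 31
cells[15] = 1                        # a single "on" cell in the middle
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Run it and a growing triangular lattice of structure unfolds from a single live cell — all of it implicit in an 8-entry rule table applied everywhere at once.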
Designing a cellular automaton is much harder than ordinary programming, because every cell, or “pixel,” simultaneously modifies both its own state and its environment. Add randomness and subtle feedback effects, as in biology, and it becomes harder still to reason about, “program,” or “debug.”
Nevertheless, Turing and von Neumann had grasped something fundamental: computation does not require a central processor, logic gates, binary arithmetic, or sequential programs. There are endless ways to compute and, crucially, they are all equivalent. This insight is one of the greatest achievements of theoretical computer science.
This “platform independence” or “multiple realizability” means that any computer can emulate any other. If the computers are of sufficiently different designs, though, the emulation can be glacially slow. For this reason, von Neumann's self-reproducing cellular automaton has never been physically built, though it is fun to watch in simulation!
That demonstration in 1994, the first successful emulation of von Neumann's self-reproducing automaton, could not have taken place much earlier. A serial computer requires serious processing power to churn through the automaton's 6,329 cells over the 63 billion time steps needed to complete its reproduction cycle. On screen, it worked as advertised: a pixelated two-dimensional machine, trailing a 145,315-cell-long instruction tape off to the right, pumping information in from the tape and reaching out with a “writing arm” to slowly print a working clone of itself above and to the right.
It is similarly inefficient for a serial computer to emulate a parallel neural network, heir to Turing's “unorganized machine.” Consequently, running large neural nets like those behind transformer-based chatbots has become practical only recently, thanks to continual progress in the miniaturization, speed, and parallelism of digital computers.
In 2020, my colleague Alex Mordvintsev combined modern neural networks, Turing's morphogenesis, and von Neumann's cellular automaton in the “neural cellular automaton” (NCA), replacing the simple per-pixel rule of a conventional cellular automaton with a neural net. This net, able to sense and affect a few values representing local morphogen concentrations, can be trained to “grow” any desired pattern or image, not just zebra stripes or leopard spots.
Real cells do not literally have neural nets inside them, but they do run highly sophisticated, nonlinear, purposeful “programs” to decide what actions to take in the world, given an external stimulus and an internal state. NCAs offer a general way to model the range of possible behaviors of cells whose actions involve not movement but only state changes (here represented as color) and the absorption or release of chemicals.
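The structure of an NCA can be sketched compactly: every cell carries a small state vector and updates it by feeding its own state and its neighbors' states through one shared tiny neural net. To keep the sketch self-contained it is one-dimensional and its weights are random (untrained); a real NCA, like Mordvintsev's, is two-dimensional and trained by gradient descent so that the grid grows a target image. All names and sizes here are illustrative.

```python
import math
import random

# A structural sketch of a neural cellular automaton: one shared
# "neural net" (here a single random, untrained linear layer) maps each
# cell's neighborhood to an update of that cell's state vector.
rng = random.Random(0)
STATE = 4                              # channels per cell (e.g. color + hidden)
IN = 3 * STATE                         # perceive self + left + right neighbors

W = [[rng.gauss(0, 0.1) for _ in range(IN)] for _ in range(STATE)]

def update_cell(left, me, right):
    perception = left + me + right     # concatenate the neighborhood states
    delta = [sum(w * x for w, x in zip(row, perception)) for row in W]
    # Residual update, squashed by tanh to keep states bounded.
    return [math.tanh(m + d) for m, d in zip(me, delta)]

def step(grid):
    n = len(grid)
    return [update_cell(grid[(i - 1) % n], grid[i], grid[(i + 1) % n])
            for i in range(n)]

grid = [[0.0] * STATE for _ in range(16)]
grid[8] = [1.0] * STATE                # a single "seed" cell
for _ in range(5):
    grid = step(grid)
```

Training would mean adjusting `W` by gradient descent until repeated `step` calls grow a desired pattern from the seed; the cellular-automaton skeleton itself stays exactly this simple.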
The first NCA Alex showed me was an emoji lizard that could regenerate not only its tail, but also its limbs and its head! It was a powerful demonstration of how multicellular life can “think locally” yet “act globally,” even when every cell (or pixel) runs the same program, just as every one of your cells runs the same DNA. Simulations like these show how computation can produce lifelike behavior across scales. Drawing on von Neumann's designs and extending them into modern neural cellular automata, they offer a glimpse of the computational foundations of living systems.
This story is reprinted with the permission of the MIT Press Reader. It is adapted from “What Is Intelligence?”
To learn more about the computational basis of life and the ideas of Alan Turing in Nautilus, check out these stories:
In the Beginning, There Was Computation: Life is code, and code is life, in nature as it is in technology.
The Man Who Tried to Redeem the World with Logic: Walter Pitts rose from the streets to MIT, but couldn’t escape himself.
Turing Patterns Turn Up in a Tiny Crystal: Extending Alan Turing’s 1952 idea about leopard spots to the atomic scale.
Lead image: O-IAHI / Shutterstock



