Also characteristic of all this is the notion of operational closure, which states that, to implement this internal organization, the organism does not need any information from outside.

The ant behaves like an ant, the earthworm like an earthworm, due only to the information existing inside their living systems. The environment can only trigger the internal mechanisms of behaviour. And if there are changes in the environment, the organism may adapt in order to maintain its autopoiesis, or die.

Important in this picture is the fact that the observer (the scientist) should not attribute any particular aim or design to the behaviour of the living, as that would be an anthropomorphic projection; the observing scientist can only say that the living does what it has to do.

For example, for an observer to say that the amoeba moves in a sugar gradient in order to get food would be an unacceptable extrapolation, as the amoeba, from its internal rules, does not know anything about sugar or about feeding; it does what it has to do according to its internal program.

This also means that the observer cannot “see” the world from the inside of the observed living system, which goes back to the initial point about consciousness: subjective experience cannot be felt by any other person or thing. It also suggests the existence of many worlds, as many as the different observers.

Now, let us turn to a robot which moves and does things in a given place. The robot is generally programmed so as to recognize a given environment and to interact properly with it, be it cleaning a room in an apartment or attending patients in a hospital. If I were to put this robot in a swimming pool, it would be unable to do anything; it would “die”.

But one could have programmed this robot so that it also recognizes water and has a program to move in the swimming pool. The robot would then be capable of doing so, or even of working in an atmosphere at 90 degrees or in an atmosphere dense with carbon dioxide or ammonia.

In fact, as is well known, one of the advantages of using robots is that they can work in environments which are prohibitive for humans. Of course, the robot would not adapt in our biological sense; it can only display cognition in a different environment if it is programmed for that.
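
A minimal sketch, purely illustrative and with hypothetical names of my own, of what this last point amounts to: the robot's repertoire is a fixed dispatch over environments enumerated in advance by its designers, and anything outside that list simply has no behaviour attached to it.

def act(environment: str) -> str:
    # Behaviours the designers explicitly programmed (hypothetical examples).
    handlers = {
        "apartment": "clean the room",
        "hospital": "attend the patients",
        "swimming_pool": "switch to underwater locomotion",
    }
    # Outside the programmed repertoire nothing is available:
    # this is the sense in which the robot above "dies".
    return handlers.get(environment, "no behaviour available")

print(act("apartment"))       # clean the room
print(act("volcano_crater"))  # no behaviour available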

Now, let us go back to the relation between cognition and autopoiesis. On the basis of what we have said, we can accept that robots may be conceived and constructed in order to be cognitive, the big difference with living organisms being this: animals or plants interact with the environment primarily in order to implement their own autopoiesis, to keep a homeostatic behaviour going. Can the same be said for robots?

Here the question becomes more difficult. Let us consider a specific case.

Suppose a robot programmed in such a way as to regenerate its own energy when necessary, by charging itself at a wall plug; it could even be programmed to repair and renew certain simple parts of its interior, for example to fix simple electric circuits or exchange light bulbs.
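
The following is a minimal sketch, under assumptions of my own and with hypothetical names (Robot, dock_and_recharge, replace_part), of what such a self-maintenance loop could look like: the robot monitors a few internal variables and restores them when they drift out of bounds.

class Robot:
    def __init__(self):
        self.battery_level = 1.0          # fraction of full charge
        self.parts_ok = {"light_bulb": True, "circuit_A": True}

    def dock_and_recharge(self):
        """Plug into the wall socket and recharge to full."""
        self.battery_level = 1.0

    def replace_part(self, name):
        """Swap in a spare for a broken simple part."""
        self.parts_ok[name] = True

    def self_maintain(self):
        """Homeostasis-like checks: keep internal variables within bounds."""
        if self.battery_level < 0.2:
            self.dock_and_recharge()
        for name, ok in self.parts_ok.items():
            if not ok:
                self.replace_part(name)

    def work(self, steps=100):
        for _ in range(steps):
            self.battery_level -= 0.05    # normal work drains the battery
            self.self_maintain()

robot = Robot()
robot.work()
print(robot.battery_level)  # after maintenance, never below the recharge threshold

Whether such a loop deserves the word “self-maintenance” in the biological sense is exactly the question raised next.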

Wouldn’t this be a kind of self-maintenance, corresponding roughly to homeostatic self-maintenance? And if we answer yes to this question, we would then be asked whether robots are thus living, as they satisfy the criteria of autopoiesis and cognition simultaneously…

Here even more caution is necessary. First of all, I hear the admonition of Maturana, particularly frequent and strong in recent years: that autopoiesis, and the living in general, has to do with molecular structures and molecular mechanisms; and that therefore all that is not molecular, like machines or even social systems, cannot be brought into the framework of autopoiesis and therefore of the living.

This is clearly an important clarification, but on the other hand the notion of autopoiesis has for several years been extended to non-molecular systems; see the social autopoiesis of Luhmann (1984). A robot which would regenerate itself from within: would it be autopoietic, and then living?

 

A disturbing presence | Photo from Wall Street International.

We are confronted here with an old problem of general validity: once you give and accept a definition (here, the equivalence between autopoiesis and life) you are then obliged to be consistent with it.

At this point, I would go back to the caveat of Maturana and talk about life only in the case of molecular mechanisms. The self-regenerating robot, assuming that something like it could be constructed, would be something else, for which we should find a new name.

But certainly, we can say that in robots there may be cognition without life. After all, this is not breaking news. The reconnaissance vehicles which have been sent to the Moon or to Mars are certainly devices endowed with cognition, and so are other AI machines, like the drones which are now commonly used, and not for the most peaceful aims.

Should we use here a different term from cognition, just to avoid confusion with the “living” cognition of Maturana and Varela?

This is again a general question: whether and to what extent, when talking about AI systems, robots in the first place, we should use, or abandon, the terminology we commonly use for the living. And this brings us to a general outline, almost a simple-minded conclusion, from these few notes.

In fact, I believe that the most general observation is this: on the basis of an apparent analogy, we should not simply extend to robots and machines the notions and terminology that we use for living systems. That would amount to a trivial anthropomorphic projection.

Robots, and other AI devices, are new things which require their own vocabulary. I am thinking, for example, of electronic circuits, for which we often use, sic et simpliciter, the language used for the neuronal networks of the brain. More work to do with robots and AI also at the semantic level? Certainly so.

References
H. Maturana and F. Varela, The Tree of Knowledge, rev. ed., Boston, Shambhala, 1998.
H. Maturana and F. Varela, Autopoiesis and Cognition, Dordrecht, Reidel, 1980.
P. L. Luisi, The Emergence of Life, 2nd ed., Cambridge University Press, 2016.
N. Luhmann, Soziale Systeme, Suhrkamp, 1984.