


I MILLE VOLTI DELLA COMPLESSITÀ
(Epistemologia, XXII/1999, n. 1, pp. 91-116)


CHAOS AND COMPLEXITY: PHILOSOPHICAL IMPLICATIONS

(Invited paper at the international congress "The scientific and philosophical challenge of complexity", Ponte di Legno, 27/12/1998-2/1/1999)

0. Introduction and the problems I won't treat.
1. Prediction and explanation.
2. Levels of knowledge and epistemological anti-reductionism.
3. The problem of forms and universals.
4. Intentionality and ontological anti-reductionism.
5. On the nature of mathematics.
6. Conclusions.
Bibliography

0. Introduction and the problems I won't treat.

The aim of my paper is to point out some philosophical implications of chaos theory. I shall not repeat what chaos is, because that has already been done very well during these days. Sometimes, however, I shall briefly recall some fundamental notions, to underline from which point of view I think they are important for our reasoning.
The problem of the interrelations between different degrees of knowledge is a very complex one. Classical philosophy has always held that there are different levels of reality and, consequently, different ways to approach and understand it. I think (and I will soon try to show) that this is true. In my recent book Philosophy of Chaos I also proposed a possible scheme for them, which is very similar to Maritain's in his Les degrés du savoir. But, unlike him and most classical philosophers, I do not take such a distinction to be a complete methodological separation. Certainly, philosophy has a peculiar methodology, which cannot be reduced to the scientific one, but this does not mean that philosophical truth is always and completely independent of the discoveries of science. Maybe this is true for metaphysics in the strict sense of the word, but at least in the case of the so-called philosophy of nature the question is more complex (and maybe more chaotic, too): there are, indeed, some propositions which cannot be demonstrated otherwise than in a philosophical way, but there are also some for which a scientific demonstration can be provided as well, directly or indirectly. Chaos theory offers us more than one example of this kind.
Before going on, I would like to clarify three points.
Firstly, I want to point out that the situation here is different from the one hypothesised by Karl Popper when speaking about a "falsifiable metaphysics". In Popper's opinion, indeed, metaphysics cannot be demonstrative: it can become so, but only by transforming its propositions into scientific hypotheses. So, in Popper's view scientific demonstrations take the place of philosophical ones, which are in principle impossible. Here, on the contrary, I am speaking about a possible convergence of some philosophical demonstrations with others coming from science, not about a substitution.
In the second place, I want to explain what I mean when I use the expression "classical philosophy". My main philosophical point of reference is St. Thomas Aquinas, but I am not so "orthodox" as to be simply classified as a "Thomist" (for this reason I prefer to avoid the term altogether). Indeed, there are some questions on which I do not agree with him and for which I think other solutions are required, solutions which in my view are obviously to be found along a line of thinking not far from his own. This is the area I denote with the expression "classical philosophy". It is a theoretical category more than a historical one (even if the two partially overlap), and it is not so easy to define it precisely. However, generally and roughly speaking, we can say that the main features required for a philosopher to be included in it are an analogical (that is, not monistic) conception of reality and a theory of knowledge which avoids the opposite extremes of empiricism and idealism. Anyway, I hope that everything concerning this subject will become clearer as my argument proceeds.
And finally, I want to spend just a few words on the questions I will not treat. As you will see, I tried to construct an argument having its own inner coherence, and therefore I chose the line of reasoning that seemed to allow me to touch the greatest number of the main philosophical questions concerning chaos and complexity without being dispersive. But there are others, too. Obviously, I will not speak about them today, but I want at least to enumerate them, just to give you an idea. As you will see, some are purely philosophical, while others lie on an uncertain boundary between science and philosophy: for instance, in the first place the problem of the best definition of complexity itself or, secondly, the so-called problem of the arrow of time and the real meaning of entropy. Then we have a very problematic cluster of questions, entangled at various degrees with teleology, namely: the final cause in natural beings; the so-called anthropic principle and the meaning of evolution; and, of course, the existence of God. Then, the complex relationships between chaos, complexity and pure randomness, which is turning out to be not as meaningless as we have always thought. And, finally, the deep and not yet really understood implications of all this for our ethics (and bioethics in particular), about which I will say something below, but only in passing.
And now we can go on.

1. Prediction and explanation.

The first philosophical implication of deterministic chaos is a very simple but also very important one, namely: the proprium of science is explanation, not the capability of forecasting.
This has always been a topic of classical philosophy, and also of Evandro Agazzi, who has joined the traditional theory with the modern operational point of view, thus producing a really very powerful synthesis. Maybe it will sound obvious and even trivial to you. If that were the case I would be very happy. But it is simply a fact that it has been more and more neglected during the last hundred years because of a conventionalist attitude, first born with Mach's phenomenalism and then grown with the neopositivist movement, which excludes that science can ever reach the "truth" about its objects. Obviously, if science cannot reach the truth about natural phenomena, it will not be able to explain them either. But there is a problem: science works. How can it work, if its theories can never be said to be true? The current answer is that, even if not true and therefore not explanatory, they are nevertheless able to provide reliable predictions. How it is possible for a theory to make reliable predictions without being at least partially true remains a mystery: nevertheless, such an answer is generally considered satisfactory.
Or rather: it was. Because with chaos theories it is no longer tenable. As John Holland says: "Meteorological conditions never stabilize. They never completely repeat themselves. It is impossible to forecast the weather more than a week ahead. Nevertheless we can understand and explain almost everything we see in the sky. We can identify important phenomena such as warm fronts and cold fronts, jet streams and high-pressure systems. We understand their dynamics, we know how they interact to generate meteorological phenomena on a local and regional scale. In short, we have a real science, but without the possibility of making sure predictions. This is possible because forecasting is not the essence of science, while understanding and explaining are".
I only want to underline that such a discovery implies a real cultural revolution, which, unfortunately, seems to me still very far from happening. Indeed, it is not only a philosophical question, but also - and perhaps above all - a practical one. In fact, the capability of furnishing reliable predictions - and therefore technical efficacy - is nowadays the source of science's social legitimation even more than of its theoretical justification. A science providing us with a deeper comprehension of the world surrounding us, without a proportional possibility of modifying it as we like, is a novelty which sounds very strange in a society such as ours: this is a fact. Certainly, this can be a great chance, because it could lead us to greater humility towards nature and to greater wisdom in using our scientific power. To quote Brian Arthur: "If we think we are a steamboat that can go up the river against the current, we are deceiving ourselves. We are, instead, like the captain of a little paper boat drifting down with the stream. If we try to resist, we shall not get anywhere. On the other hand, if we look at the water's flow, with the feeling of being part of it, knowing it is continuously varying and always leading to new complexities, we shall sometimes be able to dip an oar into the water and push ourselves from one vortex to another". But, without doubt, the current image of our technological society is that of the steamboat: and the risk is that the scientific community, instead of trying to change it into a more realistic one, prefers to accommodate itself to the situation and pass predictions off as reliable even when they are not. What has been happening in genetics in recent years demonstrates that this risk is real and present, and by no means only theoretical. In my opinion (and this is the second philosophical consequence of this fact) a great change in the traditional formulations and priorities of bioethics is required to face it in an adequate way.

2. Levels of knowledge and epistemological anti-reductionism.

The second and more complex implication of chaos theory for philosophy concerns precisely the above-mentioned question of the different levels of reality and knowledge.
Briefly speaking, there are two main positions on this problem. The reductionist one states that everything we see, however complex it is, is in reality "nothing but" a collection of simple parts. If we were able to calculate the behaviour of these simple parts, then we would also be able, in principle, to calculate the behaviour of the whole system. If we do not actually do it, it is only because it would be too complicated in practice, but in principle we can: always. On the other hand, classical philosophy has always maintained that "more is different", to say it with P. W. Anderson, that is, the whole is not only the sum of its parts, but "something more". Therefore, there are different levels of reality and different ways of speaking about them, each having its own language and methodology, in principle irreducible to one another.
Well, all of you know this very well. And all of you also know that it is by now a commonplace that chaos has finally defeated reductionism. What I want to tell you is that this may be true, but not in so simple a way as is commonly believed.
Certainly, the in-principle impossibility of deducing the "emergent" properties of a complex system as a whole from the properties of its parts really demonstrates that epistemological reductionism is untenable. This is a great discovery, but, unfortunately, it is not all we need, because an ontological reductionism remains possible, no longer supported by science, but not confuted by it either. The reason is that the irreducibility of models does not guarantee the irreducibility of their referents. Models, indeed, are useful just because they are not identical with their referents, but only similar: if they were, they would also share their complexity and would therefore need the same time to evolve, and therefore they would not be useful for forecasting their behaviour. Models, then, are simplifications of reality. But this means that in constructing them we cut off something of reality itself: there is no complete correspondence, and so we cannot be sure that all properties of the models (including irreducibility) are also properties of reality itself. And you can see very easily that, as a matter of fact, this correspondence does not always hold. All of you know very well that, as we saw before, it is impossible to forecast the weather more than a week ahead (more or less). But you know, too, that we can construct models of the evolution of the climate over a range of thousands and perhaps millions of years, thanks to the relative independence of the higher-order variables of complex systems. But to do so we have to leave out of consideration some variables that are relevant at the lower scale. These models are then irreducible to one another, but their referents are not: in fact, they are the same referent, that is, the Earth's atmosphere. We cannot extrapolate models for the climate at large scale from models of the local weather, but nature does, since the global evolution of the climate is determined (wholly determined, I want to underline) by the interaction of all the factors determining the local weather. The fact that we are not able to calculate this interaction does not mean that it does not exist, nor that it is not deterministic.
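The point can be made concrete with a toy numerical sketch of my own (an illustration only, not an argument from the paper): in the fully chaotic logistic map, point-by-point prediction breaks down after a few dozen steps, while a higher-order variable such as the long-run average is practically insensitive to the initial condition - the "weather" is unpredictable, the "climate" is not.

# Minimal sketch (my own illustration): the logistic map as a toy "weather vs. climate".
# Point-by-point prediction fails quickly, yet long-run statistics are robust.

def logistic(x):
    return 4.0 * x * (1.0 - x)        # fully chaotic logistic map

def trajectory(x0, n):
    xs, x = [], x0
    for _ in range(n):
        xs.append(x)
        x = logistic(x)
    return xs

a = trajectory(0.2, 100000)
b = trajectory(0.2 + 1e-9, 100000)    # almost the same "initial weather"

# "Weather": after a few dozen steps the two trajectories are completely decorrelated.
print("step 60:", a[60], b[60])

# "Climate": their long-run averages (a higher-order variable) nearly coincide.
print("mean a:", sum(a) / len(a))
print("mean b:", sum(b) / len(b))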
And, as a matter of fact, if you read the books, papers and declarations of famous "chaosologists", you may have many surprises. You may find propositions with a hard reductionist flavour just where you would never have imagined. For instance, I do not know how many of you know it, but the above-mentioned More Is Different begins with these words: "The reductionist hypothesis may still be a topic for controversy among philosophers, but among the great majority of active scientists I think it is accepted without question. The workings of our minds and bodies, and of all the animate matter of which we have any detailed knowledge, are assumed to be controlled by the same set of fundamental laws, which except under certain extreme conditions we feel we know pretty well". And Anderson does agree with this idea, as soon becomes clear when he writes: "It seems inevitable to go on uncritically to what appears at first sight to be an obvious corollary of reductionism: that if everything obeys the same fundamental laws, then the only scientists who are studying anything really fundamental are those who are working on those laws. In practice, that amounts to some astrophysicists, some elementary particle physicists, some logicians and other mathematicians, and few others". It is only this attitude that Anderson opposes, not its premises. Listen to Brian Arthur, the famous economist of the Santa Fe Institute: "I didn't see how it was possible to define a truly emergent behaviour. In a certain sense, all that happens in the universe, including life itself, is already contained in the laws governing the behaviour of quarks". And Morris Mitchell Waldrop, in Complexity: "Living systems are machines", even if "with a kind of organization very different from the one we know". This is not an incoherence on the part of these authors: the incoherence lies entirely in the idea that epistemological irreducibility automatically implies an ontological one as well.

3. The problem of forms and universals.

But this is not the only point of view from which to regard this question. So far, indeed, we have considered the negative aspect of chaotic dynamics. But they also have an important constructive function. As you well know, linear dynamics cannot bring real novelties. But non-linear ones can. Well, I do not know whether you have ever thought about it, but an important aspect of novelty is similarity.
In a linear world there is no place for real similarity: in it things can be either identical or different, and tertium non datur. The reason is that such a world is adequately described by classical geometry, in which the concept of similarity exists, but only in a form which is completely reducible to relations of identity. Let me quote from a typical high-school geometry textbook. Here is the definition of similarity between polygons: "Two polygons with the same number of sides are said to be similar if (considering in each of them the vertices in a suitable order) they have in order equal angles and proportional sides". But proportionality itself had previously been defined in this way: "The proportion among four quantities A, B, C, D is, by definition, nothing but the equality of the two ratios A:B = C:D".
In fractal geometry there is certainly also this kind of "inauthentic" similarity (which is proper to all fractals based on the iteration of forms, as in the famous Koch curve, for instance), but this is not the only one: there is also another, the one proper to "strange attractors" and based on the iteration of operations (as, for example, in the Mandelbrot set or in Lorenz's butterfly attractor), which is by no means reducible to equality, and yet is thoroughly definable in mathematical terms (see Fig. 1).
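What this similarity-without-identity looks like numerically can be suggested by a minimal sketch of my own (an illustration only, using the standard Lorenz parameters and a crude Euler integration): two trajectories started from quite different points never coincide, and yet they trace out the same attractor and therefore share its geometry and its long-run statistics.

# Minimal sketch (my own illustration, not from the paper): two trajectories on the
# Lorenz attractor are never identical, yet "similar" in a precise sense - they share
# the same attractor, hence the same long-run shape and statistics.

def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0/3.0):
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def run(x, y, z, n=50000, skip=1000):
    pts = []
    for i in range(n):
        x, y, z = lorenz_step(x, y, z)
        if i >= skip:                 # discard the transient before the attractor
            pts.append((x, y, z))
    return pts

t1 = run(1.0, 1.0, 1.0)
t2 = run(-5.0, 7.0, 20.0)             # a quite different starting point

# Pointwise the two trajectories differ everywhere...
print(t1[-1], t2[-1])
# ...but the long-run mean of z (a property of the shared attractor) nearly coincides.
print(sum(p[2] for p in t1) / len(t1))
print(sum(p[2] for p in t2) / len(t2))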
But why is similarity so important for the possibility of novelty? The reason is that similarity is, so to speak, a medium between complete identity and absolute difference. Without such a medium we would fall into irrationalism, in the etymological sense of the word: we would simply lack a common "ratio" connecting the extreme terms, as Parmenides first said. The only way to avoid this is to assert the impossibility of becoming (and therefore of novelty). Parmenides, as you know, did exactly that. And the answer to the problem he posed was found by Plato and Aristotle precisely by identifying a third category between being and not-being: the "different" for Plato and "potential being" for Aristotle.
Maybe someone will think that these questions are too abstract to concern our discourse, but in reality they are not. Indeed, mechanistic atomism was historically born for the first time (at least in Western thought) precisely as a philosophical theory, aiming to maintain the core of the Parmenidean conception against the Platonic and Aristotelian ones. The motto of the so-called "pluralist physicists" was, indeed: "If a plurality of beings existed, they should be like the One". Melissus, who was the most intransigent defender of Eleatic orthodoxy, stated it as a paradox, meaning to show the impossibility of multiplicity: but they took him at his word, modifying Parmenides' philosophy in such a way as to keep it from falling into the absurdity of completely denying the existence of becoming. The common idea was to conceive becoming as the result of the movement, interaction, aggregation and disaggregation of simple parts of different natures, all of them indestructible and unchangeable. There were many variations on the theme, with different features ascribed to the elementary parts, but the most simple, coherent and, to put it in modern words, "elegant" of these solutions was the atomism of Leucippus and Democritus. So, becoming was no longer a mere illusion, but the "real" reality remained wholly on the side of being, even if now "crumbled", so to speak, into many pieces, yet always eternal and unchangeable. And the different kinds of change present in nature were not completely denied, but all reduced to the simplest one, namely movement, that is, change of place.
And this is the very root not only of the old philosophical mechanism, but also of the modern one based on Newtonian science: its theoretical assumptions (everything must be explained by the interactions of material points moving through an absolute, empty space) are, indeed, exactly the same.
Obviously, there was a price to be paid, namely nominalism. Nominalism is the thesis that universal terms are "nothing but" conventional labels stuck on a set of objects, without corresponding to any general concept based on a "common nature" of theirs, which cannot exist. Though historically elaborated independently of it, nominalism is a necessary consequence of mechanism.
Indeed, if the only realities are atoms and their configurations, in a linear context there are only two alternatives: either two objects have exactly the same properties, and therefore must be identical, or they do not, and therefore must be simply different. But macroscopic objects are never perfectly identical. Then they cannot have any "common nature", and each one represents, properly speaking, a whole, separate natural kind. Any other classification can then be nothing but a merely conventional one, for merely pragmatic purposes.
But the forms arising from chaos demonstrate the falsity of this theory. Since they are generated by complex non-linear dynamics, each of them represents one of the infinite possible trajectories on a particular strange attractor. Therefore, they are "similar" in the precise, mathematically definable and by no means "fuzzy" or "ambiguous" sense we saw before. Then there is a "common nature" even in non-identical beings: and therefore universal terms have a "fundamentum in re", as St. Thomas said, while nominalism is simply false.
But that is not all. Following a very similar line of thought, Prof. Gianfranco Basti has recently tried to compare the traditional philosophical concept of natural forms with strange attractors themselves, with very interesting (and sometimes also very surprising) outcomes.
Basti starts from the consideration that for Aristotle (and St. Thomas) "in every process of generation of a new physical form within matter (= morphogenesis), the final and the material cause are always, as a matter of fact (= numerically), the same, because the form in this case is nothing but the new final stable and ordered state emerging from the continuous movements (= instability) of the elements forming the material substratum (= material cause) of that new body, through an irreversible process. More precisely, by the notion of natural form Aristotle means: 1) the final stable state of an unstable dynamics; 2) the non-additive totality of the global conditions intrinsic to the dynamics in question, thanks to which this final state is actually reached. The movements in the material substrate (= the elements) are initially induced by the action of an external agent (= the moving cause) making the former stable ordered state (that is, the former form, as Aristotle would say) unstable".
Now, Basti says, "when the final state of the dynamic process derives univocally from the initial moving cause [...], the physical explanation will assume a deductive character", and therefore in this case "the formal-final cause can be reduced, at least conceptually, to the moving one": in this sense "modern Laplacian physics has developed in an operational (mathematical and experimental) form essentially the first part, i.e. the deductive one, of Aristotelian physics" (see Fig. 2a).
But when this does not happen, then "the moving cause is really and conceptually different from the final-formal cause. [...] In this case, indeed, it is necessary for a physicist [...] to identify the other conditions, clearly distinct from the initial moving cause, which have made possible the achievement of that particular outcome". Then, "the natural form simply denotes the specificity of the dynamics of a given (non-linear and unstable) system which is going to stabilize. A specificity which can be reduced neither to the initial external movement (= the moving cause or the received ), nor to the mere sum of the elementary contributions internal to the dynamics [...] So, in this sense the form is what unifies, orders and moves matter; it is a final stable state and at the same time the way to reach it (the attractor state and its basin); it is the outcome of an action by an external agent (= the moving cause)" (see Fig. 2b).
You will agree with me that the convergences are really striking.
But, again, that is not all. If we agree to identify (even if only in principle, because in practice we are very far from being able to do it) a natural kind with a strange attractor and its members with its trajectories, each one describing, according to Basti, the natural form of the corresponding individual, then we can also understand better what Aristotle and Thomas really meant by this expression: not an unchangeable essence always identical to itself, but something very different and much more realistic, too. As Basti says very forcefully, indeed, individual essences must be "thought of as thresholds of quantitative oscillations (alterations) admitted for the individual subject during its existence and depending on a concourse of contingent causes". Obviously, not all these admitted (that is, "present in potentia passiva") oscillations will actually occur, and those that do occur do not make the subject cease to be itself. Only when the oscillations exceed the threshold value (that is, are such as to push the trajectory off the attractor) do we have the destruction of the subject (namely, in the case of living beings, death). But if such oscillations occur during the initial phase of morphogenesis, instead of the destruction of the subject (which in this case is not yet born) we shall have a phenomenon sometimes called "hyperchaos", that is, a noise-induced jump from one basin of attraction to another: in other words, a mutation (see Fig. 2b). And here the way is also open towards an evolutionary theory that is truly Darwinian and at the same time truly Aristotelian!
This way of reasoning is really fascinating to me. So I am very sorry to have to tell you that not even this can solve the problem we confronted in the previous section, that is, ontological reductionism.
The reason is that, to put it with Doyne Farmer, "it is not sufficient to say emergence: the cosmos is full of emergent structures, such as galaxies, clouds and snow crystals, which are only physical objects and have no autonomous life. Something more is needed".
But what can this "something more" be?

4. Intentionality and ontological anti-reductionism.

To firmly establish ontological anti-reductionism (at least for a particular class of complex systems, that is, intelligent beings) we have to proceed along a more indirect way, resorting to the notion of the intellectus agens, better known today under the name of intentionality.
In the tradition of classical philosophy, indeed, from Plato to St. Thomas, passing through Aristotle, Plotinus and all the Christian philosophers, the most reliable proof of the fact that there is a non-material component in man has always been identified with the typically human capability of abstract thinking, and in particular with the capability of knowing the universal. In fact, since all that exists in the material world is individual, it seemed evident that something non-individual could exist only by virtue of the action of something non-material. But in our century this argument has been called into question because of the claim, made by the supporters of the so-called "strong" Artificial Intelligence program, to be able to simulate human intelligence in every respect with a completely mechanical system. Many objections have been advanced over time, but always at the philosophical level. Chaos, once more, has deeply changed the terms of the question.
On the one hand, indeed, the use of non-linear dynamics in AI has made possible an entire set of applications which were simply unthinkable before: Arecchi's adaptive system for the recognition and control of chaotic dynamics, which I shall speak about shortly, is an example of this, and I think it is well known to you. But, at the same time, precisely these successes have pointed out more and more clearly the in-principle limits of this method.
Let me try to explain what I mean. At the beginning of AI, all scientists' efforts were focused on the problem of simulating logical and mathematical reasoning. The reason adduced was that these are the paradigmatic forms of human intelligence: but the real reason was that they are not, and just for this reason they are the simplest to simulate with a mechanical system. At a certain moment, also because of the increasing use of robots in industry, it became clear that this could no longer be sufficient, at least for the "strong" program, which wanted to reproduce not just this or that function of human intelligence, but human intelligence itself. For this purpose computers would have had to be able to perceive their environment and to communicate with it. Then the so-called problem of pattern recognition (which is the first step of intentionality) became central for AI. And it soon became clear, too, that this was a much more difficult problem than the reproduction of formal reasoning. At first the path taken was the traditional one, in which computer programs are seen, roughly speaking, as an implementation of Kantian philosophy: in fact, they analyse data by comparing them with a set of fixed rules, and the programmer's skill consists entirely in reducing their number as much as possible and in optimizing their efficiency.
But this approach is not successful when we have to face non-linear systems (that is, in practice, all real natural systems), because it very soon leads to intractability: enabling the program to anticipate every possible variation requires a number of instructions that grows exponentially with the complexity of the system. As we saw before, indeed, the members of natural kinds are not identical, but only similar, and, even if they really share a common nature, they can sometimes be very different, just like the trajectories on a strange attractor.
For this reason adaptive methodologies have been used more and more frequently for this task; their model is (explicitly or, more often, implicitly) St. Thomas Aquinas's theory of knowledge "by genus and specific difference". The idea is not to furnish the system with a complete set of instructions about the pattern it must be able to recognize (which, as we saw, is impossible), but rather to implement in it instructions enabling it to continuously readapt its inner parameters according to the inputs received from the environment itself. A system like Arecchi's, for instance, works by stroboscopically observing the target dynamics not at fixed time intervals t, but at variable ones, adjusted according to the difference between the first variations, which must tend to zero. In practice, the system makes an observation, then calculates its difference from the previous one (dx, the first variation), and then the difference between this variation and the preceding one (the second variation: d(2)x = dx(n+1) - dx(n)). If d(2)x = 0, the system will make its next observation after the same time interval; but if d(2)x ≠ 0, it will modify the interval: it will increase it if d(2)x > 0 and diminish it if d(2)x < 0.
The trick is that, operating in this way, every dynamics generates a characteristic distribution of the intervals t between successive observations, namely: a point for a periodic dynamics, a straight line for a chaotic one, and a dispersion all over the plane for a random one.
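Just to illustrate the adaptation rule described above, here is a toy reconstruction of my own (not Arecchi's actual apparatus: the test signal, the gain and every name are assumptions of mine): a signal is sampled at variable intervals, and the interval is enlarged or shrunk according to the sign of the second variation, while the resulting intervals are collected as the raw material for the kind of diagnosis just mentioned.

# Toy sketch of the adaptive observation rule (my reconstruction, not Arecchi's system):
# sample a signal at variable intervals tau, widening tau when the second variation is
# positive and narrowing it when negative.

import math

def observe(signal, tau0=0.3, steps=200, gain=0.05):
    t, tau = 0.0, tau0
    x_prev = signal(t)
    t += tau
    x = signal(t)
    dx_prev = x - x_prev              # first variation
    taus = []
    for _ in range(steps):
        x_prev, t = x, t + tau
        x = signal(t)
        dx = x - x_prev
        d2x = dx - dx_prev            # second variation
        if d2x > 0:
            tau *= (1 + gain)         # widen the sampling interval
        elif d2x < 0:
            tau *= (1 - gain)         # narrow it
        # if d2x == 0 the interval is kept unchanged
        dx_prev = dx
        taus.append(tau)
    return taus

# The distribution of these intervals is what differs for periodic, chaotic and
# random signals; here we just print the last few for a periodic test signal.
print(observe(lambda t: math.sin(t))[-5:])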
The discrimination among the different kinds of dynamics, therefore, is obtained here not by virtue of a direct comparison with a set of pre-programmed models, but on the basis of a set of data produced by the interaction of the particular shape of the dynamics itself with the computer-controlled measuring apparatus. And this is why I said previously that the implicit (but in Arecchi's case also explicitly declared) philosophical archetype of these systems is St. Thomas's theory of knowledge rather than the Kantian one: because here every set of data represents a characteristic feature of the corresponding dynamics (its "specific difference") which does not pre-exist, in a Kantian way ("a priori"), in the "subject" (that is, in the computer program), but arises in it at the very moment of its interaction with reality.
So, my friend Arecchi always says that his is a "not-with-fixed-rules" system. And every time I discussed it with him I always objected that this is not true, because in it, too, there are fixed rules, namely: those identifying the parameter to be observed, those controlling the way in which its observation must be modified and, finally, those guiding the evaluation of the data so obtained. As you can see - you will agree with me, I hope - they are not so few, after all! And furthermore each of them had to be chosen and set up very carefully - during a study many years long - to reach the proposed goal. Therefore I was very happy when I read, in the paper devoted to this subject in his recent Lexicon of Complexity, that "the main difference of an adaptive M apparatus with respect to a fixed one is that not all parameters are prepared by preliminary learning sessions": which is just what I have always been maintaining. Not all rules are fixed: this is correct. But many are. And - this is the point! - if they were not, then the system could not function, because it would be, so to speak, a pure syntax without any semantics.
So, we have the following paradox, which we could name "The machine-not-like-a-machine paradox":

"The less a mechanical system is like a machine, the more it will be efficient in simulating human mind. But a mechanical system has to be like a machine in some degree if it has to be efficient in some degree."

Is this an effective demonstration that human intelligence is a non-material function requiring a non-material component in human beings? The correct answer, in my opinion, cannot be clear-cut.
Certainly, I think we can now be sure that the human mind is not like computer software. In fact, since even the "not-with-fixed-rules" systems require, in reality, fixed rules and models, it follows that every computer program can work only to a limited extent. But the human mind has no such limitation. Therefore the human mind, at least to some extent, does not work like a computer.
This is still not a demonstration that it is really emergent in a different and stronger way than Farmer's crystals and clouds, namely, as John Searle says, that it is "able to cause effects we cannot explain only by analysing neurones' behaviour"; but our analysis provides us with strong and convergent circumstantial evidence in this sense, even if not a positive and conclusive proof.
Firstly, this hypothesis seems to be the best explanation of the above paradox. Indeed, if intentionality is not completely analysable in physical terms, then it is only logical, and by no means strange (as it would be in the opposite case), that a mechanical system not only cannot entirely simulate it, but, on the other hand, can simulate it the better, the less "mechanical" it turns out to be.
In the second place, it is at least questionable that the problem of models concerns only the computational approach and not the materialistic approach tout court. We can, indeed, admit that neurones have other causal powers besides the computational ones, but how could the evaluation of data happen otherwise than through some kind of comparison with some kind of physical model, if everything is to be explained on a merely material plane? But in this case we have again the same problem we saw in the computational approach, that is: are these models preformed in us or not? If they were, they would have to be infinite in number, which is impossible. But if they are not, then it means that we construct them. But it is very difficult even to hypothesise how such a process of construction could happen without again resorting to some sort of models, since the most efficient material process we have been able to reproduce to date needs them.
Finally, the various attempts to simulate intentionality have shown a scale of increasing difficulty, going from "classical" objects to "complex" ones and reaching its maximum with the problem of constructing a system able to recognize abstract concepts in the strict sense of the term. Here the situation is even dramatic, because there are simply no ideas at all about how it could be done. The adaptive strategy, indeed, is not only limited, but completely useless for such a task, because it needs physical patterns to work: and abstract concepts have none (obviously I am referring, as I said, to abstract concepts in the strict sense of the word, such as, for instance, "good", "law", "philosopher", "science", and so on, and not to those corresponding to natural kinds of objects or living beings, which can be associated with physical patterns - even if without coinciding with them, but this would be too long a question to treat here). Maybe you could object that abstract concepts are communicated through language, which has recognizable patterns. But then you would forget that concepts are only communicated linguistically: the first time a concept is thought there is not yet a word to express it, and therefore this operation cannot have a pattern to rely on. In any case, any analysis of the structure of a symbolic sequence can only ever tell us that it (presumably) has a meaning, and by no means what that meaning is. This is a conclusion that holds generally, and not only in the case of human language, but a complete discussion would be too long now. However, in the case of language the fact is so evident that any demonstration is useless: nobody but a fool could ever think of understanding a foreign language only on the basis of a careful analysis of the sequence of its letters. But if meaning cannot be deduced from the structure of the symbolic sequence, and you nevertheless wanted to maintain a materialistic explanation of the phenomenon, how could you ever do it? Maybe by trying to feel what taste it has?
The fact of the matter is that the computational strategy was not chosen at random for this task, but for the very reason that it seemed (and despite everything still seems) the most promising one, or, more exactly, the only one. Therefore its failure in simulating intentionality is to be evaluated in the light of its successes (which are many and great) and therefore cannot be considered a simple accident among others. In other words: maybe (or rather, certainly) the human mind is not computer software; but in that case what physical object could it ever be?
In any case this is, as I said before, only circumstantial evidence and not a conclusive proof. But it is very noteworthy that all today's scientific considerations converge with one another, and also with the traditional philosophical arguments, to indicate that the solution of the problem of intentionality - and therefore of the human mind (or soul, if you prefer) - is to be sought outside the purely material dimension.

5. On the nature of mathematics.

Before ending, I would like to spend just a few words on perhaps the only classical problem of philosophy that scientists, too, have always considered very carefully and with respect, namely: what is mathematics?
To answer this eternal question in an acceptable way would take at least an entire book (which, indeed, I am trying to write), and not a little paragraph in a paper like this. So here I shall simply tell you what my theory is and what discovery it arose from. In a nutshell, the core of my idea is that mathematics is a natural science just like all the others, only the most general of all, and one in which the experimental part is constituted by calculation.
And the crucial discovery was exactly this last point, namely the material nature of calculation.
This is not a complete novelty, indeed. Maybe you know that in recent years some mathematicians (Gregory Chaitin among them) have begun to speak of "experimental mathematics", precisely because when we deal with chaos and fractals we can no longer solve the equations once and for all, but only study them numerically, case by case, in a rather experimental way. But in their view this is supposed to be only a branch of mathematics. On the contrary, I say this is a general and fundamental feature of the whole of mathematics, which in these cases only becomes more evident, but is always present.
Indeed, this was already implicit in Chaitin's own views. In fact, if you consider, for instance, the Mandelbrot set, you find that the equation generating it (z → z² + c), if applied to the real numbers instead of the complex ones, generates only a well-known, very trivial segment, going from the point corresponding to -2 to that corresponding to 1/4 (see Fig. 3). But it would be absurd for something to change its proper nature merely by changing the objects to which it applies. So, if solving the Mandelbrot equation when it is applied to complex numbers requires an experimental procedure, then this is a feature of the Mandelbrot equation as such. But the Mandelbrot equation is a very ordinary one, formally indistinguishable from the others. Then it means that solving equations is, in general, an experimental task.
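Here is a minimal numerical "experiment" of my own along these lines (an illustration only; the iteration count and the escape bound are conventional choices): scanning real values of c and checking whether the orbit of 0 under z → z² + c stays bounded recovers, case by case, precisely that trivial segment of the real line.

# Minimal numerical "experiment" (my own sketch): for real c, iterate z -> z^2 + c
# starting from z = 0 and check whether the orbit stays bounded. The bounded values
# of c fill a plain segment of the real line, roughly from -2 to 1/4.

def stays_bounded(c, max_iter=1000, bound=2.0):
    z = 0.0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:            # escape criterion: the orbit diverges
            return False
    return True

cs = [i / 1000.0 for i in range(-2500, 1001)]   # scan c from -2.5 to 1.0
inside = [c for c in cs if stays_bounded(c)]
print(min(inside), max(inside))       # approximately -2.0 and 0.25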
This idea struck me hard. Then I remembered Percy Williams Bridgman, and everything became clear in my mind. Bridgman was, as you know, the main exponent of operationalism, that is, a current of thought maintaining that everything in science can be reduced to operations. When his opponents replied that he did not take account of mathematics, Bridgman answered that in mathematics, too, there are "operations with paper and pencil". Well, I am certainly not an operationalist, but for other reasons: on this point, in fact, Bridgman was really right. Maybe even more than he himself thought.
Indeed, Bridgman's reply was not taken seriously, because by "operations" he had always meant, up to that moment, material operations, and those "with paper and pencil" seemed to be anything but that. Yet, on the contrary, that is exactly what they are.
It is a tragic truth, well known to every schoolchild, that it is impossible to study mathematics without knowing the multiplication table. And it is another tragic truth that it has to be learned by heart: it is impossible to grasp the right results just by reasoning about it.
So the matter stands. But why?
The fact is that, to know the right result of a mathematical operation, we have to count. It is the material operation of counting which is the basis of the whole of mathematics. And all mathematical operations are nothing but methods to make it faster: using the decimal system, for instance, we count the number of tens or of hundreds and so on, instead of counting the units directly. Certainly, we do not always count in the same way: when we have to make a sum we count in the most natural way, always going forward, while to make a subtraction we must first go forward and then go back. To perform even more complex operations (a square root, for instance) we have to obey even more complex rules. But the nature of what we do is still the same, that is: counting a set of symbols according to a certain set of rules. Therefore, without any material items to be counted, we would simply be unable to do anything in mathematics. We forget this because, when we deal with numbers that are not too big, we can make calculations in our mind alone. But we do it by imagining the symbols representing the numbers, just as we do, for instance, in the so-called Gedankenexperiment, in which we understand the outcome of a material operation without actually performing it. But the fact that something is imagined does not mean that it is not material in itself. And it immediately becomes clear that this is the case if we try to do the same with numbers only a little bigger, which is impossible without an external material support. Big numbers, indeed, are big not only in our mind: they take up space.
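As a small illustration of this claim (a sketch of my own, not an argument from the paper; the function names are mine), the ordinary operations can all be rebuilt out of the single act of stepping through a count, one unit at a time.

# Sketch (my own illustration): every arithmetical operation rebuilt out of the
# single material act of counting, i.e. stepping through a row of tokens one by one.

def count_up(n, steps):
    for _ in range(steps):            # "going forward" one token at a time
        n = n + 1
    return n

def count_down(n, steps):
    for _ in range(steps):            # "going back" one token at a time
        n = n - 1
    return n

def add(a, b):
    return count_up(a, b)             # a sum is counting on from a, b more times

def subtract(a, b):
    return count_down(a, b)           # a difference is counted out by stepping back

def multiply(a, b):
    total = 0
    for _ in range(b):                # a product is just repeated counting
        total = count_up(total, a)
    return total

print(add(7, 5), subtract(12, 5), multiply(7, 5))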
This has become particularly conspicuous with computers, in which every operation is, without any doubt, a material one, because a computer is nothing but a physical system evolving from one state to another. It is precisely for this reason that the supporters of "strong" AI claimed to have demonstrated that the human mind is a completely physical phenomenon, since they were able to reproduce with such a system its (supposed) higher functions, i.e. deductive mathematical and logical operations. I said earlier that, on the contrary, these are the simplest functions to simulate in this way. And, if I am right, now we also know why: they are the simplest to simulate with a mechanical system because they are mechanical in their deepest essence.
And thus the circle is closed.

6. Conclusions.

Finally, I want to clarify a point. I am very far from being completely satisfied with my analysis, because it is, so to speak, a minimalist one. I am not a Cartesian, and I am personally sure that there are more than two different ontological levels in reality, and in man himself as well. A great problem, for instance, is whether perception is completely explainable in physical terms: personally I have many doubts. On the other hand, I am not so sure that, in the light of the most recent discoveries, it is still possible to consider intelligence a peculiar and exclusive faculty of mankind (even if it is clear that mankind possesses it in an eminent way, to speak with the medieval philosophers). More generally, it seems to me that the traditional demarcation lines are no longer completely adequate to the present state of our knowledge, and perhaps the whole question needs to be put in a partially new way. But I also think it cannot be solved once and for all with some kind of general demonstration, as in the case of the levels of knowledge. When we have to deal with reality itself the problems are always more complex than when we have only to consider our way of representing it, because it is not we who made it: therefore I think it highly improbable that a global solution can be reached on the basis of a single methodology such as chaos theory, however general and powerful it may be. I am currently working on this, and I shall be very happy for any suggestion you can give me. I hope to be able to tell you something more complete and more satisfactory sooner or later. But today's paper was devoted only to the philosophical implications of chaos. And, at least for the moment, I assure you that this has been enough for me. And, I think, for you as well.

BIBLIOGRAPHY


Agazzi Evandro
[1985] La questione del realismo scientifico, in C. Mangione (a cura di), Scienza e filosofia. Saggi in onore di Ludovico Geymonat, Garzanti, Milano.
Anderson Philip W. [1972] More Is Different, in "Science", vol. 177, n. 4047, pp. 393-396.
Arecchi Fortunato Tito
[1985] Caos e ordine nella fisica, in "Il nuovo saggiatore", vol. 1, n. 3, Bologna, pp. 35-51.
Arecchi Fortunato Tito, Arecchi Iva
[1990] I simboli e la realtà, Jaca Book, Milano.
Arecchi Fortunato Tito, Basti Gianfranco, Boccaletti Stefano, Perrone Antonio
[1994] Adaptive recognition of a chaotic dynamics, in "Europhysics Letters", 27.327 (1994)
Arecchi Fortunato Tito, Farini Alessandro
[1996] Lexicon of complexity, Studio Editoriale Fiorentino, Firenze.
Aristotele
[1] Metafisica; trad. it. 1968, 2 voll., Loffredo, Napoli.
[2] Organon; trad. it. 1970, 3 voll., Laterza, Bari.
[3] De anima; trad. it. 1979, Loffredo, Napoli.
[4] Fisica; trad. it. in Opere, vol. 3, Laterza, Roma-Bari.
[5] De generatione et corruptione; trad. it. 1976, Loffredo, Napoli.
Basti Gianfranco
[1991] Il rapporto mente-corpo nella filosofia e nella scienza, EDS, Bologna.
[1992] Le radici forti del pensiero debole: nihilismo e fondamenti della matematica (lezioni tenute presso la Pontificia Università Gregoriana di Roma, inedito).
Bennett Charles H.
[1985], Dissipation, Information, Computational Complexity and the Definition of Organization, in David Pines (a cura di) [1987], Emerging Syntheses in Science, Santa Fe Institute Publications, pp. 215-231.
Casati Giulio (a cura di)
[1991] Il caos. Le leggi del disordine, Le Scienze S.p.A. Editore, Milano.
Chaitin Gregory J.
[1988] La casualità in aritmetica, in Casati (a cura di) [1991], pp. 193-197.
[1994] Randomness and complexity in pure mathematics, in "International Journal of Bifurcation and Chaos", Vol. 4, n. 1 (1994), pp. 3-15.
Eldredge Niles, Gould Stephen Jay
[1972] Punctuated equilibria: an alternative to phyletic gradualism, in Schopf (a cura di) [1972], pp. 82-115.
Enriques Federigo, Amaldi Ugo
[1978] Elementi di geometria, 2 voll., Zanichelli, Bologna.
Gleick James
[1987] Chaos, Viking Penguin Inc., New York; trad. it. 1989, Caos, Rizzoli, Milano.
Gould Stephen Jay
[1980] The panda's thumb; trad. it. 1983, Il pollice del panda. Riflessioni sulla storia naturale, Editori Riuniti, Roma.
Gould Stephen Jay, Eldredge Niles
[1977] Punctuated equilibria: the tempo and mode of evolution reconsidered, in "Paleobiology" 3, pp. 115-151.
Kant Immanuel
[1781] Kritik der reinen Vernunft, Riga; trad. it. 1963, Critica della ragion pura, 2 voll., Laterza, Bari.
Laplace Pierre-Simon de
[1776] Essai philosophique sur les probabilités, Paris; trad. it. 1967 in Opere, pp. 241-404, UTET, Torino.
Lewontin Richard
[1992] Polemiche sul Genoma Umano, in "La Rivista dei Libri", anno II, n. 10, pp. 7-10 e n. 11 pp. 6-9.
Lorenz Edward
[1963a] Deterministic Nonperiodic Flow, in "Journal of the Atmospheric Sciences", 20, 1963, pp. 130-141.
[1963b] The Mechanics of Vacillation, in "Journal of the Atmospheric Sciences", 20, 1963, pp. 448-464.
[1964] The Problem of Deducing the Climate from the Governing Equations, in "Tellus", 16, 1964, pp. 1-11.
Mach Ernst
[1883] Die Mechanik in ihrer Entwickelung historisch-kritisch dargestellt, F. A. Brockhaus, Leipzig; trad. it. 1968, La meccanica nel suo sviluppo storico-critico, Boringhieri, Torino.
MacKay D.M.
[1980] The interdependence of mind and brain, in "Neuroscience" n. 5 (1980), pp. 1389-1391.
Mandelbrot Benoit
[1967] How Long is the Coast of Britain?, in "Science", n. 156 (1967), p. 156 e in Mandelbrot [1983].
[1983] The Fractal Geometry of Nature, W. H. Freeman, New York; trad. it. 1987, La geometria della natura, Imago, Milano.
Maritain Jacques
[1932] Distinguer pour unir ou Les degrés du savoir, Desclée de Brouwer, Paris; trad. it. (della 6a edizione [1959], riveduta e aumentata) 1974, Distinguere per unire. I gradi del sapere, Morcelliana, Brescia.
Musso Paolo
[1997] Filosofia del caos, Angeli, Milano.
Platone
[1] Dialoghi; Trad. it. 1991 in Tutti gli scritti, Rusconi, Milano.
Popper Karl Raimund
[1934] Logik der Forschung, Springer, Wien (pubblicata con la data di stampa del 1935); trad. it. 1970, Logica della scoperta scientifica, Einaudi, Torino.
[1963] Conjectures and Refutations. The Growth of Scientific Knowledge, Routledge & Kegan Paul, London; trad.it. 1972, Congetture e confutazioni. Lo sviluppo della conoscenza scientifica, Il Mulino, Bologna.
Searle John
[1992] The Rediscovery of the Mind, Massachusetts Institute of Technology Press, Cambridge (Mass.); trad. it. 1994, La riscoperta della mente, Bollati Boringhieri, Torino.
Tommaso d'Aquino
[1] De principiis naturae; trad. it. 1982, in Tommaso d'Aquino, L'uomo e l'universo, Rusconi, Milano.
[2] De ente et essentia; trad. it. 1989, in Tommaso d'Aquino, Opuscoli filosofici, Città Nuova, Roma.
[3] Summa theologiae; trad. it. 1949-1992, La somma teologica, 35 voll., Salani, Firenze.
Waldrop Morris Mitchell
[1992] Complexity: The Emerging Science at the Edge of Order and Chaos, Simon & Schuster, New York; trad. it. 1995, Complessità: uomini e idee al confine tra ordine e caos, Instar Libri, Torino.