Imagine that we have created a society of robots. They would lack freedom of the will in the traditional sense, because they are causally determined automata. But they would have conscious models of themselves and of other automata in their environment, and these models would let them interact with others and control their own behavior. Imagine that we now add two features to their internal self- and other-person models: first, the erroneous belief that they (and everybody else) are responsible for their own actions; second, an "ideal observer" representing group interests, such as rules of fairness for reciprocal, altruistic interactions. What would this change? Would our robots develop new causal properties just by falsely believing in their own freedom of the will? The answer is yes: moral aggression would become possible, because an entirely new level of competition would emerge -- competition about who fulfills the interests of the group best, who gains moral merit, and so on. You could now raise your own social status by accusing others of being immoral or by being an effective hypocrite. A whole new level of optimizing behavior would emerge. Given the right boundary conditions, the complexity of our experimental robot society would suddenly explode, though its internal coherence would remain. It could now begin to evolve on a new level. The practice of ascribing moral responsibility -- even if based on delusional PSMs [Phenomenal Self Models] -- would create a decisive, and very real, functional property: Group interests would become more effective in each robot's behavior. The price for egotism would rise. What would happen to our experimental robot society if we then downgraded its members' self-models to the previous version -- perhaps by bestowing insight?
[...]
Neuroscientists like to speak of "action goals", processes of "motor selection", and the "specification of movements" in the brain. As a philosopher (and with all due respect), I must say that this, too, is conceptual nonsense. If one takes the scientific worldview seriously, no such things as goals exist, and there is nobody who selects or specifies an action. There is no process of "selection" at all; all we really have is dynamic self-organization. Moreover, the information-processing taking place in the human brain is not even a rule-based kind of processing. Ultimately, it follows the laws of physics. The brain is best described as a complex system continuously trying to settle into a stable state, generating order out of chaos.
According to the purely physical background assumptions of science, nothing in the universe possesses an inherent value or is a goal in itself; physical objects and processes are all there is. That seems to be the point of the rigorous reductionist approach -- and exactly what beings with self-models like ours cannot bring themselves to believe. Of course, there can be goal representations in the brains of biological organisms, but ultimately -- if neuroscience is to take its own background assumptions seriously -- they refer to nothing. Survival, fitness, well-being, and security as such are not values or goals in the true sense of either word; obviously only those organisms that internally represented them as goals survived. But the tendency to speak about the "goals" of an organism or a brain makes neuroscientists overlook how strong their very own background assumptions are. We can now begin to see that even hardheaded scientists sometimes underestimate how radical a naturalistic combination of neuroscience and evolutionary theory could be: It could turn us into beings that maximized their overall fitness by beginning to hallucinate goals.
I am not claiming that this is the true story, the whole story, or the final story. I am only pointing out what seems to follow from the discoveries of neuroscience, and how these discoveries conflict with our conscious self-model. Subpersonal self-organization in the brain simply has nothing to do with what we mean by "selection". Of course, complex and flexible behaviors caused by inner images of "goals" still exist, and we may also continue to call these behaviors "actions". But even if actions, in this sense, continue to be a part of the picture, we may learn that agents do not -- that is, there is no entity doing the acting.
Thomas Metzinger, The Ego Tunnel, 2009, pp. 129-131