September 09, 2010

On distributed morality and cognitive extension

Artifacts may literally extend our cognition, not because they are inside or outside the cranium, but because that very distinction is contingent. The same argument can be extended so that external objects are rendered bearers of moral responsibility, if and only if they take part in an extended system that is assembled to solve a given cognitive task with harmful or benevolent consequences.

Imagine a weapon of the future that is triggered by a nano-device suitably implanted in the appropriate cortex. In this case, what would the bounds of the locus of moral responsibility be? Apparently, the distinction between "being moral" and "acting morally" is not a trivial one. Distributed Morality is based on a pragmatic perspective, which is why it focuses on the consequences and purposes of a cognitive agency's action. The distributed approach to human moral action has clear methodological and experimental commitments, because it defends the thesis that an "is" need not subjugate itself to an "ought", as conceived of by a rationalist philosophical tradition. Consequently, experimental-philosophical moral responsibility does not put the emphasis on the specification of some possible intrinsic property or character that a cognitive agent could have. In this case, it is more appropriate to speak of "degrees of moral responsibility" that could be attributed to the actions of the several parts of an extended cognitive system, that is, a system that comprises aspects of both the internal and external dimensions of the human organism. Here, one might also get puzzled about the limits of the locus of moral responsibility in an extended system, on the assumption that the distribution of such responsibility over an extended agency is possible. The problem that underlies this apprehension can be analyzed as a function of two notions: "degrees of representationality" and the so-called "parity principle."

In the first case, we would have to abandon the representationalist view that a mind is unable to directly access the outside world. This traditional view defends the possible existence of a representational system that reliably mediates between the mind and the world. Representational realism is what determines the reliability of such a system, where what is represented reproduces or replaces parts of the outside world, supposedly by means of nomologically regulated mechanisms. Such mechanisms, of course, are unknown. Alternatively, the notion of degrees of representationality is said to arise from the convergence of representationalism and constructivism, two typically conflicting traditions in philosophy and the social sciences. In this view, representing does not stand for "replacing"; rather, it stands for "re-presenting." The main point - in line not only with certain anti-representationalist theories but also with experimental approaches such as certain studies on active perception - is that representations of the external world are constructed at the same time that knowledge advances. As a result, one of the (supposedly conclusive) reasons to abandon the traditional notion of representation is simply that there is really nothing to represent "out there." Everything exists in the here and now, and emerges as the result of just "being there". This is a notion that would conform to the so-called available "reliable evidence" from the diverse experimental field of active cognition.

In the second case, what the "parity principle" defends is that any external process or component that happens to participate in some kind of task calling for intelligent adaptive behavior can very well be considered cognitive, as long as, were it to occur in the head, we would not hesitate to accept it as cognitive.

Assuming the above, one might be prepared to accept the idea of a distributed cognitive agency, where (except for important aspects of advanced cognition) we could do without the classical representational view that confines the boundaries of cognitive agency to an intracranialist realm. Thus, if the processes responsible for the intelligent action of agents like us necessarily occur within a coupled system (i.e., a system in which brain, body, and world intimately interact with one another), the tendency to confine the responsibility for malicious or benevolent actions to a skull-bound agency would amount to nothing more than an individualist prejudice.

Finally, we should say something about the emphasis on the "consequentialism" of moral action. This vision is a pragmatic reaction against an intrinsicalist philosophical understanding of mental content, especially one of a materialistic character. While the latter puts certain "conditions of ontological individuation" at the center of an intrinsicalist determination of mental content, the former seems to hinge on a liberalized interpretation of Peirce's "pragmatist maxim", understood as a rule for clarifying the content of hypotheses depending on (or by tracing) their "practical consequences".
