Representational




Representational - Non-representational
Conscious - Unconscious
Symbolic - Pre-symbolic
Formal - Dynamic
Explicit - Implicit
Narrow - Wide
Focused - Distributed
Spectator - Participant




Connectionist networks cannot represent '...higher order relations. This representational poverty leads to an incapacity for generalisation to higher order relations since a network can only learn what it can represent.'

The first objection merely states a commitment to a strong theory of representation. It is true that networks do not 'represent higher order relations', but that is only a problem if representation is insisted upon.

This commitment is made explicitly by Chandrasekaran et al. (1988). For them there is an abstract level of 'information-processing' which is higher than any specific realisation thereof, whether that realisation be symbolic or connectionist. It is at this abstract level that the 'explanatory power' resides.

Like Fodor and Pylyshyn (1988) and Lloyd (1989), they claim that connectionists remain committed to representation, and the fact that this representation is 'distributed' makes no difference to anything. I will argue in detail […] that distributed representation makes all the difference; that, in fact, it undermines the whole concept of representation. The fact that connectionist networks 'cannot represent' becomes a distinct advantage.

The second objection reflects the urge of symbolic modellers to reduce the domain to be modelled to a finite number of explicit principles using logical inference. We have already argued that, when dealing with true complexity, this is often not possible. Connectionist models can implement aspects of complexity without performing this reduction. That is their strength.

One cannot make use of a priori domain knowledge because one often does not know which aspects of the domain are relevant. This also largely answers the third objection, i.e. that the connectionist model is too general and does not reflect the 'structure' of the problem. The structure cannot be reflected, precisely because it cannot be made explicit in symbolic terms. The fact that the same network can be taught to perform 'very different' tasks is not a weakness, but rather an indication of the power of this approach.

[Paul Cilliers]
Complexity and Postmodernism, p.20





In a representational system, the representation and that which is being represented operate at different logical levels; they belong to different categories.

This is not the case with a neural network. There is no difference in kind between the sensory traces entering the network and the traces that interact inside the network. In a certain sense we have the outside repeated, or reiterated, on the inside, thereby deconstructing the distinction between outside and inside.

The gap between the two has collapsed.

[Paul Cilliers]
Complexity and Postmodernism, p.83





Models based on formal symbol systems have the classical theory of representation built in.

The main problem with representation lies in the relationship between the symbols and their meaning. There are two ways of establishing this relationship. One can either claim that the relationship is 'natural', determined in some a priori fashion, or one has to settle for an external designer determining this relationship.

The first option is a strongly metaphysical one since it claims that meaning is determined by some kind of fundamental, all-embracing law. Such an approach has to be excluded here because the main thrust of my argument is that an understanding of complexity should be developed without recourse to metaphysical cornerstones.

The second option - where the relationships are the result of the decisions made by a designer - is acceptable as long as an active, external agent can be assumed to be present. When a well-framed system is being modelled on a computer by a well-informed modeller, this could well be the case. However, when we deal with autonomous, self-organising systems with a high degree of complexity, the second option becomes metaphysical as well.

As soon as we drop the notion of representation, these metaphysical problems disappear.

[Paul Cilliers]
Complexity and Postmodernism, p.86




A last, and a most important, principle requires that the memory of the system be stored in a distributed fashion. The importance of memory has already been stated, and in neural networks the connection strengths, or weights, perform the function of storing information.

Specific weights cannot stand for specific bits of symbolic information since this would imply that the information should be interpretable at the level of that weight. Since each weight only has access to local levels of activity, it cannot perform the more complex function of standing for a concept. Complex concepts would involve a pattern of activity over several units.

Weights store information at a sub-symbolic level, as traces of memory.

The fact that information is distributed over many units not only increases the robustness of the system, but makes the association of different patterns an inherent characteristic of the system - they overlap in principle.

[Paul Cilliers]
Complexity and Postmodernism, p.95
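The distributed-storage principle in the passage above can be sketched computationally. What follows is my own minimal illustration (not Cilliers'): a small Hebbian associative memory in the Hopfield style, in which every stored pattern is spread across the entire weight matrix. No individual weight "stands for" a pattern or concept, every weight mixes traces of all stored patterns, and yet the network reconstructs a whole pattern from a partial cue. The pattern values, update rule, and network size are all assumptions chosen for the sketch.

```python
# A minimal sketch (assumed example, not from Cilliers): distributed
# memory in a Hebbian associative network. Units take values +1/-1.

def train(patterns, n):
    # Hebbian outer-product rule: each weight w[i][j] accumulates the
    # correlation of units i and j over EVERY stored pattern, so each
    # weight carries overlapping traces of all of them - no weight is
    # interpretable on its own.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, cue, n, steps=5):
    # Synchronous update: each unit takes the sign of its weighted
    # input. The list comprehension builds the new state from the old
    # one, so all units update together.
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

n = 8
a = [1, 1, 1, 1, -1, -1, -1, -1]   # two stored "memory traces"
b = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([a, b], n)

noisy_a = list(a)
noisy_a[0] = -1                     # corrupt one unit of pattern a
print(recall(w, noisy_a, n) == a)   # the full pattern is reconstructed
```

The point of the sketch is the one Cilliers makes: the "memory" of pattern `a` is not located anywhere in particular. Deleting or inspecting any single weight reveals nothing about `a` or `b` individually, because both patterns overlap in the same weights, which is also what makes association and completion from partial cues intrinsic to the system.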




I have already argued that a pairing off of words and objects in a direct fashion—classical mimetic representation—is not acceptable. It does not give enough credit to the fact that language is a complex system.

It assumes the existence of an objective, external viewpoint and begs the question as to the identity of the agent that performs this ‘pairing off’.

The relationship between language and the world is neither direct and transparent nor objectively controlled, but there is such a relationship—without it natural language would not exist. By understanding language as a self-organising system, we can start sketching a more sophisticated theory of this relationship.

[Paul Cilliers]
Complexity and Postmodernism, p.125




In an essay entitled ‘The Ontological Status of Observables: In Praise of Superempirical Virtues’ (139–151) [Churchland] positions himself as a realist asserting ‘that global excellence of theory is the ultimate measure of truth and ontology at all levels of cognition, even at the observational level.’

His realism is more circumspect than may be deduced from this passage, but he remains committed to the idea that there is a world that exists independent of our ‘cognition’, and that we construct representations of this world.

Since different representations are possible, they have to be compared, and the best selected. The selection cannot be made on the basis of ‘empirical facts’, but ‘must be made on superempirical grounds such as relative coherence, simplicity, and explanatory unity.’

It should be clear that from this position he is not about to explore contingency, complexity and diversity.

[Paul Cilliers]
Complexity and Postmodernism, p.133




There's a contradiction between the Darwinian notion of reality and the Newtonian, or Cartesian, idea of reality because there's a reality that has something to do with this notion of ‘fit’ - of relationship between the subjective and the objective.

The Newtonian and the Cartesian are formal systems. Darwin's theory of evolution is the first significant and important dynamical systems theory within science, in which the self-organization of this system and its coupling to the world are constitutive of the kind of entity it is.

[Jordan B. Peterson]
‘A Conversation So Intense It Might Transcend Time and Space | John Vervaeke | EP 321’, YouTube


