Mental Representations in a Mechanical World: Ruhr-Bochum 28/29th November 2019

These are my notes from the conference/workshop, including the questions asked by the audience. Hope this helps people…it’s certainly given me a lot to think about. In these notes, generally, italics indicate my voice and not the speaker’s.

List of speakers:

  • Marcin Miłkowski Cognition is (mostly) representational. So what?
  • Matej Kohár What must we do to explain cognitive phenomena?
  • Dimitri Coelho Mollo The Teleocomputational View of Representation
  • Stephen Mann The mechanics of representation: teleosemantics meets predictive processing
  • Danaja Rutar, Wanja Wiese, & Johan Kwisthout Representational Gradation in Predictive Processing
  • Carrie Figdor Mental Representation and Mental Evaluation
  • Jolien Francken Conceptual issues in cognitive neuroscience
  • Samuel D. Taylor A third way to think about mental representation
  • Joe Dewhurst Folk psychological representations and neural mechanisms
  • Beate Krickel Unconscious mental representations in a mechanical world: a challenge for the anti-representationalist?
  • Karina Vold Multiply realised representations…with non-derived content

Marcin Miłkowski: Cognition is (mostly) representational. So what?

Cognition = computation + representation. (For adaptive control of behaviour.) Not an explanatory definition, you can’t predict stuff with it.

Representational mechanism: Cognitive systems can represent as long as they have representational mechanisms. Without this there is not a complete explanation. Representation must be part of a bigger system, with relevant mechanisms…To explain the mechanisms is to elucidate the causal structure of the system-wide capacity (function).

Representational mechanisms have three functions:
They refer to the targets of the representations
They identify the characteristics of the target
They evaluate the epistemic value of information about the target (adaptive control?)

#3 is particular to psychology and cognitive science: without identifying the epistemic value, it is hard to justify why the behaviour of the system really depends on representation.

The evaluation mechanism helps detect that the system is in error when trying to identify the target. Failure to identify the target is possible, and this is one of the most important features of representations. (It’s referential opacity, in addition to system-detectable error.) This is a blueprint for a mechanism, not a description.

Representational mechanisms must be present in any adaptive control system, due to the Good Regulator Theorem (Conant & Ashby 1970): no optimal control without representation, because representation provides semantic information for manipulation.

But cognition need not be optimal, and you don’t need optimal control to be adaptive – being ‘satisfactory’ is enough.

Something about logons, n bits. No information in a thing means no logons??? This tells us about the ability of something to store information.

Control requires representation – representation should be rich enough to deal with the n bits of information that we’re supposed to manipulate.

“No miracles argument” No successful action without truth. Systematic successful action without truth amounts to miracles happening systematically. Explaining away representations is really difficult on this view.

The optimality assumption is wrong in certain cases because computational tractability requires, for example, maps that are not at a 1:1 scale.

“Entity realism about representational mechanisms is forced by various interventions” (cf. Thomson & Piccinini 2018). If you can observe it, then it’s…real? The point is that you should be a realist about mental representation. There is nothing special about mental representations.

Ok, so the definition of cognition above, when unpacked, gets us some interesting stuff, but not a huge amount.

He thinks: Unified scientific representations are:

Universal in scope (for a given domain)???
Parsimonious
Organically connected (or…non-monstrous…???) A single theory, not multiple theories mashed together…Large scope is cheating if they are monstrous.
Systematic

Explaining how computation and representation occur in cognitive mechanisms is poised to produce similar explanatory patterns which display these properties.
But it may still not explain everything.

As Pylyshyn said: content is poised to offer more general explanations than those that appeal merely to behavioural or physiological factors. Difference in satisfaction conditions (for explanations) makes the causal difference.

The point is to argue that representation mechanisms unify cognitive science.

For example, you can get the simplicity that you need above because you can appeal to the same kind of representational mechanism across many contexts. The idea is that you can also create an organically connected mechanistic theory to get unification…but this is much more difficult because already many theories are monstrous as they posit different representational mechanisms e.g. structural representations + symbolic representations.

He goes through some of the others and says that they can conform to unification in principle.

BUT he entertains the objection that competing theories use the same representational principle, so where’s the unity? The response from the speaker is: SO WHAT? The point is that representation in cognitive science is a weak principle. Marking the cognitive in terms of representation is shallow – it omits the adaptive control of behaviour, which is the important bit; without this control you’re not cognitive…you’re…dead?

There are still lots of questions left to answer, for example computational tractability…representational mechanisms that are posited often may not be tractable as we understand them yet. Also, how do we learn things? What are the mechanisms for learning? Associative? Reinforcement? Are these different mechanisms? The point is that none of this follows straightforwardly from representations. There’s nothing about attention here, or consciousness, when we talk about mechanisms of representation. Also, for core cognition like social cognition, nothing immediately follows from representations in this way. Representations might be necessary in explanations of these, but they are clearly not sufficient.

Objection: But can’t there be cogsci without representation and computation? Answer: NO! (Good!)

What about Gibson? Affordances and dynamical resonance – these are supposed to be alternatives to computational models.
The problem: Affordances are usually defined in quasi-representational terms. Under this view you get chains of affordances, and explaining this without computation is basically impossible. Think about resonance too: it establishes a semantic relationship between internal structure and a target, so this is computation with a terminological difference. Resonance and ecological information have satisfaction conditions, so they are semantic. Tracking (covariance information) is not the same as representation, as tracking might not involve sophisticated forward mechanisms – e.g. crickets tracking the sounds of other crickets and moving towards the relevant sounds. The point is that tracking still has satisfaction conditions and thus can be processed computationally.

Objection: The hard problem of content. Hutto & Myin: they are wrong and don’t respond to my paper, the speaker says. The hard problem of content was solved in the 1980s; Hutto and Myin are just bleurgh. Roboticists actually rely on the solutions to this problem when programming, so the solution is functional. The speaker has written extensively on this so this is biased…but his opponents are Hutto and Myin so I’m inclined to believe Marcin here (my biases showing through here too)…

Why complain anyway: first principles are too abstract. Representation is just one tiny bit of explaining cognition. This is why we should go mechanistic: we need explanations of the mechanisms of representation rather than just talking about representations. You need to account for satisfaction conditions anyway, so looking at the wider cognitive architecture is way more important than dealing merely with the phenomenon of representation. Reducing representations to tracking seems to be for no purpose, and ignores ‘modern control theory’. The point is that reducing representations won’t get you good cogsci.

A stress on adaptive control of behaviour gets you more than shallow representational theories, but these accounts can take lots of different forms. The mark of the cognitive is way beyond representations, even though representations are necessary. (E.g. with cricket sound tracking, there might not be representations here, but there’s computation and so this is part of cognition.)

Discussion:

Audience: No miracles argument: Could you not allow for some systematic misrepresentations? Like biases that avoid costly errors? It can’t be all about truth right?
Response: Yeah, this is really a simplification that I was discussing…truth isn’t the only property we’re trying to rely on. Different errors have different costs and so we can account for this in tractability discussions.

Audience: You were arguing that there was a limit to representational cogsci, so core cognition won’t be explained by representations for example?
Response: Not just representations. There’s lots more, and that’s fine. E.g. global workspace theory talks about availability of representations, not representations themselves. Also, you can’t explain things like IQ by representations, or at the very least, not only representations. Models are costly, if you don’t need them, why commit the neurons? Just don’t bother. That’s fine, and it’s still cognitive (in the LIBERAL cognition sense, not the CONSERVATIVE cognition sense). But, if you aren’t positing a mechanism, you’re not positing anything useful. Representational mechanisms are part of the picture, not all of the picture. E.g. lots of tracking too.

Audience: Miracles again. About what it means to be a successful action. For organisms, the rules are different than for robots. We have a reward system, and satisfying it might be the condition for successful action…but we know that in a lot of cases, following the reward system is not really successful, so what’s going on?
Response: Organisms can learn things…there are evaluation systems which can evaluate the reward system’s success. This allows for getting ‘success’ wrong sometimes and also explains metacognitive responses, like higher-order desires. At the end of the day it’s still truth as evaluated by the organism, but that’s fine.

Matej Kohár: What must we do to explain cognitive phenomena?

This talk is a rundown of his PhD dissertation. Defends a mechanistic account of cognitive phenomena. He thinks that mechanistic explanations must be non-representational.

Explaining with mechanisms: You have a phenomenon which you’re trying to explain. You take the phenomenon and look at the components of the mechanism which produces the phenomenon. Together, the actions of the components are responsible for the phenomenon. How do we know which bits are relevant for the performance of the system? If you affect the phenomenon, you can affect its mechanistic components. (And vice versa.)

If we want to explain with mechanisms, we want to answer why-questions and not just how-questions. E.g. why P rather than P’? You say: well, it’s because C rather than C’, where C are the mechanistic constituents of P.

So, you need to identify not just mechanism C, but mechanism C’ which produces the different result – crucially, the closest possible difference in mechanisms.

Mechanisms and representations don’t mix: In representational explanations, representational contents must be explanatorily relevant. But in mechanistic explanations, explanatory relevance depends on constitutive relevance. So this sounds to me like representations can still be present, it’s just that they needn’t have an explanatory role in any particular mechanism? The output of the mechanism may require a different explanation from the mechanism itself, so perhaps it’s true that mechanisms and representations don’t mix, but that it doesn’t really matter to the cognitive scientist? Who knows, I don’t know anything!

If you want representational contents to be explanatorily relevant, the content-fixing properties must be local and mutually manipulable with the phenomenon, because that’s what makes them part of the mechanism. What does explanatory mean here? If you’re talking about how the mechanism itself functions, then we needn’t even talk about the content of the representations, right? The mechanism just manipulates representations in some way, so an explanation of the manipulation of representations needn’t consider representations themselves to be explanatory of those mechanisms…if we’re talking about explaining the output of the mechanisms, though, then we may well need to discuss the content of the representations. So, we can look at content-fixing accounts, e.g. teleosemantic, indicator and structural resemblance theories. Teleosemantics is not local, so won’t work apparently. The indicator theories don’t have locally determined content either, so that won’t work…Structural resemblance with the referent is not local either, as the referent is outside. So, representations can’t mix with mechanisms.

Objections: You might say that representational explanations provide more understanding than mechanistic ones, because they connect the models with pre-theoretical explananda. E.g. ‘rat wants the food’ is more dialectically useful than ‘ooh, some neurons firing here’. The response is that connection with pre-theoretical explananda doesn’t justify the theory choice. E.g. quantum mechanics is a big departure from physics generally, and our understanding grows as we get more comfortable with the theory (“get to manipulate the theory”), rather than its relation to pre-theoretical explananda. This just seems to me to be talking about the merits of different levels of explanation. Of course you can construct a cognitive science that never mentions representations, since you’d only talk about neurons for example, but it doesn’t explain as much as talking about how representational content is manipulated will do. We want to explain why the rat wanted the food and how it went about getting it…merely explaining the mechanisms behind the food acquisition seems to almost be behaviourist in spirit.

You can also have mechanistic understanding, which is understanding of the space of possible mechanisms. In this you are able to identify the mechanisms for a phenomenon and the interventions needed to transform it into a different phenomenon. You can get this understanding without appealing to representations, is the thought.

Objection: The dual explananda defence. The idea is that mechanisms get you the how explanation and a complete explanation in cogsci requires explaining why that mechanism is suitable for that particular function or explaining why a particular phenomenon counts as success (why the rat succeeds in running through the maze). He thinks you can also do this without representation.

The mechanist can provide an explanatory story of both the causal and indicational story of some phenomenon, by appealing to an “ontogenetic mechanism”.  He wants to explain success without representations:

First, you get your explanandum in the contrastive form (why P and not P’?) where P’ is failure. Various types of failure are explained by various mechanisms. Correct representation does not map onto success and vice versa. Only careful analysis of mechanisms will explain success and failure.

Applications here: non-representational theories: Removes an obstacle to reconciling mechanistic and dynamical explanations. Dynamical models can inform about organisation of mechanisms and points of intervention. Sometimes organisation is the relevant difference between actual phenomenon and a contrast class.

Mechanistic explanations can now be used to make sense of enaction, particularly the concept of resonance and attunement. These concepts are assumed and not explained, but that’s because it’s the conclusion of the talk and the conference is about 4E cognition, where these technical terms are for the most part assumed.

Questions:

Audience: What’s the motivation for this view? Say I agree with your view, should I just assume representational explanations entirely, or just say that they’re explanatory but not mechanistic?
Response: Appeals to best science should be taken with a grain of salt. When neuroscientific explanations use the word ‘representation’, it means a correlate; it needn’t have satisfaction conditions to do its job. Also, there is a tension: the brain is complicated, and we don’t know a lot, but it’s taken as uncontroversial that the things we have now are explanations. I would question this (representations…computation?)

Audience: I agree with you, but functions will play a role in mechanistic explanation and so your rejection of teleosemantics, which relies on function, may cause some tension?
Response: Depends on your notion of function. You can use weaker notions of function.

Audience: You said that content fixing properties of structural resemblance representational theories are nonlocal…but content of the targets are the referents and those can be local, not actually the referents which are nonlocal.
Response: The way the contents are used in explaining, they are always used to refer to the target/referent. It is not just the structure of the vehicle that gives it the range of contents…it’s that the structure is mirrored by some other structure out there in the world that is supposed to play a role in the structure of the mechanism here, so the nonlocal property still relies on nonlocal content.

Audience: What do you mean by local? If you try to wiggle the rat maze, you can change the success of the rats…but why is the maze not part of the mechanism here rather than the phenomenon?
Response: Most cognitive scientists thinking about cognitive systems count the environment as not part of the system.
Audience: But you’re using it in a mechanistic sense so it is part of the mechanism here.
Response: ???????? Sorry I couldn’t catch the response, they talked over each other.

Audience: So the philosophy of externalism…you didn’t consider that at all. There’s a difference between individuating content in terms of relations and saying that they’re spread over the world and are not local. Being an uncle is a relational property, but that doesn’t mean the property is all over the place…the locality seems to be a spatial notion and the claim that representations are in the wrong place doesn’t matter, you’re just wrong.
Response: The vehicles are in the right place and are doing all the work…
Audience: You’re changing the conversation, there are different levels of explanation.
Response: We can investigate locality without looking at causal efficacy. We don’t have to deal with the externalist…something (I missed what he said). When you want cogsci explanation of the system and of what it does, explaining the system by decomposition tells us exactly what the system will do regardless of what environment it finds itself in, so this is the kind of explanation we should be searching for in cognitive science…where this doesn’t involve representations.

Audience: You said that if mechanisms need functions, then the functions cannot be teleofunctions, which are nonlocal. But there’s an argument that functions can meet some local, non-teleological criterion, so you can account for that.
Response: That assumes that malfunction is objective, which you could dispute. You can identify some things antecedently malfunctioning.

Dimitri Coelho Mollo: The Teleocomputational View of Representation

Proposing a different kind of approach to the notion of representation, based on computation. Will end up at a teleomechanistic view of computation and representation.

Representation: determinate content, misrepresentation, surrogative processing/reasoning.

Computation: Transformation of representational states according to rules of a syntactic engine – semantic engine. (Rationally constrained – by following the rules, it respects semantic and rational constraints.)

Thank you for properly defining your terms!

The slogan: No computation without representation! This view will be resisted: The more promising approach is the opposite – we need to build a theory of representations on a basis of a view of computation.

He’ll also argue that teleology should come into a theory of computation, in order to help ground a theory of representation.

Theories of cognitive representation: Teleosemantics and structural representation.
First is to do with natural teleology, where representation is tied to biological functions of producer or consumer systems. Second is about second-order resemblance – relational structures. Representations represent what they (second-order) resemble. (First-order views have been refuted.)

Main problems with teleosemantics:

Functional indeterminacy (of content)
Behavioural success does not equal representational accuracy.

Main problems with structural representation:

Resemblance relations (of the abstract relational kind) are cheap – they’re everywhere you look – hence liberality of representational status and indeterminacy of content. You have too many things counting as representational and having too many contents.

The computational approach tries to take a different approach to make some progress. Computation must be, to not be circular: non-semantic, naturalistic and mind-independent. No representation without computation!

Computational systems are those mechanisms with function to compute. Manipulation of physical vehicles according to rules (input-output).

Teleology fixes computational nature and norms.

Descriptive and explanatory adequacy. Normativity of computation → miscomputation.

This does not appeal to representation (in understanding computation – this is non-semantic!)

Theoretical advantages: Representation has a problem: aboutness is mysterious and hard to naturalise. But computation, understood as transformation of inputs into outputs, is unmysterious and mechanisable, and it is implemented via natural teleology as accepted in biology.

Computation narrows down representational candidates – representational states as a subset of computational states. This deals with the liberality of cognition problems. (I don’t think it’s a problem, but hey ho)

Computational implementation: It’s the bridge between semantic and causally efficacious properties (the physical properties). You take the syntactic engine and show how it can work physically as mechanised. This is the classical way of using computation in making sense of representation in cognitive science. But does the speaker’s view have a problem? He’s appealing to teleological function as part of his explanation. But he says that this isn’t a problem, which trades on a difference between individuation and implementation. His story is about the individuation of systems – the implementational mechanistic story comes later. Teleology is not a problem here.

There’s another objection to teleosemantics: the internal complexity objection (Rosa Cao 2012). It poses a dilemma: either you get no complexity in contents or you get vacuous contents. I didn’t catch how, sorry. Computation can help us reveal complicated internal complexities by breaking down complex computation into simpler functions. The descriptions of these small local computations come to make up the complicated algorithm that the system is computing. Focussing on computation rather than representation makes the problem of internal complexity more tractable.

Also, teleocomputation avoids other teleosemantics problems, assuming a non-semantic view of computation. E.g. the indeterminacy of content is not a problem because everything is….coarse-grained? Mechanistic states are the sorts of things that selection processes are sensitive to.

Useful distinctions can be made apparent. Inappropriate behaviour can be explained by: misrepresentation caused by miscomputation; misrepresentation not caused by miscomputation; or miscomputation without misrepresentation.

New theoretical horizons:

Cognitive systems described as assemblies of basic computational units.
Canonical neural computations (normalisation, linear filtering – and other simple functions)
These form task-dependent coalitions which have a rich computational structure. (A toy sketch of this picture follows.)
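
My own minimal sketch of that picture (assuming NumPy; this is not from the talk): two of the canonical neural computations named on the slide, composed into a toy “coalition”.

```python
# My own minimal sketch, not the speaker's: two canonical neural computations
# named on the slide (linear filtering, divisive normalisation), composed into
# a toy task-dependent "coalition".
import numpy as np

def linear_filter(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Linear filtering: a weighted sum over the input (1-D convolution)."""
    return np.convolve(x, w, mode="same")

def divisive_normalisation(r: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Divisive normalisation: each response divided by the pooled activity."""
    return r / (sigma + np.sum(np.abs(r)))

def toy_coalition(x: np.ndarray) -> np.ndarray:
    """A composition of the two basic computational units."""
    smoothed = linear_filter(x, w=np.array([0.25, 0.5, 0.25]))
    return divisive_normalisation(smoothed)

print(toy_coalition(np.array([0.0, 1.0, 4.0, 1.0, 0.0])))
```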

Complex computational structures stand in exploitable relations to entities in the world, and guide behaviour. (Neural reuse, Anderson, Protean Brain, Hutto).

Conclusion: Neglect of computation pernicious to theories of representation. The teleocomputational approach gives us better tools to help solve problems of representation.

Audience: You have to say that natural teleology is good, but in biology natural teleology is a primitive posit – couldn’t someone say that content plays a role in our best scientific explanations? What’s the dialectic? Where do we appeal to science and where not?
Response: I don’t take natural teleology as a primitive. I’m not giving up on representation by naturalising it in my way. Why use different measures for representation regarding teleology? Why be so hard on representation and so soft on natural teleology? In philosophy of biology, we have robust, sophisticated and consensus-involving theories of natural teleology. But representation is much more mysterious and much more controversial. But I don’t want to look at metatheoretical issues here, I’m just looking for what provides benefits.

Audience: On miscomputation without misrepresentation, can you give an example?
Response: In artificial systems, you might have an input-output function with no contents and no connection to what’s out there; in a case where there’s an electrical problem, we get an output that isn’t the one that’s supposed to be produced, but because the system isn’t representing anything, we have miscomputation without misrepresentation. This notion is much more controversial in biological systems. In some cases we overextend representational explanations because they seem like the best explanation, but once we’ve got the normative notion of computation we can see that we need not have representations in some of these cases.

Audience: You said that it can avoid indeterminacy because computations are more coarse-grained. That’s weird, are they not more fine-grained than representations?
Response: Maybe fine-grainedness is misleading, I meant that it’s extensional and non-intentional. No Millikan frog problem with computation as opposed to representation. We don’t have to cut the distinctions in computation as finely as we do in representation. AND/OR gates are the same because they’re performing the same computations.

Audience: The view is sold as something new. It’s just a combination of any teleosemantic view with Piccinini’s view of computations. This theory already presupposes mechanism in Piccinini’s sense.
Response: If you are a representationalist, usually you’re a computationalist – we want to know how to account for computation. My view is not teleosemantics, it’s a theory of non-semantic computation according to biological functions. This is not a theory of content either.
Audience: Would you agree that Piccinini’s theory combined with any teleosemantic theory is a teleocomputational view?
Response: Yes.

Stephen Mann: The mechanics of representation: teleosemantics meets predictive processing

How do representations get content on the predictive processing framework? The idea is that the brain is an organ for minimising prediction error. Predictions are made on the basis of hypotheses, which the brain models, i.e. how the world is. Philosophers ask: Are you attributing representational content??? You gotta be careful! We can help!

There are different theories of representation and content: structural representations can play the role of representations in predictive processing, say Miłkowski and co. Other people say you need functional role semantics to play this role.

Speaker is gonna say that teleosemantics can do this job too. Teleosemantics licences representational content for predictive processing. But then, an objection: nothing can licence representational content! (The mechanist objection) There will be a response. Then, an advantage of the teleosemantic strategy: it allows you to talk on a free energy principle framework as well as the predictive processing framework.

On PP framework, radically, states like thirst are strong and incorrigible predictions that you currently have water in your mouth…to make the prediction match is to put the water in your mouth.

So, consumer-producer semantics: the sender and receiver of a system want to produce some kind of effect. But there’s a distal state that has a causal link to the effect and it interferes with the receiver – we need a signal, a representational vehicle; if it bears a proper relation to the distal state, then it has an effect on the produced effect?? Sorry, it went really fast.

He clarifies that representation is meant very liberally here. E.g. pain: sender is nerve ending in toe, state/signal is pain signal and receiver is motor cortex, and the effect is moving the toe away from the pain or otherwise removing harm, i.e. from the table where it was stubbed. If you were feeling pain where there was no harm, this would be a misrepresentation.

With teleosemantics on the table, we want to talk about the prediction, the prediction error and the model used. All three of these can have representational content. The idea is to plug these three things into the sender-receiver model. Then there are a few diagrams, with the idea being that the colours of the Millikan model can be mapped onto a diagram of PP in the brain…the colours can move around depending on predictions/prediction errors/the model. Just…assume that’s all fine…let’s move on to the objections.

But nothing licences representations? At least in these subpersonal states you can’t. It’s a shorthand for a fiction. Representational content is not supposed to be causally efficacious, so it can’t play the roles we want it to play.

Mechanists: Representational content (RC) is not causally efficacious; we ought not to appeal to it to explain behaviour.
Pain & Mann: If that were sound, there would be no explanatory role for any relations in causal models. But there is!

E.g. You’ve got a sock sorting machine. It either throws out the socks, or bundles them up. You’ve got a distribution of socks…how might we explain this? We might say that every time they were paired together, they had the same pattern. This is a matching relation between socks which you’re appealing to. This doesn’t require you to appeal to the properties of the socks individually. It explains, it predicts, and it allows you to control, i.e. if you wanted your socks paired you could get that done with the machine. This doesn’t require an explanation of what’s going on inside the machine.

The idea is that this sock example is how logic gates work. There is a matching relation computed by the gate, with the output being YES (match) or NO (no match). This demonstrates an explanatory role for relations in causal models. The relation itself might not be causally efficacious, but we should agree that logic gates are a way to make relations causally efficacious, and it’s important that they do because this is the basis of digital technology!
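
My own toy sketch of the point (not the speaker’s code): an XNOR gate outputs YES exactly when its inputs match, so the sock sorter’s explanation can cite the matching relation rather than the intrinsic properties of either sock.

```python
# My own toy sketch, not the speaker's: a "match" gate is just XNOR, so the
# explanation appeals to the matching relation between the inputs rather than
# to what either input individually is.

def match_gate(a: bool, b: bool) -> bool:
    """XNOR: outputs YES (True) exactly when the two inputs match."""
    return a == b

def sock_sorter(pattern_a: str, pattern_b: str) -> str:
    """Bundle socks whose patterns stand in the matching relation."""
    return "bundle" if pattern_a == pattern_b else "throw out"

print(match_gate(True, True), match_gate(True, False))  # True False
print(sock_sorter("striped", "striped"))                # bundle
print(sock_sorter("striped", "plain"))                  # throw out
```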

The point is that: some mechanisms are explanations of how representations work.

Moving to the advantage of this theory: predictive processing has been abstracted so viciously that it’s unfamiliar as a theory of cognition, to the point that it’s now a system of biology in some respects, which we might understand as Friston’s free energy principle. The idea, as described by a diagram on screen, is that you can plug teleosemantics into this account and it…works – that is, if you want representational content in your free energy principle.

Audience: Is the causal efficacy objection serious at all? We don’t need to care about causal efficacy for semantic properties, we can go with causal relevance (as has been argued convincingly). This was a problem of the 1990s, not today.
Response: Fair, but what’s different now is the advent of the causal modelling framework, the idea being something like if a mechanist takes their views of causal explanation and tries to interpret them to say that the gold standard of explanation is in terms of causal explanation only, then what would be helpful to say is that there’s still going to be a role for relations here.

Audience: I’m happy for relations to have a causal role, because they can be local to some phenomena, it’s just that…(I didn’t understand, sorry).
Response: Well, what do you think about the ‘weight’ relation of a moving needle on a scale? Is that considered to be local? (The explanandum is the number the needle lands on.)
Audience: Not sure.

Audience: With the sock sorter, they might say that’s an abstract explanation and a sketch of a genuine mechanism…so let’s look inside the sock sorter and then we don’t need the representations anymore. The mechanistic stuff is still gonna end up being non-representational. The stuff in the sock sorter itself will be non-representational.
Response: Yeah sure.
Audience: Then we get a mechanistic non-representational view about a representational process…?

Audience: You want to attribute content to individual parts of the hierarchical PP model….something something
Response: Something something, to be honest I’m not massively hot on the PP model, e.g. how might you distinguish the specific contents at different levels of abstraction? I got asked that and I don’t know. I was going to talk about the free energy principle but remember I changed it to PP to be more cognitive sciency.

Audience: Can we call PP a model? Is it not a distribution? Of different states of affairs? How do we make sense of the content being probability distributions as opposed to content of stuff out there in the world? That’s counter-intuitive.
Response: Machine learning models use vectors to understand the flow of information from one neuron to the next, and then to different levels of abstraction – they can assess representations by these vectors, so this doesn’t seem to be as counter-intuitive as we might think at first. Probability distributions could be done by vectors.

Marcin: Friston recently wrote on exactly this and I’ve responded to this paper. They thought they were anti-representational but I think they were confused on this in many points. Also, I’m a teleosemanticist. We believe that you need the notion of function to get misrepresentations. In our unpublished book we talk about this.

Danaja Rutar, Wanja Wiese, & Johan Kwisthout: Representational Gradation in Predictive Processing

Representational features come in degrees that vary independently of one another within the levels of a generative model. These are cognitively necessary.

Structural representations: Structural similarity, ability to misrepresent, ability to guide actions.

Structural representations need to preserve the relational structure (e.g. spatial relations) of what they represent. Structural similarity has to be causally conducive to the functioning of the organism.

Detachability – cognitive systems are ‘detachable’ if they can function in the absence of sensory input.

So, what’s a generative model? They aim to capture the statistical structure of some set of observed inputs by tracking the causal matrix responsible for that very structure.

Detachment corresponds to the degree of computational effort involved in transducing. The detachment can be partial or complete. Structural similarity is a gradable notion. Now they move to their own proposal…the point now is to say that research on gradation in this sense is very scarce.

There are PP processes underlying gradation of representational features.

Structural similarity can be cast in terms of the level of detail. Level of detail provides additional information relevant for ….I’m sorry this is going faster than I can listen and type.

Two features of structures of similarity, preservation of structural relations and the level of detail, are distinct. E.g. dots on a map could be the same, but they can be different in terms of the level of detail.

Level of detail is important to consider because it allows for extra information relevant for understanding cognitive performance…current notions of structural similarity cannot differentiate between differences in the level of detail and hence differences in information gain.

Structural similarity is gradational in virtue of the level of detail being gradational.

Now moving to detachability:

Detachability can be cast in terms of precision weights of prediction errors, which are gradational, so detachability is gradational.

There are lots of complicated diagrams on the slides that really don’t make a lot of sense to me, sorry. There’s also lots of information on the screen and I’m getting overwhelmed.

Precision is gradual on PP. Detachment can be expressed through the precision weighting of prediction errors, so detachment is gradual. The value of precision weighting and the degree of detachment are negatively correlated (a high precision weighting value means low detachment and vice versa). So high precision weighting and low detachment is like navigating new environments and infant learning, whereas the other end of the spectrum is thinking and imagining and daydreaming.
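
To keep this straight in my own head, here’s a minimal sketch (mine, not the speakers’ model) of a single precision-weighted update: with a high weight on the prediction error the belief gets dragged towards the sensory input (low detachment, online), and with a low weight it stays near the prediction (high detachment, offline imagining/daydreaming).

```python
# My own minimal sketch of how I read this, not the speakers' model:
# the precision weight on a prediction error sets how far a belief moves
# toward the sensory input.

def update_belief(prediction: float, sensory_input: float, precision_weight: float) -> float:
    """One precision-weighted prediction-error update."""
    prediction_error = sensory_input - prediction
    return prediction + precision_weight * prediction_error

prediction, sensory_input = 0.2, 1.0
print(update_belief(prediction, sensory_input, precision_weight=0.9))  # ~0.92: input-dominated, low detachment
print(update_belief(prediction, sensory_input, precision_weight=0.1))  # ~0.28: prediction-dominated, high detachment
```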

Audience: When predictions are used for action, they need to be online and not detached…so they’re always going to be more precise? Action oriented predictions?
Response: Yeah this is vague. Perhaps systems need to engage in offline simulation to reduce the complexity of the model for online use. This is how precision errors could be minimised.
Audience: There could be instances where we’re acting on imprecise instances though?
Response: Also, remember that our diagrams are really simplistic compared to how they should be in a full explanation.

Audience: Is there a relation between level of detail and precision weighting, in the sense that if you’re performing a task where the precision weighting is so strong that you can’t afford to get it wrong, it seems like whatever is employing the structural representation is encouraged to go to the next level of detail, but then that’s not as good for predictions?
Response: The level of detail in terms of state space similarity is inversely related, at higher levels of detail there’s more uncertainty, and hence more entropy. Online/offline is related to precision.

Audience: What’s the role of optimising for this? How does this work in AI systems.
Response: Ok, optimality isn’t the best word to use, we thought that if the system reached the Goldilocks zone, it would be good for making predictions in the long run and so we just called that optimal behaviour.

Audience: You talked about the future directions of this work? Can you explain more?
Response: We have a rough idea of what to do: first, to find a way to implement precision weighting of prediction error in a simulation. Then to have agents which are able to differentiate between different levels of structural detail; they would have to reach some goal and then we could compare results based on how the weightings vary.

Audience: So you’re saying that the level of detail and the preservation of structural relations are distinct, right? Newman’s problem, the problem for structural realists: you cannot distinguish between relations if you use set-based accounts of relations – I’m confused how the diagram looks the same but you’re saying that they’re distinct in virtue of the level of detail of the components. Surely you just have more features, more properties, in the right-hand case.
Response: But it’s the same variable (colour of a dot) which doesn’t mean anything on the left, but means something on the right. So, the term level of detail is misleading, we really mean state-space granularity.
Audience: It’s hard to translate between fuzzy logic and this…let me have a think.

Audience: I’m confused on the same point. The difference in colours can be construed as a structural relation. But you want to say that this isn’t a structural relation?
Response: We’re not saying that it’s a different kind of thing. There’s a difference between having a structure with relations between them and coming from a state space from which you can look at relations within the state space and this gives you some information and…um…the idea is that preserving the relational structure is fine, it’s just that you can describe the things being related in differing levels of details.

Carrie Figdor: Mental Representation and Mental Evaluation

This is new material. Taken from a paper under review. Overview: Problem of intentionality. Predictive signalling systems. Prediction error signals as mental evaluations (not representations). Relation to the “representation wars”.

Intentionality: Brentano’s mark of the mental – aboutness – how do minds represent the world? Naturalising intentionality – truth-conditional semantics in the head. (Contexts of manipulation of propositions.) What’s weird regarding propositional attitudes is that the people initially trying to naturalise intentionality were considering minds as containing linguistic stuff in the head, and as such minds manipulated propositions in the head. That was the basic idea, which was then expanded on. You get more and more elaborate: you get Millikan’s teleosemantics, etc. You start with the semantics of natural language, take the model, and put it in the head.

Is aboutness just representation? (She thinks no). Yablo 2014 says aboutness is the relation that meaningful items bear to whatever it is that they are on or of, or that they address or concern.

Figdor thinks that aboutness can be carved into representations and evaluations.

So, what are predictive signalling systems? Step 1: Shannon communication. (Diagram on screen.) The idea there was to think about how communication systems could eliminate noise. People used to think physically (thicker cables) and Shannon thought about more efficient coding. The message being selected would be from a set of possible selections. Natural language has a particular statistical structure (some words/letters are used more frequently), so you get a theory of reducing uncertainty from this: e.g. if you have a Q, it’s highly probable that the next letter will be U. So, the statistical structure of language is grounded in conventions of use (not strictly, but probabilistically). So the idea is you can be more efficient: Morse code is more ‘efficient’ than English because it considers statistical efficiency in its construction. (E.g. if you have a Q, the U is redundant. If you get QEEN, you know you’ve got QUEEN. This works because {Q,U} is a possible set of solutions.)
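
To make the efficiency point concrete for myself (my own toy numbers, not from the talk): the surprisal of a U right after a Q is tiny, so an efficient code can drop it.

```python
# My own toy numbers, not from the talk: the Q->U point in bits. If the
# probability of a U after a Q is near 1, the U carries almost no information
# (near-zero surprisal), so an efficient code can drop it (QEEN -> QUEEN).
import math

def surprisal_bits(p: float) -> float:
    """Information carried by an event of probability p, in bits."""
    return -math.log2(p)

p_u_after_q = 0.99    # assumed conditional probability, for illustration
p_u_overall = 0.028   # rough unconditional frequency of 'u' in English text

print(round(surprisal_bits(p_u_after_q), 3))   # ~0.014 bits: nearly redundant after a Q
print(round(surprisal_bits(p_u_overall), 3))   # ~5.2 bits: informative out of context
```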

So, now probabilistic theories of content. It starts with Dretske: for a signal to carry the information that s is F, the probability of s being F given the signal (and background knowledge) must be 1. Nowadays, the requirement is only that the signal’s occurrence raises the probability (but need not raise it to 1).

The world has statistical structure – world states are probabilistically related (e.g. smoke and fire). If this wasn’t the case, we wouldn’t be able to navigate the world. So once a signalling system exists, the smoke signal makes the fire signal redundant like the Qs and Us. So it’s more efficient.

Skyrms’s vector semantics: Identifies the content of a signal with the shifts in probabilities across the possible world states, given the signal. A signal’s content is identified with a vector of the logs of those probability shifts (one entry per possible state, comparing the probability given the signal with the prior probability).
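
My own minimal sketch of the Skyrms-style idea, with made-up smoke/fire numbers: a signal’s informational content is the vector of log probability shifts, one entry per world state.

```python
# My own minimal sketch of the Skyrms-style idea: a signal's content is the
# vector, over possible world states, of the log of how much the signal shifts
# each state's probability (probability given the signal over the prior).
import math

def content_vector(priors: dict, posteriors: dict) -> dict:
    """Log probability shifts, one entry per possible world state."""
    return {state: math.log2(posteriors[state] / priors[state]) for state in priors}

priors = {"fire": 0.1, "no_fire": 0.9}        # before the smoke signal
posteriors = {"fire": 0.8, "no_fire": 0.2}    # after the smoke signal

print(content_vector(priors, posteriors))
# {'fire': 3.0, 'no_fire': ~-2.17}: the signal raises 'fire' by three bits' worth
```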

Godfrey-Smith objects to Skyrms: what you want to learn from a signal is how the world probably is; merely learning how the probabilities have changed is no use. (If you can’t learn how the world probably is, the signal cannot be used to guide action.)

So here predictive processing comes in. This is a leading theory of biological cognition, in which efficiency is critical. Efficient signalling! The system only encodes what it needs to and sends what it needs to, because this is cognitively expensive. You want to reduce prediction error, but also signal more efficiently in general.

Prediction error signals encode differences between what the transmitter expects via prediction and the message it receives. These differences are what organisms want to receive here.

But what kind of content is a “difference” in signalling and prediction errors? What does it mean to say that content is a difference? For prediction error signals, they are evaluations of the systems expectations in the light of various environmental contingencies. They provide internally accessible assessment of how well the organism’s predictions are going. Their evaluative content integrates sensory and internally generated inputs to encode a difference, if any.

Evaluation vs. representation:

Distinct purposes. A prediction error isn’t looking at fit (of mind to world). Predictions themselves may be aiming at accuracy, at truth, and be representations, but prediction errors don’t need to aim for this. “They aren’t aimed at being accurate.”

A prediction error will never represent truly because it isn’t generated when a prediction is true. They’re not true or false, but instead encode differences between prediction and the incoming value. These matter to the organism.

These evaluations are “normatively accessible” – if they encode the actual difference, then there’s no misevaluation. If they encode a smaller or larger difference or wrong valence, they are assessable as optimistic or pessimistic. But isn’t this just right and wrong in different ways? How is this avoiding a truth value here?

Shea’s metarepresentational alternative (for the content of prediction error signals): he says that PE signals are pushmi-pullyu meta-representations. They have both indicative and imperative content. The indicative content is about rewards and feedback; the imperative content is the instruction to revise probabilities up or down.

But Figdor doesn’t like this. She says that the imperative content is optional, rather than imperative here in prediction error signals. Such a signal doesn’t impel you to do anything. The indicative content is also weird, because why would such information about successes be encoded on a PP view?

On to the representation wars:

As an anti-representationalist, you might say that there are no evaluations, they’re just representations, and then the usual debate happens. Or you can accept the distinction but then question whether representations and evaluations are equally objectionable (for the anti-representationalist).

Figdor’s view on the representation wars: Officially neutral, but there may be stronger reasons to think the system needs and so creates explicitly encoded evaluations. Evaluations are dynamically encoded to guide behaviour in a constantly changing environment; such signals are not going to be implicit in dispositions of the organisms.

Summary: Two leading theories of information processing in the cognitive sciences can be (easily) integrated: signalling systems theory and predictive processing. In this integrated picture, your prediction error signals are evaluations, not representations. So, the lesson from cognitive science for philosophy of mind is that the problem of intentionality is not adequately understood by taking the analogy to linguistic meaning (truth-conditional semantics) too seriously.

Audience: Curious about the reply to Shea. Let’s grant the imperative complaint, but why do you think that in order to be a representation, the system would need to sometimes send the signal that the prediction is true?
Response: It’s the logic of representations. What representations want to do is represent truly. Error signals are never trying to do that.
Audience: The prediction error difference can be represented as true or false though?
Response: The general model can want to be true, but that doesn’t mean that all of its parts have to be aiming for truth (generative model, predictions and prediction errors). Misrepresentation is the pathological case…the normal thing is to be true.

Audience: Pretend I’m an aspiring dictator. I say all internet images have to be encoded as to how different they are to a great picture of me. This scheme represents things in terms of differences of the prediction that every internet image will be my picture. It’s an evaluation, but why not a representation?
Response: All that’s encoded is a difference if there is one. It’s not encoding red or blue, it’s the difference between prediction and incoming. That’s it. That’s not representing that it’s red or blue, it’s just saying that there’s a difference here.
Audience: Given the language we’re speaking, it does…we just need to de-encode it.

Audience: The prediction error can be true or false!
Response: We’re getting stuck on talking true and false, and assuming that this licences talk of representation in and of itself. Focus on what’s being encoded. It encodes a difference. If it gets the difference…you say it’s true, but that doesn’t make it a representation. It doesn’t represent the difference. I agree with the audience’s general scepticism here…

Audience: Couldn’t you just say that the error signal cannot represent some things, and that’s the only conclusion you can make?
Response: There’s always gonna be things that can’t be represented…That’s a different issue. You want true representations, but you don’t get error signals when things are represented truly, so they can’t be representations.

Audience: Would neural research inform your view?
Response: Is the question that the PP model lacks neural reality?
Audience: If we could detect predictions and error signals, would this inform on the difference between representations and evaluations?
Response: Probably not.

Audience: You want to avoid a linguistic content bias, which is great. But a traditionalist might say that the things in the brain which don’t have propositional structure just aren’t representations…they’re other things. (The conservative view). Or there’s the liberal view, but you want to be in the middle. I don’t really get the conditions on being a representation, as opposed to being an evaluation.
Response: So there’s aboutness, there’s intentionality, there’s a signal. So thinking conceptual vs nonconceptual content, that’s all fine with me, you can have a more liberal view on representations (e.g. representational but nonconceptual in structure). I’m just saying that the signals are not providing the organisms with what the representations want to give them, it’s giving them something else they want. It’s a different functional role. Does one notion of aboutness work for models, predictions and predictions errors? I think not. My view captures the functional role better.

Audience: You assume that a PP model of cognition will be representational…is it possible to be an evaluationist but also anti-representational on your view?
Response: Yes. You can construe PP as anti-representational, but also evaluationist.
Audience: But there’s still aboutness??
We run out of time.

Jolien Francken: Conceptual issues in cognitive neuroscience

Her talk will be about the relationship between the neural and the mental. Presents a general theoretical framework, then a brief primer on her current research. The translation problem…what is this problem? Why is it relevant? What is the cause? How might we proceed?

Cognitive neuroscience aims to identify neural mechanisms underlying cognitive processes. What type of things should be in the cognitive ontology? E.g. memory is in there, so what’s memory? Also, are the definitions of things like consciousness the same as folk definitions? Awareness, being conscious of, subjective experiences, etc. – there are lots of aspects to these sorts of notions. If it’s unclear what we’re looking for, then it’s hard to find the right neurons.

An example with memory: Say you want to study memory. Say you start with a scientific notion (working memory etc.). You’d take a task that manipulates working memory, like the n-back task: you see numbers on screen and you’re supposed to press the button when the current number matches the one you saw n items back; as n increases, the task gets tougher. Then you put the participants in an fMRI scanner and have a looksie. You’re looking for activation patterns. Then you’re ready for publication! However, in another lab there’s another person like you doing working memory with a different task, for example the Wisconsin card-sorting task. You’re meant to sort the target cards into particular boxes depending on the strategy you’re asked to follow – might be number, for example. Then you’re asked to change to, say, colour or shape. Every now and then the strategy changes, so you have to keep in mind which strategy you’re using. Pop them in the fMRI and you can claim you’ve found the neural correlates of working memory….but of course your results don’t match the other working memory test!
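
For my own reference, a toy sketch of the n-back structure (made-up numbers, obviously not the lab’s code): targets are the positions where the current item matches the one n steps back, and button presses get scored against them.

```python
# My own toy sketch of the n-back structure: a trial is a target when the
# current number matches the one n steps back; performance is button presses
# scored against those targets.

def nback_targets(sequence, n):
    """True at position i when sequence[i] matches sequence[i - n]."""
    return [i >= n and sequence[i] == sequence[i - n] for i in range(len(sequence))]

sequence = [3, 7, 3, 7, 7, 2, 7]
presses  = [False, False, True, True, False, False, False]  # hypothetical participant

targets = nback_targets(sequence, n=2)
hits = sum(p and t for p, t in zip(presses, targets))
print(targets)                                    # [False, False, True, True, False, False, True]
print(f"hits: {hits} of {sum(targets)} targets")  # hits: 2 of 3 targets
```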

No direct mapping from concept to task to brain. The idea is that in consciousness research, the same phenomenon tends to occur. This is essentially the translation problem, as set out in Francken & Slors (2018). Drawing conclusions about commonsense cognitive concepts (CCCs) from brain data is hugely problematic. Surely this isn’t new information for this field of study??

The relevance of the problem: It’s hard to connect the scientific data; the speaker thinks this can only be solved at the conceptual level. You also get the problem of unwarranted extrapolations from experiments to the wider public. This all also presents an obstacle for the understanding and treatment of psychiatric disorders (e.g. habit vs. compulsive behaviour, Luigjes et al., 2019).

The cause of the problem: Mental realism! The idea that cognitive concepts can be found in the brain. You then have two options. Eliminativism: cognitive concepts don’t exist and will be replaced by neuroscientific explanations; studying the brain itself will make the cognitive stuff emerge from it. But it’s hard to study the brain without interference…even just saying “lie there in the scanner” is an instruction, which is why you get issues in resting-state fMRI. And this isn’t what cognitive neuroscience does or aims for anyway, since it uses cognitive concepts all the time.
The other option is interpretivism: cognitive concepts are interpretations of behaviour; there is no direct link between mind and brain (Dennett). They say that the mapping cannot be made or is more complicated (it aligns nicely with the speaker’s story thus far, they note). E.g. the label ‘working memory’ is an interpretation of what is manipulated in the n-back task.

Cognitive concepts are not natural kinds according to this view. Natural kinds are observer-independent categories, like neurons, as distinguished from nominal, human kinds, like money. Cognitive neuroscience tries to map human kinds onto natural kinds.

The categorisation practice does not serve a scientific purpose, but a social and phenomenological one (Dennett’s intentional stance). Cognitive concepts track behavioural patterns. We ascribe them to human beings in order to predict and explain behaviour. E.g. memory is culturally relative.

On this position you can ask: Does the Wisconsin card-sorting task measure working memory or task-switching and then you can have a productive discussion. The concepts we employ do not necessarily correspond to any neural counterpart.

How to proceed from here? The speaker talks about her current project. You can try to find scientific consensus on definitions and make operationalisations explicit – this in particular takes special attention. You can also run a consciousness survey to agree on a definition. She’s trying to convince fellow cognitive neuroscientists to agree on some of this stuff. The speaker presents some of the questions from her consciousness questionnaire, e.g. “do you think that there is phenomenal content beyond that which we have cognitive access to?” Hmm. No definitions of what these terms mean in the survey?

She’s looking at which cognitive functions can occur unconsciously. There’s a recent surge in looking for functions that can happen unconsciously in the cognitive neuroscientific community, e.g. response inhibition, motor reflexes etc. A hot topic at the moment is working memory. Her hypothesis is that how you answer the posed question depends on what you mean by unconscious. Well…yeah.

She discusses a test (masked priming) where you might be able to tease apart these senses of unconsciousness. In the test you might ask “is the prime the same as the target word?” (the prime is flashed quickly – shown in a gif – before the target word appears). If you don’t process it, you should be at chance level (and you shouldn’t really see the word when it’s flashed so quickly). This is the “objective” testing. You can also take “subjective measures”, where you ask “did you see anything?”, and she discusses the obvious problems with this.

Anyway, the idea is to compare results from objective and subjective questioning. In her preliminary measures, the subjective questioning attributes more ‘unconscious’ processing than an objective measure would do – you assume a more extensive unconscious paradigm with subjective questioning. The idea is that this is not just an empirical question, as your working concepts affect the results.
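
To make the objective/subjective contrast concrete for myself (my own sketch with made-up numbers, not her analysis): the objective criterion asks whether forced-choice accuracy on the prime is reliably above chance; the subjective criterion just asks what the participant reports.

```python
# My own small sketch of the two criteria, with hypothetical numbers: the
# objective measure asks whether forced-choice accuracy on the masked prime
# beats chance (50%); the subjective measure asks whether the participant
# reports having seen anything.
import math

def binomial_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of scoring this well by guessing."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials, n_correct = 100, 58            # hypothetical prime-discrimination data
p_value = binomial_tail(n_correct, n_trials)

objectively_unconscious = p_value > 0.05  # accuracy not reliably above chance
subjectively_unconscious = True           # e.g. the participant reports seeing nothing

print(round(p_value, 3), objectively_unconscious, subjectively_unconscious)
```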

Some of my comments here may seem sarcastic. They aren’t meant to be mean to the speaker – the talk was excellent and it’s good work. My worry is with the state of cognitive neuroscience, that this basic scepticism is such a new idea in that entire field…

Audience: What happens to common sense cognitive concepts over time as new scientific information disseminates to the public? Let’s assume interpretivism is true right now. With good scientific progress, might interpretivism become false in the future?
Response: Dennett would say that this basically won’t happen because our scientific terms are good enough right now to do work – could we really get much better concepts? Secondly, in a sense this is already happening. People are using ‘brain talk’ more often, e.g. “ooh, I feel the adrenaline”, “I just got a dopamine rush”. But at the end of the day, neuro-terms and cognitive terms have different purposes, so there won’t be too much convergence over time.

Audience: Some people argue that cogsci is immature, it’s not a mental realism problem. Once we change our cognitive ontology with maturity, we’ll be fine. Is this promising?
Response: This isn’t an alternative to interpretivism, even though I’m optimistic about this stuff. Cognitive ontology has potential for progress, sure, you might want to create a ‘task’ ontology demonstrated in working memory vs task switching experiments. But this is part of interpretivism.

Figdor: My worry is that the translation project is too narrow. It’s biased. You’ve got brain vs. capacities…we know the brain is an evolved organ, but so are capacities, e.g. episodic memory…there’s this profound anthropocentric bias to this whole project (not you, the project) – are you aware of how any of this might work for any animals? Otherwise this isn’t a cognitive ontology, it’s a human cognitive ontology.
Response: I agree, and this makes the project harder and longer…but that doesn’t affect my general Dennettian framework. All of this seems prima facie possible to apply to non-humans. Though, there is a risk that you have to stay at the task-level descriptions…e.g. “we don’t talk about working memory, we talk about n-back tasks” but that’s not useful or interesting. To be fair, I think cognitive neuroscientists should do loads of boring and slightly different experiments to get good science done.

Audience: At what point can we say that we’re introducing new tasks based on a mature, advanced cognitive ontology? But we think that current tasks do work, so maybe this isn’t the way to go? And some tests work on humans and dolphins, but not dogs etc. (e.g. recognition tests). But then we realised dogs recognise by smell and not sight, so we redid the tests under the same label…and…
Response: So, the question is about openness in the tasks or trying to design new tasks? Generally, it’s really hard to change existing tasks substantially…this sort of stuff is already happening, but you have to start somewhere. In designing a new task, you’re already having conceptual input which affects results. This is really difficult.

Audience: You said that cognitive concepts are not natural kinds and that they’re nominal kinds. Why not say they are natural kinds at a different level of description?
Response: Interpretivism places cognitive concepts in the manifest image of our social world, and because they function in our social interactions they’re by definition nominal kinds. There’s no reason to suppose that cognitive taxonomy maps one-to-one onto the brain. I see what you’re saying; it’s just that there’s an interpretative step here, and that makes them nominal.

Audience: Surely everything is even more complicated? Is there just one scientific cognitive ontology? Surely choosing an ontology itself is a problematic choice which complicates the picture.
Response: Cognitive neuroscientists are hoping that they’re using the same ontology. The aim is for one.

Marcin: Interpretivism is based on the assumption of idealised descriptions of rationalised behaviour. This isn’t the case in rational memory.
Response: Because you have to rationalise these concepts, they become more detached from the original use and maybe scientists don’t notice…so the translational steps are still made without acknowledging this.
Marcin: This makes your task easier because you don’t have to care about rationality in definitions anymore.

Samuel D. Taylor: A third way to think about mental representation

Aims: Anti-representationalism vs. representationalism (the wars). Take the anti-representationalist arguments and ask whether they succeed or fail. Then he wants to say that there are two kinds of naturalisation projects going on, and that the different sides want different things. Then he’ll discuss the instrumentalist stance and make suggestions for future work.

Rep: We should posit mental representations in cognitive science.
Anti-rep, 2 arguments: Firstly that they’re unnecessary, and secondly that they’re unjustified. (Don’t need, can’t have).

On being unnecessary: We can explain cognitive capacities in non-representational terms, e.g. dynamic process explanations in terms of state-space evolution – differential equations showing how dependencies interact. But the speaker thinks we generally just don’t have non-representational explanations of cognitive competencies. Representational explanations are ubiquitous.

E.g. declarative memory: conscious recollection of factual information. Even Hutto and Myin say that contentful representation is absolutely required here. Understanding declarative memory as an evolved faculty involves positing representations with particular kinds of contents.

On being unjustified: Positing representations gives us a further problem of naturalising representational contents. This is the “hard problem of content”, according to Hutto and Myin. Representationalists generally think that content has been naturalised, or that it will be naturalised in the future. Anti-reps take the view that it has not been naturalised (or even perhaps that it can’t be). This leads to a question: what would a successful naturalisation of content look like?

Now there are two different ways to go about naturalising content.

  1. Ontological naturalisation – content accounted for in terms of physical and/or spatiotemporal entities.
  2. Methodological naturalisation – content accounted for in terms of the entities that play a role in cognitive scientific explanations.

Collapsing this distinction gives ontological naturalisation priority. Some of the best scientific work, though, requires positing non-spatiotemporal entities, so we need this distinction.

Anti-rep prioritise ontological naturalisation. It’s an attempt to reform cog-sci so that methodological naturalisation fails. This is taken to be a standard anti-rep view.

Representationalists prioritise methodological naturalisation. That’s not to say they don’t want the ontological kind; it’s just that they prioritise methodology (e.g. Shea 2013). We want to say that content has substantive explanatory capacities in cog-sci.

We don’t have an account of content in physical terms. As such, content hasn’t yet been naturalised. But we (might) have a viable account of content in terms of entities that play a role in cog-sci explanation (Dretske, Fodor, Neander etc.). From this we can conclude that content has been or can be naturalised.

Here we’ve got a gridlock…moving to the instrumentalist stance.

Are these the only ways to think about mental representations? Does this exhaust the logical space? No! We should consider the lessons of cognitive instrumentalism:

Talk of unobservable objects should be literal only when they are assigned properties with which we are experientially acquainted. Unobservables are real only when they can be accounted for in terms of observables. E.g. holding an apple, you can feel the force of its mass. Halving it, and halving it again, etc., will eventually lead to not feeling any force. At that point it’s an unobservable, but we can still assign to it an observable property: the force of its mass.

But the problem of scope: Almost any unobservable can be defined in terms of observables (angels by their wings, etc.). But we can restrict the scope to scientific explanations, given the domain of cog-sci.

Is observability a vague notion? But a search for a Cartesian-style guarantee is a search in vain. We should think observability is an empirical question, and then it’s no problem at all, as it’s “indexed to our best science”.

So his stance: Constructive empiricism + instrumentalism: Acceptance of a scientific theory does not imply that one fully believes in that theory. + the observable unobservable stuff.

Mental representations standardly are assigned at least the properties of:

Having a structure
They’re compositional
They’re acquired
They have content

Can these properties plausibly be understood as observable? Many people think these are interdefined, e.g. having structure in terms of having content. But someone employing the speaker’s stance cannot be compelled to endorse any particular theory of representation, or of how the properties of mental representations relate.

Speaker thinks that some properties of mental representations can plausibly be understood as observable.

Structure – being constructed as an arrangement of relations between parts.
Compositional – e.g. brick walls. These properties are clearly observable.
Acquired – being learned is observable, as is acquisition more generally, like metal acquiring rust.

But having content? It’s unclear.
Content is having aboutness here. Many things seem to have content (pictures etc.), but do we observe that things have the property of having content?
If we say ‘yes’, this seems to be dependent on our use of the object. But ‘use’ implies cognition, as in cognitively engaging with it.

So one’s answer about observability of content cannot be segregated from one’s views about cognition. But trying to give an answer here, we get back to the representation wars. But the point of the speaker’s stance is that they don’t want to get embroiled in the war, so they posit a third way:

On the instrumentalist stance, you can accept a theory positing representations, but only ‘believe’ in representations as structured, compositional, acquirable, but not (necessarily) contentful, things.
But is there tension here? Accepting a theory involving representations is taking on an unqualified belief in mental representations – ipso facto acceptance? Should we accept the theory?

But there’s a difference between positing and believing in mental representations. Beliefs can be judged on the instrumentalist terms (on whether they can be construed as observable properties) and so the naturalisation of content is irrelevant for belief here.

But are we still talking about representations? Not in the sense that the warring factions have been talking about them. But, both sides can endorse the speaker’s stance. As a result, cognitive science is free to do whatever explanations it wants and “philosophy gets out of its way”.

Figdor: Ontological naturalism is not clarified in the right way, as you can be reductionist or not (Chemero or not). Also, this is not a stable middle position. Everyone wants observable stuff, so if those tasks are adequate, then you’re gonna go the representationalist way. And if you think they’re not adequate, you’re gonna go a different way.
Response: Chemero wants a radical reductive enterprise, but also acceptance of affordance, so he’s not being inconsistent. On the second point, it might be that success on a task might require positing representations, in the same way you might posit electrons. That’s fine to posit, but I’m proposing a certain perspective-taking to each theory. Acceptance vs. belief is distinguishable on this view.

Audience: I’m inclined to accept your approach, but it also helpfully reveals a dispute in the anti-rep camp because once you have your kind of view of content, you realise Hutto isn’t talking about representations, they’re talking about structure, about computation…so they’re anti-computationalists, rather than anti-representationalists. The structure of the physical mechanisms is in dispute here!
Response: A room of anti-reps hated my talks because of the structure, and the reps didn’t like it either because putting science first hurt their egos.

Audience: Unclear why you felt that answering the content question relied on views about cognition. Hutto and Millikan have realised they mean different things when they talk about content. Millikan is liberal; Hutto is conservative and requires intensionality. To figure out the distinctions between these things, do you need to know anything about cognition?
Response: I don’t like Hutto’s idea: “you have no content, then something something something, you get content.” The premise is that content is something we can have a debate about. But someone like Chemero wouldn’t agree. Is this really a debate between representationalists and pseudo-representationalists?

Audience: Disanalogy between representations and the apple example. We already observe the apple and we can’t do that with representations to start with.
Response: That was just an intuition pump, think about electrons instead. I’m applying Rowbottom’s thought here to this debate by the way, so don’t give me too much credit for these examples.

Audience: So, the science first approach…does that mean that you’ve made a decision on the ontology vs. methodology issue anyway?
Response: I don’t think so, but your point suggests that the real work to be done in philosophy of mind in general is about how science is done, sure – but right now you can just look at the science and see a schism (between types of cog-sci). There are open questions about comparing scientific theories, but those are philosophy of science questions and not philosophy of mind questions.

Joe Dewhurst: Folk psychological representations and neural mechanisms

Anti-representationalist position: for a priori reasons. Content being somewhat normative is the view the speaker starts from. Talk is about how to make sense of his anti-rep in a positive light. Folk psychology as a normative social practice possibly unsuited to subpersonal mechanisms. How we think about folkpsy should be how we think about attributing content.

He likes Dennett’s approach to folkpsy, he’s interested in mechanisms without normativity (no teleological proper functions). Interested in computation, defending Piccinini’s account of computation which is close to Egan’s view. Also interested in cybernetics.

Folk psychological representations (or descriptions of behaviour) are a coarse-grained gloss, not fine-grained descriptions of underlying mechanisms. E.g. someone looking at a cat: we attribute the belief that there’s a cat sitting in front of them. This isn’t to commit the folk to literally saying that there is some discrete belief-like mental state here to find…it’s just a coarse-grained description of the overall picture. This leads to a Dennettian picture of folk psychology. Representation talk in neuroscience and mechanistic explanation is basically the same deal today.

When we say that some neural mechanism represents something, this is a more fine-grained description, but it’s still a gloss. Egan has said we should interpret Marr’s work in this way.

Computational theory → representation and algorithm → hardware implementation. (This is Egan’s view.)

The view has been criticised by people saying that content plays a role in mechanistic explanations. Bechtel says content has a heuristic role…in explaining a representational capacity, cognitive scientists are describing it in such a way that you can’t get rid of content. The speaker says that this helps you understand the context in which the mechanism operates, and so you can even talk about misrepresentations…not literally, but as surprising events compared to what we’d expect.

The causal mechanism is doing the explanatory work here. Many anti-reps will disagree with this though. We can still say that the content is still a gloss even when describing structural representations. What we really want to know is a causal story about how the mechanism works in relation to the world – the content is just a summary of the overall effect to be explained.

But is the speaker’s story anti-representationalist, he asks? It may well depend on other commitments. So, we’ve got causal mechanical vehicles. The contents are causally inert and perspectival – they’re just the way we interpret the system. They’re not integral to the mechanism’s function. He gestures towards the swampman as an example of this.

What does this mean? Mechanistic structures afford representational interpretation. There will thus be constraints on representational attribution to particular systems. Mechanist accounts talk about patterns of structures which are often said by representationalists to have some kind of content. Thinking about Dennett, someone has beliefs and desires in such a way that there are patterns which can be tracked that correspond to a belief story. So maybe representations are real in only a Dennettian way…Neural representational attributions are as real as folk psychological attributions. But representationalists want something stronger, they want content even when we’re not looking at it, and the speaker rejects this view.

Speaker notes: by the way, functions are an important part of mechanistic explanations and I don’t think proper functions are a real thing. Feel free to press me on this point as I don’t have time in the talk.

Next: computation. Computing mechanisms and computational explanations. There exists a mechanistic computational account, such that computations are just manipulations of a mechanism, so there’s no content. BUT, computational explanations often look like they’re invoking content. They seem to talk about abstract-level stuff (algorithms) which can be instantiated in different kinds of physical system. So the idea is that the same algorithm can be run by different computational systems (under some constraints, obviously). This is a challenge to the mechanistic account, and you can invoke the idea that the mathematical content is the same despite the mechanistic differences.

But you might also say that the mechanisms themselves are distinct, and that the “same mathematical content” being processed is itself just a gloss. They are different computing mechanisms, but with structures that make it useful to describe them as computing the same function – and that’s the gloss.
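A minimal sketch of that point, with my own example rather than anything from the talk: two Python functions with quite different internal structure that we would happily gloss as ‘computing the same function’.

```python
# Two mechanistically distinct procedures that share an input-output profile:
# both return 1 + 2 + ... + n, but via different causal structures.

def sum_by_iteration(n: int) -> int:
    """Accumulate the total step by step: one kind of mechanism."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n: int) -> int:
    """Use Gauss's closed form n(n+1)/2: a structurally different mechanism."""
    return n * (n + 1) // 2

# Same function computed, different mechanisms doing the computing.
assert all(sum_by_iteration(n) == sum_by_formula(n) for n in range(100))
# On the speaker's line, "they compute the same function" is a useful gloss
# over this shared profile, not a claim about shared mechanistic structure.
```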

But mechanistic explanations are local and non-contentful…maybe there are other kinds of explanation. Mathematical explanation? Non-causal explanation? Well, that’s probably fine, frankly. Those are different questions anyway. Perhaps they’re explanatory insofar as they depend on actual mechanistic computations – that’s a harder line that I might not want to take right now, but we’ll see in future.

Ok, cybernetics: Ross Ashby and his…hat thing. Discussing black boxes and the role they play: devices whose inputs and outputs you can look at, but not the innards of the mechanism. The quote says that, basically, once you put the system in a black box, what it seems to do is often miraculous! Perhaps the brain is like that too – it’s just that a lot of stuff is in black boxes, and positing content explains the system when we don’t know what’s in the black box.

E.g. you might think that a system where A is affected by I and Z to produce O can be explained by I and Z (where Z stores the previous input, I-1). But without knowing that Z is there, we might describe the system as A seeming to ‘remember’ the previous input – which is a gloss. Once we know what’s in the box, content falls out of the picture.
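Here’s a toy version of that example in code – my own reconstruction of the I/Z/O set-up, with the class and method names invented for illustration:

```python
# A black box whose output depends on the current input I and a hidden
# component Z that simply stores the previous input. Seen only from outside,
# the box appears to "remember".

class Box:
    def __init__(self):
        self._z = None          # Z: hidden store of the previous input

    def step(self, i):
        o = (i, self._z)        # output O depends on current I and on Z (= previous I)
        self._z = i             # update the hidden component
        return o

box = Box()
print(box.step("a"))  # ('a', None)
print(box.step("b"))  # ('b', 'a')  -- looks like the box 'remembers' "a"
print(box.step("c"))  # ('c', 'b')
# Once we open the box and see Z, the "memory" talk is revealed as a gloss on
# a simple mechanistic update rule.
```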

So representational systems are black boxes. Or, we could say that we’re drawing attention to the relevant features (Z) and labelling it content.

Ok, so the representational wars…Which side is the speaker on? Mechanistic explanations shouldn’t invoke rep content, but they’re usefully ascribable to summarise what’s going on…so is this deflationary realism? Sometimes I describe myself as a realist about folk psychology, so maybe I should be realist about representations too?

Is this account deflationary? Is it eliminativist? It really depends on your other theoretical commitments. Regardless, we should be looking for mechanistic explanations. The speaker just doesn’t think we should be massively concerned about content when we do so. After all, as others have argued, are we really arguing about the same kind of thing ‘content’?

Audience: Without reference to content, we miss a relational dimension to explaining why the mechanisms function as they do. A representationalist can agree with you, it’s just targeting a different explanandum.
Response: Sure, that’s fine, but some people will resist that if you think content is important in explaining computation. So what counts as a good scientific explanation? Some will think causal mechanical explanations are the gold standard, but I don’t want to go for that view. Anyway, the reason why the mechanism is as it is gets a social explanation, and that’s fine. In the natural domain you usually use an evolutionary explanation, but I want to get rid of normativity there, so I need to think about that more. But then you might give a computational view, and the level of abstraction is important to consider.

Audience: You said that representations serve as part of our explanatory practices but you needn’t invoke them in mechanistic explanations, so are they explanatory or not?

Response: They’re not necessary but you might want to retain them for their role in reducing the complexity of an explanation…or for explaining the bigger picture with normative considerations.

Audience: Why are you confused about representations? If contents are not doing causal work, then you’re anti-rep!
Response: Marcin, Do you think contents play a role in causal explanations?
Marcin: Yeah, you can. Realism may be forced by the Dennettian stance.
Response: Cool, back to the question. Then I guess I’m an anti-representationalist. But even then, when you think about evolutionary explanations, once you have all the causal mechanical details the functions drop out of the question.

Audience: Why do you contrast representational explanation and mechanistic explanation in the first place? Also you talk about explanation, but we talk about different levels of explanation, so why are we being so strict on this? The audience member elaborates with an example for quite a while.
Response: The distinction bit is because representational explanations appeal to content but mechanistic accounts don’t need to. Now, the levels thing. I commit currently to the view that the higher level explanations are glosses. They’re explanatory, but they’re black boxing parts of the explanation.

Audience: Most representationalists are teleosemanticists. But you’re wary about attributing proper functions. Darwin’s theory either demonstrated how teleology works, or it showed how to explain away teleology. But what do you want from design if not what natural selection produces?
Response: Cool thanks. With representation we either accept representation or explain it away with mechanisms so that’s a good useful comparison. Your point is that for teleology, we understand design now and it hasn’t been explained away. But we have to be thinking in the background about what normativity is and these are good things to be thinking about.

Audience: The different mechanisms implementing different algorithms example…what’s going on?
Response: She challenges that more complex mechanistic explanations are better when really you have three instances of the same computation in different physical systems instead of three different computations.

Audience: Why should we withhold a commitment to a properly full explanation, one which makes use of representation?
Response: Representation may not be essential to explanation. Or you could explain why content can’t play the core role others think it will play, which is the negative side to this positive account.

Beate Krickel: Unconscious mental representations in a mechanical world: a challenge for the anti-representationalist?

First steps in a new project on the unconscious mind. She previously worked on mechanistic explanations.

Unconscious mental phenomena (UMP): specific types are of interest today like unconscious vision – blindsight; unconscious attitudes; unconscious ‘mental conflict’ like self-deception, cognitive dissonance, and other forms of unconscious reasoning.

The starting point: They all rely on the same type of argument: The inference to the existence of unconscious mental phenomena.

  1. Subjects show a certain behaviour B.
  2. The subjects are not conscious of any mental causes of B.
  3. The best explanation for the conjunction of 1 and 2 is in terms of unconscious mental causes.

From this it is inferred that the explanation is true, and the unconscious mental cause is real.

This talk assumes premise 2 for the sake of argument.

Ok let’s look at the argument from the perspective of the anti-representationalists…is this argument still a reasonable one? The challenge is premise 3. They’re unconscious mental causes…what makes them mental?

Traditional marks of the mental:

Consciousness – doesn’t work here obviously

Intentionality – we’re presupposing anti-rep so this won’t work.

So what makes these mental?

You might say it….is….coupled with a cognitive system? Extended cognition people say this, but these states are supposed to be internal so how is this option really meant to work? Also, is being cognitive sufficient for being mental?

We could say that the behaviour produced merely looks like it was caused by a mental state, that doesn’t necessarily mean that it was. But this does not validate the IBE argument.

You might say that the causal profile is the same as those of a conscious mental state, and take a functional approach. The thing is, this…just isn’t true.

Maybe say it’s a personal-level cause? What do you even mean though? See Figdor 2018.

The speaker wants to say that we have to pose a different question. What is the explanatory role of the mentality here? Why ascribe mental states? We’re looking for causes and we’re looking for real causes, and this gives us good reason to suppose that scientists are really looking for mechanistic explanations for these phenomena!

So, mechanistic explanation: the contrastive explananda approach. These explanations explain contrasts, P vs. P*. A mechanistic explanation mentions only those mechanistic components in which the mechanism differs from a maximally similar (but not identical) mechanism. To answer the contrastive question, we find all the differences between the two maximally similar mechanisms. So, why P rather than P*? Because of the difference in components between the maximally similar mechanisms.

The idea is to apply this idea to UMP (unconscious mental phenomena).

UMP explanations explain two contrasts. Take for example unconscious vision.

The explanandum is two questions. (Why can they respond above chance to stimuli? Why can they react only with medium accuracy and not high accuracy?)

For the first question, invoking mental states is meant to explain the contrast. For the second, the answer is that the processing is unconscious rather than conscious. The problem, though, is that this doesn’t explain the explanatory purpose of positing mental states here.

Back to mechanisms: There’s a cool venn diagram on the board which is a bit too difficult to transcribe, sorry. Anyway, an example:

Unconscious vision. Assume you present the stimuli to a subject and measure the reaction time. What unconscious vision and blindness have in common is retina activation and motor area activation, whereas in the conscious visual case there’s visual cortex activation that there isn’t in the other cases. In between, shared by unconscious vision and conscious vision but not by blindness, is activation of the early visual areas. The described Venn diagram is then used to show why unconscious vision counts as mental.
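A rough rendering of that comparison, treating each mechanism as a set of components – my own stand-in for the untranscribed Venn diagram, using only the components mentioned above:

```python
# Mechanisms as sets of components; the labels are just those named in the talk.
blindness          = {"retina activation", "motor area activation"}
unconscious_vision = {"retina activation", "motor area activation",
                      "early visual area activation"}
conscious_vision   = {"retina activation", "motor area activation",
                      "early visual area activation", "visual cortex activation"}

# Why unconscious vision rather than blindness? The difference-making component:
print(unconscious_vision - blindness)         # {'early visual area activation'}
# Why unconscious rather than conscious vision?
print(conscious_vision - unconscious_vision)  # {'visual cortex activation'}
# Overlap with the conscious case, plus a difference from the non-mental case,
# is what is supposed to license calling the unconscious phenomenon 'mental'.
```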

There’s a second example with unconscious racist beliefs that supposedly shows a similar thing.

Summary: How to meet the challenge for the anti-representationalist. Can we defend the IBE to unconscious mental phenomena? First we need to rephrase the premises to make the explanandum clear. Then you say that the mechanism is similar to a conscious one but different to a non-mental one and that counts as mental. It’s more complex than that, I was concentrating on listening so I couldn’t write it fast enough.

The idea is that you can still infer that the UMP explanation is true, and the UMP is real, even if you’re an anti-representationalist, because of the mechanistic explanatory account offered. For the future, there are some things to consider. Does the mechanistic strategy work for all kinds of UMP? Unconscious vision and unconscious confabulation may differ in relevant ways. Also, what does ‘unconscious’ really mean? Not reportable? No attention? Automatized? Well, different notions of unconsciousness might just make different contrasts relevant. What is the relationship between ‘mental’ and specific mental terms, like belief? Do these all share mechanistic components not shared by all non-mental states? Unlikely, but…Maybe the category ‘mental’ or ‘mind’ is only relevant in folk psychological explanations. But this changes the explanatory virtue of ascribing mental states. So, mental-science and mental-folk may fulfil different explanatory roles – but to say that there’s no mental-science…well, then again lots of scientists use ‘mental-folk’. Be careful!

Marcin: Properties like flexibility of behaviour aren’t properties of components of mechanisms, they’re features of mechanisms or features of the products of the mechanisms, says someone in a paper. What do you think?
Response: Cool, sounds good I’ll read it.

Audience: I’m sceptical of there being a mark of the mental. If so, why should we care whether there’s unconscious mentality? But does that even matter for your purposes?
Response: In the end of the talk, maybe I’m gesturing at there being no mark of the mental, but there’s a mark of the vision for example.

Audience: Intentionality can’t be the mark of the mental for the anti-representationalist, you said, but an anti-representationalist could be committed to an intentional but not representational mind.
Response: Yeah, sure. But who takes that view anyway?
Audience: Some people.

Response: Sure, but that doesn’t help distinguish whether X is a mental case or not.
Audience: You focused on unconscious states that couldn’t become conscious, you know, subpersonal. What about just preconscious, standing, dispositional beliefs, etc.?
Response: Ok yeah, and some people even think that unconscious attitudes can be like this if you direct your attention properly. I think my approach could apply to them, but I definitely need to do more thinking, especially about the notion of the unconscious.

Audience: In the example venn diagrams, is it possible to have no overlap at all?
Response: Sure, if the mechanisms are different enough, but then they’re definitely not in question for being mental or conscious.

Audience: I completely missed this question.
Response: I zoned out.

Karina Vold: Multiply realised representations…with non-derived content

A longstanding objection to the extended mind thesis is that only biologically instantiated states can have non-derived content. The thesis will challenge this objection.

Where she’ll end up: external representations can have non-derived content.

The speaker describes the extended mind hypothesis. You all know what that is. She clarifies that all sorts of variants of this view basically are subject to the non-derived content objection so it’s fine to talk about it generally here.

Otto with the phone and twin Otto without the phone can be brain duplicates with different mental states. The vehicle of Otto’s mental representation is external to his biological one. Anyway, on to Brentano’s mark of the mental again. So, does Otto’s notebook/phone exhibit intentionality?

Rings on a tree represent age, but that content is derived. Adams and Aizawa (2001) say that non-derived content is the mark of the cognitive. All and only cognitive states represent intrinsically. (So these people endorse the representational claim generally, but in other ways avoid it). The vehicle externalist (the speaker) endorses representationalism, so this talk is representationalist.

So, the objection is that the notebook has content but it is derived. The symbols in the notebook are not “original”. As we’ve seen, trying to naturalise content is a big problem! A&A say that there is broad consensus that cognition involves non-derived content (assuming there’s content). They say that vehicle externalism is logically possible, but contingently false.

So, the challenge to the speaker is not just to show that external states could have non-derived content, but in some cases that they actually do.

Five replies:

  1. Is the derived/non-derived distinction even a coherent distinction to make? (Dennettian view.) The distinction is inspired by Searle, but Searle’s distinction applies to a system, whereas the distinction here applies to a bit of content, not the system, says Clark. Also, screw Searle.
  2. Why think that all genuinely cognitive, internal states have non-derived content? Clark says that surely some internal states must have derived contents, like when imagining the meaning of the overlap in Venn diagrams.
  3. Cognitive processes might involve computations over a conjunction of representations, some with non-derived content, some without, says Clark. Once again why think it’s an exclusive club for non-derived content?
  4. Even if biological representations can have non-derived content, this still permits cases of social cognitive extension. The speaker has argued this in a paper. E.g. Otto drawing on someone else’s mental states instead of a notebook. The representational vehicles are extra-cranial, but they’re biologically instantiated on this understanding. The speaker wants to show that non-biologically instantiated vehicles can have non-derived content.
  5. Why think a priori that only internal biological representations can have non-derived content, without explaining further why this is? This is the subject of the speaker’s current work with a co-author.

So, here’s the crux of the talk. Mathematicians rely on external markers, vehicles, and these cases will be appealed to.

Clark and Chalmers also label their view active externalism, contrasted against Burge’s passive externalism. So, step 1: active content externalism (Lyre 2016). On this view, individuals play an active role in determining content: e.g. in gang languages, the members of the group play an active role in determining the content of the words they use.

E.g. Jean graffitis a few buildings with an abstract symbol. It becomes a symbol for safe neighbourhoods, and over time the group use it to stay safe. This is an external vehicle where the content is determined by the group of users.

Step 2: Towards mathematical content externalism (with non-derived content). Mathematical concepts are cultural practices. At a higher level of expertise, a small number of experts determine what the symbols mean, and those meanings disseminate to the rest of the group.

Step 3: Mathematical active content externalism. E.g. negative integers. 6 - 2 = 4 is fine, but there’s no natural number expressing the result of 2 - 6. In maths, historically, you want an efficient theory and rules of operation for these sorts of cases. These numbers were thought to refer to abstract numbers, but there’s still the problem of what x - x equals. So you end up positing things like -4. From these sorts of rules being generated, you can get to theories about imaginary numbers. E.g. x squared = -1…what can we say about that case? You get x = the square root of -1, even if that seems a bit useless. These expressions proved their worth, though, in the mathematical communities over time. Then the rules for operations with such expressions were codified, and you get the practice of writing “i” for the square root of negative one. So, mathematicians introduced these new symbolic expressions, and once they’d proved their worth, the new symbols were understood to have certain content. So these days the content of these symbols is the same for everyone, though it wasn’t when they were first introduced.
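In symbols – my own compression of the story just told, not the speaker’s slides:

```latex
% Closure under subtraction forces new symbols: 2 - 6 has no value among the
% naturals, so one writes -4 and lays down rules for manipulating it; likewise
% x - x motivates 0. Closure under square roots forces another: x^2 = -1 has no
% real solution, so one writes i for \sqrt{-1} and codifies rules such as
% i^2 = -1. The symbol comes first, the rules fix its use, and the content is
% settled by the practice afterwards.
\[
  6 - 2 = 4, \qquad 2 - 6 = -4, \qquad x - x = 0, \qquad
  i := \sqrt{-1}, \qquad i^2 = -1 .
\]
```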

This process is called operative writing. The content of the symbol is constituted by the symbol itself and its relation to other symbols (Kramer 2003). These are real symbols, existing outside the brain, that really could have non-derived content. But is that convincing enough? The speaker gives another example:

Step 4: Non-Euclidean geometries. The parallel postulate: roughly, if a line crossing two other lines makes interior angles on one side summing to less than 180 degrees, the two lines meet on that side. There are some surfaces for which the parallel postulate does not hold…but over time these non-Euclidean geometries came to be shown to be relevantly connected to Euclidean geometry.

The point is that non-Euclidean geometries are not compatible with Euclidean geometry, but their content was discovered (reluctantly) as a result of Euclidean geometry. They have content which is non-derived…?

Objections: This wouldn’t help Otto’s notebook. These are examples of vehicle externalism, but perhaps then non-derived content just isn’t a mark of the mental? Also, numbers represent in virtue of social agreements and practices. The speaker says that there may be two notions of derived content:

  1. derived from representational states with intrinsic content, or
  2. derived from conventions or social practices.

In the cases the speaker described, these symbol meanings are understood according to social agreements, but they do have content because they were introduced without consideration for their representational content. And when you get content from social practices, it can come about in two ways. Firstly, something is assigned a content that already exists, e.g. a cartoon dog represents a dog. But this is interestingly different from the speaker’s cases, because there the content itself emerges from social conventions about the use of the symbol. The content here, the speaker says, comes from the social agreement, but it’s new content, or reluctantly agreed upon content. I think I butchered the explanation there, don’t take these notes too seriously at this point in the day…

Audience: This depends on your view of social ontology, right? On the nature of social objects in general? Do you have a view?
Response: The co-author wanted to avoid commitments here, but the speaker doesn’t have a horse in that race so the speaker is considering the issue.

Audience: Derived vs. non-derived is still a bit unclear for everyone here. Will you end up defining the non-derived out of existence? What could it possibly be?
Response: There’s cultural evolution behind these symbols, which is important, yeah. I agree and have work on this elsewhere.

Audience: I think non-derived content is fake…but take the other way, where the content emerges from the conventions: once we’ve got the rules, they can do unexpected things, which is the cool thing about maths. If you could say that there are similar kinds of processes going on in neural development – rules set by the structure of the brain, and surprising brain structures arise – this might cause a problem for the anti-content people.
Response: Cool! I also want to look at different domains instead of maths, like art, to expand on this.

Audience: They talk about some idea from a paper they read which the speaker might want to read.
Response: Cool I’ll write that down.

Audience: More helpful examples: Explanations in linguistic evolution – no one intends novel forms of words. Prepositions emerging from different cases in English. No one bestowed the meaning of “that”, it just happened over time.
Response: Cool! Thanks! These are meant to be enthusiastic responses, not sarcastic, by the way.

The end. I’m missing the last plenary discussion as I’m going to get a train and a plane now. Bye bye.
