Citation: Shea, Nicholas (2025) Realist Representational Explanations of Agency Should Require Some Unity of Purpose. Philosophy and the Mind Sciences. (In press)
Shea_25_Repn_agency_AI_PhiMiSci_PrePr.pdf
Creative Commons: Attribution 4.0
Abstract
An increasing amount of work in AI aims to build computational systems that are not just tools for human users but agents in their own right. In AI, agency is often taken to consist simply in the capacity to pursue and achieve goals. However, different kinds of sophistication in representational processing produce different degrees or varieties of goal-directedness. Realist representational accounts of goal-directedness usually omit or fail to highlight a requirement which is central to instrumentalist representational accounts, namely that there is a certain coherence or unity of purpose amongst the different goals that the system pursues. That is a commitment of Dennett’s Intentional Stance, for example. The same requirement operates when biologists adopt the rational agent heuristic to make sense of the evolved phenotypes of an organism. This paper argues that this requirement should be included when specifying what it is for an AI system to be an agent. If the degree to which an AI system is an agent is captured in terms of what it represents and what computations it performs, mechanisms that help achieve unity of purpose are an important ingredient. One dimension along which an AI system becomes more agentive is increased sophistication in such mechanisms. Conversely, when the objective is to build AI systems that are agents, satisfying this additional requirement will endow machine learning systems with a deeper kind of goal-directedness or agency.
Metadata
| Creators: | Shea, Nicholas (0000-0002-2032-5705) |
| --- | --- |
| Related URLs: | |
| Subjects: | Philosophy |
| Keywords: | agency, goal-directedness, representational explanation, artificial intelligence |
| Divisions: | Institute of Philosophy |
| Dates: | |