Moving Beyond Content‐Specific Computation in Artificial Neural Networks

Citation: Shea, Nicholas (2021) Moving Beyond Content‐Specific Computation in Artificial Neural Networks. Mind & Language. ISSN 1468-0017

Creative Commons: Attribution-No Derivative Works 4.0

A new wave of deep neural networks (DNNs) has performed astonishingly well on a range of real‐world tasks. A basic DNN is trained to exhibit, in parallel, a large collection of different input‐output dispositions. While this is a good model of the way humans perform some tasks automatically and without deliberative reasoning, more is needed to approach the goal of human‐like artificial intelligence. Indeed, DNN models are increasingly being supplemented to overcome the limitations inherent in dispositional‐style computation. Examining these developments, and earlier theoretical arguments, reveals a deep distinction between two fundamentally different styles of computation, defined here for the first time: content‐specific computation and non‐content‐specific computation. Deep episodic RL networks, for example, combine content‐specific computations in a DNN with non‐content‐specific computations involving explicit memories. Human concepts are also involved in processes of both kinds. This suggests that the remarkable success of recent AI systems, and the special power of human conceptual thinking, are both due, in part, to the ability to mediate between content‐specific and non‐content‐specific computations. Hybrid systems take advantage of the complementary costs and benefits of each. Combining content‐specific and non‐content‐specific computations both has practical benefits and provides a better model of human cognitive competence.
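
For illustration only, here is a minimal NumPy sketch (not drawn from the paper) of the two styles of computation the abstract contrasts: dnn_policy is a toy stand-in for content-specific computation, since the particular input-output dispositions it exhibits are fixed by its weights, whereas EpisodicMemory.read is a non-content-specific operation, a generic nearest-neighbour lookup that proceeds in the same way whatever episodes happen to be stored. Every name and parameter below is an illustrative assumption, loosely in the spirit of episodic-control-style hybrid systems.

# Illustrative sketch only; all identifiers are hypothetical, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Content-specific computation: a small feedforward network whose weights
# (randomly initialised here; learned by training in a real system) fix a
# particular mapping from observations to action values.
W1 = rng.normal(size=(8, 4))   # input (4) -> hidden (8)
W2 = rng.normal(size=(2, 8))   # hidden (8) -> two action values

def dnn_policy(observation):
    """Map a 4-dimensional observation to two action values; the mapping is baked into W1, W2."""
    hidden = np.maximum(W1 @ observation, 0.0)   # ReLU layer
    return W2 @ hidden

# Non-content-specific computation: an explicit episodic memory whose read
# operation is a generic nearest-neighbour lookup, the same procedure
# whatever episodes have been written into it.
class EpisodicMemory:
    def __init__(self):
        self.keys, self.values = [], []          # stored episodes: (state key, return)

    def write(self, key, value):
        self.keys.append(key)
        self.values.append(value)

    def read(self, query, k=3):
        """Average the returns of the k stored episodes whose keys are nearest the query."""
        dists = [np.linalg.norm(query - key) for key in self.keys]
        nearest = np.argsort(dists)[:k]
        return float(np.mean([self.values[i] for i in nearest]))

# Populate the memory with fake episodes: the DNN's output serves as the key,
# paired with a (here random) return experienced on that occasion.
memory = EpisodicMemory()
for _ in range(20):
    obs = rng.normal(size=4)
    memory.write(dnn_policy(obs), float(rng.normal()))

new_obs = rng.normal(size=4)
print("DNN action values:", dnn_policy(new_obs))
print("Episodic value estimate:", memory.read(dnn_policy(new_obs)))

In a hybrid of this kind, the DNN supplies content-specific embeddings that serve as keys, and the explicit memory's generic read and write operations then exploit individual stored episodes, one concrete way of combining the complementary costs and benefits the abstract mentions.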

Creators: Shea, Nicholas (0000-0002-2032-5705)
DOI: doi.org/10.1111/mila.12387
Subjects: Philosophy
Keywords: computation, deep neural networks, distributed representation, content‐specific, explicit memory, concepts
Divisions: Institute of Philosophy
Collections: Legal Biography
Dates:
  • 19 May 2021 (accepted)
  • 5 October 2021 (published)
