Interacting with an Inferred World: The Challenge of Machine Learning for Humane Computer Interaction

Alan F. Blackwell, University of Cambridge, Computer Laboratory
Keywords: Machine learning, critical theory

Abstract

Classic theories of user interaction have been framed in relation to symbolic models of planning and problem solving, responding in part to the cognitive theories associated with AI research. However, the behavior of modern machine-learning systems is determined by statistical models of the world rather than explicit symbolic descriptions. Users increasingly interact with the world and with others in ways that are mediated by such models. This paper explores the way in which this new generation of technology raises fresh challenges for the critical evaluation of interactive systems. It closes with some proposed measures for the design of inference-based systems that are more open to humane design and use. 

References

Agre, P. E. (1997). Computation and Human Experience. Cambridge University Press.

Agre, P. E. (1997). Towards a critical technical practice: lessons learned in trying to reform AI. In Bowker, G., Star, S. L., & Turner, W. (Eds.), Social Science, Technical Systems and Cooperative Work. Lawrence Erlbaum, Mahwah, NJ, pp. 131-157.

Bennett, J. (2013). Forensic musicology: approaches and challenges. In The Timeline Never Lies: Audio Engineers Aiding Forensic Investigators in Cases of Suspected Music Piracy. Presented at the International Audio Engineering Society Convention, New York, USA, October 2013.

Blackwell, A. F. (2001). SWYN: A visual representation for regular expressions. In H. Lieberman (Ed.), Your Wish is My Command: Giving Users the Power to Instruct their Software. Morgan Kaufmann, pp. 245-270.

Blackwell, A. F. (2002). First steps in programming: A rationale for attention investment models. In Proceedings of the IEEE 2002 Symposia on Human Centric Computing Languages and Environments, pp. 2-10.

Blackwell, A. F. and Postgate, M. (2006). Programming culture in the 2nd-generation attention economy. Presentation at CHI Workshop on Entertainment Media at Home: Looking at the Social Aspects.

Borges, J. L. (1941/tr. 1962). The Library of Babel. Trans. by J. E. Irby in Labyrinths. Penguin, pp. 78-86.

boyd, d. and Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662-679.

Breiman, L. (2001). Statistical modeling: the two cultures. Statistical Science, 16(3), 199-231.

Card, S. K., Newell, A., and Moran, T. P. (1983). The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates, Hillsdale, NJ.

Collins, H. and Kusch, M. (1998). The Shape of Actions: What Humans and Machines Can Do. MIT Press.

Cope, D. (2003). Virtual Bach: Experiments in musical intelligence. Centaur Records.

Coyle, D., Moore, J., Kristensson, P.O., Fletcher, P. & Blackwell, A.F. (2012). I did that! Measuring users' experience of agency in their own actions. Proceedings of CHI 2012, pp. 2025-2034.

Dourish, P. (2001). Where the Action Is: The Foundations of Embodied Interaction. MIT Press.

Dourish, P. (2004). What we talk about when we talk about context. Personal and Ubiquitous Computing 8(1), 19-30.

Gill, K.S. (Ed.) (1986). Artificial Intelligence for Society. Wiley.

Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002). Heuristics and biases: The psychology of intuitive judgment. Cambridge University Press.

Gomer, R., schraefel, m.c. and Gerding, E. (2014). Consenting agents: semi-autonomous interactions for ubiquitous consent. In Proc. Int. Joint Conf. on Pervasive and Ubiquitous Computing (UbiComp '14).

Kepes, B. (2013). Google users - you're the product, not the customer. Forbes Magazine, 4 December 2013.

Leahu, L., Sengers, P., and Mateas, M. (2008). Interactionist AI and the promise of ubicomp, or, how to put your box in the world without putting the world in your box. In Proc. 10th int. conf. on Ubiquitous computing (UbiComp '08), pp. 134-143.

Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Proc 7th IEEE Int. Conf. on Computer Vision, pp. 1150-1157.

Madison, J. (2011). Damn You, Autocorrect! Virgin Books.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. (2013). Playing Atari with deep reinforcement learning. arXiv:1312.5602 http://arxiv.org/abs/1312.5602

Monbiot, G. (2013). Transatlantic trade and investment partnership as a global ban on left-wing politics. The Guardian, 4 Nov 2013. http://www.monbiot.com/2013/11/04/a-global-ban-on- left-wing-politics/

Newell, A. (1974). Notes on a proposal for a psychological research unit. Xerox Palo Alto Research Center, Applied Information-processing Psychology Project, AIP Memo 1.

Nguyen, A., Yosinski, J. and Clune, J. (2014). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. arXiv:1412.1897 [cs.CV] http://arxiv.org/abs/1412.1897

Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641-1646.

Norman, D. A. and Draper, S. W. (Eds.) (1986). User Centered System Design. Lawrence Erlbaum Associates, Hillsdale, NJ.

Norman, D. A. (1993). Cognition in the head and in the world: an introduction to the special issue on situated action. Cognitive Science, 17(1), 1-6.

Norvig, P. (2011). On Chomsky and the two cultures of statistical learning. http://norvig.com/chomsky.html

Pasquinelli, M. (2009). Google’s PageRank algorithm: a diagram of cognitive capitalism and the rentier of the common intellect. In K. Becker & F. Stalder (eds), Deep Search. London: Transaction Publishers.

Scherzinger, M. (2014). Musical property: Widening or withering? Journal of Popular Music Studies 26(1), 162-192.

Shotton, J., Sharp, T., Kipman, A., Fitzgibbon, A., Finocchio, M., Blake, A., Cook, M., and Moore, R. (2013). Real-time human pose recognition in parts from single depth images. Communications of the ACM, 56(1), 116-124.

de Souza, C.S. (2005) The Semiotic Engineering of Human-Computer Interaction. The MIT Press.

Suchman, L. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press.

United Nations General Assembly. (1948). Universal Declaration of Human Rights, Article 27.

Vercellone, C. (2008). The new articulation of wages, rent and profit in cognitive capitalism. Paper presented at The Art of Rent, Feb 2008, Queen Mary University School of Business and Management, London.

Ward, D. J., Blackwell, A. F. & MacKay, D. J. C. (2000). Dasher - a data entry interface using continuous gestures and language models. In Proc. UIST 2000, pp. 129-137.

Winograd, T. and Flores, F. (1986). Understanding computers and cognition: A new foundation for design. Intellect Books.

Zaslow, J. (2002). If TiVo thinks you are gay, here's how to set it straight. Wall Street Journal Online, Nov. 26, 2002. http://www.wsj.com/articles/SB1038261936872356908

Published
2015-10-05
How to Cite
Blackwell, A. (2015). Interacting with an Inferred World: The Challenge of Machine Learning for Humane Computer Interaction. Aarhus Series on Human Centered Computing, 1(1), 12. https://doi.org/10.7146/aahcc.v1i1.21197
Section
Interpreting Infrastructure