Artificial Intelligence and Privacy

Causes for Concern

DOI:

https://doi.org/10.7146/psj.v3i.143099

Keywords:

Artificial Intelligence, Heuristic Zones of Privacy, Machine Learning, Privacy

Abstract

Modern Artificial Intelligence (AI) technologies have a rapidly growing impact on a wide range of human activities. AI methods are being used in varied domains such as healthcare, material science, infrastructure engineering, social media, surveillance technologies, and even artistic expression. They have been used for drug discovery via protein folding prediction, power usage optimization through reinforcement learning, and facial recognition by means of image segmentation. Their effectiveness and wide-scale, unregulated deployment within our societies pose significant risks to our fundamental rights. Multiple existing AI methods have the potential to profoundly undermine our ability to safeguard our privacy. The societal impact of such AI models can be investigated through six concentric Heuristic Zones of privacy. These AI models can infer highly sensitive personal information, such as race, gender, and intelligence, from seemingly innocuous data sources, exceeding the capabilities of human experts. They are capable of generating increasingly accurate text and image recreations of our thoughts from non-invasive brain activity recordings such as magnetoencephalography and functional magnetic resonance imaging. Furthermore, prospective AI technologies raise concerns about existential risk to our civilization that extend beyond the erosion of privacy and other fundamental human rights.

Author Biography

Natacha Klein Kafer, Faculty of Theology, University of Copenhagen

I am currently researching the interplay between health and privacy in the early modern period and the eighteenth and nineteenth centuries. Beyond medical discourse on privacy, I focus on how popular healing knowledge survived in the private sphere despite efforts to suppress these practices, paying particular attention to the relationship between popular healing and “official” medical knowledge, witch trials, the legal framing of healing practices, and colonial encounters. I am particularly interested in how health is framed in trans-continental and trans-imperial contexts of healing knowledge transmission, and in the cross-connections between health, death, superstition, secrecy, and privacy.


Published

2024-05-20

How to Cite

Jurewicz, M., Klein Kafer, N., & Kran, E. (2024). Artificial Intelligence and Privacy: Causes for Concern. Privacy Studies Journal, 3, 1–32. https://doi.org/10.7146/psj.v3i.143099

Section

Position Papers