In fiction

The word "robot" itself was coined by Karel Čapek in his 1921 play R.U.R., the title standing for "Rossum's Universal Robots".

Thought-capable artificial beings have appeared as storytelling devices since antiquity,[291] and have been a persistent theme in science fiction.[292]

A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.[293]

Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most notably in his Robot series, beginning with the 1942 short story "Runaround". Asimov's laws are often brought up during lay discussions of machine ethics;[294] while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.[295]

Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.[296]

See also

Explanatory notes

  • This list of intelligent traits is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)
  • This list of tools is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)
  • It is among the reasons that expert systems proved to be inefficient for capturing knowledge.[30][31]
  • "Rational agent" is general term used in economics, philosophy and theoretical artificial intelligence. It can refer to anything that directs its behavior to accomplish goals, such as a person, an animal, a corporation, a nation, or, in the case of AI, a computer program.
  • Alan Turing discussed the centrality of learning as early as 1950, in his classic paper "Computing Machinery and Intelligence".[43] In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine".[44]
  • See AI winter § Machine translation and the ALPAC report of 1966
  • Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another (a worked example follows these notes). AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.[88]
  • Expectation-maximization, one of the most popular algorithms in machine learning, allows clustering in the presence of unknown latent variables (see the sketch following these notes).[90]
  • Russell and Norvig suggest the alternative term "computational graphs" – that is, an abstract network (or "graph") where the edges and nodes are assigned numeric values (a toy example appears after these notes).
  • Some forms of deep neural networks (without a specific learning algorithm) were described by: Alan Turing (1948);[117] Frank Rosenblatt (1957);[117] Karl Steinbuch and Roger David Joseph (1961).[118] Deep or recurrent networks that learned (or used gradient descent) were developed by: Ernst Ising and Wilhelm Lenz (1925);[119] Oliver Selfridge (1959);[118] Alexey Ivakhnenko and Valentin Lapa (1965);[119] Kaoru Nakano (1977);[120] Shun-Ichi Amari (1972);[120] John Joseph Hopfield (1982).[120] Backpropagation was independently discovered by: Henry J. Kelley (1960);[117] Arthur E. Bryson (1962);[117] Stuart Dreyfus (1962);[117] Arthur E. Bryson and Yu-Chi Ho (1969);[117] Seppo Linnainmaa (1970);[121] Paul Werbos (1974).[117] In fact, backpropagation and gradient descent are straightforward applications of Gottfried Leibniz's chain rule in calculus (1676),[122] and backpropagation is essentially identical (for one layer) to the method of least squares, developed independently by Johann Carl Friedrich Gauss (1795) and Adrien-Marie Legendre (1805).[123] There are probably many others, yet to be discovered by historians of science. (A small sketch of the one-layer case follows these notes.)
  • Geoffrey Hinton said, of his work on neural networks in the 1990s, "our labeled datasets were thousands of times too small. [And] our computers were millions of times too slow"[124]
  • Including Jon Kleinberg (Cornell), Sendhil Mullainathan (University of Chicago), Alexandra Chouldechova (Carnegie Mellon) and Sam Corbett-Davies (Stanford)[153]
  • Moritz Hardt (a director at the Max Planck Institute for Intelligent Systems) argues that machine learning "is fundamentally the wrong tool for a lot of domains, where you're trying to design interventions and mechanisms that change the world."[158]
  • When the law was passed in 2018, it still contained a form of this provision.
  • This is the United Nations' definition, and includes things like land mines as well.[171]
  • See table 4; 9% is both the OECD average and the US average.[184]
  • Sometimes called a "robopocalypse".[191]
  • "Electronic brain" was the term used by the press around this time.[226]
  • Daniel Crevier wrote, "the conference is generally recognized as the official birthdate of the new science."[230] Russell and Norvig called the conference "the inception of artificial intelligence."[228]
  • Russell and Norvig wrote "for the next 20 years the field would be dominated by these people and their students."[231]
  • Russell and Norvig wrote "it was astonishing whenever a computer did anything kind of smartish".[232]
  • The programs described are Arthur Samuel's checkers program for the IBM 701, Daniel Bobrow's STUDENT, Newell and Simon's Logic Theorist and Terry Winograd's SHRDLU.
  • Russell and Norvig write: "in almost all cases, these early systems failed on more difficult problems"[233]
  • Embodied approaches to AI[239] were championed by Hans Moravec[240] and Rodney Brooks[241] and went by many names, including nouvelle AI[241] and developmental robotics.[242]
  • Matteo Wong wrote in The Atlantic: "Whereas for decades, computer-science fields such as natural-language processing, computer vision, and robotics used extremely different methods, now they all use a programming method called "deep learning." As a result, their code and approaches have become more similar, and their models are easier to integrate into one another."[248]
  • Jack Clark wrote in Bloomberg: "After a half-decade of quiet breakthroughs in artificial intelligence, 2015 has been a landmark year. Computers are smarter and learning faster than ever," and noted that the number of software projects that use machine learning at Google increased from a "sporadic usage" in 2012 to more than 2,700 projects in 2015.[250]
  • See Problem of other minds
  • Nils Nilsson wrote in 1983: "Simply put, there is wide disagreement in the field about what AI is all about."[264]
  • Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[269]
  • Searle presented this definition of "Strong AI" in 1999.[279] Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."[280] Strong AI is defined similarly by Russell and Norvig: "Strong AI – the assertion that machines that do so are actually thinking (as opposed to simulating thinking)."[281]
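
To make the conditional-independence note concrete, here is a minimal naive-Bayes sketch in Python. All probabilities are made up for illustration and have nothing to do with AdSense's actual network; the point is only that assuming features are conditionally independent given the class turns posterior inference into a simple product of per-feature terms, so parameters grow linearly rather than exponentially in the number of features.

```python
# Hypothetical priors and per-word likelihoods P(word | class).
p_class = {"spam": 0.4, "ham": 0.6}
p_word = {
    "spam": {"offer": 0.7, "meeting": 0.1},
    "ham":  {"offer": 0.2, "meeting": 0.6},
}

def posterior(words):
    # Unnormalized P(class | words) = P(class) * prod_w P(w | class),
    # valid only under the conditional-independence assumption.
    scores = {}
    for c in p_class:
        score = p_class[c]
        for w in words:
            score *= p_word[c][w]
        scores[c] = score
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

print(posterior(["offer"]))             # leans toward "spam" (~0.70)
print(posterior(["offer", "meeting"]))  # second word pulls toward "ham"
```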
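Next, a minimal sketch of expectation-maximization for a two-component one-dimensional Gaussian mixture, where the cluster assignment of each point is the unknown latent variable. The data, initialization, and fixed unit variances are illustrative choices, not taken from any cited system.

```python
import math
import random

random.seed(0)
# Two well-separated clusters, with true means 0 and 5.
data = [random.gauss(0, 1) for _ in range(100)] + \
       [random.gauss(5, 1) for _ in range(100)]

def pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu = [-1.0, 1.0]    # initial means
sigma = [1.0, 1.0]  # variances held fixed for brevity
pi = [0.5, 0.5]     # mixing weights

for _ in range(50):
    # E-step: responsibility of each component for each point.
    resp = []
    for x in data:
        w = [pi[k] * pdf(x, mu[k], sigma[k]) for k in range(2)]
        z = sum(w)
        resp.append([wk / z for wk in w])
    # M-step: re-estimate mixing weights and means from responsibilities.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk

print(mu)  # approaches the true means, roughly [0, 5]
```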
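A toy illustration of the "computational graph" reading of neural networks mentioned in the notes: nodes hold numeric values, edges hold numeric weights, and each node's value is computed from its incoming edges. The topology, weights, and choice of a ReLU-style nonlinearity are assumptions made for the example.

```python
# Each node maps to the list of (source_node, weight) pairs feeding it.
graph = {
    "x1": [], "x2": [],                # input nodes
    "h":  [("x1", 0.5), ("x2", 0.3)],  # hidden node
    "y":  [("h", 2.0)],                # output node
}

def evaluate(inputs):
    values = dict(inputs)
    for node in ["h", "y"]:  # visit non-input nodes in topological order
        total = sum(values[src] * w for src, w in graph[node])
        values[node] = max(0.0, total)  # ReLU-style nonlinearity
    return values["y"]

print(evaluate({"x1": 1.0, "x2": 2.0}))  # h = 0.5 + 0.6 = 1.1, so y = 2.2
```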
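Finally, a small sketch of the closing observation in the note on backpropagation: for a single linear layer, the chain-rule gradient of squared error drives gradient descent to the same answer as the closed-form least-squares fit. The data and learning rate are invented for the demonstration.

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]  # roughly y = 2x

# Closed-form least squares (no intercept): w = sum(x*y) / sum(x*x).
w_ls = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Gradient descent via the chain rule: d/dw (w*x - y)^2 = 2*(w*x - y)*x.
w = 0.0
for _ in range(1000):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.01 * grad

print(w_ls, w)  # the two estimates agree closely (about 2.03)
```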

References

  • Google (2016).
  • Copeland, J (Ed.) (2004). The Essential Turing: the ideas that gave birth to the computer age. Oxford: Clarendon Press. ISBN 0-19-825079-7.
  • Dartmouth workshop: The proposal:
  • Successful programs of the 60s:
  • Funding initiatives in the early 80s: Fifth Generation Project (Japan), Alvey (UK), Microelectronics and Computer Technology Corporation (US), Strategic Computing Initiative (US):
  • First AI Winter, Lighthill report, Mansfield Amendment:
  • Second AI Winter:
  • Deep learning revolution, AlexNet:
  • Toews (2023).
  • Frank (2023).
  • Artificial general intelligence: Proposal for the modern version: Warnings of overspecialization in AI from leading researchers:
  • Russell & Norvig (2021, §1.2)
  • Problem solving, puzzle solving, game playing and deduction:
  • Uncertain reasoning:
  • Intractability and efficiency and the combinatorial explosion:
  • Psychological evidence of the prevalence of sub-symbolic reasoning and knowledge:
  • Knowledge representation and knowledge engineering:
  • Smoliar & Zhang (1994).
  • Neumann & Möller (2008).
  • Kuperman, Reichley & Bailey (2006).
  • McGarry (2005).
  • Bertini, Del Bimbo & Torniai (2006).
  • Russell & Norvig (2021), pp. 272.
  • Representing categories and relations: Semantic networks, description logics, inheritance (including frames and scripts):
  • Representing events and time: Situation calculus, event calculus, fluent calculus (including solving the frame problem):
  • Causal calculus:
  • Representing knowledge about knowledge: Belief calculus, modal logics:
  • Default reasoning, Frame problem, default logic, non-monotonic logics, circumscription, closed world assumption, abduction: (Poole et al. place abduction under "default reasoning". Luger et al. place this under "uncertain reasoning").
  • Breadth of commonsense knowledge:
  • Newquist (1994), p. 296.
  • Crevier (1993), pp. 204–208.
  • Gertner (2023).
  • Russell & Norvig (2021), p. 528.
  • Automated planning:
  • Automated decision making, Decision theory:
  • Classical planning:
  • Sensorless or "conformant" planning, contingent planning, replanning (a.k.a online planning):
  • Uncertain preferences: Inverse reinforcement learning:
  • Information value theory:
  • Markov decision process:
  • Game theory and multi-agent decision theory:
  • Learning:
  • Turing (1950).
  • Solomonoff (1956).
  • Unsupervised learning:
  • Supervised learning:
  • Reinforcement learning:
  • Transfer learning:
  • "Artificial Intelligence (AI): What Is AI and How Does It Work? | Built In". builtin.com. Retrieved 30 October 2023.
  • Computational learning theory:
  • Natural language processing (NLP):
  • Subproblems of NLP:
  • Russell & Norvig (2021), p. 856–858.
  • Dickson (2022).
  • Modern statistical and deep learning approaches to NLP:
  • Vincent (2019).
  • Russell & Norvig (2021), p. 875–878.
  • Bushwick (2023).
  • Computer vision:
  • Russell & Norvig (2021), pp. 849–850.
  • Russell & Norvig (2021), pp. 895–899.
  • Russell & Norvig (2021), pp. 899–901.
  • Russell & Norvig (2021), pp. 931–938.
  • MIT AIL (2014).
  • Affective computing:
  • Waddell (2018).
  • Poria et al. (2017).
  • Search algorithms:
  • State space search:
  • Russell & Norvig (2021), §11.2.
  • Uninformed searches (breadth first search, depth-first search and general state space search):
  • Heuristic or informed searches (e.g., greedy best first and A*):
  • Adversarial search:
  • Local or "optimization" search:
  • Evolutionary computation:
  • Merkle & Middendorf (2013).
  • Logic:
  • Propositional logic:
  • First-order logic and features such as equality:
  • Logical inference:
  • Russell & Norvig (2021), §8.3.1.
  • Resolution and unification:
  • Forward chaining, backward chaining, Horn clauses, and logical deduction as search:
  • citation in progress
  • Fuzzy logic:
  • Stochastic methods for uncertain reasoning:
  • Bayesian networks:
  • Domingos (2015), chapter 6.
  • Bayesian inference algorithm:
  • Domingos (2015), p. 210.
  • Bayesian learning and the expectation-maximization algorithm:
  • Bayesian decision theory and Bayesian decision networks:
  • Stochastic temporal models: Hidden Markov model: Kalman filters: Dynamic Bayesian networks:
  • Decision theory and decision analysis:
  • Information value theory:
  • Markov decision processes and dynamic decision networks:
  • Game theory and mechanism design:
  • Statistical learning methods and classifiers:
  • Decision trees:
  • Non-parametric learning models such as K-nearest neighbor and support vector machines:
  • Domingos (2015), p. 152.
  • Naive Bayes classifier:
  • Neural networks:
  • Russell & Norvig (2021), p. 750.
  • Gradient calculation in computational graphs, backpropagation, automatic differentiation:
  • Universal approximation theorem: The theorem:
  • Feedforward neural networks:
  • Recurrent neural networks:
  • Perceptrons:
  • Deep learning:
  • Convolutional neural networks:
  • "What is machine learning?". MIT Technology Review. 17 November 2018.
  • Schulz & Behnke (2012).
  • Deng & Yu (2014), pp. 199–200.
  • Ciresan, Meier & Schmidhuber (2012).
  • Russell & Norvig (2021), p. 751.
  • Russell & Norvig (2021), p. 785.
  • Schmidhuber (2022), §5.
  • Schmidhuber (2022), §6.
  • Schmidhuber (2022), §7.
  • Schmidhuber (2022), §8.
  • Schmidhuber (2022), §2.
  • Schmidhuber (2022), §3.
  • Quoted in Christian (2020, p. 22)
  • Kobielus (2019).
  • Davenport, T; Kalakota, R (June 2019). "The potential for artificial intelligence in healthcare". Future Healthc J. 6 (2): 94–98. doi:10.7861/futurehosp.6-2-94. PMC 6616181. PMID 31363513.
  • Bax, Monique; Thorpe, Jordan; Romanov, Valentin (December 2023). "The future of personalized cardiovascular medicine demands 3D and 4D printing, stem cells, and artificial intelligence". Frontiers in Sensors. 4. doi:10.3389/fsens.2023.1294721. ISSN 2673-5067.
  • Jumper, J; Evans, R; Pritzel, A (2021). "Highly accurate protein structure prediction with AlphaFold". Nature. 596 (7873): 583–589. Bibcode:2021Natur.596..583J. doi:10.1038/s41586-021-03819-2. PMC 8371605. PMID 34265844.
  • Simonite (2016).
  • Russell & Norvig (2021), p. 987.
  • Laskowski (2023).
  • GAO (2022).
  • Valinsky (2019).
  • Russell & Norvig (2021), p. 991.
  • Russell & Norvig (2021), p. 991–992.
  • Christian (2020), p. 63.
  • Vincent (2022).
  • Reisner (2023).
  • Alter & Harris (2023).
  • Nicas (2018).
  • "Trust and Distrust in America". 22 July 2019.
  • Williams (2023).
  • Taylor & Hern (2023).
  • Rose (2023).
  • CNA (2019).
  • Goffrey (2008), p. 17.
  • Berdahl et al. (2023); Goffrey (2008, p. 17); Rose (2023); Russell & Norvig (2021, p. 995)
  • Algorithmic bias and Fairness (machine learning):
  • Christian (2020), p. 25.
  • Russell & Norvig (2021), p. 995.
  • Grant & Hill (2023).
  • Larson & Angwin (2016).
  • Christian (2020), pp. 67–70.
  • Christian (2020, pp. 67–70); Russell & Norvig (2021, pp. 993–994)
  • Russell & Norvig (2021, p. 995); Lipartito (2011, p. 36); Goodman & Flaxman (2017, p. 6); Christian (2020, pp. 39–40, 65)
  • Quoted in Christian (2020, p. 65).
  • Russell & Norvig (2021, p. 994); Christian (2020, pp. 40, 80–81)
  • Quoted in Christian (2020, p. 80)
  • Dockrill (2022).
  • Sample (2017).
  • "Black Box AI". 16 June 2023.
  • Christian (2020), p. 110.
  • Christian (2020), pp. 88–91.
  • Christian (2020, p. 83); Russell & Norvig (2021, p. 997)
  • Christian (2020), p. 91.
  • Christian (2020), p. 83.
  • Verma (2021).
  • Rothman (2020).
  • Christian (2020), pp. 105–108.
  • Christian (2020), pp. 108–112.
  • Russell & Norvig (2021), p. 989.
  • Robitzski (2018); Sainato (2015)
  • Russell & Norvig (2021), p. 987-990.
  • Russell & Norvig (2021), p. 988.
  • Harari (2018).
  • Buckley, Chris; Mozur, Paul (22 May 2019). "How China Uses High-Tech Surveillance to Subdue Minorities". The New York Times.
  • "Security lapse exposed a Chinese smart city surveillance system". 3 May 2019. Archived from the original on 7 March 2021. Retrieved 14 September 2020.
  • "AI traffic signals to be installed in Bengaluru soon". NextBigWhat. 24 September 2019. Retrieved 1 October 2019.
  • Urbina et al. (2022).
  • Tarnoff, Ben (4 August 2023). "Lessons from Eliza". The Guardian Weekly. pp. 34–9.
  • E McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2022) 51(3) Industrial Law Journal 511–559 Archived 27 May 2023 at the Wayback Machine
  • Ford & Colvin (2015);McGaughey (2022)
  • IGM Chicago (2017).
  • Arntz, Gregory & Zierahn (2016), p. 33.
  • Lohr (2017); Frey & Osborne (2017); Arntz, Gregory & Zierahn (2016, p. 33)
  • Morgenstern (2015).
  • Mahdawi (2017); Thompson (2014)
  • Zhou, Viola (11 April 2023). "AI is already taking video game illustrators' jobs in China". Rest of World. Retrieved 17 August 2023.
  • Carter, Justin (11 April 2023). "China's game art industry reportedly decimated by growing AI use". Game Developer. Retrieved 17 August 2023.
  • Cellan-Jones (2014).
  • Russell & Norvig 2021, p. 1001.
  • Bostrom (2014).
  • Russell (2019).
  • Bostrom (2014); Müller & Bostrom (2014); Bostrom (2015)
  • Harari (2023).
  • Müller & Bostrom (2014).
  • Leaders' concerns about the existential risks of AI around 2015:
  • Arguments that AI is not an imminent risk:
  • Christian (2020), pp. 67, 73.
  • Valance (2023).
  • Yudkowsky (2008).
  • Anderson & Anderson (2011).
  • AAAI (2014).
  • Wallach (2010).
  • Russell (2019), p. 173.
  • Alan Turing Institute (2019). "Understanding artificial intelligence ethics and safety" (PDF).
  • Alan Turing Institute (2023). "AI Ethics and Governance in Practice" (PDF).
  • Floridi, Luciano; Cowls, Josh (23 June 2019). "A Unified Framework of Five Principles for AI in Society". Harvard Data Science Review. 1 (1). doi:10.1162/99608f92.8cd550d1. S2CID 198775713.
  • Buruk, Banu; Ekmekci, Perihan Elif; Arda, Berna (1 September 2020). "A critical perspective on guidelines for responsible and trustworthy artificial intelligence". Medicine, Health Care and Philosophy. 23 (3): 387–399. doi:10.1007/s11019-020-09948-1. ISSN 1572-8633. PMID 32236794. S2CID 214766800.
  • Kamila, Manoj Kumar; Jasrotia, Sahil Singh (1 January 2023). "Ethical issues in the development of artificial intelligence: recognizing the risks". International Journal of Ethics and Systems. ahead-of-print (ahead-of-print). doi:10.1108/IJOES-05-2023-0107. ISSN 2514-9369. S2CID 259614124.
  • Regulation of AI to mitigate risks:
  • Vincent (2023).
  • Stanford University (2023).
  • UNESCO (2021).
  • Kissinger (2021).
  • Altman, Brockman & Sutskever (2023).
  • VOA News (25 October 2023). "UN Announces Advisory Body on Artificial Intelligence".
  • Edwards (2023).
  • Kasperowicz (2023).
  • Fox News (2023).
  • Milmo, Dan (3 November 2023). "Hope or Horror? The great AI debate dividing its pioneers". The Guardian Weekly. pp. 10–12.
  • "The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023". GOV.UK. 1 November 2023. Archived from the original on 1 November 2023. Retrieved 2 November 2023.
  • "Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration". GOV.UK (Press release). Archived from the original on 1 November 2023. Retrieved 1 November 2023.
  • Berlinski (2000).
  • "Google books ngram".
  • AI's immediate precursors:
  • Russell & Norvig (2021), p. 17.
  • See "A Brief History of Computing" at AlanTuring.net.
  • Crevier (1993), pp. 47–49.
  • Russell & Norvig (2003), p. 17.
  • Russell & Norvig (2003), p. 18.
  • Russell & Norvig (2021), p. 21.
  • Lighthill (1973).
  • Russell & Norvig (2021), p. 22.
  • Expert systems:
  • Russell & Norvig (2021), p. 24.
  • Nilsson (1998), p. 7.
  • McCorduck (2004), pp. 454–462.
  • Moravec (1988).
  • Brooks (1990).
  • Developmental robotics:
  • Russell & Norvig (2021), p. 25.
  • Russell & Norvig (2021), p. 26.
  • Formal and narrow methods adopted in the 1990s:
  • AI widely used in the late 1990s:
  • Wong (2023).
  • Moore's Law and AI:
  • Clark (2015b).
  • Big data:
  • "Intellectual Property and Frontier Technologies". WIPO. Archived from the original on 2 April 2022. Retrieved 30 March 2022.
  • DiFeliciantonio (2023).
  • Goswami (2023).
  • Turing (1950), p. 1.
  • Turing's original publication of the Turing test in "Computing machinery and intelligence": Historical influence and philosophical implications:
  • Turing (1950), Under "The Argument from Consciousness".
  • Russell & Norvig (2021), chpt. 2.
  • Russell & Norvig (2021), p. 3.
  • Maker (2006).
  • McCarthy (1999).
  • Minsky (1986).
  • "What Is Artificial Intelligence (AI)?". Google Cloud Platform. Archived from the original on 31 July 2023. Retrieved 16 October 2023.
  • Nilsson (1983), p. 10.
  • Haugeland (1985), pp. 112–117.
  • Physical symbol system hypothesis: Historical significance:
  • Moravec's paradox:
  • Dreyfus' critique of AI: Historical significance and philosophical implications:
  • Crevier (1993), p. 125.
  • Langley (2011).
  • Katz (2012).
  • Neats vs. scruffies, the historic debate: A classic example of the "scruffy" approach to intelligence: A modern example of neat AI and its aspirations in the 21st century:
  • Pennachin & Goertzel (2007).
  • Roberts (2016).
  • Russell & Norvig (2021), p. 986.
  • Chalmers (1995).
  • Dennett (1991).
  • Horst (2005).
  • Searle (1999).
  • Searle (1980), p. 1.
  • Russell & Norvig (2021), p. 9817.
  • Searle's Chinese room argument: Discussion:
  • Robot rights:
  • Evans (2015).
  • McCorduck (2004), pp. 19–25.
  • Henderson (2007).
  • The intelligence explosion and technological singularity: I. J. Good's "intelligence explosion"; Vernor Vinge's "singularity":
  • Russell & Norvig (2021), p. 1005.
  • Transhumanism:
  • AI as evolution:
  • AI in myth:
  • McCorduck (2004), pp. 340–400.
  • Buttazzo (2001).
  • Anderson (2008).
  • McCauley (2007).
  • Galvan (1997).
