Ethical machines and alignment

Friendly AI refers to machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment, and it must be completed before AI becomes an existential risk.[201]

Machines with intelligence have the potential to use it to make ethical decisions. The field of machine ethics, also called computational morality, provides machines with ethical principles and procedures for resolving ethical dilemmas;[202] it was founded at an AAAI symposium in 2005.[203]

Other approaches include Wendell Wallach's "artificial moral agents"[204] and Stuart J. Russell's three principles for developing provably beneficial machines.[205]

Frameworks

The ethical permissibility of artificial intelligence projects can be tested while designing, developing, and implementing an AI system. An AI framework such as the Care and Act Framework, developed by the Alan Turing Institute and containing the SUM values, tests projects in four main areas:[206][207]

  • RESPECT the dignity of individual people
  • CONNECT with other people sincerely, openly and inclusively
  • CARE for the wellbeing of everyone
  • PROTECT social values, justice and the public interest

Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others;[208] however, these principles are not without criticism, especially regarding the people chosen to contribute to these frameworks.[209]

Promoting the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development, and implementation, as well as collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[210]

Regulation

OpenAI CEO Sam Altman testifies about AI regulation before a United States Senate subcommittee, 2023

The regulation of artificial intelligence is the development of public-sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms.[211] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.[212] According to Stanford's AI Index, the annual number of AI-related laws passed in the 127 surveyed countries jumped from one in 2016 to 37 in 2022.[213][214] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[215] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, the United Arab Emirates, the US, and Vietnam. Others were in the process of elaborating their own AI strategies, including Bangladesh, Malaysia, and Tunisia.[215]

The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.[215] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[216] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may arrive in less than 10 years.[217] Also in 2023, the United Nations launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials, and academics.[218]

In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".[213] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.[219] In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".[220][221]

In November 2023, the first global AI Safety Summit was held at Bletchley Park in the UK to discuss the near- and far-term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[222] Twenty-eight countries, including the United States, China, and the European Union, issued a declaration at the start of the summit, calling for international cooperation to manage the challenges and risks of artificial intelligence.[223][224]

History

The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate both mathematical deduction and formal reasoning, a claim known as the Church–Turing thesis.[225] This, along with concurrent discoveries in cybernetics and information theory, led researchers to consider the possibility of building an "electronic brain".[r][227]
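To make the symbol-shuffling idea concrete, the following is a minimal illustrative sketch (our own, not drawn from the cited sources) of a table-driven Turing-style machine whose entire repertoire is reading, writing, and moving over "0"s and "1"s, yet which carries out binary arithmetic:

```python
# A toy Turing-style machine (illustrative, not from the cited sources).
# The transition table below performs binary increment by symbol shuffling.

def run_machine(tape, rules, state):
    tape, head, blank = list(tape), len(tape) - 1, "0"
    while state != "halt":
        if head < 0:                      # grow the tape when the head
            tape.insert(0, blank)         # runs off either end
            head = 0
        elif head >= len(tape):
            tape.append(blank)
        write, move, state = rules[(state, tape[head])]
        tape[head] = write                # write a symbol, move the head,
        head += 1 if move == "R" else -1  # and switch state
    return "".join(tape)

# Increment: sweep left from the least significant bit, propagating a carry.
increment = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),   # 0 + carry -> 1, done
}

print(run_machine("1011", increment, "carry"))  # 11 + 1 = 12 -> "1100"
```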

Alan Turing was thinking about machine intelligence at least as early as 1941, when he circulated a paper on machine intelligence that may be the earliest paper in the field of AI, though it is now lost.[2] The first available paper generally recognized as "AI" was McCulloch and Pitts's 1943 design for Turing-complete "artificial neurons", the first mathematical model of a neural network.[228] That paper was influenced by Turing's earlier 1936 paper "On Computable Numbers", which used similar two-state boolean elements, but was the first to apply them to neuronal function.[2]
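As an illustration (our own, not the 1943 paper's notation), a McCulloch–Pitts unit is a two-state threshold element over binary inputs; composing such units yields any boolean function:

```python
# Illustrative sketch (ours): a McCulloch-Pitts unit is a two-state threshold
# element that fires (outputs 1) exactly when the weighted sum of its binary
# inputs reaches a threshold.

def mp_unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Single units realize basic logic gates (a negative weight acts as inhibition):
AND = lambda a, b: mp_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_unit([a],    [-1],   threshold=0)

# Composing units gives any boolean circuit, e.g. XOR from a two-layer net:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {XOR(a, b)}")  # prints 0, 1, 1, 0
```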

Alan Turing used the term "machine intelligence" during his life; after his death in 1954 the field came to be known as "artificial intelligence". In 1950 Turing published the best known of his papers, "Computing Machinery and Intelligence", which introduced to the general public his concept of what is now known as the Turing test. There followed three radio broadcasts on AI by Turing: the lectures "Intelligent Machinery, A Heretical Theory" and "Can Digital Computers Think?", and the panel discussion "Can Automatic Calculating Machines Be Said to Think". By 1956 computer intelligence had been actively pursued for more than a decade in Britain; the earliest AI programs were written there in 1951–52.[2]

In 1951, the first checkers and chess programs that a person could play against were written for the Ferranti Mark 1 computer at the University of Manchester.[229] The field of American AI research was founded at a workshop at Dartmouth College in 1956.[s][3] The attendees became the leaders of AI research in the 1960s.[t] They and their students produced programs that the press described as "astonishing":[u] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.[v][4] Artificial intelligence laboratories were set up at a number of British and US universities in the late 1950s and early 1960s.[2]

They had, however, underestimated the difficulty of the problem.[w] Both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill[234] and ongoing pressure from the U.S. Congress to fund more productive projects. Minsky and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether.[235] The "AI winter", a period when obtaining funding for AI projects was difficult, followed.[6]
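What Perceptrons actually established is narrower than the conclusion that was drawn from it: a single threshold unit can represent only linearly separable functions, with XOR as the standard counterexample. A short illustrative sketch (our own) of the perceptron learning rule on XOR shows training never converging:

```python
# Illustrative sketch (ours): the perceptron learning rule applied to XOR.
# XOR is not linearly separable, so no single threshold unit can fit it;
# the weights cycle forever instead of converging.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = [0.0, 0.0], 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

for epoch in range(100):              # the classic perceptron update:
    for x, target in data:            # nudge weights by the prediction error
        error = target - predict(x)
        w = [wi + 0.1 * error * xi for wi, xi in zip(w, x)]
        b += 0.1 * error

print(sum(predict(x) == t for x, t in data) / len(data))  # stuck below 1.0
```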

In the early 1980s, AI research was revived by the commercial success of expert systems,[236] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[5] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.[7]

Many researchers began to doubt that the current practices would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition.[237] A number of researchers began to look into "sub-symbolic" approaches.[238] Robotics researchers, such as Rodney Brooks, rejected "representation" in general and focused directly on engineering machines that move and survive.[x] Judea Pearl, Lotfi Zadeh and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic.[86][243] But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others.[244] In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks.[245]
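The operation at the heart of such digit recognizers is a small set of filter weights slid across the image, so the same feature detector applies at every position. A minimal illustrative sketch (ours; the filter below is hand-set, whereas a CNN learns its filters from data):

```python
# Illustrative sketch (ours): the core operation of a convolutional network.
# A small filter is slid across the image, so the same weights detect a
# feature (here a vertical edge) wherever it occurs.
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):           # dot product of the filter with each
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out                        # image patch ("valid" convolution)

image = np.array([[0, 0, 1, 0, 0]] * 5, dtype=float)  # a vertical stroke
edge_filter = np.array([[-1.0, 0.0, 1.0]] * 3)        # vertical-edge detector

print(convolve2d(image, edge_filter))  # strong +/- responses at stroke edges
```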

AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics).[246] By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence".[247]

Several academic researchers became concerned that AI was no longer pursuing the original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.[11]

Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field.[8] For many specific tasks, other methods were abandoned.[y] Deep learning's success was based on both hardware improvements (faster computers,[249] graphics processing units, cloud computing[250]) and access to large amounts of data[251] (including curated datasets,[250] such as ImageNet).

Deep learning's success led to an enormous increase in interest and funding in AI.[z] The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019,[215] and WIPO reported that AI was the most prolific emerging technology in terms of the number of patent applications and granted patents.[252] According to 'AI Impacts', about $50 billion annually was invested in "AI" around 2022 in the US alone and about 20% of new US Computer Science PhD graduates have specialized in "AI";[253] about 800,000 "AI"-related US job openings existed in 2022.[254] The large majority of the advances have occurred within the United States, with its companies, universities, and research labs leading artificial intelligence research.[10]

In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers refocused their careers on these issues. The alignment problem became a serious field of academic study.[199]
