{"id":11287,"date":"2023-09-05T14:54:41","date_gmt":"2023-09-05T14:54:41","guid":{"rendered":"https:\/\/nft.runfyers.com\/index.php\/2023\/09\/05\/public-input-is-crucial-in-ai-development-say-social-scientists\/"},"modified":"2023-09-05T14:54:41","modified_gmt":"2023-09-05T14:54:41","slug":"public-input-is-crucial-in-ai-development-say-social-scientists","status":"publish","type":"post","link":"https:\/\/nft.runfyers.com\/index.php\/2023\/09\/05\/public-input-is-crucial-in-ai-development-say-social-scientists\/","title":{"rendered":"Public Input is Crucial in AI Development, Say Social Scientists"},"content":{"rendered":"<p><\/p>\n<div>\n<p class=\"has-drop-cap\">Are democratic societies ready for a future in which AI\u00a0<a href=\"https:\/\/www.who.int\/publications\/i\/item\/9789240029200\" target=\"_blank\" rel=\"noopener\">algorithmically assigns limited supplies<\/a>\u00a0of respirators or hospital beds during pandemics? Or one in which\u00a0<a href=\"https:\/\/www.axios.com\/2023\/07\/10\/ai-misinformation-response-measures\" target=\"_blank\" rel=\"noopener\">AI fuels an arms race<\/a>\u00a0between disinformation creation and detection? Or sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?<\/p>\n<p>Decades of research show that most democratic societies\u00a0<a href=\"https:\/\/doi.org\/10.1073\/pnas.2004835117\" target=\"_blank\" rel=\"noopener\">struggle to hold nuanced debates<\/a>\u00a0about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory, and social considerations of their use. 
Difficult dilemmas posed by artificial intelligence are already emerging at a rate that overwhelms modern democracies\u2019 ability to collectively work through those problems.<\/p>\n<p>Broad public engagement, or the lack of it, has been a long-running challenge in assimilating emerging technologies and is key to tackling the challenges they bring.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-ready-or-not-unintended-consequences\">Ready or not, unintended consequences<\/h2>\n<p>Striking a balance between the awe-inspiring possibilities of emerging technologies like AI and the need for societies to think through both intended and unintended outcomes is not a new challenge. Almost 50 years ago, scientists and policymakers met in Pacific Grove, California, for what is often referred to as the\u00a0<a href=\"https:\/\/doi.org\/10.1073\/pnas.1317516111\" target=\"_blank\" rel=\"noopener\">Asilomar Conference<\/a>\u00a0to decide the future of recombinant DNA research, or transplanting genes from one organism into another. Public participation and input into their deliberations was minimal.<\/p>\n<p>Societies are severely limited in their ability to anticipate and mitigate unintended consequences of rapidly emerging technologies like AI without good-faith engagement from broad cross-sections of public and expert stakeholders. And there are real downsides to limited participation. If Asilomar had sought such wide-ranging input 50 years ago, it is likely that the issues of cost and access would have shared the agenda with the science and the ethics of deploying the technology. 
If that had happened, the\u00a0<a href=\"https:\/\/www.statnews.com\/2023\/03\/07\/crispr-sickle-cell-access\/\" target=\"_blank\" rel=\"noopener\">lack of affordability<\/a>\u00a0of recent\u00a0<a href=\"https:\/\/www.npr.org\/sections\/health-shots\/2021\/12\/31\/1067400512\/first-sickle-cell-patient-treated-with-crispr-gene-editing-still-thriving\" target=\"_blank\" rel=\"noopener\">CRISPR-based sickle cell<\/a>\u00a0treatments, for example, might\u2019ve been avoided.<\/p>\n<p>AI runs a very real risk of creating similar blind spots when it comes to intended and unintended consequences that will often not be obvious to elites like tech leaders and policymakers. If societies fail to ask \u201cthe right questions, the ones people care about,\u201d science and technology studies scholar\u00a0<a href=\"https:\/\/www.semanticscholar.org\/author\/S.-Jasanoff\/3281378\" target=\"_blank\" rel=\"noopener\">Sheila Jasanoff<\/a>\u00a0<a href=\"https:\/\/doi.org\/10.38105\/spr.n9a0lhvw2b\" target=\"_blank\" rel=\"noopener\">said in a 2021 interview<\/a>, \u201cthen no matter what the science says, you wouldn\u2019t be producing the right answers or options for society.\u201d<\/p>\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\">\n<p>\n<iframe loading=\"lazy\" title=\"Ethics of AI: Challenges and Governance\" width=\"696\" height=\"392\" src=\"https:\/\/www.youtube.com\/embed\/VqFqWIqOB1g?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen><\/iframe>\n<\/p>\n<\/figure>\n<p>Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. 
We\u00a0<a href=\"https:\/\/scholar.google.com\/citations?user=j2-5w_AAAAAJ&amp;hl=en\" target=\"_blank\" rel=\"noopener\">study the public<\/a>\u00a0<a href=\"https:\/\/scholar.google.com\/citations?user=NKj9jw4AAAAJ&amp;hl=en\" target=\"_blank\" rel=\"noopener\">and political aspects<\/a>\u00a0<a href=\"https:\/\/scholar.google.com\/citations?user=r7G9f0wAAAAJ&amp;hl=en\" target=\"_blank\" rel=\"noopener\">of emerging science<\/a>. In 2022, our research group at the University of Wisconsin-Madison\u00a0<a href=\"https:\/\/scimep.wisc.edu\/wp-content\/uploads\/sites\/178\/2023\/07\/2023_AI-scientists-topline-report81.pdf\" target=\"_blank\" rel=\"noopener\">interviewed almost 2,200 researchers<\/a>\u00a0who had published on the topic of AI. Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.<\/p>\n<h2 class=\"wp-block-heading\" id=\"h-who-gets-a-say-on-ai\">Who gets a say on AI?<\/h2>\n<p>Industry leaders, policymakers and academics have been slow to adjust to the rapid onset of powerful AI technologies. In 2017, researchers and scholars met in Pacific Grove for another small expert-only meeting, this time to outline\u00a0<a href=\"https:\/\/gizmodo.com\/these-23-principles-could-help-us-avoid-an-ai-apocalyps-1791920321\" target=\"_blank\" rel=\"noopener\">principles for future AI research<\/a>. Senator Chuck Schumer plans to hold the first of a series of\u00a0<a href=\"https:\/\/www.washingtonpost.com\/technology\/2023\/08\/28\/schumer-musk-zuckerberg-altman-ai\/\" target=\"_blank\" rel=\"noopener\">AI Insight Forums<\/a>\u00a0on Sept. 13, 2023, to help Beltway policymakers think through AI risks with tech leaders like Meta\u2019s Mark Zuckerberg and X\u2019s Elon Musk.<\/p>\n<p>Meanwhile, there is a hunger among the public for helping to shape our collective future. Only about a quarter of U.S. 
adults in our 2020 AI survey agreed that scientists should be able \u201cto conduct their research without consulting the public\u201d (27.8%). Two-thirds (64.6%) felt that \u201cthe public should have a say in how we apply scientific research and technology in society.\u201d<\/p>\n<p>The public\u2019s desire for participation goes hand in hand with a widespread lack of trust in government and industry when it comes to shaping the development of AI. In a\u00a0<a href=\"https:\/\/scimep.wisc.edu\/wp-content\/uploads\/sites\/178\/2023\/03\/22-05-04_scimep_AI-topline_socarxiv-submission.pdf\" target=\"_blank\" rel=\"noopener\">2020 national survey<\/a>\u00a0by our team, fewer than one in 10 Americans indicated that they \u201cmostly\u201d or \u201cvery much\u201d trusted Congress (8.5%) or Facebook (9.5%) to keep society\u2019s best interest in mind in the development of AI.<\/p>\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\">\n<p>\n<iframe loading=\"lazy\" title=\"Algorithmic Bias and Fairness: Crash Course AI #18\" width=\"696\" height=\"392\" src=\"https:\/\/www.youtube.com\/embed\/gV0_raKR2UQ?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen><\/iframe>\n<\/p>\n<\/figure>\n<h2 class=\"wp-block-heading\" id=\"h-a-healthy-dose-of-skepticism\">A healthy dose of skepticism?<\/h2>\n<p>The public\u2019s deep mistrust of key regulatory and industry players is not entirely unwarranted. Industry leaders have had a hard time\u00a0<a href=\"https:\/\/www.cnbc.com\/2022\/10\/24\/how-googles-former-ceo-eric-schmidt-helped-write-ai-laws-in-washington-without-publicly-disclosing-investments-in-ai-start-ups.html\" target=\"_blank\" rel=\"noopener\">disentangling their commercial interests<\/a>\u00a0from efforts to develop an effective regulatory system for AI. 
This has led to a fundamentally messy policy environment.<\/p>\n<p>Tech firms helping regulators think through the potential and complexities of technologies like AI is not always troublesome, especially if they are transparent about potential conflicts of interest. However, tech leaders\u2019 input on technical questions about what AI can or might be used for is only a small piece of the regulatory puzzle.<\/p>\n<p>Much more urgently, societies need to figure out what types of applications AI should be used for, and how. Answers to those questions can only emerge from public debates that\u00a0<a href=\"https:\/\/doi.org\/10.1073\/pnas.2004835117\" target=\"_blank\" rel=\"noopener\">engage a broad set of stakeholders<\/a>\u00a0about values, ethics and fairness. Meanwhile, the public is\u00a0<a href=\"https:\/\/www.pewresearch.org\/short-reads\/2023\/08\/28\/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life\/\" target=\"_blank\" rel=\"noopener\">growing concerned<\/a>\u00a0about the use of AI.<\/p>\n<p>AI might not wipe out humanity anytime soon, but it is likely to increasingly disrupt life as we currently know it. Societies have a finite window of opportunity to find ways to engage in good-faith debates and collaboratively work toward meaningful AI regulation to make sure that these challenges do not overwhelm them.<\/p>\n<p><em>This article is republished from\u00a0<\/em><a href=\"https:\/\/theconversation.com\/\" target=\"_blank\" rel=\"noopener\">The Conversation<\/a><em>\u00a0under a Creative Commons license. 
Read the<a href=\"https:\/\/theconversation.com\/experts-alone-cant-handle-ai-social-scientists-explain-why-the-public-needs-a-seat-at-the-table-210848\" target=\"_blank\" rel=\"noopener\">\u00a0original article\u00a0<\/a>by\u00a0<a href=\"https:\/\/theconversation.com\/profiles\/dietram-a-scheufele-322406\" target=\"_blank\" rel=\"noopener\">Dietram A. Scheufele<\/a>, <a href=\"https:\/\/theconversation.com\/profiles\/dominique-brossard-366719\" target=\"_blank\" rel=\"noopener\">Dominique Brossard<\/a>, &amp; <a href=\"https:\/\/theconversation.com\/profiles\/todd-newman-816100\" target=\"_blank\" rel=\"noopener\">Todd Newman<\/a><\/em>, social scientists from the University of Wisconsin-Madison.<\/p>\n<\/div>\n<p><a href=\"https:\/\/nftnow.com\/ai\/public-input-is-crucial-in-ai-development-say-social-scientists\/\" target=\"_blank\" rel=\"noopener\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Are democratic societies ready for a future in which AI\u00a0algorithmically assigns limited supplies\u00a0of respirators or hospital beds during pandemics? Or one in which\u00a0AI fuels an arms race\u00a0between disinformation creation and detection? Or sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices? 
Decades of research show that [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":11290,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_publicize_message":"","jetpack_is_tweetstorm":false,"jetpack_publicize_feature_enabled":true},"categories":[10],"tags":[],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/nftnow.com\/wp-content\/uploads\/2023\/08\/080223_AI_Editorial_Article_feature-1-scaled.jpg","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/nft.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/11287"}],"collection":[{"href":"https:\/\/nft.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nft.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nft.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/nft.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=11287"}],"version-history":[{"count":0,"href":"https:\/\/nft.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/11287\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/nft.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/11290"}],"wp:attachment":[{"href":"https:\/\/nft.runfyers.com\/index.php\/wp-json\/wp\/v2\/media?parent=11287"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nft.runfyers.com\/index.php\/wp-json\/wp\/v2\/categories?post=11287"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nft.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=11287"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}