
Automated decision-making

Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data from a range of sources, such as databases, text, social media, sensors, images or speech, that is processed using various technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society requiring consideration of the technical, legal, ethical, societal, educational, economic and health consequences.[1][2][3]

Overview

There are different definitions of ADM based on the level of automation involved. Some definitions suggest that ADM involves decisions made through purely technological means without human input,[4] as in the EU's General Data Protection Regulation (Article 22). However, ADM technologies and applications can take many forms, ranging from decision-support systems that make recommendations for human decision-makers to act on, sometimes known as augmented intelligence[5] or 'shared decision-making',[2] to fully automated decision-making processes that make decisions on behalf of individuals or organizations without human involvement.[6] Models used in automated decision-making systems can be as simple as checklists and decision trees through to artificial intelligence and deep neural networks (DNN).
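The following Python sketch illustrates this spectrum in a minimal way: a checklist-style rule either decides a case outright or refers it to a human, depending on whether the system is configured as fully automated or as decision support. The rules, thresholds and field names are hypothetical examples, not drawn from any real system.

```python
# Illustrative sketch of a simple rule-based ADM with optional human oversight.
# The rules, thresholds and field names are hypothetical examples.

def assess_loan_application(applicant: dict, fully_automated: bool = False) -> str:
    """Checklist-style decision: approve, refer to a human, or decline."""
    checklist_passed = (
        applicant["income"] >= 30_000
        and applicant["existing_defaults"] == 0
        and applicant["requested_amount"] <= applicant["income"] * 0.5
    )
    if checklist_passed:
        return "approve"
    if fully_automated:
        return "decline"           # decision made without human involvement
    return "refer_to_human"        # decision-support mode: a person decides

print(assess_loan_application(
    {"income": 45_000, "existing_defaults": 0, "requested_amount": 20_000}
))  # -> approve
```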

Since the 1950s computers have gone from being able to do basic processing to having the capacity to undertake complex, ambiguous and highly skilled tasks such as image and speech recognition, gameplay, scientific and medical analysis and inferencing across multiple data sources. ADM is now being increasingly deployed across all sectors of society and many diverse domains from entertainment to transport.

An ADM system (ADMS) may involve multiple decision points, data sets, and technologies (ADMT) and may sit within a larger administrative or technical system such as a criminal justice system or business process.

Data

Automated decision-making involves using data as input to be analyzed within a process, model, or algorithm or for learning and generating new models.[7] ADM systems may use and connect a wide range of data types and sources depending on the goals and contexts of the system, for example, sensor data for self-driving cars and robotics, identity data for security systems, demographic and financial data for public administration, medical records in health, criminal records in law. This can sometimes involve vast amounts of data and computing power.

Data quality

The quality of the available data, and its suitability for use in ADM systems, is fundamental to the outcomes and is often highly problematic for many reasons. Datasets are often highly variable: large-scale data may be controlled by corporations or governments, restricted for privacy or security reasons, incomplete, biased, or limited in terms of time or coverage, and may measure or describe terms in inconsistent ways, among many other issues.

For machines to learn from data, large corpora are often required, which can be challenging to obtain or compute; however, where available, they have provided significant breakthroughs, for example, in diagnosing chest X-rays.[8]

ADM Technologies

Automated decision-making technologies (ADMT) are software-coded digital tools that automate the translation of input data to output data, contributing to the function of automated decision-making systems.[7] There are a wide range of technologies in use across ADM applications and systems.

ADMTs involving basic computational operations:

  • Search (includes 1-to-1 and 1-to-many search, data matching/merge)
  • Matching (two different things)
  • Mathematical Calculation (formula)

ADMTs for assessment and grouping:

  • User profiling
  • Recommender systems
  • Clustering
  • Classification
  • Feature learning
  • Predictive analytics (includes forecasting)

ADMTs relating to space and flows:

  • Social network analysis (includes link prediction)
  • Mapping
  • Routing

ADMTs for processing of complex data formats:

  • Image processing
  • Audio processing
  • Natural language processing (NLP)

Other ADMTs:

  • Business rules management systems
  • Time series analysis
  • Anomaly detection
  • Modelling and simulation
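As a minimal illustration of two of the basic computational ADMTs listed above, the following sketch matches records from two hypothetical datasets on a shared key and then applies a simple formula to the merged records; the data and the scoring formula are invented for illustration.

```python
# Hypothetical sketch of two basic ADMT operations: data matching/merge and a
# formula-based calculation. All records and the formula are made up.

customers = [{"id": 1, "email": "a@example.com"}, {"id": 2, "email": "b@example.com"}]
orders = [{"order": 101, "email": "b@example.com", "total": 250.0}]

# One-to-many matching/merge on a shared key (email).
matched = [
    {**c, **o}
    for c in customers
    for o in orders
    if c["email"] == o["email"]
]

# Simple mathematical calculation (formula) applied to the matched records.
for record in matched:
    record["loyalty_points"] = round(record["total"] * 0.1)

print(matched)
# [{'id': 2, 'email': 'b@example.com', 'order': 101, 'total': 250.0, 'loyalty_points': 25}]
```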

Machine learning

Machine learning (ML) involves training computer programs through exposure to large data sets and examples to learn from experience and solve problems.[2] Machine learning can be used to generate and analyse data as well as make algorithmic calculations and has been applied to image and speech recognition, translations, text, data and simulations. While machine learning has been around for some time, it is becoming increasingly powerful due to recent breakthroughs in training deep neural networks (DNNs), and dramatic increases in data storage capacity and computational power with GPU coprocessors and cloud computing.[2]
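As a minimal, hedged illustration of the training-from-examples approach described above, the following sketch fits a small decision tree classifier with scikit-learn on synthetic data; a real ADM system would be trained and validated on much larger, domain-specific datasets.

```python
# Minimal, illustrative machine-learning example using scikit-learn.
# The dataset is synthetic; a real ADM system would train on large corpora.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1_000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=4, random_state=0)
model.fit(X_train, y_train)                      # learn from labelled examples

predictions = model.predict(X_test)              # automated decisions on new cases
print(f"accuracy: {accuracy_score(y_test, predictions):.2f}")
```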

Machine learning systems based on foundation models run on deep neural networks and use pattern matching to train a single huge system on large amounts of general data such as text and images. Early models tended to start from scratch for each new problem; however, since the early 2020s many can be adapted to new problems.[9] Examples of these technologies include OpenAI's DALL-E (an image creation program) and its various GPT language models, and Google's PaLM language model program.
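The sketch below illustrates, under the assumption that the Hugging Face transformers library is installed, how a pretrained model can be reused for a new task without training from scratch; the specific model the pipeline downloads by default is incidental to the illustration.

```python
# Illustrative sketch: reusing a pretrained model for a new task instead of
# training from scratch. Assumes the Hugging Face `transformers` package is
# installed; the default model it downloads is used purely for illustration.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")      # reuses a pretrained model
result = classifier("The new timetable makes my commute much easier.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```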

Applications

ADM is being used to replace or augment human decision-making by both public and private-sector organisations for a range of reasons including to help increase consistency, improve efficiency, reduce costs and enable new solutions to complex problems.[10]

Debate

Research and development are underway into uses of technology to assess argument quality,[11][12][13] assess argumentative essays[14][15] and judge debates.[16][17][18][19] Potential applications of these argument technologies span education and society, including the assessment and evaluation of conversational, mathematical, scientific, interpretive, legal and political argumentation and debate.

Law

In legal systems around the world, algorithmic tools such as risk assessment instruments (RAI) are being used to supplement or replace the human judgment of judges, civil servants and police officers in many contexts.[20] In the United States, RAIs are being used to generate scores to predict the risk of recidivism in pre-trial detention and sentencing decisions,[21] to evaluate parole for prisoners, and to predict "hot spots" for future crime.[22][23][24] These scores may result in automatic effects or may be used to inform decisions made by officials within the justice system.[20] In Canada, ADM has been used since 2014 to automate certain activities conducted by immigration officials and to support the evaluation of some immigrant and visitor applications.[25]

Economics

Automated decision-making systems are used in certain computer programs to create buy and sell orders related to specific financial transactions and to automatically submit the orders to international markets. Computer programs can automatically generate orders based on a predefined set of rules, using trading strategies based on technical analyses, advanced statistical and mathematical computations, or inputs from other electronic sources.
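A minimal sketch of the kind of predefined rule referred to above is shown below, using a simple moving-average crossover; the price series and window lengths are hypothetical, and the example is illustrative rather than a trading strategy recommendation.

```python
# Hypothetical rule-based order generation: a simple moving-average crossover.
# The price series is made up; real systems use live market data feeds.

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def generate_order(prices, short_window=3, long_window=6):
    """Return 'buy', 'sell' or 'hold' from a predefined technical rule."""
    if len(prices) < long_window:
        return "hold"
    short_ma = moving_average(prices, short_window)
    long_ma = moving_average(prices, long_window)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

prices = [101.2, 100.8, 101.5, 102.1, 103.0, 103.6, 104.2]
print(generate_order(prices))  # -> buy
```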

Business

Continuous auditing

Continuous auditing uses advanced analytical tools to automate auditing processes. It can be utilized in the private sector by business enterprises and in the public sector by governmental organizations and municipalities.[26] As artificial intelligence and machine learning continue to advance, accountants and auditors may make use of increasingly sophisticated algorithms that make decisions such as determining what is anomalous, whether to notify personnel, and how to prioritize the tasks assigned to them.
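A minimal sketch of that kind of decision is shown below: anomalous transactions are flagged with a simple z-score test and prioritised for human follow-up. The transaction amounts and the anomaly threshold are hypothetical.

```python
# Hypothetical continuous-auditing sketch: flag anomalous transactions with a
# z-score test and prioritise the flagged items for human follow-up.
import statistics

transactions = [120.0, 98.5, 110.0, 105.2, 99.9, 4_800.0, 101.3, 97.8]

mean = statistics.mean(transactions)
stdev = statistics.pstdev(transactions)

flagged = [
    {"amount": amount, "z_score": (amount - mean) / stdev}
    for amount in transactions
    if abs(amount - mean) / stdev > 2.0        # assumed anomaly threshold
]

# Prioritise the most extreme anomalies first and notify personnel.
for item in sorted(flagged, key=lambda t: -abs(t["z_score"])):
    print(f"notify auditor: {item['amount']:.2f} (z={item['z_score']:.1f})")
```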

Media and Entertainment

Digital media, entertainment platforms, and information services increasingly provide content to audiences via automated recommender systems based on demographic information, previous selections, collaborative filtering or content-based filtering.[27] This includes music and video platforms, publishing, health information, product databases and search engines. Many recommender systems also provide some agency to users in accepting recommendations and incorporate data-driven algorithmic feedback loops based on the actions of the system user.[6]
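A minimal content-based filtering sketch follows, using tag overlap (Jaccard similarity) between a catalogue and a user's previous selections as the recommendation signal; the catalogue, tags and viewing history are hypothetical.

```python
# Hypothetical content-based recommender: rank items by tag overlap (Jaccard
# similarity) with what the user has already selected.

catalogue = {
    "Documentary A": {"history", "science"},
    "Series B": {"drama", "crime"},
    "Documentary C": {"science", "nature"},
}
watched_tags = {"science", "nature"}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

recommendations = sorted(
    ((title, jaccard(tags, watched_tags)) for title, tags in catalogue.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
print(recommendations[0])  # -> ('Documentary C', 1.0)
```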

Large-scale machine learning language models and image creation programs developed by companies such as OpenAI and Google in the 2020s have restricted access; however, they are likely to have widespread application in fields such as advertising, copywriting, stock imagery and graphic design, as well as in other fields such as journalism and law.[9]

Advertising

Online advertising is closely integrated with many digital media platforms, websites and search engines and often involves automated delivery of display advertisements in diverse formats. 'Programmatic' online advertising involves automating the sale and delivery of digital advertising on websites and platforms via software rather than direct human decision-making.[27] This is sometimes known as the waterfall model, which involves a sequence of steps across various systems and players: publishers and data management platforms, user data, ad servers and their delivery data, inventory management systems, ad traders and ad exchanges.[27] There are various issues with this system, including lack of transparency for advertisers, unverifiable metrics, lack of control over ad venues, audience tracking and privacy concerns.[27] Internet users who dislike ads have adopted countermeasures such as ad blocking technologies, which allow users to automatically filter unwanted advertising from websites and some internet applications. In 2017, 24% of Australian internet users had ad blockers.[28]
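A simplified sketch of the waterfall sequence described above is shown below: demand sources are tried in a fixed priority order until a bid clears the publisher's floor price. The source names, bids and floor price are hypothetical.

```python
# Hypothetical 'waterfall' ad-serving sketch: demand sources are tried in a
# fixed priority order until one returns a bid above the publisher's floor price.

demand_sources = [
    ("direct_deal", 2.10),    # (source name, bid per thousand impressions)
    ("ad_exchange_1", 1.40),
    ("ad_exchange_2", 0.95),
]
floor_price = 1.00

def run_waterfall(sources, floor):
    for name, bid in sources:            # fixed sequence, not a simultaneous auction
        if bid >= floor:
            return name, bid
    return "house_ad", 0.0               # fallback when no source clears the floor

print(run_waterfall(demand_sources, floor_price))  # -> ('direct_deal', 2.1)
```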

Health

Deep learning AI image models are being used to review X-rays and to detect eye conditions such as macular degeneration.

Social Services

Governments have been implementing digital technologies to provide more efficient administration and social services since the early 2000s, often referred to as e-government. Many governments around the world are now using automated, algorithmic systems for profiling and targeting policies and services, including algorithmic policing based on risks, surveillance sorting of people such as airport screening, providing services based on risk profiles in child protection, providing employment services and governing the unemployed.[29]

A significant application of ADM in social services relates to the use of predictive analytics, e.g. predictions of risks to children from abuse or neglect in child protection, predictions of recidivism or crime in policing and criminal justice, predictions of welfare or tax fraud in compliance systems, and predictions of long-term unemployment in employment services. Historically, these systems were based on standard statistical analyses; however, from the early 2000s machine learning has increasingly been developed and deployed. Key issues with the use of ADM in social services include bias, fairness, accountability and explainability, which refers to transparency around the reasons for a decision and the ability to explain the basis on which a machine made a decision.[29] For example, Australia's federal social security delivery agency, Centrelink, developed and implemented an automated process for detecting and collecting debt which led to many cases of wrongful debt collection in what became known as the RoboDebt scheme.[30]
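The predictive-analytics pattern described above can be sketched in a hedged way as a statistical model producing a risk score that is used to prioritise cases for human review rather than to decide outcomes automatically; the features and data below are synthetic and purely illustrative.

```python
# Illustrative sketch of predictive risk scoring used to prioritise cases for
# human review. The features and data are synthetic; real deployments raise the
# bias, fairness and explainability issues discussed later in this article.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=1)
model = LogisticRegression().fit(X, y)

# Score new cases and refer only the highest-risk ones to a caseworker.
new_cases = X[:10]
risk_scores = model.predict_proba(new_cases)[:, 1]
for case_id, score in sorted(enumerate(risk_scores), key=lambda p: -p[1])[:3]:
    print(f"refer case {case_id} for human review (risk score {score:.2f})")
```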

Transport and Mobility

Connected and automated mobility (CAM) involves autonomous vehicles such as self-driving cars and other forms of transport which use automated decision-making systems to replace various aspects of human control of the vehicle.[31] This can range from level 0 (complete human driving) to level 5 (completely autonomous).[2] At level 5 the machine is able to make decisions to control the vehicle based on data models, geospatial mapping and real-time sensing and processing of the environment. As of 2021, cars with automation levels 1 to 3 were already available on the market. In 2016, the German government established an 'Ethics Commission on Automated and Connected Driving', which recommended that connected and automated vehicles (CAVs) be developed if the systems cause fewer accidents than human drivers (positive balance of risk). It also provided 20 ethical rules for the adaptation of automated and connected driving.[32] In 2020, the European Commission strategy on CAM recommended that it be adopted in Europe to reduce road fatalities and lower emissions; however, self-driving cars also raise many policy, security and legal issues in terms of liability and ethical decision-making in the case of accidents, as well as privacy issues.[31] Issues of trust in autonomous vehicles and community concerns about their safety are key factors to be addressed if AVs are to be widely adopted.[33]

Surveillance

Automated digital data collection via sensors, cameras, online transactions and social media has significantly expanded the scope, scale, and goals of surveillance practices and institutions in government and commercial sectors.[34] As a result, there has been a major shift from the targeted monitoring of suspects to the ability to monitor entire populations.[35] The level of surveillance now possible as a result of automated data collection has been described as surveillance capitalism or the surveillance economy, indicating the way digital media involves large-scale tracking and the accumulation of data on every interaction.

Ethical and legal issues

There are many social, ethical and legal implications of automated decision-making systems. Concerns raised include lack of transparency and contestability of decisions, incursions on privacy and surveillance, exacerbating systemic bias and inequality due to data and algorithmic bias, intellectual property rights, the spread of misinformation via media platforms, administrative discrimination, risk and responsibility, unemployment and many others.[36][37] As ADM becomes more ubiquitous there is greater need to address the ethical challenges to ensure good governance in information societies.[38]

ADM systems are often based on machine learning and algorithms which cannot easily be viewed or analysed, leading to concerns that they are 'black box' systems which are not transparent or accountable.[2]

A report from Citizen Lab in Canada argues for a critical human rights analysis of the application of ADM in various areas to ensure that the use of automated decision-making does not result in infringements of rights, including the rights to equality and non-discrimination; freedom of movement, expression, religion, and association; privacy rights; and the rights to life, liberty, and security of the person.[25]

Legislative responses to ADM include:

  • The European General Data Protection Regulation (GDPR), introduced in 2016, is a regulation in EU law on data protection and privacy in the European Union (EU). Article 22(1) enshrines the right of data subjects not to be subject to decisions which have legal or other significant effects based solely on automated individual decision-making.[39][40] The GDPR also includes some rules on the right to explanation; however, the exact scope and nature of these is currently subject to pending review by the Court of Justice of the European Union.[41] These provisions were not first introduced in the GDPR but have been present in similar form across Europe since the Data Protection Directive in 1995 and the 1978 French loi informatique et libertés.[42] Similarly scoped and worded provisions, with varying attached rights and obligations, are present in the data protection laws of many other jurisdictions around the world, including Uganda, Morocco and the US state of Virginia.[43]
  • Rights to the explanation of public sector automated decisions forming 'algorithmic treatment' under the French loi pour une République numérique.[42]

Bias

ADM may incorporate algorithmic bias arising from the following sources (a minimal check for data-source bias is sketched after this list):

  • Data sources, where data inputs are biased in their collection or selection[37]
  • Technical design of the algorithm, for example where assumptions have been made about how a person will behave[44]
  • Emergent bias, where the application of ADM in unanticipated circumstances creates a biased outcome[44]
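One simple way data-source bias can be surfaced is to compare positive-outcome rates across groups in the training data before a model is built; the group labels and records below are hypothetical.

```python
# Hypothetical check for data-source bias: compare positive-outcome rates across
# groups in the training data before the model is built.
from collections import defaultdict

records = [
    {"group": "A", "positive_outcome": True},
    {"group": "A", "positive_outcome": True},
    {"group": "A", "positive_outcome": False},
    {"group": "B", "positive_outcome": False},
    {"group": "B", "positive_outcome": False},
    {"group": "B", "positive_outcome": True},
]

counts = defaultdict(lambda: {"positive": 0, "total": 0})
for record in records:
    counts[record["group"]]["total"] += 1
    counts[record["group"]]["positive"] += record["positive_outcome"]

rates = {group: c["positive"] / c["total"] for group, c in counts.items()}
print(rates)                                      # e.g. {'A': 0.67, 'B': 0.33}
print(max(rates.values()) - min(rates.values()))  # large gaps warrant scrutiny
```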

Explainability

Questions of biased or incorrect data or algorithms, and concerns that some ADMs are black box technologies closed to human scrutiny or interrogation, have led to what is referred to as the issue of explainability, or the right to an explanation of automated decisions and AI. This is also known as Explainable AI (XAI) or Interpretable AI, in which the results of the solution can be analysed and understood by humans. XAI algorithms are considered to follow three principles: transparency, interpretability and explainability.
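A minimal interpretability sketch follows, in which a linear model's per-feature contributions are reported as a simple explanation for a score; the data is synthetic, the feature names are hypothetical, and this illustrates only one elementary approach to explainability.

```python
# Illustrative explainability sketch: for a linear model, per-feature
# contributions (coefficient x feature value) can serve as a simple explanation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

feature_names = ["f1", "f2", "f3", "f4"]           # hypothetical feature names
X, y = make_classification(n_samples=400, n_features=4, random_state=2)
model = LogisticRegression().fit(X, y)

case = X[0]
contributions = model.coef_[0] * case              # contribution of each feature
for name, value in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name}: {value:+.2f}")
print("decision:", model.predict([case])[0])
```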

Information asymmetry

Automated decision-making may increase the information asymmetry between individuals whose data feeds into the system and the platforms and decision-making systems capable of inferring information from that data. On the other hand, it has been observed that in financial trading the information asymmetry between two artificial intelligence agents may be much less than between two human agents or between human and machine agents.[45]

Research fields

Many academic disciplines and fields are increasingly turning their attention to the development, application and implications of ADM including business, computer sciences, human computer interaction (HCI), law, public administration, and media and communications. The automation of media content and algorithmically driven news, video and other content via search systems and platforms is a major focus of academic research in media studies.[27]

The ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) was established in 2018 to study transparency and explainability in the context of socio-technical systems, many of which include ADM and AI.

Key research centres investigating ADM include:

  • Algorithm Watch, Germany
  • ARC Centre of Excellence for Automated Decision-Making and Society, Australia
  • Citizen Lab, Canada
  • Informatics Europe

See also

  • Automated decision support
  • Algorithmic bias
  • Decision-making software
  • Decision Management
  • Ethics of artificial intelligence
  • Government by algorithm
  • Machine learning
  • Recommender systems

References

  1. ^ Marabelli, Marco; Newell, Sue; Handunge, Valerie (2021). "The lifecycle of algorithmic decision-making systems: Organizational choices and ethical challenges". Journal of Strategic Information Systems. 30 (1): 101683. doi:10.1016/j.jsis.2021.101683. Retrieved November 1, 2022.
  2. ^ a b c d e f Larus, James; Hankin, Chris; Carson, Siri Granum; Christen, Markus; Crafa, Silvia; Grau, Oliver; Kirchner, Claude; Knowles, Bran; McGettrick, Andrew; Tamburri, Damian Andrew; Werthner, Hannes (2018). When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making. New York: Association for Computing Machinery. doi:10.1145/3185595.
  3. ^ Mökander, Jakob; Morley, Jessica; Taddeo, Mariarosaria; Floridi, Luciano (2021-07-06). "Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations". Science and Engineering Ethics. 27 (4): 44. arXiv:2110.10980. doi:10.1007/s11948-021-00319-4. ISSN 1471-5546. PMC 8260507. PMID 34231029.
  4. ^ UK Information Commissioner's Office (2021-09-24). Guide to the UK General Data Protection Regulation (UK GDPR) (Report). Information Commissioner's Office UK. Archived from the original on 2018-12-21. Retrieved 2021-10-05.
  5. ^ Crigger, E.; Khoury, C. (2019-02-01). "Making Policy on Augmented Intelligence in Health Care". AMA Journal of Ethics. 21 (2): E188–191. doi:10.1001/amajethics.2019.188. ISSN 2376-6980. PMID 30794129. S2CID 73490120.
  6. ^ a b Araujo, Theo; Helberger, Natali; Kruikemeier, Sanne; de Vreese, Claes H. (2020-09-01). "In AI we trust? Perceptions about automated decision-making by artificial intelligence" (PDF). AI & Society. 35 (3): 611–623. doi:10.1007/s00146-019-00931-w. hdl:11245.1/b73d4d3f-8ab9-4b63-b8a8-99fb749ab2c5. ISSN 1435-5655. S2CID 209523258.
  7. ^ a b Algorithm Watch (2020). Automating Society 2019. Algorithm Watch (Report). Retrieved 2022-02-28.
  8. ^ Seah, Jarrel C Y; Tang, Cyril H M; Buchlak, Quinlan D; Holt, Xavier G; Wardman, Jeffrey B; Aimoldin, Anuar; Esmaili, Nazanin; Ahmad, Hassan; Pham, Hung; Lambert, John F; Hachey, Ben (August 2021). "Effect of a comprehensive deep-learning model on the accuracy of chest x-ray interpretation by radiologists: a retrospective, multi-reader multicase study". The Lancet Digital Health. 3 (8): e496–e506. doi:10.1016/s2589-7500(21)00106-0. ISSN 2589-7500. PMID 34219054. S2CID 235735320.
  9. ^ a b Snoswell, Aaron J.; Hunter, Dan (13 April 2022). "Robots are creating images and telling jokes. 5 things to know about foundation models and the next generation of AI". The Conversation. Retrieved 2022-04-21.
  10. ^ Taddeo, Mariarosaria; Floridi, Luciano (2018-08-24). "How AI can be a force for good". Science. 361 (6404): 751–752. Bibcode:2018Sci...361..751T. doi:10.1126/science.aat5991. ISSN 0036-8075. PMID 30139858. S2CID 52075037.
  11. ^ Wachsmuth, Henning; Naderi, Nona; Hou, Yufang; Bilu, Yonatan; Prabhakaran, Vinodkumar; Thijm, Tim; Hirst, Graema; Stein, Benno (2017). "Computational argumentation quality assessment in natural language" (PDF). Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. pp. 176–187.
  12. ^ Wachsmuth, Henning; Naderi, Nona; Habernal, Ivan; Hou, Yufang; Hirst, Graeme; Gurevych, Iryna; Stein, Benno (2017). "Argumentation quality assessment: Theory vs. practice" (PDF). Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. pp. 250–255.
  13. ^ Gretz, Shai; Friedman, Roni; Cohen-Karlik, Edo; Toledo, Assaf; Lahav, Dan; Aharonov, Ranit; Slonim, Noam (2020). "A large-scale dataset for argument quality ranking: Construction and analysis". Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. pp. 7805–7813.
  14. ^ Green, Nancy (2013). "Towards automated analysis of student arguments". International Conference on Artificial Intelligence in Education. Springer. pp. 591–594. doi:10.1007/978-3-642-39112-5_66.
  15. ^ Persing, Isaac; Ng, Vincent (2015). "Modeling argument strength in student essays" (PDF). Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. pp. 543–552.
  16. ^ Brilman, Maarten; Scherer, Stefan (2015). "A multimodal predictive model of successful debaters or how I learned to sway votes". Proceedings of the 23rd ACM international conference on Multimedia. pp. 149–158. doi:10.1145/2733373.2806245.
  17. ^ Potash, Peter; Rumshisky, Anna (2017). "Towards debate automation: a recurrent model for predicting debate winners" (PDF). Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pp. 2465–2475.
  18. ^ Santos, Pedro; Gurevych, Iryna (2018). "Multimodal prediction of the audience's impression in political debates". Proceedings of the 20th International Conference on Multimodal Interaction. pp. 1–6. doi:10.1145/3281151.3281157.
  19. ^ Wang, Lu; Beauchamp, Nick; Shugars, Sarah; Qin, Kechen (2017). "Winning on the merits: The joint effects of content and style on debate outcomes". Transactions of the Association for Computational Linguistics. 5. MIT Press: 219–232. arXiv:1705.05040. doi:10.1162/tacl_a_00057. S2CID 27803846.
  20. ^ a b Chohlas-Wood, Alex (2020). Understanding risk assessment instruments in criminal justice. Brookings Institution.
  21. ^ Angwin, Julia; Larson, Jeff; Mattu, Surya (23 May 2016). "Machine Bias". ProPublica. Archived from the original on 2021-10-04. Retrieved 2021-10-04.
  22. ^ Nissan, Ephraim (2017-08-01). "Digital technologies and artificial intelligence's present and foreseeable impact on lawyering, judging, policing and law enforcement". AI & Society. 32 (3): 441–464. doi:10.1007/s00146-015-0596-5. ISSN 1435-5655. S2CID 21115049.
  23. ^ Dressel, Julia; Farid, Hany (2018). "The accuracy, fairness, and limits of predicting recidivism". Science Advances. 4 (1): eaao5580. Bibcode:2018SciA....4.5580D. doi:10.1126/sciadv.aao5580. PMC 5777393. PMID 29376122.
  24. ^ Ferguson, Andrew Guthrie (2017). The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. New York: NYU Press. ISBN 9781479869978.
  25. ^ a b Molnar, Petra; Gill, Lex (2018). Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada's Immigration and Refugee System. Citizen Lab and International Human Rights Program (Faculty of Law, University of Toronto).
  26. ^ Ezzamouri, Naoual; Hulstijn, Joris (2018). "Continuous monitoring and auditing in municipalities". Proceedings of the 19th Annual International Conference on Digital Government Research: Governance in the Data Age. pp. 1–10. doi:10.1145/3209281.3209301.
  27. ^ a b c d e Thomas, Julian (2018). "Programming, filtering, adblocking: advertising and media automation". Media International Australia. 166 (1): 34–43. doi:10.1177/1329878X17738787. ISSN 1329-878X. S2CID 149139944. Q110607881.
  28. ^ Newman, N; Fletcher, R; Kalogeropoulos, A (2017). Reuters Institute Digital News Report (Report). Reuters Institute for the Study of Journalism. Archived from the original on 2013-08-17. Retrieved 2022-01-19.
  29. ^ a b Henman, Paul (2019-01-02). "Of algorithms, Apps and advice: digital social policy and service delivery". Journal of Asian Public Policy. 12 (1): 71–89. doi:10.1080/17516234.2018.1495885. ISSN 1751-6234. S2CID 158229201.
  30. ^ Henman, Paul (2017). "The Computer Says 'Debt': Towards A Critical Sociology Of Algorithms And Algorithmic Governance". Data for Policy 2017: Government by Algorithm? Conference, London. doi:10.5281/ZENODO.884116. S2CID 158228131.
  31. ^ a b EU Directorate-General for Research and Innovation (2020). Ethics of connected and automated vehicles: recommendations on road safety, privacy, fairness, explainability and responsibility. LU: Publications Office of the European Union. doi:10.2777/035239. ISBN 978-92-76-17867-5.
  32. ^ Federal Ministry of Transport and Digital Infrastructures. Ethics Commission's complete report on automated and connected driving. www.bmvi.de (Report). German Government. Archived from the original on 2017-09-04. Retrieved 2021-11-23.
  33. ^ Raats, Kaspar; Fors, Vaike; Pink, Sarah (2020-09-01). "Trusting autonomous vehicles: An interdisciplinary approach". Transportation Research Interdisciplinary Perspectives. 7: 100201. doi:10.1016/j.trip.2020.100201. ISSN 2590-1982. S2CID 225261480.
  34. ^ Andrejevic, Mark (2021). "Automated surveillance". Routledge handbook of digital media and communication. Leah A. Lievrouw, Brian Loader. Abingdon, Oxon: Taylor and Francis. ISBN 978-1-315-61655-1. OCLC 1198978596.
  35. ^ Pasquale, Frank (2016). Black box society: the secret algorithms that control money and information. Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-97084-7. OCLC 946975299.
  36. ^ Eubanks, Virginia (2018). Automating inequality: how high-tech tools profile, police, and punish the poor (First ed.). New York, NY. ISBN 978-1-250-07431-7. OCLC 1013516195.
  37. ^ a b Safiya Noble (2018), Algorithms of Oppression: How Search Engines Reinforce Racism, New York University Press, OL 19734838W, Wikidata Q48816548
  38. ^ Cath, Corinne (2018-11-28). "Governing artificial intelligence: ethical, legal and technical opportunities and challenges". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 376 (2133): 20180080. Bibcode:2018RSPTA.37680080C. doi:10.1098/rsta.2018.0080. PMC 6191666. PMID 30322996.
  39. ^ "EUR-Lex - 32016R0679 - EN - EUR-Lex". eur-lex.europa.eu. Retrieved 2021-09-13.
  40. ^ Brkan, Maja (2017-06-12). "AI-supported decision-making under the general data protection regulation". Proceedings of the 16th edition of the International Conference on Artificial Intelligence and Law. ICAIL '17. London, United Kingdom: Association for Computing Machinery. pp. 3–8. doi:10.1145/3086512.3086513. ISBN 978-1-4503-4891-1. S2CID 23933541.
  41. ^ Court of Justice of the European Union. "Request for a preliminary ruling from the Verwaltungsgericht Wien (Austria) lodged on 16 March 2022 – CK (Case C-203/22)".
  42. ^ a b Edwards, Lilian; Veale, Michael (May 2018). "Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"?". IEEE Security & Privacy. 16 (3): 46–54. arXiv:1803.07540. doi:10.1109/MSP.2018.2701152. ISSN 1540-7993. S2CID 4049746.
  43. ^ Binns, Reuben; Veale, Michael (2021-12-20). "Is that your final decision? Multi-stage profiling, selective effects, and Article 22 of the GDPR". International Data Privacy Law. 11 (4): 320. doi:10.1093/idpl/ipab020. ISSN 2044-3994.
  44. ^ a b Friedman, Batya; Nissenbaum, Helen (July 1996). "Bias in computer systems". ACM Transactions on Information Systems. 14 (3): 330–347. doi:10.1145/230538.230561. ISSN 1046-8188. S2CID 207195759.
  45. ^ Marwala, Tshilidzi (2017). Artificial intelligence and economic theory: Skynet in the market. Evan Hurwitz. Cham. ISBN 978-3-319-66104-9. OCLC 1004620876.
