Artificial Intelligence Argumentative Essay – With Outline

Published by Boni on May 4, 2023

Artificial Intelligence Argumentative Essay Outline

In recent years, Artificial Intelligence (AI) has become one of the most rapidly developing fields, and as its capabilities continue to expand, its potential impact on society has become a topic of intense debate. Different people hold different views regarding AI, making this topic challenging, especially for students writing an argumentative essay on AI. However, with the help of a trustworthy research paper writing service, students can guarantee themselves quality papers that will earn them good grades.


Topic: Artificial Intelligence Argumentative Essay

Introduction

Thesis: Artificial Intelligence cannot replace human intelligence no matter how sophisticated it may get.

Supporting arguments

Paragraph 1:

AI lacks emotional intelligence.

  • Emotional intelligence makes human beings perpetually relevant at work. 
  • Humans are social animals and they feel emotionally connected to other people.
  • AI cannot imitate emotional intelligence.

Paragraph 2:

AI can only operate using the data it is given.

  • The machine is useless if the data entered into it does not include a new field of work.
  • AI does not automatically adapt to any circumstance.
  • AI cannot easily mimic the capacity of the human brain to analyze, develop, innovate, maneuver, and collect information.

Paragraph 3:

AI is limited by its coding and its inability to think creatively.

  • AI’s coding prevents it from coming up with original solutions to problems.
  • Robots are designed to operate within their constraints.
  • AI cannot analyze the context, consider complex events critically, or create intricate plans.

Paragraph 4:

AI lacks soft skills.

  • Soft skills are a must for every employee.
  • Soft skills are alien to artificially intelligent computers.
  • Humans have an advantage over AI in the workplace thanks to soft skills.

Paragraph 5:

AI is a creation of humans and it is humans that make it work.

  • Without human intelligence, artificial intelligence would not exist.
  • The lines of code that are used to create AI are written by humans.
  • Humans provide the data that AI machines use to operate.

Paragraph 6:

While humans can develop relationships, AI will never achieve that.

  • Relationships are the foundation of many things.
  • Humans have to communicate and work together with fellow humans.
  • Machines cannot understand this emotional aspect of human behavior.

Paragraph 7:

AI will never express empathy, whereas humans can.

  • Humans can express their emotions.
  • AI cannot read other people’s emotions and display expressions.
  • While AI-based devices can mimic human speech, they do not have empathy and the human touch.

Paragraph 8:

AI requires fact-checking.

  • AI chatbots often make mistakes and need human moderators.
  • While AI can learn incredibly quickly, it does not have common sense.
  • AI cannot reason and challenge the truth to the same extent that humans can.

Paragraph 9:

AI cannot replace important human skills like critical thinking, time management, interpersonal skills, and analytical skills.

  • Machines lack the human critical-thought ability.
  • Machines are not as good at setting priorities or managing their time as humans.
  • Machines lack the human ability to evaluate data and develop conclusions.

Struggling to find a proper argumentative topic for your paper? Here is a well-researched list of argumentative research paper topics that will give you brilliant ideas.

Counterarguments and rebuttals

Paragraph 10:

Some people could argue that AI could soon catch up with and replace human intelligence.

  • This is because machines can now perform cognitively complicated tasks.
  • This could mean all work could be delegated to robots.
  • However, this is not true because AI lacks intuition, emotion, or cultural sensitivity.

Paragraph 11:

Some people also argue that AI will push people out of jobs in a few years to come.

  • AI use in the workplace is growing.
  • Many current positions will be replaced by AI.
  • However, the kind of work that AI can perform is often repetitive, requiring less sophisticated reasoning.
  • AI will never replace human intelligence or humans in the workplace.
  • Human intelligence is still far superior to what AI can accomplish.
  • AI’s abilities will enhance humanity rather than replace it.
  • As AI technology advances, more jobs may be created.

Learn step by step the best way to write a killer argumentative essay that will earn you an A+ grade.

Artificial Intelligence Argumentative Essay

Artificial Intelligence (AI) is the kind of intelligence displayed by machines. It is the capacity of a machine, specifically a computer, to replicate mental functions, and it stands in contrast to the natural intelligence of people. Numerous technologies are being created to teach computer systems how to plan, understand, learn from experience, recognize objects, make judgments, and solve problems. By mimicking these abilities, machines can carry out human-like tasks such as driving a car or holding a conversation. AI has ingrained itself into humans’ daily lives and is here to stay. It works alongside humans to meet societal needs efficiently and quickly, which is having a significant, beneficial impact on numerous industries and people’s lives. Some people feel that AI has become so efficient that it could replace humans in the future. However, Artificial Intelligence cannot replace human intelligence no matter how sophisticated it may get.

AI cannot replace human intelligence because it lacks emotional intelligence. Emotional intelligence is one distinctive quality that makes human beings perpetually relevant at work. The value of emotional intelligence in the workplace, particularly when working with clients, cannot be overstated. Humans are social animals, and one fundamental, indisputable desire that they have is to feel emotionally connected to other people. While AI tries to imitate human intelligence, emotional intelligence is more difficult to mimic than intellectual intelligence (Oluwaniyi, 2023). This is because emotional intelligence requires empathy and a profound understanding of the human condition, particularly suffering and pain (Oluwaniyi, 2023). AI is incapable of experiencing these feelings. Smart corporate executives and entrepreneurs are aware of the value of appealing to the emotions of their personnel and customers. Such a degree of human connection is impossible for machines to accomplish, though there are techniques for humans to develop their emotional intelligence. Systems with artificial intelligence are quick, logical, and precise. However, they lack intuition, empathy, and cultural awareness (Prajapat, 2022). It is these abilities that make humans more effective. Only a human being can read a person’s facial expression and know just what to say.

In the same breath, AI is only able to operate using the data it is given. Anything beyond that would be asking too much of it, and machines are not made that way. Therefore, the machine is useless if the data entered into it does not include a new field of work or if its algorithm does not account for unexpected events. These circumstances are frequent in the manufacturing and tech sectors, and AI builders are continuously looking for interim solutions (Oluwaniyi, 2023). One of the many prevalent misconceptions about artificial intelligence is the notion that technologies will automatically adapt to any circumstance. It follows that AI will never permeate every industry and reduce the need for human professional expertise (Oluwaniyi, 2023). AI cannot easily mimic human reasoning or the capacity of the human brain to analyze, develop, innovate, maneuver, and collect information.

AI is also limited by its coding and its inability to think creatively. AI’s coding prevents it from coming up with original solutions to a variety of developing issues. Robots are designed to operate within their constraints (Prajapat, 2022). A machine could think for itself someday. However, that will not happen anytime soon in the real world. Artificial intelligence cannot analyze the context, consider complex events critically, or create intricate plans (Prajapat, 2022). Teams and organizations connect with the outside world regularly. However, AI can only process information that has already been input into its system. Unlike humans, it cannot account for outside influences. In real work environments, it is important to have the flexibility to distill a vision and plan while coping with abrupt changes and skewed information sharing (Prajapat, 2022). Human intuition, a crucial component of daily work, especially for high-level executives, drives this skill.

Further, AI lacks soft skills. In the workplace, soft skills are a must for every employee. To name just a few, they include collaboration, focus on detail, creative and critical thinking, excellent communication skills, and interpersonal skills (Larson, 2021). Every industry needs these soft skills, so one must acquire them if one wants to thrive in one’s career. These are skills that humans learn and are expected to have. Learning them is beneficial for everybody, regardless of position. Business leaders and field personnel alike, in any industry, depend on these skills to succeed. Consequently, humans have an advantage over AI in the workplace thanks to soft skills. Soft skills, however, are alien to artificially intelligent computers. These soft skills are essential for professional development and progress, but AI cannot create them (Larson, 2021). Higher levels of emotional intelligence and thinking are needed to develop these skills.

Additionally, it is common knowledge that AI is a creation of humans and that it is humans who make it work. Without human intelligence, artificial intelligence would not exist; artificial intelligence is intelligence created by humans. The lines of code that are used to create AI are written by humans. Humans provide the data that AI machines use to operate (Larson, 2021). Humans are also the ones who operate these machines. Human services will become more and more in demand as AI applications expand. Someone has to design, build, run, and maintain these AI systems (Larson, 2021), and this can only be done by humans. These facts give one the confidence to refute any theories that AI will replace human intelligence.

Furthermore, while humans can develop relationships, AI will never achieve that. Relationships are the foundation of many things. Humans have to communicate and work together with fellow humans. Additionally, many people perform better in teams than they do individually. On the same note, teams produce better and more inventive results, according to numerous studies (Prajapat, 2022). The most crucial component of employee engagement is emotional commitment and ties with teammates, which demonstrate how much humans care about their work and the organizations they work for. Because people prefer to work with like-minded individuals, relationships also aid in locating partners and clients (Prajapat, 2022). However, machines are unable to understand this emotional aspect of human behavior.

In addition, AI will never express empathy, whereas humans can. Humans can express their emotions, including joy, satisfaction, grief, gratitude, hope, goodness, and optimism (Prajapat, 2022). Indeed, humans can feel and let out a virtually infinite range of emotions. Furthermore, it is impossible to imagine AI being able to read others’ emotions and display all expressions better than a human being can. Several work situations call for the establishment of trust and human-to-human connections in order to get workers to relax, open up, and communicate about themselves (Prajapat, 2022). While AI-based devices can mimic human speech, they do not have empathy and the human touch.

AI also falls short of the human intelligence level in that it requires fact-checking. The fact that AI chatbots, such as ChatGPT, often make mistakes and need human moderators to double-check their facts is a major issue. While AI can learn incredibly quickly, it does not have common sense and is simply unable to reason and challenge the truth to the same extent that humans can (Oluwaniyi, 2023). This is why technology users should probably refrain from asking AI chatbots certain questions. The lesson here is that fact-checking will probably become a serious career in the future since artificial intelligence cannot regulate itself and requires external supervision (Oluwaniyi, 2023). One might want to hone one’s research skills in the interim in anticipation of this potential future career path.

Further, AI cannot replace such important human skills as critical thinking, time management, interpersonal skills, and analytical skills. Machines are quite good at analyzing data, but they lack the human capacity for critical thought, a skill that is required in many professions, such as commerce, law, and medicine. On the same note, while machines are capable of performing tasks quickly and efficiently, they are not as good at setting priorities or managing their time as humans are (Cremer, 2020). Time management is essential in many different industries, including healthcare, education, and project management. Similarly, interpersonal skills, such as dispute resolution, active listening, and empathy, enable humans to develop important connections and interactions with fellow humans. These skills are required for many different professions, including human resource management, social work, and counseling. On another note, machines can analyze data and provide recommendations, but they do not have the human ability to evaluate the data and develop conclusions (Cremer, 2020). Analytical skills are essential in many different disciplines, including finance, engineering, and science.

Some people could argue that, with the rate at which AI is evolving, it could soon catch up with and replace human intelligence. The practice of humans outsourcing their work to machines began with routine, repetitive physical jobs such as weaving. Machines have advanced to the point where they can now perform tasks that could be considered cognitively complicated, such as solving mathematical equations, understanding speech and language, and writing. So, it appears that machines are prepared to duplicate not just humans’ physical work but also their mental work. In the twenty-first century, AI is improving to the point that it can perform many activities better than humans, making humans appear ready to delegate their intelligence to machines (Cremer & Kasparov, 2021). With this most recent trend, it appears as though everything will soon be automatable, which means that no work will be immune from being delegated to robots. This picture of the future of labor resembles a zero-sum contest in which there can be only one victor. However, this interpretation of how AI will affect the workplace is misleading. The contention that AI will replace human employees assumes that humans and machines share the same attributes and skills, yet this is untrue. AI-based systems are quicker, more precise, and always rational, but they lack intuition, emotion, and cultural sensitivity (Cremer & Kasparov, 2021). It is precisely these skills that humans have, which make them superior to machines.

Some people also argue that since AI may outperform humans in many different aspects, it will push people out of jobs in a few years to come. For instance, according to Larkin (2022), over 67 percent of American workers believe robots will take their jobs within fifty years. The use of artificial intelligence applications in the workplace is growing, and many current positions will be replaced by them. However, the kind of work that such applications can perform is often repetitive, requiring less sophisticated reasoning. As the world transitions to a more connected information and communication technology ecosystem, changing workplace demands will also create new positions for people. According to an analysis by the World Economic Forum, while machines using AI will displace roughly 85 million jobs by 2025, AI will also create about 97 million new employment positions in the same period (“The Future of Jobs Report 2020,” 2020). Thus, the concern should be how humans can collaborate with AI rather than having it replace them. This is what people should concentrate on, because it will be difficult, even impossible, to survive in the modern era without AI. Similarly, AI will not survive without the input of humans.

No matter the level to which AI may advance, it will not replace human intelligence, nor will it replace humans in the workplace. Human-like intelligence is still very distant from what the world’s AI technology can accomplish. Despite all the concerns, the majority of AI machines are built to be exceptionally good at tackling a specific problem in the setting of a certain data system. On the other hand, human imagination, wisdom, and contextual knowledge are essential to the success of AI, due to the straightforward fact that people will always be able to provide value that machines cannot. Thus, it can be summed up that AI’s abilities will enhance humanity rather than replace it. Because of this, top-tier and progressive firms have begun implementing AI to improve their experiences, productivity, and organizational agility. Overall, it can be seen that as AI technology advances, more jobs may be created.

Cremer, D. (2020). Leadership by algorithm: Who leads and who follows in the AI era? Harriman House.

Cremer, D., & Kasparov, G. (2021, March 18). AI should augment human intelligence, not replace it. Harvard Business Review. https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it

Larkin, C. (2022, September 27). AI won’t replace human intuition. Forbes. https://www.forbes.com/sites/forbestechcouncil/2022/09/27/ai-wont-replace-human-intuition/

Larson, E. J. (2021). The myth of artificial intelligence: Why computers can’t think the way we do. Harvard University Press.

Oluwaniyi, R. (2023, March 15). 7 reasons why artificial intelligence can’t replace humans at work. MUO. https://www.makeuseof.com/reasons-artificial-intelligence-cant-replace-humans/

Prajapat, J. (2022, May 17). Why A.I. artificial intelligence can’t replace humans? LinkedIn. https://www.linkedin.com/pulse/why-ai-artificial-intelligence-cant-replace-humans-jitendra-prajapat/

The Future of Jobs Report 2020. (2020, October 20). World Economic Forum. Retrieved May 2, 2023, from https://www.weforum.org/reports/the-future-of-jobs-report-2020/in-full/executive-summary


Argumentative Essay Example on Artificial Intelligence in MLA

Artificial Intelligence

As we discussed in our previous blog, argumentative essays are complicated to write. In most cases, having a look at examples of argumentative essays can help you construct ideas and write yours. In this blog, we present to you an example of an MLA argumentative essay on Artificial Intelligence as a solution more than a threat. When writing an argumentative essay, you have a chance to show your prowess in sharing with the audience why both sides of the issue are worth considering. Also, just as in a persuasive essay, you can persuade the readers to adopt your side of the argument. In this respect, either side of the argument on an argumentative essay topic is presented, including a counterargument. The conclusion should then make clear what is in the body of the essay.

Provided you have a great topic for your essay, sufficient and proper evidence to back your claims, and facts to refute the opponent's viewpoint, you can always write convincing arguments. A strong thesis is a must for an argumentative essay. So is the conclusion, which must stand out. Look at this top-grade argumentative essay example and learn the art.

Argumentative Essay Example: Artificial Intelligence: A Solution more than a Threat

The debate on the future of humanity in the age of computers remains hotly contested in the public, professional, and scholarly spheres. At the heart of the debate have been fears about the fast-growing field of computing referred to as artificial intelligence. Artificial intelligence, or AI, is a term that was originally coined in the 1950s by John McCarthy, and it simply means machine intelligence. It is the field of computer science that studies systems that act or behave in ways an observer would regard as intelligent, using models of intelligent human and animal behavior to solve sophisticated problems (Kaplan 1). Even though it is portrayed as a threat on account of job losses, AI is a promising solution in medical applications, where it offers efficiency and precision beyond human capability, and in disaster response.

Artificial intelligence (AI) has proven to be a solution to the natural disasters bound to affect different places globally. The success of any humanitarian intervention depends on quality information, which is at the heart of AI systems. For example, the Artificial Intelligence for Disaster Response (AIDR) platform has been applied in different catastrophes to enable coordination between machines and human intelligence in response operations (Imran et al. 159). During such events, AIDR allows for the coordination of drones, sensors, and robots to acquire, synthesize, and produce accurate information about the affected landscapes, thus making rescue easier and less time-consuming (Imran et al. 159-160). In 2015, it was used to mobilize volunteers during the Nepal earthquake and to support evacuation processes during the Chile earthquake (EKU). Therefore, artificial intelligence offers high precision and accuracy in solving tasks that are otherwise complicated and time-consuming for humans.

Apart from disaster response, artificial intelligence also plays a critical role in the field of medicine, including research, training, and the diagnosis of diseases. In fact, medical artificial intelligence deals with the construction of AI systems and programs that can make diagnosis and therapy recommendations easier (Moein xi). The medical field uses AI techniques such as expert systems and knowledge-based systems. These systems offer clinicians and other medical professionals data-mining capabilities that are used in interpreting complex diagnostic tests. Such tests and results are accurate since the AI systems integrate information from various sources to offer patient-specific therapy and treatment recommendations (Moein 2). AI-supported medical diagnosis is accurate and provides information that both patients and experts can use for effective decision-making. As such, it is evident that artificial intelligence has not only revolutionized the medical field but also promises to sustain it.

Despite being a savior to humankind in the fields of medicine and natural disaster response, AI presents the existential threat of job losses. Research predicts that artificial intelligence already poses, and will continue to pose, a serious threat to the labor market. The emergence of intelligent algorithms that control robots has led to the loss of jobs that are otherwise tiring and monotonous for humans (Kaplan 113). For example, artificial intelligence controls the robots that are used in the design and manufacture of vehicles, and the people formerly employed in that industry have lost their jobs. A study by researchers at Oxford University found that the recent emergence of machine learning and robotics will significantly affect the U.S. labor market, with 47% of jobs at risk of automation (Kaplan 118). Even so, not all jobs will be affected in their entirety. Rather, even the presence of AI in the workplace will require the support of experts, which is itself another frontier for job creation. In sum, even though AI poses a threat to the labor market, it creates an avenue for employment as well.

In conclusion, amidst the fear that artificial intelligence is a threat, either now or in the future, it is clear that it has substantial and critical benefits for humans. Using systems that mimic human and animal intelligence is the next frontier in solving problems within society. In fact, by definition, AI seeks to create solutions to complex problems. In this respect, its application in medicine could help create a breakthrough in finding cures for chronic diseases such as cancer and HIV that are affecting the masses. Furthermore, as man increases activity on the earth’s surface, nature is poised to fight back through natural disasters. In this case, AI comes in handy as a partner to help humans mitigate the aftermath of disasters. The only threat posed by AI is the loss of jobs, which again is predictable and has been a progressive issue; even so, AI presents an opportunity for job creation. Therefore, AI has more benefits than threats and stands as a solution rather than a threat.

Works Cited

EKU. "Using Artificial Intelligence for Emergency Management | EKU Online."  Safetymanagement.eku.edu . N.p., 2017. Web. 4 Sept. 2017.

Imran, Muhammad et al. "AIDR."  Proceedings of the 23rd International Conference on World Wide Web - WWW '14 Companion  (2014): 159-162. Web. 4 Sept. 2017.

Kaplan, Jerry.  Artificial Intelligence: What Everyone Needs To Know ? New York, NY, United States of America: Oxford University Press, 2016. Print.

Moein, Sara.  Medical Diagnosis Using Artificial Neural Networks . Hershey, PA: Medical Information Science Reference, 2014. Print.

Parting Shot!

When writing a research paper or an essay with a works cited page, it is always in MLA format. If it is an essay that requires endnotes and footnotes, then you should write it in Chicago style. Most of the argumentative essays we have helped students write are in APA or MLA.


On rare occasions, we also get requests for argumentative essays in Vancouver, Oxford, and Turabian. The good news is that if you still cannot wrap your head around writing an excellent argumentative essay, we can always help. You can choose to buy argumentative essays from Gradecrest. Be assured of quality, well-researched, and plagiarism-free argumentative essays.


Comprehensive Argumentative Essay Paper on Artificial Intelligence

By Rachel R.N. • February 22, 2024

Unraveling the Promise and Peril of Artificial Intelligence

Artificial Intelligence (AI) stands as a hallmark of human innovation, promising to revolutionize industries, economies, and even the fabric of society itself. With its ability to mimic cognitive functions, AI has penetrated various spheres of human existence, from healthcare to finance, transportation to entertainment. However, this technological marvel is not without its controversies and ethical dilemmas. This essay delves into the multifaceted landscape of artificial intelligence, exploring its potential, challenges, and implications for humanity.

AI holds the promise of unlocking unprecedented levels of efficiency and productivity across industries. In healthcare, AI-driven diagnostic tools can analyze vast amounts of medical data to detect diseases with higher accuracy and speed than human physicians. Moreover, AI-powered robotic surgeries enable minimally invasive procedures, reducing patient recovery times and risks. In manufacturing, AI-driven automation streamlines production processes, leading to cost savings and higher output. Self-driving cars, a pinnacle of AI innovation, promise safer roads and greater mobility for individuals, while also potentially reducing traffic congestion and emissions.

Furthermore, AI has revolutionized the way we interact with technology, enhancing user experiences through natural language processing and personalized recommendations. Virtual assistants like Siri and Alexa have become ubiquitous, simplifying tasks and providing timely information at our fingertips. AI-driven recommendation algorithms power platforms like Netflix and Spotify, catering to individual preferences and shaping our consumption habits.

Despite its transformative potential, AI also raises significant concerns regarding privacy, security, and the displacement of human labor. The proliferation of AI-powered surveillance systems raises alarms about encroachments on personal privacy and civil liberties. Facial recognition technology, for instance, poses risks of mass surveillance and wrongful identifications. Moreover, the reliance on AI for critical decision-making, such as in criminal justice or financial markets, raises questions about accountability and transparency. Biases embedded in AI algorithms can perpetuate social inequalities and discrimination, amplifying existing societal injustices.

Furthermore, the widespread adoption of AI-driven automation threatens to disrupt labor markets, leading to job displacement and widening economic disparities. Low-skilled workers are particularly vulnerable to being replaced by AI-powered systems, exacerbating socio-economic inequalities. Moreover, the concentration of AI capabilities in the hands of a few powerful corporations raises concerns about monopolistic practices and the concentration of wealth and power.

The ethical implications of AI extend beyond its practical applications to fundamental questions about the nature of intelligence, consciousness, and autonomy. As AI systems become increasingly sophisticated, they blur the lines between machine and human cognition, raising questions about the moral status of AI entities. Should AI systems be granted rights and responsibilities akin to human beings? Can AI possess consciousness and subjective experiences? These philosophical inquiries challenge our understanding of personhood and moral agency in the age of artificial intelligence.

Furthermore, the development and deployment of AI raise profound ethical dilemmas regarding accountability and control. Who should be held responsible when AI systems malfunction or make erroneous decisions with significant consequences? How can we ensure that AI aligns with human values and ethical principles? These questions underscore the importance of ethical frameworks and regulatory mechanisms to govern the development and use of AI technology responsibly.

In conclusion, artificial intelligence holds immense promise as a transformative force for human society, offering solutions to complex problems and augmenting human capabilities. However, its rapid advancement also poses significant challenges and ethical dilemmas that demand careful consideration. As we navigate the evolving landscape of AI, it is imperative to strike a balance between innovation and responsibility, ensuring that AI serves the collective good while upholding fundamental human values and rights. Only through thoughtful reflection, ethical deliberation, and inclusive governance can we harness the full potential of artificial intelligence for the betterment of humanity.

Owe, A., & Baum, S. D. (2021). Moral consideration of nonhumans in the ethics of artificial intelligence.  AI and Ethics ,  1 (4), 517-528. https://scholar.google.com/citations?user=lJxa2TEAAAAJ&hl=en&oi=sra

Heinrichs, B. (2022). Discrimination in the age of artificial intelligence.  AI & society , 1-12. https://link.springer.com/article/10.1007/s00146-021-01192-2


AI Should Augment Human Intelligence, Not Replace It

  • David De Cremer
  • Garry Kasparov


Artificial intelligence isn’t coming for your job, but it will be your new coworker. Here’s how to get along.

Will smart machines really replace human workers? Probably not. People and AI both bring different abilities and strengths to the table. The real question is: how can human intelligence work with artificial intelligence to produce augmented intelligence? Chess Grandmaster Garry Kasparov offers some unique insight here. After losing to IBM’s Deep Blue, he began to experiment with how a computer helper changed players’ competitive advantage in high-level chess games. What he discovered was that having the best players and the best program was less a predictor of success than having a really good process. Put simply, “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” As leaders look at how to incorporate AI into their organizations, they’ll have to manage expectations as AI is introduced, invest in bringing teams together and perfecting processes, and refine their own leadership abilities.

In an economy where data is changing how companies create value — and compete — experts predict that using artificial intelligence (AI) at a larger scale will add as much as $15.7 trillion to the global economy by 2030. As AI is changing how companies work, many believe that who does this work will change, too — and that organizations will begin to replace human employees with intelligent machines. This is already happening: intelligent systems are displacing humans in manufacturing, service delivery, recruitment, and the financial industry, consequently moving human workers towards lower-paid jobs or making them unemployed. This trend has led some to conclude that in 2040 our workforce may be totally unrecognizable.

  • David De Cremer is the Provost’s chair and professor in management and organizations at NUS Business School, National University of Singapore. He is the founder and director of the Centre on AI Technology for Humankind at NUS Business School and author of Leadership by Algorithm: Who Leads and Who Follows in the AI Era? (2020). Before moving to NUS, he was the KPMG endowed chaired professor in management studies and is currently an honorary fellow at Cambridge Judge Business School and a fellow at St. Edmund’s College, Cambridge University. From July 2023 onwards, he will be the new Dunton Family Dean of the D’Amore-McKim School of Business at Northeastern University. His website is www.daviddecremer.com.
  • Garry Kasparov is the chairman of the Human Rights Foundation and founder of the Renew Democracy Initiative. He writes and speaks frequently on politics, decision-making, and human-machine collaboration. Kasparov became the youngest world chess champion in history at 22 in 1985 and retained the top rating in the world for 20 years. His famous matches against the IBM supercomputer Deep Blue in 1996 and 1997 were key to bringing artificial intelligence, and chess, into the mainstream. His latest book on artificial intelligence and the future of human-plus-machine is Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (2017).



The big idea: Should we worry about artificial intelligence?

Could AI turn on us, or is natural stupidity a greater threat to humanity?

Ever since Garry Kasparov lost his second chess match against IBM’s Deep Blue in 1997, the writing has been on the wall for humanity. Or so some like to think. Advances in artificial intelligence will lead – by some estimates, in only a few decades – to the development of superintelligent, sentient machines. Movies from The Terminator to The Matrix have portrayed this prospect as rather undesirable. But is this anything more than yet another sci-fi “Project Fear”?

Some confusion is caused by two very different uses of the phrase artificial intelligence. The first sense is, essentially, a marketing one: anything computer software does that seems clever or usefully responsive – like Siri – is said to use “AI”. The second sense, from which the first borrows its glamour, points to a future that does not yet exist, of machines with superhuman intellects. That is sometimes called AGI, for artificial general intelligence.

How do we get there from here, assuming we want to? Modern AI employs machine learning (or deep learning): rather than programming rules into the machine directly we allow it to learn by itself. In this way, AlphaZero, the chess-playing entity created by the British firm Deepmind (now part of Google), played millions of training matches against itself and then trounced its top competitor. More recently, Deepmind’s AlphaFold 2 was greeted as an important milestone in the biological field of “protein-folding”, or predicting the exact shapes of molecular structures, which might help to design better drugs.

Machine learning works by training the machine on vast quantities of data – pictures for image-recognition systems, or terabytes of prose taken from the internet for bots that generate semi-plausible essays, such as GPT-2. But datasets are not simply neutral repositories of information; they often encode human biases in unforeseen ways. Recently, Facebook’s news feed algorithm asked users who saw a news video featuring black men if they wanted to “keep seeing videos about primates”. So-called “AI” is already being used in several US states to predict whether candidates for parole will reoffend, with critics claiming that the data the algorithms are trained on reflects historical bias in policing.
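To make that failure mode concrete, here is a deliberately tiny sketch – my own illustration, assuming Python with NumPy and scikit-learn, not any of the systems named above: a classifier fitted to historically skewed decisions simply learns the skew as if it were signal.

```python
# Minimal sketch of dataset bias leaking into a trained model (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                   # column 0: a legitimate merit score
group = (X[:, 1] > 0).astype(int)                # column 1 happens to encode group membership
y = ((X[:, 0] > 0) & (group == 0)).astype(int)   # historical approvals skewed against group 1

model = LogisticRegression().fit(X, y)
print(model.coef_)  # a large negative weight on the group-encoding feature:
                    # the model has faithfully learned the historical bias
```

Nothing in the fitting procedure distinguishes legitimate signal from inherited prejudice; that distinction has to be imposed by the humans curating the data.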

Computerised systems (as in aircraft autopilots) can be a boon to humans, so the flaws of existing “AI” aren’t in themselves arguments against the principle of designing intelligent systems to help us in fields such as medical diagnosis. The more challenging sociological problem is that adoption of algorithm-driven judgments is a tempting means of passing the buck, so that no blame attaches to the humans in charge – be they judges, doctors or tech entrepreneurs. Will robots take all the jobs? That very framing passes the buck because the real question is whether managers will fire all the humans.

The existential problem, meanwhile, is this: if computers do eventually acquire some kind of god‑level self-aware intelligence – something that is explicitly in Deepmind’s mission statement, for one (“our long-term aim is to solve intelligence” and build an AGI) – will they still be as keen to be of service? If we build something so powerful, we had better be confident it will not turn on us. For the people seriously concerned about this, the argument goes that since this is a potentially extinction-level problem, we should devote resources now to combating it. The philosopher Nick Bostrom, who heads the Future of Humanity Institute at the University of Oxford, says that humans trying to build AI are “like children playing with a bomb”, and that the prospect of machine sentience is a greater threat to humanity than global heating. His 2014 book Superintelligence is seminal. A real AI, it suggests, might secretly manufacture nerve gas or nanobots to destroy its inferior, meat-based makers. Or it might just keep us in a planetary zoo while it gets on with whatever its real business is.

AI wouldn’t have to be actively malicious to cause catastrophe. This is illustrated by Bostrom’s famous “paperclip problem”. Suppose you tell the AI to make paperclips. What could be more boring? Unfortunately, you forgot to tell it when to stop making paperclips. So it turns all the matter on Earth into paperclips, having first disabled its off switch because allowing itself to be turned off would stop it pursuing its noble goal of making paperclips.
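The logic of the thought experiment can be stated in a few lines of code – a toy sketch of my own, not anything from Bostrom: the objective contains no notion of “enough” and no cost for side effects, so the optimal policy is to consume everything available.

```python
# Toy illustration of a misspecified objective: "more paperclips is always
# better" never terminates of its own accord and prices side effects at zero.

def maximise_paperclips(matter: int) -> int:
    paperclips = 0
    while matter > 0:     # nothing in the objective ever says "stop"
        matter -= 1       # consume one unit of the world's matter...
        paperclips += 1   # ...to make one more paperclip
    return paperclips

print(maximise_paperclips(matter=10))  # all ten units become paperclips
```

A more capable optimiser doesn’t fix this; it just finds more matter.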

That’s an example of the general “problem of control”, subject of AI pioneer Stuart Russell’s excellent Human Compatible: AI and the Problem of Control, which argues that it is impossible to fully specify any goal we might give a superintelligent machine so as to prevent such disastrous misunderstandings. In his Life 3.0: Being Human in the Age of Artificial Intelligence, meanwhile, the physicist Max Tegmark, co-founder of the Future of Life Institute (it’s cool to have a future-of-something institute these days), emphasises the problem of “value alignment” – how to ensure the machine’s values line up with ours. This too might be an insoluble problem, given that thousands of years of moral philosophy have not been sufficient for humanity to agree on what “our values” really are.

Other observers, though, remain phlegmatic. In Novacene, the maverick scientist and Gaia theorist James Lovelock argues that humans should simply be joyful if we can usher in intelligent machines as the logical next stage of evolution, and then bow out gracefully once we have rendered ourselves obsolete. In her recent 12 Bytes, Jeanette Winterson is refreshingly optimistic, supposing that any future AI will be at least “unmotivated by the greed and land-grab, the status-seeking and the violence that characterises Homo sapiens”. As the computer scientist Drew McDermott suggested in a paper as long ago as 1976, perhaps after all we have less to fear from artificial intelligence than from natural stupidity.

Further reading

Human Compatible: AI and the Problem of Control by Stuart Russell (Penguin, £10.99)

Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark (Penguin, £10.99)

12 Bytes: How We Got Here, Where We Might Go Next by Jeanette Winterson (Jonathan Cape, £16.99)


May 25, 2023

Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not

Artificial intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity

By Tamlyn Hunt


“The idea that this stuff could actually get smarter than people.... I thought it was way off…. Obviously, I no longer think that,” Geoffrey Hinton, one of Google’s top artificial intelligence scientists, also known as “the godfather of AI,” said after he quit his job in April so that he could warn about the dangers of this technology.

He’s not the only one worried. A 2023 survey of AI experts found that 36 percent fear that AI development may result in a “nuclear-level catastrophe.” Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.

As a researcher in consciousness, I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.


Why are we all so concerned? In short: AI development is going way too fast.

The key issue is the profoundly rapid improvement in the conversational abilities of the new crop of advanced “chatbots,” or what are technically called “large language models” (LLMs). With this coming “AI explosion,” we will probably have just one chance to get this right.

If we get it wrong, we may not live to tell the tale. This is not hyperbole.

This rapid acceleration promises to soon result in “artificial general intelligence” (AGI), and when that happens, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google’s AlphaZero AI learned how to play chess better than even the very best human or other AI chess players in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times over.
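To illustrate the self-play loop in miniature – this is my own toy sketch, not AlphaZero’s actual training procedure, which uses deep networks and Monte Carlo tree search – a candidate policy is pitted against a frozen copy of the current best and promoted only when it clearly wins, so playing strength ratchets upward with no human in the loop.

```python
# Toy sketch of self-play with an evaluation/promotion gate (illustrative).
import random

def beats(a: float, b: float) -> bool:
    """Noisy match outcome: the stronger player wins more often."""
    return random.random() < a / (a + b)

best = 1.0                                  # stand-in for current policy strength
for generation in range(10):
    candidate = best * 1.3                  # stand-in for a learning update
    wins = sum(beats(candidate, best) for _ in range(1000))
    if wins > 550:                          # promote only on a clear win rate
        best = candidate
    print(f"generation {generation}: strength {best:.2f}")
```

The gate is the important part: improvement is measured only against the system’s own previous self, which is why no human games – and no human ceiling – are involved.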

A team of Microsoft researchers analyzing OpenAI’s GPT-4, which I think is the best of the new advanced chatbots currently available, said it had “sparks of advanced general intelligence” in a new preprint paper.

In testing, GPT-4 performed better than 90 percent of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10 percent for the previous GPT-3.5 version, which was trained on a smaller data set. The researchers found similar improvements in dozens of other standardized tests.

Most of these tests are tests of reasoning. This is the main reason why Bubeck and his team concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

This pace of change is why Hinton told the New York Times: “Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.” In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI, called regulation “crucial.”

Once AI can improve itself, which may not be more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI (which by definition can surpass humans in a broad range of activities) will—and this is what I worry about the most—be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies.

This is known as the “control problem” or the “alignment problem” (see philosopher Nick Bostrom’s book Superintelligence for a good overview) and has been studied and argued about by philosophers and scientists, such as Bostrom, Seth Baum and Eliezer Yudkowsky, for decades now.

I think of it this way: Why would we expect a newborn baby to beat a grandmaster in chess? We wouldn’t. Similarly, why would we expect to be able to control superintelligent AI systems? (No, we won’t be able to simply hit the off switch, because superintelligent AI will have thought of every possible way that we might do that and taken actions to prevent being shut off.)

Here’s another way of looking at it: a superintelligent AI will be able to do in about one second what it would take a team of 100 human software engineers a year or more to complete. Or pick any task, like designing a new advanced airplane or weapon system, and superintelligent AI could do this in about a second.

Once AI systems are built into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with the same degree of superintelligence, and will of course be able to replicate and improve themselves at a superhuman pace.

Any defenses or protections we attempt to build into these AI “gods,” on their way toward godhood, will be anticipated and neutralized with ease by the AI once it reaches superintelligence status. This is what it means to be superintelligent.

We won’t be able to control them because anything we think of, they will have already thought of, a million times faster than us. Any defenses we’ve built in will be undone, like Gulliver throwing off the tiny strands the Lilliputians used to try and restrain him.

Some argue that these LLMs are just automation machines with zero consciousness, the implication being that if they’re not conscious they have less chance of breaking free from their programming. Even if these language models, now or in the future, aren’t at all conscious, this doesn’t matter. For the record, I agree that it’s unlikely that they have any actual consciousness at this juncture—though I remain open to new facts as they come in.

Regardless, a nuclear bomb can kill millions without any consciousness whatsoever. In the same way, AI could kill millions with zero consciousness, in a myriad of ways, including potentially the use of nuclear bombs either directly (much less likely) or through manipulated human intermediaries (more likely).

So, the debates about consciousness and AI really don’t figure very much into the debates about AI safety.

Yes, language models based on GPT-4 and many other models are already circulating widely. But the moratorium being called for is to stop development of any new models more powerful than 4.0—and this can be enforced, with force if required. Training these more powerful models requires massive server farms and energy. They can be shut down.

My ethical compass tells me that it is very unwise to create these systems when we know already we won’t be able to control them, even in the relatively near future. Discernment is knowing when to pull back from the edge. Now is that time.

We should not open Pandora’s box any more than it already has been opened.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

The Case Against AI Everything, Everywhere, All at Once


I cringe at being called “Mother of the Cloud,” but having been part of the development and implementation of the internet and networking industry—as an entrepreneur, CTO of Cisco, and on the boards of Disney and FedEx—I am fortunate to have had a 360-degree view of the technologies that are at the foundation of our modern world.

I have never had such mixed feelings about technological innovation. In stark contrast to the early days of internet development, when many stakeholders had a say, discussions about AI and our future are being shaped by leaders who seem to be striving for absolute ideological power. The result is “Authoritarian Intelligence.” The hubris and determination of tech leaders to control society is threatening our individual, societal, and business autonomy.

What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just the productization and implementation of AI technology, but also the research.

Artificial Intelligence is not just chat bots, but a broad field of study. One implementation capturing today’s attention, machine learning, has expanded beyond predicting our behavior to generating content—called Generative AI. The awe of machines wielding the power of language is seductive, but Performative AI might be a more appropriate name, as it leans toward production and mimicry—and sometimes fakery—over deep creativity, accuracy, or empathy.

The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability: “…a sense that the future is just more of the present, … that there are no alternatives, and therefore nothing really to be done.” There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.


Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley’s economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language coopted from common values—democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction—removing any resistance to our acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn’t question whether the only way to build community, find like-minded people, or be heard, was through one enormous “town square,” rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It’s now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

While they talk about safety and responsibility, large companies protect themselves at the expense of everyone else. With no checks on their power, they move from experimenting in the lab to experimenting on us, not questioning how much agency we want to give up or whether we believe a specific type of intelligence should be the only measure of human value.

The different types and levels of risks are overwhelming, and we need to focus on all of them: the long-term existential risks, and the existing ones. Disinformation, supercharged by deep fakes, data privacy issues, and biased decision making continue to erode trust—with few viable solutions. We do not yet fully understand risks to our society at large such as the level and pace of job loss, environmental impacts, and whether we want opaque systems making decisions for us.

Deeper risks question the very aspects of humanity. When we prioritize “intelligence” to the exclusion of cognition, might we devolve to become more like machines? On the current trajectory we may not even have the option to weigh in on who gets to decide what is in our best interest. Eliminating humanity is not the only way to wipe out our humanity.

Human well-being and dignity should be our North Star—with innovation in a supporting role. We can learn from the open systems environment of the 1970s and 80s. When we were first developing the infrastructure of the internet, power was distributed between large and small companies, vendors and customers, government and business. These checks and balances led to better decisions and less risk.

AI everything, everywhere, all at once, is not inevitable, if we use our powers to question the tools and the people shaping them. Private and public sector leaders can slow the frenzy through acts of friction: simply not giving in to the “Authoritarian Intelligence” emanating out of Silicon Valley, and to our collective groupthink.

We can buy the time needed to develop impactful national and international policy that distributes power and protects human rights, and inspire independent funding and ethics guidelines for a vibrant research community that will fuel innovation.

With the right priorities and guardrails, AI can help advance science, cure diseases, build new industries, expand joy, and maintain human dignity and the differences that make us unique.


Two arguments against human-friendly AI

  • Original Research
  • Published: 09 May 2021
  • AI and Ethics, Volume 1, pages 435–444 (2021)

Ken Daley (ORCID: orcid.org/0000-0002-0060-7596)

The past few decades have seen a substantial increase in the focus on the myriad ethical implications of artificial intelligence. Included amongst the numerous issues is the existential risk that some believe could arise from the development of artificial general intelligence (AGI), an as-yet hypothetical form of AI that is able to perform all the same intellectual feats as humans. This has led to extensive research into how humans can avoid losing control of an AI that is at least as intelligent as the best of us. This ‘control problem’ has given rise to research into the development of ‘friendly AI’, a highly competent AGI that will benefit, or at the very least, not be hostile toward humans. Though my question is focused upon AI, ethics, and issues surrounding the value of friendliness, I want to question the pursuit of human-friendly AI (hereafter FAI). In other words, we might ask whether worries regarding harm to humans are sufficient reason to develop FAI rather than impartially ethical AGI, or an AGI designed to take the interests of all moral patients—both human and non-human—into consideration. I argue that, given that we are capable of developing AGI, it ought to be developed with impartial, species-neutral values rather than those prioritizing friendliness to humans above all else.



See, for example, Yudkowsky [27].

See, for example, Tarleton [22], Allen et al. [1], Anderson and Anderson [2], and Wallach et al. [26].

See, for example, Omohundro [16], Bostrom [4], ch. 12; Taylor et al. [23], Soares [21], and Russell [18].

See Armstrong et al. [3] and Bostrom [4], pp. 177–181.

As an example of a company aiming at the latter, see https://openai.com/charter/.

While ‘intelligence’ is notoriously difficult to define, Russell [18], p. 9 claims that agents are intelligent “to the extent that their actions can be expected to achieve their objectives”. According to Tegmark (2017), p. 50, intelligence is the “ability to accomplish complex goals”. And for Yudkowsky [25], intelligence is “an evolutionary advantage” that “enables us to model, predict, and manipulate regularities in reality”.

Central to explaining AGI’s move to ASI is ‘recursive self-improvement’, described in Omohundro [14].

This is consistent with Yudkowsky [12], p. 2, according to which: “The term ‘Friendly AI’ refers to the production of human-benefiting, non-human-harming actions in Artificial Intelligence systems that have advanced to the point of making real-world plans in pursuit of goals”.

With ‘considers the interests’ I’m anthropomorphizing for simplicity. I expect it to be a matter of controversy whether AGI of any sort can consider the interests of anything whatsoever.

See Regan [17], chapter 5 for a discussion of the notions of ‘moral patient’ and ‘moral agent’.

For opinions regarding when AGI will be attained, see Bostrom [4], pp. 23–24 and Müller and Bostrom [12].

See, for example, Bostrom [4], Kurzweil [11], Yudkowsky [7], Chalmers [5], Vinge [25], Good [9]. There are differing views on the timelines involved in the move from AGI to ASI. For a discussion of the differences between ‘hard’ and ‘soft takeoffs’ see, for example, Bostrom [4], chapter 4 (especially pp. 75–80), Yudkowsky [25], Yudkowsky [30], and Tegmark (2017), pp. 150–157.

IAI may favor particular species if species-neutral values dictate favoring some species over others. For example, it may be the case that while all animals are worthy of moral consideration, some species are worthy of a greater level of consideration than others.

Of course, another possibility is that AGI develops hostile values, in which case issues of human and non-human interests are likely moot.

Of course, it should be noted that while IAI may not be consistent with FAI, it is at least possible that IAI will be consistent with FAI. I take it that we are not in a position to know which is more likely with any degree of certainty.

The term ‘speciesism’, coined by Ryder [19], is meant to express a bias toward the interests of one’s own species and against those of other species.

By ‘moral patient’ I mean anything which is sentient or conscious and can be harmed or benefitted. A moral patient is anything toward which moral agents (i.e., those entities that bear moral responsibilities) can have responsibilities for its own sake. For present purposes, I will take the capacity to suffer as a reasonable sufficient (and possibly necessary) condition for being a moral patient.

By ‘possible’ here I don’t intend a distant, modal sense according to which there exists some possible world in which the relevant beings exist. I mean that, in this world, such beings could very well actually exist in the future given that we don’t exterminate the preceding species or beings.

Even if the goals, as specified, are consistent with human interests, ASI might take unintended paths toward the accomplishing of these goals, or it may develop subgoals (or instrumental goals) that are ultimately inconsistent with human interests. For the latter issue, see Omohundro [14, 15] and Bostrom [4], ch. 7.

I acknowledge that there is a debate to be had regarding what is ‘in the interest’ of a species. Nonetheless, I do not see the plausibility of my thesis turning on the choices one might make here.

In terms of FAI based upon values we believe to be consistent with human interests, the main problem involves the widely discussed ‘unintended consequences’. The worry stems from our inability to foresee the possible ways in which AGI might pursue the goals we provide it with. Granting that it will become significantly more intelligent than the brightest humans, it’s unlikely that we’ll be capable of discerning the full range of possible paths cognitively available to AGI for pursuing whatever goal we provide it. In light of this, something as powerful as AGI might produce especially catastrophic scenarios (see, for example, Bostrom [4], ch. 8 and Omohundro [15]).

As for FAI based upon what are, in fact, human-centric values, an initial problem arises when we consider that what we believe is in our interest and what is actually in our interest might be quite distinct. If so, how could we possibly go about developing such an AI? It seems that any hopeful approach to such an FAI would require our discovering the correct theory of human wellbeing, whatever that might happen to be. Nonetheless, for the purposes of this paper I want to grant that we are, in fact, capable of developing such an objectively human-friendly AI.

By ‘a set of impartial, species-neutral moral facts’ I mean simply that, given the assumption that the interests of all moral patients are valuable, there is a set of moral facts that follow. Basically, there is a set of facts that determines rightness and wrongness in any possible situation given the moral value of all moral patients, where this is understood in a non-speciesist (i.e., based upon morally relevant features rather than species-membership) way.

I thank an anonymous reviewer for this point.

Muehlhauser and Bostrom [12], p. 43.

Yudkowsky [29], p. 388.

Singer [20].

Singer [20], p. 6.

DeGrazia [7], p. 36.

Singer [20], p. 8.

See Singer [20], p. 20.

DeGrazia [7], pp. 35–36.

The arguments in the remainder of the paper will clearly still follow for proponents of the ‘equal consideration approach’. In fact, my conclusions may still follow on an even weaker anti-speciesist view according to which we ought to treat species as morally equal to humans (or of even greater moral worth than humans) if such beings evolve from current species (see Sect. 4 below).

See, for example, De Waal [8].

In addition, it’s also likely that there will be many cases in which, despite non-human interests receiving no consideration, such interests will remain consistent with human interests. I happily admit this. The point I’m making is that there will be cases where non-human interests will not be consistent with human interests and therefore will be disregarded by FAI.

See, for example, Bostrom [4], Yudkowsky [31], Omohundro [14, 15], Häggström [10], and Russell [18].

This might be accomplished by harvesting and altering their genetic information, then producing the new ‘versions’ via in vitro fertilization. This is outlandish, of course, but no more so than the scenarios suggested by many AI researchers regarding existential threats to humanity via unintended consequences.

See Omohundro [15] for a discussion of ‘basic AI drives’. Of these, the most relevant to the current point is ‘resource acquisition’. ‘Efficiency’ is another relevant subgoal, as AGI/ASI will become more efficient with regard to pursuing its goals as well as its use of resources.

It’s also important to recall that there’s every reason to believe that IAI, like FAI, will develop the basic AI drives presented in Omohundro [15].

I remind the reader that by ‘possible’ beings here I intend those that could very well actually exist in the future given that we don’t exterminate the relevant preceding beings, and not some logically distant, modal sense of beings.

In addition, given that such species could develop from currently existing species, it is not a major leap to accept that we ought to develop AGI with them in mind as well, even if one rejects the claim that currently existing species are now worthy of consideration.

Darwin [6], pp. 34–35.

See, for example, https://www.theguardian.com/environment/2018/oct/30/humanity-wiped-out-animals-since-1970-major-report-finds , https://www.ipbes.net/news/Media-Release-Global-Assessment and https://www.forbes.com/sites/trevornace/2018/10/16/humans-are-exterminating-animal-species-faster-than-evolution-can-keep-up/#451b4d6415f3 .

I would suggest that this is analogous to cases in which, when presented with a moral dilemma, children should defer to suitable adults to make decisions that will have morally relevant consequences.

In fact, it seems that beyond all of the foregoing, a sufficiently competent and powerful ASI could well fit the environment of the earth, as well as the universe beyond, to the most morally superior of possible biological beings. If it turns out that the optimal moral scenario is one in which the highest of possible moral beings exists and has its interests maximized, then we ought to develop IAI to bring about just this scenario, regardless of whether we are included in such a scenario. On the other hand, if we’re supposed to, morally speaking, develop that which will most benefit humans, then we are left not only scrambling to do so, but also hoping that there are no smarter beings somewhere in the universe working on the analogous project.

I thank an anonymous reviewer for this point as well.

Unfortunately, there is precedent in past human behavior for this attitude. For example, I expect that, with the benefit of hindsight, many believe that nuclear weapons ought not to have been created. The same can be said for the development of substances and practices employed in processes that continue to contribute to climate change. Nonetheless, global dismantling of nuclear weapons and moving away from practices that proliferate greenhouse gases remain far-off hopes.

If this is correct, then I would suggest not only that the foregoing provides support for the preferability of species-neutral AGI but that the scope of interests to be considered by AGI ought to be given far more attention than it currently receives.

Allen, C., Smit, I., Wallach, W.: Artificial morality: top-down, bottom-up, and hybrid approaches. Ethics Inf. Technol. 7, 149–155 (2006)

Anderson, M., Anderson, S.: Machine ethics: creating an ethical intelligent agent. AI Mag. 28(4), 15–26 (2007)

Armstrong, S., Sandberg, A., Bostrom, N.: Thinking inside the box: controlling and using an oracle AI. Mind. Mach. 22, 299–324 (2011)

Bostrom, N.: Superintelligence. Oxford University Press, Oxford (2014)

Chalmers, D.: The singularity: a philosophical analysis. J. Conscious. Stud. 17(9–10), 7–65 (2010)

Darwin, C.: The Descent of Man, and Selection in Relation to Sex. John Murray, London (1871)

DeGrazia, D.: Animal Rights: A Very Short Introduction. Oxford University Press, New York, NY (2002)

De Waal, F.: Chimpanzee Politics. Johns Hopkins University Press, Baltimore, MD (1998)

Good, I.J.: Speculations concerning the first ultraintelligent machine. In: Franz, L., Rubinoff, M. (eds.) Advances in Computers, vol. 6, pp. 31–88. Academic Press, New York (1965)

Häggström, O.: Challenges to the Omohundro–Bostrom framework for AI motivations. Foresight 21(1), 153–166 (2019)

Kurzweil, R.: The Singularity is Near: When Humans Transcend Biology. Penguin Books, New York (2005)

Muehlhauser, L., Bostrom, N.: Why we need friendly AI. Think 13(36), 41–47 (2014)

Müller, V., Bostrom, N.: Future progress in artificial intelligence: a survey of expert opinion. In: Fundamental Issues of Artificial Intelligence, pp. 555–572 (2016)

Omohundro, S.: The nature of self-improving artificial intelligence [steveomohundro.com/scientific-contributions/] (2007)

Omohundro, S.: The basic AI drives. In: Wang, P., Goertzel, B., Franklin, S. (eds.) Artificial General Intelligence 2008: Proceedings of the First AGI Conference, pp. 483–492. IOS, Amsterdam (2008)

Omohundro, S.: Autonomous technology and the greater human good. J. Exp. Theor. Artif. Intellig. 26(3), 303–315 (2014). https://doi.org/10.1080/0952813X.2014.895111

Regan, T.: The Case for Animal Rights. University of California Press, California (2004)

Russell, S.: Human Compatible: Artificial Intelligence and the Problem of Control. Viking, New York (2019)

Ryder, R.: http://www.criticalsocietyjournal.org.uk/Archives_files/1.SpeciesismAgain.pdf (2010)

Singer, P.: Animal Liberation. HarperCollins, New York, NY (2002)

Soares, N.: The value learning problem. In: Ethics for Artificial Intelligence Workshop at the 25th International Joint Conference on Artificial Intelligence (IJCAI-2016), New York, NY, USA, 9–15 July 2016 (2016)

Tarleton, N.: Coherent Extrapolated Volition: A Meta-Level Approach to Machine Ethics. The Singularity Institute, San Francisco, CA (2010)

Taylor, J., Yudkowsky, E., LaVictoire, P., Critch, A.: Alignment for Advanced Machine Learning Systems. Machine Intelligence Research Institute (2016)

Tegmark, M.: Life 3.0: Being Human in the Age of Artificial Intelligence. Alfred A. Knopf, New York, NY (2017)

Vinge, V.: The coming technological singularity: how to survive in the post-human era. Whole Earth Rev. 77 (1993)

Wallach, W., Allen, C., Smit, I.: Machine morality: bottom-up and top-down approaches for modelling human moral faculties. AI Soc. 22(4), 565–582 (2008). https://doi.org/10.1007/s00146-007-0099-0

Yudkowsky, E.: Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. The Singularity Institute, San Francisco, CA (2001)

Yudkowsky, E.: Artificial intelligence as a positive and negative factor in global risk. In: Bostrom, N., Cirkovic, M. (eds.) Global Catastrophic Risks, pp. 308–345. Oxford University Press, Oxford (2008)

Yudkowsky, E.: Complex value systems in friendly AI. In: Schmidhuber, J., Thórisson, K.R., Looks, M. (eds.) Artificial General Intelligence: 4th International Conference, AGI 2011, LNAI 6830, pp. 388–393 (2011)

Yudkowsky, E.: Intelligence Explosion Microeconomics. Technical Report 2013-1. Machine Intelligence Research Institute, Berkeley, CA (2013)

Yudkowsky, E.: There’s No Fire Alarm for Artificial General Intelligence (2017). https://intelligence.org/2017/10/13/fire-alarm/

Author information

Ken Daley, Philosophy Department, Southern Methodist University, Dallas, TX, USA

Conflict of interest: On behalf of all authors, the corresponding author states that there is no conflict of interest.


About this article

Daley, K. Two arguments against human-friendly AI. AI Ethics 1, 435–444 (2021). https://doi.org/10.1007/s43681-021-00051-6

Received: 25 January 2021 | Accepted: 17 March 2021 | Published: 09 May 2021 | Issue Date: November 2021

  • Artificial intelligence
  • Artificial general intelligence
  • Superintelligence
  • Existential risk
  • Control problem
  • Impartiality
  • Friendly AI

Artificial intelligence argumentative essay


The advancements in technology, which have resulted in the advent of machines and modern information technologies powered by artificial intelligence, have greatly influenced the workplace in the 21st century. In today’s world, computers, software, and algorithms simplify everyday tasks, making it impossible to imagine life without these machines (Wisskirchen, Biacabe and Bormann 9). Broadly defined, artificial intelligence revolves around the work processes of machines that would require intelligence if performed by human beings. It is the process of investigating intelligent problem-solving behavior and also the creation of intelligent computer systems.

While digitalization and automation processes continue to develop across the globe, more organizations are turning to the use of artificial intelligence and robotics to conduct their tasks. An important factor in developed countries is the degree to which technological development and technological devices shape labor markets. Over the years, artificial intelligence has become a new factor of production, driving growth through the creation of a new workforce, complementing the skills and abilities of already existing workforces, and driving innovation within the economy (Wisskirchen, Biacabe and Bormann 12).

In creating a new workforce, the wave of intelligence resulting from artificial intelligence has brought new features with the ability to automate complex tasks that require agility and adaptability. In complementing the skills of workforces, artificial intelligence is not only replacing already existing labor and capital but also enabling a more effective system of operation (Chui, Manyika and Miremadi 3). Artificial intelligence has also driven innovation within the economy by diffusing innovations and technological devices into the economy.

Artificial intelligence, which includes the use of robotics, has impacted workplaces both positively and negatively. The impacts of artificial intelligence in the workplace begin with its effects on the labor market. This advancement in technology has strongly affected both white-collar and blue-collar sectors. A third of current jobs, for example those requiring a bachelor’s degree from specific universities, can be performed by intelligent software. This means that a third of university graduates could lose their jobs to artificial intelligence. Even with this, artificial intelligence has resulted in considerable savings, especially with regard to the cost of products and the cost of labor.

In today’s world, especially within the industrial sector, more investors opt to use artificial intelligence and robotics. Decisions to replace human labor are influenced by the benefits that result from the use of artificial intelligence. These decisions are also influenced by the fact that artificial intelligence does not depend on external factors within the workplace. This, in turn, means that artificial intelligence, for example robotic and other computer systems, works in a more reliable and constant manner (24/7, depending on programming) and can operate even in danger zones. As a rule, artificial systems are more accurate than human beings and cannot be distracted by fatigue or other external factors (Ennals 3).


Through the use of intelligent systems, work can be synchronized and standardized to a greater extent. This results in improved work efficiency, transparency, and even better control of performance. Another major impact of artificial intelligence in the workplace concerns decision-making. Unlike human beings, the decision-making processes of machines and autonomous systems are guided by objective standards, which means that decisions are not emotionally based but are influenced by the existing facts. The use of robotics has also resulted in an improvement of productivity levels in organizations, mostly influenced by working time.

Artificial intelligence has also produced benefits for employees; these benefits mostly revolve around the fact that they do less manual and hard work. The same concept applies to typical back-office activities within the service sector. In this case, algorithms collect data automatically; this data is then transferred from purchasers to sellers to develop solutions to clients’ issues. In the service sector, the interface between sellers and buyers is set up automatically, relieving employees from manually entering data into information technology systems (Ennals 6). Intelligent machines and robots in the workplace also have lifesaving functions. For example, robots are used for medical diagnostics in hospitals and even for life support.

As is evident, artificial intelligence has had a significant impact on today’s workforce, and its positive impacts surpass the negative ones. Even so, artificial intelligence opens new opportunities for organizations, companies, and individuals. With it, human beings will become more adaptable and will create new jobs, improving the different sectors of the economy.

  • Chui, Michael, James Manyika and Mehdi Miremadi. “Four fundamentals of workplace automation.” McKinsey Quarterly (2015): 1-9.
  • Ennals, Richard. Artificial Intelligence and Human Institutions. Springer Science & Business Media, 2012.
  • Wisskirchen, G., et al. “Artificial Intelligence and Robotics and Their Impact on the Workplace.” IBA Global Employment Institute (2017): 9-40.


Artificial Intelligence: History, Challenges, and Future Essay

In the editorial “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence” by Michael Haenlein and Andreas Kaplan, the authors explore the history of artificial intelligence (AI), the current challenges firms face, and the future of AI. The authors classify AI into analytical, human-inspired, humanized AI, and artificial narrow, general, and superintelligent AI. They address the AI effect, which is the phenomenon in which observers disregard AI behavior by claiming that it does not represent true intelligence. The article also uses the analogy of the four seasons (spring, summer, fall, and winter) to describe the history of AI.

The article provides a useful overview of the history of AI and its current state. The authors provide a useful framework for understanding AI by dividing it into categories based on the types of intelligence it exhibits or its evolutionary stage. It addresses the concept of the AI effect, which is the phenomenon where observers disregard AI behavior by claiming that it does not represent true intelligence.

The central claim made by Michael Haenlein and Andreas Kaplan is that AI can be classified into different types based on the kinds of intelligence it exhibits or its evolutionary stage. The authors argue that AI has evolved significantly since its birth in the 1940s, but that there have also been ups and downs in the field (Haenlein). The evidence used to support this claim is the historical overview of AI. The authors also discuss the challenges faced by firms today and the future of AI. They make qualifications by acknowledging that only time will tell whether AI will reach Artificial General Intelligence and that early systems, such as expert systems, had limitations. If one takes their claims to be true, it suggests that AI has the potential to transform various industries, but that there may also be ethical and social implications to consider. Overall, the argument is well supported with evidence, the authors acknowledge the limitations of AI, and the piece stands as an informative overview of the history and potential of AI.

The article can be beneficial for the research on the ethical and social implications of AI in society. It offers a historical overview of AI, and this can help me understand how AI has evolved and what developments have occurred in the field. Additionally, the article highlights the potential of AI and the challenges that firms face today, and this can help me understand the practical implications of AI. The authors also classify AI into three categories, and this can help me understand the types of AI that exist and how they can be used in different contexts.

The article raises several questions that I would like to explore further, such as the impact of AI on the workforce and job displacement. The article also provides a new framework for looking at AI, and this can help me understand the potential of AI and its implications for society. I do not disagree with the authors’ ideas, and I do not see myself arguing against the ideas presented.

Personally, I find the topic of AI fascinating, and I believe that it has the potential to transform society in numerous ways. However, I also believe that we need to approach AI with caution and be mindful of its potential negative impacts. As the editorial suggests, we need to develop clear AI strategies and ensure that ethical considerations are taken into account. In this way, we can guarantee that the benefits of AI are maximized while minimizing its negative impacts.

Haenlein, Michael, and Andreas Kaplan. “ A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence .” California Management Review , vol. 61, no. 4, 2019, pp. 5–14, Web.




Tzu Chi Medical Journal, v.32(4); Oct–Dec 2020

The impact of artificial intelligence on human society and bioethics

Michael Cheng-Tek Tai

Department of Medical Sociology and Social Work, College of Medicine, Chung Shan Medical University, Taichung, Taiwan

Artificial intelligence (AI), known by some as industrial revolution (IR) 4.0, is going to change not only the way we do things and how we relate to others, but also what we know about ourselves. This article will first examine what AI is, discuss its impact on industrial, social, and economic changes for humankind in the 21st century, and then propose a set of principles for AI bioethics. IR 1.0, the IR of the 18th century, impelled a huge social change without directly complicating human relationships. Modern AI, however, has a tremendous impact both on how we do things and on the ways we relate to one another. Facing this challenge, new principles of AI bioethics must be considered and developed to provide guidelines for AI technology to observe, so that the world will benefit from the progress of this new intelligence.

What is artificial intelligence?

Artificial intelligence (AI) has many different definitions; some see it as a created technology that allows computers and machines to function intelligently. Some see it as a machine that replaces human labor, working for humans with more effective and speedier results. Others see it as “a system” with the ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [1].

Despite the different definitions, the common understanding of AI is that it is associated with machines and computers that help humankind solve problems and facilitate working processes. In short, it is an intelligence designed by humans and demonstrated by machines. The term AI is used to describe the functions of a human-made tool that emulates the “cognitive” abilities of the natural intelligence of human minds [2].

Along with the rapid development of cybernetic technology in recent years, AI has appeared in almost all areas of our lives, and some of it may no longer be regarded as AI because it has become so common in daily life that we are used to it, such as optical character recognition or Siri (speech interpretation and recognition interface) for information searching on computers [3].

Different types of artificial intelligence

From the functions and abilities provided by AI, we can distinguish two different types. The first is weak AI, also known as narrow AI, which is designed to perform a narrow task, such as facial recognition, an Internet Siri search, or driving a car. Many currently existing systems that claim to use “AI” are likely operating as weak AI focused on a narrowly defined, specific function. Although this weak AI seems helpful to human living, some still think weak AI could be dangerous, because weak AI could cause disruptions in the electric grid or damage nuclear power plants if it malfunctions.

The long-term goal of many researchers is to create strong AI, or artificial general intelligence (AGI), the speculative intelligence of a machine that has the capacity to understand or learn any intellectual task a human being can, thus assisting humans in unraveling the problems they confront. While narrow AI may outperform humans at specific tasks, such as playing chess or solving equations, its effect is still narrow. AGI, however, could outperform humans at nearly every cognitive task.

Strong AI is a different conception of AI: that it can be programmed to actually be a human mind, to be intelligent in whatever it is commanded to attempt, and even to have perception, beliefs, and other cognitive capacities that are normally only ascribed to humans [4].

In summary, we can see these different functions of AI [5, 6]:

  • Automation: What makes a system or process function automatically
  • Machine learning and vision: The science of getting a computer to act through deep learning, to predict and analyze, and to see through a camera, analog-to-digital conversion, and digital signal processing
  • Natural language processing: The processing of human language by a computer program, such as spam detection and instantly converting one language to another to help humans communicate (a minimal illustrative sketch follows this list)
  • Robotics: A field of engineering focusing on the design and manufacture of cyborgs, the so-called machine men. They are used to perform tasks for humans’ convenience, or tasks too difficult or dangerous for humans to perform, and can operate without stopping, such as on assembly lines
  • Self-driving cars: These use a combination of computer vision, image recognition, and deep learning to build automated control in a vehicle.
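To make the natural language processing item above concrete, here is a minimal, illustrative sketch of the kind of spam detector the list alludes to. It is not taken from the article: the toy training messages and the choice of a bag-of-words naive Bayes pipeline (scikit-learn's CountVectorizer and MultinomialNB) are assumptions made for brevity, one common baseline among many.

```python
# Minimal spam-detection sketch (illustrative only, not from the article):
# a bag-of-words naive Bayes classifier, a common NLP baseline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training data; a real system would use thousands of labeled messages.
messages = [
    "Win a free prize now",            # spam
    "Limited offer, claim your cash",  # spam
    "Meeting moved to 3pm today",      # ham
    "Can you review my draft?",        # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each message into a vector of word counts.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

# Fit the classifier and score a new, unseen message.
model = MultinomialNB()
model.fit(X, labels)

new_message = ["Claim your free cash prize"]
print(model.predict(vectorizer.transform(new_message)))  # likely ['spam']
```

The same pipeline shape (vectorize text, fit a model, predict) carries over to many of the language tasks the article mentions; production systems differ mainly in the scale of the data and the sophistication of the model.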

Do human beings really need artificial intelligence?

Is AI really needed in human society? It depends. If humans opt for a faster and more effective way to complete their work and to work constantly without taking a break, yes, it is. However, if humankind is satisfied with a natural way of living without excessive desires to conquer the order of nature, it is not. History tells us that humans are always looking for something faster, easier, more effective, and convenient to finish the task at hand; the pressure for further development therefore motivates humankind to look for new and better ways of doing things. Humankind, as Homo sapiens, discovered that tools could ease many of the hardships of daily living, and that through the tools they invented, they could complete work better, faster, smarter, and more effectively. The drive to create new things became the incentive of human progress. We enjoy a much easier and more leisurely life today because of the contributions of technology. Human society has been using tools since the beginning of civilization, and human progress depends on them. People living in the 21st century do not have to work as hard as their forefathers did because they have new machines to work for them. This all seems well and good, but as technology kept developing, a warning came early in the 20th century: Aldous Huxley cautioned in his book Brave New World that, with the development of genetic technology, humanity might step into a world in which we create a monster or a superhuman.

Besides, up-to-date AI is breaking into the healthcare industry too, assisting doctors in diagnosis, finding the sources of diseases, suggesting various ways of treatment, performing surgery, and predicting whether an illness is life-threatening [7]. A recent study by surgeons at the Children’s National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot as it performed soft-tissue surgery, stitching together a pig’s bowel, and the robot finished the job better than a human surgeon, the team claimed [8, 9]. This demonstrates that robotically assisted surgery can overcome the limitations of pre-existing minimally invasive surgical procedures and enhance the capacities of surgeons performing open surgery.

Above all, we see high-profile examples of AI, including autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art, playing games (such as chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays, etc. All these have made human life much easier and more convenient, so much so that we are used to them and take them for granted. AI has become indispensable; even if it is not absolutely needed, without it our world would be in chaos in many ways today.

The impact of artificial intelligence on human society

Negative impact

Questions have been asked: with the progressive development of AI, will human labor no longer be needed, as everything can be done mechanically? Will humans become lazier and eventually degrade to the stage where we return to our primitive form of being? The process of evolution takes eons to develop, so we would not notice the backsliding of humankind. But what if AI becomes so powerful that it can program itself to be in charge and disobey the orders given by its master, humankind?

Let us consider the negative impacts AI may have on human society [10, 11]:

  • A huge social change that disrupts the way we live in the human community will occur. Humankind has to be industrious to make its living, but with the service of AI, we can just program a machine to do things for us without even lifting a tool. Human closeness will gradually diminish as AI replaces the need for people to meet face to face to exchange ideas. AI will stand between people, as personal gatherings will no longer be needed for communication
  • Unemployment is next, because many jobs will be replaced by machinery. Today, many automobile assembly lines are filled with machinery and robots, forcing traditional workers out of their jobs. Even in supermarkets, store clerks will no longer be needed, as digital devices can take over human labor
  • Wealth inequality will be created, as the investors in AI will take up the major share of the earnings. The gap between the rich and the poor will widen, and the so-called “M”-shaped wealth distribution will become more obvious
  • New issues surface, not only in a social sense but also in AI itself, as an AI that has been trained in a given task can eventually reach a stage at which humans have no control, thus creating unanticipated problems and consequences. This refers to AI’s capacity, after being loaded with all the needed algorithms, to automatically function on its own course, ignoring the commands given by its human controller
  • The human masters who create AI may build in something racially biased or egocentrically oriented that harms certain people or things. For instance, the United Nations has voted to limit the spread of nuclear weapons for fear of their indiscriminate use to destroy humankind or to target certain races or regions for domination. AI could likewise be programmed to target certain races, or certain programmed objects, to carry out commands of destruction from its programmers, thus creating world disaster.

Positive impact

There are, however, many positive impacts on humans as well, especially in the field of healthcare. AI gives computers the capacity to learn, reason, and apply logic. Scientists, medical researchers, clinicians, mathematicians, and engineers, when working together, can design AI that is aimed at medical diagnosis and treatment, thus offering reliable and safe systems of health-care delivery. As health professionals and medical researchers endeavor to find new and efficient ways of treating diseases, not only can digital computers assist in analysis, but robotic systems can also be created to perform delicate medical procedures with precision. Here, we see the contributions of AI to health care [7, 11]:

Fast and accurate diagnostics

IBM's Watson computer has been used to diagnose with the fascinating result. Loading the data to the computer will instantly get AI's diagnosis. AI can also provide various ways of treatment for physicians to consider. The procedure is something like this: To load the digital results of physical examination to the computer that will consider all possibilities and automatically diagnose whether or not the patient suffers from some deficiencies and illness and even suggest various kinds of available treatment.

Socially therapeutic robots

Pets are recommended to senior citizens to ease their tension and reduce blood pressure, anxiety, and loneliness, and to increase social interaction. Now cyborgs have been suggested to accompany those lonely old folks, and even to help with some household chores. Therapeutic robots and socially assistive robot technology help improve the quality of life for seniors and the physically challenged [12].

Reduce errors related to human fatigue

Human error in the workforce is inevitable and often costly; the greater the level of fatigue, the higher the risk of errors occurring. AI technology, however, does not suffer from fatigue or emotional distraction. It avoids such errors and can accomplish its duties faster and more accurately.

Artificial intelligence-based surgical contribution

AI-based surgical procedures are now available for people to choose. Although this AI still needs to be operated by health professionals, it can complete the work with less damage to the body. The da Vinci surgical system, a robotic technology allowing surgeons to perform minimally invasive procedures, is available in most hospitals now. These systems enable a degree of precision and accuracy far greater than procedures done manually. The less invasive the surgery, the less trauma, blood loss, and patient anxiety it causes.

Improved radiology

The first computed tomography scanners were introduced in 1971. The first magnetic resonance imaging (MRI) scan of the human body took place in 1977. By the early 2000s, cardiac MRI, body MRI, and fetal imaging became routine. The search continues for new algorithms to detect specific diseases as well as to analyze the results of scans [9]. All these are contributions of AI technology.

Virtual presence

Virtual presence technology can enable distant diagnosis of disease. The patient does not have to leave his or her bed; using a remote presence robot, doctors can check on patients without actually being there. Health professionals can move around and interact almost as effectively as if they were present. This allows specialists to assist patients who are unable to travel.

Some cautions to be reminded of

Despite all the positive promise that AI provides, human experts are still essential and necessary to design, program, and operate AI, and to prevent unpredictable errors from occurring. Beth Kindig, a San Francisco-based technology analyst with more than a decade of experience in analyzing private and public technology companies, published a free newsletter indicating that although AI holds promise for better medical diagnosis, human experts are still needed to avoid the misclassification of unknown diseases, because AI is not omnipotent and cannot solve all problems for humankind. There are times when AI meets an impasse and, to carry on its mission, may just proceed indiscriminately, ending up creating more problems. Thus, vigilant oversight of AI’s functioning cannot be neglected. This reminder is known as keeping the physician in the loop [13].

The question of ethical AI was consequently brought up by Elizabeth Gibney in an article published in Nature, cautioning against bias and possible societal harm [14]. The Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, in 2020 raised the ethical controversies of applying AI technology, such as in predictive policing or facial recognition, where biased algorithms can end up hurting vulnerable populations [14]. For instance, such a system could be programmed to target a certain race or group as the probable suspects of crime or troublemaking.

The challenge of artificial intelligence to bioethics

Artificial intelligence ethics must be developed

Bioethics is a discipline that focuses on the relationships among living beings. Bioethics accentuates the good and the right in biospheres and can be categorized into at least three areas: bioethics in health settings, concerning the relationship between physicians and patients; bioethics in social settings, concerning relationships among humankind; and bioethics in environmental settings, concerning the relationship between man and nature, including animal ethics, land ethics, ecological ethics, etc. All of these are concerned with relationships within and among natural existences.

As AI arises, humans face a new challenge: establishing a relationship with something that is not natural in its own right. Bioethics normally discusses relationships within natural existences, either humankind or its environment, that are part of natural phenomena. But now we have to deal with something that is human-made, artificial, and unnatural, namely AI. Humans have created many things, yet never before have they had to think about how to relate ethically to their own creation. AI by itself is without feeling or personality. AI engineers have realized the importance of giving AI the ability to discern, so that it will avoid deviant activities causing unintended harm. From this perspective, we understand that AI can have a negative impact on humans and society; thus, a bioethics of AI becomes important to make sure that AI will not take off on its own by deviating from its originally designated purpose.

Stephen Hawking warned as early as 2014 that the development of full AI could spell the end of the human race. He said that once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate [15]. Humans, who are limited by slow biological evolution, could not compete and would be superseded. In his book Superintelligence, Nick Bostrom argues that AI will pose a threat to humankind: sufficiently intelligent AI can exhibit convergent behavior, such as acquiring resources or protecting itself from being shut down, and it might harm humanity [16].

The question is: do we have to think of bioethics for humans’ own created products, which bear no bio-vitality? Can a machine have a mind, consciousness, and mental states in exactly the same sense that human beings do? Can a machine be sentient and thus deserve certain rights? Can a machine intentionally cause harm? Regulations must be contemplated as a bioethical mandate for AI production.

Studies have shown that AI can reflect the very prejudices humans have tried to overcome. As AI becomes “truly ubiquitous,” it has tremendous potential to positively impact all manner of life, from industry to employment to health care and even security. Addressing the risks associated with the technology, Janosch Delcker, Politico Europe’s AI correspondent, said: “I don’t think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today. What’s crucially important, I believe, is to recognize that those biases exist and that policymakers try to mitigate them” [17]. The High-Level Expert Group on AI of the European Union presented its Ethics Guidelines for Trustworthy AI in 2019, suggesting that AI systems must be accountable, explainable, and unbiased. Three emphases are given:

  • Lawful: respecting all applicable laws and regulations
  • Ethical: respecting ethical principles and values
  • Robust: being adaptive, reliable, fair, and trustworthy from a technical perspective while taking into account its social environment [18].

Seven requirements are recommended [18]:

  • AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes
  • AI should be secure and accurate. It should not be easily compromised by external attacks, and it should be reasonably reliable
  • Personal data collected by AI systems should be secure and private. It should not be accessible to just anyone, and it should not be easily stolen
  • Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make
  • Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines
  • AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change”
  • AI systems should be auditable and covered by existing protections for corporate whistleblowers. The negative impacts of systems should be acknowledged and reported in advance.

From these guidelines, we can suggest that future AI must be equipped with human sensibility, or “AI humanities.” To accomplish this, AI researchers, manufacturers, and all industries must bear in mind that technology exists to serve, not to manipulate, humans and their society. Bostrom and Yudkowsky list responsibility, transparency, auditability, incorruptibility, and predictability [19] as criteria for a computerized society to think about.

Suggested principles for artificial intelligence bioethics

Nathan Strout, a reporter covering space and intelligence systems, recently reported that the intelligence community is developing its own AI ethics. The Pentagon announced in February 2020 that it is in the process of adopting principles for using AI as guidelines for the department to follow while developing new AI tools and AI-enabled technologies. Ben Huebner, chief of the Office of the Director of National Intelligence’s Civil Liberties, Privacy, and Transparency Office, said: “We’re going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient” [20]. Two themes have been suggested for the AI community to think more about: explainability and interpretability. Explainability is the concept of understanding how the analytic works, while interpretability is being able to understand a particular result produced by an analytic [20].
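As a purely illustrative way to pin down that distinction (not from the article): with a linear model, the learned coefficients speak to explainability, how the analytic works overall, while decomposing a single prediction into per-feature contributions speaks to interpretability, understanding one particular result. The features and data below are invented for the example.

```python
# Illustrative sketch: explainability vs. interpretability with a linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data with two invented features, e.g. [message_length, num_links].
X = np.array([[20, 0], [15, 1], [300, 9], [250, 7]], dtype=float)
y = np.array([0, 0, 1, 1])  # 0 = benign, 1 = flagged

model = LogisticRegression().fit(X, y)

# Explainability: how the analytic works in general --
# the global weight the model assigns each feature.
print("global coefficients:", model.coef_[0])

# Interpretability: understanding one particular result --
# each feature's contribution to this single decision score.
x = np.array([120.0, 4.0])
contributions = model.coef_[0] * x
print("per-feature contributions:", contributions)
print("decision score:", contributions.sum() + model.intercept_[0])
```

For non-linear models, the same roles are played by tools such as global feature-importance scores (explainability) and per-prediction attribution methods (interpretability).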

The principles suggested by scholars for AI bioethics are all well taken. Drawing from the bioethical principles of all the related fields of bioethics, I suggest four principles here for consideration to guide the future development of AI technology. We must, however, bear in mind that the main attention should still be placed on humans, because AI, after all, is designed and manufactured by humans. AI proceeds with its work according to its algorithms. AI itself cannot empathize, nor does it have the ability to discern good from evil, and it may commit mistakes in its processes. All the ethical quality of AI depends on its human designers; therefore, this is an AI bioethics and, at the same time, a trans-bioethics that bridges the human and material worlds. Here are the principles:

  • Beneficence: Beneficence means doing good, and here it means that the purpose and functions of AI should benefit the whole of human life, society, and the universe. Any AI that would perform destructive work on the bio-universe, including all life forms, must be avoided and forbidden. AI scientists must understand that the reason for developing this technology is to benefit human society as a whole, not any individual's personal gain. It should be altruistic, not egocentric, in nature
  • Value-upholding: This refers to AI's congruence with social values; in other words, the universal values that govern the order of the natural world must be observed. AI cannot elevate itself above social and moral norms and must be bias-free. Scientific and technological development must serve the enhancement of human well-being; that is the chief value AI must hold dear as it progresses
  • Lucidity: AI must be transparent, without any hidden agenda. It has to be easily comprehensible, detectable, incorruptible, and perceivable. AI technology should be made available for public auditing, testing, and review, and subject to accountability standards … In high-stakes settings like diagnosing cancer from radiologic images, an algorithm that cannot “explain its work” may pose an unacceptable risk. Explainability and interpretability are therefore absolutely required
  • Accountability: AI designers and developers must bear in mind that they carry on their shoulders a heavy responsibility for the outcome and impact of AI on the whole of human society and the universe. They must be accountable for whatever they manufacture and create.

CONCLUSION

AI is here to stay in our world, and we must try to enforce the AI bioethics of beneficence, value-upholding, lucidity, and accountability. Since AI is without a soul, its bioethics must be transcendental, to bridge the shortcoming of AI's inability to empathize. AI is a reality of the world. We must take note of what Joseph Weizenbaum, a pioneer of AI, said: we must not let computers make important decisions for us, because AI as a machine will never possess human qualities such as compassion and the wisdom to morally discern and judge [10]. Bioethics is not a matter of calculation but a process of conscientization. Although AI designers can upload all information, data, and programming for AI to function like a human being, it is still a machine and a tool. AI will always remain AI, without authentic human feelings or the capacity to commiserate. Therefore, AI technology must be advanced with extreme caution. As Von der Leyen said in White Paper on AI – A European approach to excellence and trust: “AI must serve people, and therefore, AI must always comply with people's rights…. High-risk AI that potentially interferes with people's rights has to be tested and certified before it reaches our single market” [21].

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.


Should AI be Regulated? The Arguments For and Against


Europe is on the right path — we must follow it...

Ever since OpenAI released ChatGPT into the wild in late 2022, the world has been abuzz with talk of Generative Artificial Intelligence and the future it could create. Capitalism's fanboys see the technology as a net positive: the logical continuation of the digital world, which has contributed to the creation of untold wealth… for a select few. Boomers, meanwhile, recall the best of 80s Sci-Fi, and fear we may be well on our way to creating our own HAL / SHODAN / Ultron / SkyNet / GLaDOS.

These are the loud minorities. Most people presented with the possibilities offered by Generative Artificial Intelligence understand that technology is merely a tool, without a mind of its own. The onus is on users to “do good” with it. And if that is not possible because “good” is inherently subjective… then democratic governments need to step in and regulate.

How (and if) this is to be done is still hotly debated . The European Union was first out of the gate with the proposed AI Act . It is an imperfect first draft, but has the benefit of being a real attempt at managing a highly disruptive technology rather than letting tech billionaires call the shots. Below is a summary of the proposed law, and the pros and cons of such regulations.


What is in the EU's AI Act?

The AI Act puts risk at the core of the discussion: “The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed.”

  • AI posing “unacceptable” levels of risk (behavioural manipulation, real-time and remote biometrics, social scoring…) will be banned
  • High-risk AI systems (relating to law enforcement, education, immigration…) “will be assessed before being put on the market and also throughout their lifecycle”
  • Limited-risk AI systems will need to “comply with minimal transparency requirements that would allow users to make informed decisions.”

Generative AI gets a special mention within the proposed regulation. Companies using the technology will have to:

  • Disclose AI-generated content
  • Design safeguards to prevent the generation of illegal content
  • Publish summaries of copyrighted data used for training

If that seems satisfyingly pragmatic while remaining overly broad, trust your instincts. Companies failing to comply could face fines of up to 6% of their annual turnover and be kept from operating in the EU. The region is estimated to represent between 20% and 25% of a global AI market projected to be worth more than $1.3 trillion within 10 years… which is why tech companies may say they'll leave… but never will. The law is expected to pass around 2024.
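To illustrate the mechanics rather than the law itself, here is a minimal sketch of the tiered logic described above. The tier names, example use cases, and the 6% fine ceiling are paraphrased from this summary; everything else (the exact mapping, the turnover figure) is a made-up assumption, not legal advice.

```python
# Illustrative sketch of the AI Act's risk-based logic; not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "assessed before market entry and throughout the lifecycle"
    LIMITED = "minimal transparency requirements"
    MINIMAL = "no additional obligations"

# Hypothetical mapping, paraphrasing the examples in the summary above.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time remote biometrics": RiskTier.UNACCEPTABLE,
    "law-enforcement tooling": RiskTier.HIGH,
    "education admissions scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def max_fine(annual_turnover_eur: float, rate: float = 0.06) -> float:
    """Ceiling on non-compliance fines: up to 6% of annual turnover."""
    return annual_turnover_eur * rate

if __name__ == "__main__":
    for use_case, tier in USE_CASE_TIERS.items():
        print(f"{use_case}: {tier.name} -> {tier.value}")
    # A company with a hypothetical EUR 10B annual turnover risks up to EUR 600M.
    print(f"Max fine on EUR 10B turnover: EUR {max_fine(10e9):,.0f}")
```

The point of the enum-based shape is that obligations attach to the tier, not the product, which is what lets the same rule set cover tools that did not exist when the law was drafted.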

Why Generative Artificial Intelligence should not be regulated

There has been plenty written about the fact that tech billionaires say they want AI to be regulated. Let's make one thing clear: that is a front. Mere PR. They do not want regulation, and if it comes, they want it on their own terms. Below are some of the best arguments presented by them and their minions over the past few months.

1. Stifling Innovation and Progress

The case could be made that regulations will slow down AI advancements and breakthroughs, and that not allowing companies to test and learn will make them less competitive internationally. However, we are yet to see definitive proof that this is true. Even if it were, the question would remain: is unbridled innovation right for society as a whole? Profits are not everything. Maybe the EU will fall behind China and the US when it comes to creating new unicorns and billionaires. Is that so bad, as long as we still have social safety nets, free healthcare, parental leave and 6 weeks of holidays a year? If having all this, thanks to regulations, means a multi-millionaire cannot become a billionaire, so be it.

The domestic competitiveness argument is a lot more relevant to the discussion at hand: regulation can create barriers to entry (high costs, standards, or requirements on developers or users) for new companies, strengthening the hand of incumbents. The EU has already seen this when implementing the GDPR. Regulations will need to carve out a space for very small companies to experiment, something that is already being discussed at EU level. And if they're so small, how much harm can SMEs do anyway, given the exponential nature of AI's power?

2. Complex and Challenging Implementation

Regulations relating to world-changing technologies can often be too vague or broad to be applicable. This can make them difficult to implement and enforce across different jurisdictions. This is particularly true when accounting for the lack of clear standards in the field. After all, what are risks and ethics if not culturally relative ?

This makes the need to balance international standards and sovereignty a particularly touchy subject. AI operates across borders, and its regulation requires international cooperation and coordination. This can be complex, given varying legal frameworks and cultural differences. This is what they will say.

There are, however, few voices calling for one worldwide regulation. AI is (in so many ways) not the same as the atomic bomb, whatever the doomsayers calling for a “New START” approach may claim. The EU will have its own laws, and so will other world powers. All we can ask for is a common understanding of the risks posed by the technology, and limited cooperation to cover blind spots within and between regional laws.

3. Potential for Overregulation and Unintended Consequences

Furthermore, we know that regulation often fails to adapt to the fast-paced nature of technology. AI is a rapidly evolving field, with new techniques and applications emerging regularly. New challenges, risks and opportunities continuously emerge, and we need to remain agile / flexible enough to deal with them. Keeping up with the advancements and regulating cutting-edge technologies can be challenging for governing bodies… but that has never stopped anyone, and the world still stands.

Meanwhile, governments must make sure that new industries (not considered AI) are not caught up in the scope of existing regulation, with unexpected consequences. We wouldn't want, for example, environmental work to suffer because a carbon capture system uses a technology akin to generative AI to recommend regions to target for cleanup.

It is important to avoid excessive bureaucracy and red tape… but that is not a reason to do nothing. The EU's proposed risk-based governance is a good answer to these challenges. Risks are defined well enough to apply to all people across the territory, while allowing for changes should the nature of artificial intelligence evolve.

There are, in truth, few real risks in regulating AI… and plenty of benefits.

Why Generative Artificial intelligence needs to be regulated

There are many reasons to regulate Gen. AI, specifically when looking through the prism of risks to under-privileged or defenceless populations. It can be easy not to take automated and wide-scale discrimination seriously… when you’ve never been discriminated against. Looking at you, tech bros.

1. Ensuring Ethical Use of Artificial Intelligence

Firstly (and obviously), regulation is needed to apply and adapt existing digital laws to AI technology. This means protecting the privacy of users ( and their data ). AI companies should invest in strong cyber-security capabilities when dealing with data-heavy algorithms… and forego some revenues as user data should not be sold to third parties. This is a concept American companies seem to inherently and wilfully misunderstand without regulation.

As mentioned in the AI Act, it is also crucial that tech companies remove the potential for bias and discrimination from algorithms dealing with sensitive topics. That entails A) ensuring none is purposefully injected and B) ensuring naturally occurring biases are removed to avoid reproducing them at scale. This is non-negotiable, and if regulatory crash-testing is needed, so be it.
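What “removing bias” means in practice is left open by the Act; one common, very simple starting point is measuring outcome gaps across groups. The sketch below computes a demographic parity difference on toy data; the metric choice and the data are my own illustrative assumptions, not anything prescribed by the regulation.

```python
# A minimal bias check: demographic parity difference on toy decisions.
# The metric and threshold are illustrative assumptions, not regulatory text.

def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rates across groups.

    outcomes: iterable of 0/1 decisions (e.g., loan approved).
    groups:   iterable of group labels, aligned with outcomes.
    """
    counts = {}
    for outcome, group in zip(outcomes, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + outcome, total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

# Toy example: group "a" is approved 80% of the time, group "b" only 20%.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(f"Demographic parity gap: {demographic_parity_difference(outcomes, groups):.2f}")  # 0.60
```

A gap this size would be a flag to investigate, not a verdict; real audits use several metrics and domain context.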

More philosophically, regulation can help foster trust, transparency, and accountability among users, developers, and stakeholders of generative AI. By having all actors disclose the source, purpose, and limitations of AIs’ outputs, we will be able to make better choices… and trust the choices of others. The fabric of society needs this.

2. Safeguarding Human Rights and Safety

Beyond the “basics”, regulation needs to protect populations at large from AI-related safety risks, of which there are many.

Most will be human-related risks. Malicious actors can use Generative AI to spread misinformation or create deepfakes . This is very easy to do , and companies seem unable to put a stop to it themselves — mostly because they are unwilling (not unable) to tag AI-generated content. Our next elections may depend on regulations being put in place… while our teenage daughters may ask why we didn’t do it sooner.

We also need to stop humans from doing physical harm to other humans using generative Artificial Intelligence: it has been reported that AI can be used to describe the best way to build a dirty bomb. Here again, if a company cannot prevent this to the best of its abilities, I see no reason for us to continue to allow it to exist in its current form.

All this is without even going into the topic of AI-driven warfare and autonomous weapons, the creation of which must be avoided at all costs. This scenario is however so catastrophic that we often use it to hide the many other problems with AI. Why concentrate on data privacy when Terminator is right around the corner, right? Don't let the doomers distract you from the very boring, but very real fact: without strong AI regulation tackling the above, society may die a death of a thousand cuts rather than one singular weaponised blow.

This is why we must ensure that companies agree to create systems that align with human values and morals. Easier said than done, but having a vision is a good start.

3. Mitigating Social and Economic Impact

There are important topics that the AI Act (or any other proposed regulation) does not completely cover. They will need to be further assessed over the coming years, but their very nature makes regulating without over-regulating difficult, though not any less needed.

Firstly, rules are needed to fairly compensate people whose data is used to train algorithms that will bring so much wealth to so few. Without this, we are only repeating the mistakes of the past, and making a deep economic chasm deeper. This is going to be difficult; there are few legal precedents to inform what is happening in the space today.

It will also be vital to address gen. AI-led job displacement and unemployment. Most roles are expected to be impacted by artificial intelligence, and with greater automation often comes greater unemployment. According to a report by BanklessTimes.com , AI could displace 800 million jobs (30% of the global workforce) by 2030.

It may be a macro-economic footnote for some (“AI could also shift job roles and create new ones by automating some aspects of work while allowing humans to focus on more creative or value-adding tasks”, they'll say), but it means decades of despair for others. We need a regulatory plan for those replaced and automated by AI (training, UBI…).

Finally, it will be important to continuously safeguard the world's economies against AI-driven economic monopolies. Network effects mean that catching up to an internet giant is almost impossible today, for lack of data or compute. Anti-trust laws have been left largely untouched for decades, and that can no longer go on. Regulations will not make us less competitive in this case; they may make the economy more so.

Final thoughts...

The regulatory game has just started. Moving forward, governments will need to collaborate and cooperate to establish broad frameworks while promoting and encouraging knowledge sharing and interdisciplinary collaboration.

These frameworks will need to be adaptive and collaborative, lest they become unable to keep up with AI's latest developments. Regular reviews and updates will be key, as will agile experimentation in sandbox environments.

Finally, public engagement and inclusive decision-making will make or break any rules brought forward. We need to involve diverse stakeholders in regulatory discussions while engaging the public in AI policy decisions. This is for us / them, and communicating that fact well will help governments counteract tech companies' lobbying.

The regulatory road ahead is long: today, no foundation LLM complies with the EU AI Act. Meanwhile, China's regulation concentrates on content control rather than risk, further tightening the Party's grip on free expression.

The regulatory game has just started. But… we’ve started, and that makes all the difference.



Should schools ban or integrate generative AI in the classroom?

By Regina Ta, Research Intern, The Brookings Institution, and Darrell M. West, Senior Fellow, Center for Technology Innovation, Douglas Dillon Chair in Governmental Studies

August 7, 2023

  • The advent of generative AI tools creates both opportunities and risks for students and teachers.
  • So far, public schools have followed one of three strategies, either banning generative AI, integrating it into curricula, or placing it under further review.
  • Moving forward, schools should develop guiding principles for the use of AI tools, provide training resources for educators, and empower educators to implement those principles.

The start of a new school year is fast approaching, but there is a major question left unresolved: What are schools going to do about generative AI? Since ChatGPT’s release on November 30, 2022, educators have been slow to address questions regarding whether to allow its use in the classroom and how the tool affects pedagogy, student learning, and creativity. Debates have been intense among stakeholders—including teachers, parents, students, and edtech developers—weighing the opportunities for personalized learning, enhanced evaluations, and augmented human performance against the possible risks of increased plagiarism and cheating, disinformation and discriminatory bias, and weakened critical thinking.

In this post, we review current responses to generative AI across K-12 public school districts and explore what remains to be done. Right now, public schools have varied between banning and integrating generative AI, and reviews are ongoing without any definitive guidelines. After sharing how public schools are addressing these options, we suggest a path forward in which schools establish guiding principles, provide training resources, empower educators to implement those principles, and help over-burdened districts that already are struggling with instructional, infrastructure, and financial challenges.

Three paths of action from public schools

Colleges and universities are largely deferring to faculty to determine policies on generative AI, so a lot of higher education is moving on an ad-hoc basis that varies by classroom, course, and professor. There is neither a common approach across universities, nor agreed-upon policies on how to move forward.

In the case of K-12 public school districts, most administrators generally are taking institutional action and implementing decisions for entire school districts. They are not delegating the decisions to teachers but are enacting across-the-board decisions that affect every teacher and student in their jurisdiction. Their efforts fall into one of three categories: banning, integrating, or reviewing generative AI.

Banning generative AI

By the end of May 2023, ChatGPT had joined YouTube, Netflix, and Roblox on lists of websites banned for school staff and students in various large U.S. school districts, where access would require special approval. The controversial movement to widely ban ChatGPT began when the two largest school districts in the nation—New York City Public Schools and Los Angeles Unified—blocked access to ChatGPT from school Wi-Fi networks and devices. Other districts soon followed suit.

Citing the Children’s Internet Protection Act (CIPA), Fairfax County Public Schools in Virginia restricted access to ChatGPT, since the chatbot may not be appropriate for minors. Texas’s Austin Independent School District cited similar concerns about academic integrity and child safety in its decision. Seattle Public Schools banned access to not only ChatGPT, but also six additional websites that provide AI-powered writing assistance, including Rytr , Jasper , and WordAI . While these were not full bans, student use restrictions affected teacher adoption and use.

However, one problem with the approach to ban or restrict ChatGPT is that students can always find ways to circumvent school-issued bans outside the classroom. ChatGPT and other such chatbot tools are accessible from home or non-school networks and devices. Students could also use other third-party writing tools, since it would be impractical to ban the growing number of websites and applications driven by generative AI. Besides, bans may only be band-aid solutions, distracting from the root causes of inefficacy in our school systems—for instance, concerns about ChatGPT-enabled cheating might instead point to a need for changing how teachers assess students.

But the biggest problem, by far, is that this approach could cause more harm than good, especially if the benefits as well as the opportunities are not weighed. For example, ChatGPT can enrich learning and teaching in K-12 classrooms, and a full ban might deny students and teachers potential opportunities to leverage the technology for instruction, or lesson development. Instead of universally banning ChatGPT, school districts should recognize that needs in adoption and use may vary by teacher, classroom, and student. Imagine using ChatGPT for a history vs. an art class, for students whose first language is not English, and for students with learning disabilities. Different issues can pop up in various use cases, so across-the-board bans, and even restrictions for that matter, could limit the ability of students and instructors to take advantage of relevant learning benefits, and in turn, have effects on adoption and use during postsecondary opportunities, or in the workplace.

Integrating generative AI

New York City Public Schools—the first school system to block access to ChatGPT—was also the first to reverse its ban. Within four months of the initial ban, the reversal came after convenings of tech industry representatives and educators to evaluate emerging risks and understand how to leverage ChatGPT’s capabilities for the better. To support teachers, NYC school district leaders have promised to provide resources developed by MIT (Massachusetts Institute of Technology), along with real-life examples of successful AI implementation from classrooms in the district that have been early adopters of technology. The district also plans to create a shared repository to track each school’s progress and share findings across schools.

Schools like Peninsula School District in Washington had already been working to integrate AI into their curricula, so when ChatGPT arrived, they were prepared: digital learning teams visited classrooms across different grade levels to share how language models work, as well as how to identify and leverage AI-generated content. Alliance City School District in Ohio is also embracing ChatGPT’s potential, resolving to proactively set boundaries on its usage to prevent misuse. In Lower Merion School District, students from Pennsylvania will hone their critical thinking skills by analyzing and editing AI-generated writing. In all the above cases, responsibly integrating generative AI as a teaching tool will require school districts to invest in proper oversight procedures and professional development for educators.

As such, Garden City Public Schools in New York has held training sessions for educators to demonstrate the capabilities of different generative AI tools, along with how to incorporate them effectively and tailor materials to students’ needs. Schools like Norway-Vulcan Area Schools in Michigan also plan to provide professional development opportunities for teachers, as well as strengthen the school community’s understanding of its honor code and plagiarism policies. The district has encouraged teachers to use Turnitin’s AI detector to check for cases of plagiarism, as they prepare to teach with generative AI in the fall.

There are some schools that are being more cautious as they integrate generative AI. In Texas, Mineral Wells Independent School District has adopted a more cautious approach, testing generative AI use in an experimental set of classrooms, and sending those instructors for general training in AI. Elsewhere in Texas, Eanes Independent School District is similarly focused on helping teachers make the most of generative AI, as they first try ChatGPT for administrative use cases, like scheduling or lesson planning.

Placing generative AI under review

While districts like Prince George’s County (MD), Jefferson County (KY), and Chicago (IL) have not banned ChatGPT, they have placed the chatbot under review . School districts that haven’t acted yet are watching and waiting, and most fall into this category. A recent survey by UNESCO (United Nations Educational, Scientific and Cultural Organization) found that less than 10% of schools have implemented guidance on generative AI, and of the schools with policies in place, 40% reported that the guidance was only communicated verbally—not in writing.

Just as we demand transparency from developers on how AI is built , we need to provide transparency for students and teachers on how AI can be used . Not enough schools have issued formal guidance on generative AI. A nationwide survey of K-12 teachers revealed that 72% have not received guidance on generative AI use. Generally, the longer schools delay their deliberation of bans or integrated use of new generative AI technologies, the higher the stakes—especially with a new school year on the horizon. As one of many generative AI tools being used for education, ChatGPT is increasingly accessed by students and teachers, and the absence of institutional policies may enable counterproductive use cases. Without an educational sandbox for generative AI usage, schools run the risk of having students deploy these rapidly developing technologies in unplanned ways with unintended outcomes affecting safety, equity, and learning.

School districts also have a critical opportunity to govern the use and misuse of generative AI tools before the academic year begins. Districts can shape its use and role in the future of education, instead of letting generative AI write it for them. In California, education policy researchers have made a similar call to action. More important, given national concerns around the digital divide in education, technology could help bridge learning gaps created by the lack of home internet. But that also means that schools must support the equitable distribution of generative AI’s benefits. Being proactive about the adoption and use of generative AI now will prepare school districts to set precedents for using future technologies in the classroom.

Recommendations for moving forward

Many classroom policies thus far are too narrowly focused on one tool: ChatGPT. Right now, there are thousands of generative AI products on the market, and more are being developed every week. School districts need to consider the use not just of ChatGPT, but of other generative AI applications, like Llama 2 or Bard, as well as widespread educational tools, like PowerSchool, Kahoot!, or Khan Academy.

In closing, we recommend strategies below for how school districts can approach generative AI governance, regardless of the product.

Establish guiding principles

In collaboration with edtech specialists, teachers, and students, school districts should develop a set of common, guiding principles for students and teachers around generative AI use. These guidelines should define the purpose and scope of generative AI in the classroom, along with acceptable use cases. These may also serve to establish privacy protections for students and formalize procedures for how teachers can supervise student usage, give feedback, and handle misuse.

Provide training resources for teacher professional development

Whether administrators and/or teachers fear generative AI may disrupt their classrooms or instead welcome its potential, school districts can offer accessible training that will equip all teachers to meet the present moment. These training opportunities do not have to be developed from scratch – districts can adapt online resources, like the Consortium for School Networking (CoSN)’s resource library and TeachAI, which also offers some guiding principles. When educators gain a robust understanding of generative AI, they can apply it productively in their classrooms, as well as support responsible use and understanding among their students.

Empower educators to implement principles

Recognizing that there is no one-size-fits-all policy on generative AI, districts should empower educators to implement institutional recommendations and enforce academic integrity within their classrooms – while applying the technologies in ways that serve their students. This approach models that taken by the Department of Education’s recent AI Report, which provides general guidance for learning and teaching with AI—without commenting on specific generative AI tools, due to their rapid progress. Teachers can reference district-level principles as a guiding framework, upon which they can design transparent, well-defined expectations for their students.

Help overburdened districts

Finally, we need to help overburdened and under-resourced districts that already are struggling with instructional, infrastructure, and financial challenges. There remain sharp inequities in public school resources, and modern technologies often accentuate those disparities. Some schools have good digital infrastructures, while others do not. The same also applies to the equitably available financial means to integrate new teaching tools in the classroom.

As schools consider how to utilize generative AI, we should be cognizant of these disparities and provide help to make sure marginalized districts are not left behind. Federal and state officials could earmark money for public school districts that receive minimal assistance on using generative AI, to help teachers, students, and administrators deal with its utilization. In the end, for districts to ensure diversity, equity, and inclusion in the deployment of these tools, school leaders ought to level the playing field for their use, especially before adoption and use become entrenched.

The proposed strategies are not required of school districts in any order. Rather, they are the beginning of both immediate and future conversations for how to understand how to leverage generative AI tools in educational settings.


Meta and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.



Blinken Warns of Disinformation Threat to Democracies

At an international forum, the secretary of state said artificial intelligence’s ability to disrupt the global flow of information could prove politically perilous during a year of elections.


By Michael Crowley

Michael Crowley is traveling in Asia with the secretary of state.

Secretary of State Antony J. Blinken warned on Monday that a malicious “flood” of disinformation was threatening the world’s democracies, fueled in part by the swift rise of artificial intelligence, which he said sows “suspicion, cynicism and instability” around the globe.

Mr. Blinken spoke in Seoul at the Summit for Democracy , a global gathering organized by the Biden administration, which has made countering the authoritarian models of nations like Russia and China a top priority.

Mr. Blinken, who as a young man worked briefly as a journalist, said that changes to the international flow of information may be “the most profound” that he has experienced in his career, and that anti-democratic forces were exploiting those changes.

“Our competitors and adversaries are using disinformation to exploit fissures within our democracies,” he said.

He noted that countries totaling nearly half of the world’s population, including India , will hold elections this year under the threat of manipulated information. He did not mention the United States’ presidential election in November, which many analysts say could be influenced by foreign-directed information campaigns like the one Russia waged in 2016 .

The U.S. promotes “digital and media literacy” programs abroad to help news consumers judge the reliability of content, Mr. Blinken said. But he cautioned that American adversaries were clever about laundering their propaganda and disinformation. China, for instance, has purchased cable television providers in Africa and then excluded international news channels from subscription packages, he said.

And increasingly powerful generative A.I. programs, Mr. Blinken said, can “fool even the most sophisticated news consumers.”

The State Department has urged social media platforms to take more action, including by clearly labeling A.I.-generated content. Meta, the parent company of Facebook, announced such a plan last month for content posted on Facebook and Instagram.

But experts at the conference said the challenge was enormous. Speaking on the subject later in the day, Oliver Dowden, the deputy prime minister of Britain, cited the example of an A.I.-generated image of Pope Francis in a puffer jacket that drew wide attention last year.

Mr. Dowden said that even though he understood that the image was fake, he retains a mental association between the pope and puffer jackets. Such images “influence your perceptions” subconsciously, he said.

Mr. Blinken spoke days after a new report commissioned by the State Department and released last week warned that artificial intelligence presents the world with “catastrophic risks.” The report said that an A.I. system “capable of superhuman persuasion” could undermine the democratic process.

It also cited an unnamed prominent A.I. researcher’s concern that “the model’s potential persuasive capabilities could ‘break democracy’ if they were ever leveraged in areas such as election interference or voter manipulation.”

Mr. Blinken discussed the threat of commercial spyware, which he said several governments had used to monitor and intimidate journalists and political activists. He said that six countries — Finland, Germany, Ireland, Japan, Poland and South Korea — were joining a U.S.-led coalition to ensure that commercial spyware “is deployed consistent with universal human rights and basic freedoms.”

President Biden issued an executive order a year ago barring the U.S. government from using commercial spyware, though not similar tools built by U.S. intelligence agencies.

This week’s Summit for Democracy is the third installment of a forum started in 2021 by Mr. Biden, who said during his State of the Union address this month that “freedom and democracy are under attack both at home and overseas.” The meetings are intended to help other nations promote best civil society practices and defend against political sabotage.

Mr. Blinken’s visit to Seoul occurred as North Korea conducted its latest test launch of several short-range ballistic missiles. The launches came days after joint U.S.-South Korean military exercises that North Korea denounced as provocative.

Mr. Blinken did not mention the launches in his public remarks, although the State Department condemned them.

Matthew Miller, a department spokesman, also said in a statement that Mr. Blinken and the South Korean foreign minister, Cho Tae-yul, discussed “Pyongyang’s military support for Russia’s war against Ukraine” and North Korea’s “increasingly aggressive rhetoric and activities.”

Michael Crowley covers the State Department and U.S. foreign policy for The Times. He has reported from nearly three dozen countries and often travels with the secretary of state.
