
Internet Regulation: The Responsibility of the People

Jan 31, 2020


ESSAY TOPIC: Is there an ethical responsibility to regulate the Internet? If so, why and to what extent? If not, why not?

Last summer, the Federal Trade Commission (FTC) fined Facebook $5 billion for violating the privacy rights of its users. Many argue, however, that Facebook users have little to expect when it comes to privacy. To an extent, that is true. By putting their personal information online, people have allowed their data to be harvested by companies through internet cookies and IP tracking. As users continue to share more and more of their lives online, the expectation of privacy will continue to diminish. The Facebook case is simply an indicator of a wider issue that has arisen in this new internet age.

It is not only companies or friends that see the information we post: data breaches have exposed users' financial information, and governments have used their power to monitor users' online activities. In an age when it seems like everything is shared online, it is more necessary than ever for people and governments alike to determine their responsibilities and to take control of the direction of the internet.

Governments, in particular, have a difficult road ahead as they determine how much of their citizens' internet lives they wish to monitor and regulate. Their choices will determine the types of regulations, whether it be censorship of information or media, tracking what websites or news users share, or even connecting their usage to their specific location. Some governments around the world have already taken such steps, justifying these actions by saying they are essential for the safety of their citizenry. However, many internet users question whether the government has their best interests at heart; rather, they believe that the government is trying to expand control over its citizens. In many cases, this is true. Therefore, it is the opinion of this writer that allowing governments to regulate, monitor, and control their citizens' internet activities grants them too much power; instead, the people should take steps to monitor themselves.

The past few years have seen a dramatic increase in the amount of internet "trolling," which refers to users who act in bad faith in their online interactions, relying on the anonymous nature of the internet to protect them. They might lie about who they are, try to upset others, or share false information often referred to as "fake news." Though this might seem harmless by itself, in large doses it can have an impact on world events. For instance, according to The Telegraph, during the 2016 British referendum on European Union membership, Russian trolls sought to influence voters. On the day of the election, they sent millions of tweets and online messages supporting the Leave campaign. Russian trolls were also found to be active during the 2016 American presidential election. Posing as American citizens, they set up spam accounts that posted news stories, many of which were false, in order to sway voter opinion.

Though it is difficult to estimate the effect these accounts had on the outcomes of the elections, they earned many followers and millions of retweets in which others shared the false information. This rise in "fake news" and "trolling" has led many governments to consider legislation that would track internet users' activities so they might better combat these abuses.

Some countries have already taken steps to clamp down on such abuses. South Korea, for instance, has for years used an internet authentication process that ties a person's internet profile to a phone number and real name. This past year, Australia enacted legislation in response to the Christchurch massacre in New Zealand; the legislation gives the government the authority to remove content it deems too violent or offensive from media and social websites. Other countries, like the United Kingdom, have enacted similar laws to mixed reactions.

While the countries listed above might have their citizens' best interests at heart, one need look no further than China to see how such controls might be abused by governments.

For years, China has heavily regulated and censored the internet its citizens experience, a system often referred to as "the Great Firewall of China." For an internet company to become available in China, it must first tailor its website so it does not conflict with Chinese interests. For instance, Google, one of the largest companies in the world, famously scrubbed internet searches of references to any brutality or wrongdoing by the Chinese government before it was allowed to operate in China. In addition to censoring the type of media available, China has also taken steps to monitor its citizens' actions online. To obtain internet access, users must register their names and phone numbers. In December, the government planned to roll out a new system that requires users to register their faces to get service. While these regulations and this censorship have been criticized in the past, the scrutiny has increased in recent months due to the protests and subsequent crackdown in Hong Kong. There, the Chinese government has used its internet regulation to track protestors and block access to helpful internet applications. Apple, for instance, has removed an application used by protestors to track police movements from its app store. China has also blocked the use of privacy protection programs, called virtual private networks (VPNs), so users cannot hide their internet activities. Clearly, citizens cannot always rely on their governments to regulate the internet in their best interests.

Ultimate responsibility lies where it always has: with the people. Governments might have to answer to the people, but they move slowly or are heavy-handed in their approach. Citizens must take control of regulating the internet. In many instances, users already have, through fact-checking and economic pressure. The rise of "trolling" and "fake news" has encouraged the growth of internet fact-checking sites that provide information on the truthfulness of news stories, posts made by public figures, and even speeches given in real time. This allows users to sift through and separate the fake news from the real. Users also have economic power when it comes to the internet. Companies like Twitter have recently bowed to pressure from users, monitoring their services more closely for bots and troll accounts. Already, they have suspended millions of accounts found to be spreading false or harmful information. These actions have come about because of user complaints.

The internet, more than anything, brings people together. In its short history, it has already become the greatest source of information and sharing the world has ever known, and it has become this because of the creativity, ingenuity, and contributions of regular people. As the internet was created by the people for the people, should it not also be controlled by the people? Governments may have their role to play, but it needs to be at the behest of their citizens, not the other way around.

Earlier, I mentioned that the FTC fined Facebook $5 billion. How much does that concern its CEO Mark Zuckerberg, who is worth over $70 billion? Probably not much.

However, at the same time, Facebook lost over 15 million subscribers, a substantial loss in an industry where growth is everything. What concerns Facebook more, the power of governments to levy fines, or the power of their users to leave, thereby making the platform obsolete? As always, the power belongs to the people.

Works Cited:

Al-Heeti, Abrar. "Facebook Lost 15 Million US Users in the Past Two Years, Report Says." CNET, 6 Mar. 2019, www.cnet.com/news/facebook-lost-15-million-us-users-in-the-past-two-years-report-says/.

"Apple Bans Hong Kong Protest Location App." BBC News, BBC, 3 Oct. 2019, www.bbc.com/news/technology-49919459.

Ellis, Megan. "The 8 Best Fact-Checking Sites for Finding Unbiased Truth." MakeUseOf, 30 Sept. 2019, www.makeuseof.com/tag/true-5-factchecking-web

"Google in China: Internet Giant 'Plans Censored Search Engine'." BBC News, BBC, 2 Aug. 2018, www.bbc.com/news/technology-45041671.

Griffiths, James. "Governments Are Rushing to Regulate the Internet. Users Could End up Paying the Price." CNN, Cable News Network, 8 Apr. 2019, edition.cnn.com/2019/04/08/uk/internet-regulation-uk-australia-intl-gbr/index.html.

Thompson, Nicholas, and Issie Lapowsky. "How Russian Trolls Used Meme Warfare to Divide America." Wired, Conde Nast, 17 Dec. 2018, www.wired.com/story/russia-ira-propaganda-senate-report.

Perper, Rosie. "Chinese Citizens Will Soon Need to Scan Their Face before They Can Access Internet Services or Get a New Phone Number." Business Insider, 10 Oct. 2019, www.businessinsider.com/china-to-require-facial-id-for-internet-and-mobile-services-2019-10.

Popken, Ben. "Russian Trolls Went on Attack during Key Election Moments." NBCNews.com, NBCUniversal News Group, 14 Feb. 2018, www.nbcnews.com/tech/social-media/russian-trolls-went-attack-during-key-election-moments-n827176.

Shepardson, David. "Facebook to Create Privacy Panel, Pay $5 Billion to U.S. to Settle Allegations." Reuters, Thomson Reuters, 24 July 2019, www.reuters.com/article/us-facebook-ftc/facebook-to-create-privacy-panel-pay-5-billion-to-u-s-to-settle-allegations-idUSKCN1UI2GC.

Timberg, Craig, and Elizabeth Dwoskin. "Twitter Is Sweeping out Fake Accounts, Suspending More than 70 Million in 2 Months." Chicago Tribune, 7 July 2018, www.chicagotribune.com/business/ct-twitter-removes-fake-accounts-bots-20180706-story.html.

Woollacott, Emma. "Russian Trolls Used Islamophobia To Whip Up Support For Brexit." Forbes, Forbes Magazine, 1 Nov. 2018, www.forbes.com/sites/emmawoollacott/2018/11/01/russian-trolls-used-islamophobia-to-whip-up-support-for-brexit/#3f409b7b65f2.

Field, Matthew, and Mike Wright. "Russian Trolls Sent Thousands of pro-Leave Messages on Day of Brexit Referendum, Twitter Data Reveals." The Telegraph, Telegraph Media Group, 17 Oct. 2018, www.telegraph.co.uk/technology/2018/10/17/russian-iranian-twitter-trolls-sent-10-million-tweets-fake-news/.

Yoon, Julia. "South Korea and Internet Censorship." The Henry M. Jackson School of International Studies, 11 July 2019, jsis.washington.edu/news/south-korea-internet-censorship/.


The Internet and Ethical Values

The Internet has become an integral part of modern society, revolutionizing communication, commerce, and the exchange of information. From its humble beginnings in the 1960s as the ARPANET, a network connecting universities and research institutions, to the vast web of interconnected devices, platforms, and people it is today, the Internet has transformed the way we live, work, and interact. This technological marvel has brought with it unparalleled access to knowledge, fostered global connections, and facilitated rapid innovation. However, along with these benefits, the Internet has also raised new ethical questions and challenges that humanity must grapple with. Through these discussions, we aim to better understand how we might act in a digital world, engaging with both the positive and negative aspects of the Internet and the responsibilities it entails.

i. The Role of Morality in Cyberspace:

As the Internet becomes increasingly pervasive, questions about the role of morality in cyberspace have emerged. Richard A. Spinello, in his book "Cyberethics," discusses the importance of ethical considerations in our online interactions. He argues that cyberspace is not a lawless frontier but rather a space where ethical values should guide behavior. Spinello emphasizes that individuals, organizations, and governments must all take responsibility for their actions online, ensuring that they adhere to ethical principles and promote the common good.

Lawrence Lessig, a legal scholar, identified four constraints on human behavior, which he believes also apply to the Internet. These constraints include the law, social norms, market forces, and architecture (technology). According to Lessig, each of these factors has the power to shape how we act online. For instance, the architecture of the Internet itself can facilitate or hinder certain behaviors, such as anonymity or privacy. Social norms dictate what is considered acceptable in online interactions, while market forces drive the economic incentives that underpin the development and use of digital technologies.

Lessig’s four constraints serve as a reminder that the Internet is a complex ecosystem in which various forces interact to shape our online behavior. As such, it is crucial to recognize the ethical implications of these forces and strive to create a balance that supports the moral values we cherish.

In this context, it is also essential to examine the theories and principles that can help guide ethical behavior in cyberspace. The next section discusses some of the key ethical values and frameworks that are relevant to the digital age.

ii. Ethical Values for the Digital Age:

In order to navigate the ethical challenges posed by the Internet and digital technologies, it is essential to establish a set of ethical values and frameworks that can guide our behavior in the digital age. Several philosophers and scholars have proposed different ethical theories and principles that can help us assess the ethical implications of technology and inform our actions online.

James Moor, a philosopher of technology, proposed a list of core human goods that can be used to assess the ethical implications of technology. These goods include life, knowledge, play, aesthetic experience, sociability, practical reasonableness, and religion. Moor argued that the purpose of technology, including the Internet, should be to enhance these goods, and any technological development that detracts from them should be critically examined. By considering the impact of the Internet on these core human goods, we can better understand the ethical consequences of our online actions and strive to create a digital environment that promotes human flourishing.

Building on Moor’s work, John Finnis developed a “thicker” version of these core human goods. Finnis emphasized that the purpose of technology should be to enhance human flourishing and identified seven basic forms of human flourishing: life, knowledge, practical reasonableness, friendship, play, aesthetic experience, and religion. According to Finnis, these goods are intrinsically valuable, and technology should be designed and used to promote them, rather than undermine them. Finnis’s framework offers a more comprehensive understanding of the relationship between technology and human well-being, allowing us to evaluate the ethical implications of the Internet more effectively.

In contrast, Jacques Ellul, a French philosopher, presented a more disturbing vision of technology, claiming that it leads to a loss of freedom and autonomy. He believed that technology creates a “technological society” in which human values are subordinated to the demands of efficiency and progress. In this view, the Internet may exacerbate these concerns by facilitating surveillance, information overload, and an erosion of privacy.

Another approach to understanding ethical values in the digital age is through the lens of traditional ethical theories, such as utilitarianism, contractarianism, pluralism, and new natural law. Utilitarianism, for example, focuses on maximizing overall happiness or utility, and in the context of the Internet, could be applied to assess the benefits and harms of online actions and policies. Contractarianism, on the other hand, emphasizes mutual agreements and social contracts as the basis for ethical behavior, while pluralism acknowledges the diversity of ethical values and the need for negotiation and compromise in the digital age.

iii. Contemporary Issues:

The Internet’s rapid development has given rise to numerous contemporary ethical issues that require careful consideration and analysis. Some of these concerns relate to privacy, surveillance, free speech, the digital divide, online harassment, and the ethical implications of artificial intelligence.

Privacy is a significant concern in the digital age, as vast amounts of personal information are collected and stored online by various entities, including governments and corporations. This has led to debates about the appropriate balance between individual privacy rights and the benefits of data collection for purposes such as targeted advertising, national security, and public health. Max Weber’s concept of the “Iron Cage” warns of the potential dangers of a society dominated by rationalization and bureaucracy, which can lead to a loss of individuality and freedom. In the context of the Internet, this concept can be applied to the increasing surveillance and control facilitated by technology.

Free speech is another crucial issue in the digital age, as the Internet has become a primary platform for the expression and exchange of ideas. While the democratizing potential of the Internet has been celebrated, concerns have also been raised about the spread of misinformation, fake news, and online radicalization. The question of how to balance the values of free speech and the need to prevent harm caused by malicious or false information is a complex ethical challenge that demands thoughtful deliberation.

The digital divide refers to the unequal access to digital technologies and resources, both within and between societies. As the Internet becomes more central to economic, social, and political life, those who lack access to digital resources are at an increasing disadvantage. Addressing the digital divide is an urgent ethical issue, as it relates to social justice and the equitable distribution of opportunities in the digital age.

Online harassment and cyberbullying have also emerged as significant ethical concerns, as the Internet has, unfortunately, provided a platform for malicious behavior and abuse. The anonymity and perceived lack of accountability online can exacerbate these issues, leading to real-world consequences for victims, such as psychological distress and even suicide. Developing effective strategies to address online harassment and promote a more civil and respectful online environment is an important ethical challenge for the digital age.

Finally, the ethical implications of artificial intelligence (AI) are increasingly relevant, as AI systems become more integrated into our lives through applications such as search engines, social media algorithms, and autonomous vehicles. AI raises numerous ethical concerns, including questions about privacy, fairness, accountability, and the potential displacement of human labor. As AI technology continues to advance, it is essential to consider its ethical implications and develop policies and guidelines that ensure its responsible and equitable use.

In conclusion, the Internet and digital technologies have brought about numerous ethical challenges that require thoughtful analysis and action. By engaging with these contemporary issues and grounding our understanding in ethical values and frameworks, we can work towards creating a more just and moral digital world.


Published by Adrian Camilleri, content creator and founder of PhilosophyMT.

Encyclopedia for Writers

Digital Ethics

Because ethics refers to the way groups and individuals relate to, treat, and resolve issues with each other, digital ethics encompasses how users and participants in online environments interact with each other and with the technologies and platforms used to engage. How does an online discussion board community handle flaming? Is it right to give support to pirating sites? What images are appropriate for re-tweeting? Just how private should privacy policies be when agreeing to Terms of Service?

As with any ethical issue or moral dilemma, there is contention and a variegated palette of opinions based upon people’s politics, personalities, and purposes. From a rhetorical perspective, digital ethics is approached by asking a series of questions relating to textual and visual communication:

  • What language and tone are appropriate for a given situation?
  • What are the guidelines that govern a given online community?
  • How are sources used, remixed, and/or altered for a given audience? How are these sources to be referenced or cited fairly?
  • How do users portray themselves online, whether through social media, gaming, avatars, or other means? How is online ethos developed and maintained?
  • How do users approach the increasingly blurred line between private and public? What constitutes (a) public? 

There is no set way, no strict code of conduct for how to communicate or engage online any more than there is a strict code of conduct for how to hold a large, face-to-face conversation on campus (McKee 429).[1] There are, however, conventions to consider. It is useful to think of ethics as the "appropriate" methods of acting and relating to others in a given environment. Guidance or governance for effective online communication exists only through general patterns of experience that accumulate over time. Beginning emails with a respectful salutation or filtering images you are tagged in on Facebook are not examples of following rules that are inherently true; rather, they are useful conventions of digital ethics to abide by because of the patterns of responses to rude emails or inappropriate online photo albums: a lack of response and a lack of a job, respectively. In light of the conventional and theoretical nature of netiquette, under the larger umbrella of digital ethics, one should not aim to memorize a list of netiquette ordinances (although these can prove beneficial in some circles), but aim to understand through a rhetorical lens what digital ethics are and why they are important.

One of the most immediate reasons why digital ethics are important is that how we present, indeed construct, our persona(s) affects the way in which our communication and intentions will be received. The notion that individual ethics impact our arguments is nothing new. Much of how we understand and categorize argumentation today stems from Aristotle's "appeals," which are generally understood as the means of persuasion: how we support our arguments for specific audiences. In Aristotle's Rhetoric, we see that there are three overarching appeals used to classify how we argue: logic (logos), emotion (pathos), and the character of the speaker (ethos). (For further information about these rhetorical appeals, see "Rhetorical Appeals.") These appeals are not represented hierarchically, not in any order of importance; no, Aristotle penned that the most articulate, effective communicators successfully wove elements of all three appeals into their speech. Skilled orators were to provide logical proofs of their claims, embedding them in a larger concern to judge and alter the emotional disposition of the audience, all the while portraying themselves as credible, reliable authorities with only good will for the public. This rhetorical balancing act places as much importance on the logic and coherence of an argument as on the speaker uttering the argument. To appear credible, one must "display (i) practical intelligence (phronêsis), (ii) a virtuous character, and (iii) good will" (Rhetoric II.1, 1378a6ff).[2] Ultimately, the validity of proof, of scientific and political claims, rests upon the character of the individual.

Of course, there needs to be some reinterpreting of the Aristotelian appeals for the modern communicator. No longer do we as humans construct arguments and share them sporadically in public symposia. Our understanding of rhetoric asserts that most of what we do and say is argumentative in some form, and the ubiquity of technology today means that what we do and say are more "public" than ever before. Communicators in the twenty-first century must constantly think about how to represent themselves in physical and online communities. Facebook profile pages, for example, represent a form of "building character," not in the sense that people become emotionally stronger through overcoming adversity, but in the sense that profile pages on social media sites literally figure as constructions of the self, as digital representations of character, of Aristotelian ethos.

Consider what a Facebook profile reveals about its owner:

  • What social and political organizations do I ascribe to? Click on the About link.
  • What activities do I engage in? Check my Groups and Likes.
  • With whom do I associate? Browse my photo albums and scroll my friends list.
  • What are my beliefs? Read my status updates.

An important part of maintaining a solid digital ethos is critically reflecting on your choices of online self-representation and whether or not these choices reflect your goals as a student and as a professional. If your goal is to get a job after graduation, then your argument, as evidenced through your résumé, cover letter, and email correspondence, is that you are the best candidate for the job. Your résumé constitutes the logical proofs of your claim, while your cover letter may engage in altering the employer's emotional disposition. If the employer wants to see who you are as a person, and whether or not they might want to interact with you on a daily basis for a lengthy period of time, they might want to know more about your character. Social media sites often reveal meaningful insights into a person's character; and, if online self-presentation is a core component of rhetoric, then how well will your arguments stand?

______________________________________________________________________________________

[1] McKee, H. (2002). "'YOUR VIEWS SHOWED TRUE IGNORANCE!!!': (Mis)Communication in an Online Interracial Discussion Forum." Computers and Composition 19: 411-434.

[2] Aristotle. Rhetoric.



The Internet’s ethical challenges

Should you Google your clients? Should you ‘friend’ a student on Facebook? APA’s Ethics Director Stephen Behnke answers those questions and more.

By Sara Martin

Monitor Staff

July/August 2010, Vol 41, No. 7

Print version: page 32


No form of client communication is 100 percent guaranteed to be private. Conversations can be overheard, e-mails can be sent to the wrong recipients, and phone conversations can be listened to by others.

But in today’s age of e-mail, Facebook, Twitter and other social media, psychologists have to be more aware than ever of the ethical pitfalls they can fall into by using these types of communication.

“It’s easy not to be fully mindful about the possibility of disclosure with these communications because we use these technologies so often in our social lives,” says Stephen Behnke, PhD, JD, director of APA’s Ethics Office. “It’s something that we haven’t gotten into the habit of thinking about.”

The Monitor sat down with Behnke to discuss the ethical aspects of the Internet for psychology practitioners and how to think about them.

Does the APA Ethics Code guide practitioners on social media?

Yes. The current Ethics Code was drafted between 1997 and 2002. While it doesn’t use the terms “social media,” “Google” or “Facebook,” the code is very clear that it applies to all psychologists’ professional activities and to electronic communication, which of course social media is.

As we look at the Ethics Code, the sections that are particularly relevant to social media are on privacy and confidentiality, multiple relationships and the section on therapy. The Ethics Code does not prohibit all social relationships, but it does call on psychologists to ask, “How does this particular relationship fit with the treatment relationship?”

Is the APA Ethics Office seeing any particular problems in the use of social media?

Everyone is communicating with these new technologies, but our ethical obligation is to be thoughtful about how the Ethics Code applies to these communications and how the laws and regulations apply.

For example, if you are communicating with your client via e-mail or text messaging, those communications might be considered part of your client’s record. Also, you want to consider who else might have access to the communication, something the client him- or herself may not be fully mindful of. When you communicate with clients, the communication may be kept on a server so anyone with access to that server may have access to your communications. Confidentiality should be front and center in your thinking.

Also, consider the form of communication you are using, given the kind of treatment you are providing. For example, there are two very different scenarios from a clinical perspective: In one scenario, you’ve been working with a client face-to-face and you know the client’s clinical issues. Then the client goes away on vacation and you have one or two phone sessions, or a session or two on Skype. A very different scenario is that the psychologist treats a client online, a client he or she has never met or seen. In this case, the psychologist has to be very mindful of the kind of treatment he or she can provide. What sorts of issues are appropriate to treat in that manner? How do the relevant jurisdiction’s laws and regulations apply to the work you are doing?

That’s an example of how the technology is out in front of us. We have this wonderful new technology that allows us to offer services to folks who may never have had access to a psychologist. At the same time, the ethical, legal and regulatory infrastructure to support the technology is not yet in place. A good deal of thought and care must go into how we use the technology, given how it may affect our clients and what it means for our professional lives.

APA needs to be involved in developing that ethical, legal and regulatory infrastructure and needs to be front and center on this.

What do you want members to know about using Facebook?

People are generally aware that what they put on their Facebook pages may be publicly accessible. Even with privacy settings, there are ways that people can get access to your information.

My recommendation is to educate yourself about privacy settings and how you can make your page as private as you want it to be [see further reading box on page 34]. Also, educate yourself about how the technology works and be mindful of the information you make available about yourself. Historically, psychology has talked a lot about the clinical implications of self-disclosure, but this is several orders of magnitude greater, because now anyone sitting in their home or library with access to a terminal can find out an enormous amount of information about you.

Facebook is a wonderful way to social network, to be part of a community. And of course psychologists are going to use this, as is every segment of the population. But psychologists have special ethical issues they need to think through to determine how this technology is going to affect their work.

These days, students are inviting professors to see their Facebook pages and professors are now privy to more information on their students’ lives than ever before. What’s your advice on this trend?

Psychologists should be mindful that whether teaching, conducting research, providing a clinical service or acting in an administrative capacity, they are in a professional role. Each role comes with its own unique expectations, and these expectations have ethical aspects. I would encourage a psychologist who’s considering whether to friend a student to think through how the request fits into the professional relationship, and to weigh the potential benefits and harms that could come from adding that dimension to the teaching relationship. Of course, the professor should also be informed about the school’s policy concerning interacting with students on social networking sites.

How about Googling clients — should you?

In certain circumstances, there may be a good reason to do a search of a client — there may be an issue of safety, for example. In certain kinds of assessments, it might be a matter of confirming information. But again, we always need to think about how this fits into the professional relationship, and what type of informed consent we’ve obtained. Curiosity about a client is not a clinically appropriate reason to do an Internet search. Let’s put it this way: If you know that your client plays in a soccer league, it would be a little odd if on Saturday afternoon you drove by the game to see how your client is doing. In the same way, if you’re doing a search, thinking, “What can I find out about this person?” that raises questions about the psychologist’s motives.

What about Twitter?

Again, you first want to think about what are you disclosing and what is the potential impact the disclosure could have on the clinical work. Also, if you are receiving Tweets from a client, how does that fit in with the treatment?

These questions are really interesting because they are pushing us to think clearly about the relationship between our professional and personal lives. We all have our own social communities and networks, but we also have to be aware about how we act and what we disclose in those domains, which are more accessible. Someone might say that this technology isn’t raising new questions, it’s raising old questions in different ways.

How about blogs?

Be aware that when you author a blog, you’re putting a lot of yourself into it. That’s why you’re doing it. So again, you need to be mindful of the impact it will have on your clinical work. It also depends on what the blog is about. For example, if you’re blogging about religion, politics or movies, in this day and age, some of your clients are going to read the material. If you are sharing your personal views on some important societal issue, be mindful of how that might affect the work you are doing.

When is the next Ethics Code due out and will it more specifically address social media?

The next revision hasn’t been scheduled, but if I had to guess, probably in the next two to three years, APA will begin the process of drafting the next code. I can say with a very high degree of confidence that when APA does draft the next code, the drafters will be very mindful of many issues being raised by social media.

It’s important to think about ethics from a developmental perspective. As our field evolves, new issues emerge and develop. Not all the questions about social media have crystallized yet. We have to make sure that we have a pretty good sense of the right questions and the right issues before we start setting down the rules. Part of that process is exploring where the potential harms to our clients are.

We are just defining the questions, issues, the risks of harm to the client and we’re going to have to let the process unfold. In the meantime, we have to be aware that these technologies are very powerful and far-reaching and bring with them wonderful benefits, but also potential harms. Stay tuned.

Further reading

APA’s Ethics Code

“What sites such as Facebook and Google know and whom they tell”

“Google and Facebook raise new issues for therapists and their clients”

“Price of Facebook privacy? Start clicking”



The Ethics of Online Privacy Protection

"When it comes to privacy and accountability, people always demand the former for themselves and the latter for everyone else." -- David Brin

"The principles of privacy and data protection must be balanced against additional societal values such as public health, national security and law enforcement, environmental protection, and economic efficiency." -- Omer Tene and Jules Polonetsky

Even if you believe that privacy is important and valuable, you may still agree that there are ways in which the collection of vast amounts of data can help promote the "common good". In an article entitled "Privacy in the Age of Big Data: A Time for Big Decisions,"[1] Omer Tene and Jules Polonetsky list some of big data's "big benefits": the analysis of vast amounts of data has enabled researchers to determine adverse side effects of drugs that may otherwise have gone unnoticed; to track (and respond to) the spread of diseases; to develop the "smart grid," which is designed to optimize energy use; to improve traffic control; etc. Other researchers are analyzing big data to gain insights into various aspects of human behavior.

Aside from such benefits, you may also feel that there are circumstances in which an invasion of privacy would be justified. For example, in some cases we allow law enforcement to collect and use certain information when they are investigating crimes or prosecuting alleged wrongdoers. We want the military to be able to thwart attacks against us; to do that, the military might need to invade some people's privacy in order to uncover terrorists or state actors that would harm us. We might support online tracking that would allow, say, the Centers for Disease Control to figure out which people may have been exposed to an infectious disease. We may even be willing to condone some violations of privacy by a school district seeking to uncover and stop an online bully.

In this context, we might consider a warning offered by Justice Brandeis in a famous dissenting opinion that he wrote in 1928, in Olmstead v. United States:

Experience should teach us to be most on our guard to protect liberty when the Government's purposes are beneficent. Men born to freedom are naturally alert to repel invasion of their liberty by evil-minded rulers. The greatest dangers to liberty lurk in insidious encroachment by men of zeal, well meaning but without understanding.

When we try to determine the type of privacy that we want protected, and the extent to which we want it protected, we—the people "born to freedom"—have to find a balance. That's no easy feat, as Eli Noam, director of the Columbia Institute for Tele-Information, explains:

Privacy is an interaction, in which the rights of different parties collide. A has a certain preference about the information he receives and lets out. B, on the other hand, may want to learn more about A, perhaps in order to protect herself. … [P]rivacy is an issue of control over information flows, with a much greater inherent complexity than a conventional "consumers versus business," or "citizens versus the state" analysis suggests.[2]

Because the striking of that balance may be damaging to some individuals or groups, and because it involves a choice between a "good" or "bad" alternative, or multiple "goods" and "bads," the process of finding that balance draws on the discipline of ethics. In evaluating alternative options, we may ask questions that have been distilled from various approaches to ethical decision-making.

As you evaluate new laws or rules proposed in the name of online privacy protection, new standards suggested by industries that profit from access to your online data, or new online social norms that develop over time, you could ask the following questions:

Would these measures respect the rights of all who have a stake in the outcome? Do some rights "trump" other rights, and, if so, which measure best respects the most important rights? This is the Rights Approach.

Would these measures treat people equally, or proportionately? This is the Justice Approach.

Taking into account, equally, everyone who would be affected by these measures, would these measures promote the most overall happiness, and cause the least overall harm and suffering? This is the Utilitarian Approach.

Would these measures best serve the community as a whole, not just some members? This is the Common Good Approach.

Would these measures lead people to develop the habits and character traits of a good person, the sort of person who is a moral example for others? This is the Virtue Approach.

Discussion Questions

Do all people have an equal need for privacy? Do some, who are more vulnerable in some way, need privacy more than others?

Do you believe that our legislators should pass new laws to protect privacy, given the new technologies and products that now collect information about us?

Should individuals bear the responsibility of protecting their own privacy? In your experience, do most people understand the way in which their information is accessed, collected, or otherwise used online?

[1] Tene, Omer, and Jules Polonetsky. "Privacy in the Age of Big Data: A Time for Big Decisions." February 2, 2012. 64 Stan. L. Rev. Online 63. https://www.stanfordlawreview.org/online/privacy-paradox-privacy-and-big-data/ (last visited June 28, 2012).

[2] Noam, Eli. "Privacy and Self-Regulation: Markets for Electronic Privacy." 1997. https://www.ntia.gov/report/1997/privacy-and-self-regulation-information-age (last visited June 28, 2012).

Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics.



Ethics and the Internet Presentation

This presentation covers internet ethics, internet conduct, ethics and research on the internet, ethics and business on the internet, ethics and politics on the internet, on-line education and ethics, and legal issues and ethics.

Internet Ethics

Internet ethics comprises various elements including:

  • Information;
  • Network services providers;

Ethics refers to the fundamental rights of others and the rules that govern how we should behave in relation to others when our behavior affects them.

The internet is mostly self-regulated by its users and by internet service providers. Generally, full control of internet use and misuse cannot be achieved, and regulation has not kept pace with internet technologies.

Recognizing that internet misuse was growing rapidly as the internet expanded, the Internet Advisory Board (IAB) issued a statement of policy in 1989 concerning the proper use of the internet. The statement was mainly a brief list of practices viewed as unethical and unacceptable, including: first, seeking to gain unauthorized access to the resources of the internet; second, disrupting the intended use of the internet; third, wasting resources (people, capacity, computers) through disruptive action; fourth, destroying the integrity of computer-based information; and fifth, compromising the privacy of other users (Stewart, Tittel, & Chapple, 2008, p. 681). These guidelines have been used in formulating most codes of ethics relating to computer and internet use. Codes of ethics are either specific or general in nature, and they regulate personal behavior.

Below are some of the issues arising from problems of ethics in relation to internet use: internet conduct, research on the internet, business on the internet, politics and the internet, on-line education, legal issues, and social issues.

Internet Conduct

  • Association of service providers;
  • Agreement between the user and the service provider;
  • Codes of conduct are informal codes;
  • Minimum set of netiquette guidelines.

There is no single body or universal set of rules or laws regulating acceptable behavior on the internet; instead, regulation is mainly effected by associations of service providers, who come up with acceptable use policies and codes of conduct, and by the application of certain laws to users and information (Kate and Robson, 1999). Acceptable use policies are generally contracts between users and the internet service provider specifying what is acceptable or unacceptable in relation to the network services.

Codes of conduct are guidelines regulating on-line behavior that have developed gradually as the internet has grown. They are mostly developed by on-line or other specific communities, which is why they are referred to as informal codes of internet use.

Various companies and groups adopt a minimum set of netiquette guidelines to regulate conduct on their networks. A variety of netiquette guides can be found on the internet, but one of the most frequently cited is the ten commandments of internet conduct from the Computer Ethics Institute, presented by Rinaldi (n.d.), which many groups and companies have used to develop their own netiquette guidelines:

  • Thou shalt not use a computer to harm other people.
  • Thou shalt not interfere with other people’s computer work.
  • Thou shalt not snoop around in other people’s files.
  • Thou shalt not use a computer to steal.
  • Thou shalt not use a computer to bear false witness.
  • Thou shalt not use or copy software for which you have not paid.
  • Thou shalt not use other people’s computer resources without authorization.
  • Thou shalt not appropriate other people’s intellectual output.
  • Thou shalt think about the social consequences of the program you write.
  • Thou shalt use a computer in ways that show consideration and respect.

Ethics and Research on the Internet

  • Social science research guidelines;
  • Informed consent;
  • Confidentiality;
  • Avoidance of harm.

Concerning research on the internet, especially social science research, most internet ethics are structured to take into consideration both the codes of conduct relating to the practice of social research and the codes of conduct that relate to behavior in on-line communities and groups. For example, major professional social science organizations, such as the American Sociological Association and the British Sociological Association, have developed guidelines for social research.

Other disciplines also have their own guidelines for internet research. These guidelines mainly address issues of privacy, informed consent, confidentiality, dignity, and avoidance of harm.

Ethics and Business on the Internet

  • E-business;
  • Protection of data;
  • Accountability;
  • Freedom from invasiveness;
  • Honesty and responsibility.

The emergence of e-business, enabled by e-commerce, has made it possible to conduct business over the internet through trading platforms such as eBay and to provide financial services and transactions online, thus enhancing on-line business, especially retail. For on-line businesses, several issues relating to internet ethics arise, including 'honesty and responsibility, accountability, privacy and confidentiality, protection of data (i.e. credit card numbers), freedom from invasiveness (i.e. so-called sticky websites that automatically track and retain customer contact and information), quality of the goods delivered, disclosure and reliability of information (i.e. the scandal with fake paintings sold on eBay), sources of goods, Internet economics vs. traditional economics, impacts of global Internet business, employment through the net (local and global telecommuting), web advertising, competition on the Internet (hacking into data, falsification of data), public information and financial disclosure (investor relations on the Internet), and others' (Warsaw University, 2001).

One of the major issues relating to on-line businesses is consumer privacy, with regard to the protection of customers' data and the release of their personal information to third parties. To safeguard their customers, the majority of on-line businesses post a privacy notice on their web pages.

Ethics and Politics on the Internet

  • Freedom and quality of assembly;
  • Public information;
  • Political parties;
  • Quality of political participation enabled;
  • Kind of information provided.

Internet ethics related to political issues mainly concern the impact of the worldwide information network on the freedom and quality of assembly, deliberation, community information and decision making, and the type of civic and public participation in political matters that the internet enables. The internet is a very valuable resource for all types of information, including public and political information. It can serve as a communication link between the government, the public, and political actors, removing or reducing the barriers encountered in these types of communication.

Political parties use the internet to post their manifestos and their stands and views on public issues, and to open political participation to the public, for example by responding to public commentary online. The quality of political participation can be gauged by the quality and accessibility of parties' home pages and the depth of the information contained in their websites. The kind of information provided can be evaluated on the basis of the interactivity offered to visitors and the type of information contained '(text only, pictures, photos of candidates or leaders, pre-digested information or actual texts, depth and extent of analysis, further links, possibility of original interpretation of information and for developing independent and critical perspectives and points of view, the degree of personalization of information–the "party line" vs. candidates' and politicians' individual views and personal voices, not to mention the degree of social responsibility expressed or implied in all of above)' (Warsaw University, 2001).

On-Line Education and Ethics

  • Interactivity;
  • Quality of educational experience;
  • Impact on the nature of learning;
  • Impact on the nature of teaching.

In using IT to offer on-line education, the standard of education should be maintained or even improved by taking into account the following considerations pertaining to education and the internet: interactivity, the quality of the educational experience offered, the impact on the nature of learning, the impact on the nature of teaching, privacy, and other aspects.

Legal Issues and Ethics

  • Intellectual property;
  • Reliability;
  • Internet crimes;
  • Freedom of expression;
  • Data protection.

The internet is legally recognized as a medium of communication and of electronic storage of information, which is why most websites carry privacy notices concerning the privacy and intellectual property of the information they contain, with emphasis on respecting that intellectual property and privacy in the use of the information.

There are also laws and regulations governing user groups, companies, and other internet users on matters relating to reliability, internet crimes, freedom of expression over the net, data security, and other matters, depending on the applicable country, organization, or group.

Conclusion

  • Ethics issues and internet are ever expanding;
  • Emerging frontiers.

Problems of ethics and the internet continue to increase every day, as internet use generates ever more specific issues and concerns. This situation is mainly driven by the rapid advancement of internet technologies and the fast-increasing global reach of the internet. The internet is generating all sorts of communication, creating new frontiers for education, business, politics, social interaction, and technological advancement in various fields.

References

Kate, R. and Robson, M. (1999). Your Place or Mine? Ethics, the Researcher, and the Internet. In John Armitage and Joanne Roberts, eds. Proceedings of the conference Exploring Cyber Society: Social, Political, Economic, and Cultural Issues . London: University of Northumbria.

Rinaldi, A. H. (n.d.). The Net User Guidelines and Netiquette. Web.

Stewart, J.M., Tittel, E. & Chapple, M. (2008). CISSP: Certified Information Systems Security Professional Study Guide. Ontario: John Wiley and Sons.




Computer and Information Ethics

Emma Stamm

Introduction

Computer and information ethics are related fields of practical philosophy which address the proper use of computing and information technology. This entry provides an overview of their history and major topics of interest, including those germane to emerging technological and social developments.

World War II and Cybernetics

The roots of computer ethics can be traced to the 1940s. It is a direct descendant of cybernetics, an interdisciplinary field of study which emerged during the Second World War. Throughout the 1940s and 1950s, several notable scholars pursued cybernetics research, including engineer Vannevar Bush (1890–1974), mathematician John von Neumann (1903–1957), physiologist Arturo Rosenblueth (1900–1970), mathematician Claude E. Shannon (1916–2001), and mathematician Norbert Wiener (1894–1964), who is credited as the field’s founder (Bynum 2008; Hamilton 2017). Wiener’s cybernetic theory originated with his work on the design of a new kind of...


Association for Computing Machinery (2018) ACM releases updated Code of Ethics. 17 July. Retrieved August 3, 2022, from https://www.acm.org/media-center/2018/july/acm-updates-code-of-ethics

[No author given] (2000) Who’s controlling cyberspace? A Harvard professor’s new book warns that the Internet is losing its independence to commercial interests. Computerworld. https://www.computerworld.com/article/2592657/who-s-controlling-cyberspace-.html

Lee TB (2012) Why bitcoin lives in a “legal gray area”. Ars Technica, 24 August. Retrieved August 3, 2022, from https://arstechnica.com/tech-policy/2012/08/why-bitcoin-lives-in-a-legal-gray-area/

Barger RN (2008) Computer ethics: a case-based approach. Cambridge University Press, New York


Baum D, Chodosh S (2017) There are two kinds of A.I., and the difference is important. Popular Science. https://productivityhub.org/2019/07/05/there-are-two-kinds-of-ai-and-the-difference-is-important

Bratton B (2022) The revenge of the real: politics for a post-pandemic world. Verso Books, New York


Buolamwini J, Gebru T (2018) Gender shades: intersectional accuracy disparities in commercial gender classification. In: Conference on Fairness, Accountability, and Transparency, Proceedings of Machine Learning Research 81:1–15

Bynum TW (2000) A very short history of computer ethics. The Newsletter of the American Philosophical Association

Bynum TW (2008) Norbert Wiener and the rise of information ethics. In: van den Hoven J, Weckert J (eds) Information technology and moral philosophy. Cambridge University Press, Cambridge

Bynum TW (2011) Creating the journal metaphilosophy. Metaphilosophy 42(3):186–190


Bynum TW, Floridi L (2012) The historical roots of information and computer ethics. In: The Cambridge handbook of information and computer ethics. Cambridge University Press, Cambridge, pp 20–38

Cocking D, Matthews S (2001) Unreal friends. Ethics Inf Technol 2:223–231

Coleman G (2014) Hackers. Culture Digitally. https://culturedigitally.org/2014/10/hackers-draft-digitalkeywords/

Eubanks V (2019) Automating inequality: how high-tech tools profile, police, and punish the poor. Picador, New York

Floridi L (1999) Information ethics: on the philosophical foundations of computer ethics. Ethics Inf Technol 1:37–56

Froehlich T (2004) A brief history of information ethics. Textos Universitaris de Biblioteconomia i Documentació No. 13. https://bid.ub.edu/13froel2.htm

Gotterbarn D (1991) Computer ethics: responsibility regained. National Forum: The Phi Beta Kappa J 71:26–31

Hamilton S (2017) The charismatic cultural life of cybernetics: Reading Norbert Wiener as visible scientist. Can J Commun 42(3):407–429

Hartmans A (2022) The iPhone at 15: how Apple’s iconic smartphone changed the world forever and how it’s evolved since 2007. Business Insider. Retrieved August 3, 2022, from https://www.businessinsider.com/apple-iphone-evolution-first-iphone-every-model-2019-12#iphone-3g-2008-2

Hauptman R (1988) Ethical challenges in librarianship. Oryx Press, Phoenix

Holm EA (2019) In defense of the black box: black box algorithms can be useful in science and engineering. Science. https://www.science.org/doi/10.1126/science.aax0162

Igo SE (2020) The known citizen: a history of privacy in modern America. Harvard University Press, Cambridge, MA

Johnson DG (1994) Computer ethics, 2nd edn. Cambridge University Press, New York

Johnson DG (2020) Engineering ethics: contemporary and enduring debates. Yale University Press

Leiner BM, Cerf VG, Clark DD, Kahn RE, Kleinrock L, Lynch DC, Postel J, Roberts LG, Wolff S (1997) Brief history of the Internet. The Internet Society. https://www.internetsociety.org/internet/history-internet/

Lyon D (2017) Surveillance culture: engagement, exposure, and ethics in digital modernity. Int J Commun 11:824–842

Maner W (1996) Is computer ethics unique? Sci Eng Ethics 2(2):137–154

Markoff J (2008) Joseph Weizenbaum, famed programmer, is dead at 85. The New York Times, 13 March. Retrieved August 3, 2022, from https://www.nytimes.com/2008/03/13/world/europe/13weizenbaum.html

Metzinger T (2013) Two principles for robot ethics. In: Hilgendorf E, Günther J-P (eds) Robotik und Gesetzgebung. Nomos, Baden-Baden, pp 263–302

Miller S, Weckert J (2000) Privacy, the workplace and the internet. J Bus Ethics 28:255–265

Moor JH (1985) What is computer ethics? Metaphilosophy 16:266–275. [Republished in Weckert, Computer Ethics ]

Nemorin S (2017) Post-panoptic pedagogies: the changing nature of school surveillance in the digital age. Surveill Soc 15(2):239

Nissenbaum H (1998) Protecting privacy in an information age: the problem of privacy in public. Law Philos 17:559–596

Parker DB (1968) Rules of ethics in information processing. Commun ACM 11(3):198

Schriffert D (2018) Teaching computer ethics. The Research Center on Values in Emerging Technology. https://rcvest.southernct.edu/teaching-computer-ethics/#computer-ethics-in-the-computer-science

Schünemann & Windwehr (2021) Towards a ‘gold standard for the world’? The European General Data Protection Regulation between supranational and national norm entrepreneurship. J Eur Integr 43(7):859–874. https://doi.org/10.1080/07036337.2020.1846032

Hirose S (1996) A code of conduct for robots coexisting with human beings. Robot Auton Syst 18:101–107

Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460

Vallor S (2018) Technology and the virtues: a philosophical guide to a future worth wanting. Oxford University Press, Oxford, UK

van Deursen A, van Dijk J (2020) The digital divide - an introduction. Blog of the University of Twente Centre for Digital Inclusion. https://www.utwente.nl/en/centrefordigitalinclusion/Blog/02-Digitale_Kloof/

Wallach W, Allen C (2010) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford, UK

Weizenbaum J (1972) On the impact of the computer on society: how does one insult a machine? Science 176:609–614. [Republished in Weckert, Computer Ethics ]

Weizenbaum J (1991) Computer power and human reason: from judgment to calculation. W.H. Freeman, New York

Wiener N (1950) The human use of human beings. Da Capo Press, New York

Wiener N (1969) Cybernetics: or control and communication in the animal and the machine. MIT Press, Cambridge, MA

Zuboff S (2020) You are now remotely controlled. The New York Times, 24 January. Retrieved August 3, 2022, from https://www.nytimes.com/2020/01/24/opinion/sunday/surveillance-capitalism.html


Ethics in the digital world: Where we are now and what’s next

Kate Gromova, Yaroslav Eferin


Will widespread adoption of emerging digital technologies such as the Internet of Things and artificial intelligence improve people’s lives? The answer appears to be an easy “yes.” The positive potential of data seems self-evident. Yet the issue is actively discussed across international summits and events: the agenda of the Global Technology Governance Summit 2021, for instance, is dedicated to questions around whether and how “data can work for all,” emphasizing trust and especially the ethics of data use. Not without reason: at least 50 countries are independently grappling with how to define ethical data use without violating people’s private space, personal data, and many other sensitive aspects of their lives.

Ethics goes online

What is ethics per se? Aristotle proposed that ethics is the study of human relations in their most perfect form. He called it the science of proper behavior. Aristotle claimed that ethics is the basis for creating an optimal model of fair human relations; ethics lie at the foundation of a society’s moral consciousness. They are the shared principles necessary for mutual understanding and harmonious relations.

Ethical principles have evolved many times over since the days of the ancient Greek philosophers and have been repeatedly rethought (e.g., hedonism, utilitarianism, relativism, etc.). Today we live in a digital world, and most of our relationships have moved online to chats, messengers, social media, and many other ways of online communication.  We do not see each other, but we do share our data; we do not talk to each other, but we give our opinions liberally. So how should these principles evolve for such an online, globalized world? And what might the process look like for identifying those principles?  

Digital chaos without ethics

2020, with its lockdowns, clearly demonstrated that we have plunged irrevocably into the digital world. As digital technologies become ever more deeply embedded in our lives, the need for a new, shared data ethos grows more urgent. Without shared principles, we risk exacerbating the biases already embedded in our current datasets. Just a few examples:

  • The common exclusion of women as test subjects in much medical research has left a gap in relevant data on women’s health. Heart disease, for example, has traditionally been thought of as a predominantly male disease, which has led to widespread misdiagnosis and underdiagnosis of heart disease in women.
  • A study of AI tools that authorities use to estimate the likelihood that a criminal will reoffend found that the algorithms produced different results for black and white people under the same conditions. This discriminatory effect has drawn sharp criticism of, and distrust toward, predictive policing. (A minimal check for this kind of disparity is sketched after this list.)
  • Amazon abandoned its AI hiring program because of its bias against women. The algorithm was trained on the résumés submitted for job postings over the previous ten years; because most of the applicants were men, it learned to prefer men and penalized features associated with women.
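The recidivism example comes down to a measurable disparity: the same model errs at different rates for different groups. Below is a minimal sketch of such a check in Python; the records and field names are invented for illustration and are not drawn from any real risk-assessment tool.

```python
# Minimal sketch: compare false positive rates across groups.
# All data and field names are hypothetical, for illustration only.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Per group: share of people who did NOT reoffend but were flagged high-risk."""
    wrongly_flagged = defaultdict(int)
    non_reoffenders = defaultdict(int)
    for r in records:
        if not r["reoffended"]:
            non_reoffenders[r["group"]] += 1
            if r["predicted_high_risk"]:
                wrongly_flagged[r["group"]] += 1
    return {g: wrongly_flagged[g] / n for g, n in non_reoffenders.items() if n}

records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": True},
]

# A large gap between groups is the kind of disparity described above.
print(false_positive_rate_by_group(records))  # {'A': 0.5, 'B': 0.0}
```

Note that deciding which error rates to equalize is itself an ethical choice: common fairness criteria are mutually incompatible in general, which is exactly why such decisions cannot be delegated to the code.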

These examples all feed distrust or outright rejection of potentially beneficial new technological solutions. What ethical principles can we use to address the flaws in technologies that amplify bias, profiling, and inequality? This question has driven significant growth of interest in data ethics over the last decade (Figures 1 and 2), and it is why many countries are now developing or adopting ethical principles, standards, or guidelines.

Figure 1. Data ethics concept, 2010–2021

Figure 2. AI ethics concept, 2010–2021

Guiding data ethics

Countries are taking wildly differing approaches to address data ethics. Even the definition of data ethics varies. Look, for example, at three countries—Germany, Canada, and South Korea—with differing geography, history, institutional and political arrangements, and traditions and culture.

Germany established a Data Ethics Commission in 2018 to provide recommendations for the Federal Government’s Strategy on Artificial Intelligence. The Commission declared that its operating principles were based on the Constitution, European values, and Germany’s “cultural and intellectual history.” Ethics, according to the Commission, should not begin with establishing boundaries; rather, when ethical issues are discussed early in the creation process, they can contribute significantly to design, promoting appropriate and beneficial applications of AI systems.

In Canada, the advancement of AI technologies and their use in public services has spurred a discussion about data ethics. The Government of Canada’s recommendations focus on public service officials and processes. It has provided guiding principles to ensure the ethical use of AI and developed a comprehensive Algorithmic Impact Assessment online tool to help government officials explore AI in a way that is “governed by clear values, ethics, and laws.” (A toy version of such a scoring questionnaire is sketched below.)
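For concreteness, the general shape of a questionnaire-driven impact score can be sketched in a few lines. The questions, weights, and thresholds below are invented for illustration; the real Canadian tool publishes its own questions and scoring.

```python
# Minimal sketch of a questionnaire-style impact score. Questions, weights,
# and thresholds are invented; the published Algorithmic Impact Assessment
# defines its own.

QUESTION_WEIGHTS = {
    "decides benefits or penalties for individuals": 3,
    "uses personal or sensitive data": 2,
    "operates without human review of outcomes": 3,
    "is easily explained to affected people": -1,  # a mitigation lowers risk
}

def impact_level(answers: dict) -> str:
    """Map yes/no answers to a coarse impact level (I = low ... IV = very high)."""
    score = sum(w for q, w in QUESTION_WEIGHTS.items() if answers.get(q))
    for threshold, level in ((7, "IV"), (5, "III"), (3, "II")):
        if score >= threshold:
            return level
    return "I"

answers = {
    "decides benefits or penalties for individuals": True,
    "uses personal or sensitive data": True,
    "operates without human review of outcomes": False,
    "is easily explained to affected people": True,
}
print(impact_level(answers))  # "II" on this toy scale (score 4)
```

The design point is that such a score only routes a project toward proportionate oversight; it does not by itself certify a system as ethical.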

The Korean Ministry of Science and ICT, in collaboration with the National Information Society Agency, released Ethics Guidelines for the Intelligent Information Society in 2018. These guidelines build on the earlier Robots Ethics Charter and call for developing AI and robots that do not have “antisocial” characteristics. Broadly, Korean ethical policies have focused on the adoption of robots into society, while emphasizing the need to balance protecting “human dignity” and “the common good.”

Do data ethics need a common approach?

The differences among these initiatives seem to be rooted in traditions, institutional arrangements, and many other cultural and historical factors. Germany emphasizes autonomous vehicles and presents a rather comprehensive view of ethics; Canada concentrates on guiding government officials; Korea approaches the questions through the prism of robots. Still, none of them clearly defines what data ethics is, and none is meant to have legal effect; rather, they stipulate principles for the information society. In our upcoming study, we intend to explore the reasons and rationale behind the different approaches countries take.

Discussion and debate on data and technology ethics will undoubtedly continue for many years to come as digital technologies develop and penetrate all aspects of human life. But the sooner we reach a consensus on key definitions, principles, and approaches, the more easily those debates can turn into real action. Data ethics matters equally to governments, businesses, and individuals, and it should be discussed openly; the process of discussion will itself serve as an awareness and knowledge-sharing mechanism.

Recall the Golden Rule of Morality: Do unto others as you would have them do unto you. We suggest keeping this in mind when we all go online.



Kate Gromova

Digital Development Consultant, Co-founder of Women in Digital Transformation

Yaroslav Eferin

Digital Development Consultant


Internet Ethics

The Internet is a complex socio-technical system in which moral issues arise as humans interact with online technologies through interfaces, algorithms, and hardware design. Internet ethics investigates ethical and societal aspects of information and communication technologies – specifically the Internet – that arise in the interaction among users, designers, online service providers, and policy. Central topics include privacy, surveillance, personalisation, autonomy, nudging, well-being, disinformation and misinformation, filter bubbles and echo chambers, and online bullying and harassment.

Ethics on the Internet Essay


In today's society, there are many ethical issues on the Internet. Some of the biggest issues and concerns are hacking and viruses, copyright infringement, spam, privacy, and cyberporn. Internet ethical issues affect a wide variety of individuals, and almost everyone today is affected in some way. Until recently, most computer users were not very concerned with questions of ethics and may not have been aware that something could be seen as an ethical issue, though this depends on each individual's position. Today, however, there is much concern about these issues, and some organizations are trying to get laws approved that protect individuals.

Many companies gather information and sell it to other companies and sites for their own use. This violates individuals' privacy and opens the door to spam. Spam is a significant and very frustrating problem. Statistics show that an estimated 30 million email messages are sent out each day, and about 30% of them are unsolicited commercial emails. In 1996, a court found that mass mailers don't have a constitutional right to obstruct someone's computer. At today's costs, bulk email is extremely cheap for businesses compared with the original route of paying postage. With 30 million emails going out every day, this is extremely likely to cause problems with the speed and capacity of many Internet Service Providers (ISPs), and many large ISPs have suffered major system outages as the result of massive junk mailings. It is believed that almost 95% of recipients don't want to receive these messages, which has led many junk emailers to use tricks to get recipients to open their messages by changing the text in the subject line. They may use "the information that you requested" or other descriptions that are likely to catch your eye.
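The subject-line tricks described here are aimed at exactly the kind of naive filter sketched below; the keyword list is invented for illustration and is not any real filter's rule set.

```python
# Minimal sketch of a naive keyword-based spam check.
# The keyword list is illustrative only, not a real filter's rules.
SPAM_KEYWORDS = {"free", "winner", "act now", "lowest price"}

def looks_like_spam(subject: str) -> bool:
    """Flag a message whose subject line contains a known spam keyword."""
    s = subject.lower()
    return any(keyword in s for keyword in SPAM_KEYWORDS)

print(looks_like_spam("FREE offer - act now!"))               # True
print(looks_like_spam("the information that you requested"))  # False: slips through
```

Innocuous-sounding subject lines sail straight past a fixed keyword list, which is partly why later filters moved to statistical scoring over many message features rather than a handful of hand-picked words.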


The Elie Wiesel Foundation Announces 2024 Winners of Prize in Ethics Essay Contest

Four college students from universities nationwide will be awarded scholarships totaling almost $20,000 for their exceptional essays on topical ethical issues

A new application cycle for the 2025 Elie Wiesel Prize in Ethics Essay Contest is now open

NEW YORK, Oct. 10, 2024 /PRNewswire/ -- Today, the Elie Wiesel Foundation, an organization founded by Nobel Peace Prize laureate and Holocaust survivor Elie Wiesel and his wife, Marion, announced this year's winners of the Elie Wiesel Prize in Ethics Essay Contest. The Foundation's scholarship initiative selected four college student winners for their remarkable essays analyzing relevant ethical issues facing our world.

The Prize in Ethics Essay Contest, established in 1989 by Professor Elie Wiesel and his wife, Marion Wiesel, is an annual competition that challenges college students to contemplate an ethical theme or situation. Many essays stem from a student's personal experience or introspection. Winners will be granted scholarships in varying amounts, totaling $20,000.

"Encouraging college students to reflect on their ethics with open-ended questions helps to mold courageous, moral thinking in a world that can be driven by false narratives and manipulative contexts," said Elisha Wiesel, the son of Elie and Marion Wiesel, who is the Foundation's Chairman. "We are committed to keeping my father's legacy alive; his commitment to education grounded in ethics echoes through this Contest, which he founded 35 years ago."

Chosen by a selection committee from hundreds of applicants, this year's winners are:

First Place Winner: Manu Sundaresan, University of Chicago. His essay, "Doing Time," examines the relationship between time, justice, and ethics within incarceration. Drawing on his experience leading writing workshops at Cook County Jail and on Emmanuel Levinas' philosophy, the author highlights how incarceration distorts time and erases individuality, critiquing the dehumanizing nature of the criminal justice system and urging a rethinking of justice and ethical responsibility.

He writes, "Time, as it multiplies and takes on a small infinity of words and glances and kindnesses, can become a feeling. It is what we do with that feeling that matters most," reflecting on the power of human interaction to give meaning to time, even within the confines of incarceration.

Second Place Winner: Danial Alkhoury, University of Texas at Austin. His essay, "Scattered Leaves: Piecing Life Back Together," follows the author's journey as a Syrian refugee, from a peaceful childhood in Damascus to the trauma of civil war and forced migration to the U.S. It explores the moral complexities of paying ransom for the author's kidnapped father, the challenges of rebuilding life in a new country, and the struggle with displacement and loss. He emphasizes resilience, community, and the ethical pursuit of renewal and justice.

Third Place Winner: Anonymous, University of Colorado at Denver. The third-place winner has chosen to remain anonymous because of the risk of transnational repression by the Iranian government, a decision made to help ensure their safety and protect their family from potential retaliation, harassment, or threats. The Elie Wiesel Foundation has a long history of championing dissidents of tyrannous regimes and honors the individual's decision to continue their work without fear of persecution.

Their submission, "The Bridges of Intersectionality and Fallacy: Unveiling Feminism's Global Paradox," reflects on Iran's 2022 protests following Mahsa Amini's death, critiquing Western feminism's lack of intersectionality and selective solidarity, while calling for a more inclusive feminist framework and exploring the complexities of personal activism, including silence as a form of resistance.

Honorable Mention: Atlas Chambers, Eckerd College. His essay, "Southern (dis)Comfort," considers his complicated love for the South, particularly Florida, highlighting the region's natural beauty, cultural challenges, and the resilience required to navigate its often misunderstood complexities.

In addition to scholarships, winners are awarded a trip to New York City for a seminar to discuss their essays and other ethical topics. This year's seminar will be led by award-winning writer and Contest Readers Committee member Michelle Fiordaliso. The day will end with a celebration of their achievement at the renowned Lotos Club.

Jury member, EWF Board Member, and long-time supporter of the Prize Dov Seidman, founder of The HOW Institute for Society and LRN, will host the students for an annual luncheon.

"I'm proud to partner with Marion and Elisha Wiesel and the entire Elie Wiesel Foundation for Humanity in awarding the Prize in Ethics. This remarkable group of student winners, who are already making a difference in the world by writing essays that prod the conscience and consider issues through an ethical lens, embody the hope that a new generation of moral leaders will rise to meet the challenges before us," said Seidman.

Additionally, the Foundation has opened submissions for the 2025 Elie Wiesel Prize in Ethics Essay Contest, accepting applications through December 30, 2024. The contest is open to all undergraduate students enrolled full-time for the Fall 2024 semester at accredited four-year colleges and universities. Interested students may apply directly via our submission site: Elie Wiesel Prize in Ethics Essay Contest 2025

About The Elie Wiesel Prize in Ethics Essay Contest: The Elie Wiesel Foundation Prize in Ethics Essay Contest encourages students to write thought-provoking personal essays that raise questions, single out issues, and offer rational arguments for ethical action. The contest is open to all undergraduate full-time students registered at accredited four-year colleges or universities in the United States. All submissions are judged anonymously. Winning essays present intensely personal stories, originality, imagination, and clear articulation, and convey genuine grappling with an ethical dilemma. For full details and guidelines: Prize in Ethics - Elie Wiesel Foundation

About The Elie Wiesel Foundation for Humanity: Elie Wiesel and his wife, Marion, established The Elie Wiesel Foundation soon after he was awarded the 1986 Nobel Prize for Peace. Now spearheaded by Marion and Elie's son, Elisha Wiesel, the Foundation seeks to carry on Elie Wiesel's legacy and spark ethical consciousness of human rights by investing in programs that promote moral leadership and real-world outcomes for victims of injustice. To learn more, visit: www.eliewieselfoundation.org

Media Contact Olivia Crvaric [email protected] 

View original content to download multimedia: https://www.prnewswire.com/news-releases/the-elie-wiesel-foundation-announces-2024-winners-of-prize-in-ethics-essay-contest-302272583.html

SOURCE The Elie Wiesel Foundation for Humanity



Computer and Information Ethics

In most countries of the world, the “information revolution” has altered many aspects of life significantly: commerce, employment, medicine, security, transportation, entertainment, and on and on. Consequently, information and communication technology (ICT) has affected – in both good ways and bad ways – community life, family life, human relationships, education, careers, freedom, and democracy (to name just a few examples). “Computer and information ethics”, in the present essay, is understood as that branch of applied ethics which studies and analyzes such social and ethical impacts of ICT.

The more specific term “computer ethics” has been used, in the past, in several different ways. For example, it has been used to refer to applications of traditional Western ethics theories like utilitarianism, Kantianism, or virtue ethics, to ethical cases that significantly involve computers and computer networks. “Computer ethics” also has been used to refer to a kind of professional ethics in which computer professionals apply codes of ethics and standards of good practice within their profession. In addition, names such as “cyberethics” and “Internet ethics” have been used to refer to computer ethics issues associated with the Internet.

During the past several decades, the robust and rapidly growing field of computer and information ethics has generated university courses, research professorships, research centers, conferences, workshops, professional organizations, curriculum materials, books and journals.

  • 1.1 A Cybernetic View of Human Nature
  • 1.2 Wiener’s Underlying Metaphysics
  • 1.3 Justice and Human Flourishing
  • 1.4 A Refutation of Ethical Relativism
  • 1.5 Methodology in Information Ethics
  • 2.1 The “Uniqueness Debate”
  • 2.2 An Agenda-Setting Textbook
  • 2.3 An Influential Computer Ethics Theory
  • 2.4 Computing and Human Values
  • 2.5 Professional Ethics and Computer Ethics
  • 3.1 Global Laws
  • 3.2 Global Cyberbusiness
  • 3.3 Global Education
  • 3.4 Information Rich and Information Poor
  • 4. A Metaphysical Foundation for Computer Ethics
  • 5. Exponential Growth
  • Bibliography (Papers and Books; Journals and Web Sites)
  • Related Entries

1. Founding Computer and Information Ethics

In the mid 1940s, innovative developments in science and philosophy led to the creation of a new branch of ethics that would later be called “computer ethics” or “information ethics”. The founder of this new philosophical field was the American scholar Norbert Wiener, a professor of mathematics and engineering at MIT. During the Second World War, together with colleagues in America and Great Britain, Wiener helped to develop electronic computers and other new and powerful information technologies. While engaged in this war effort, Wiener and colleagues created a new branch of applied science that Wiener named “cybernetics” (from the Greek word for the pilot of a ship). Even while the War was raging, Wiener foresaw enormous social and ethical implications of cybernetics combined with electronic computers. He predicted that, after the War, the world would undergo “a second industrial revolution” – an “automatic age” with “enormous potential for good and for evil” that would generate a staggering number of new ethical challenges and opportunities.

When the War ended, Wiener wrote the book Cybernetics (1948) in which he described his new branch of applied science and identified some social and ethical implications of electronic computers. Two years later he published The Human Use of Human Beings (1950), a book in which he explored a number of ethical issues that computer and information technology would likely generate. The issues that he identified in those two books, plus his later book God and Golem, Inc. (1963), included topics that are still important today: computers and security, computers and unemployment, responsibilities of computer professionals, computers for persons with disabilities, information networks and globalization, virtual communities, teleworking, merging of human bodies with machines, robot ethics, artificial intelligence, computers and religion, and a number of other subjects. (See Bynum 2000, 2004, 2005, 2008a, 2008b.)

Although he coined the name “cybernetics” for his new science, Wiener apparently did not see himself as also creating a new branch of ethics. As a result, he did not coin a name like “computer ethics” or “information ethics”. These terms came into use decades later. (See the discussion below.) In spite of this, Wiener’s three relevant books (1948, 1950, 1963) do lay down a powerful foundation, and do use an effective methodology, for today’s field of computer and information ethics. His thinking, however, was far ahead of other scholars; and, at the time, many people considered him to be an eccentric scientist who was engaging in flights of fantasy about ethics. Apparently, no one – not even Wiener himself – recognized the profound importance of his ethics achievements; and nearly two decades would pass before some of the social and ethical impacts of information technology, which Wiener had predicted in the late 1940s, would become obvious to other scholars and to the general public.

In The Human Use of Human Beings, Wiener explored some likely effects of information technology upon key human values like life, health, happiness, abilities, knowledge, freedom, security, and opportunities. The metaphysical ideas and analytical methods that he employed were so powerful and wide-ranging that they could be used effectively for identifying, analyzing and resolving social and ethical problems associated with all kinds of information technology, including, for example, computers and computer networks; radio, television and telephones; news media and journalism; even books and libraries. Because of the breadth of Wiener’s concerns and the applicability of his ideas and methods to every kind of information technology, the term “information ethics” is an apt name for the new field of ethics that he founded. As a result, the term “computer ethics”, as it is typically used today, names only a subfield of Wiener’s much broader concerns.

In laying down a foundation for information ethics, Wiener developed a cybernetic view of human nature and society, which led him to an ethically suggestive account of the purpose of a human life. Based upon this, he adopted “great principles of justice”, which he believed all societies ought to follow. These powerful ethical concepts enabled Wiener to analyze information ethics issues of all kinds.

Wiener’s cybernetic understanding of human nature stressed the physical structure of the human body and the remarkable potential for learning and creativity that human physiology makes possible. While explaining human intellectual potential, he regularly compared the human body to the physiology of less intelligent creatures like insects:

Cybernetics takes the view that the structure of the machine or of the organism is an index of the performance that may be expected from it. The fact that the mechanical rigidity of the insect is such as to limit its intelligence while the mechanical fluidity of the human being provides for his almost indefinite intellectual expansion is highly relevant to the point of view of this book. … man’s advantage over the rest of nature is that he has the physiological and hence the intellectual equipment to adapt himself to radical changes in his environment. The human species is strong only insofar as it takes advantage of the innate, adaptive, learning faculties that its physiological structure makes possible. (Wiener 1954, pp. 57–58, italics in the original)

Given the physiology of human beings, it is possible for them to take in a wide diversity of information from the external world, access information about conditions and events within their own bodies, and process all that information in ways that constitute reasoning, calculating, wondering, deliberating, deciding and many other intellectual activities. Wiener concluded that the purpose of a human life is to flourish as the kind of information-processing organisms that humans naturally are:

I wish to show that the human individual, capable of vast learning and study, which may occupy almost half of his life, is physically equipped, as the ant is not, for this capacity. Variety and possibility are inherent in the human sensorium – and are indeed the key to man’s most noble flights – because variety and possibility belong to the very structure of the human organism. (Wiener 1954, pp. 51–52)

Wiener’s account of human nature presupposed a metaphysical view of the universe that considers the world and all the entities within it, including humans, to be combinations of matter-energy and information. Everything in the world is a mixture of both of these, and thinking, according to Wiener, is actually a kind of information processing. Consequently, the brain

does not secrete thought “as the liver does bile”, as the earlier materialists claimed, nor does it put it out in the form of energy, as the muscle puts out its activity. Information is information, not matter or energy. No materialism which does not admit this can survive at the present day. (Wiener 1948, p. 155)

According to Wiener’s metaphysical view, everything in the universe comes into existence, persists, and then disappears because of the continuous mixing and mingling of information and matter-energy. Living organisms, including human beings, are actually patterns of information that persist through an ongoing exchange of matter-energy. Thus, he says of human beings,

We are but whirlpools in a river of ever-flowing water. We are not stuff that abides, but patterns that perpetuate themselves. (Wiener 1954, p. 96) … The individuality of the body is that of a flame…of a form rather than of a bit of substance. (Wiener 1954, p. 102)

Using the language of today’s “information age” (see, for example, Lloyd 2006 and Vedral 2010) we would say that, according to Wiener, human beings are “information objects”; and their intellectual capacities, as well as their personal identities, are dependent upon persisting patterns of information and information processing within the body, rather than on specific bits of matter-energy.

According to Wiener, for human beings to flourish they must be free to engage in creative and flexible actions and thereby maximize their full potential as intelligent, decision-making beings in charge of their own lives. This is the purpose of a human life. Because people have various levels and kinds of talent and possibility, however, one person’s achievements will be different from those of others. It is possible, nevertheless, to lead a good human life – to flourish – in an indefinitely large number of ways; for example, as a diplomat, scientist, teacher, nurse, doctor, soldier, housewife, midwife, musician, tradesman, artisan, and so on.

This understanding of the purpose of a human life led Wiener to adopt what he called “great principles of justice” upon which society should be built. He believed that adherence to those principles by a society would maximize a person’s ability to flourish through variety and flexibility of human action. Although Wiener stated his “great principles”, he did not assign names to them. For purposes of easy reference, let us call them “The Principle of Freedom”, “The Principle of Equality” and “The Principle of Benevolence”. Using Wiener’s own words yields the following list of “great principles” (1954, pp. 105–106):

The Principle of Freedom Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”

The Principle of Equality Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.”

The Principle of Benevolence Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”

Given Wiener’s cybernetic account of human nature and society, it follows that people are fundamentally social beings, and that they can reach their full potential only when they are part of a community of similar beings. Society, therefore, is essential to a good human life. Despotic societies, however, actually stifle human freedom; and indeed they violate all three of the “great principles of justice”. For this reason, Wiener explicitly adopted a fourth principle of justice to assure that the first three would not be violated. Let us call this additional principle “The Principle of Minimum Infringement of Freedom”:

The Principle of Minimum Infringement of Freedom “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom” (1954, p. 106).

If one grants Wiener’s account of a good society and of human nature, it follows that a wide diversity of cultures – with different customs, languages, religions, values and practices – could provide a context in which humans can flourish. Sometimes ethical relativists use the existence of different cultures as proof that there is not – and could not be – an underlying ethical foundation for societies all around the globe. In response to such relativism, Wiener could argue that, given his understanding of human nature and the purpose of a human life, we can embrace and welcome a rich variety of cultures and practices while still advocating adherence to “the great principles of justice”. Those principles offer a cross-cultural foundation for ethics , even though they leave room for immense cultural diversity. The one restriction that Wiener would require in any society is that it must provide a context where humans can realize their full potential as sophisticated information-processing agents, making decisions and choices, and thereby taking responsibility for their own lives. Wiener believed that this is possible only where significant freedom, equality and human compassion prevail.

Because Wiener did not think of himself as creating a new branch of ethics, he did not provide metaphilosophical comments about what he was doing while analyzing an information ethics issue or case. Instead, he plunged directly into his analyses. Consequently, if we want to know about Wiener’s method of analysis, we need to observe what he does, rather than look for any metaphilosophical commentary upon his own procedures.

When observing Wiener’s way of analyzing information ethics issues and trying to resolve them, we find – for example, in The Human Use of Human Beings – that he tries to assimilate new cases by applying already existing, ethically acceptable laws, rules, and practices. In any given society, there is a network of existing practices, laws, rules and principles that govern human behavior within that society. These “policies” – to borrow a helpful word from Moor (1985) – constitute a “received policy cluster” (see Bynum and Schubert 1997); and in a reasonably just society, they can serve as a good starting point for developing an answer to any information ethics question. Wiener’s methodology is to combine the “received policy cluster” of one’s society with Wiener’s account of human nature, plus his “great principles of justice”, plus critical skills in clarifying vague or ambiguous language. In this way, he achieved a very effective method for analyzing information ethics issues. Borrowing from Moor’s later, and very apt, description of computer ethics methodology (Moor 1985), we can describe Wiener’s methodology as follows (a toy encoding of these steps, purely illustrative, appears after the list):

  • Identify an ethical question or case regarding the integration of information technology into society. Typically this focuses upon technology-generated possibilities that could affect (or are already affecting) life, health, security, happiness, freedom, knowledge, opportunities, or other key human values.
  • Clarify any ambiguous or vague ideas or principles that may apply to the case or the issue in question.
  • If possible, apply already existing, ethically acceptable principles, laws, rules, and practices (the “received policy cluster”) that govern human behavior in the given society.
  • If ethically acceptable precedents, traditions and policies are insufficient to settle the question or deal with the case, use the purpose of a human life plus the great principles of justice to find a solution that fits as well as possible into the ethical traditions of the given society.

In an essentially just society – that is, in a society where the “received policy cluster” is reasonably just – this method of analyzing and resolving information ethics issues will likely result in ethically good solutions that can be assimilated into the society.
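To make the shape of this procedure vivid, here is a minimal sketch in Python – an illustration only, not anything Wiener or Moor wrote; all names in it (Policy, resolve_issue, and the sample policies) are hypothetical. It encodes just the core control flow: try the received policy cluster first, and fall back on the great principles of justice only when no just precedent covers the case.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Policy:
    name: str
    covers: Callable[[str], bool]  # does this received policy apply to the case?
    is_just: bool                  # does it respect the great principles of justice?

def resolve_issue(case: str,
                  received_cluster: List[Policy],
                  derive_from_principles: Callable[[str], Policy]) -> Policy:
    """Steps 3 and 4 of the method above; steps 1 and 2 (identifying and
    clarifying the question) are assumed already done, so `case` is a
    clearly stated question."""
    # Step 3: apply an existing, ethically acceptable policy if one covers the case.
    for policy in received_cluster:
        if policy.is_just and policy.covers(case):
            return policy
    # Step 4: otherwise, derive a new policy from the purpose of a human life
    # plus the great principles of justice.
    return derive_from_principles(case)

# Usage: a society with a received policy about postal privacy is asked a
# new question about e-mail; the old policy is assimilated by analogy.
cluster = [Policy("sealed mail must not be read by third parties",
                  covers=lambda c: "mail" in c, is_just=True)]
answer = resolve_issue("may employers read their employees' e-mail?",
                       cluster,
                       lambda c: Policy("derived from principles: " + c,
                                        lambda _: True, True))
print(answer.name)  # -> sealed mail must not be read by third parties
```

The design point the sketch makes is simply that precedent does most of the work in a reasonably just society, and first-principles reasoning is the fallback, not the default.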

Note that this way of doing information ethics does not require the expertise of a trained philosopher (although such expertise might prove to be helpful in many situations). Any adult who functions successfully in a reasonably just society is likely to be familiar with the existing customs, practices, rules and laws that govern a person’s behavior in that society and enable one to tell whether a proposed action or policy would be accepted as ethical. So those who must cope with the introduction of new information technology – whether they are computer professionals, business people, workers, teachers, parents, public-policy makers, or others – can and should engage in information ethics by helping to integrate new information technology into society in an ethically acceptable way. Information ethics, understood in this very broad sense, is too important to be left only to information professionals or to philosophers. Wiener’s information ethics interests, ideas and methods were very broad, covering not only topics in the specific field of “computer ethics”, as we would call it today, but also issues in related areas that, today, are called “agent ethics” (see, for example, Floridi 2013b), “Internet ethics” (Cavalier 2005), and “nanotechnology ethics” (Weckert 2002). The purview of Wiener’s ideas and methods is even broad enough to encompass subfields like journalism ethics, library ethics, and the ethics of bioengineering.

Even in the late 1940s, Wiener made it clear that, on his view, the integration into society of the newly invented computing and information technology would lead to the remaking of society – to “the second industrial revolution” – “the automatic age”. It would affect every walk of life, and would be a multi-faceted, on-going process requiring decades of effort. In Wiener’s own words, the new information technology had placed human beings “in the presence of another social potentiality of unheard-of importance for good and for evil” (1948, p. 27). However, because he did not think of himself as creating a new branch of ethics, Wiener did not coin names, such as “computer ethics” or “information ethics”, to describe what he was doing. These terms – beginning with “computer ethics” – came into common use years later, starting in the mid-1970s with the work of Walter Maner (see Maner 1980).

Today, the “information age” that Wiener predicted more than half a century ago has come into existence; and the metaphysical and scientific foundation for information ethics that he laid down continues to provide insight and effective guidance for understanding and resolving ethical challenges engendered by information technologies of all kinds.

2. Defining Computer Ethics

In 1976, nearly three decades after the publication of Wiener’s book Cybernetics, Walter Maner noticed that the ethical questions and problems considered in his Medical Ethics course at Old Dominion University often became more complicated or significantly altered when computers got involved. Sometimes the addition of computers, it seemed to Maner, actually generated wholly new ethics problems that would not have existed if computers had not been invented. He concluded that there should be a new branch of applied ethics similar to already existing fields like medical ethics and business ethics. After considering the name “information ethics”, he decided instead to call the proposed new field “computer ethics”.[1] (At that time, Maner did not know about the computer ethics works of Norbert Wiener.) He defined the proposed new field as one that studies ethical problems “aggravated, transformed or created by computer technology”. He developed an experimental computer ethics course designed primarily for students in university-level computer science programs. His course was a success, and students at his university wanted him to teach it regularly. He complied with their wishes and also created, in 1978, a “starter kit” on teaching computer ethics, which he prepared for dissemination to attendees of workshops that he ran and speeches that he gave at philosophy and computer science conferences in America. In 1980, Helvetia Press and the National Information and Resource Center on Teaching Philosophy published Maner’s computer ethics “starter kit” as a monograph (Maner 1980). It contained curriculum materials and pedagogical advice for university teachers. It also included a rationale for offering such a course in a university, suggested course descriptions for university catalogs, a list of course objectives, teaching tips, and discussions of topics like privacy and confidentiality, computer crime, computer decisions, technological dependence and professional codes of ethics. During the early 1980s, Maner’s Starter Kit was widely disseminated by Helvetia Press to colleges and universities in America and elsewhere. Meanwhile Maner continued to conduct workshops and teach courses in computer ethics. As a result, a number of scholars, especially philosophers and computer scientists, were introduced to computer ethics because of Maner’s trailblazing efforts.

While Maner was developing his new computer ethics course in the mid-to-late 1970s, a colleague of his in the Philosophy Department at Old Dominion University, Deborah Johnson, became interested in his proposed new field. She was especially interested in Maner’s view that computers generate wholly new ethical problems, for she did not believe that this was true. As a result, Maner and Johnson began discussing ethics cases that allegedly involved new problems brought about by computers. In these discussions, Johnson granted that computers did indeed transform old ethics problems in interesting and important ways – that is, “give them a new twist” – but she did not agree that computers generated ethically unique problems that had never been seen before. The resulting Maner-Johnson discussion initiated a fruitful series of comments and publications on the nature and uniqueness of computer ethics – a series of scholarly exchanges that started with Maner and Johnson and later spread to other scholars. The following passage, from Maner’s ETHICOMP95 keynote address, drew a number of other people into the discussion:

I have tried to show that there are issues and problems that are unique to computer ethics. For all of these issues, there was an essential involvement of computing technology. Except for this technology, these issues would not have arisen, or would not have arisen in their highly altered form. The failure to find satisfactory non-computer analogies testifies to the uniqueness of these issues. The lack of an adequate analogy, in turn, has interesting moral consequences. Normally, when we confront unfamiliar ethical problems, we use analogies to build conceptual bridges to similar situations we have encountered in the past. Then we try to transfer moral intuitions across the bridge, from the analog case to our current situation. Lack of an effective analogy forces us to discover new moral values, formulate new moral principles, develop new policies, and find new ways to think about the issues presented to us. (Maner 1996, p. 152)

Over the decade that followed the publication of this provocative passage, the extended “uniqueness debate” led to a number of useful contributions to computer and information ethics. (For some example publications, see Johnson 1985, 1994, 1999, 2001; Maner 1980, 1996, 1999; Gorniak-Kocikowska 1996; Tavani 2002, 2005; Himma 2003; Floridi and Sanders 2004; Mather 2005; and Bynum 2006, 2007.)

By the early 1980s, Johnson had joined the staff of Rensselaer Polytechnic Institute and had secured a grant to prepare a set of teaching materials – pedagogical modules concerning computer ethics – that turned out to be very successful. She incorporated them into a textbook, Computer Ethics, which was published in 1985 (Johnson 1985). On page 1, she noted that computers “pose new versions of standard moral problems and moral dilemmas, exacerbating the old problems, and forcing us to apply ordinary moral norms in uncharted realms.” She did not grant Maner’s claim, however, that computers create wholly new ethical problems. Instead, she described computer ethics issues as old ethical problems that are “given a new twist” by computer technology.

Johnson’s book Computer Ethics was the first major textbook in the field, and it quickly became the primary text used in computer ethics courses offered at universities in English-speaking countries. For more than a decade, her textbook set the computer ethics research agenda on topics such as ownership of software and intellectual property, computing and privacy, responsibilities of computer professionals, and fair distribution of technology and human power. In later editions (1994, 2001, 2009), Johnson added new ethical topics like “hacking” into people’s computers without their permission, computer technology for persons with disabilities, and ethics on the Internet.

Also in later editions of Computer Ethics, Johnson continued the “uniqueness-debate” discussion, noting for example that new information technologies provide new ways to “instrument” human actions. Because of this, she agreed with Maner that new specific ethics questions had been generated by computer technology – for example, “Should ownership of software be protected by law?” or “Do huge databases of personal information threaten privacy?” – but she argued that such questions are merely “new species of old moral issues”, such as protection of human privacy or ownership of intellectual property. They are not, she insisted, wholly new ethics problems requiring additions to traditional ethical theories, as Maner had claimed (Maner 1996).

The year 1985 was a “watershed year” in the history of computer ethics, not only because of the appearance of Johnson’s agenda-setting textbook, but also because James Moor’s classic paper, “What Is Computer Ethics?”, was published in a special computer-ethics issue of the journal Metaphilosophy. There Moor provided an account of the nature of computer ethics that was broader and more ambitious than the definitions of Maner or Johnson. He went beyond descriptions and examples of computer ethics problems by offering an explanation of why computing technology raises so many ethical questions compared to other kinds of technology. Moor’s explanation of the revolutionary power of computer technology was that computers are “logically malleable”:

Computers are logically malleable in that they can be shaped and molded to do any activity that can be characterized in terms of inputs, outputs and connecting logical operations … . Because logic applies everywhere, the potential applications of computer technology appear limitless. The computer is the nearest thing we have to a universal tool. Indeed, the limits of computers are largely the limits of our own creativity. (Moor, 1985, 269)

The logical malleability of computer technology, said Moor, makes it possible for people to do a vast number of things that they were not able to do before. Since no one could do them before, the question may never have arisen as to whether one ought to do them. In addition, because they could not be done before, perhaps no laws or standards of good practice or specific ethical rules had ever been established to govern them. Moor called such situations “policy vacuums”, and some of those vacuums might generate “conceptual muddles”:

A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, that is, formulate policies to guide our actions … . One difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis that provides a coherent conceptual framework within which to formulate a policy for action. (Moor, 1985, 266)

In the late 1980s, Moor’s “policy vacuum” explanation of the need for computer ethics and his account of the revolutionary “logical malleability” of computer technology quickly became very influential among a growing number of computer ethics scholars. He added further ideas in the 1990s, including the important notion of core human values: according to Moor, some human values – such as life, health, happiness, security, resources, opportunities, and knowledge – are so important to the continued survival of any community that essentially all communities do value them. Indeed, if a community did not value the “core values”, it soon would cease to exist. Moor used “core values” to examine computer ethics topics like privacy and security (Moor 1997), and to add an account of justice, which he called “just consequentialism” (Moor, 1999), a theory that combines “core values” and consequentialism with Bernard Gert’s deontological notion of “moral impartiality” using “the blindfold of justice” (Gert, 1998).

Moor’s approach to computer ethics is a practical theory that provides a broad perspective on the nature of the “information revolution”. By using the notions of “logical malleability”, “policy vacuums”, “conceptual muddles”, “core values” and “just consequentialism”, he provides the following problem-solving method:

  • Identify a policy vacuum generated by computing technology.
  • Eliminate any conceptual muddles.
  • Use the core values and the ethical resources of just consequentialism to revise existing – but inadequate – policies, or else to create new policies that justly eliminate the vacuum and resolve the original ethical issue.

The third step is accomplished by combining deontology and consequentialism – which traditionally have been considered incompatible rival ethics theories – to achieve the following practical results:

If the blindfold of justice is applied to [suggested] computing policies, some policies will be regarded as unjust by all rational, impartial people, some policies will be regarded as just by all rational, impartial people, and some will be in dispute. This approach is good enough to provide just constraints on consequentialism. We first require that all computing policies pass the impartiality test. Clearly, our computing policies should not be among those that every rational, impartial person would regard as unjust. Then we can further select policies by looking at their beneficial consequences. We are not ethically required to select policies with the best possible outcomes, but we can assess the merits of the various policies using consequentialist considerations and we may select very good ones from those that are just. (Moor, 1999, 68)
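Moor’s two-stage procedure can be pictured as a filter followed by a ranking. The Python sketch below is a toy rendering under that reading – not Moor’s own formalism; the class, the scores, and the example policies are all hypothetical. It makes the impartiality test a hard constraint, and uses consequentialist scores only to choose among the policies that survive it.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CandidatePolicy:
    name: str
    rejected_by_all_impartial_observers: bool  # "unjust" in Moor's strong sense
    expected_benefit: float                    # stipulated consequentialist score

def just_consequentialist_choice(candidates: List[CandidatePolicy]) -> CandidatePolicy:
    # Stage 1: the "blindfold of justice" as a hard constraint: discard any
    # policy that every rational, impartial person would regard as unjust.
    just_enough = [c for c in candidates
                   if not c.rejected_by_all_impartial_observers]
    # Stage 2: among the surviving (just or disputed) policies, prefer better
    # consequences.  Moor does not require the single best outcome, only a
    # very good one; max() is used here purely for simplicity.
    return max(just_enough, key=lambda c: c.expected_benefit)

# Usage: filling a hypothetical policy vacuum about workplace e-mail monitoring.
candidates = [
    CandidatePolicy("secret monitoring of all employee e-mail", True, 5.0),
    CandidatePolicy("no monitoring at all", False, 2.0),
    CandidatePolicy("disclosed, narrowly limited monitoring", False, 4.0),
]
print(just_consequentialist_choice(candidates).name)
# -> disclosed, narrowly limited monitoring
```

Notice how the two theories are combined rather than blended: deontological impartiality prunes the option space, and consequentialist assessment ranks what remains.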

Beginning with the computer ethics works of Norbert Wiener (1948, 1950, 1963), a common thread has run through much of the history of computer ethics; namely, concern for protecting and advancing central human values, such as life, health, security, happiness, freedom, knowledge, resources, power and opportunity. Thus, most of the specific issues that Wiener dealt with are cases of defending or advancing such values. For example, by working to prevent massive unemployment caused by robotic factories, Wiener tried to preserve security, resources and opportunities for factory workers. Similarly, by arguing against the use of decision-making war-game machines, Wiener tried to diminish threats to security and peace.

This “human-values approach” to computer ethics has been very fruitful. It has served, for example, as an organizing theme for major computer-ethics conferences, such as the 1991 National Conference on Computing and Values at Southern Connecticut State University (see Section 4 below), which was devoted to the impacts of computing upon security, property, privacy, knowledge, freedom and opportunities. In the late 1990s, a similar approach to computer ethics, called “value-sensitive computer design”, emerged based upon the insight that potential computer-ethics problems can be avoided, while new technology is under development, by anticipating possible harm to human values and designing new technology from the very beginning in ways that prevent such harm. (See, for example, Brey, 2001, 2012; Friedman, 1997; Friedman and Nissenbaum, 1996; Introna, 2005a; Introna and Nissenbaum, 2000; Flanagan, et al., 2008.)

In the early 1990s, a different emphasis within computer ethics was advocated by Donald Gotterbarn. He believed that computer ethics should be seen as a professional ethics devoted to the development and advancement of standards of good practice and codes of conduct for computing professionals. Thus, in 1991, in the article “Computer Ethics: Responsibility Regained”, Gotterbarn said:

There is little attention paid to the domain of professional ethics – the values that guide the day-to-day activities of computing professionals in their role as professionals. By computing professional I mean anyone involved in the design and development of computer artifacts. … The ethical decisions made during the development of these artifacts have a direct relationship to many of the issues discussed under the broader concept of computer ethics. (Gotterbarn, 1991)

Throughout the 1990s, with this aspect of computer ethics in mind, Gotterbarn worked with other professional-ethics advocates (for example, Keith Miller, Dianne Martin, Chuck Huff and Simon Rogerson) in a variety of projects to advance professional responsibility among computer practitioners. Even before 1991, Gotterbarn had been part of a committee of the ACM (Association for Computing Machinery) to create the third version of that organization’s “Code of Ethics and Professional Conduct” (adopted by the ACM in 1992; see Anderson, et al., 1993). Later, Gotterbarn and colleagues in the ACM and the Computer Society of the IEEE (Institute of Electrical and Electronics Engineers) developed licensing standards for software engineers. In addition, Gotterbarn headed a joint taskforce of the IEEE and ACM to create the “Software Engineering Code of Ethics and Professional Practice” (adopted by those organizations in 1999; see Gotterbarn, Miller and Rogerson, 1997).

In the late 1990s, Gotterbarn created the Software Engineering Ethics Research Institute (SEERI) at East Tennessee State University (see http://seeri.etsu.edu/); and in the early 2000s, together with Simon Rogerson, he developed a computer program called SoDIS (Software Development Impact Statements) to assist individuals, companies and organizations in the preparation of ethical “stakeholder analyses” for determining likely ethical impacts of software development projects (Gotterbarn and Rogerson, 2005). These and many other projects focused attention upon professional responsibility and advanced the professionalization and ethical maturation of computing practitioners. (See the bibliography below for works by R. Anderson, D. Gotterbarn, C. Huff, C. D. Martin, K. Miller, and S. Rogerson.)

3. Globalization

In 1995, in her ETHICOMP95 presentation “The Computer Revolution and the Problem of Global Ethics”, Krystyna Górniak-Kocikowska made a startling prediction (see Górniak, 1996). She argued that computer ethics eventually will evolve into a global ethic applicable in every culture on earth. According to this “Górniak hypothesis”, regional ethical theories like Europe’s Benthamite and Kantian systems, as well as the diverse ethical systems embedded in other cultures of the world, all derive from “local” histories and customs and are unlikely to be applicable world-wide. Computer and information ethics, on the other hand, Górniak argued, has the potential to provide a global ethic suitable for the Information Age:

  • a new ethical theory is likely to emerge from computer ethics in response to the computer revolution. The newly emerging field of information ethics, therefore, is much more important than even its founders and advocates believe. (p. 177)
  • The very nature of the Computer Revolution indicates that the ethic of the future will have a global character. It will be global in a spatial sense, since it will encompass the entire globe. It will also be global in the sense that it will address the totality of human actions and relations. (p. 179)
  • Computers do not know borders. Computer networks … have a truly global character. Hence, when we are talking about computer ethics, we are talking about the emerging global ethic. (p. 186)
  • the rules of computer ethics, no matter how well thought through, will be ineffective unless respected by the vast majority of or maybe even all computer users. … In other words, computer ethics will become universal, it will be a global ethic. (p. 187)

The provocative “Górniak hypothesis” was a significant contribution to the ongoing “uniqueness debate”, and it reinforced Maner’s claim – which he made at the same ETHICOMP95 conference in his keynote address – that information technology “forces us to discover new moral values, formulate new moral principles, develop new policies, and find new ways to think about the issues presented to us.” (Maner 1996, p. 152) Górniak did not speculate about the globally relevant concepts and principles that would evolve from information ethics. She merely predicted that such a theory would emerge over time because of the global nature of the Internet and the resulting ethics conversation among all the cultures of the world.

Górniak may well be right. Computer ethics today appears to be evolving into a broader and even more important field, which might reasonably be called “global information ethics”. Global networks, especially the Internet, are connecting people all over the earth. For the first time in history, efforts to develop mutually agreed standards of conduct, and efforts to advance and defend human values, are being made in a truly global context. So, for the first time in the history of the earth, ethics and values will be debated and transformed in a context that is not limited to a particular geographic region, or constrained by a specific religion or culture. This could be one of the most important social developments in history (Bynum 2006; Floridi 2014). Consider just a few of the global issues:

If computer users in the United States, for example, wish to protect their freedom of speech on the Internet, whose laws apply? Two hundred or more countries are interconnected by the Internet, so the United States Constitution (with its First Amendment protection of freedom of speech) is just a “local law” on the Internet – it does not apply to the rest of the world. How can issues like freedom of speech, control of “pornography”, protection of intellectual property, invasions of privacy, and many others be governed by law when so many countries are involved? (Lessig 2004) If a citizen in a European country, for example, has Internet dealings with someone in a far-away land, and the government of that country considers those dealings to be illegal, can the European be tried by courts in the far-away country?

In recent years, there has been a rapid expansion of global “cyberbusiness”. Nations with appropriate technological infrastructure already in place have enjoyed resulting economic benefits, while the rest of the world has lagged behind. What will be the political and economic fallout from this inequality? In addition, will accepted business practices in one part of the world be perceived as “cheating” or “fraud” in other parts of the world? Will a few wealthy nations widen the already big gap between the rich and the poor? Will political and even military confrontations emerge?

If inexpensive access to a global information net is provided to rich and poor alike – to poverty-stricken people in ghettos, to poor nations in the “underdeveloped world”, etc. – for the first time in history, nearly everyone on earth will have access to daily news from a free press; to texts, documents and art works from great libraries and museums of the world; to political, religious and social practices of peoples everywhere. What will be the impact of this sudden and profound “global education” upon political dictatorships, isolated communities, coherent cultures, religious practices, etc.? As great universities of the world begin to offer degrees and knowledge modules via the Internet, will “lesser” universities be damaged or even forced out of business?

The gap between rich and poor nations, and even between rich and poor citizens in industrialized countries, is already disturbingly wide. As educational opportunities, business and employment opportunities, medical services and many other necessities of life move more and more into cyberspace, will gaps between the rich and the poor become even worse?

Important recent developments, which began after 1995, appear to be confirming Górniak’s hypothesis – in particular, the metaphysical information ethics theory of Luciano Floridi (see, for example, Floridi, 1999, 2005a, 2008, 2013b) and the “Flourishing Ethics” theory of the present author, which combines ideas from Aristotle, Wiener, Moor and Floridi (see Bynum, 2006).

Floridi, in developing his information ethics theory (henceforth FIE),[2] argued that the purview of computer ethics – indeed of ethics in general – should be widened to include much more than simply human beings, their actions, intentions and characters. He developed FIE as another “macroethics” (his term) which is similar to utilitarianism, deontologism, contractualism, and virtue ethics, because it is intended to be applicable to all ethical situations. On the other hand, FIE is different from these more traditional Western theories because it is not intended to replace them, but rather to supplement them with further ethical considerations that go beyond the traditional theories, and that can be overridden, sometimes, by traditional ethical considerations. (Floridi, 2006a)

The name “information ethics” is appropriate to Floridi’s theory, because it treats everything that exists as “informational” objects or processes:

[All] entities will be described as clusters of data, that is, as informational objects. More precisely, [any existing entity] will be a discrete, self-contained, encapsulated package containing the appropriate data structures, which constitute the nature of the entity in question, that is, the state of the object, its unique identity and its attributes; and a collection of operations, functions, or procedures, which are activated by various interactions or stimuli (that is, messages received from other objects or changes within itself) and correspondingly define how the object behaves or reacts to them. At this level of abstraction, informational systems as such, rather than just living systems in general, are raised to the role of agents and patients of any action, with environmental processes, changes and interactions equally described informationally. (Floridi 2006a, 9–10)

Since everything that exists, according to FIE, is an informational object or process, he calls the totality of all that exists – the universe considered as a whole – “the infosphere”. Objects and processes in the infosphere can be significantly damaged or destroyed by altering their characteristic data structures. Such damage or destruction Floridi calls “entropy”, and it results in partial “impoverishment of the infosphere”. Entropy in this sense is an evil that should be avoided or minimized, and Floridi offers four “fundamental principles”:

  • Entropy ought not to be caused in the infosphere (null law).
  • Entropy ought to be prevented in the infosphere.
  • Entropy ought to be removed from the infosphere.
  • The flourishing of informational entities as well as the whole infosphere ought to be promoted by preserving, cultivating and enriching their properties.
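Floridi’s description of an entity as an encapsulated package of data structures plus operations reads very much like the object model of object-oriented programming, and the principles above can be read as penalizing entropy. The Python sketch below is only an illustration of that reading – it is not Floridi’s formalism, every name in it (InformationalObject, entropy_caused, the sample actions) is hypothetical, and only the null-law dimension of the four principles is modeled.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InformationalObject:
    """An entity as FIE describes it: a unique identity, attributes (its
    state, or characteristic data), and operations triggered by stimuli."""
    identity: str
    attributes: Dict[str, str] = field(default_factory=dict)

    def react(self, message: str) -> None:
        # Operations are "activated by various interactions or stimuli".
        self.attributes["last_message"] = message

def entropy_caused(effects: List[str]) -> int:
    """Count destructive effects: in FIE, damage to characteristic data
    structures is 'entropy', an evil to be avoided or minimized."""
    return sum(1 for e in effects if e == "corrupts_data")

# An object reacting to a stimulus, and a toy ranking of candidate actions
# by the entropy they would cause (null law first: prefer actions that
# cause no entropy in the infosphere).
record = InformationalObject("a-record")
record.react("update")
actions = {
    "delete_the_only_archive": ["corrupts_data", "corrupts_data"],
    "restore_from_backup": [],
    "annotate_a_record": [],
}
least_entropic = min(actions, key=lambda a: entropy_caused(actions[a]))
print(least_entropic)  # -> restore_from_backup (ties broken by dict order)
```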

FIE is based upon the idea that everything in the infosphere has at least a minimum worth that should be ethically respected, even if that worth can be overridden by other considerations:

[FIE] suggests that there is something even more elemental than life, namely being – that is, the existence and flourishing of all entities and their global environment – and something more fundamental than suffering, namely entropy … . [FIE] holds that being/information has an intrinsic worthiness. It substantiates this position by recognizing that any informational entity has a Spinozian right to persist in its own status, and a Constructionist right to flourish, i.e., to improve and enrich its existence and essence. (Floridi 2006a, p. 11)

By construing every existing entity in the universe as “informational”, with at least a minimal moral worth, FIE can supplement traditional ethical theories and go beyond them by shifting the focus of one’s ethical attention away from the actions, characters, and values of human agents toward the “evil” (harm, dissolution, destruction) – “entropy” – suffered by objects and processes in the infosphere. With this approach, every existing entity – humans, other animals, plants, organizations, even non-living artifacts, electronic objects in cyberspace, pieces of intellectual property – can be interpreted as potential agents that affect other entities, and as potential patients that are affected by other entities. In this way, Floridi treats FIE as a “patient-based” non-anthropocentric ethical theory to be used in addition to the traditional “agent-based” anthropocentric ethical theories like utilitarianism, deontologism and virtue theory.

FIE, with its emphasis on “preserving and enhancing the infosphere”, enables Floridi to provide, among other things, an insightful and practical ethical theory of robot behavior and the behavior of other “artificial agents” like softbots and cyborgs. (See, for example, Floridi and Sanders, 2004.) FIE is an important component of a more ambitious project covering the entire new field of the “Philosophy of Information” (his term). (See Floridi 2011.)

4. Exponential Growth

The paragraphs above describe key contributions to “the history of ideas” in information and computer ethics, but the history of a discipline includes much more. The birth and development of a new academic field require cooperation among a “critical mass” of scholars, plus the creation of university courses, research centers, conferences, academic journals, and more. In this regard, the year 1985 was pivotal for information and computer ethics. The publication of Johnson’s textbook, Computer Ethics, plus a special issue of the journal Metaphilosophy (October 1985) – including especially Moor’s article “What Is Computer Ethics?” – provided excellent curriculum materials and a conceptual foundation for the field. In addition, Maner’s earlier trailblazing efforts, and those of other people whom he had inspired, had generated a “ready-made audience” of enthusiastic computer science and philosophy scholars. The stage was set for exponential growth. (The formidable foundation for computer and information ethics that Wiener had laid down in the late 1940s and early 1950s was so far ahead of its time that social and ethical thinkers of that era did not follow his lead, and a vibrant, growing field of computer and information ethics did not develop until the 1980s.)

In the United States, rapid growth occurred in information and computer ethics beginning in the mid-1980s. In 1987 the Research Center on Computing & Society was founded at Southern Connecticut State University. Shortly thereafter, the Director (the present author) joined with Walter Maner to organize “the National Conference on Computing and Values” (NCCV), funded by America’s National Science Foundation, to bring together computer scientists, philosophers, public policy makers, lawyers, journalists, sociologists, psychologists, business people, and others. The goal was to examine and push forward some of the major sub-areas of information and computer ethics; namely, computer security, computers and privacy, ownership of intellectual property, computing for persons with disabilities, and the teaching of computer ethics. More than a dozen scholars from several different disciplines joined with Bynum and Maner to plan NCCV, which occurred in August 1991 at Southern Connecticut State University. Four hundred people from thirty-two American states and seven other countries attended; and the conference generated a wealth of new computer ethics materials – monographs, video programs and an extensive bibliography – which were disseminated to hundreds of colleges and universities during the following two years.

In that same decade, professional ethics advocates, such as Donald Gotterbarn, Keith Miller and Dianne Martin – and professional organizations, such as Computer Professionals for Social Responsibility, the Electronic Frontier Foundation, and the Special Interest Group on Computing and Society (SIGCAS) of the ACM – spearheaded projects focused upon professional responsibility for computer practitioners. Information and computer ethics became a required component of undergraduate computer science programs that were nationally accredited by the Computer Sciences Accreditation Board. In addition, the annual “Computers, Freedom and Privacy” conferences began in 1991 (see www.cfp.org), and the ACM adopted a new version of its Code of Ethics and Professional Conduct in 1992.

In 1995, rapid growth of information and computer ethics spread to Europe when the present author joined with Simon Rogerson of De Montfort University in England to create the Centre for Computing and Social Responsibility and to organize the first computer ethics conference in Europe, ETHICOMP95. That conference included attendees from fourteen different countries, mostly in Europe, and it became a key factor in generating a “critical mass” of computer ethics scholars in Europe. After 1995, every 18 months, another ETHICOMP conference occurred, moving from country to country in Europe and beyond – Spain, the Netherlands, Italy, Poland, Portugal, Greece, Sweden, Japan, China, Argentina, Denmark, France. In addition, in 1999, with assistance from Bynum and Rogerson, the Australian scholars John Weckert and Christopher Simpson created the Australian Institute of Computer Ethics and organized AICEC99 (Melbourne, Australia), which was the first international computer ethics conference south of the equator. A number of AICE conferences have occurred since then (see http://auscomputerethics.com).

A central figure in the rapid growth of information and computer ethics in Europe was Simon Rogerson. In addition to creating the Centre for Computing and Social Responsibility at De Montfort University and co-heading the influential ETHICOMP conferences, he also (1) added computer ethics to De Montfort University’s curriculum, (2) created a graduate program with advanced computer ethics degrees, including PhDs, and (3) co-founded and co-edited (with Ben Fairweather) two computer ethics journals – The Journal of Information, Communication and Ethics in Society in 2003 and the electronic journal The ETHICOMP Journal in 2004 (for both, see the Other Internet Resources section below). Rogerson also served on the Information Technology Committee of the British Parliament, and he participated in several computer ethics projects with agencies of the European Union.

Other important computer ethics developments in Europe in the late 1990s and early 2000s included, for example, (1) Luciano Floridi’s creation of the Information Ethics Research Group at Oxford University in the mid-1990s; (2) Jeroen van den Hoven’s founding, in 1997, of the CEPE (Computer Ethics: Philosophical Enquiry) series of conferences, which occurred alternately in Europe and America; (3) van den Hoven’s creation of the journal Ethics and Information Technology in 1999; (4) Rafael Capurro’s creation of the International Center for Information Ethics in 1999; (5) Capurro’s creation of the journal International Review of Information Ethics in 2004; and (6) Bernd Carsten Stahl’s creation of The International Journal of Technology and Human Interaction in 2005.

In summary, since 1985 computer ethics developments have proliferated exponentially with new conferences and conference series, new organizations, new research centers, new journals, textbooks, web sites, university courses, university degree programs, and distinguished professorships. Additional “sub-fields” and topics in information and computer ethics continually emerge as information technology itself grows and proliferates. Recent new topics include on-line ethics, “agent” ethics (robots, softbots), cyborg ethics (part human, part machine), the “open source movement”, electronic government, global information ethics, information technology and genetics, computing for developing countries, computing and terrorism, ethics and nanotechnology, to name only a few examples. (For specific publications and examples, see the list of selected resources below.)

Compared to many other scholarly disciplines, the field of computer ethics is very young. It has existed only since the late 1940s when Norbert Wiener created it. During the next few decades, it grew very little because Wiener’s insights were so far ahead of everyone else’s. Beginning in 1985, however, information and computer ethics has grown exponentially, first in America, then in Europe, and then globally.

Bibliography

  • Adam, A. (2000), “Gender and Computer Ethics,” Computers and Society, 30(4): 17–24.
  • Adam, A. and J. Ofori-Amanfo (2000), “Does Gender Matter in Computer Ethics?” Ethics and Information Technology, 2(1): 37–47.
  • Anderson, R., D. Johnson, D. Gotterbarn and J. Perrolle (1993), “Using the New ACM Code of Ethics in Decision Making,” Communications of the ACM, 36: 98–107.
  • Bohman, J. (2008), “The Transformation of the Public Sphere: Political Authority, Communicative Freedom, and Internet Publics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 66–92.
  • Brennan, G. and P. Pettit (2008), “Esteem, Identifiability, and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 175–94.
  • Brey, P. (2001), “Disclosive Computer Ethics,” in R. Spinello and H. Tavani (eds.), Readings in CyberEthics, Sudbury, MA: Jones and Bartlett.
  • ––– (2006a), “Evaluating the Social and Cultural Implications of the Internet,” Computers and Society, 36(3): 41–44.
  • ––– (2006b), “Social and Ethical Dimensions of Computer-Mediated Education,” Journal of Information, Communication & Ethics in Society, 4(2): 91–102.
  • ––– (2008), “Do We Have Moral Duties Toward Information Objects?,” Ethics and Information Technology, 10(2–3): 109–114.
  • ––– (2012), “Anticipatory Ethics for Emerging Technologies,” Nanoethics, 6(1): 1–13.
  • Brey, P., A. Briggle and E. Spence (eds.) (2012), The Good Life in a Technological Age, New York, NY: Routledge.
  • Bynum, T. (1982), “A Discipline in its Infancy,” The Dallas Morning News, January 12, 1982, D/1, D/6.
  • ––– (1999), “The Development of Computer Ethics as a Philosophical Field of Study,” The Australian Journal of Professional and Applied Ethics, 1(1): 1–29.
  • ––– (2000), “The Foundation of Computer Ethics,” Computers and Society, 30(2): 6–13.
  • ––– (2004), “Ethical Challenges to Citizens of the ‘Automatic Age’: Norbert Wiener on the Information Society,” Journal of Information, Communication and Ethics in Society, 2(2): 65–74.
  • ––– (2005), “Norbert Wiener’s Vision: the Impact of the ‘Automatic Age’ on our Moral Lives,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany, NY: SUNY Press, 11–25.
  • ––– (2006), “Flourishing Ethics,” Ethics and Information Technology, 8(4): 157–173.
  • ––– (2008a), “Milestones in the History of Information and Computer Ethics,” in K. Himma and H. Tavani (eds.), The Handbook of Information and Computer Ethics, New York: John Wiley, 25–48.
  • ––– (2008b), “Norbert Wiener and the Rise of Information Ethics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge, UK: Cambridge University Press, 8–25.
  • ––– (2008c), “A Copernican Revolution in Ethics?,” in G. Crnkovic and S. Stuart (eds.), Computation, Information, Cognition: The Nexus and the Liminal, Cambridge, UK: Cambridge Scholars Publishing, 302–329.
  • ––– (2010a), “Historical Roots of Information Ethics,” in L. Floridi (ed.), Handbook of Information and Computer Ethics, Oxford, UK: Wiley-Blackwell, 20–38.
  • ––– (2010b), “Philosophy in the Information Age,” in P. Allo (ed.), Luciano Floridi and the Philosophy of Information, Cambridge, UK: Cambridge University Press, 420–442.
  • Bynum, T. and P. Schubert (1997), “How to do Computer Ethics – A Case Study: The Electronic Mall Bodensee,” in J. van den Hoven (ed.), Computer Ethics – Philosophical Enquiry, Rotterdam: Erasmus University Press, 85–95.
  • Capurro, R. (2004), “The German Debate on the Information Society,” The Journal of Information, Communication and Ethics in Society, 2 (Supplement): 17–18.
  • ––– (2006), “Towards an Ontological Foundation of Information Ethics,” Ethics and Information Technology, 8(4): 175–186.
  • ––– (2007a), “Information Ethics for and from Africa,” International Review of Information Ethics, 2007: 3–13.
  • ––– (2007b), “Intercultural Information Ethics,” in R. Capurro, J. Frühbauer and T. Hausmanninger (eds.), Localizing the Internet: Ethical Issues in Intercultural Perspective (ICIE Series: Volume 4), Munich: Fink, 21–38.
  • Capurro, R. and J. Britz (2010), “In Search of a Code of Global Information Ethics: The Road Travelled and New Horizons,” Ethical Space, 7(2/3): 28–36.
  • Capurro, R. and M. Nagenborg (eds.) (2009), Ethics and Robotics, Heidelberg: Akademische Verlagsgesellschaft, IOS Press.
  • Cavalier, R. (ed.) (2005), The Impact of the Internet on Our Moral Lives, Albany, NY: SUNY Press.
  • Cocking, D. (2008), “Plural Selves and Relational Identity: Intimacy and Privacy Online,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 123–41.
  • de Laat, P. (2010), “How Can Contributions to Open-Source Communities be Trusted?,” Ethics and Information Technology, 12(4): 327–341.
  • ––– (2012), “Coercion or Empowerment? Moderation of Content in Wikipedia as Essentially Contested Bureaucratic Rules,” Ethics and Information Technology, 14(2): 123–135.
  • Edgar, S. (1997), Morality and Machines: Perspectives on Computer Ethics, Sudbury, MA: Jones and Bartlett.
  • Elgesem, D. (1995), “Data Privacy and Legal Argumentation,” Communication and Cognition, 28(1): 91–114.
  • ––– (1996), “Privacy, Respect for Persons, and Risk,” in C. Ess (ed.), Philosophical Perspectives on Computer-Mediated Communication, Albany: SUNY Press, 45–66.
  • ––– (2002), “What is Special about the Ethical Problems in Internet Research?” Ethics and Information Technology, 4(3): 195–203.
  • ––– (2008), “Information Technology Research Ethics,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 354–75.
  • Ess, C. (1996), “The Political Computer: Democracy, CMC, and Habermas,” in C. Ess (ed.), Philosophical Perspectives on Computer-Mediated Communication, Albany: SUNY Press, 197–230.
  • ––– (ed.) (2001a), Culture, Technology, Communication: Towards an Intercultural Global Village, Albany: SUNY Press.
  • ––– (2001b), “What’s Culture got to do with it? Cultural Collisions in the Electronic Global Village,” in C. Ess (ed.), Culture, Technology, Communication: Towards an Intercultural Global Village, Albany: SUNY Press, 1–50.
  • ––– (2004), “Computer-Mediated Communication and Human-Computer Interaction,” in L. Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell, 76–91.
  • ––– (2005), “Moral Imperatives for Life in an Intercultural Global Village,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 161–193.
  • ––– (2008), “Culture and Global Networks: Hope for a Global Ethics?” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 195–225.
  • ––– (2013), “Global? Media Ethics: Issues, Challenges, Requirements, Resolutions,” in S. Ward (ed.), Global Media Ethics: Problems and Perspectives, Oxford: Wiley-Blackwell, 253–271.
  • Fairweather, B. (1998), “No PAPA: Why Incomplete Codes of Ethics are Worse than None at all,” in G. Collste (ed.), Ethics and Information Technology, New Delhi: New Academic Publishers.
  • ––– (2011), “Even Greener IT: Bringing Green Theory and Green IT Together,” Journal of Information, Communication and Ethics in Society, 9(2): 68–82.
  • Flanagan, M., D. Howe, and H. Nissenbaum (2008), “Embodying Value in Technology: Theory and Practice,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 322–53.
  • Flanagan, M. and H. Nissenbaum (2014), Values at Play in Digital Games, Cambridge, MA: MIT Press.
  • Floridi, L. (1999), “Information Ethics: On the Theoretical Foundations of Computer Ethics,” Ethics and Information Technology, 1(1): 37–56.
  • ––– (ed.) (2004), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell.
  • ––– (2005b), “Internet Ethics: The Constructionist Values of Homo Poieticus,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives, Albany: SUNY Press, 195–214.
  • ––– (2006a), “Information Ethics: Its Nature and Scope,” Computers and Society, 36(3): 21–36.
  • ––– (2006b), “Information Technologies and the Tragedy of the Good Will,” Ethics and Information Technology, 8(4): 253–262.
  • ––– (2008), “Information Ethics: Its Nature and Scope,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 40–65.
  • ––– (ed.) (2010), Handbook of Information and Computer Ethics, Cambridge: Cambridge University Press.
  • ––– (2011), The Philosophy of Information, Oxford: Oxford University Press.
  • ––– (2013a), “Distributed Morality in an Information Society,” Science and Engineering Ethics, 19(3): 727–743.
  • ––– (2013b), The Ethics of Information, Oxford: Oxford University Press.
  • ––– (2014), The Fourth Revolution: How the Infosphere is Reshaping Human Reality, Oxford: Oxford University Press.
  • Floridi, L. and J. Sanders (2004), “The Foundationalist Debate in Computer Ethics,” in R. Spinello and H. Tavani (eds.), Readings in CyberEthics, 2nd edition, Sudbury, MA: Jones and Bartlett, 81–95.
  • Forester, T. and P. Morrison (1990), Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing, Cambridge, MA: MIT Press.
  • Fried, C. (1984), “Privacy,” in F. Schoeman (ed.), Philosophical Dimensions of Privacy, Cambridge: Cambridge University Press.
  • Friedman, B. (ed.) (1997), Human Values and the Design of Computer Technology, Cambridge: Cambridge University Press.
  • Friedman, B. and H. Nissenbaum (1996), “Bias in Computer Systems,” ACM Transactions on Information Systems, 14(3): 330–347.
  • Gerdes, A. (2013), “Ethical Issues in Human Robot Interaction,” in H. Nykänen, O. Riis, and J. Zelle (eds.), Theoretical and Applied Ethics, Aalborg, Denmark: Aalborg University Press, 125–143.
  • Gert, B. (1998), Morality: Its Nature and Justification, New York: Oxford University Press.
  • ––– (1999), “Common Morality and Computing,” Ethics and Information Technology, 1(1): 57–64.
  • Goldman, A. (2008), “The Social Epistemology of Blogging,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 111–22.
  • Gordon, W. (2008), “Moral Philosophy, Information Technology, and Copyright: The Grokster Case,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 270–300.
  • Gorniak-Kocikowska, K. (1996), “The Computer Revolution and the Problem of Global Ethics,” in T. Bynum and S. Rogerson (eds.), Global Information Ethics, Guildford, UK: Opragen Publications, 177–90.
  • ––– (2005), “From Computer Ethics to the Ethics of the Global ICT Society,” in T. Bynum, G. Collste, and S. Rogerson (eds.), Proceedings of ETHICOMP2005 (CD-ROM), Center for Computing and Social Responsibility, Linköpings University. Also in Library Hi Tech, 25(1): 47–57.
  • ––– (2007), “ICT, Globalization and the Pursuit of Happiness: The Problem of Change,” in Proceedings of ETHICOMP2007, Tokyo: Meiji University Press.
  • ––– (2008), “ICT and the Tension between Old and New: The Human Factor,” Journal of Information, Communication and Ethics in Society, 6(1): 4–27.
  • Gotterbarn, D. (1991), “Computer Ethics: Responsibility Regained,” National Forum: The Phi Beta Kappa Journal, 71: 26–31.
  • ––– (2001), “Informatics and Professional Responsibility,” Science and Engineering Ethics, 7(2): 221–30.
  • ––– (2002), “Reducing Software Failures: Addressing the Ethical Risks of the Software Development Life Cycle,” Australian Journal of Information Systems, 9(2): 155–65.
  • ––– (2008), “Once More unto the Breach: Professional Responsibility and Computer Ethics,” Science and Engineering Ethics, 14(1): 235–239.
  • ––– (2009), “The Public is the Priority: Making Decisions Using the SE Code of Ethics,” IEEE Computer, June: 42–49.
  • Gotterbarn, D., K. Miller, and S. Rogerson (1997), “Software Engineering Code of Ethics,” Communications of the ACM, 40(11): 110–118.
  • Gotterbarn, D. and K. Miller (2004), “Computer Ethics in the Undergraduate Curriculum: Case Studies and the Joint Software Engineer’s Code,” Journal of Computing Sciences in Colleges, 20(2): 156–167.
  • Gotterbarn, D. and S. Rogerson (2005), “Responsible Risk Analysis for Software Development: Creating the Software Development Impact Statement,” Communications of the Association for Information Systems, 15(40): 730–50.
  • Grodzinsky, F. (1997), “Computer Access for Students with Disabilities,” SIGCSE Bulletin, 29(1): 292–295; [Available online].
  • ––– (1999), “The Practitioner from Within: Revisiting the Virtues,” Computers and Society, 29(2): 9–15.
  • Grodzinsky, F., A. Gumbus and S. Lilley (2010), “Ethical Implications of Internet Monitoring: A Comparative Study,” Information Systems Frontiers, 12(4): 433–431.
  • Grodzinsky, F., K. Miller and M. Wolf (2003), “Ethical Issues in Open Source Software,” Journal of Information, Communication and Ethics in Society, 1(4): 193–205.
  • ––– (2008), “The Ethics of Designing Artificial Agents,” Ethics and Information Technology, 10(2–3): 115–121.
  • ––– (2011), “Developing Artificial Agents Worthy of Trust,” Ethics and Information Technology, 13(1): 17–27.
  • Grodzinsky, F. and H. Tavani (2002), “Ethical Reflections on Cyberstalking,” Computers and Society, 32(1): 22–32.
  • ––– (2004), “Verizon vs. the RIAA: Implications for Privacy and Democracy,” in J. Herkert (ed.), Proceedings of ISTAS 2004: The International Symposium on Technology and Society, Los Alamitos, CA: IEEE Computer Society Press.
  • ––– (2010), “Applying the Contextual Integrity Model of Privacy to Personal Blogs in the Blogosphere,” International Journal of Internet Research Ethics, 3(1): 38–47.
  • Grodzinsky, F. and M. Wolf (2008), “Ethical Issues in Free and Open Source Software,” in K. Himma and H. Tavani (eds.), The Handbook of Information and Computer Ethics, Hoboken, NJ: Wiley, 245–272.
  • Himma, K. (2003), “The Relationship Between the Uniqueness of Computer Ethics and its Independence as a Discipline in Applied Ethics,” Ethics and Information Technology, 5(4): 225–237.
  • ––– (2004a), “The Moral Significance of the Interest in Information: Reflections on a Fundamental Right to Information,” Journal of Information, Communication, and Ethics in Society, 2(4): 191–202.
  • ––– (2004b), “There’s Something about Mary: The Moral Value of Things qua Information Objects,” Ethics and Information Technology, 6(3): 145–159.
  • ––– (2006), “Hacking as Politically Motivated Civil Disobedience: Is Hacktivism Morally Justified?” in K. Himma (ed.), Readings in Internet Security: Hacking, Counterhacking, and Society, Sudbury, MA: Jones and Bartlett.
  • ––– (2007), “Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to be a Moral Agent?” in Proceedings of ETHICOMP2007, Tokyo: Meiji University Press.
  • Himma, K. and H. Tavani (eds.) (2008), The Handbook of Information and Computer Ethics, Hoboken, NJ: Wiley.
  • Hongladarom, S. (2011), “Personal Identity and the Self in the Online and Offline Worlds,” Minds and Machines, 21(4): 533–548.
  • ––– (2013), “Ubiquitous Computing, Empathy and the Self,” AI and Society, 28(2): 227–236.
  • Huff, C. and T. Finholt (eds.) (1994), Social Issues in Computing: Putting Computers in Their Place, New York: McGraw-Hill.
  • Huff, C. and D. Martin (1995), “Computing Consequences: A Framework for Teaching Ethical Computing,” Communications of the ACM, 38(12): 75–84.
  • Huff, C. (2002), “Gender, Software Design, and Occupational Equity,” SIGCSE Bulletin: Inroads, 34: 112–115.
  • ––– (2004), “Unintentional Power in the Design of Computing Systems,” in T. Bynum and S. Rogerson (eds.), Computer Ethics and Professional Responsibility, Oxford: Blackwell.
  • Huff, C., D. Johnson, and K. Miller (2003), “Virtual Harms and Real Responsibility,” Technology and Society Magazine (IEEE), 22(2): 12–19.
  • Huff, C. and L. Barnard (2009), “Good Computing: Life Stories of Moral Exemplars in the Computing Profession,” IEEE Technology and Society, 28(3): 47–54.
  • Introna, L. (1997), “Privacy and the Computer: Why We Need Privacy in the Information Society,” Metaphilosophy, 28(3): 259–275.
  • ––– (2002), “On the (Im)Possibility of Ethics in a Mediated World,” Information and Organization, 12(2): 71–84.
  • ––– (2005a), “Disclosive Ethics and Information Technology: Disclosing Facial Recognition Systems,” Ethics and Information Technology, 7(2): 75–86.
  • ––– (2005b), “Phenomenological Approaches to Ethics and Information Technology,” The Stanford Encyclopedia of Philosophy (Fall 2005 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/fall2005/entries/ethics-it-phenomenology/>.
  • Introna, L. and H. Nissenbaum (2000), “Shaping the Web: Why the Politics of Search Engines Matters,” The Information Society, 16(3): 1–17.
  • Introna, L. and N. Pouloudi (2001), “Privacy in the Information Age: Stakeholders, Interests and Values,” in J. Sheth (ed.), Internet Marketing, Fort Worth, TX: Harcourt College Publishers, 373–388.
  • Johnson, D. (1985), Computer Ethics, First Edition, Englewood Cliffs, NJ: Prentice-Hall; Second Edition, Englewood Cliffs, NJ: Prentice-Hall, 1994; Third Edition, Upper Saddle River, NJ: Prentice-Hall, 2001; Fourth Edition (with Keith Miller), New York: Pearson, 2009.
  • ––– (1997a), “Ethics Online,” Communications of the ACM, 40(1): 60–65.
  • ––– (1997b), “Is the Global Information Infrastructure a Democratic Technology?” Computers and Society, 27(4): 20–26.
  • ––– (2004), “Computer Ethics,” in L. Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information, Oxford: Blackwell, 65–75.
  • ––– (2011), “Software Agents, Anticipatory Ethics, and Accountability,” in G. Merchant, B. Allenby, and J. Herkert (eds.), The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight (The International Library of Ethics, Law and Technology: Volume 7), Heidelberg, Germany: Springer, 61–76.
  • Johnson, D. and H. Nissenbaum (eds.) (1995), Computing, Ethics & Social Values, Englewood Cliffs, NJ: Prentice Hall.
  • Johnson, D. and T. Powers (2008), “Computers as Surrogate Agents,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy, Cambridge: Cambridge University Press, 251–69.
  • Kocikowski, A. (1996), “Geography and Computer Ethics: An Eastern European Perspective,” in T. Bynum and S. Rogerson (eds.), Science and Engineering Ethics (Special Issue: Global Information Ethics), 2(2): 201–10.
  • Lane, J., V. Stodden, S. Bender, and H. Nissenbaum (eds.) (2014), Privacy, Big Data and the Public Good, Cambridge: Cambridge University Press.
  • Lessig, L. (2004), “The Laws of Cyberspace,” in R. Spinello and H. Tavani (eds.), Readings in CyberEthics, 2nd edition, Sudbury, MA: Jones and Bartlett, 134–144.
  • Lloyd, S. (2006), Programming the Universe , New York: Alfred A. Knopf Publishers.
  • Maner, W. (1980), Starter Kit in Computer Ethics , Hyde Park, NY: Helvetia Press and the National Information and Resource Center for Teaching Philosophy.
  • ––– (1996), “Unique Ethical Problems in Information Technology,” in T. Bynum and S. Rogerson (eds.), Science and Engineering Ethics (Special Issue: Global Information Ethics), 2(2): 137–154.
  • Martin, C. and D. Martin (1990), “Professional Codes of Conduct and Computer Ethics Education,” Social Science Computer Review , 8(1): 96–108.
  • Martin, C., C. Huff, D. Gotterbarn, K. Miller, et al . (1996), “A Framework for Implementing and Teaching the Social and Ethical Impact of Computing,” Education and Information Technologies , 1(2): 101–122.
  • Martin, C., C. Huff, D. Gotterbarn, and K. Miller (1996), “Implementing a Tenth Strand in the Computer Science Curriculum” (Second Report of the Impact CS Steering Committee), Communications of the ACM , 39(12): 75–84.
  • Marx, G. (2001), “Identity and Anonymity: Some Conceptual Distinctions and Issues for Research,” in J. Caplan and J. Torpey (eds.), Documenting Individual Identity , Princeton: Princeton University Press.
  • Mather, K. (2005), “The Theoretical Foundation of Computer Ethics: Stewardship of the Information Environment,” in Contemporary Issues in Governance (Proceedings of GovNet Annual Conference, Melbourne, Australia, 28–30 November, 2005), Melbourne: Monash University.
  • Matthews, S. (2008), “Identity and Information Technology.” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy , Cambridge: Cambridge University Press, 142–60.
  • Miller, K. (2005), “Web standards: Why So Many Stray from the Narrow Path,” Science and Engineering Ethics , 11(3): 477–479.
  • Miller, K. and D. Larson (2005a), “Agile Methods and Computer Ethics: Raising the Level of Discourse about Technological Choices,” IEEE Technology and Society , 24(4): 36–43.
  • ––– (2005b), “Angels and Artifacts: Moral Agents in the Age of Computers and Networks,” Journal of Information, Communication & Ethics in Society , 3(3): 151–157.
  • Miller, S. (2008), “Collective Responsibility and Information and Communication Technology.” in J. van den Hoven and J> Weckert (eds.), Information Technology and Moral Philosophy , Cambridge: Cambridge University Press, 226–50.
  • Moor, J. (1979), “Are there Decisions Computers Should Never Make?” Nature and System , 1: 217–29.
  • ––– (1985) “What Is Computer Ethics?” Metaphilosophy , 16(4): 266–75.
  • ––– (1996), “Reason, Relativity and Responsibility in Computer Ethics,” in Computers and Society , 28(1) (1998): 14–21; originally a keynote address at ETHICOMP96 in Madrid, Spain, 1996.
  • ––– (1997), “Towards a Theory of Privacy in the Information Age,” Computers and Society , 27(3): 27–32.
  • ––– (1999), “Just Consequentialism and Computing,” Ethics and Information Technology , 1(1): 65–69.
  • ––– (2001), “The Future of Computer Ethics: You Ain’t Seen Nothin’ Yet,” Ethics and Information Technology , 3(2): 89–91.
  • ––– (2005), “Should We Let Computers Get under Our Skin?” in R. Cavalier, The Impact of the Internet on our Moral Lives , Albany: SUNY Press, 121–138.
  • ––– (2006), “The Nature, Importance, and Difficulty of Machine Ethics,” IEEE Intelligent Systems , 21(4): 18–21.
  • ––– (2007), “Taking the Intentional Stance Toward Robot Ethics,” American Philosophical Association Newsletters , 6(2): 111–119.
  • ––– (2008) “Why We Need Better Ethics for Emerging Technologies,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy , Cambridge: Cambridge University Press, 26–39.
  • Murata, K. and Y. Orito (2010), “Japanese Risk Society: Trying to Create Complete Security and Safety Using Information and Communication Technology”, Computers and Society , ACM SIGCAS 40(3): 38–49.
  • Murata, K., Y. Orito and Y. Fukuta (2014), “Social Attitudes of Young People in Japan Towards Online Privacy”, Journal of Law, Information and Science , 23(1): 137–157.
  • Nissenbaum, H. (1995), “Should I Copy My Neighbor’s Software?” in D. Johnson and H. Nissenbaum (eds), Computers, Ethics, and Social Responsibility , Englewood Cliffs, NJ: Prentice Hall.
  • ––– (1997), “Can We Protect Privacy in Public?” in Proceedings of Computer Ethics – Philosophical Enquiry 97 (CEPE97), Rotterdam: Erasmus University Press, 191–204; reprinted Nissenbaum 1998a.
  • ––– (1998a), “Protecting Privacy in an Information Age: The Problem of Privacy in Public,” Law and Philosophy , 17: 559–596.
  • ––– (1998b), “Values in the Design of Computer Systems,” Computers in Society , 1998: 38–39.
  • ––– (1999), “The Meaning of Anonymity in an Information Age,” The Information Society , 15: 141–144.
  • ––– (2005a), “Hackers and the Contested Ontology of Cyberspace,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives , Albany: SUNY Press, 139–160.
  • ––– (2005b), “Where Computer Security Meets National Security,” Ethics and Information Technology , 7(2): 61–73.
  • ––– (2011), “A Contextual Approach to Privacy Online,” Daedalus , 140(4): 32–48.
  • Ocholla, D, J. Britz, R. Capurro, and C. Bester, (eds.) (2013), Information Ethics in Africa: Cross-Cutting Themes , African Center of Excellence for Information Ethics, Pretoria, South Africa.
  • Orito, Y. (2011), “The Counter-Control Revolution: Silent Control of Individuals Through Dataveillance Systems,” Journal of Information, Communication and Ethics in Society , 9(1): 5–19.
  • Parker, D. (1968), “Rules of Ethics in Information Processing,” Communications of the ACM , 11: 198–201.
  • ––– (1979), Ethical Conflicts in Computer Science and Technology . Arlington, VA: AFIPS Press.
  • Parker, D., S. Swope and B. Baker (1990), Ethical Conflicts in Information & Computer Science, Technology & Business , Wellesley, MA: QED Information Sciences.
  • Pecorino, P. and W. Maner (1985), “A Proposal for a Course on Computer Ethics,” Metaphilosophy , 16(4): 327–337.
  • Pettit, P. (2008), “Trust, Reliance, and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy , Cambridge: Cambridge University Press, 161–74.
  • Powers, T. M. (2006), “Prospects for a Kantian Machine,” IEEE Intelligent Systems , 21(4): 46–51. Also in M. Anderson and S. Anderson (eds.), IEEE Intelligent Systems , Cambridge, UK: Cambridge University Press, 2011.
  • ––– (2009), “Machines and Moral Reasoning,” Philosophy Now , 72: 15–16.
  • ––– (2011), “Incremental Machine Ethics,” IEEE Robotics and Automation , 18(1): 51–58.
  • ––– (2013), “On the Moral Agency of Computers,” Topoi: An International Review of Philosophy , 32(2): 227–236.
  • Rogerson, S. (1996), “The Ethics of Computing: The First and Second Generations,” The UK Business Ethics Network News , 6: 1–4.
  • ––– (1998), “Computer and Information Ethics,” in R. Chadwick (ed.), Encyclopedia of Applied Ethics , San Diego, CA: Academic Press, 563–570.
  • ––– (2004), “The Ethics of Software Development Project Management,” in T. Bynum and S. Rogerson (eds.), Computer Ethics and Professional Responsibility , Oxford: Blackwell, 119–128.
  • ––– (1995), “Cyberspace: The Ethical Frontier,” The Times Higher Education Supplement (The London Times), No. 1179, June, 9, 1995, iv.
  • ––– (2002), “The Ethical Attitudes of Information Systems Professionals: Outcomes of an Initial Survey,” Telematics and Informatics , 19: 21–36.
  • ––– (1998), “The Ethics of Software Project Management,” in G. Collste (ed.), Ethics and Information Technology , New Delhi: New Academic Publishers, 137–154.
  • Sojka, J. (1996), “Business Ethics and Computer Ethics: The View from Poland,” in T. Bynum and S. Rogerson (eds.), Global Information Ethics , Guilford, UK: Opragen Publications (a special issue of Science and Engineering Ethics ) 191–200.
  • Søraker, J. (2012), “How Shall I Compare Thee? Comparing the Prudential Value of Actual and Virtual Friendship” Ethics and Information Technology , 14(3): 209–219.
  • Spafford, E., K. Heaphy, and D. Ferbrache (eds.) (1989), Computer Viruses: Dealing with Electronic Vandalism and Programmed Threats , Arlington, VA: ADAPSO (now ITAA).
  • Spafford, E. (1992), “Are Computer Hacker Break-Ins Ethical?” Journal of Systems and Software , 17: 41–47.
  • Spinello, R. (1997), Case Studies in Information and Computer Ethics , Upper Saddle River, NJ: Prentice-Hall.
  • ––– (2000), CyberEthics: Morality and Law in Cyberspace , Sudbury, MA: Jones and Bartlett; Fifth Edition, 2014.
  • Spinello, R. and H. Tavani (2001a), “The Internet, Ethical Values, and Conceptual Frameworks: An Introduction to Cyberethics,” Computers and Society , 31(2): 5–7.
  • ––– (eds.) (2001b), Readings in CyberEthics , Sudbury, MA: Jones and Bartlett; Second Edition, 2004.
  • ––– (eds.) (2005), Intellectual Property Rights in a Networked World: Theory and Practice , Hershey, PA: Idea Group/Information Science Publishing.
  • Stahl, B. (2004a), “Information, Ethics and Computers: The Problem of Autonomous Moral Agents,” Minds and Machines , 14: 67–83.
  • ––– (2004b), Responsible Management of Information Systems , Hershey, PA: Idea Group/Information Science Publishing.
  • ––– (2005), “The Ethical Problem of Framing E-Government in Terms of E-Commerce,” Electronic Journal of E-Government , 3(2): 77–86.
  • ––– (2006), “Responsible Computers? A Case for Ascribing Quasi-responsibility to Computers Independent of Personhood or Agency,” Ethics and Information Technology , 8(4):205–213.
  • ––– (2011), “IT for a Better Future: How to Integrate Ethics, Politics and Innovation,” Journal of Information, Communication and Ethics in Society , 9(3): 140–156.
  • ––– (2013), “Virtual Suicide and Other Ethical Issues of Emerging Information Technologies,” Futures , 50: 35–43.
  • ––– (2014), “Participatory Design as Ethical Practice -- Concepts, Reality and Conditions,” Journal of Information, Communication and Ethics in Society , 12(1): 10–13.
  • Stahl, B., R. Heersmink, P. Goujon, C. Flick. J. van den Hoven, K. Wakunuma, V. Ikonen, and M. Rader (2010), “Identifying the Ethics of Emerging Information and Communication Technologies,” International Journal of Technoethics , 1(4): 20–38.
  • Sullins, J. (2006), “When Is a Robot a Moral Agent?,” International Review of Information Ethics , 6(1): 23–30.
  • ––– (2010), “Robo Warfare: Can Robots Be More Ethical than Humans on the Battlefield?,” Ethics and Information Technology , 12(3): 263–275.
  • ––– (2013), “Roboethics and Telerobot Weapons Systems,” in D. Michelfelder, N. McCarthy and D. Goldberg (eds.), Philosophy and Engineering: Reflections on Practice, Principles and Process , Dordrecht: Springer, 229–237.
  • Sunstein, C. (2008), “Democracy and the Internet,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy , Cambridge: Cambridge University Press, 93–110.
  • Taddeo, M. (2012), “Information Warfare: A Philosophical Perspective,” Philosophy and Technology , 25(1): 105–120.
  • Tavani, H. (ed.) (1996), Computing, Ethics, and Social Responsibility: A Bibliography , Palo Alto, CA: Computer Professionals for Social Responsibility Press.
  • ––– (1999a), “Privacy and the Internet,” Proceedings of the Fourth Annual Ethics and Technology Conference , Chestnut Hill, MA: Boston College Press, 114–25.
  • ––– (1999b), “Privacy On-Line,” Computers and Society , 29(4): 11–19.
  • ––– (2002), “The Uniqueness Debate in Computer Ethics: What Exactly is at Issue and Why Does it Matter?” Ethics and Information Technology , 4(1): 37–54.
  • ––– (2004), Ethics and Technology: Ethical Issues in an Age of Information and Communication Technology , Hoboken, NJ: Wiley; Second Edition, 2007; Third Edition, 2011; Fourth Edition, 2013.
  • ––– (2005), “The Impact of the Internet on our Moral Condition: Do We Need a New Framework of Ethics?” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives , Albany: SUNY Press, 215–237.
  • ––– (2006), Ethics, Computing, and Genomics , Sudbury, MA: Jones and Bartlett.
  • Tavani, H. and J. Moor (2001), “Privacy Protection, Control of Information, and Privacy-Enhancing Technologies,” Computers and Society , 31(1): 6–11.
  • Turilli, M. and L. Floridi, (2009), “The Ethics of Information Transparency,” Ethics and Information Technology , 11(2): 105–112.
  • Turilli, M., A. Vacaro and M. Taddeo, (2010), “The Case of Online Trust,” Knowledge, Technology and Policy , 23(3/4): 333–345.
  • Turkle, S. (1984), The Second Self: Computers and the Human Spirit , New York: Simon & Schuster.
  • ––– (2011), Alone Together: Why We Expect More from Technology and Less from Each Other , New York: Basic Books.
  • Turner, A.J. (2011), “Summary of the ACM/IEEE-CS Joint Curriculum Task Force Report: Computing Curricula, 1991,” Communications of the ACM , 34(6): 69–84.
  • Turner, E. (2006), “Teaching Gender-Inclusive Computer Ethics, ” in I. Trauth (ed.), Encyclopedia of Gender and Information Technology: Exploring the Contributions, Challenges, Issues and Experiences of Women in Information Technology , Hershey, PA: Idea Group/Information Science Publishing, 1142–1147.
  • van den Hoven, J. (1997a), “Computer Ethics and Moral Methodology,” Metaphilosophy , 28(3): 234–48.
  • ––– (1997b), “Privacy and the Varieties of Informational Wrongdoing,” Computers and Society , 27(3): 33–37.
  • ––– (1998), “Ethics, Social Epistemics, Electronic Communication and Scientific Research,” European Review , 7(3): 341–349.
  • ––– (2008a), “Information Technology, Privacy, and the Protection of Personal Data,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy , Cambridge: Cambridge University Press, 301–321.
  • van den Hoven, J. and E. Rooksby (2008), “Distributive Justice and the Value of Information: A (Broadly) Rawlsian Approach,” in J. van den Hoven and J. Weckert (eds.), Information Technology and Moral Philosophy , Cambridge: Cambridge University Press, 376–96.
  • van den Hoven, J. and J. Weckert (2008), Information Technology and Moral Philosophy , Cambridge: Cambridge University Press.
  • Vedral, V. (2010), Decoding Reality , Oxford: Oxford University Press.
  • Volkman, R. (2003), “Privacy as Life, Liberty, Property,” Ethics and Information Technology , 5(4): 199–210.
  • ––– (2005), “Dynamic Traditions: Why Globalization Does Not Mean Homogenization,” in Proceedings of ETHICOMP2005 (CD-ROM), Center for Computing and Social Responsibility, Linköpings University.
  • ––– (2007), “The Good Computer Professional Does Not Cheat at Cards,” in Proceedings of ETHICOMP2007 , Tokyo: Meiji University Press.
  • Weckert, J. (2002), “Lilliputian Computer Ethics,” Metaphilosophy , 33(3): 366–375.
  • ––– (2005), “Trust in Cyberspace,” in R. Cavalier (ed.), The Impact of the Internet on our Moral Lives , Albany: SUNY Press, 95–117.
  • ––– (2007), “Giving and Taking Offence in a Global Context,” International Journal of Technology and Human Interaction , 25–35.
  • Weckert, J. and D. Adeney (1997), Computer and Information Ethics , Westport, CT: Greenwood Press.
  • Weizenbaum, J. (1976), Computer Power and Human Reason: From Judgment to Calculation , San Francisco, CA: Freeman.
  • Westin, A. (1967), Privacy and Freedom , New York: Atheneum.
  • Wiener, N. (1948), Cybernetics: or Control and Communication in the Animal and the Machine , New York: Technology Press/John Wiley & Sons.
  • ––– (1950), The Human Use of Human Beings: Cybernetics and Society , Boston: Houghton Mifflin; Second Edition Revised, New York, NY: Doubleday Anchor 1954.
  • ––– (1964), God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion , Cambridge, MA: MIT Press.
  • Wolf. M., K. Miller and F. Grodzinsky (2011), “On the Meaning of Free Software,” Ethics and Information Technology , 11(4): 279–286.
The Intersection of Artificial Intelligence and Christian Thought: A Vision for the Future

As technology continues to evolve at a rapid pace, one of the most profound developments of our time is the advent of artificial intelligence (AI). From self-driving cars to sophisticated data analysis, AI is reshaping the world in ways we could have only imagined a few decades ago. For a Christian liberal arts college dedicated to integrating faith and learning, the rise of AI presents both challenges and opportunities. This article explores how AI intersects with Christian thought and how believers can thoughtfully engage with this technology in a manner that honors God and serves humanity.

Understanding Artificial Intelligence

Artificial intelligence refers to the capability of machines to perform tasks that typically require human intelligence. These tasks include problem-solving, recognizing patterns, understanding natural language, and even exhibiting forms of creativity. AI can be broadly categorized into “narrow AI,” which is designed for specific tasks (such as speech recognition), and “general AI,” which possesses the ability to perform any intellectual task that a human can do. While the former is already a part of our daily lives, the latter remains a theoretical concept.
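
To make the distinction concrete, the toy program below (a sketch in Python, using entirely made-up data and names) shows what "narrow AI" looks like in miniature: a system that learns one pattern-recognition task, labeling short messages as spam or not, and can do nothing else. Real systems use far more sophisticated statistical models, but the narrowness is the same.

    # A toy "narrow AI": a single-purpose text classifier that labels short
    # messages as "spam" or "not spam" and can do nothing else.
    # All data and names here are hypothetical, for illustration only.
    from collections import Counter

    # Tiny hand-labeled training set (hypothetical examples).
    TRAINING_DATA = [
        ("win a free prize now", "spam"),
        ("claim your free reward", "spam"),
        ("meeting moved to tuesday", "not spam"),
        ("lunch with the team today", "not spam"),
    ]

    def train(examples):
        """Count how often each word appears under each label."""
        counts = {"spam": Counter(), "not spam": Counter()}
        for text, label in examples:
            counts[label].update(text.split())
        return counts

    def classify(text, counts):
        """Score each label by how often its training words appear in the message."""
        scores = {
            label: sum(counter[word] for word in text.split())
            for label, counter in counts.items()
        }
        return max(scores, key=scores.get)  # ties resolve arbitrarily; fine for a toy

    counts = train(TRAINING_DATA)
    print(classify("free prize inside", counts))        # prints: spam
    print(classify("team meeting on tuesday", counts))  # prints: not spam

However capable such a program is at its one task, it has no understanding of anything outside it, which is precisely what separates narrow AI from the still-theoretical general AI.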

Theological Reflections on AI

From a Christian perspective, the development and use of AI invite us to reflect deeply on several theological and ethical questions. At the heart of this reflection is the belief that all creation is a reflection of God’s wisdom and creativity. AI, as a product of human ingenuity, can be seen as an extension of our God-given creativity and stewardship.

  • Imago Dei and Human Uniqueness : The doctrine of Imago Dei, the belief that humans are created in the image of God, underpins much of Christian anthropology. AI challenges us to consider what it means to be human. While machines can simulate certain aspects of human intelligence, they lack the spiritual dimension that is intrinsic to humans. This distinction is crucial as we navigate the ethical implications of AI, ensuring that human dignity and value are upheld.
  • Stewardship and Responsibility : Genesis 1:28 calls humanity to exercise dominion over the earth, which includes the responsible use of technology. AI has the potential to solve complex societal problems, but it also poses risks such as job displacement and ethical dilemmas around surveillance and privacy. Christians are called to steward these technologies wisely, advocating for uses that promote justice, equality, and the common good. This stewardship aligns with the biblical mandate to care for creation and serve others selflessly.
  • Fairness and Transparency : AI systems also raise practical ethical concerns:
    • Bias in data: if AI learns from biased or incomplete data, it can make unfair decisions.
    • Unfair decision-making: AI might make decisions that favor some groups over others in crucial tasks, such as hiring or lending.
    • Lack of diversity in AI development: when AI systems are created by non-diverse teams, important perspectives may be missed.
    • Lack of transparency: AI decisions can be hard to understand or explain, making it difficult to spot and correct biases.

These concerns have led to a growing emphasis on making AI ethical, fair, and transparent. Researchers and policymakers are working on practical solutions, such as training on more balanced data, auditing AI systems for fairness, and creating rules to ensure AI treats everyone equitably; a minimal example of such an audit is sketched below.

As Christians committed to integrity and compassion, we must advocate for transparency and fairness in AI development. This includes supporting policies and practices that prevent discrimination and ensure that AI benefits all members of society. The prophetic call for justice, seen throughout the Bible, compels us to address these issues with a Christ-centered perspective.
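
To give one concrete picture of what "auditing AI for fairness" can involve, the sketch below computes a simple and widely discussed measure, the gap in selection rates between two groups (often called the demographic parity difference), over entirely hypothetical hiring decisions. Real audits use richer data and several complementary metrics, and the 0.2 threshold here is illustrative rather than an official standard; this only shows the idea.

    # Minimal sketch of a fairness audit: compare the rates at which a model
    # selects candidates from two groups. All data below is hypothetical.

    # Each record: (group label, whether the model recommended hiring).
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    def selection_rate(records, group):
        """Fraction of candidates in `group` that the model selected."""
        outcomes = [hired for g, hired in records if g == group]
        return sum(outcomes) / len(outcomes)

    rate_a = selection_rate(decisions, "group_a")  # 3 of 4 selected: 0.75
    rate_b = selection_rate(decisions, "group_b")  # 1 of 4 selected: 0.25

    # Demographic parity difference: 0.0 would mean equal selection rates.
    gap = abs(rate_a - rate_b)
    print(f"selection rates: a={rate_a:.2f}, b={rate_b:.2f}, gap={gap:.2f}")
    if gap > 0.2:  # threshold chosen for illustration, not an official standard
        print("large gap: these decisions warrant closer human review")

A gap by itself does not prove discrimination, but measuring and reporting it is exactly the kind of transparency the paragraphs above call for.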

Christian Leadership to Change the World

In the context of AI, Christian leadership is not merely about influencing the direction of technology but transforming the world through a Christ-centered approach. This involves integrating faith and ethics into every aspect of AI development and implementation. Christian leaders are called to be agents of change, embodying the principles of love, justice, and humility in their work with AI.

Practical Engagement with AI

For Christian educators, students, and practitioners, engaging with AI involves both critical reflection and proactive action. This engagement can be fostered in several ways:

  • Curriculum Development : Christian liberal-arts colleges have a unique opportunity to integrate AI literacy into their curricula. Courses that explore the ethical, theological, and practical dimensions of AI can equip students to think critically and creatively about this technology. Interdisciplinary approaches that combine computer science, philosophy, ethics, and theology will prepare students to lead in a world increasingly shaped by AI. This leadership must be grounded in biblical principles, emphasizing wisdom, discernment, and a commitment to serving others.
  • Research and Innovation : Faculty and students can engage in research that addresses the ethical implications of AI and seeks to harness its potential for social good. Collaborative projects with industry and other academic institutions can drive innovations that reflect Christian values, such as projects aimed at improving healthcare accessibility or environmental sustainability. By focusing on research that promotes human dignity and the common good, Christian institutions can model Christ-like leadership in the tech industry.
  • Public Discourse and Advocacy : Christians in academia and beyond can contribute to the broader conversation about AI by writing, speaking, and advocating for ethical AI practices. By participating in public discourse, they can influence policies and practices that ensure AI development aligns with values of justice, equality, and respect for human dignity. Our engagement should be marked by the humility and integrity that Christ exemplified, seeking to be salt and light in the world (Matthew 5:13-14).

The rise of artificial intelligence is a defining feature of our time, presenting both incredible opportunities and significant challenges. For a Christian liberal-arts college, the task is to engage thoughtfully with AI, grounded in the belief that technology should serve humanity and reflect the glory of God. By integrating faith and learning, fostering interdisciplinary collaboration, and advocating for ethical practices, Christians can contribute to a future where AI is used to promote human flourishing and the common good.

As we navigate this technological frontier, let us do so with wisdom, compassion, and a commitment to justice, always mindful of our calling to love God and serve our neighbors. In doing so, we can ensure that the development and application of AI honor the Creator and uplift His creation. Through Christ-centered leadership, we can guide the use of AI in ways that reflect the love and justice of God, fulfilling our mandate to be stewards of His creation and ambassadors of His kingdom. By embracing the concept of “Christian Leadership to Change the World,” we commit ourselves to transformative action that aligns technological advancement with Christ’s teachings, ultimately changing the world for the better.
