Ethical Issues With AI

          Artificial Intelligence, or AI, is a type of technology that has been rapidly evolving in the last few years. AI is designed to mimic human intelligence in order to perform certain tasks or carry out certain processes. You probably encounter AI much more than you realize in your day-to-day life. From self-driving cars to chatbots like ChatGPT, AI is everywhere. And, because of its wide range of capabilities, AI is often at the center of controversy. Ongoing debates about AI usually focus on what roles we should and should not allow AI to step into, how safe and secure AI programs are, and what boundaries we should and should not place on AI. It can be easy to cross ethical lines when using AI, so it is important to have clear guidelines that outline how AI tools should be used.

Here are some of the main ethical concerns when it comes to the use of AI:

Deepfakes and Intellectual Property

          One of the most obvious ethical issues with AI is the creation and use of deepfakes. Deepfakes are videos in which a person has been digitally altered by AI to appear to be someone else. Deepfakes can be very convincing, sometimes even using additional AI tools to change voices, scenery, and more. With recent advancements in AI technology, deepfakes have become quite easy to generate. This technology can be used for fun, but it can also be used maliciously. Deepfakes can be created to intentionally spread misinformation by making it appear as if someone said or did something they never actually did. Deepfakes also have the potential to circumvent voice and facial recognition programs and override security measures. Additionally, impersonating someone comes with its own set of ethical issues: deepfakes could be used to sway public opinion on important issues in any direction, and using a person's likeness without their permission raises intellectual property concerns. Overall, it is easy to see why deepfakes cause problems.

Job Displacement

          One of the most widespread concerns with AI is its ability to replace human workers in certain jobs. Many people fear that automation and AI's creative abilities will replace certain roles, since AI is often cheaper and easier for employers to use than paying an employee and can produce convincing material. This could cause a major spike in unemployment and a loss of available jobs for human workers. Some are also concerned about AI displacing artists, musicians, and workers in many other creative fields in the future. This is one of the main reasons people are calling for clear limitations on AI. AI has already begun to take human jobs; how far will it go?

Privacy and Security

          As mentioned above, deepfakes can be a major concern when it comes to online privacy and security. But AI can also be a threat to privacy and security in many other ways. In order to train AI programs, developers must use massive amounts of data, and currently there is very little public information on how that data gets collected, used, processed, and stored. So, it is unknown who can access this data and how they can use it. Also, law enforcement agencies have begun to use AI to monitor and track the movements of certain individuals, a technology with great potential for misuse. As far as security goes, AI systems themselves have been the target of cyberattacks. Additionally, hackers and scammers can use AI technology to make their scams more convincing.

Transparency and Accountability

          It is very important that AI users understand how the AI technology they rely on works, and in some cases it can be difficult to understand why an AI tool came to a particular conclusion. The decision-making process built into AI technology should be disclosed in a way that is understandable, and often it is not. Also, since AI tools are trained on data from human sources, an AI can unintentionally learn a bias. If the data used to train the AI displayed prejudice toward a certain group, the AI tool will display that same prejudice in its work. Because of these issues, many people are calling for improved accountability from companies that develop and use AI programs.

Misinformation

          Along with deepfakes, AI can be used to spread misinformation in many other ways as well. AI can create fake content at very little cost, and the spread of this material can create social divides and sway opinions in ways that unfairly hurt individuals or organizations. Since AI can create very believable, convincing material, distinguishing fact from fiction online could soon become much harder.

Loss of Social Connection

          Some AI technologies can create very personalized experiences for each user. Especially with things like automated customer service and online chatbots, some are concerned that AI will cause a drop in social connection between real people. AI can also be used to personalize things like search engine results and targeted ads; if your entire online experience is catered to your preferences, you will have less exposure to other viewpoints and may in turn become less sensitive or empathetic toward other people.

Personal Accountability

          Many people are concerned about the potential for AI, particularly programs like ChatGPT, to be used to cheat or deceive. Some people have begun to use ChatGPT to write material for them, create reports, collect information, and more. The most prominent area this issue affects so far is education: will students use ChatGPT to write essays for them and then turn them in as their own assignments? And what if an employee uses AI to prepare a report, press release, or white paper that includes confidential or private information, or simply doesn't accurately represent the viewpoint or intended messaging of the company? This is why clear guidelines on how and when to use AI tools are important. In any given scenario, each individual must understand what they are and aren't allowed to use tools like ChatGPT for, and people must be transparent about how they are using them.