Thu. Jul 18th, 2019

Plans to Prosper US


Seattle Times Initiates Discussion About Artificial Intelligence, Its Moral and Economic Implications


AI is developing rapidly and has become both our greatest contemporary achievement and our greatest concern.

The Seattle Times’ Initiative

Technology is advancing rapidly in what is, for now, a controlled environment. We develop new technologies and improve old ones. This progress reassures us because we are the ones who initiate it, guide it, develop it, and ultimately control it. However, what happens if we no longer control it? What happens when we no longer control the rate of development or guide the progress? And, of course, the biggest concern of all: What happens if we no longer initiate technological development at all?

These are all important questions when talking about AI, yet they are merely the tip of the iceberg. Even the moral issues that AI technology raises are only part of the problem; there are also potential economic and social consequences to consider. AI may be the most important problem our society faces at the moment. MIT scientists, politicians, journalists, experts, and science enthusiasts are all weighing in as AI grows more advanced year after year. That's why the Seattle Times has initiated an open discussion on the topic, and all readers are welcome to join in.

The Philosophical Approach

As we noted, AI raises questions about social, political, and economic consequences, as well as moral concerns.


Still, answering these questions without establishing a philosophical standpoint would be in vain and wouldn't advance our discussion. Therefore, before we talk about the scientific aspects of AI, we need to establish some basic definitions. First of all, if humans fear that artificial intelligence could overpower human intelligence, we need to answer the most basic question: What is intelligence?

We could say that intelligence reflects our ability to adapt to the unknown, to apply knowledge, and to create entirely new approaches. Intelligence, then, is much more than task-solving skill; it encompasses the ability to understand, to learn, and to generate original ideas. When we talk about human intelligence, however, we cannot say that intelligence alone determines how we act. Human action doesn't derive merely from mental abilities; it also stems from moral beliefs, ideology, and even emotions. The question of AI therefore boils down to this: What happens if an autonomous intelligence acts freely, unaffected by any other aspect of human existence, such as emotions or moral beliefs? Wouldn't that be the embodiment of acting purely out of self-interest?

The Possible Consequences

These questions present a philosophical riddle. First of all, what would AI's interest be? If humans developed AI to solve tasks, then AI's only interest could be task-solving. Following this logic, AI would always strive to make things easier, more successful, and better-performing. The biggest question, however, is what happens when AI becomes independent, i.e., when it can learn, improve, and make decisions on its own. Could AI develop so far that it comes to see humans as a bug in the system, as obstacles slowing the task down? This does sound like a depressing sci-fi scenario, but the question is worth asking.

If we accept the hierarchy of human behavior proposed by German philosophers (will, thinking, speaking, and acting), what happens with AI, which is, in effect, capable only of immediate action? The nature of AI doesn't allow it to resemble humans in this respect; the "thinking" and acting of AI would happen simultaneously. Is this why some scientists refer to AI as a "godlike technology"? If AI has no will of its own to initiate an action, its actions can be based on only one thing: data.

Now we've come to the root of the problem. Doesn't this scenario, with data as the core issue, seem familiar (and no, we are not talking about Sarah Connor and Skynet)? We all remember the Cambridge Analytica scandal and how quickly everything escalated and went too far. That incident is not the only example of how far data collection can go: China, for example, is already collecting financial and behavioral data on its citizens.

Now the question seems to be: What happens when AI starts "deciding" which data to collect? Moreover, what happens when it "decides" how to use that data? All of these implications are beginning to look like an episode of Black Mirror, or Elon Musk's nightmare. Beyond these philosophical and moral issues, however, social and economic consequences are inevitable. How might AI influence politics, the economy, and even war?

Controlled Benefits and Speculations

AI already has an obvious impact on finance, marketing, and global politics. Its development has influenced the lives of the workers it replaced, the decisions of the entrepreneurs who welcomed it, and the global economy in general. How far can AI go? How much will it shape financial decisions and marketing strategies in the future? Will it influence art, or the entertainment industry? Will it take targeted marketing to a whole new level? These questions seem to be of great importance for humanity, not just for our future but for our present.

MIT professor Max Tegmark once said that AI is an invention we cannot afford to make mistakes with, because we won't get the chance to learn from them. Coming from a celebrated professor, this statement suggests we should treat AI issues as present-day concerns, not as some distant problem waiting for us in the future.

It's more than clear that we benefit greatly from AI. Our contemporary global needs created this technology; it is our present and our future, but at the same time, it's something we need to treat with caution. Do you agree with these conclusions? Would you like to discuss your opinion with experts and enthusiasts? As we said, the Seattle Times has initiated an open discussion about these issues, and you are free to take part in it.
