AI Isn’t Coming: It’s Already Here, And It Will Just Keep Evolving

The following is adapted from Surfing Rogue Waves.


People seem to either love or hate talking about artificial intelligence. Yet its presence is not up for debate. Non-biological forms of intelligence are all around us in our everyday lives, whether or not we choose to accept it.

The US military has already implanted chips in human brains to advance our understanding of how to treat individuals suffering from combat trauma and post-traumatic stress disorder. In other studies, doctors and researchers have treated severe depression by implanting electrodes directly into patients’ brains.

Today, using just their minds, paralyzed individuals can move bionic limbs and operate computers. There are wireless remote-control technologies that allow people to control connected items in their homes through an electric, “mind-reading,” helmet-like device. The list of ways AI impacts us goes on and on.

All of this progress assumes the human biological brain will remain the central, controlling element of intelligence. Is that a safe assumption to make? What ethical questions should we be asking? Those questions are worth thinking about, right now, by every single person on this planet, because AI isn’t coming. It’s here, and it affects all of us.

Lack of AI Transparency 

In 2018, Meredith Whittaker, a research scientist at New York University and co-founder of the AI Now Institute, highlighted deep concerns about predictive-analytics systems: how they work, how transparent they are, whether they can be examined or inspected, and who is held accountable for them.

The day is rapidly approaching (if it’s not already here) when not only will the average person be unable to understand how AI systems make decisions; even the technical experts who created the technology may lack the visibility to explain how a system reached a given decision. The reality is that many machine-learning systems operate as black boxes.

This lack of visibility into how a decision is made creates a major problem around bias in the decision system. Not only do we lose the ability to remove bias from the system; we may not even be aware that bias is occurring at all. And yes, machines can be biased. In fact, most technology, AI, programs, and machines inherit bias from the data and assumptions they are built on.
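To make that concrete, here is a minimal sketch (not from the book; the groups, numbers, and “approval” framing are all hypothetical) of how a model that simply learns from historical decisions reproduces whatever skew those decisions contain:

```python
# Toy illustration: a model "trained" on skewed historical decisions
# mirrors that skew in its predictions. All data here is hypothetical.
from collections import Counter

# Hypothetical history: group A was approved far more often than group B,
# for reasons that have nothing to do with qualification.
history = (
    [("A", "approve")] * 80 + [("A", "reject")] * 20
    + [("B", "approve")] * 30 + [("B", "reject")] * 70
)

# "Training": estimate each group's approval rate from the data alone.
rates = {}
for group in ("A", "B"):
    outcomes = Counter(outcome for g, outcome in history if g == group)
    rates[group] = outcomes["approve"] / sum(outcomes.values())

# "Prediction": the model simply reflects the historical rates back,
# so the bias in the data becomes the bias of the system.
for group, rate in rates.items():
    print(f"Group {group}: predicted approval probability = {rate:.0%}")
# Prints: Group A: 80%, Group B: 30%
```

Nothing in this sketch “decides” to be unfair; the skew in the training data simply becomes the skew in the output. A black-box system does the same thing at scale, where no one can see it happening.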

Only as Good as the Data 

Transparency is not the only thing to consider when weighing AI’s impact on our world. Because of the shortcomings in how we humans make decisions, AI systems are popular precisely because they outperform the best of us at predictive decision-making. Our reliance on, and comfort with, these automated decisions continues to grow with little awareness of the consequences.

In low-stakes situations, a predictive decision carries relatively little risk. Increasingly, however, these predictions are being used in applications with far greater risks and consequences, such as medicine, where AI systems help identify cancers and guide patient treatment.

Where do we draw the line? Should we allow AI to make life-or-death decisions? Should they make decisions in our legal systems about who is granted bail and who serves more time? Should these AI systems predict our policing or military actions? Should they decide which target is a threat and how that threat should be suppressed or terminated? Is it too early to call AI “they”?

For many, the answer would be no. But what if the AI systems are not perfect, yet significantly better and more accurate than any human? What then? Systems capable of predictive data analysis are often both more accurate and significantly cheaper. As they continue to evolve, they will become cheaper still and will most likely keep growing in popularity. But they have bias, and they are only as good as the data they use.

How Much Bias Do We Allow?

Even if we’re aware of an AI’s bias, we still have to take a good hard look at how much bias we allow. An AI system designed to be highly sensitive to bias would also have to be much more myopic in its abilities, struggling with large amounts of complexity; conversely, a less bias-sensitive algorithm would welcome complexity but be much more prone to bias. Who decides how much of which trade-off is OK?

These problems, and many other questions, should be addressed before a system is created, but that is rarely the case. It’s hard to settle the ethics when we aren’t even sure what the system can do.

This is not a futuristic concept or problem; many of these technological biases have already crept into our world, organizations, and governments. For example, Amazon’s automated recruitment system, used for vetting potential employees, discriminated against women, something Amazon had struggled with long before AI showed up. Amazon discontinued the system in 2017 once the bias was identified.

Human-Robot Interaction

There’s another angle to consider: Human-Robot Interaction (HRI). HRI is exactly what it sounds like: humans and robots interacting. We see it more and more in both our personal and professional lives. In a world in which humans and machines are stronger together than on their own, this tidal wave of a partnership shows no sign of slowing down anytime soon.

How do robots care for humans beyond what we see in service or medical settings? 

Once again, we’re facing a conversation that makes many people uncomfortable, and discomfort and tension put us right in the middle of a barrel, meaning it must be addressed. The topic also overlaps with questions of deception.

Do we dehumanize the caring aspect of “taking care” of someone? Or do we humanize the AI experience for the sick, the elderly, and those in need of care? What if humanizing this experience makes patients happier and increases their longevity? If so, and we choose to humanize care, we are being deceitful, since AI systems cannot actually “care” about anything.

Where do we draw the line between the kinds of AI deception that are acceptable and those that are not? With a large, well-funded baby boomer generation now needing care, AI systems are filling the care gaps in healthcare. When is it OK to lie a little to people about the treatment they receive? Who gets to decide what is being done for the patient’s “own good”?

How Does Technology Impact Society?

Everything we discuss in AI comes down to the same handful of issues: the transparency of AI systems, the data they rely on, the security of those systems, and so on. Many existing governing bodies have already started to build out standards around specific applications of autonomous systems.

Autonomous air, water, and ground systems are already a megatrend from which we can dream up many world-changing applications, as well as potential nightmares, like autonomous weapon systems. Suddenly, it’s not just a matter of the ethics of the technology, but of how the technology impacts the society it exists within.

We need to consider these questions, and we need to do it now. AI is here, it’s evolving, and it’s not going away. If we are to have any hope of directing the ethics of AI, we must start the conversation now.

For more advice on how AI is shaping our world, you can find Surfing Rogue Waves on Amazon.

Eric Pilon-Bignell is a pragmatic futurist focused on addressing disruption by increasing the creative capacity of individuals, teams, and organizations to ignite change, drive innovation, and foster continuous growth. Eric has an undergraduate degree in engineering, an MBA in Information Systems, and a Ph.D. in Global Leadership. His doctoral work primarily explored the complexity sciences, centering on executive cognition and executives’ use of intuitive improvisation, decision-making, artificial intelligence, and data-based decision models. When he is not working with clients, researching, or writing, he can be found in the mountains or on the water. He founded PROJECT7 to raise awareness and money for research on brain-related illnesses. Eric currently lives and works with his wife in Chicago, Illinois. To connect or learn more about this book, Eric, or PROJECT7, please visit www.ericpb.me.