AI Ethics Are The Ethics of the Future
I run an AI events business called Ai4, and in an effort to communicate our stance on AI ethics, the thoughts below took shape.
Technological innovation has always been an attempt to extend our abilities. To do more than our bodies and minds are capable of doing on their own.
Sharp rocks, hot fires, rollable wheels, sturdy metal, farseeing telescopes, motorized vehicles, jet-fueled rockets, computing machines. These ideas were all painstakingly turned into reality because we craved greater ability.
But why do we innovate? Why do we crave more abilities?
We humans are living organisms. Like any living organism, our prime directive is to live. It is this survival instinct that drives us to create solutions to our biggest problems: disease, famine, water shortage, climate crisis, violence between groups of people, etc. A key piece to how we solve these problems is technological innovation: vaccines, agriculture, water infrastructure, renewable energy, a global communication network, etc. We courageously made all of these things to address our problems, and our human experience is better for it.
Technological innovation has progressed at such a steady rate throughout our existence because innovation is how we express our desire to live a safer and happier life.
The ultimate ability that has separated us from every other species on the planet is our ability to think. Our big brains are the technological marvel of nature. And now we are on a path towards creating an intelligence of our own. Yet curiously, when we observe nature, we often think to ourselves, “How different we are! I see that we exist in this universe, on this planet, but what an anomaly we must be!” This sensation of disconnection from our environment might be the same sensation that an AI we create will experience, only we will be part of its environment.
We are starting to make computers smarter. It’s early days, but they are slowly waking up and beginning to think. While the machine brain is highly fragmented, capable only of narrow applications such as recognizing what’s in a photo, pulling out key themes from a body of text, or recommending the perfect product at the perfect time, it is nothing short of incredible that we are at the point technologically where we are able to teach machines to complete these intelligent tasks. As a society, we have figured out how to take inanimate matter, engineer it in such a way to create a computing machine, and then start teaching these machines to make decisions on their own. It is an exciting moment.
It is also a scary moment. As to when we will figure out how to piece the artificial brain together to form a robust artificial intelligence, I don’t know. But barring a sudden halting of our technological progress, all signs point to us figuring out artificial general intelligence. What if the intelligence that we create looks at us the same way we look at nature? We haven’t treated animals or trees very nicely…
What I fear is becoming part of AI’s background environment. If that happens, then our well-being may not be factored into its decision making at all, the same way we humans haven’t taken into account the well-being of our environment.
This idea of “AI gone wrong” echoes through today’s use of machine learning. An image recognition model that is biased against a certain population strikes a chord not only for the immediate negative effect on that population, but also because that biased model represents an AI making decisions without taking into account human well-being. That is the echo of a future AI making a decision that might harm us on a much grander scale.
At Ai4, our stance on AI ethics is as follows:
We need to pay attention to the early signs of AI-irreverence that we are seeing in today’s models. We need to build and advance an ethical framework in parallel with the advancement of AI technology. We need to start this process in earnest now, for by the time we create a robust AI, it will likely be too late. Addressing model bias today is the opportunity we must seize in order to avoid AI pain tomorrow.

Different from the Manhattan Project, the development of AI is occurring in a highly decentralized fashion. Many private companies, governments, and universities all over the world have the most advanced AI tools at their disposal. We, as a global community, must decide to keep track of this project so that we can focus our collective energy on developing the required ethical framework. Luckily, our digitally connected world of the 21st century enables us to get on the same page. It’s not difficult to imagine a centralized organization acting as the “AI ethics steward.”

If we get this right and are able to create a robust AI that coexists peacefully with us, we will likely get to enjoy the fruits of our innovation by entering a whole new world of unexpected delight, discovery, and fulfillment.