Gray Cox, a professor at College of the Atlantic, delivered a presentation titled “From Smarter Planet to Wiser Earth,” exploring both the potential and the risks of applying artificial intelligence (AI) to our society.
The virtual meeting, held March 23 and attended by colleagues and students, began with a word from Doug Allen, coordinator of the Socialist and Marxist Studies Series. The series has offered about 700 programs, typically eight per semester and usually in person.
The interdisciplinary minor is co-sponsored by the Maine Peace Action Committee as well as the Division of Student Affairs and is supported by the Department of Philosophy and the College of Liberal Arts and Sciences. Alejandro Watt Arroyave gave a land acknowledgment statement before the discussion began.
Cox’s presentation was based on his new book, “Smarter Planet or Wiser Earth,” which deals with existential threats to the functioning of society. He went on to explain that the systems of our planet are becoming instrumented, interconnected and intelligent, and that all of these developments are, in his view, innately inevitable.
Cox referenced Max Tegmark, who claims that Life 3.0 can redesign its own software and hardware, and Ray Kurzweil, who says that the singularity is near.
“Singularity, the point in which there might be an artificial superintelligence that starts to surpass us in ways we cannot understand. Shouldn’t we hold off on artificial intelligence until we figure out real intelligence?” Cox said.
Cox defined intelligence as the ability to achieve, sustain or enhance one or more values in various contexts over time. Guided by those values, an intelligent system can reshape itself and adapt to new versions of the world.
“Intelligence can take many forms: emotional intelligence, social intelligence, physical intelligence, mathematical intelligence, artistic. In this sense, organisms and biological communities may exhibit intelligence, and so may machines and other systems,” Cox said.
Wisdom, however, can be considered a systematic form of intelligence. AI, created by “artifice,” is guided by explicit intentions and is typically silicon-based.
Cox described two potential AI projects that pose vast complications for society. The first is the pursuit of an ever-smarter planet through capitalist rationalization. The problem is that narrow AI systems that manage for only one or a few values tend to ignore community identity and other imperative considerations.
The second is the creation of ever-better solutions through artificial superintelligence. One issue is whether the program would be ethical toward us or indifferent. Would it be friendly to good people by some general definition, and not simply by the metric of its developers? Furthermore, if it is truly intelligent and in pursuit of the good, what will it think of our treatment of animals and of the planet itself? Is our mutually assured destruction worthy of its friendship?
There are two distinct visions for an intelligent world. The first is monological inference: it starts from one point of view and draws conclusions, a process carried out by a single individual or a machine. For example, if all A is B, and C is A, then C is B.
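The syllogism above can be sketched as a mechanical rule application, which is what makes monological inference so well suited to machines. The following is a minimal illustration, with a fact base invented for this sketch rather than drawn from Cox’s talk:

```python
# A minimal sketch of monological inference: a single reasoner
# mechanically chains "all X are Y" rules from one starting fact.
# The rules and facts below are illustrative only.

rules = {"A": "B"}   # all A are B
facts = {"C": "A"}   # C is an A

def infer(subject, rules, facts):
    """Follow category links from a subject until none remain."""
    derived = []
    category = facts.get(subject)
    while category in rules:
        category = rules[category]
        derived.append(f"{subject} is {category}")
    return derived

print(infer("C", rules, facts))  # ['C is B']
```

The point of the sketch is that no second perspective ever enters: the conclusion follows from one viewpoint’s premises alone, which is exactly the contrast Cox draws with dialogical reasoning.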
Cox also described a variant of the trolley problem, taught in a college course by a colleague. It asks: if you are a surgeon with four patients in need of lifesaving transplants, and a healthy patient is asleep in the waiting room, should that healthy patient’s organs be sacrificed for the many?
A student responded that the best option would be to take organs from one of the people already dying. Though that answer may miss the point of the thought experiment by refusing both of the offered choices, it establishes a third option, one that illustrates how AI can actively contribute to assessment and problem-solving.
The second vision for an intelligent world is dialogical reasoning, structured by strategies that guide a process of reasoning. It draws on the principles of nonviolent Satyagraha practice, and it proceeds as follows: encountering differences with someone, pursuing strategies of negotiation and, finally, reaching a genuine, voluntary agreement.
The smarter-planet perspective rests on systems such as Newtonian physics and rocket science, which use monological reasoning but can also lead to ecological collapse. A wiser Earth, however, relies on Gandhian values and a consensus approach to conflict transformation.
Cox developed an online game for children that poses an ethical dilemma similar to the trolley problem. In it, a bear will eat either a prince or a paraplegic basketball player, and only one can be saved.
Unlike the classic dilemma, the game is not limited to those two choices and allows children to explore other options. Expanding the range of options, and reprogramming the AI to offer them, invites further consideration of the principles behind each decision. Even simple computer systems such as this one can incorporate dialogical reasoning.
“In a very simple block-coding kind of program, kids are invited to explain ways in which they may be uncomfortable with their choice or want some different options, other interests they want to take into account. In other words, to start applying ‘Getting to Yes’ principles to the decision,” Cox said.
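The kind of program Cox describes might be sketched as follows. This is a hypothetical illustration of the dialogical pattern, not Cox’s actual block-coding game: instead of forcing a binary pick, the program lets a player register a new option, which is added to the menu for reconsideration.

```python
# Hypothetical sketch of a dialogical choice program: a player may
# either pick an existing option or propose a new one ("other: ..."),
# expanding the menu before deciding. Not Cox's actual code.

def run_dialogue(choices, responses):
    """Walk through scripted responses; 'other: X' adds option X."""
    options = list(choices)
    for answer in responses:
        if answer.startswith("other: "):
            options.append(answer[len("other: "):])  # expand the menu
        elif answer in options:
            return answer, options                   # final decision
    return None, options

choice, options = run_dialogue(
    ["save the prince", "save the basketball player"],
    ["other: distract the bear", "distract the bear"],
)
print(choice)   # distract the bear
print(options)  # all three options, including the new one
```

The design choice mirrors the article’s point: the program’s job is not to compute the one right answer but to keep the space of options open while the participants negotiate toward a genuine, voluntary agreement.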
To use the range of AI functions effectively, Cox argued, it is necessary to frame the relevant relationships and commitments as Human/Nature/AI communities within human ecological systems, and to keep them as open as possible to discerning and engaging with emergent realities.