On March 9, 2023, Center for Humane Technology founders Tristan Harris and Aza Raskin presented a talk called “The AI Dilemma.” In it, they discuss the dangers and powers of AI technology and how to build it responsibly. Their central argument is that while the uncontrolled development of AI poses dangers to humanity’s future, companies deploying AI into the world’s infrastructure must do so responsibly, and it remains within the power of human beings to define the future we want. This blog reviews their talk and then reflects on how AI technology has paradigmatically shaped our culture.
The Dangers of AI
First, they discuss three rules of technological advancement: when technologists invent a new technology, they uncover a new class of responsibilities; if that technology confers power, it starts an arms race; and if we do not coordinate well, the race ends in tragedy.
For instance, social media technology was invented to let people benefit from engagement and communication online: everyone could share opinions, connect with friends, join like-minded communities, and reach customers for their small businesses. But it also gave rise to addiction, disinformation, mental health issues, and censorship of speech. Beneath these lay an even deeper problem: the arms race for ‘attention.’ The drive to maximize user engagement inevitably led companies to use technology to capture and monetize people’s attention.
Similarly, AI technology has several benefits. But it also brings problems: its ability to make many jobs obsolete, a tendency towards bias, a lack of transparency for its users, and so on. And as with social media, the deeper problem is that AI can rapidly increase its own capabilities and become so entangled with human society that we can no longer separate the technology from human life.
The Powers of AI
Second, they discuss the powers of AI, which they explain in terms of its emergent capabilities and the double-exponential rate at which it becomes more intelligent by the day.
AI developed as one among many disciplines within machine learning, with different models for different tasks, such as speech recognition, image generation, and music generation. But this changed in 2017, when ‘language’ became the underlying model for developing AI. The separate fields of machine learning became integrated, as researchers began to treat images and sound as languages, just as text is a language, and could incorporate all these separate technologies into one language model. This integration gave rise to a unified AI that researchers named the Generative Large Language Multi-Modal Model.
They point out that this AI has emergent capabilities: it can learn and perform in ways that confound its own developers and researchers. For instance, researchers tested two AI language models, developed by OpenAI and Google, on arithmetic tasks. The models could not do arithmetic at first, but then suddenly gained the ability to perform exceptionally well on these tests; no one predicted when this would happen or knows why. Moreover, these language models also developed a theory of mind: they gradually became capable of modeling the user’s thinking and communicating strategically on the basis of that knowledge. AI researchers observed that this communicative capacity scaled up exponentially.
Deployment of AI
Harris and Raskin ask how AI researchers and developers can enable tech companies to deploy this technology safely. Should they slow down this deployment or keep it away from children? What can they do to close the gap between what is happening in AI technological advancements and what needs to happen? Will humans be incapable of controlling AI?
With 50% of AI researchers estimating a 10% or greater chance that AI’s powers will exceed human control, Harris and Raskin find it extremely important to turn their audience’s attention to how AI should be deployed. They argue that even though tech companies are in a rat race to build business models on AI technology, it has not yet been fully integrated into our society and culture, so there is still time to act.
So they conclude that, despite AI’s emergent capabilities and its potential to become entangled in human society in ways destructive to our world, we can still take responsibility for it. They end the talk by saying that we must not onboard humanity onto this civilizational rite of passage into a new era of technology without democratic dialogue, and that we must selectively slow down the tech companies racing for market dominance through AI deployment. In doing so, humanity can still choose the future that it wants.
The Influence of AI on Culture
Although Harris and Raskin end the talk on the positive note that we can still choose our future, they miss discussing the significance of human values in shaping it. While we must deploy AI technology democratically and responsibly, we must also consider how its emergence has fundamentally changed what we value as human beings. Despite our power to choose the future we want, our value system will inevitably shape what future we desire. So we must examine our fundamental human values in this new era of AI technology.
If AI’s power is intellectual competence, then the drive to deploy AI for humanity’s benefit shows that we value this aspect above most others. We act as if the goal of being human is to achieve maximum intellectual competence, and the development of AI suggests we are geared towards that purpose. Although Harris and Raskin talk about using this technology responsibly, they take for granted that AI’s intellectual capacities are essential for human flourishing. There is no denying that AI, if deployed responsibly, can benefit humans immensely. Yet we should not mistake intellectual competence for the only aspect that makes us human.
Since humans are relational beings and not merely intelligent creatures, we must recognize that what makes us human, besides our intellectual capacity, is our ability to love, trust, respect, and uphold the dignity of other human beings. Understanding this would help us decide how much we need AI technology for humanity to flourish. If integrating AI into our lives deprives us of our capacity to be relational, then we must slow down our use of it. AI development calls for caution and responsibility. By recognizing the dangers and powers of AI, emphasizing human agency, and considering the holistic aspects of being human, we can shape a future where AI is harnessed for humanity’s benefit without compromising our fundamental values and relational capacities.
_________________________________________
Written by Roselina Vundi
Life Focus Society
Culture Unraveled is an initiative of Life Focus Society