Closing the AI Skills Gap Requires a Whole-of-Society Approach

Aug. 26, 2021

By Lynne Guey, MPA '22, for Annotations Blog

We often hear that America is facing a skills crisis. National security strategies declare the urgent need to invest in Science, Technology, Engineering, and Math (STEM) skills—and with good reason. Reports underscore a sizable skills gap in high-growth technology fields like Artificial Intelligence (AI), quantum computing, and semiconductor manufacturing, which make up an increasing share of national economies. 

[Image: Automated work. Visions of the future often conjure images of a world where humans are controlled by intelligent machines. Source: Giphy]

Yet while the imperative to double down on STEM education is justified, the United States’ technical skill deficits should not be its only concern. In fact, our biggest societal challenge at the moment lies not in a lack of technological progress, but in an inability to keep up with the collective demands that integrating these new technologies requires.

AI changes the fabric of societies, jeopardizing fundamental moral, social, and emotional values that have traditionally held nations together in times of crisis.

Reframing the Problem: The Real Skills Gap 

The question of how best to harness the United States’ talent pipeline in the coming AI-enabled era is an important, albeit complicated, one. The current dominant narrative centers on a “race for talent” between the two preeminent AI superpowers, China and the United States. It is tempting to descend into a single-minded obsession over which country is graduating more STEM PhDs. But whether China or the United States prevails is beside the point, as this framing fails to recognize the larger existential threat: AI changes the fabric of societies, jeopardizing fundamental moral, social, and emotional values that have traditionally held nations together in times of crisis.

Take the threat of health misinformation, for example. We’ve seen how false information about COVID-19 has spread at an unprecedented speed and scale, often amplified by the AI that powers our search algorithms. Social values, such as respectful communication or a nuanced understanding of different perspectives, have failed to temper the cascading effect of fragmented information ecosystems. The result, as described by The Consilience Project, is “disorienting cognitive dissonance, emotional volatility, and tendencies towards extremism, moral righteousness, and ultimately physical violence.” The Consilience Project further elaborates that when society’s members are no longer able to make sense of the world they inhabit, they must prioritize the rebuilding of cultural capital, upgrading of institutions, and creation of novel forums that enable public sense-making. 

Strengthening basic forms of social capacity becomes especially important as machines dictate more of the decisions that govern our daily lives, from public benefits eligibility to the headlines we read. Our best, perhaps only, defense is to learn how to collectively question these automated outcomes and processes, determine who is liable when algorithmic mistakes happen, and, importantly, maintain a grasp of our basic humanity through it all. If any semblance of an open and democratic society is to remain, AI systems must, as Julia Powles and Helen Nissenbaum describe, “be capable of contest, account, and redress to citizens and representatives of the public interest.”

But it will take more than socially conscious engineers to usher in an AI-enabled future that is accountable to basic human values. We should, thus, explore AI’s full scope, not just from standard technocratic or ethical perspectives that seek to apply the technology for national security or commercial gains, but also through the lens of ecological and humanistic disciplines that lend experience beyond what is necessary for entrance into the labor market.

The added value of humans will almost always involve creativity, conscientiousness, resilience, and motivation—character traits that cannot be replicated by an algorithm.

Call to Action: A New Learning Ethos

As policymakers and educators, we can reimagine skills curricula and workforce development strategies to cultivate the capabilities all of society should possess as we race towards a future where humans can no longer interpret the decisions made by our devices. Our current education system dates back to an Industrial Revolution model that emphasizes memorization, standardization, and task-oriented precision—skills that AI can easily replicate. It’s why fields like high-frequency trading are largely being turned over to computer systems that outcompete humans in speed and precision.

While we don’t know what the future will bring in terms of technology, work, environmental disasters, or new social movements, we do know that the added value of humans will almost always involve creativity, conscientiousness, resilience, and motivation—character traits that cannot be replicated by an algorithm. AI will never be able to orient itself in a human body, collaborate across differences, or provide judgment that is rooted in social mores. One area worth exploring is how to incorporate more socio-emotional, non-cognitive skills across disciplines, perhaps through more physical education, engagement with the arts, mindfulness exercises, and collaborative civic engagement projects. Over and above technical skills, these shared human experiences build moral, social, and emotional capital to draw upon when humans are called to exercise judgment over the limitations of machine prediction. 

Conclusion

We’ve reached the point where technological innovation has far outpaced the rate of progress in our social systems. Channeling skills development towards high-growth industries solely for the purposes of job preparation — without taking into account the long-term implications of what we’re preparing for, or how our institutions will adapt to the change — risks alienating the core skills and values necessary to maintaining healthy, open societies. As Henry Kissinger describes it, “The Enlightenment started with philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy.” 

Absent a guiding philosophy, we may find that in pushing the limits of our scientific discoveries, we too will be eaten by software of our own creation.