Generative AI’s rapid evolution has generated fear and anxiety among some observers and enthusiasm among others. The enthusiasm is for the many ways in which AI will improve the lives of people and other life on Earth. The fear stems from the awesome power and influence this technology may have over our very existence as it addresses conditions such as hunger, poverty and climate change, even as it enhances efficiency and effectiveness across the full spectrum of industry, business and commerce.
As reported in Venture Beat, one leader in the field has predicted that AI will be ready to replace an entire country of Ph.D.s as early as next year: “AI will match the collective intelligence of ‘a country of geniuses’ within two years, Anthropic CEO Dario Amodei has warned in a sharp critique of this week’s AI Action Summit in Paris. His timeline—targeting 2026 or 2027—marks one of the most specific predictions yet from a major AI leader about the technology’s advancement toward superintelligence.” By comparison, human work and ingenuity may seem awkward, ineffective and counterproductive. Some have wondered if AI would judge humans to be counterproductive to the future of the planet.
History makes it clear that every productive technology has advanced; even weapons of war have progressed. Humans have not turned away from technologies they find useful: for better or worse, when a technology proves its utility, people develop and support it. We are seeing that on a grand scale with AI. Regarding the newly released version of Gemini Pro, Wharton professor and AI expert Ethan Mollick reports on LinkedIn, “Google Gemini 2.5 is the first public AI model to definitively beat the performance of human PhDs with access to Google on hard multiple-choice problems inside their field of expertise (around 81%). All AI tests are flawed, but GPQA Diamond has been a pretty good one & in this case was conducted independently.”
Another recent example of this progress is Lindy AI’s development of Agent Swarms: an AI agent that can clone itself up to a thousand times, creating a swarm of communicating, cooperating agents that work together to accomplish massive tasks. The company calls it integration supremacy. AI’s progression continues unabated at breakneck speed: each model is smarter, faster and more dependable than the last, keeping AI on an exponential track of improvement.
As Kat Davis writes in Sidecar, “The exponential growth of AI is a phenomenon to be both observed and engaged with. By proactively exploring and adopting AI technologies, we can ensure that our organizations not only stay ahead in a rapidly evolving landscape but also harness the full potential of AI to redefine the boundaries of what’s possible. Let’s embrace this transformative journey, with AI as our compass, guiding us towards a future brimming with opportunity and innovation.”
Two major studies were released on April 2 and April 3 this year. The first, from Elon University, “Imagining the Digital Future,” focuses on the impact of technology on humans in the coming decade: “More than 300 experts responded to questions about the impact of change they expect on 12 essential human traits and capabilities by 2035.” They predicted that change brought about by the adoption of AI may be mostly negative in the following nine areas of human life:
- Social and emotional intelligence
- Capacity and willingness to think deeply about complex concepts
- Trust in widely shared norms and values
- Confidence in their native abilities
- Empathy and application of moral judgment
- Mental well-being
- Sense of agency
- Sense of identity and purpose
- Metacognition
Yet there were areas of optimism, particularly regarding:
- Curiosity and capacity to learn
- Decision-making and problem-solving
- Innovative thinking and creativity
My own contributions to the report tended to be more positive than many of the other contributing experts’, perhaps because of my focus on teaching and learning. The authors highlighted my comment: “Affording humans a universe-wide perspective on nearly everything: This will be a dawn of a new Enlightenment that expands our perspectives beyond the individual and the species to a worldwide and perhaps universe-wide perspective.”
The second report, released the following day, was from Pew Research Center, “How the U.S. Public and AI Experts View Artificial Intelligence.”
“The report shows the views of two key groups: the American public and experts in the field of AI. These surveys reveal both deep divides and common ground on AI. AI experts are far more positive than the public about AI’s potential, including on jobs. Yet both groups want more personal control of AI and worry about lax government oversight. Still, opinions among experts vary, with men more optimistic about AI than women … For example, the AI experts we surveyed are far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years (56% vs. 17%). And while 47% of experts surveyed say they are more excited than concerned about the increased use of AI in daily life, that share drops to 11% among the public.”
A core issue of concern among the public is what will happen to jobs and the income families depend on. Entrepreneur Julia McCoy walks through a timeline from 2025 to 2035 as humanity adapts to far fewer jobs, with AI taking over most positions at superior efficiency and effectiveness and far lower cost. In the podcast “Life After AI Takes Our Jobs: The Rocky Road to 2035 (Complete Timeline),” McCoy describes the move toward a modified universal basic income system in which workers are released from their jobs part-time at first, then possibly full-time, while continuing to receive an income.
The Stanford Basic Income Lab is studying the possibilities and implications of UBI:
“Universal basic income takes on distinct forms in different historical and geographic contexts. It varies based on the funding proposal, the level of payment, the frequency of payment and the particular policies proposed around it. Each of these parameters are fundamental, even if a range of versions still technically count as UBI (a universal, unconditional, individual, regular and cash payment).”
The disruption from AI is not small; we will soon see it rewrite the very concepts of employment and salary. Different societies may respond in different ways, and there will be periods of anxiety and concern as jobs are replaced by AI.
We must begin to prepare our students for these possibilities. How will they and their families respond to job loss, career loss and a future that is unlikely to ever have a need for the kind of position for which they were prepared in college? This is unsettling and dramatic. But the rapid evolution of AI will inevitably bring about these challenges. It is incumbent on us in higher education to prepare our students for the near future as the yet-unclear, more distant future unfolds.