(Bloomberg Businessweek) -- Many people in the tech industry say we’re likely to reach artificial general intelligence—AI that can outperform humans at most tasks—within four years. If they’re right, we’ve just elected our first AGI-era president: Donald Trump.
So far, Trump’s thoughts about navigating this historic shift have been a confused mishmash. In June he sat across from YouTube star Logan Paul for a podcast interview and referred to superintelligence as “super-duper AI.” He expressed some fear about deepfakes, calling them “scary,” “alarming,” “disconcerting.” But he was also delighted by large language models, which impressed him by creating an AI-generated script for a speech. “Unbelievable” and “so fast,” he gushed. “It comes out with the most beautiful writing.” Trump, who’s never been particularly committed to delivering speeches as written, even joked about AI getting good enough that he could fire his writer.
Within Silicon Valley, the competing camps on AI are already well formed, as is the language to describe them. On one side is the “accelerationist” or “e/acc” movement, which opposes regulation. On the other are advocates of more safety-focused AI development, including researchers in the “AI alignment” field, which works on building AI that adheres to human values. The e/accs tend to call anyone they disagree with “decels.” And people who worry that AI will wipe out humanity entirely are commonly referred to as “doomers.”
The broad expectation is that Trump and his team will be e/accs, eschewing new tech regulation and removing some of what currently exists. “We may actually be on the threshold of the greatest period of technological acceleration in history, with nothing in sight that can hold us back, and clear open road ahead,” proclaimed @bayeslord, one of the founders of the movement, in a post on X as the election results rolled in. “This feels like god’s timeline.”
Trump has said he wants to rescind President Joe Biden’s 2023 executive order on AI, which laid out a framework for mitigating some of the technology’s risks. Republicans have been particularly critical of its plans to address the ways that AI could amplify bias or discrimination in hiring decisions. Within the crowd that’s worried about excessive wokeness, this idea “gave people the ick,” says Dean Ball, a researcher at George Mason University’s Mercatus Center. Policy analysts also suspect Trump will dismantle or reshape the fledgling US AI Safety Institute, established last year and led by researcher Paul Christiano, who’s long been focused on AI alignment.
Trump has also criticized the CHIPS and Science Act, the Biden administration’s major initiative to re-energize the US semiconductor industry. The chip supply chain is crucial to AI, whose advanced systems require cutting-edge chips manufactured in Taiwan. But people on both sides of the aisle hope Trump’s hostility to the CHIPS Act is just bluster, and are betting that he’ll be receptive to arguments that it’s necessary to stay ahead of Beijing in AI capability. “We have to be at the forefront,” Trump told Paul on his podcast. “We have to take the lead over China.”
For their part, AI safety advocates think a Trump administration could be more open to their ideas than their accelerationist rivals assume. “A lot of people have a super rudimentary view, where they think Trump is going to be categorically anti-regulation and will rescue them from the flames of these liberal regulation structures,” says Sneha Revanur, the founder of Encode Justice, an organization for young people concerned about AI risks. “The partisan lines have not been drawn cleanly.”
Republicans aren’t immune to concerns about AI-related dangers. One surprising moment came in September, when Trump’s daughter Ivanka posted on X about “Situational Awareness,” a 165-page manifesto by former OpenAI researcher Leopold Aschenbrenner laying out a dire scenario in which AGI sparks a war with China. The post prompted a cascade of astonished replies from AI safety wonks. How had Ivanka heard about this deep cut? And if she had in fact been “safety-pilled,” did she still hold enough sway over her father to steer him in the same direction?
Others in Trump’s orbit have expressed concern about different aspects of AI. Senator Josh Hawley has said he’s worried about a wide range of issues related to lax safety at OpenAI and other companies, while Senator Ted Cruz has introduced revenge-porn legislation that includes a ban on AI-generated images and videos. Vice President-elect JD Vance posted on X this March that “one of the biggest risks” of AI was left-wing bias.
The most important figure to watch is probably Elon Musk. On one hand, the accelerationists hold him up as a hero, someone who could help Trump get the AI machine roaring faster. On the other, Musk told Tucker Carlson in October that he would push for a regulatory body that “has insight into what these companies are doing and can ring the alarm bell.” Musk supported California’s SB 1047, an aggressive AI regulation bill that OpenAI and other major companies opposed, claiming its requirements were too onerous. (California Governor Gavin Newsom vetoed the bill in September, but state legislators are expected to revive it in some form.) Musk’s views are potentially complicated by his personal grudge against OpenAI, which he co-founded but left in 2018. He has since publicly criticized OpenAI, sued it, and started X.ai Corp., a rival AI company.
In the coming months, Republicans will have to figure out where they stand on some of AI’s inherent contradictions. Are they concerned about further empowering Silicon Valley, an industry they’ve criticized harshly in the past? Or are they more concerned that anything but maximal acceleration gives the edge to China? And, if they do believe that superintelligence could arrive while Trump is still serving out his term, could a bit of doomerism creep into their thinking?
Casey Mock, chief policy officer at the Center for Humane Technology, says Republican lawmakers are most likely to stay more focused on what AI is already doing than what it could do in the future. He says to expect to hear most about “kitchen table” issues such as deepfake nudes that spread at schools and students using AI to cheat on homework. “They have a mandate to deal with that rather than more abstract risks or harms” such as AGI going rogue, he says.
©2024 Bloomberg L.P.