Sunday, September 8, 2024

AI will automate tasks, not jobs and other AI insights from Fortune Brainstorm Tech

Must read

Hello. Today, I’m writing from Deer Valley, Utah, where Fortune is holding its Brainstorm Tech conference. AI has, unsurprisingly, been a major theme of the event. Here’s a recap of some of the key AI tidbits so far:

On Monday, my colleague Emma Hinchliffe interviewed San Francisco Federal Reserve President Mary Daly, who said generative AI’s impact on the labor market will depend on what we do with the technology. Daly said we should expect generative AI to contribute to at least average productivity growth, which is currently 1.5% annually.

But she also said that if AI helps us to invent new products and new processes, rather than simply automating existing ones, then its potential impact on productivity growth would be much greater. “If we say in a decade, [AI] was disappointing that is because of us,” Daly said. She also noted that all previous new technologies have, in the long term, created more jobs than they’ve eliminated, and she suspects AI will be no different.

Picking up these ideas, Stanford University economist Erik Brynjolfsson urged companies to view AI as complementary to human labor. Acknowledging that many companies have struggled to figure out how to derive a reasonable return on investment from generative AI, Brynjolfsson said the key was to stop thinking about jobs and start thinking about tasks. AI can automate some tasks within an organization, but it can’t automate entire jobs (at least not yet). In fact, as automation helps lower the cost associated with some roles, demand for those roles could actually increase, leading to the hiring of more people for those jobs. (This is called Jevons Paradox.) Brynjolfsson has cofounded a company called Workhelix that helps companies do this kind of task-based analysis and come up with strategic plans for implementing AI in the most impactful way within their organization. Among the tasks best suited to AI automation today include many in software development and within customer contact centers, he said.

Robinhood CEO Vladimir Tenev told Fortune editor-in-chief Alyson Shontell that he sees AI democratizing access to wealth management services. While very high net-worth individuals will continue to be served by human financial advisors, AI will be able to give many other people access to good financial advice who could never have afforded a financial advisor before.

Agility Robotics CEO Peggy Johnson showed off the company’s Digit humanoid robot, which is already working inside warehouses as part of a multi-year deal with GXO Logistics. Johnson said Agility is now integrating Digit with large language models (LLMs) so that people can give Digit instructions in natural language. Johnson says she sees Digit and humanoid robots like it as necessary for helping to meet a shortfall of some 1.1 million warehouse workers in the U.S.

Clara Shih, Salesforce’s AI chief, talked about how to build trust in AI within large organizations. She touted Salesforce’s own Einstein AI trust layer, which includes features such as data security and guardrails to prevent toxic language from being generated and techniques to defend against prompt injection attacks. That’s when an adversary crafts a prompt that is designed to trick an LLM into jumping its guardrails.

She also said the company will begin rolling out AI software with more “agentic” qualities soon. These are AI models that will be able to perform tasks within workflows, not simply generate emails, letters, or customer service dialogues. More broadly, Shih said, one way organizations could develop more trust in AI was to make sure they were using the right AI model for the problem at hand. Just throwing a general-purpose large language model at every business dilemma was unlikely to result in the value companies are hoping to see from AI.

This morning, I interviewed Google’s chief scientist Jeff Dean, who said increasingly long context windows, such as those Google has pushed with Gemini, will help tame AI hallucinations. But he also agreed with recent comments from Microsoft’s Bill Gates that LLMs alone will not deliver AGI even if we continue to scale them up. Dean concurred that some other innovation would be necessary algorithmically.

There’ll be plenty of more discussion of AI over the next few days at Brainstorm Tech—it wraps up Wednesday around lunchtime. You can tune in to the livestream here, watch archived sessions here, and catch up on coverage of many of the sessions on fortune.com.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news…If you want a better understanding of how AI can transform your business and hear from some of Asia’s top business leaders about AI’s impact across industries, please join me at Fortune Brainstorm AI Singapore. The event takes place July 30-31 at the Ritz Carlton in Singapore. And today is your last chance to register to attend! We’ve got Alation CEO Satyen Sangani talking about AI’s impact on the digital transformation of Singapore’s GXS Bank, Grab CTO Sutten Thomas Pradatheth speaking on how quickly AI can be rolled out across the APAC region, Josephine Teo, Singapore’s minister for communication and information talking about that island nation’s quest to be an AI superpower, and much much more. You can apply to attend here. Just for Eye on AI readers, I’ve got a special code that will get you a 50% discount on the registration fee. It is BAI50JeremyK.

AI IN THE NEWS

Yandex cofounder launches new European AI infrastructure company. Arkady Volozh, a cofounder of Russian tech group Yandex, is launching a new AI infrastructure company called Nebius Group, mainly staffed by former Yandex employees, the Financial Times reports. This move follows the sale of Yandex’s core Russian assets due to the war in Ukraine. Nebius, which will be based in Europe, aims to develop a cloud computing platform for AI model training and is collaborating with leading AI start-ups in Europe and has a data center in Finland.

OpenAI is reportedly training a new reasoning AI model codenamed ‘Strawberry.’ That’s according to a story from Reuters, which cites internal OpenAI documents it obtained. The model is supposed to be a reasoning engine that can help future AI agents to take actions across the internet.

New AI safety and security company backed by X.ai advisor and top adversarial AI researchers emerges from stealth. The company, called Gray Swan, has been cofounded by Dan Hendrycks, the director of the Center for AI Safety and an advisor to Elon Musk’s X.ai, as well as Matt Fredrikson, Zico Kolter, and Andy Zou, all of whom are well known AI researchers at Carnegie Mellon University who research ways to attack large language models. Gray Swan announced two products—one an LLM that it says is much more robust to attacks than other AI models, and the other a product that will assess how any given LLM is likely to behave when subjected to various kinds of prompt injection attacks. Hendrycks has also been in the news lately as one of the major backers of California’s State Senate Bill 1047, which would require companies building advanced AI models to take various steps to prevent potential “catastrophic harms.” You can read more about Gray Swan’s debut on the company blog here.

Nvidia, Apple, Anthropic, and Salesforce used YouTube video transcripts without Google’s permission to train AI models, investigation finds. Wired copublished findings of an investigation from news outlet Proof News that found companies, including Anthropic, Apple, Nvidia, and Salesforce had used subtitles from more than 173,000 YouTube videos, in violation of YouTube’s rules against unauthorized data harvesting. Creators who had uploaded the videos to the Google-owned platform were unaware that their content was being used and many of them called for compensation and regulation in response. But many of the companies involved in using the YouTube transcripts claimed their actions should fall under a “fair use” exemption from any copyright claims.

U.K. government expected to introduce a landmark AI bill on Wednesday. The new Labour government will use Wednesday’s “King’s Speech” (an annual address by the monarch to Parliament in which the government lays out its legislative agenda) to announce plans to pursue a new AI law, the Financial Times reported. The bill will be aimed at creating binding rules for the development of advanced AI models, according to the newspaper. The previous government of Prime Minister Rishi Sunak had focused on voluntary commitments from tech companies developing AI rather than legal requirements.

EYE ON AI RESEARCH

Universities, unable to compete with Big Tech and startups, look to find niche AI research areas. For more than a decade, university computer science departments have bemoaned the brain drain of top AI researchers and recent PhD graduates to tech companies offering not only much higher salaries, but access to far larger clusters of expensive graphics processing units (the type of chips most commonly used for AI applications) and vast amounts of data on which to train AI models. That situation has only gotten worse in the LLM era we are in now. Some universities are now trying to see if they can successfully zig while the rest of the AI field zags, according to a story in the Wall Street Journal. Rather than encouraging AI researchers to work on LLMs, they are hiring academics to explore totally new algorithms, architectures, and in some cases, even hardware, that would require far fewer GPUs and less energy. In other cases, the paper says, universities are using partnerships with Big Tech to gain access to GPUs. And then there are a few universities that are spending big to try to develop GPU clusters that are sizable enough to at least have a shot at offering what individual researchers might have access to at places such as OpenAI, Microsoft, Google, and Meta.

FORTUNE ON AI

Bosses and employees have wildly different expectations about how much time they can save with AI —by Ryan Hogg

California AI bill SB-1047 sparks fierce debate, Senator likens it to ‘Jets vs. Sharks’ feud —by Sharon Goldman

OpenAI announced a new scale to track AI progress. But wait—where is AGI? —by Sharon Goldman

Nvidia’s market cap will soar to $50 trillion—yes, trillion—says early investor in Amazon and Tesla —by Sasha Rogelberg

AI CALENDAR

July 21-27: International Conference on Machine Learning (ICML), Vienna, Austria

July 30-31: Fortune Brainstorm AI Singapore (register here)

Aug. 12-14: Ai4 2024 in Las Vegas

Dec. 8-12: Neural Information Processing Systems (Neurips) 2024 in Vancouver, British Columbia.

Dec. 9-10: Fortune Brainstorm AI San Francisco (register here

BRAIN FOOD

Donald Trump’s running mate J.D. Vance is a leading proponent of open-source AI. Vance, who was a venture capitalist before becoming a U.S. senator from Ohio, has previously touted the benefits of open-source AI models, The Information reported. Vance said in March that open-source AI models—which users can modify, potentially overcoming any guardrails initially built into the models by the companies developing them—were the best defense against “woke AI.” He posted these comments on X in response to the controversy over Google’s Gemini chatbot and its text-to-video generation guardrails, which were originally so strict about preventing potentially racist imagery that the model couldn’t produce images of groups of white people even in cases when such groupings would be historically accurate (Nazi rallies, Viking feasts, etc.).

Beyond Vance’s support for open AI models, the Republican Party’s election manifesto has endorsed the idea of repealing President Joe Biden’s executive order on AI and says the party will seek a “pro innovation” and anti-regulation stance towards the technology. Trump’s election campaign is also attracting campaign donations from some of Silicon Valley’s best-known “effective accelerationists,” (or e/accs) who believe in unbridled AI development because they see the technology’s promise far outweighing any potential harms. This includes a16z’s Marc Andreessen and Ben Horowitz. But Trump has also attracted support from billionaires Elon Musk and Peter Thiel, both of whom have more ambiguous ideas about AI development and the potential existential risks of the technology, but have generally endorsed libertarian approaches to technology regulation.    

Latest article