Mustafa Suleyman remembers the epochal moment he grasped artificial intelligence’s potential. It was 2016 — Paleolithic times by A.I. standards — and DeepMind, the company he had co-founded, which Google acquired in 2014, had pitted its A.I. machine, AlphaGo, against a world champion of Go, the confoundingly difficult strategy game. AlphaGo zipped through thousands of permutations, making fast work of the hapless human. Stunned, Suleyman realized the machine had “seemingly superhuman insights,” he says in his book on A.I., “The Coming Wave.”

The result is no longer stunning — but the implications are. Little more than a year after OpenAI’s ChatGPT software helped bring generative A.I. into the public consciousness, companies, investors and regulators are grappling with how to shape the very technology designed to outsmart them. The exact risks of the technology are still being debated, and the companies that will lead it are yet to be determined. But one point of agreement: A.I. is transformative. “The level of innovation is very hard for people to imagine,” said Vinod Khosla, founder of the Silicon Valley venture capital firm Khosla Ventures, which was one of the first investors in OpenAI. “Pick an area: books, movies, music, products, oncology. It just doesn’t stop.”

If 2023 was the year the world woke up to A.I., 2024 might be the year in which its legal and technical limits will be tested, and perhaps breached. DealBook spoke with A.I. experts about the real-world effects of this shift and what to expect next year.

Judges and lawmakers will increasingly weigh in. The flood of A.I. regulations in recent months is likely to come under scrutiny. That includes President Biden’s executive order in October, which, if Congress ratifies it, could compel companies to ensure that their A.I. systems cannot be used to make biological or nuclear weapons, to embed watermarks on A.I.-generated content, and to disclose foreign clients to the government.

At the A.I. Safety Summit in Britain in November, 28 countries, including China — though not Russia — agreed to collaborate to prevent “catastrophic risks.” And in marathon negotiations in December, the E.U. drafted one of the world’s first comprehensive attempts to limit the use of artificial intelligence, which, among other provisions, restricts facial recognition and deepfakes and defines how businesses can use A.I. The final text is due out in early 2024, and the bloc’s 27 member countries hope to approve it before European Parliament elections in June.

With that, Europe might effectively create global A.I. rules, requiring any company that does business in its market of 450 million people to comply. “It makes life tough for innovators,” said Matt Clifford, who helped organize the A.I. summit in Britain. “They have to think about complying with a very long list of things people in Brussels are worried about.”
