
How Sam Bankman-Fried’s Crypto Empire Failed and How It Affected A.I.


Anthropic, a San Francisco-based artificial intelligence research organization, raised $580 million in April for “AI safety” research.

The one-year-old lab, which develops artificial intelligence systems that generate language, was little known in Silicon Valley. Yet the sum pledged to the small company eclipsed what venture capitalists were investing in other A.I. start-ups, including those staffed with some of the most experienced researchers in the industry.

Sam Bankman-Fried, the founder and chief executive of FTX, the cryptocurrency exchange that filed for bankruptcy last month, led the investment round. A balance sheet leaked after FTX’s abrupt collapse showed that Mr. Bankman-Fried and his associates had invested at least $500 million in Anthropic.

Their investment was part of a quiet and ultimately futile effort to investigate and counteract the risks posed by artificial intelligence, which many people in Bankman-Fried’s circle believed could eventually devastate the planet and harm humanity. According to a count by The New York Times, the 30-year-old entrepreneur and his FTX colleagues invested or granted more than $530 million over the last two years to more than 70 A.I.-related businesses, academic labs, think tanks, independent projects, and individual researchers to address concerns about the technology.

According to four people familiar with the A.I. efforts who were not authorized to speak publicly, some of these organizations and individuals are now unsure whether they can continue to spend that money. They expressed concern that Bankman-Fried’s downfall could cast doubt on their research and damage their reputations. They also warned that some of the A.I. start-ups and organizations might later be drawn into FTX’s bankruptcy proceedings and have their grants clawed back in court.

Concerns in the A.I. industry are an unanticipated result of FTX’s failure, demonstrating how far-reaching the ramifications of the crypto exchange’s collapse and Bankman-Fried’s disappearing wealth have been.

According to Andrew Burt, an attorney and visiting fellow at Yale Law School who focuses on the dangers of artificial intelligence, some might be surprised by the connection between these two emerging fields of technology. However, he said, there are obvious connections between the two.

Effective Altruism

Bankman-Fried’s giving was (allegedly) motivated by his involvement in “effective altruism,” a philanthropic movement in which donors seek to maximize the long-term impact of their giving.

Effective altruists frequently worry about what they call catastrophic risks, such as pandemics, bioweapons, and nuclear war. They take a keen interest in artificial intelligence: many believe that increasingly powerful A.I. can benefit the world, but they worry that it could cause serious harm if it is not developed safely.

While most A.I. experts agree that any doomsday scenario is a long way off, if it happens at all, effective altruists have long argued that such a future is not beyond the realm of possibility and that researchers, businesses, and governments should be prepared for it.

Research into A.I.’s impact

Top A.I. research laboratories like DeepMind, which is owned by Google’s parent company, and OpenAI, which was founded by Elon Musk and others, have employed numerous effective altruists over the past decade. They were instrumental in building the field of research known as “AI safety,” which examines how A.I. systems might be used to cause harm or might malfunction in unanticipated ways.

Effective altruists conducted similar research at Washington think tanks that influence policy. The majority of the funding for Georgetown University’s Center for Security and Emerging Technology, which studies the effects of artificial intelligence and other emerging technologies on national security, came from Open Philanthropy, an effective altruist giving organization backed by Dustin Moskovitz, a co-founder of Facebook. Effective altruists also serve as researchers inside these think tanks.

Future Fund

Bankman-Fried has said he has participated in the effective altruist movement since 2014. He explained in a New York Times interview in April that he had deliberately pursued a lucrative career so he could give away far larger sums of money, adopting a philosophy known as earning to give.

In February, he and several of his FTX colleagues launched the Future Fund, which would provide funding for “ambitious projects to enhance humanity’s long-term prospects.” Will MacAskill, one of the founders of the Center for Effective Altruism, and other influential members of the movement helped manage the fund.

By the beginning of September, the Future Fund had pledged to award grants totaling $160 million to a variety of initiatives, including studies into pandemic preparedness and economic growth. About $30 million was earmarked for donations to an array of groups and individuals researching A.I.-related ideas.

One of the Future Fund’s A.I. grants was $2 million to Lightcone Infrastructure, a little-known company. Lightcone maintains LessWrong, an online discussion forum that in the middle of the 2000s began delving into the idea that artificial intelligence might one day wipe out humanity.

Bankman-Fried and his colleagues gave $1.25 million to the Alignment Research Center, a group that works to align future A.I. systems with human interests so the technology does not go rogue. They also supported other initiatives working to reduce the long-term risks of A.I., including $1.5 million to Cornell University for related research.

The Future Fund also contributed roughly $6 million to three initiatives involving “large language models,” an increasingly powerful kind of A.I. that can write computer programs, tweets, emails, and blog posts. The grants were intended to reduce unexpected and undesirable behavior from these systems and to mitigate how the technology might be used to spread false information.

When FTX declared bankruptcy, Mr. MacAskill and the others in charge of the Future Fund resigned, citing fundamental concerns about the legitimacy and integrity of the business operations underpinning it. Mr. MacAskill did not respond to a request for comment.

Beyond the Future Fund’s grants, Bankman-Fried and his associates also invested directly in start-ups, including the $500 million investment in Anthropic. The company was founded in 2021 by a group that included a number of effective altruists who had left OpenAI. It is working to make A.I. safer by building its own language models, which can cost tens of millions of dollars to develop.

Some organizations and individuals have already received their money from Bankman-Fried and his associates. Others got only a portion of what was promised. According to the four people with knowledge of the organizations, some are now unsure whether the grants will have to be returned to FTX’s creditors.

Charities are vulnerable to clawbacks when their donors file for bankruptcy, said Jason Lilien, a partner at the law firm Loeb & Loeb who focuses on nonprofit organizations. Companies that receive venture investments from bankrupt firms are in a somewhat stronger position than charities, he added, but they are still subject to clawback claims.

Dewey Murdick, the head of the Center for Security and Emerging Technology, the Georgetown think tank funded by Open Philanthropy, said effective altruists have made significant contributions to research on artificial intelligence.

He said the influx of money had increased attention on these issues, pointing to the broader conversation now taking place about how A.I. systems can be developed with safety in mind.

However, Oren Etzioni of the Allen Institute for Artificial Intelligence, a Seattle-based A.I. lab, said the views of the effective altruist community were sometimes extreme and often exaggerated the power or the dangers of today’s technologies.

He said the Future Fund had offered him money this year for research that would help predict the arrival and the risks of “artificial general intelligence,” a machine capable of doing anything the human brain can do. But such predictions cannot be made reliably, Etzioni said, because scientists do not yet know how to build it.
