{"id":362211,"date":"2022-12-05T00:48:55","date_gmt":"2022-12-05T05:48:55","guid":{"rendered":"https:\/\/insidebitcoins.com\/?p=362211"},"modified":"2022-12-05T00:48:55","modified_gmt":"2022-12-05T05:48:55","slug":"how-sam-bankman-frieds-crypto-empire-failed-and-how-it-affected-a-i","status":"publish","type":"post","link":"https:\/\/insidebitcoins.com\/news\/how-sam-bankman-frieds-crypto-empire-failed-and-how-it-affected-a-i","title":{"rendered":"How Sam Bankman-Fried’s Crypto Empire Failed and How It Affected A.I."},"content":{"rendered":"

Anthropic, a San Francisco-based artificial intelligence research organization, raised $580 million in April for “A.I. safety” research.

The one-year-old lab, which builds artificial intelligence systems that generate language, was little known in Silicon Valley. But the sum pledged to the small company eclipsed what venture capitalists were investing in other A.I. start-ups, including those staffed by some of the most experienced researchers in the field.

The investment round was led by Sam Bankman-Fried, the founder and chief executive of FTX, the cryptocurrency exchange that filed for bankruptcy last month. A balance sheet leaked after FTX’s abrupt collapse revealed that Mr. Bankman-Fried and his associates had invested at least $500 million in Anthropic.

Their investment was part of a quiet and ultimately futile effort to explore and mitigate the dangers of artificial intelligence, which many in Mr. Bankman-Fried’s circle believed could eventually destroy the planet and harm humanity. According to a tally by The New York Times, the 30-year-old entrepreneur and his FTX colleagues invested or granted more than $530 million over the last two years to more than 70 A.I.-related companies, academic labs, think tanks, independent projects, and individual researchers to address concerns about the technology.

According to four people familiar with the A.I. efforts who were not authorized to speak publicly, some of these groups and individuals are now unsure whether they can continue to spend that money. They worried that Mr. Bankman-Fried’s downfall could cast doubt on their research and damage their reputations. They also warned that some of the A.I. start-ups and organizations could be drawn into FTX’s bankruptcy proceedings and have their grants clawed back in court.

Concerns in the A.I. industry are an unanticipated result of FTX’s failure, demonstrating how far-reaching the ramifications of the crypto exchange’s collapse and Mr. Bankman-Fried’s disappearing wealth have been.

“Some might be surprised by the connection between these two emerging fields of technology,” said Andrew Burt, a visiting fellow at Yale Law School and an attorney who focuses on the dangers of artificial intelligence. “However, there are obvious connections between the two.”

Effective Altruism

Mr. Bankman-Fried’s giving purportedly grew out of his involvement in “effective altruism,” a philanthropic movement in which donors seek to maximize the long-term impact of their giving.

Effective altruists frequently worry about what they call catastrophic risks, such as pandemics, bioweapons, and nuclear war. They take a keen interest in artificial intelligence: many effective altruists believe that increasingly powerful A.I. can do good for the world, but they worry that it could cause serious harm if it is not built in a safe way.

While A.I. experts agree that any doomsday scenario is a long way off, if it happens at all, effective altruists have long argued that such a future is not beyond the realm of possibility and that researchers, companies, and governments should prepare for it.

Research Into A.I.’s Impact

Over the past decade, top A.I. research labs, including DeepMind, which is owned by Google’s parent company, and OpenAI, which was founded by Elon Musk and others, have employed numerous effective altruists. They were instrumental in creating the field of research known as “A.I. safety,” which studies how A.I. systems might be used to do harm or might unexpectedly malfunction on their own.

Effective altruists did similar work at Washington think tanks that shape policy. Georgetown University’s Center for Security and Emerging Technology, which studies the impact of artificial intelligence and other emerging technologies on national security, was funded largely by Open Philanthropy, an effective altruist giving organization backed by Dustin Moskovitz, a co-founder of Facebook. Effective altruists also work as researchers at these think tanks.

Future Fund

Mr. Bankman-Fried says he has been involved in the effective altruist movement since 2014. In a New York Times interview in April, he explained that he had deliberately pursued a lucrative career so he could give away much larger sums of money, embracing a philosophy known as earning to give.

In February, he and a few of his FTX colleagues announced the Future Fund, which would support “ambitious projects to enhance humanity’s long-term prospects.” The fund was managed in part by Will MacAskill, a founder of the Center for Effective Altruism, and other prominent figures in the movement.

By the beginning of September, the Future Fund had pledged $160 million in grants to a wide range of projects, including research into pandemic preparedness and economic growth. About $30 million was earmarked for donations to an array of groups and individuals exploring ideas related to A.I.

Among the Future Fund’s A.I. grants was $2 million to a little-known company, Lightcone Infrastructure. Lightcone runs the online discussion forum LessWrong, which in the mid-2000s began exploring the possibility that artificial intelligence might one day destroy humanity.

The Alignment Research Center, an organization that aims to align future A.I. systems with human interests so that the technology does not go rogue, received $1.25 million from Mr. Bankman-Fried and his colleagues. They also backed other projects working to mitigate the long-term risks of A.I., and contributed $1.5 million to Cornell University for related research.

The Future Fund also gave roughly $6 million to three projects involving “large language models,” an increasingly powerful type of A.I. that can write computer programs, tweets, emails, and blog posts. The grants were intended to reduce unexpected and undesirable behavior from these systems and to mitigate how the technology might be used to spread disinformation.

When FTX filed for bankruptcy, Mr. MacAskill and others who ran the Future Fund resigned from the project, citing “fundamental questions about the legitimacy and integrity of the business operations” behind it. Mr. MacAskill did not respond to a request for comment.

Beyond the Future Fund’s grants, Mr. Bankman-Fried and his associates invested directly in start-ups, including the $500 million they put into Anthropic. The company was founded in 2021 by a group that included several effective altruists who had left OpenAI. It is working to make A.I. safer by developing its own language models, which can cost tens of millions of dollars to build.

Several organizations and individuals have already received their money from Mr. Bankman-Fried and his associates. Others got only a portion of what was promised. Some are unsure whether the grants will have to be returned to FTX’s creditors, according to the four people with knowledge of the organizations.

Charities are vulnerable to clawbacks when their donors go bankrupt, said Jason Lilien, a partner at the law firm Loeb &amp; Loeb who specializes in nonprofit organizations. Companies that receive venture investments from bankrupt firms are in a somewhat stronger position than charities, but they too are subject to clawback claims, he added.

Dewey Murdick, the director of the Center for Security and Emerging Technology, the Georgetown think tank backed by Open Philanthropy, said effective altruists have made significant contributions to research on artificial intelligence.

As evidence, he pointed to the growing conversation about how A.I. systems can be designed with safety in mind: “Since they have increased money, it has increased attention on these issues.”

However, Oren Etzioni of the Allen Institute for Artificial Intelligence, a Seattle-based A.I. lab, said that the views of the effective altruist community were sometimes extreme and that they often overstated the power or danger of today’s technologies.

He said the Future Fund had offered him money this year for research that would help predict the arrival and risks of “artificial general intelligence,” a machine capable of doing anything the human brain can do. But, Mr. Etzioni said, because scientists do not yet know how to build such a system, its arrival cannot be meaningfully forecast.
