When Mark Zuckerberg, the chief executive of Meta, announced last year that his company would release an artificial intelligence system, Jeffrey Emanuel had reservations.
Mr. Emanuel, a part-time hacker and full-time A.I. enthusiast, had tinkered with “closed” A.I. models, including OpenAI’s, meaning the systems’ underlying code could not be accessed or modified. When Mr. Zuckerberg introduced Meta’s A.I. system by invitation only to a handful of academics, Mr. Emanuel was concerned that the technology would remain limited to just a small circle of people.
But when Meta released an updated A.I. system last summer, Mr. Zuckerberg made the code “open source,” meaning it could be freely copied, modified and reused by anyone.
Mr. Emanuel, the founder of the blockchain start-up Pastel Network, was sold. He said he appreciated that Meta’s A.I. system was powerful and easy to use. Most of all, he loved how Mr. Zuckerberg was espousing the hacker code of making the technology freely available — largely the opposite of what Google, OpenAI and Microsoft have done.
“We have this champion in Zuckerberg,” Mr. Emanuel, 42, said. “Thank God we have someone to protect the open-source ethos from these other big companies.”
Mr. Zuckerberg has become the highest-profile technology executive to support and promote the open-source model for A.I. That has put the 40-year-old billionaire squarely on one end of a divisive debate over whether the potentially world-changing technology is too dangerous to be made available to any coder who wants it.
Microsoft, OpenAI and Google have taken a more closed approach to A.I., guarding their technology out of what they say is an abundance of caution. But Mr. Zuckerberg has been outspoken in arguing that the technology should be open to all.
“This technology is so important, and the opportunities are so great, that we should open source and make it as widely available as we responsibly can, so that way everyone can benefit,” he said in an Instagram video in January.
That stance has turned Mr. Zuckerberg into the unlikely man of the hour in many Silicon Valley developer communities, prompting talk of a “glow-up” and a kind of “Zuckaissance.” Even as the chief executive continues grappling with scrutiny over misinformation and child safety issues on Meta’s platforms, many engineers, coders, technologists and others have embraced his position on making A.I. available to the masses.
Since Meta’s first fully open-source A.I. model, called LLaMA 2, was released in July, the software has been downloaded more than 180 million times, the company said. A more powerful version of the model, LLaMA 3, which was released in April, reached the top of the download charts on Hugging Face, a community site for A.I. code, at record speed.
Developers have created tens of thousands of their own customized A.I. programs on top of Meta’s A.I. software, doing everything from helping clinicians read radiology scans to powering scores of digital chatbot assistants.
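To give a sense of what building on top of an open model involves, here is a minimal, hypothetical sketch in Python using the Hugging Face transformers library. The model name, prompt and workflow are illustrative assumptions, not a description of Meta’s code or of any particular developer’s project.

# A minimal sketch (not Meta's or any cited developer's actual code) of how
# a programmer might load an open Llama model from Hugging Face and build on
# it. The checkpoint name and prompt are illustrative assumptions; pulling
# the weights requires accepting Meta's license terms on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed model checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Because the weights live locally, they can be inspected, fine-tuned on a
# clinic's own documents, or wired into a custom chatbot assistant.
prompt = "Summarize the key findings of this radiology report:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because the code and weights are downloaded rather than accessed through a paid, closed interface, a developer can modify and redistribute the result, which is what distinguishes this workflow from using a proprietary model.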
“I told Mark, I think that open sourcing LLaMA is the most popular thing that Facebook has done in the tech community — ever,” said Patrick Collison, chief executive of the payments company Stripe, who recently joined a Meta advisory group aimed at helping the company make strategic decisions about its A.I. technology. Meta owns Facebook, Instagram and other apps.
Mr. Zuckerberg’s new popularity in tech circles is striking because of his fraught history with developers. Over two decades, Meta has sometimes pulled the rug out from under coders. In 2013, for instance, Mr. Zuckerberg bought Parse, a company that built developer tools, to attract coders to build apps for Facebook’s platform. Three years later, he shuttered the effort, angering developers who had invested their time and energy in the project.
A spokeswoman for Mr. Zuckerberg and Meta declined to comment. (The New York Times last year sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems.)
Open-source software has a long and storied history in Silicon Valley, with major tech battles revolving around open versus proprietary — or closed — systems.
In the internet’s early days, Microsoft jockeyed to provide the software that ran internet infrastructure, only to eventually lose out to open-source software projects. More recently, Google open sourced its Android mobile operating system to take on Apple’s closed iPhone operating system. Firefox, the web browser; WordPress, the blogging platform; and Blender, a popular suite of animation tools, were all built as open-source software.
Mr. Zuckerberg, who founded Facebook in 2004, has long backed open-source technology. In 2011, Facebook started the Open Compute Project, a nonprofit that freely shares designs of servers and equipment inside data centers. In 2016, Facebook also developed PyTorch, an open-source software library that has been widely used to create A.I. applications. The company is also sharing blueprints of computing chips that it has developed.
“Mark is a great student of history,” said Daniel Ek, Spotify’s chief executive, who considers Mr. Zuckerberg a confidant. “Over time in the computing industry, he’s seen that there’s always been closed and open paths to take. And he has always defaulted to open.”
At Meta, the decision to open source its A.I. was contentious. In 2022 and 2023, the company’s policy and legal teams favored a more conservative approach to releasing the software, fearing a backlash from regulators in Washington and the European Union. But Meta technologists like Yann LeCun and Joelle Pineau, who spearhead the company’s A.I. research, pushed for the open approach, arguing it would better serve Meta in the long term.
The engineers won. If the code was open, Mr. Zuckerberg said in a post on his Facebook page last year, it could be improved and safeguarded faster.
While open sourcing LLaMA means giving away computer code that Meta spent billions of dollars to create, with no immediate return on investment, Mr. Zuckerberg calls it “good business.” The more developers use Meta’s software and hardware tools, the more likely they are to become invested in its technology ecosystem, which helps entrench the company.
The technology has also helped Meta improve its own internal A.I. systems, aiding ad targeting and recommendations of more relevant content on Meta’s apps.
“It is 100 percent aligned with Zuckerberg’s incentives and how it can benefit Meta,” said Nur Ahmed, a researcher at MIT Sloan who studies A.I. “LLaMA is a win-win for everybody.”
Competitors are taking note. In February, Google open sourced the code for two A.I. models, Gemma 2B and Gemma 7B, a sign that it was feeling the heat from Mr. Zuckerberg’s open-source approach. Google did not respond to requests for comment. Other companies, including Microsoft, Mistral, Snowflake and Databricks, have also started offering open-source models this year.
For some coders, Mr. Zuckerberg’s A.I. approach hasn’t erased all of the baggage of the past. Sam McLeod, 35, a software developer in Melbourne, Australia, deleted his Facebook accounts years ago after growing uncomfortable with the company’s track record on user privacy, among other issues.
But more recently, he said, he recognized that Mr. Zuckerberg had released “cutting edge” open-source software models with “permissive licensing terms,” something that can’t be said for other big tech companies.
Matt Shumer, 24, a developer in New York, said he had used closed A.I. models from Mistral and OpenAI to power digital assistants for his start-up, HyperWrite. But after Meta released its updated open-source A.I. model last month, Mr. Shumer started relying heavily on that instead. Whatever reservations he had about Mr. Zuckerberg are in the past.
“Developers have started to see past a lot of issues they’ve had with him and Facebook,” Mr. Shumer said. “Right now, what he’s doing is genuinely good for the open-source community.”