Leaked Google memo admits defeat by open-source AI


A leaked Google memo lays out, point by point, why Google has lost to open-source AI, and suggests a path to regaining dominance by owning the platform.

The memo begins by acknowledging that Google's real competitor has never been OpenAI, and has always been open source.

Can't compete with open source

Furthermore, the authors admit that Google cannot compete with open source, and that it has lost the fight for AI dominance.

They write:

“We've done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?

But the uncomfortable truth is that we aren't positioned to win this arms race, and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch.

Of course, I'm talking about open source.

To put it bluntly, they are lapping us. Things we consider “major open problems” are solved and in people's hands today.”

Much of the memo describes how open source beat Google.

Despite Google's slight edge over open source, the memo's authors admit it is slipping away and will not come back.

Their self-assessment of the hand Google has dealt itself is decidedly pessimistic:

“While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly.

Open source models are faster, more customizable, more private, and pound-for-pound more capable.

They are doing things with $100 and 13B parameters that we struggle with at $10M and 540B parameters.

And they are doing so in weeks, not months.”

Large language model size is not an advantage

Perhaps the most chilling realization expressed in the memo is that Google's size is no longer an advantage.

The sheer size of their models is now seen as a disadvantage rather than the insurmountable advantage it once appeared to be.

The leaked memo lays out a series of events that suggest Google's (and OpenAI's) grip on artificial intelligence may soon end.

It recounts how just a month earlier, in March 2023, the open source community got hold of a leaked Meta model called LLaMA (Large Language Model Meta AI).

In days and weeks, the global open source community developed all the building blocks needed to create Bard and ChatGPT clones.

Complex steps such as instruction tuning and reinforcement learning from human feedback (RLHF) were quickly replicated by the global open source community, at a fraction of the usual cost.

  • instruction tuning
    The process of fine-tuning a language model so that it can follow instructions and perform tasks it was not explicitly trained to do.
  • reinforcement learning from human feedback (RLHF)
    A technique in which humans rate a language model's outputs so that the model learns which outputs humans prefer.

RLHF is the technique OpenAI used to create InstructGPT, the model underlying ChatGPT, which allows the GPT-3.5 and GPT-4 models to accept instructions and complete tasks.
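To make the RLHF idea above concrete, here is a toy sketch in plain Python: a tiny linear reward model is fit to human pairwise preferences (a Bradley-Terry style fit) and then used to rank candidate outputs. The features, example texts, and preference pairs are all invented for this sketch; real RLHF trains a neural reward model and then optimizes the language model itself against it.

```python
import math

# Toy illustration of the RLHF idea: humans pick which of two model outputs
# they prefer, and a reward model is fit to those preferences.
# All features, texts, and preference pairs below are invented.

def features(text):
    # Hypothetical hand-built features; a real reward model learns its own.
    return [
        min(len(text.split()) / 10.0, 1.0),        # rough "completeness" proxy
        1.0 if "please" in text.lower() else 0.0,  # politeness marker
        -1.0 if text.isupper() else 0.0,           # penalize all-caps shouting
    ]

def score(w, text):
    # Reward model: a linear score over the features.
    return sum(wi * fi for wi, fi in zip(w, features(text)))

def train_reward_model(preferences, epochs=200, lr=0.5):
    # Bradley-Terry style fit: push score(chosen) above score(rejected).
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for chosen, rejected in preferences:
            margin = score(w, chosen) - score(w, rejected)
            p = 1.0 / (1.0 + math.exp(-margin))  # P(chosen is preferred)
            grad = 1.0 - p                       # gradient of log-likelihood
            fc, fr = features(chosen), features(rejected)
            w = [wi + lr * grad * (c - r) for wi, c, r in zip(w, fc, fr)]
    return w

# Human annotators preferred the first output in each pair.
prefs = [
    ("Please find the report attached.", "REPORT ATTACHED"),
    ("Here is a short summary, please review.", "IDK"),
]
w = train_reward_model(prefs)

# The trained reward model now ranks fresh candidate outputs the way the
# annotators would; full RLHF would go on to optimize the language model
# against these scores.
candidates = ["SEND IT NOW", "Please see the summary below."]
best = max(candidates, key=lambda t: score(w, t))
```

The key design point is that the model never sees an absolute "quality" label, only which of two outputs a human preferred, which is exactly the signal RLHF pipelines collect.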

RLHF, in other words, is now open source and low cost.

The scale of open source scares Google

What scares Google in particular is that the open source movement can scale its projects in ways that closed source cannot.

The question-and-answer dataset used to create the open source ChatGPT clone Dolly 2.0 was created entirely by thousands of Databricks employee volunteers.

Google and OpenAI rely in part on questions and answers gleaned from sites like Reddit.

The open source question-and-answer dataset created by Databricks is said to be of higher quality because the people who created it are professionals, and their answers are better than those scraped from public forums.

The leaked memo observed:

“In early March the open source community got their hands on their first really capable foundation model, as Meta's LLaMA was leaked to the public.

It had no instruction or conversation tuning, and no RLHF.

Still, the community immediately understood the significance of what they got.

What followed was a plethora of innovations, with only a few days between major developments…

Just one month later, we are here, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, and more, many of which build on each other.

Best of all, they have solved the scaling problem to the extent that anyone can tinker.

Many of the new ideas come from ordinary people.

The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”
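One of the techniques the memo lists, quantization, shrinks a model by storing weights as low-precision integers instead of floats. A minimal sketch of symmetric int8 quantization, with made-up weight values (real LLM quantization schemes such as 4-bit GPTQ are considerably more sophisticated):

```python
# Toy symmetric int8 quantization. One scale per tensor maps the largest
# weight magnitude onto 127; weights are stored as small integers and a
# single float, cutting memory versus 32-bit floats roughly 4x.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.12, -0.98, 0.47, 0.0031]   # made-up example weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# The reconstruction error is bounded by about half the scale step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

This is why quantized models run on consumer laptops: the weights take a quarter (or, with 4-bit schemes, an eighth) of the memory, at a small and bounded loss of precision.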

In other words, what took Google and OpenAI months or even years to train and build, takes days for the open source community.

This must be a truly dire scenario for Google.

This is one of the reasons I write so much about the open source AI movement: it really does look like the future of generative AI will arrive there in a relatively short time.

Open source has historically outpaced closed source

The memo cites OpenAI's DALL-E, a deep learning model for creating images, and its eclipse by open-source Stable Diffusion as a preview of what is now happening with generative AI like Bard and ChatGPT.

DALL-E was released by OpenAI in January 2021. The open-source Stable Diffusion was released a year and a half later, in August 2022, and surpassed DALL-E in popularity within just a few weeks.

This Google Trends timeline shows how quickly Stable Diffusion surpassed DALL-E:

A Google Trends screenshot showing interest in open-source Stable Diffusion overtaking DALL-E within three weeks of its release and holding a commanding lead since

Although DALL-E had been available for a year and a half, interest in Stable Diffusion grew exponentially while interest in OpenAI's DALL-E stagnated.

The existential threat that something similar could happen to Bard (and to OpenAI) is the nightmare now forming for Google.

Open source model creation process is superior

Another factor that worries Google engineers is that the process of creating and improving open-source models is quick, inexpensive, and well suited to the global collaborative approach common to open-source projects.

The memo notes that new techniques such as LoRA (Low-Rank Adaptation of Large Language Models) allow language models to be fine-tuned in a matter of days at very low cost, with the resulting LLMs comparable to the extremely expensive LLMs created by Google and OpenAI.
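The arithmetic behind LoRA's cost savings is easy to sketch: instead of updating a full d_out × d_in weight matrix, LoRA trains two small matrices B (d_out × r) and A (r × d_in) and adds B·A to the frozen weights. The dimensions below are illustrative only, not taken from any specific model:

```python
# Sketch of the LoRA arithmetic. With a low rank r, the trainable parameter
# count drops from d_in * d_out to r * (d_in + d_out).

d_in, d_out, r = 4096, 4096, 8
full_params = d_in * d_out          # parameters in one full weight matrix
lora_params = r * (d_in + d_out)    # trainable parameters under LoRA
ratio = full_params // lora_params  # how many times fewer we train

def add_update(W, B, A):
    # Effective weight after adaptation: W + B @ A (tiny 2x2 demo).
    rank = len(A)
    return [[W[i][j] + sum(B[i][k] * A[k][j] for k in range(rank))
             for j in range(len(W[0]))] for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen pretrained weights
B = [[0.5], [0.5]]            # 2 x 1, trainable
A = [[1.0, -1.0]]             # 1 x 2, trainable
W_adapted = add_update(W, B, A)
```

Because only B and A are trained, and the adapted weights are just the frozen base plus a small delta, many cheap fine-tunes can be stacked or swapped on one shared base model, which is exactly the iteration pattern the memo describes.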

Another benefit is that open source engineers can build on previous work, iterating, without having to start from scratch.

Building large language models with billions of parameters in the way that OpenAI and Google have been doing is not necessary today.

This may be the point Sam Altman hinted at recently when he suggested that the era of giant language models is already over.

The authors of the Google memo contrast the cheap and fast LoRA approach with current large-scale AI approaches.

The memo authors reflect on Google's shortcomings:

“By contrast, training a giant model from scratch discards not only the pre-training but also any iterative improvements made on top of it. In an open-source world, these improvements quickly come to dominate, making a full retraining extremely expensive.

We should consider whether each new application or idea really requires an entirely new model.

…in fact, in terms of engineer-hours, these models improve much faster than our largest variants, and the best ones are already largely indistinguishable from ChatGPT.”

The authors conclude that what they see as their advantage, namely the large size of the model and the high cost that comes with it, is actually a disadvantage.

The globally collaborative nature of open source makes innovation more efficient and orders of magnitude faster.

How can closed source systems compete with the sheer number of engineers working worldwide on open source?

The authors conclude that they cannot compete and that direct competition is, in their words, a “losing proposition”.

This is the crisis and storm that is forming outside of Google.

If you can't beat open source, join them

The only solace the memo authors find in open source is that since open source innovation is free, Google can take advantage of it too.

In the end, the authors conclude that the only option open to Google is to own the platform, just as it came to dominate the open source Chrome and Android platforms.

They point out how Meta has benefited from releasing its LLaMA large language model for research, and how it now has thousands of people doing its work for free.

Perhaps the biggest takeaway from this memo is that Google may in the near future try to replicate their open source dominance by releasing their projects on an open source basis, thereby owning the platform.

The memo concludes that open source is the most viable option:

“Google should position itself as a leader in the open source community, leading by engaging with the broader conversation rather than ignoring it.

This might mean taking uncomfortable steps like publishing model weights for small ULM variants. This necessarily means giving up some control over our model.

But such compromises are inevitable.

We cannot hope to both drive and control innovation.”

Open source takes away the AI fire

Last week I invoked the Greek myth of the human hero Prometheus stealing fire from the gods on Mount Olympus, casting open source as Prometheus and Google and OpenAI as the Olympian gods:

I tweeted:

“As Google, Microsoft, and OpenAI squabble with each other and turn their backs, will open source walk off with their fire?”

The leaked Google memo corroborates that observation, but it also points to the possibility that Google may change strategy, join the open source movement, and thereby co-opt and dominate it, as it did with Chrome and Android.

Read the leaked Google memo here:

Google “We Have No Moat, And Neither Does OpenAI”




