Why the Google-Hugging Face Partnership is a Breakthrough for Open AI Development

Google and Hugging Face, the leading open AI platform, have announced a strategic partnership to advance the development and deployment of generative AI applications.

The partnership will make it easier and faster for developers to train, tune, and serve open models on Google Cloud, using AI-optimized infrastructure such as Tensor Processing Units (TPUs) and Graphics Processing Units (GPUs). Hugging Face users and Google Cloud customers will also be able to deploy models for production on Google Cloud with Inference Endpoints, accelerate applications with TPUs on Hugging Face Spaces, and manage usage through their Google Cloud account.
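Calling a deployed Inference Endpoint is a plain HTTPS request. The sketch below, using only the Python standard library, builds such a request; the endpoint URL and token are placeholders (the real URL is shown in the Inference Endpoints dashboard after deployment), and the request is intentionally not sent.

```python
import json
import urllib.request

# Placeholder values -- substitute the URL from your endpoint's dashboard
# and a real Hugging Face access token.
ENDPOINT_URL = "https://my-endpoint.us-east4.gcp.endpoints.huggingface.cloud"
HF_TOKEN = "hf_xxx"

# Inference Endpoints accept a JSON body with an "inputs" field.
payload = json.dumps({"inputs": "The partnership between Google and"}).encode("utf-8")

# Build the authenticated POST request.
request = urllib.request.Request(
    ENDPOINT_URL,
    data=payload,
    headers={
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(request) would send it; that step is omitted here
# because it requires a live endpoint and a valid token.
```

The same request works from any HTTP client; the endpoint itself runs on Google Cloud infrastructure under this partnership, with usage billed through the customer's Google Cloud account.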

How the Google-Hugging Face Partnership Benefits Developers

The Google-Hugging Face partnership offers several benefits for developers who want to build generative AI applications using open models and technologies.

  • Access to TPUs: TPUs are specialized hardware accelerators designed to speed up machine learning workloads, especially those dominated by large matrix operations. For many large-scale training and inference jobs, TPUs can be more power-efficient and cost-effective than GPUs, with a correspondingly smaller carbon footprint. With this partnership, Hugging Face users gain access to TPUs through Google Cloud, which can significantly improve the performance and scalability of their applications.
  • Integration with Vertex AI: Vertex AI is Google's machine learning and MLOps platform, providing a unified environment for building, managing, and deploying models. With the integration between the two platforms, Hugging Face users can target Vertex AI as the deployment platform to host and manage open models. They can also leverage Vertex AI features such as AutoML, Explainable AI, and Model Monitoring to enhance their model development and deployment workflows.
  • Support for Google Kubernetes Engine: Google Kubernetes Engine (GKE) is a managed service that runs Kubernetes clusters on Google Cloud. Kubernetes is an open-source system that automates the deployment, scaling, and management of containerized applications. The partnership also supports GKE deployments, allowing developers to build generative AI applications on Kubernetes, which offers fine-grained control and customization.
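As a rough illustration of the GKE option, a model server can be described with a standard Kubernetes Deployment manifest. The sketch below uses Hugging Face's text-generation-inference container; the deployment name, model ID, and GPU count are illustrative assumptions, not part of the announced integration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tgi-server          # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tgi-server
  template:
    metadata:
      labels:
        app: tgi-server
    spec:
      containers:
        - name: tgi
          image: ghcr.io/huggingface/text-generation-inference:latest
          args: ["--model-id", "mistralai/Mistral-7B-v0.1"]  # example open model
          resources:
            limits:
              nvidia.com/gpu: 1   # request one GPU from the GKE node pool
          ports:
            - containerPort: 80
```

Applying a manifest like this with `kubectl apply` on a GKE cluster with a GPU (or TPU) node pool is the kind of fine-grained, self-managed deployment the GKE path enables, in contrast to the fully managed Vertex AI option.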

How the Google-Hugging Face Partnership Impacts the AI Industry


The Google-Hugging Face partnership is seen as a significant AI move for Alphabet, Google's parent company, and has drawn comparisons to the collaboration between Microsoft and OpenAI. However, Hugging Face's head of product, Jeff Boudier, noted that the Google-Hugging Face partnership is quite different, as it focuses on open models and open source.

Hugging Face has attracted substantial investment from tech giants, including Google. In its Series D funding round, Hugging Face raised $235 million, with participation from Google, Amazon, Nvidia, Intel, AMD, Qualcomm, IBM, Salesforce, and others, doubling the startup's valuation to $4.5 billion. With its commitment to open source and open models, Hugging Face has quickly become the preferred platform for hosting models, datasets, and inference endpoints. Nearly all major open model providers, including Meta, Microsoft, and Mistral, publish their models on the Hugging Face Hub.

Google has foundation models that are exclusively available on its public cloud platform. Gemini, one of the top-performing large language models, was announced last month. Other models, such as Imagen, Chirp, and Codey, are part of the Vertex AI offering. With Hugging Face integrated into Google Cloud, customers can choose between proprietary and open models when building and deploying generative AI applications in the cloud.

The partnership between Google and Hugging Face is expected to democratize AI by making it easier for companies to build their own AI using open models and technologies. As Hugging Face becomes the central hub for open-source AI software, this collaboration is likely to sharply accelerate the growth of its repository of models, datasets, and tools.

The new capabilities, including Vertex AI and GKE deployment options, are expected to be available to Hugging Face Hub users in the first half of 2024.
