Weights and Biases raises $45 million to advance MLOps governance

Weights and Biases, provider of a platform for enabling collaboration and governance across teams building machine learning models, today announced it has raised a $45 million Series B round led by Insight Partners.

The company provides a software-as-a-service (SaaS) platform designed to make it easier for AI teams to reproduce results and, ultimately, to explain how an AI model actually works, Weights and Biases CEO Lukas Biewald said.

The Weights and Biases platform is used mainly to keep track of AI experiments and to manage the different versions of the datasets employed to train an AI model. The funding will be used to expand those capabilities and to add the ability to evaluate models, launch hyperparameter searches, and manage data pipelines, Biewald said. Weights and Biases will also expand its engineering, growth, sales, and customer success teams, he added.
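To make the experiment-tracking and dataset-versioning idea concrete, here is a minimal, hypothetical sketch in plain Python. It is not the Weights & Biases API; the `ExperimentTracker` class and its methods are invented for illustration. Each run records its configuration, a content hash identifying the exact dataset version, and the resulting metrics, so runs can later be compared and reproduced.

```python
import hashlib
import json


class ExperimentTracker:
    """Illustrative tracker: records config, dataset version, and metrics per run."""

    def __init__(self):
        self.runs = []

    @staticmethod
    def dataset_version(records):
        # Content-hash the training data so each run is tied to an exact dataset version.
        payload = json.dumps(records, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:12]

    def log_run(self, config, dataset, metrics):
        self.runs.append({
            "config": config,
            "dataset_version": self.dataset_version(dataset),
            "metrics": metrics,
        })

    def compare(self, metric):
        # Return (dataset_version, config, value) for each run, best value first.
        return sorted(
            ((r["dataset_version"], r["config"], r["metrics"][metric])
             for r in self.runs),
            key=lambda t: t[2],
            reverse=True,
        )


tracker = ExperimentTracker()
data_v1 = [{"x": 1, "y": 0}, {"x": 2, "y": 1}]
tracker.log_run({"lr": 0.1}, data_v1, {"accuracy": 0.81})
tracker.log_run({"lr": 0.01}, data_v1, {"accuracy": 0.87})
best = tracker.compare("accuracy")[0]
print(best[1], best[2])  # the best run's config and accuracy
```

Tying every run to a dataset hash is what makes it possible to answer, after the fact, which data and which settings produced a given model.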

The challenge teams building AI models face today is that the same experiment might yield different results on different days, because machine learning algorithms don’t behave consistently or may have discovered additional insights in the data they have been exposed to. That issue creates a need to track how experiments involving different AI models behave. The capabilities provided by Weights and Biases will play a critical role in enabling IT organizations to govern AI models and, eventually, to meet any regulatory mandates, Biewald noted. “It’s irresponsible to deploy an AI model in a production environment if you don’t know how it was built,” Biewald said.
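The day-to-day variability described above can be sketched with a toy example. The `train_once` function below is a stand-in for a real training run, not any particular framework's API: random initialization drives its result, so unrecorded runs can disagree, while recording the seed alongside the run makes the outcome reproducible.

```python
import random


def train_once(seed=None):
    # Stand-in for a training run: random initialization drives the final "score".
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(5)]
    return round(sum(w * w for w in weights), 6)


# Unseeded runs of the "same" experiment can disagree from day to day...
a, b = train_once(), train_once()

# ...while logging the seed with the run makes the result repeatable.
x, y = train_once(seed=42), train_once(seed=42)
print(a, b, x == y)
```

This is the simplest case; in practice, GPU nondeterminism and shifting training data add further sources of variation, which is why tracking platforms log the full run context rather than a seed alone.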

That governance issue also goes to the root of the current controversy over AI explainability. Builders of AI models can’t explain with absolute certainty why the machine learning algorithms employed in one experiment might come up with a result that differs from another experiment on the same data. At the core of that issue is the fact that software is built differently by machines than it is by humans. A machine learning operations (MLOps) platform essentially governs the process through which humans enable machine learning algorithms to build software. Those algorithms, however, may not build a piece of software the same way twice. As IT teams update AI models when new data becomes available, a platform that allows them to keep track of experiments becomes essential, Biewald noted.

Existing Weights & Biases investors include Coatue, Trinity Ventures, and Bloomberg Beta. As part of the round, Insight Partners managing director George Mathew has joined the Weights and Biases board of directors.

In the meantime, what precisely constitutes a set of best practices for MLOps remains a work in progress. Weights and Biases, for example, is positioning its platform as a complement to feature stores that are emerging to provide IT organizations with a repository for sharing both AI models and the components that were employed to construct them. The sector has already attracted a bevy of startups, as well as the attention of Amazon Web Services (AWS), Microsoft, Google, IBM, and other cloud service providers.

At the same time, it’s unclear how much influence open source projects will exert as the category matures. Most open source MLOps projects are still nascent, and many organizations don’t necessarily want to rely on open source software they need to integrate and maintain themselves. But many builders of AI models have already shown a strong preference for open source tools such as TensorFlow. Weights and Biases currently makes its client software available under an open source license, but the backend SaaS platform is based on proprietary code.

The current lack of AI explainability doesn’t appear to be having a significant impact on overall enthusiasm for research and development. In the wake of the economic downturn brought on by the COVID-19 pandemic, AI projects that promise to reduce costs are being greenlighted across a range of vertical industries. The challenge, of course, is vetting those projects. After all, it’s one thing to be wrong. It’s generally quite another to be wrong at the scale of an AI project.
