
Automatic AI Picture Producer Stability AI Is Worth $1 Billion

According to a Bloomberg report citing a "source familiar with the matter," Stability AI, the firm behind the well-known text-to-image AI software Stable Diffusion, has raised new funding that values the company at about $1 billion. The valuation is a significant endorsement of the company's approach to AI development, which, in contrast to established players like OpenAI and Google, emphasizes open-source models that anyone may use without restriction.

In a press release, Stability AI announced that it had raised $101 million in a round led by Coatue, Lightspeed Venture Partners, and O'Shaughnessy Ventures. The company said it would use the funds to "accelerate the development of open AI models for image, language, audio, video, and more, for consumer and enterprise use cases globally."

Stable Diffusion is one of the best-known examples of text-to-image AI, a category that also includes OpenAI's DALL-E, Google's Imagen, and Midjourney. But Stability AI has set itself apart from the competition by releasing its software as open source, which means anyone can extend the company's code or even use it to power their own commercial products. Stability AI says it intends to make money by building supporting infrastructure and customizing versions of the software for business clients; it already offers its own commercial version of the model, dubbed DreamStudio.

The company is headquartered in London and employs about 100 people worldwide, a figure it says will grow to about 300 over the coming year.

Beyond malicious uses, text-to-image models also raise unresolved legal questions.
These models are all trained on data scraped from the internet, including copyrighted content such as artists' blogs, personal websites, and stock-photography sites. Some people whose work has been used to train these systems without their consent have expressed interest in pursuing legal action or seeking compensation. As companies like Stability AI demonstrate that they can turn other people's labor into profit, these disputes will presumably only become more serious.

By Raulf Hernes

