Adobe is rolling out an artificial intelligence (AI) model that can use text prompts to generate video.
The Firefly Video Model was announced Monday (Oct. 14) as part of the company’s larger Adobe MAX product showcase.
The announcement comes on the heels of AI-powered video tools from other tech companies, such as OpenAI, ByteDance and Meta.
As a report by Reuters noted, Adobe — facing much bigger competitors — has staked its fortunes on creating AI models trained on data that it has rights to use, thus making sure the results can be legally used in commercial work.
The company will begin opening the tool to people on its waitlist but has not given a wider release date.
Adobe has not announced any customers for its video tools, but its image generation clients include PepsiCo, IBM, Mattel, IPG Health and Deloitte, which use the technology to “optimize workflows and scale content creation so creatives can spend more time exploring their creative visions,” the company said in its announcement.
The launch comes 10 days after Meta introduced generative AI research that shows how simple text inputs can be used to create custom videos and sounds and edit existing videos.
Dubbed Meta Movie Gen, the model builds on the company’s earlier generative AI models, Make-A-Scene and Llama Image, combining those models’ modalities and allowing more fine-grained control.
In other AI news, PYMNTS on Monday explored the rise of AI agents, software programs that carry out specific tasks without constant supervision.
“Whether handling customer requests, diagnosing medical conditions or predicting market trends, AI agents are versatile workhorses,” the report said. “Instead of waiting for humans to input every command, these agents operate autonomously, reacting to real-time data and adjusting their actions accordingly.”
AI agents come in several varieties, each with a range of capabilities. The most basic are reactive agents, which respond to environmental changes but don’t learn from experiences. They are essentially rule-followers, flawlessly executing instructions, but not anticipating what’s coming next.
“Proactive agents are more sophisticated,” PYMNTS wrote. “They can plan and anticipate future actions, making them useful for businesses that need foresight. They don’t just react, they strategize. By analyzing patterns, they can make predictions and optimize processes, often in real time.”
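That distinction can be made concrete in code. The following is a minimal, hypothetical Python sketch, not drawn from the article or any vendor’s product, contrasting a reactive agent that applies fixed rules to the latest reading with a proactive one that tracks recent readings and acts on the predicted trend.

```python
# Illustrative sketch only: the thermostat scenario and all names are hypothetical.
from collections import deque


class ReactiveAgent:
    """Rule-follower: responds to the current reading, learns nothing."""

    def act(self, temperature: float) -> str:
        if temperature > 24.0:
            return "cool"
        if temperature < 18.0:
            return "heat"
        return "idle"


class ProactiveAgent:
    """Keeps recent readings and anticipates the trend before a limit is crossed."""

    def __init__(self, window: int = 5):
        self.history: deque[float] = deque(maxlen=window)

    def act(self, temperature: float) -> str:
        self.history.append(temperature)
        if len(self.history) >= 2:
            # Naive linear extrapolation over the recent window.
            trend = self.history[-1] - self.history[0]
            predicted = temperature + trend
        else:
            predicted = temperature
        if predicted > 24.0:
            return "cool"
        if predicted < 18.0:
            return "heat"
        return "idle"


if __name__ == "__main__":
    readings = [21.0, 22.0, 23.0, 23.8]  # rising toward the upper limit
    reactive, proactive = ReactiveAgent(), ProactiveAgent()
    for t in readings:
        print(t, reactive.act(t), proactive.act(t))
```

In this toy setup the reactive agent stays idle until the limit is actually breached, while the proactive agent starts cooling as soon as the trend points past it, which is the kind of anticipation the report describes.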