Keyframer: How Apple’s AI animation tool brings images to life with text

Imagine being able to animate any image with just a few words. That’s what Apple’s new AI animation tool, Keyframer, can do. Keyframer is a prototype that uses large language models (LLMs) to generate CSS code that animates Scalable Vector Graphics (SVG) files.

SVG files are a type of image format that can be scaled up or down without losing quality. They are often used for web design, logos, icons, and illustrations. With Keyframer, users can upload an SVG image and enter a text prompt that describes how they want the image to be animated. For example, they can type “make the stars twinkle” or “make Saturn rotate”. The tool then produces CSS code that applies the animation to the image.
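To illustrate, a prompt like “make the stars twinkle” might yield CSS along these lines. This is a hypothetical sketch of the kind of output the article describes, not Keyframer’s actual code; the `.star` selector assumes the SVG’s star shapes share that class name:

```css
/* Illustrative animation for the prompt "make the stars twinkle" */
/* Assumes the uploaded SVG tags its star shapes with class="star" */
@keyframes twinkle {
  0%, 100% { opacity: 1; }
  50%      { opacity: 0.3; }
}

.star {
  animation: twinkle 2s ease-in-out infinite;
}
```

Because SVG elements can be styled with ordinary CSS selectors, a stylesheet like this is all that is needed to animate the image in a browser.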


How Keyframer works

Keyframer is powered by OpenAI’s GPT-4, a state-of-the-art LLM that can generate natural-language text for various tasks. Apple’s researchers believe that LLMs have a lot of potential for animation, as they have already shown impressive results in other creative domains such as writing and image generation. Keyframer is one of the first examples of how LLMs can be used for animation.

The tool works by converting the text prompt into a sequence of tokens, which are fed into the GPT-4 model. The model outputs a sequence of tokens that represent the CSS code for the animation. That CSS code is then parsed and applied to the SVG image, creating the animation.

The benefits of Keyframer

The tool has several advantages over other methods of AI-generated animation. First, it is very simple and intuitive to use, as it does not require any coding skills or complex software. Users can create multiple animation designs in one go, and tweak the parameters such as color codes and animation durations in a separate window. The CSS code is also fully editable, so users can fine-tune the animation as they wish. Second, it is very flexible and versatile, as it can animate any SVG image based on any text prompt. Users can experiment with different styles, effects, and transitions, and create unique and expressive animations.
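For instance, tweaking a color code or an animation duration amounts to a small hand edit of the generated CSS. The selector and values below are illustrative, not taken from the tool:

```css
/* Hand-editing generated code: slow the rotation and change the fill */
/* (hypothetical .planet selector; values chosen for illustration) */
@keyframes rotate {
  from { transform: rotate(0deg); }
  to   { transform: rotate(360deg); }
}

.planet {
  fill: #e0a030;                        /* edited color code */
  animation: rotate 5s linear infinite; /* duration changed from 2s to 5s */
}
```

Since the output is plain CSS rather than an opaque rendered video, these one-line edits are all it takes to fine-tune a design.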

The limitations and challenges of Keyframer

However, the tool also has certain limitations and challenges. First, it is not publicly available yet, and it has only been tested by a small group of 13 people, who used two simple SVG images provided by the researchers. It is not clear how well the tool can handle more complex and diverse images and prompts. Second, it is only suitable for web-based animations, such as loading sequences, data visualizations, and animated transitions.

It cannot produce the high-quality, realistic animation seen in movies and video games, which requires more advanced techniques and data sources. The researchers acknowledge that text descriptions alone are not enough to capture the richness and complexity of animation, and suggest that future work could explore other modalities such as audio, video, and sketches.

Keyframer and the future of generative AI

Keyframer is part of Apple’s ongoing efforts to explore the possibilities of generative AI for creative applications. In December, the company introduced Human Gaussian Splats (HUGS), an AI model that can create animation-ready human avatars from video clips. Last week, the company also released MGIE, an AI model that can edit images using text-based descriptions. These models demonstrate how AI can augment and enhance human creativity, and open up new ways of expressing and communicating ideas.

Apple’s AI animation tool, Keyframer, is a novel and exciting technology that can bring images to life with text. It shows how LLMs can be applied to animation, and how animation can be made more accessible and fun for everyone. Keyframer is a glimpse into the future of generative AI and animation.
