Animate Images with AI

Transform static images into dynamic animations, create video content, and generate motion graphics using advanced AI models. Perfect for marketing content, social media, presentations, and creative projects.

Overview

AI animation capabilities enable you to:
  • Bring images to life with realistic motion and effects
  • Create video content from static images and descriptions
  • Generate smooth transitions between different scenes
  • Add special effects like particle systems and lighting
  • Produce marketing animations for social media and advertising

Image-to-Video

Convert static images into dynamic video content

Motion Effects

Add realistic motion, wind, water, and particle effects

Scene Transitions

Create smooth transitions between different scenes

Character Animation

Animate characters, faces, and objects naturally

Quick Start

import requests
import base64

class ImageAnimator:
    def __init__(self, api_key):
        self.api_key = api_key
    
    def animate_image(self, image_path, animation_prompt, duration=3.0, fps=24):
        """Animate a static image based on text description"""
        
        # Convert image to base64
        with open(image_path, "rb") as image_file:
            image_base64 = base64.b64encode(image_file.read()).decode('utf-8')
        
        response = requests.post(
            "https://api.anyapi.ai/v1/video/generations",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "model": "runway-gen2",
                "prompt": animation_prompt,
                "image": image_base64,
                "duration": duration,
                "fps": fps,
                "resolution": "1280x720"
            }
        )
        
        result = response.json()
        return result["data"][0]["url"]
    
    def create_motion_effect(self, image_path, effect_type="wind"):
        """Add specific motion effects to an image"""
        
        effect_prompts = {
            "wind": "gentle wind blowing through the scene, leaves and fabric moving naturally",
            "water": "water flowing and rippling naturally, reflections dancing",
            "fire": "realistic fire flickering and dancing with natural movement", 
            "snow": "soft snow falling gently with natural drift patterns",
            "clouds": "clouds moving slowly across the sky with natural flow",
            "particles": "magical particles floating and glowing with ethereal movement"
        }
        
        prompt = effect_prompts.get(effect_type, effect_type)
        return self.animate_image(image_path, prompt)
    
    def animate_character(self, image_path, action_description):
        """Animate characters or people in images"""
        
        animation_prompt = f"natural character animation: {action_description}, smooth and realistic movement, maintaining character appearance"
        
        return self.animate_image(image_path, animation_prompt, duration=4.0)
    
    def create_cinemagraph(self, image_path, motion_area, motion_type="subtle"):
        """Create cinemagraph-style animations with selective motion"""
        
        motion_descriptions = {
            "subtle": "very subtle, barely noticeable movement",
            "gentle": "gentle, smooth, hypnotic movement", 
            "dramatic": "more pronounced but still elegant movement"
        }
        
        motion_desc = motion_descriptions.get(motion_type, motion_type)
        prompt = f"cinemagraph style animation focusing on {motion_area} with {motion_desc}, everything else remains perfectly still"
        
        return self.animate_image(image_path, prompt, duration=2.0)

# Usage examples
animator = ImageAnimator("YOUR_API_KEY")

# Basic image animation
animated_url = animator.animate_image(
    "landscape.jpg",
    "camera slowly panning across the beautiful landscape, revealing hidden details"
)
print(f"Animated landscape: {animated_url}")

# Add wind effect
wind_animation = animator.create_motion_effect("portrait.jpg", "wind")
print(f"Wind effect: {wind_animation}")

# Animate character
character_animation = animator.animate_character(
    "person_standing.jpg", 
    "person waves hello and smiles warmly"
)
print(f"Character animation: {character_animation}")

# Create cinemagraph
cinemagraph = animator.create_cinemagraph(
    "coffee_shop.jpg",
    "steam rising from the coffee cup",
    "gentle"
)
print(f"Cinemagraph: {cinemagraph}")

Advanced Animation Techniques

Story-Driven Animation

import base64
import requests

class StoryAnimator:
    def __init__(self, api_key):
        self.api_key = api_key
    
    def create_narrative_sequence(self, images, story_prompts, transition_style="smooth"):
        """Create a narrative sequence from multiple images"""
        
        animations = []
        
        for i, (image_path, story_prompt) in enumerate(zip(images, story_prompts)):
            # Create individual scene animation
            scene_prompt = f"{story_prompt}, cinematic storytelling, {transition_style} transitions"
            
            animated_scene = self.animate_scene(image_path, scene_prompt, scene_number=i)
            animations.append(animated_scene)
        
        # Combine scenes into a cohesive story
        return self.combine_scenes(animations, transition_style)
    
    def animate_scene(self, image_path, story_prompt, scene_number=0):
        """Animate a single scene with story context"""
        
        # Add scene-specific timing and pacing
        duration = 5.0 if scene_number == 0 else 4.0  # Longer intro
        
        response = requests.post(
            "https://api.anyapi.ai/v1/video/generations",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "model": "runway-gen2",
                "prompt": story_prompt,
                "image": self.encode_image(image_path),
                "duration": duration,
                "fps": 24,
                "resolution": "1920x1080",
                "style": "cinematic"
            }
        )
        
        return response.json()["data"][0]["url"]
    
    def combine_scenes(self, scene_urls, transition_style):
        """Combine multiple animated scenes"""
        
        response = requests.post(
            "https://api.anyapi.ai/v1/video/edit",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "operation": "sequence",
                "inputs": scene_urls,
                "transitions": [transition_style] * (len(scene_urls) - 1),
                "output_format": "mp4"
            }
        )
        
        return response.json()["data"]["url"]
    
    def add_narrative_voice(self, video_url, script, voice_style="narrator"):
        """Add AI-generated voiceover to the animation"""
        
        # Generate voiceover
        voice_response = requests.post(
            "https://api.anyapi.ai/v1/audio/speech",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "model": "tts-1-hd",
                "input": script,
                "voice": voice_style,
                "response_format": "mp3"
            }
        )
        
        audio_url = voice_response.json()["url"]
        
        # Combine video with audio
        final_response = requests.post(
            "https://api.anyapi.ai/v1/video/edit",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "operation": "add_audio",
                "video": video_url,
                "audio": audio_url,
                "mix_level": 0.8
            }
        )
        
        return final_response.json()["data"]["url"]
    
    def encode_image(self, image_path):
        """Helper to encode image to base64"""
        with open(image_path, "rb") as image_file:
            return base64.b64encode(image_file.read()).decode('utf-8')

# Usage
story_animator = StoryAnimator("YOUR_API_KEY")

# Create a story sequence
images = ["scene1.jpg", "scene2.jpg", "scene3.jpg"]
story_prompts = [
    "opening scene: peaceful morning in a small village, gentle introduction",
    "rising action: storm clouds gathering, tension building in the atmosphere", 
    "climax: dramatic lightning illuminating the landscape, powerful moment"
]

narrative_video = story_animator.create_narrative_sequence(images, story_prompts, "cinematic")

# Add voiceover
script = """
A peaceful morning in the village was about to change forever. 
Dark clouds gathered on the horizon, bringing with them a storm of unprecedented power.
When lightning finally struck, it illuminated not just the landscape, but the beginning of a new chapter.
"""

final_video = story_animator.add_narrative_voice(narrative_video, script, "narrator")
print(f"Complete narrative: {final_video}")

Marketing Animation Studio

import base64
import requests

class MarketingAnimator:
    def __init__(self, api_key):
        self.api_key = api_key
    
    def create_product_showcase(self, product_image, product_name, key_features):
        """Create animated product showcase for marketing"""
        
        showcase_prompt = f"""
        Professional product showcase for {product_name}:
        - Smooth 360-degree rotation revealing all angles
        - Elegant lighting that highlights key features: {', '.join(key_features)}
        - Premium commercial photography style
        - Subtle zoom-in to show quality and detail
        - Professional studio lighting with soft shadows
        """
        
        return self.animate_with_style(product_image, showcase_prompt, "commercial")
    
    def create_social_media_story(self, image, platform="instagram", mood="energetic"):
        """Create platform-optimized animated stories"""
        
        platform_specs = {
            "instagram": {
                "aspect_ratio": "9:16",
                "duration": 3.0,
                "style": "vibrant, engaging, trendy"
            },
            "tiktok": {
                "aspect_ratio": "9:16", 
                "duration": 2.5,
                "style": "dynamic, fast-paced, attention-grabbing"
            },
            "facebook": {
                "aspect_ratio": "16:9",
                "duration": 4.0,
                "style": "professional, clean, accessible"
            }
        }
        
        spec = platform_specs.get(platform, platform_specs["instagram"])
        
        mood_styles = {
            "energetic": "dynamic movement, vibrant colors, upbeat pacing",
            "calm": "smooth, gentle motion, peaceful transitions",
            "luxurious": "elegant, sophisticated movement, premium feel",
            "playful": "bouncy, fun animations, colorful and lively"
        }
        
        style_description = mood_styles.get(mood, mood)
        
        prompt = f"""
        {platform} story animation with {style_description}:
        - Optimized for {spec['aspect_ratio']} aspect ratio
        - {spec['style']} presentation
        - Engaging visual flow that holds attention
        - Brand-appropriate motion and pacing
        """
        
        response = requests.post(
            "https://api.anyapi.ai/v1/video/generations",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "model": "runway-gen2",
                "prompt": prompt,
                "image": self.encode_image(image),
                "duration": spec["duration"],
                "aspect_ratio": spec["aspect_ratio"],
                "fps": 30
            }
        )
        
        return response.json()["data"][0]["url"]
    
    def create_explainer_animation(self, concept_image, explanation_text):
        """Create educational explainer animations"""
        
        # Break down explanation into key points
        key_points = explanation_text.split('. ')
        
        animations = []
        
        for i, point in enumerate(key_points):
            animation_prompt = f"""
            Educational explainer animation step {i+1}:
            - Clear visual demonstration of: {point}
            - Smooth, easy-to-follow motion
            - Professional educational style
            - Highlight key elements as they're explained
            - Maintain visual clarity and focus
            """
            
            step_animation = self.animate_with_style(
                concept_image, 
                animation_prompt, 
                "educational",
                duration=3.0
            )
            animations.append(step_animation)
        
        # Combine steps into complete explainer
        return self.combine_animations(animations, "educational")
    
    def create_before_after_reveal(self, before_image, after_image, reveal_style="wipe"):
        """Create before/after reveal animations"""
        
        reveal_styles = {
            "wipe": "smooth horizontal wipe transition revealing the transformation",
            "fade": "elegant cross-fade transition showing the change",
            "split": "split-screen comparison with synchronized movement",
            "morph": "smooth morphing transition between before and after states"
        }
        
        style_prompt = reveal_styles.get(reveal_style, reveal_style)
        
        # First animate the before state
        before_animation = self.animate_with_style(
            before_image,
            f"before state: subtle movement preparing for transformation, {style_prompt}",
            "transformation"
        )
        
        # Then animate the after state  
        after_animation = self.animate_with_style(
            after_image,
            f"after state: revealing the final result, {style_prompt}",
            "transformation"
        )
        
        # Combine with transition
        response = requests.post(
            "https://api.anyapi.ai/v1/video/edit",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "operation": "before_after_transition",
                "before": before_animation,
                "after": after_animation,
                "transition": reveal_style,
                "duration": 4.0
            }
        )
        
        return response.json()["data"]["url"]
    
    def animate_with_style(self, image, prompt, style, duration=3.0):
        """Helper method to animate with specific style"""
        
        style_modifiers = {
            "commercial": "professional commercial quality, studio lighting, premium feel",
            "educational": "clear, focused, easy to understand, professional presentation", 
            "transformation": "dramatic reveal, compelling visual story, engaging pacing",
            "social": "trendy, engaging, optimized for social media viewing"
        }
        
        full_prompt = f"{prompt}. {style_modifiers.get(style, '')}"
        
        response = requests.post(
            "https://api.anyapi.ai/v1/video/generations",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "model": "runway-gen2",
                "prompt": full_prompt,
                "image": self.encode_image(image),
                "duration": duration,
                "quality": "high"
            }
        )
        
        return response.json()["data"][0]["url"]
    
    def combine_animations(self, animation_urls, style):
        """Combine multiple animations with appropriate transitions"""
        
        response = requests.post(
            "https://api.anyapi.ai/v1/video/edit",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "operation": "sequence",
                "inputs": animation_urls,
                "style": style,
                "auto_transitions": True
            }
        )
        
        return response.json()["data"]["url"]
    
    def encode_image(self, image_path):
        """Helper to encode image to base64"""
        with open(image_path, "rb") as image_file:
            return base64.b64encode(image_file.read()).decode('utf-8')

# Usage
marketing_animator = MarketingAnimator("YOUR_API_KEY")

# Product showcase
product_animation = marketing_animator.create_product_showcase(
    "wireless_headphones.jpg",
    "Premium Wireless Headphones",
    ["noise cancellation", "premium materials", "comfortable fit"]
)
print(f"Product showcase: {product_animation}")

# Social media story
social_story = marketing_animator.create_social_media_story(
    "brand_lifestyle.jpg",
    platform="instagram",
    mood="energetic"
)
print(f"Instagram story: {social_story}")

# Explainer animation
explainer = marketing_animator.create_explainer_animation(
    "app_interface.jpg",
    "Our app simplifies your workflow. First, connect your accounts. Then, automate repetitive tasks. Finally, track your productivity gains."
)
print(f"Explainer animation: {explainer}")

# Before/after reveal
transformation = marketing_animator.create_before_after_reveal(
    "room_before.jpg",
    "room_after.jpg", 
    "wipe"
)
print(f"Transformation reveal: {transformation}")

Interactive Animation Workflows

import base64
import requests

class InteractiveAnimationStudio:
    def __init__(self, api_key):
        self.api_key = api_key
        self.project_assets = {}
        self.animation_history = []
    
    def start_animation_project(self, project_name, base_image, project_type="general"):
        """Initialize a new animation project with context"""
        
        self.project_assets = {
            "name": project_name,
            "base_image": base_image,
            "type": project_type,
            "animations": [],
            "variations": []
        }
        
        # Create initial animation
        initial_prompt = self.get_initial_prompt(project_type)
        first_animation = self.create_base_animation(base_image, initial_prompt)
        
        self.project_assets["animations"].append({
            "name": "base_animation",
            "url": first_animation,
            "prompt": initial_prompt
        })
        
        return {
            "project_id": project_name,
            "base_animation": first_animation,
            "suggestions": self.get_next_suggestions(project_type)
        }
    
    def create_base_animation(self, image_path, prompt, duration=3.0):
        """Generate the project's first animation from the base image"""
        
        # Mirrors the generation request used by the other animator classes
        with open(image_path, "rb") as image_file:
            image_base64 = base64.b64encode(image_file.read()).decode('utf-8')
        
        response = requests.post(
            "https://api.anyapi.ai/v1/video/generations",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "model": "runway-gen2",
                "prompt": prompt,
                "image": image_base64,
                "duration": duration,
                "fps": 24
            }
        )
        
        return response.json()["data"][0]["url"]
    
    def add_animation_layer(self, layer_type, layer_description):
        """Add a new animation layer to the current project"""
        
        base_animation = self.project_assets["animations"][-1]["url"]
        
        layer_prompts = {
            "effects": f"add visual effects: {layer_description}",
            "motion": f"enhance motion: {layer_description}",
            "atmosphere": f"add atmospheric elements: {layer_description}",
            "lighting": f"adjust lighting and mood: {layer_description}",
            "particles": f"add particle effects: {layer_description}"
        }
        
        prompt = layer_prompts.get(layer_type, layer_description)
        
        # Apply layer to existing animation
        response = requests.post(
            "https://api.anyapi.ai/v1/video/edit",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "operation": "add_layer",
                "base_video": base_animation,
                "layer_prompt": prompt,
                "blend_mode": "natural"
            }
        )
        
        enhanced_animation = response.json()["data"]["url"]
        
        self.project_assets["animations"].append({
            "name": f"{layer_type}_layer",
            "url": enhanced_animation,
            "prompt": prompt
        })
        
        return enhanced_animation
    
    def create_animation_variations(self, variation_count=3):
        """Create multiple variations of the current animation"""
        
        base_animation = self.project_assets["animations"][-1]
        variations = []
        
        variation_styles = [
            "more dramatic and intense version",
            "softer, more elegant version", 
            "faster-paced, more dynamic version",
            "cinematic, film-quality version",
            "artistic, stylized version"
        ]
        
        for i in range(variation_count):
            style = variation_styles[i % len(variation_styles)]
            
            variation_prompt = f"{base_animation['prompt']}, {style}"
            
            response = requests.post(
                "https://api.anyapi.ai/v1/video/generations",
                headers={
                    "Authorization": f"Bearer {self.api_key}",
                    "Content-Type": "application/json"
                },
                json={
                    "model": "runway-gen2",
                    "prompt": variation_prompt,
                    "reference_video": base_animation["url"],
                    "variation_strength": 0.6,
                    "duration": 3.0
                }
            )
            
            variation_url = response.json()["data"][0]["url"]
            
            variations.append({
                "style": style,
                "url": variation_url,
                "prompt": variation_prompt
            })
        
        self.project_assets["variations"] = variations
        return variations
    
    def adjust_animation_timing(self, new_duration, pacing="natural"):
        """Adjust the timing and pacing of the current animation"""
        
        current_animation = self.project_assets["animations"][-1]["url"]
        
        pacing_styles = {
            "natural": "maintain natural, realistic pacing",
            "slow": "slow, contemplative pacing for emphasis",
            "fast": "quick, energetic pacing for excitement", 
            "dramatic": "dramatic pauses and emphasis for impact"
        }
        
        pacing_instruction = pacing_styles.get(pacing, pacing)
        
        response = requests.post(
            "https://api.anyapi.ai/v1/video/edit",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "operation": "retiming",
                "input_video": current_animation,
                "new_duration": new_duration,
                "pacing_style": pacing_instruction,
                "preserve_quality": True
            }
        )
        
        retimed_animation = response.json()["data"]["url"]
        
        self.project_assets["animations"].append({
            "name": "retimed_animation",
            "url": retimed_animation,
            "prompt": f"retimed to {new_duration}s with {pacing} pacing"
        })
        
        return retimed_animation
    
    def export_final_animation(self, format_type="mp4", quality="high"):
        """Export the final animation in the desired format"""
        
        final_animation = self.project_assets["animations"][-1]["url"]
        
        response = requests.post(
            "https://api.anyapi.ai/v1/video/export",
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            },
            json={
                "input_video": final_animation,
                "output_format": format_type,
                "quality": quality,
                "optimization": "web" if format_type == "mp4" else "source"
            }
        )
        
        export_url = response.json()["data"]["download_url"]
        
        return {
            "download_url": export_url,
            "project_summary": self.get_project_summary(),
            "total_animations": len(self.project_assets["animations"])
        }
    
    def get_initial_prompt(self, project_type):
        """Get appropriate initial prompt based on project type"""
        
        initial_prompts = {
            "marketing": "professional, eye-catching animation suitable for marketing",
            "artistic": "creative, artistic animation with beautiful visual flow",
            "educational": "clear, informative animation that explains concepts well",
            "social": "engaging, trendy animation perfect for social media",
            "cinematic": "film-quality animation with dramatic cinematography"
        }
        
        return initial_prompts.get(project_type, "high-quality, engaging animation")
    
    def get_next_suggestions(self, project_type):
        """Get contextual suggestions for next steps"""
        
        suggestions = {
            "marketing": [
                "Add product highlight effects",
                "Create brand-colored overlays",
                "Add call-to-action elements"
            ],
            "artistic": [
                "Experiment with particle effects", 
                "Try different color grading",
                "Add abstract motion elements"
            ],
            "educational": [
                "Add explanatory text overlays",
                "Highlight key learning points",
                "Create step-by-step reveals"
            ]
        }
        
        return suggestions.get(project_type, [
            "Add motion effects",
            "Adjust lighting and mood", 
            "Create variations"
        ])
    
    def get_project_summary(self):
        """Get a summary of the current project"""
        
        return {
            "project_name": self.project_assets["name"],
            "total_animations": len(self.project_assets["animations"]),
            "variations_created": len(self.project_assets.get("variations", [])),
            "final_animation": self.project_assets["animations"][-1]["url"] if self.project_assets["animations"] else None
        }

# Usage
studio = InteractiveAnimationStudio("YOUR_API_KEY")

# Start a marketing project
project = studio.start_animation_project(
    "Product Launch Campaign",
    "new_product.jpg", 
    "marketing"
)
print(f"Started project: {project['project_id']}")
print(f"Base animation: {project['base_animation']}")

# Add effects layer
enhanced = studio.add_animation_layer(
    "effects", 
    "golden particle effects that highlight the product's premium quality"
)
print(f"Enhanced with effects: {enhanced}")

# Create variations
variations = studio.create_animation_variations(3)
print(f"Created {len(variations)} variations")

# Adjust timing
final_timing = studio.adjust_animation_timing(5.0, "dramatic")
print(f"Final timing adjustment: {final_timing}")

# Export final result
export_result = studio.export_final_animation("mp4", "high")
print(f"Export complete: {export_result['download_url']}")
print(f"Project summary: {export_result['project_summary']}")

Animation Styles and Effects

Technique | Description | Best For
Cinemagraph | Subtle motion in specific areas | Social media, websites
Parallax | Multi-layer depth movement | Immersive storytelling
Morphing | Smooth shape transformations | Product reveals, transitions
Particle Effects | Dynamic particle systems | Magic, energy, atmosphere
Character Animation | Natural character movement | Marketing, storytelling
Camera Motion | Virtual camera movements | Cinematic reveals
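
All of these techniques are prompt-driven, so they can be requested with the ImageAnimator class from the Quick Start. The sketch below is illustrative; the file name and exact prompt wording are examples, not a fixed vocabulary.

# Technique-driven prompts, reusing the ImageAnimator class from the Quick Start
animator = ImageAnimator("YOUR_API_KEY")

technique_prompts = {
    "parallax": "parallax effect: foreground, midground, and background layers drifting at different speeds to create depth",
    "camera_motion": "slow virtual dolly-in toward the subject, smooth and steady cinematic camera movement",
    "morphing": "smooth morphing transition, shapes flowing naturally into their next form"
}

# Example: request a parallax-style animation of a landscape image
parallax_url = animator.animate_image("mountain_vista.jpg", technique_prompts["parallax"])
print(f"Parallax animation: {parallax_url}")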

Effect Categories

Natural Effects

Wind, water, fire, snow, clouds, organic movement

Magical Effects

Particles, energy beams, glowing elements, sparkles

Cinematic Effects

Camera moves, lighting changes, depth of field

Motion Graphics

Text animations, UI elements, geometric shapes
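
These categories follow the same pattern as the effect_prompts dictionary in create_motion_effect above. The sketch below groups illustrative prompt fragments by category; the fragments and file name are examples only.

# Example prompt fragments per effect category; any of them can be passed as the
# animation_prompt argument of ImageAnimator.animate_image (see Quick Start)
EFFECT_LIBRARY = {
    "natural": [
        "gentle wind moving leaves and fabric",
        "soft rain falling with ripples forming in puddles"
    ],
    "magical": [
        "glowing particles drifting upward with a soft sparkle",
        "a faint energy beam pulsing around the subject"
    ],
    "cinematic": [
        "slow push-in with shallow depth of field",
        "warm lighting shift as if the sun is setting"
    ],
    "motion_graphics": [
        "clean geometric shapes sliding in behind the subject",
        "subtle text and UI elements animating into place"
    ]
}

animator = ImageAnimator("YOUR_API_KEY")
natural_url = animator.animate_image("forest.jpg", EFFECT_LIBRARY["natural"][0])
print(f"Natural effect: {natural_url}")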

Best Practices

1. Animation Planning

  • Storyboard first: Plan the motion before creating
  • Consider context: Match animation style to purpose
  • Duration matters: Optimize length for platform and use case
  • Smooth transitions: Ensure natural, believable movement

2. Technical Optimization

  • Frame rate: Use appropriate FPS for smooth motion
  • Resolution: Balance quality with file size requirements
  • Compression: Optimize for web delivery when needed
  • Format selection: Choose the right format for your platform
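
These settings map directly onto the request parameters used throughout this guide (fps, resolution, duration). One way to keep them consistent is a small preset table; the values below are illustrative defaults, not platform requirements.

# Illustrative parameter presets; tune fps, resolution, and duration to your own
# quality and file-size targets before sending the request
EXPORT_PRESETS = {
    "web": {"fps": 24, "resolution": "1280x720", "duration": 3.0},
    "social_vertical": {"fps": 30, "resolution": "720x1280", "duration": 3.0},
    "presentation": {"fps": 24, "resolution": "1920x1080", "duration": 5.0}
}

def build_generation_payload(prompt, image_base64, preset="web"):
    """Merge a preset into a video generation request body"""
    settings = EXPORT_PRESETS[preset]
    return {
        "model": "runway-gen2",
        "prompt": prompt,
        "image": image_base64,
        **settings
    }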

3. Creative Guidelines

  • Subtle is powerful: Often less motion is more effective
  • Consistent style: Maintain visual coherence throughout
  • Physics realism: Respect natural motion laws
  • Timing variation: Use varied timing for visual interest

4. Platform Optimization

  • Social media: Short, attention-grabbing animations
  • Marketing: Professional, brand-consistent motion
  • Education: Clear, easy-to-follow movements
  • Entertainment: Dramatic, engaging storytelling

Common Use Cases

Marketing & Advertising

Product showcases, brand animations, social media content

Social Media Content

Instagram stories, TikTok videos, engaging posts

Presentations

Dynamic slides, concept visualizations, data animations

Web Content

Hero banners, loading animations, interactive elements

Educational Content

Explainer videos, process demonstrations, tutorials

Creative Projects

Art installations, experimental videos, personal projects

E-commerce

Product demos, before/after reveals, feature highlights

Entertainment

Short films, music videos, creative storytelling

Model Recommendations

Animation Type | Recommended Model | Strengths
Realistic Motion | Runway Gen-2 | Natural movement, high quality
Artistic Styles | Stable Video Diffusion | Creative flexibility, styles
Character Animation | Runway Gen-2 | Facial expressions, gestures
Product Showcases | Runway Gen-2 | Commercial quality, lighting
Quick Prototypes | Pika Labs | Fast generation, iterations
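
If you select models programmatically, the table above can be expressed as a simple lookup. Only runway-gen2 appears in this guide's examples; the other identifier strings below are placeholders, so confirm the exact model names before using them.

# Map animation types from the table above to model identifiers. Only "runway-gen2"
# appears in this guide's examples; the other identifiers are placeholders
MODEL_BY_ANIMATION_TYPE = {
    "realistic_motion": "runway-gen2",
    "artistic_style": "stable-video-diffusion",  # placeholder identifier
    "character": "runway-gen2",
    "product_showcase": "runway-gen2",
    "quick_prototype": "pika-labs"  # placeholder identifier
}

def pick_model(animation_type):
    """Fall back to runway-gen2 for animation types not in the table"""
    return MODEL_BY_ANIMATION_TYPE.get(animation_type, "runway-gen2")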

Getting Started