49. Diffusion-based Image-to-Video (5 subtopics)
    50. Choosing steps/CFG and motion strength (avoid over-motion) (see the settings sketch after this list)
    51. Keeping identity: reference image weighting and face/subject locks
    52. Temporal consistency strategies (seeds, guidance, consistency settings)
    53. Control signals: depth, pose, edges (when and how to use them)
    54. Diffusion I2V artifacts and fixes (texture crawl, jitter, morphing)
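Subtopics 50 and 52 lend themselves to a concrete starting point. Below is a minimal sketch, assuming the Hugging Face diffusers StableVideoDiffusionPipeline; the model ID, step count, motion_bucket_id, and guidance values are illustrative defaults to tune from, not settings prescribed by this outline.

```python
# Minimal sketch: conservative I2V settings with a fixed seed, assuming
# diffusers' StableVideoDiffusionPipeline. All numbers are starting points.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("reference.png")      # the conditioning still
generator = torch.manual_seed(42)        # fixed seed so reruns isolate one change at a time

frames = pipe(
    image,
    num_inference_steps=25,      # more steps = more detail, slower runs
    motion_bucket_id=90,         # below the default (~127) to restrain motion
    noise_aug_strength=0.02,     # low values keep output close to the input image
    max_guidance_scale=3.0,      # CFG ceiling; pushing it up exaggerates motion artifacts
    decode_chunk_size=4,         # smaller chunks trade decode speed for VRAM
    generator=generator,
).frames[0]

export_to_video(frames, "clip.mp4", fps=7)
```

Lowering motion_bucket_id is the usual first lever against over-motion; rerunning with the same seed makes the effect of each knob visible in isolation.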
55. Video Transformers (I2V / T2V hybrids) (4 subtopics)
    56. Selecting a model: realism vs stylization and best use-cases
    57. Clip length, context, and memory limits (what drives coherence)
    58. Balancing prompt vs image conditioning to control motion and style
    59. Stitching multiple generations into a longer shot (planning + blending) (see the crossfade sketch after this list)
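For subtopic 59, the blending half can be shown in a few lines. This is a minimal sketch of a linear crossfade over overlapping frames; it assumes the planning half was done, i.e. the two generations were set up so the tail of one and the head of the next depict the same moment. Clip names and the overlap length are illustrative.

```python
# Minimal sketch: stitch two generated clips with a linear crossfade on the
# overlapping frames. The code blends; matching content is a planning problem.
import numpy as np

def crossfade_stitch(clip_a: np.ndarray, clip_b: np.ndarray, overlap: int = 8) -> np.ndarray:
    """clip_a, clip_b: (frames, H, W, C) uint8 arrays; returns the stitched clip."""
    a_tail = clip_a[-overlap:].astype(np.float32)
    b_head = clip_b[:overlap].astype(np.float32)
    # Blend weights ramp 0 -> 1 across the overlap window.
    w = np.linspace(0.0, 1.0, overlap, dtype=np.float32)[:, None, None, None]
    blended = ((1.0 - w) * a_tail + w * b_head).astype(np.uint8)
    return np.concatenate([clip_a[:-overlap], blended, clip_b[overlap:]], axis=0)

# Usage: longer = crossfade_stitch(gen1_frames, gen2_frames, overlap=8)
```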
60. Keyframe & Interpolation Approaches (3 subtopics)
    61. Animating between start/end frames (keyframe planning)
    62. Frame interpolation to smooth low-fps output (see the sketch after this list)
    63. Speed ramps and motion retiming to improve pacing
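For subtopic 62, a plain linear blend shows where the inserted frames sit in the timeline. Production pipelines use flow-based or learned interpolators (RIFE, FILM, and similar) to avoid ghosting on fast motion; this is a baseline sketch, not the recommended method.

```python
# Minimal sketch: double the frame rate by inserting linearly blended
# midpoints. A plain blend ghosts on fast motion; learned interpolators
# fix that, but the slot arithmetic is the same.
import numpy as np

def double_fps_linear(frames: np.ndarray) -> np.ndarray:
    """frames: (N, H, W, C) uint8; returns (2N-1, H, W, C) with midpoints inserted."""
    a = frames[:-1].astype(np.float32)
    b = frames[1:].astype(np.float32)
    mids = ((a + b) / 2.0).astype(np.uint8)
    out = np.empty((2 * len(frames) - 1, *frames.shape[1:]), dtype=np.uint8)
    out[0::2] = frames   # originals keep the even slots
    out[1::2] = mids     # blended midpoints fill the odd slots
    return out
```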
64. 3D/2.5D Parallax Animation (4 subtopics)
    65. Creating depth maps and handling depth errors for parallax
    ↗ Creating foreground/midground/background layers for parallax (see Chapter 2)
    66. Simulated camera moves: push-in, orbit, dolly (what looks natural) (see the push-in sketch after this list)
    67. Avoiding cardboarding and edge tearing (cleanup and feathering)
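For subtopic 66, the push-in is the easiest move to sketch. Assuming layers have already been separated and given feathered alpha mattes (subtopic 65 and the Chapter 2 cross-reference), the sketch below scales each layer at a rate set by its depth and composites back to front; the function name, depth convention, and zoom amount are illustrative.

```python
# Minimal sketch: 2.5D push-in. Near layers zoom faster than far ones,
# which is what sells the depth; composite order is background first.
from PIL import Image

def push_in_frame(layers, t, max_zoom=0.08):
    """layers: list of (RGBA Image, depth), depth in [0,1] with 0 = nearest,
    ordered background first; t in [0,1] is progress through the move."""
    w, h = layers[0][0].size
    canvas = Image.new("RGBA", (w, h))
    for layer, depth in layers:
        zoom = 1.0 + max_zoom * t * (1.0 - depth)   # nearer layers zoom more
        nw, nh = int(w * zoom), int(h * zoom)
        scaled = layer.resize((nw, nh), Image.LANCZOS)
        # Center-crop back to canvas size so the zoom reads as a camera move.
        left, top = (nw - w) // 2, (nh - h) // 2
        canvas.alpha_composite(scaled.crop((left, top, left + w, top + h)))
    return canvas

# Usage: frames = [push_in_frame(layers, i / 47) for i in range(48)]
```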
68. Fine-Tuning & Personalization (LoRA/embeddings) (4 subtopics)
    69. Collecting a small dataset safely (10–30 images, consistent labeling)
    70. Training a LoRA for character/style consistency (basic workflow)
    71. Validation and overfit prevention (holdout checks, drift checks) (see the holdout sketch after this list)
    72. Applying personalization in I2V: strengths, triggers, and safe ranges
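For subtopic 71, the core discipline is mechanical: hold a few images out before training, then reuse them with fixed prompts and seeds when comparing checkpoints for identity drift. Below is a minimal sketch of the split; the scoring step is deliberately left as a hypothetical hook, since the metric (face embedding, CLIP similarity, or human review) varies by project.

```python
# Minimal sketch: deterministic train/holdout split for a 10-30 image LoRA
# dataset. The holdout set never enters training; it exists to catch
# overfitting and drift across checkpoints.
import random

def split_dataset(image_paths, holdout_frac=0.2, seed=0):
    """Returns (train, holdout); sorting first makes the split reproducible."""
    paths = sorted(image_paths)
    rng = random.Random(seed)
    rng.shuffle(paths)
    k = max(1, int(len(paths) * holdout_frac))
    return paths[k:], paths[:k]

# Usage (my_images is your list of file paths):
#   train_set, holdout_set = split_dataset(my_images)
#   then render holdout prompts at each checkpoint and compare with a
#   hypothetical identity_score(...) or by eye.
```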