APOSTLE
AI Creative Director
Module 02: Midjourney V7 Deep Dive

The Editor and Omni Reference System

Master Midjourney's built-in editor tools — inpainting, panning, zoom, retexture — and the powerful new Omni Reference system.

14 min · Intermediate · Lesson 04 of 14


Vary Region (Inpainting)

Vary Region lets you select a portion of a generated image and regenerate just that area while keeping the rest intact.

  • How to access: Click "Vary Region" on any upscaled image.
  • Selection tools: Freeform lasso and rectangular selection.
  • Prompt override: You can write a new prompt for just the selected region.
  • Best for: Fixing hands, changing clothing, swapping backgrounds, removing unwanted elements.

Tips:

  • Select slightly larger than the area you want to change — this gives the model context for blending.
  • Keep your region prompt simple and specific. Don't re-describe the entire scene.
  • Multiple small region edits often work better than one large one.

Pan

Pan extends your image in any direction (left, right, up, down) while maintaining visual consistency.

  • How to access: Use the arrow buttons on any upscaled image.
  • Use cases: Expanding composition, revealing more environment, adjusting framing.
  • Prompt support: You can add a prompt to guide what appears in the new area.

Zoom Out

Zoom Out reveals more of the scene around your existing image, as if pulling the camera back.

  • Options: 1.5x and 2x zoom levels, plus custom zoom.
  • Best for: Creating wider compositions from tight crops, adding environmental context.
  • Custom zoom: Lets you specify an exact zoom level and add a guiding prompt.

Retexture

Retexture keeps the structural composition of your image but regenerates the surface appearance.

  • How it works: Preserves shapes, positions, and composition while changing materials, colors, and textures.
  • Best for: Exploring color palettes, changing seasons, material studies, style variations.
  • Prompt required: Describe the new texture/style you want applied to the existing composition.

Combined Editor Workflow

The real power comes from combining these tools in sequence:

  1. Generate your base image.
  2. Zoom Out if you need more environmental context.
  3. Pan to adjust framing and reveal relevant areas.
  4. Vary Region to fix specific problem areas (hands, faces, details).
  5. Retexture to explore alternative color/material treatments.

This iterative approach lets you sculpt a single image through multiple refinement passes rather than relying on re-rolling for perfection.


Omni Reference: Step-by-Step

Omni Reference (--oref) is V7's unified reference system. It intelligently interprets your reference image and applies relevant attributes — style, subject, composition, or all three.

7-Step Process

  1. Prepare your reference image — Choose a clear, high-quality image that represents what you want to transfer. Upload it to Discord or use a URL.
  2. Write your scene prompt — Describe the new image you want to create. Be specific about what should change from the reference.
  3. Add the oref parameter — Append --oref [image_url] to your prompt.
  4. Set the weight — Add --ow [value] to control how strongly the reference influences the output.
  5. Generate and evaluate — Run the prompt and assess how well the reference was interpreted.
  6. Adjust weight — If the reference is too dominant, lower --ow. If it's being ignored, raise it.
  7. Iterate — Refine your text prompt and weight until the balance between your description and the reference is right.
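The parameter assembly in steps 3 and 4 can be sketched as a small helper. This is an illustrative Python sketch, not a Midjourney API — `build_oref_prompt` is a hypothetical name, and the URL is a placeholder; only the `--oref` and `--ow` flags come from the lesson, with the 0–1000 weight range implied by the guide below.

```python
def build_oref_prompt(scene, ref_url, weight=100):
    """Assemble a Midjourney V7 prompt string with an Omni Reference.

    Hypothetical helper: Midjourney has no official API here; this just
    builds the text you would paste into Discord or the web app.
    """
    if not 0 <= weight <= 1000:
        raise ValueError("--ow must be between 0 and 1000")
    return f"{scene} --oref {ref_url} --ow {weight}"

prompt = build_oref_prompt(
    "a lighthouse keeper in a rain-soaked oilskin coat",
    "https://example.com/ref.png",
    weight=100,
)
print(prompt)
# a lighthouse keeper in a rain-soaked oilskin coat --oref https://example.com/ref.png --ow 100
```

Keeping the builder in one place makes step 6 easy: re-run with a different `weight` and compare outputs.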

Omni Reference Weight Guide

  • 1–25 — Subtle influence; the reference acts as a gentle suggestion. Use when you want a hint of the reference style without overpowering your prompt.
  • 100 (default) — Balanced; the reference and text prompt share equal influence. General purpose and a good starting point.
  • 200–400 — Strong reference adherence; output closely matches the reference aesthetics. Use for style transfer, maintaining brand consistency, or matching a specific look.
  • 400–1000 — Near-literal reproduction; the text prompt becomes secondary. Use for character consistency or recreating a specific image in a new context.
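The weight guide above can be condensed into a lookup for picking a starting `--ow` value. The goal labels and the helper name are our own shorthand, not Midjourney terminology; the numbers are midpoints chosen from the ranges in the guide.

```python
def suggest_ow(goal):
    """Return a starting --ow value for a given intent.

    Illustrative lookup based on the weight guide; adjust from these
    starting points rather than treating them as fixed rules.
    """
    table = {
        "hint": 25,             # subtle influence, gentle suggestion
        "balanced": 100,        # default; prompt and reference share influence
        "style_transfer": 300,  # strong reference adherence
        "character": 600,       # near-literal reproduction
    }
    return table[goal]

print(suggest_ow("style_transfer"))
# 300
```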

Best Practices

  1. Start at default weight (100) and adjust — Don't guess. Start at 100, evaluate, then move up or down in increments of 50–100.
  2. Use high-quality, unambiguous references — The clearer your reference, the better the model interprets it. Avoid busy, multi-subject reference images.
  3. Your text prompt still matters — Even at high weights, the text prompt guides what the model does with the reference. Don't leave it vague.
  4. Match aspect ratios — If your reference is 3:2, generating at 3:2 produces more consistent results than generating at 1:1.
  5. Combine with personalization — Oref + your --p profile can produce results that feel both referenced AND personal to your aesthetic.
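Practices 4 and 5 come together in a single prompt string. A sketch of what that looks like, with a placeholder scene and URL of our own invention — only the `--ar`, `--oref`, `--ow`, and `--p` flags are from the lesson:

```python
# One prompt combining a matched 3:2 aspect ratio, an Omni Reference at a
# moderately raised weight, and the personalization profile flag.
scene = "a ceramicist's studio at golden hour, shallow depth of field"
prompt = f"{scene} --ar 3:2 --oref https://example.com/ref.jpg --ow 150 --p"
print(prompt)
```

Because everything lives in one string, you can vary a single flag between runs and attribute any change in the output to that flag alone.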

Limitations

  • One reference per prompt — You can only use one --oref image at a time. For multi-reference workflows, use --sref for style and --oref for subject separately.
  • Not compatible with Vary/Pan/Zoom — Oref cannot be used in combination with editor tools. Generate with oref first, then edit.
  • Not compatible with --draft mode — Omni Reference requires full rendering. Draft mode skips the reference processing.
  • Not compatible with --q 4 — Ultra-quality mode and oref conflict. Use --q 2 or default.
  • ~2x GPU cost — Using oref approximately doubles the GPU time per generation compared to text-only prompts.
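The incompatibilities above can be caught before you spend GPU time. A minimal sketch, assuming naive substring matching on the prompt text; `check_oref_compat` is a hypothetical helper, not part of any Midjourney tooling:

```python
def check_oref_compat(prompt):
    """Flag parameter combinations this lesson lists as incompatible
    with --oref. Substring checks only, so it is deliberately naive."""
    problems = []
    if "--oref" not in prompt:
        return problems
    if "--draft" in prompt:
        problems.append("--draft skips reference processing; remove it")
    if "--q 4" in prompt:
        problems.append("--q 4 conflicts with --oref; use --q 2 or default")
    if prompt.count("--oref") > 1:
        problems.append("only one --oref image is allowed per prompt")
    return problems

print(check_oref_compat("castle at dusk --oref https://x/ref.png --draft"))
# ['--draft skips reference processing; remove it']
```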

Exercise

Editor Mastery Challenge

  1. Generate a portrait image you're 70% happy with.
  2. Use Vary Region to fix one specific issue (hands, clothing detail, background element).
  3. Use Pan to extend the image in one direction, adding environmental context.
  4. Use Zoom Out 1.5x to reveal more of the scene.
  5. Now find a reference image with a completely different mood/color palette. Use --oref with that reference and your original prompt to create a stylistic variation.
  6. Compare your original, edited, and oref versions side by side. Which workflow produced the best result?