Background removal separates a subject from its surroundings so you can place it on transparency, swap the scene, or composite it into a new design. Under the hood you’re estimating an alpha matte—a per-pixel opacity from 0 to 1—and then alpha-compositing the foreground over something else. This is the math from Porter–Duff and the cause of familiar pitfalls like “fringes” and straight vs. premultiplied alpha. For practical guidance on premultiplication and linear color, see Microsoft’s Win2D notes, Søren Sandmann, and Lomont’s write-up on linear blending.
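The "over" operation is simple enough to sketch directly. Here is a minimal NumPy version, assuming straight (non-premultiplied) alpha and values already in linear RGB; `composite_over` is an illustrative name, not a library call:

```python
import numpy as np

def composite_over(fg, alpha, bg):
    """Porter-Duff 'over' with straight alpha: out = a*fg + (1-a)*bg.
    fg, bg: float RGB in [0,1], shape (H, W, 3); alpha: (H, W, 1)."""
    return alpha * fg + (1.0 - alpha) * bg

# 2x2 toy example: an opaque pixel, a transparent pixel, two 50% edge pixels.
fg = np.full((2, 2, 3), 1.0)   # white foreground
bg = np.zeros((2, 2, 3))       # black background
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.5]]).reshape(2, 2, 1)
out = composite_over(fg, alpha, bg)
```

Blending in non-linear (gamma-encoded) sRGB or with non-premultiplied downscaled images is exactly what produces the dark "fringe" artifacts mentioned above.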
If you can control capture, paint the backdrop a solid color (often green) and key that hue away. It’s fast, battle-tested in film and broadcast, and ideal for video. The trade-offs are lighting and wardrobe: colored light spills onto edges (especially hair), so you’ll use despill tools to neutralize contamination. Good primers include Nuke’s docs, Mixing Light, and a hands-on Fusion demo.
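As a toy illustration of the idea (not a production keyer, which works in better color spaces with tunable tolerances and edge processing), a green-difference key plus a naive despill can be sketched like this; the function name is hypothetical:

```python
import numpy as np

def chroma_key_green(img):
    """Toy green-screen key: alpha falls as green exceeds the other
    channels (a simplified 'green difference' key). Naive despill caps
    the green channel at max(red, blue). img: float RGB in [0,1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    greenness = g - np.maximum(r, b)               # > 0 on the backdrop
    alpha = np.clip(1.0 - 3.0 * greenness, 0.0, 1.0)
    despilled = img.copy()
    despilled[..., 1] = np.minimum(g, np.maximum(r, b))  # despill
    return alpha, despilled

frame = np.array([[[0.0, 1.0, 0.0],    # pure green backdrop pixel
                   [0.8, 0.5, 0.3]]])  # subject pixel
alpha, despilled = chroma_key_green(frame)
```

The despill step is the part that neutralizes green contamination on hair and skin; real tools offer several despill formulas and let you restore lost luminance afterwards.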
For single images with messy backgrounds, interactive algorithms need a few user hints (e.g., a loose rectangle or scribbles) and converge to a crisp mask. The canonical method is GrabCut (book chapter), which learns color models for the foreground and background and iteratively applies graph cuts to separate them. Similar ideas appear in GIMP's Foreground Select, which is based on SIOX (ImageJ plugin).
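Real GrabCut fits Gaussian mixture color models and solves a graph cut with a smoothness term on each round; the simplified sketch below keeps only the iterate-the-color-models loop (single mean colors, nearest-mean assignment) to show the shape of the algorithm. All names are hypothetical:

```python
import numpy as np

def grabcut_sketch(img, rect, iters=5):
    """Greatly simplified GrabCut-style loop: pixels inside `rect` seed
    the foreground model; each iteration re-fits mean colors and
    reassigns pixels to the nearer mean. (Real GrabCut fits GMMs and
    solves a graph cut with a smoothness term.)
    img: float (H, W, 3); rect: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = rect
    fg_mask = np.zeros(img.shape[:2], dtype=bool)
    fg_mask[y0:y1, x0:x1] = True
    for _ in range(iters):
        fg_mean = img[fg_mask].mean(axis=0)
        bg_mean = img[~fg_mask].mean(axis=0)
        d_fg = ((img - fg_mean) ** 2).sum(axis=-1)
        d_bg = ((img - bg_mean) ** 2).sum(axis=-1)
        new_mask = d_fg < d_bg
        # Pixels outside the rectangle stay background, as in GrabCut.
        new_mask[:y0, :] = False; new_mask[y1:, :] = False
        new_mask[:, :x0] = False; new_mask[:, x1:] = False
        if np.array_equal(new_mask, fg_mask):
            break  # converged
        fg_mask = new_mask
    return fg_mask

# Synthetic test: a red square on a blue background.
img = np.zeros((10, 10, 3))
img[..., 2] = 1.0                  # blue background
img[3:7, 3:7] = [1.0, 0.0, 0.0]    # red subject
mask = grabcut_sketch(img, rect=(2, 2, 8, 8))
```

In practice you would call OpenCV's built-in GrabCut implementation rather than rolling your own; the point here is the iterate-and-reassign structure.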
Matting solves fractional transparency at wispy boundaries (hair, fur, smoke, glass). Classic closed-form matting takes a trimap (definite foreground, definite background, unknown) and solves a sparse linear system for alpha with strong edge fidelity. Modern deep image matting trains neural nets on the Adobe Composition-1K dataset (MMEditing docs) and is evaluated with metrics such as SAD, MSE, Gradient, and Connectivity (benchmark explainer).
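The compositing equation behind all matting is I = aF + (1 - a)B. When the foreground F and background B colors are known at a pixel, alpha has a closed-form least-squares estimate, which the sketch below implements (illustrative only; real matting must estimate F and B too, which is what makes the problem hard):

```python
import numpy as np

def alpha_from_known_fb(I, F, B, eps=1e-8):
    """Least-squares alpha for I = a*F + (1-a)*B with known F, B:
        a = (I - B) . (F - B) / |F - B|^2
    All inputs: float RGB in [0,1], shape (H, W, 3)."""
    num = ((I - B) * (F - B)).sum(axis=-1)
    den = ((F - B) ** 2).sum(axis=-1)
    return np.clip(num / (den + eps), 0.0, 1.0)

F = np.ones((1, 1, 3))        # white foreground
B = np.zeros((1, 1, 3))       # black background
I = np.full((1, 1, 3), 0.3)   # observed 30% blend
a = alpha_from_known_fb(I, F, B)
```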
Related segmentation work is also useful: DeepLabv3+ refines boundaries with an encoder–decoder and atrous convolutions (PDF); Mask R-CNN gives per-instance masks (PDF); and SAM (Segment Anything) is a promptable foundation model that zero-shots masks on unfamiliar images.
Academic work reports SAD, MSE, Gradient, and Connectivity errors on Composition-1K. If you’re picking a model, look for those metrics (metric defs; Background Matting metrics section). For portraits/video, MODNet and Background Matting V2 are strong; for general “salient object” images, U2-Net is a solid baseline; for tough transparency, FBA can be cleaner.
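SAD and MSE are easy to compute yourself when comparing models on your own images; Gradient and Connectivity need more machinery. A sketch, assuming the common convention of reporting SAD divided by 1000 (papers also often restrict the error to the trimap's unknown region — check each benchmark's protocol):

```python
import numpy as np

def matte_sad(pred, gt):
    """Sum of absolute differences between alpha mattes, reported /1000
    by convention on Composition-1K. pred, gt: float in [0,1], (H, W)."""
    return np.abs(pred - gt).sum() / 1000.0

def matte_mse(pred, gt):
    """Mean squared error between alpha mattes."""
    return ((pred - gt) ** 2).mean()

gt = np.zeros((10, 10))
pred = np.full((10, 10), 0.1)  # predicted matte off by 0.1 everywhere
```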
This converter runs entirely in your browser. When you select a file, it is read into memory and converted to the selected format. You can then download the converted file.
Conversions start instantly, and most files are converted in under a second. Larger files may take longer.
Your files are never uploaded to our servers. They are converted in your browser, and the converted file is then downloaded. We never see your files.
We support converting between all major image formats, including JPEG, PNG, GIF, WebP, SVG, BMP, and TIFF.
This converter is completely free, and will always be free. Because it runs in your browser, we don't have to pay for servers, so we don't need to charge you.
Yes! You can convert as many files as you want at once. Just select multiple files when you add them.