Background Remover
Remove image backgrounds free online. Runs in your browser, no uploads. Download as a transparent PNG.
Drop an image to remove background
Works best with solid or uniform backgrounds
Upload Image · PNG, JPG, WEBP · 100% private
🔒 Fully Private
Images stay in your browser. No uploads, no servers, no data collection.
🎯 Smart Detection
An on-device AI model automatically detects the subject for clean background removal.
🖼️ Multiple Backgrounds
Replace with transparent, white, black, or any custom color.
How to Use This Tool
Upload Your Image
Upload a JPEG, PNG, or WEBP image. For best results, use a well-lit photo where the subject is clearly distinct from the background.
AI Removes the Background
The AI automatically detects the subject and removes everything else. Processing takes 2–4 seconds. No manual selection required.
Download Transparent PNG
Preview the result on a checkered background. Optionally replace it with a solid color or your own background image. Download as PNG.
Frequently Asked Questions
Does background removal work on product photos?
Why use PNG and not JPG for transparent backgrounds?
Can it handle complex backgrounds like hair?
Is there a file size or resolution limit?
About Background Remover
The product team needs thirty headshots for the About page by Friday, and half of them were taken against different busy office walls. Or your Shopify catalog is full of vendor-supplied product photos shot on random backgrounds and the theme expects clean cutouts on white. This remover runs a U2Net-based ONNX model in your browser via onnxruntime-web (WebAssembly backend with optional WebGPU acceleration on recent Chrome), producing an alpha-matted PNG without uploading the image anywhere. It does a decent job on typical subjects — people against contrasted backgrounds, products with defined edges — and an honestly mediocre job on the hard cases that all ML matting tools struggle with: flyaway hair, fur, motion blur, semi-transparent glass, and subjects that blend into the background color. Output is a transparent PNG at the source resolution; you can optionally swap in a solid color, a gradient, or a different uploaded background in the same session rather than round-tripping through another editor for simple composites.
When to use this tool
Generating consistent headshots for a team page
Twenty employees sent photos taken on different phones in different lighting. Run each through the remover, composite onto a uniform dark-gray background, and the About page reads as a coherent team rather than a patchwork of office wallpaper. Plan roughly 3–5 seconds per image on a modern laptop.
Preparing Shopify or Amazon product shots
Amazon requires a pure-white background (#FFFFFF) for main product images. Drop the vendor photo in, remove the background, composite onto white, and the listing passes Amazon's image validation without a manual Photoshop pass. For products with soft or translucent edges (glass, jewelry) expect to touch up the matte manually.
Making a transparent sticker from a drawing
You scanned a marker drawing or sketched on an iPad and want to use it as a transparent asset in a Figma file or Keynote slide. The remover handles simple high-contrast drawings well (black ink on white paper) and produces a clean alpha channel you can drop into any layout.
Creating a cutout for a social-media post
An Instagram collage wants your subject isolated against a branded gradient background. Remove once, keep the PNG, then composite against as many different backgrounds as the campaign needs without re-running the remover. The alpha channel is reusable across all subsequent composites.
Isolating an object to measure or label it
Science fair photos, real-estate listing shots, and educational diagrams often need a clean subject against a known neutral background so measurements, labels, or callouts are unambiguous. The remover is a quick preprocessing step before passing the cutout into a layout tool or slide deck.
How it works
1. U2Net salient-object segmentation via ONNX Runtime Web
We run a quantized U2Net model (around 44MB download, cached after first use) through onnxruntime-web. The model produces a per-pixel probability map of how likely each pixel is to be part of the foreground subject. We threshold and soft-edge that map to produce the alpha channel. U2Net is specifically a salient-object detector — it learns to find the dominant subject in an image, not to segment every object, which is why it works on portraits and product shots but fails on images with no clear single subject.
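The threshold-and-soft-edge step can be sketched in plain JavaScript (a minimal illustration of the idea, not the production code; `probsToAlpha` and its parameter values are hypothetical):

```javascript
// Convert a per-pixel foreground-probability map into an 8-bit alpha channel.
// Values well below the threshold become fully transparent, values well above
// become fully opaque, and a narrow ramp around the threshold keeps edges soft.
// probs: Float32Array of values in [0, 1], one per pixel.
function probsToAlpha(probs, threshold = 0.5, width = 0.1) {
  const alpha = new Uint8ClampedArray(probs.length);
  const lo = threshold - width;
  const hi = threshold + width;
  for (let i = 0; i < probs.length; i++) {
    const p = probs[i];
    if (p <= lo) alpha[i] = 0;          // confidently background
    else if (p >= hi) alpha[i] = 255;   // confidently foreground
    else alpha[i] = Math.round(((p - lo) / (hi - lo)) * 255); // soft edge
  }
  return alpha;
}
```

The ramp width controls how wide the soft transition band is; a wider band hides jagged model output but also blurs genuinely sharp edges.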
2. WebGPU acceleration on supported browsers
Chrome 113+ and Edge 113+ ship WebGPU by default on Windows and macOS; Safari 17.4+ has it behind a flag. When available, we run the ONNX session with the WebGPU execution provider, which is typically 3–5x faster than the pure-WASM fallback on the same hardware. Firefox and older browsers fall back to WASM with SIMD; expect 3–6 seconds per 1000x1000 image on WASM versus 1–2 seconds on WebGPU.
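The fallback can be sketched as a small feature check before creating the session (an illustrative sketch; `pickExecutionProviders` is a hypothetical helper, while the `executionProviders` session option is standard onnxruntime-web API):

```javascript
// Choose onnxruntime-web execution providers based on browser support.
// WebGPU is preferred when available; WASM is always listed as the fallback,
// so session creation succeeds even if the WebGPU provider fails to init.
function pickExecutionProviders(hasWebGPU) {
  return hasWebGPU ? ['webgpu', 'wasm'] : ['wasm'];
}

// Usage in the browser (assumes `ort` is loaded from onnxruntime-web):
//   const providers = pickExecutionProviders('gpu' in navigator);
//   const session = await ort.InferenceSession.create(modelUrl, {
//     executionProviders: providers,
//   });
```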
3. Matte refinement via morphological erosion and feathering
Raw segmentation output has jagged edges at pixel transitions. We apply a 1-pixel erosion to pull the matte slightly inside the object boundary (preventing background halos) and a light Gaussian feather (sigma 0.8) to soften the alpha transition so the composite does not look cut-with-scissors. You can toggle the refinement off if the raw model output looks better on your specific image.
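The erosion half of the refinement can be sketched as a min-filter over 4-connected neighbors (an illustrative sketch, not the production code; the Gaussian feather is omitted for brevity):

```javascript
// 1-pixel morphological erosion of an alpha channel: each pixel takes the
// minimum of itself and its 4-connected neighbors, pulling the matte
// slightly inside the object boundary to prevent background halos.
// Pixels outside the image are treated as fully transparent.
function erodeAlpha(alpha, width, height) {
  const out = new Uint8ClampedArray(alpha.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      const up = y > 0 ? alpha[i - width] : 0;
      const down = y < height - 1 ? alpha[i + width] : 0;
      const left = x > 0 ? alpha[i - 1] : 0;
      const right = x < width - 1 ? alpha[i + 1] : 0;
      out[i] = Math.min(alpha[i], up, down, left, right);
    }
  }
  return out;
}
```

Using `Math.min` rather than a binary all-or-nothing test means the same pass works on grayscale alpha, so soft edges from the previous step survive the erosion.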
Honest limitations
- Model quality drops visibly on hair wisps, fur, semi-transparent materials, motion blur, and subjects that color-match the background; expect manual edge touch-up in a real editor for professional-quality output.
- First-time use requires downloading the roughly 44MB ONNX model, which is cached for subsequent runs but adds a noticeable delay on the first image of the session.
- On iOS Safari (prior to 17.4) and low-memory devices, WebAssembly may run out of heap on inputs above roughly 3000x3000 pixels; resize the source down first if you hit allocation errors.
Pro tips
The model is decent, not magic — edit edges manually on the hard cases
U2Net does a good job on subjects with clearly contrasting backgrounds and defined silhouettes. It struggles predictably on hair (especially flyaway strands against busy backgrounds), fur, fuzzy clothing, semi-transparent materials (glass, plastic bottles, veils), motion blur, and subjects whose color nearly matches the background. For high-stakes commercial work (product listings, hero photography, professional portraits) expect to touch up the matte in a real editor afterward — the remover is a starting point that saves 80% of the work, not a replacement for a skilled retoucher.
Photograph for the remover, not against it
If you control the shoot, give the model an easy job. Use a backdrop with strong luminance contrast against the subject (subject in dark clothing against a bright backdrop, or vice versa); avoid backdrops that match the subject's skin tone, hair color, or dominant clothing color. Shoot in diffused even lighting to minimize shadows that the model may interpret as part of the subject. A ten-dollar roll of seamless paper plus two softboxes produces product shots that remove cleanly in under two seconds; a busy office wall in mixed sunlight will produce matte errors no matter which tool you use.
Re-use the matte rather than re-running the model
Background swaps are free once you have the alpha channel — compositing a transparent PNG over a new background is a simple canvas operation that takes milliseconds. If you need the same subject against three different backgrounds, run the remover once, save the transparent PNG, then composite it three times. Running the remover three times wastes 15+ seconds of ML compute and gives you three slightly different mattes (since the model's quantized output is not perfectly deterministic across runs) that will make the final three composites inconsistent.
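The per-pixel operation that makes background swaps cheap is plain straight-alpha blending (a minimal sketch over flat RGBA data; `compositeOnColor` is a hypothetical name):

```javascript
// Composite an RGBA cutout over a solid background color.
// Straight (non-premultiplied) alpha: out = fg * a + bg * (1 - a).
// rgba: flat Uint8ClampedArray [R,G,B,A, R,G,B,A, ...]; bg: [R, G, B].
function compositeOnColor(rgba, bg) {
  const out = new Uint8ClampedArray(rgba.length);
  for (let i = 0; i < rgba.length; i += 4) {
    const a = rgba[i + 3] / 255;
    out[i]     = Math.round(rgba[i]     * a + bg[0] * (1 - a)); // R
    out[i + 1] = Math.round(rgba[i + 1] * a + bg[1] * (1 - a)); // G
    out[i + 2] = Math.round(rgba[i + 2] * a + bg[2] * (1 - a)); // B
    out[i + 3] = 255; // result is fully opaque
  }
  return out;
}
```

Running this three times against three backgrounds reuses the exact same alpha channel, which is why the composites stay consistent.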
Frequently asked questions
Why does the background remover fail on my photo with wispy hair?
Hair is notoriously hard for every matting algorithm — commercial, ML, and traditional alike. U2Net produces a per-pixel probability map, and fine strands only a few pixels wide occupy ambiguous values (say, 0.4 probability) that get thresholded to either fully in or fully out, producing either cut-off hair or visible background halo in the strands. Dedicated alpha matting algorithms (Deep Image Matting, MODNet) are better at hair but are much larger models and still imperfect. For professional portrait work where hair edges matter, expect to open the result in Photoshop and refine with the Select & Mask Refine Edge brush, which samples local color statistics to recover individual strands.
Is my image uploaded to a server for processing?
No. The entire pipeline runs in your browser — the ONNX model downloads once from a CDN (cached afterward), then all inference runs via onnxruntime-web on either the WebGPU or WebAssembly backend locally. Your image bytes never cross the network. You can verify this by opening the DevTools Network panel during background removal; the only requests should be the initial model download (on first use only) and standard page analytics that never see DOM or pixel data. This matters because people often remove backgrounds from personal photos, internal product shots, or confidential reference images where uploading to a third-party service would be a privacy failure.
How does this compare to commercial tools like remove.bg?
Commercial services typically run larger, more recent models (often custom fine-tuned variants of ModNet, BackgroundMattingV2, or proprietary architectures) on GPU-accelerated servers, and they produce visibly better edges on hard cases like hair, fur, and glass. Our in-browser model is smaller (44MB vs gigabyte-scale server models) because it has to download and run on consumer hardware, and quality on easy cases is close to commercial, but hard cases are noticeably worse. The trade-off is privacy (nothing leaves your machine) and cost (no per-image fee or monthly subscription). For casual and internal work the in-browser tool is fine; for commercial product photography consider paying for remove.bg Pro or Photoroom.
Can I remove a specific object other than the main subject?
Not reliably. U2Net is a salient-object detector — it finds the most visually dominant subject in the image and masks that. If you want to remove a specific non-salient object (say, a person in the background of a landscape, or one product from a group shot), U2Net will often pick the wrong subject. For that use case you need a semantic-segmentation or instance-segmentation model like SAM (Segment Anything), which is larger and not yet practical to run in-browser at interactive speed. Workaround for simple cases: crop the image first to isolate only the object you want masked, then run the remover on the cropped region.
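The crop-first workaround needs nothing more than copying rows out of the flat RGBA buffer before inference (an illustrative sketch; `cropRGBA` is a hypothetical helper, and in the browser the same thing can be done with `CanvasRenderingContext2D.drawImage`):

```javascript
// Crop a rectangular region out of flat RGBA pixel data, so the
// salient-object model sees only the object of interest.
// data: flat Uint8ClampedArray of the source image; width: source width.
function cropRGBA(data, width, x, y, cropW, cropH) {
  const out = new Uint8ClampedArray(cropW * cropH * 4);
  for (let row = 0; row < cropH; row++) {
    const src = ((y + row) * width + x) * 4; // start of this row's slice
    out.set(data.subarray(src, src + cropW * 4), row * cropW * 4);
  }
  return out;
}
```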
What image resolution works best?
The model was trained at 320x320 input resolution, and we upsample the output matte to match the source resolution via bilinear interpolation. Source images between roughly 800x800 and 2500x2500 work well — large enough to show meaningful detail, small enough that the upsampled matte does not look blocky. Very high-resolution sources (4000x4000+) are run through the same 320x320 model path, so the matte itself is not more detailed than on a smaller source; you pay the cost of the bigger source for no accuracy gain. For best quality, resize your source to around 1500x1500 before running the remover.
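The suggested pre-resize is simple but worth getting right: cap the longest side while preserving aspect ratio (a sketch; `fitWithin` is a hypothetical helper, and the 1500 default matches the sweet spot above):

```javascript
// Scale source dimensions so the longest side is at most maxSide,
// preserving aspect ratio; images already small enough pass through.
function fitWithin(width, height, maxSide = 1500) {
  const longest = Math.max(width, height);
  if (longest <= maxSide) return { width, height };
  const s = maxSide / longest;
  return { width: Math.round(width * s), height: Math.round(height * s) };
}
```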
Background removal is almost always a step in a longer visual pipeline. After cutout, image-resizer takes the transparent PNG to the exact dimensions a product listing or social post requires; image-compressor shrinks it for fast web delivery while preserving the alpha channel in WebP or PNG. If the final output is a marketing composite with text overlay, meme-generator handles caption addition onto the composited image. For pre-processing where the source photo has inconsistent sizing before masking, run image-resizer first to hit the roughly 1500x1500 sweet spot the model handles best.