JSON Formatter

Format and validate JSON online for free. Beautify or minify JSON instantly.

✓ Free ✓ No sign-up ✓ Works in browser




How to Use This Tool

1. Paste Your JSON

Paste your raw or minified JSON into the left input panel. The formatter accepts any valid JSON including nested objects and arrays.

2. Format or Minify

Click Format/Beautify to add proper indentation and line breaks, making it human-readable. Click Minify to compress it for production use.

3. Copy the Result

Click the Copy button to copy the formatted JSON to your clipboard. Use it in your code editor, API client, or documentation.


Frequently Asked Questions

What is JSON formatting?
JSON formatting (also called beautifying or pretty-printing) adds proper indentation, line breaks, and spacing to compact JSON, making it much easier for humans to read and debug.
Why does my JSON show an error?
Common JSON errors include: missing commas between properties, trailing commas after the last property, unquoted keys, single quotes instead of double quotes, and missing closing brackets.
What is the difference between JSON format and minify?
Formatting adds whitespace for readability. Minifying removes all unnecessary whitespace to reduce file size — useful for production APIs and web performance.
Is my JSON data secure when I use this tool?
Yes. All JSON processing happens entirely in your browser using JavaScript. Your data never leaves your device or gets sent to our servers.

About JSON Formatter

Stripe just returned a 400 on a webhook and the body is a single 12KB line of minified JSON with a nested error.details.payment_method_options object buried somewhere inside. Or your analytics pipeline dumped a malformed event into CloudWatch and you need to find which index in a 200-item array has the trailing comma before the on-call rotation gets louder.

This formatter parses with the native JSON.parse engine (the same one your runtime uses), catches the exact 'Unexpected token } in JSON at position 2847' error, and translates position 2847 into line 84, column 17 so you can stop counting commas by hand. It pretty-prints with configurable 2/4/tab indentation, minifies for copying into single-line log searches, builds a collapsible tree view for 10MB+ files, and optionally sorts keys alphabetically so you can diff two API responses meaningfully. Everything stays in the tab — paste a response containing a customer SSN and it never touches a server.

How it works

  1. Parsed by the engine's native JSON implementation

    Input goes straight into JSON.parse — the engine's built-in native implementation (V8's, if you are on Chrome or Node.js). When it throws, we grab the 'position N' from the error message and walk the input string to convert that character offset into a line and column, so the error says 'Unexpected comma — Line 84, column 17' instead of 'position 2847'.
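The offset-to-line/column translation described above can be sketched like this (a minimal illustration, not this tool's actual source; the regex assumes a V8-style 'at position N' message, which some engines and newer error formats omit):

```javascript
// Convert a JSON.parse error offset into a 1-indexed line and column.
function lineColFromOffset(input, offset) {
  let line = 1;
  let lastNewline = -1;
  for (let i = 0; i < offset && i < input.length; i++) {
    if (input[i] === "\n") {
      line++;
      lastNewline = i;
    }
  }
  return { line, column: offset - lastNewline };
}

function parseWithLocation(text) {
  try {
    return { value: JSON.parse(text) };
  } catch (err) {
    // V8 messages include "at position N"; fall back to the raw message otherwise.
    const m = /position (\d+)/.exec(err.message);
    if (!m) return { error: err.message };
    const { line, column } = lineColFromOffset(text, Number(m[1]));
    return { error: `${err.message} — line ${line}, column ${column}` };
  }
}
```

The walk is O(n) in the offset, which is why even a multi-megabyte input translates in well under a millisecond.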

  2. Indentation via JSON.stringify's space argument

    Pretty-printing uses JSON.stringify(value, null, indentSize) which handles nested objects, arrays, and Unicode escape sequences correctly. Key sorting is applied via a recursive walk that rebuilds each object with keys in locale-independent lexical order before the stringify call so output is stable across Node versions.
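A minimal sketch of that recursive key-sorting walk (illustrative only — Array.prototype.sort's default code-point comparison stands in here for whatever comparator the tool actually uses):

```javascript
// Recursively rebuild objects with sorted keys; arrays keep their order,
// primitives pass through unchanged.
function sortKeysDeep(value) {
  if (Array.isArray(value)) return value.map(sortKeysDeep);
  if (value !== null && typeof value === "object") {
    const sorted = {};
    for (const key of Object.keys(value).sort()) {
      sorted[key] = sortKeysDeep(value[key]);
    }
    return sorted;
  }
  return value;
}

// Pretty-print with a configurable indent (2, 4, or "\t").
function prettyPrint(text, indent = 2) {
  return JSON.stringify(sortKeysDeep(JSON.parse(text)), null, indent);
}
```

Sorting before the stringify call (rather than during it) keeps the two concerns separate, so the same sorted tree can feed the tree view and the diff output.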

  3. Tree mode renders lazily

    The collapsible tree view only renders nodes above depth 2 by default and defers deeper nodes until you click to expand. That keeps a 10MB input file responsive — rendering a fully-expanded 500,000-node tree would jank the browser for seconds and often crash Safari tabs on low-memory devices.

Pro tips

Use minify mode to grep your own logs

When an error message references a JSON field but you only have pretty-printed logs, paste the pretty output here, minify it, and search for the specific key-value substring in your log aggregator. CloudWatch, Datadog, and Loki all index by substring, and "amount":2499,"currency":"usd" on a single line matches far more reliably than matching across line breaks with context flags.
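The minify step itself is just a parse-then-stringify round trip; a sketch:

```javascript
// Minify: re-serialize with no space argument, collapsing all whitespace
// that sits outside string values.
function minify(text) {
  return JSON.stringify(JSON.parse(text));
}

minify('{\n  "amount": 2499,\n  "currency": "usd"\n}');
// → '{"amount":2499,"currency":"usd"}'
```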

Sort keys before checking responses into fixtures

If you store API response fixtures in your test suite, always sort keys before committing. Most servers do not guarantee key order (Go randomizes map iteration order, for example), so an unsorted fixture flakes on every test run. Pre-sort once here, commit, and your snapshot diffs only show real payload changes instead of key-order churn that masks actual regressions.

Escape-aware error messages save five minutes per bug

The line/column conversion is more valuable than it looks. JSON.parse's native error says 'at position 2847' which is useless in a 3KB response. By walking the input to count newlines before that offset, we output 'Line 84, column 17' which maps directly to what your editor shows. Paste, see the line number, Ctrl+G to that line in VS Code, fix the comma in under ten seconds.

Honest limitations

  • Strict JSON only — no trailing commas, no comments, no unquoted keys. For JSON5 or JSONC (VS Code settings files) strip those features first.
  • Very large inputs (>20MB) may freeze the tab during JSON.parse because the call is synchronous and blocks the main thread.
  • Tree view depth is unbounded, but expanding a fully-nested 100k-node tree can use 300MB+ of heap in Chrome — prefer search instead of full-expand for large payloads.

Frequently asked questions

Does this support JSONC or JSON5 with comments and trailing commas?

No. We use the strict JSON.parse spec because that matches what 99% of production parsers (V8, Go encoding/json, Python json, Rust serde_json) actually accept. Comments and trailing commas are convenient in config files but will crash a real API. If you are cleaning up a VS Code settings.json or a tsconfig, strip comments first with a regex like /\/\*[\s\S]*?\*\/|\/\/.*$/gm (beware: it will also mangle string values that contain //, such as URLs) — or use a JSON5-aware formatter built specifically for tooling configs rather than API payloads.
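If you would rather not risk the regex eating // inside string values (URLs are the classic victim), a small string-aware stripper is safer. A sketch, not a full JSONC parser — it ignores some edge cases a real one handles:

```javascript
// Strip // and /* */ comments from JSONC without touching string contents.
function stripJsonComments(text) {
  let out = "";
  let i = 0;
  let inString = false;
  while (i < text.length) {
    const ch = text[i];
    if (inString) {
      out += ch;
      if (ch === "\\") { out += text[i + 1] ?? ""; i += 2; continue; } // keep escape pairs intact
      if (ch === '"') inString = false;
      i++;
    } else if (ch === '"') {
      inString = true;
      out += ch;
      i++;
    } else if (ch === "/" && text[i + 1] === "/") {
      while (i < text.length && text[i] !== "\n") i++; // drop to end of line
    } else if (ch === "/" && text[i + 1] === "*") {
      const end = text.indexOf("*/", i + 2);
      i = end === -1 ? text.length : end + 2; // skip the whole block comment
    } else {
      out += ch;
      i++;
    }
  }
  return out;
}
```

Because string state is tracked explicitly, a value like "https://example.com" survives, where the line-comment regex would truncate it at the //.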

Why does my 30MB JSON file freeze the browser?

JSON.parse is synchronous and runs on the main thread. Parsing a 30MB string typically takes 2 to 5 seconds, during which the tab cannot respond to input, and the resulting object tree can consume 10x the raw byte size in heap because of V8's object overhead. For files over 20MB, prefer a line-oriented format like JSON Lines with a streaming reader (if each record is a separate line), or pipe through jq at the command line, where you have proper memory controls and can use --stream mode for constant-memory processing.

How does the line-and-column error reporting actually work?

When JSON.parse throws, V8 includes the character offset in the message string ('at position 2847'). We regex that number out, then iterate the original input character by character, counting '\n' characters up to that offset to derive a 1-indexed line number, and count characters from the last newline to derive the column. The whole translation takes under a millisecond even on a 10MB input and gives you a pointer straight to the bad character — far more useful than a raw offset that means nothing in a text editor.

Is key sorting deterministic across runs?

Yes, as long as you are sorting the same input. We sort with String.prototype.localeCompare using the 'en-US' collation, which gives consistent ordering regardless of the object's original insertion order. This matters when you are diffing two responses — without sorting, a server that serializes maps in hash order (Go, some Python configs) will show phantom diffs for objects whose keys are semantically equivalent. Sort both, diff the sorted outputs, and only real content differences remain.

Can the tool handle JSON with BigInt values or NaN?

Standard JSON has no BigInt, NaN, or Infinity — the spec only allows finite numbers and strings. V8 parses integers larger than Number.MAX_SAFE_INTEGER (2^53-1) with precision loss: the literal 9007199254740993 silently becomes 9007199254740992. If you are dealing with 64-bit IDs (Twitter snowflakes, Discord IDs), always serialize them as strings on the server. Some libraries extend JSON with BigInt support via a reviver, but pasting a BigInt literal here will simply lose precision without warning.

JSON work rarely happens in isolation. When a payload contains a JWT in the Authorization header, the jwt-decoder splits it into its header and payload JSON objects (plus the signature) that you can paste back here for diffing. Base64-encoded fields inside a webhook (think Stripe's raw_body signature verification) go through the base64-encoder to reveal their JSON structure. And if you are writing a log scraper that extracts specific fields from structured JSON, the regex-tester is where you iterate on the pattern against real sample payloads before wiring it into Logstash or Vector.
