TL;DR — paste a link, inspect parts, clean UTM, and export params
Paste any link into this URL parser to instantly extract protocol, origin, hostname, port, path, query, and hash. View path segments, a domain split (subdomain/domain/TLD), and decoded/encoded parameter tables for both the query and hash. Clean UTM and ad tracking, sort keys A→Z, strip empty values, normalize, and export as JSON, .env, or cURL flags. It’s fast, private, and 100% client‑side.
URL Parser — analyze, clean, and export query strings (client‑side)
This upgraded URL analyzer is built for real‑world debugging. It does more than just split protocol and host: you get origin, port, pathname, search, and hash at a glance, plus path segments you can copy individually. A smart domain guess shows subdomain, domain, and TLD for quick SEO and routing checks. Parameters are presented in a clear table with both decoded and encoded forms, and duplicates are preserved in order for accuracy during triage.
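The breakdown described above can be sketched with the standard WHATWG URL API, which is what browsers expose to client‑side code. The example link is hypothetical:

```javascript
// Parse a link and collect the parts the tool displays at a glance.
const u = new URL("https://shop.example.co.uk:8443/products/shoes?color=red#top");

const parts = {
  protocol: u.protocol,   // "https:"
  origin:   u.origin,     // "https://shop.example.co.uk:8443"
  hostname: u.hostname,   // "shop.example.co.uk"
  port:     u.port,       // "8443"
  pathname: u.pathname,   // "/products/shoes"
  search:   u.search,     // "?color=red"
  hash:     u.hash,       // "#top"
};

// Path segments you can copy individually:
const segments = u.pathname.split("/").filter(Boolean); // ["products", "shoes"]
```

The subdomain/domain/TLD split is a heuristic on top of `hostname`; the URL API itself does not distinguish them.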
Messy campaign links? Use Clean UTM to remove marketing and ad tracking keys like utm_source, utm_campaign, gclid, and fbclid. Long query strings? Tap Sort params to alphabetize keys for stable diffs and easier comparisons, or Strip empty to remove parameters without values. Click Normalize to rebuild the query string in the detected order for consistent output, then copy the refreshed URL with one click.
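A minimal sketch of those one‑click transforms, using `URLSearchParams` on a hypothetical campaign link (the tracking‑key list here is illustrative, not the tool's exact list):

```javascript
// Common marketing/ad-tracking keys (illustrative subset).
const TRACKING_KEYS = ["utm_source", "utm_medium", "utm_campaign",
                       "utm_term", "utm_content", "gclid", "fbclid"];

function cleanUrl(href) {
  const u = new URL(href);
  const p = u.searchParams;

  // Clean UTM: drop tracking keys.
  TRACKING_KEYS.forEach((k) => p.delete(k));

  // Strip empty: remove parameters without values.
  for (const [k, v] of [...p.entries()]) {
    if (v === "") p.delete(k);
  }

  // Sort params A→Z for stable diffs.
  p.sort();

  // Normalize: URL re-serializes the rebuilt query automatically.
  return u.toString();
}

cleanUrl("https://example.com/p?utm_source=news&b=2&a=1&empty=");
// → "https://example.com/p?a=1&b=2"
```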
When you need to reuse parameters elsewhere, export them the way your workflow prefers: copy an object as JSON for scripts, output .env lines for server configs, or generate cURL --data-urlencode flags for quick HTTP tests. The hash fragment is not ignored, either: when it looks like a query (for example #?token=xyz), the parser surfaces those key–values beside decoded/encoded views for a complete picture.
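The three export shapes can be sketched like this, from a hypothetical query string; the uppercase .env key convention is an assumption, and the exact output of the tool may differ:

```javascript
const params = new URLSearchParams("a=1&a=2&name=Ada Lovelace");

// JSON export: duplicate keys collapse to arrays.
const json = {};
for (const [k, v] of params) {
  json[k] = k in json ? [].concat(json[k], v) : v;
}
// json → { "a": ["1", "2"], "name": "Ada Lovelace" }

// .env export: one KEY=value line per pair (uppercasing assumed).
const env = [...params].map(([k, v]) => `${k.toUpperCase()}=${v}`).join("\n");

// cURL export: one --data-urlencode flag per pair.
const curl = [...params]
  .map(([k, v]) => `--data-urlencode '${k}=${v}'`)
  .join(" ");
```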
It’s all client‑side, so nothing leaves your browser. The parser is tolerant of minor issues: it translates + to spaces for form‑style values and handles imperfect percent sequences as gracefully as possible. If a link lacks a protocol, the tool can assume https://, and a Base URL field lets you resolve relative paths without changing your original string. In short: paste, inspect, clean, and ship a pristine link in seconds.
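One way to implement that tolerant decoding is to translate form‑style + first and fall back to the raw text when decodeURIComponent rejects a malformed percent sequence; this is a sketch, not necessarily the tool's exact logic:

```javascript
function tolerantDecode(raw) {
  const withSpaces = raw.replace(/\+/g, " "); // form-style + means space
  try {
    return decodeURIComponent(withSpaces);
  } catch {
    return withSpaces; // malformed % sequence: keep it readable, not broken
  }
}

tolerantDecode("hello+world%21"); // → "hello world!"
tolerantDecode("50%-off");        // → "50%-off" (decodeURIComponent would throw)
```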
How to use the URL Parser
- Paste a URL: Include the protocol if you can. If not, the tool can infer https://. For relative links, set a Base URL.
- Scan the basics: Protocol, origin, hostname, port, path, search, and hash appear instantly with copy buttons.
- Inspect parameters: Query pairs are shown decoded and encoded, with duplicates preserved in order. If the hash carries pairs (for example #?x=1), they are parsed too.
- Clean & organize: Remove UTM/trackers, sort keys A→Z, and strip empty values. Click Normalize to rebuild a tidy link.
- Export: Copy as JSON, .env lines, or cURL flags. You can also copy individual fields or the entire URL.
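The hash‑query step above can be sketched as: when the fragment contains a ?, treat everything after it as an ordinary query string. A minimal version, with a hypothetical link:

```javascript
function hashParams(href) {
  const { hash } = new URL(href);     // e.g. "#?x=1&y=2"
  const q = hash.indexOf("?");
  if (q === -1) return null;          // plain fragment, no pairs
  return new URLSearchParams(hash.slice(q + 1));
}

const p = hashParams("https://example.com/app#?x=1&y=2");
p.get("x"); // → "1"
```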
Key features
- Smart parsing with optional Base URL (handles missing protocol gracefully)
- Instant breakdown: protocol, origin, hostname, port, path, query, hash
- Domain guess: subdomain / domain / TLD, plus clickable path segments
- Query table with decoded + encoded values; order and duplicates preserved
- Hash query extraction when fragments contain pairs (e.g., #?x=1)
- One‑click transforms: Clean UTM, Sort params A→Z, Strip empty values, Normalize
- Copy exports for JSON, .env, and cURL --data-urlencode
- Tolerant decoding (+ to space, graceful handling of malformed %)
- Private and fast: 100% client‑side
Tips
- If a link fails to parse, add https:// or set a Base URL for relative paths.
- Use Clean UTM to remove marketing/tracking keys like utm_source, gclid, and fbclid.
- Sort params A→Z to stabilize diffs and make long links easier to compare.
- Strip empty values to tidy URLs produced by conditional forms or client code.
- Export as JSON for config, .env for server apps, or cURL flags for quick requests.
- Need to percent‑encode or decode parts? Use the URL Encoder / Decoder companion tool.
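The Base URL tip relies on standard relative‑reference resolution, which the URL constructor performs when given a second argument. A short sketch with hypothetical paths:

```javascript
// A relative link parses once a base is supplied;
// the original relative string itself is unchanged.
const base = "https://example.com/docs/";
const u = new URL("../assets/logo.png", base);
u.href; // → "https://example.com/assets/logo.png"
```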
Frequently asked questions
Will it work with custom schemes like s3:// or app://?
Usually, yes. Nonstandard schemes are parsed when possible: you will see the protocol (for example s3:) and the bucket/path split across host and pathname where appropriate.
How are duplicate keys represented in JSON export?
Repeated keys are collected into an array, so a=1 and a=2 becomes { "a": ["1", "2"] }.
What does Normalize do?
It rebuilds the query string from the parsed pairs in the detected order, giving you a consistent link you can copy with one click.
Does the tool modify the path or only the query?
The transforms (Clean UTM, Sort params, Strip empty, Normalize) operate on the query string (and hash pairs when present); the path is left untouched.
Is the parser safe for sensitive links?
Yes. Everything runs 100% client‑side, so the URL never leaves your browser.
Related tools
URL Encoder / Decoder · Base64 Encoder / Decoder · JSON Formatter
Last updated: Aug 25, 2025