URL Parser

Basics

Full URL (href)
https://onlinetextlab.com/path/page?user=test&id=123&id=123#section-2?q=hello+world&gclid=XYZ
Origin
https://onlinetextlab.com
Protocol
https:
Username
(empty)
Password
(empty)
Hostname
onlinetextlab.com
Port
(empty)
Pathname
/path/page
Search (query string)
?user=test&id=123&id=123
Hash
#section-2?q=hello+world&gclid=XYZ

Domain guess

Subdomain
(none)
Domain
onlinetextlab.com
TLD
com

Path segments

path
page

Query parameters

Key | Value (decoded) | Value (encoded)
user | test | test
id | 123 | 123
id | 123 | 123

Duplicate keys are preserved in order (e.g., a=1&a=2).

Hash

Hash (raw)
#section-2?q=hello+world&gclid=XYZ

Hash query (if any)

Key | Value (decoded) | Value (encoded)
section-2?q | hello world | hello+world
gclid | XYZ | XYZ

The hash often carries client‑side state; when it looks like a query (for example, #?x=1), the pairs are shown here.
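As a rough sketch of how such hash fragments can be parsed (the tool's actual implementation may differ), the standard URLSearchParams API handles the fragment body directly once the leading # is stripped; note that a stray ? inside the fragment simply becomes part of the first key, which matches the table above:

```javascript
// Extract decoded [key, value] pairs from a hash fragment that looks like
// a query, e.g. "#section-2?q=hello+world&gclid=XYZ" or "#?token=xyz".
function parseHashQuery(hash) {
  const body = hash.startsWith("#") ? hash.slice(1) : hash;
  if (!body.includes("=")) return []; // no key=value pairs in the fragment
  return [...new URLSearchParams(body).entries()]; // decoded, order preserved
}

console.log(parseHashQuery("#section-2?q=hello+world&gclid=XYZ"));
// [ [ 'section-2?q', 'hello world' ], [ 'gclid', 'XYZ' ] ]
```

URLSearchParams also translates + to a space during decoding, which is why hello+world appears as hello world in the decoded column.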

TL;DR — paste a link, inspect parts, clean UTM, and export params

Paste any link into this URL parser to instantly extract protocol, origin, hostname, port, path, query, and hash. View path segments, a domain split (subdomain/domain/TLD), and decoded/encoded parameter tables for both the query and hash. Clean UTM and ad tracking, sort keys A→Z, strip empty values, normalize, and export as JSON, .env, or cURL flags. It’s fast, private, and 100% client‑side.

URL Parser — analyze, clean, and export query strings (client‑side)

This upgraded URL analyzer is built for real‑world debugging. It does more than just split protocol and host: you get origin, port, pathname, search, and hash at a glance, plus path segments you can copy individually. A smart domain guess shows subdomain, domain, and TLD for quick SEO and routing checks. Parameters are presented in a clear table with both decoded and encoded forms, and duplicates are preserved in order for accuracy during triage.
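The same breakdown can be reproduced with the WHATWG URL API that browsers and Node.js expose; this is a minimal sketch using the example link from the top of the page, not necessarily the tool's internal code:

```javascript
// Break a link into the parts shown above using the built-in URL class.
const url = new URL(
  "https://onlinetextlab.com/path/page?user=test&id=123&id=123#section-2?q=hello+world&gclid=XYZ"
);

console.log(url.protocol); // "https:"
console.log(url.origin);   // "https://onlinetextlab.com"
console.log(url.hostname); // "onlinetextlab.com"
console.log(url.port);     // "" (empty: default port for the scheme)
console.log(url.pathname); // "/path/page"
console.log(url.search);   // "?user=test&id=123&id=123"
console.log(url.hash);     // "#section-2?q=hello+world&gclid=XYZ"

// Path segments, ready to copy individually:
const segments = url.pathname.split("/").filter(Boolean); // ["path", "page"]
```

Note that everything after # lands in `url.hash` unparsed, which is why the hash gets its own query extraction step.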

Messy campaign links? Use Clean UTM to remove marketing and ad tracking keys like utm_source, utm_campaign, gclid, and fbclid. Long query strings? Tap Sort params to alphabetize keys for stable diffs and easier comparisons, or Strip empty to remove parameters without values. Click Normalize to rebuild the query string in the detected order for consistent output, then copy the refreshed URL with one click.
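A compact sketch of those transforms, assuming the common tracking keys named above (the tool's actual block list may be longer, and the function name here is illustrative):

```javascript
// Tracking keys to remove; a subset for illustration.
const TRACKING_KEYS = new Set([
  "utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content",
  "gclid", "fbclid",
]);

function cleanUrl(href, { cleanUtm = true, sort = true, stripEmpty = true } = {}) {
  const url = new URL(href);
  let pairs = [...url.searchParams.entries()];

  if (cleanUtm)   pairs = pairs.filter(([k]) => !TRACKING_KEYS.has(k.toLowerCase()));
  if (stripEmpty) pairs = pairs.filter(([, v]) => v !== "");
  if (sort)       pairs.sort(([a], [b]) => a.localeCompare(b));

  // "Normalize": rebuild the query string from the resulting pairs.
  url.search = new URLSearchParams(pairs).toString();
  return url.toString();
}

console.log(cleanUrl("https://example.com/p?b=2&utm_source=mail&a=1&empty="));
// "https://example.com/p?a=1&b=2"
```

Working on the entries array rather than a plain object is what keeps duplicate keys intact through the filter and sort steps.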

When you need to reuse parameters elsewhere, export them the way your workflow prefers: copy an object as JSON for scripts, output .env lines for server configs, or generate cURL --data-urlencode flags for quick HTTP tests. The hash fragment is not ignored, either: when it looks like a query (for example #?token=xyz), the parser surfaces those key–values beside decoded/encoded views for a complete picture.
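The three export formats can be sketched as small functions over decoded [key, value] pairs; the helper names here are hypothetical, and folding duplicates into arrays in the JSON output is one reasonable convention:

```javascript
// JSON export: duplicate keys fold into arrays, e.g. id=123&id=123
// becomes { "id": ["123", "123"] }.
function toJSON(pairs) {
  const obj = {};
  for (const [k, v] of pairs) {
    obj[k] = k in obj ? [].concat(obj[k], v) : v;
  }
  return JSON.stringify(obj, null, 2);
}

// .env export: one KEY=value line per pair.
function toEnv(pairs) {
  return pairs.map(([k, v]) => `${k.toUpperCase()}=${v}`).join("\n");
}

// cURL export: one --data-urlencode flag per pair.
function toCurlFlags(pairs) {
  return pairs.map(([k, v]) => `--data-urlencode '${k}=${v}'`).join(" \\\n  ");
}

const pairs = [["user", "test"], ["id", "123"], ["id", "123"]];
console.log(toJSON(pairs));      // { "user": "test", "id": ["123", "123"] }
console.log(toEnv(pairs));       // USER=test / ID=123 / ID=123
console.log(toCurlFlags(pairs)); // --data-urlencode 'user=test' ...
```

With --data-urlencode, curl performs the percent-encoding itself, so the flags take the decoded values.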

It’s all client‑side, so nothing leaves your browser. The parser is tolerant of minor issues: it translates + to spaces for form‑style values and handles imperfect percent sequences as gracefully as possible. If a link lacks a protocol, the tool can assume https://, and a Base URL field lets you resolve relative paths without changing your original string. In short: paste, inspect, clean, and ship a pristine link in seconds.
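The lenient behaviors described above can be approximated like this; the function names are illustrative, and a production implementation would likely handle more edge cases (e.g. host:port inputs that look like schemes):

```javascript
// Assume https:// when the scheme is missing, and resolve relative links
// against an optional base URL.
function parseLenient(input, base) {
  if (base) return new URL(input, base); // resolve relative to the base
  if (!/^[a-z][a-z0-9+.-]*:/i.test(input)) input = "https://" + input;
  return new URL(input);
}

// Tolerant decoding: + becomes a space; a malformed percent sequence
// falls back to the raw text instead of throwing.
function decodeTolerant(value) {
  const plusFixed = value.replace(/\+/g, " ");
  try {
    return decodeURIComponent(plusFixed);
  } catch {
    return plusFixed; // invalid % escape: keep as-is
  }
}

console.log(parseLenient("onlinetextlab.com/path").href);
// "https://onlinetextlab.com/path"
console.log(parseLenient("../page?x=1", "https://example.com/a/b/").href);
// "https://example.com/a/page?x=1"
console.log(decodeTolerant("hello+world")); // "hello world"
console.log(decodeTolerant("100%"));        // "100%" (malformed %, kept raw)
```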

How to use the URL Parser

  1. Paste a URL: Include the protocol if you can. If not, the tool can infer https://. For relative links, set a Base URL.
  2. Scan the basics: Protocol, origin, hostname, port, path, search, and hash appear instantly with copy buttons.
  3. Inspect parameters: Query pairs are shown decoded and encoded, with duplicates preserved in order. If the hash carries pairs (for example #?x=1), they are parsed too.
  4. Clean & organize: Remove UTM/trackers, sort keys A→Z, and strip empty values. Click Normalize to rebuild a tidy link.
  5. Export: Copy as JSON, .env lines, or cURL flags. You can also copy individual fields or the entire URL.

Key features

  • Smart parsing with optional Base URL (handles missing protocol gracefully)
  • Instant breakdown: protocol, origin, hostname, port, path, query, hash
  • Domain guess: subdomain / domain / TLD, plus clickable path segments
  • Query table with decoded + encoded values; order and duplicates preserved
  • Hash query extraction when fragments contain pairs (e.g., #?x=1)
  • One‑click transforms: Clean UTM, Sort params A→Z, Strip empty values, Normalize
  • Copy exports for JSON, .env, and cURL --data-urlencode
  • Tolerant decoding (+ to space, graceful handling of malformed %)
  • Private and fast: 100% client‑side

Tips

  • If a link fails to parse, add https:// or set a Base URL for relative paths.
  • Use Clean UTM to remove marketing/tracking keys like utm_source, gclid, and fbclid.
  • Sort params A→Z to stabilize diffs and make long links easier to compare.
  • Strip empty values to tidy URLs produced by conditional forms or client code.
  • Export as JSON for config, .env for server apps, or cURL flags for quick requests.
  • Need to percent‑encode or decode parts? Use the URL Encoder / Decoder companion tool.

Frequently asked questions

Will it work with custom schemes like s3:// or app://?
Often yes. Custom schemes can still be parsed; you’ll see the protocol captured (for example, s3:) and the bucket/path split across host and pathname where appropriate.
How are duplicate keys represented in JSON export?
When exporting as JSON, duplicate keys are folded into arrays so that a=1 and a=2 becomes { "a": ["1", "2"] }.
What does Normalize do?
It rebuilds the query string in the detected order for a clean, consistent URL, which helps with readability and debugging.
Does the tool modify the path or only the query?
Transform actions target the query parameters. Path segments are shown for inspection and easy copy, but not altered by Clean UTM / Sort / Strip / Normalize.
Is the parser safe for sensitive links?
Yes. Everything runs locally in your browser; nothing is sent to a server. Still, treat secrets with care and avoid sharing screenshots of sensitive URLs.

Related tools

URL Encoder / Decoder · Base64 Encoder / Decoder · JSON Formatter

Last updated: Aug 25, 2025