Methodology
How our ATS parser actually works.
Most ATS tools hide their methodology behind a "proprietary AI" label and give you an arbitrary score. We don't. Here's exactly what we do, what we check, and what our score means.
What we are (and what we're not)
We are: a parsing diagnostic tool. We run your resume through the same kinds of text-extraction engines used by Workday, Greenhouse, Lever, Taleo, and iCIMS. We show you the raw extracted output — what the ATS actually sees.
We are not: an AI that judges whether you're qualified for a specific job. We don't give you a fake "matching score" against a JD without running real keyword analysis. When we DO run JD-match (paid feature), it's an explicit comparison — not a vibes-based number.
The parsing pipeline (4 stages)
01
Text extraction
We parse your PDF or DOCX using the same engines real ATS platforms use. For PDFs: pdf-parse, with a selective OCR fallback for image-only PDFs. For DOCX: mammoth-style XML extraction. We capture not just the words but the structure (headers, bullets, columns).
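The stage-1 dispatch can be sketched roughly like this. This is a minimal illustration, not our actual implementation: the helper names (`extractPdfText`, `runOcr`, `extractDocxXml`) are hypothetical stand-ins for the pdf-parse / OCR / mammoth-style logic described above.

```typescript
type FileKind = "pdf" | "docx";

interface Extraction {
  text: string;
  usedOcr: boolean;
}

function detectKind(filename: string): FileKind {
  const f = filename.toLowerCase();
  if (f.endsWith(".pdf")) return "pdf";
  if (f.endsWith(".docx")) return "docx";
  throw new Error(`Unsupported file type: ${filename}`);
}

// Stubbed extractors standing in for the real pdf-parse / OCR /
// mammoth-style extraction — illustrative only.
function extractPdfText(_buf: Uint8Array): string { return ""; }
function runOcr(_buf: Uint8Array): string { return "OCR text"; }
function extractDocxXml(_buf: Uint8Array): string { return "DOCX text"; }

function extract(filename: string, buf: Uint8Array): Extraction {
  const kind = detectKind(filename);
  if (kind === "docx") return { text: extractDocxXml(buf), usedOcr: false };
  const text = extractPdfText(buf);
  // Image-only PDFs extract to (near-)empty text; fall back to OCR.
  if (text.trim().length === 0) return { text: runOcr(buf), usedOcr: true };
  return { text, usedOcr: false };
}
```

The key design point: OCR is a fallback, not the default, because native text extraction is both faster and more faithful when the PDF has a real text layer.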
02
Structural analysis
We detect the structures ATS engines struggle with: multi-column layouts, tables, headers/footers, embedded images, font substitutions, glued tokens (e.g., 'SAPOracle' instead of 'SAP, Oracle'), unusual section names, and missing required fields.
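One of these checks — glued-token detection — is simple enough to sketch. This is a hedged illustration, not our production detector: the regex threshold is an assumption, and a real detector would whitelist legitimate camelCase identifiers.

```typescript
// Flag words where an acronym runs directly into a following word
// (e.g. "SAPOracle"), which usually means a separator was lost during
// extraction — a common artifact of multi-column layouts.
function findGluedTokens(text: string): string[] {
  const glued: string[] = [];
  for (const word of text.split(/\s+/)) {
    const cleaned = word.replace(/[^A-Za-z]/g, "");
    // Heuristic: 3+ uppercase letters followed by 3+ lowercase letters.
    // This skips plurals like "APIs" but will false-positive on
    // legitimate identifiers like "JSONParser" — real detectors
    // whitelist those.
    if (/^[A-Z]{3,}[a-z]{3,}/.test(cleaned)) glued.push(cleaned);
  }
  return glued;
}
```

A heuristic like this fires as a warning rather than a critical issue, precisely because of the false-positive risk.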
03
15+ failure-pattern detectors
Each detector targets a specific ATS failure mode we've observed in real Workday/Greenhouse/Taleo extractions. They flag issues with a severity (critical / warning / info) and a one-line fix.
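The detector shape described above — a check that fires with a severity and a one-line fix — can be sketched as an interface. The names here are illustrative, not our actual codebase:

```typescript
type Severity = "critical" | "warning" | "info";

interface Finding {
  detector: string;
  severity: Severity;
  fix: string; // the one-line suggested fix shown to the user
}

interface Detector {
  name: string;
  severity: Severity;
  run(text: string): Finding | null; // null = did not fire
}

// Example detector (hypothetical): resume has no email address at all,
// which most ATS platforms treat as a missing required field.
const missingEmail: Detector = {
  name: "missing-email",
  severity: "critical",
  run(text) {
    if (/[\w.+-]+@[\w-]+\.[\w.]+/.test(text)) return null;
    return {
      detector: "missing-email",
      severity: "critical",
      fix: "Add a plain-text email address near the top of the resume.",
    };
  },
};
```

Returning `null` when a detector doesn't fire keeps the scan output to only the issues that actually apply to your resume.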
04
Score computation
Our parsing score = 100 minus penalties: -15 per critical issue, -5 per warning, -1 per info. This is a deterministic formula — the same resume always scores the same. No AI vibes.
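The formula above is simple enough to show directly. One assumption on our part: the score is floored at 0, which the text doesn't state explicitly.

```typescript
type Severity = "critical" | "warning" | "info";

// Penalty weights exactly as stated: -15 / -5 / -1.
const PENALTY: Record<Severity, number> = {
  critical: 15,
  warning: 5,
  info: 1,
};

// Deterministic: the same list of findings always yields the same score.
function parsingScore(findings: Severity[]): number {
  const total = findings.reduce((sum, s) => sum + PENALTY[s], 0);
  return Math.max(0, 100 - total); // floor at 0 is our assumption
}
```

So a resume with one critical issue, two warnings, and one info note scores 100 − 15 − 10 − 1 = 74.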
ATS engines we test against
We benchmark our detection logic against the parsing behavior of 8 widely deployed ATS platforms. We don't access their proprietary code — we observe their extraction outputs and reverse-engineer the failure patterns.
Every detector in our codebase corresponds to a real failure pattern observed in at least 2 of these 8 engines.
What our score does NOT mean
- Not a JD-match score. A score of 90/100 doesn't mean you'll get the job. It means your resume parses cleanly through ATS engines.
- Not the score the employer sees. Real ATS engines compute scores based on the employer's configured weights (keywords, education, location, etc.). Our score is structural, not match-based.
- Not a guarantee of getting interviews. Parsing cleanly is necessary but not sufficient. You also need relevant experience and JD-keyword alignment.
- Not a substitute for a recruiter's judgment. Ultimately, a human reads your resume after the ATS pre-filter. A perfect parsing score doesn't mean a perfect resume.
Privacy & data handling
- Resumes are parsed in memory. We never write them to persistent disk.
- The free-scan parsing happens entirely server-side. The file is held only for the duration of the request (typically <10 seconds).
- Paid rebuilds use Anthropic's Claude API for bullet rewrites. Anthropic's data policy applies; we don't store the structured output.
- No accounts, no logins, no email gates on the free tier — there's nothing to hack because there's nothing stored.
Test your resume — see the methodology in action
Free 30-second scan. No signup. See exactly which detectors fired and why.
Free ATS scan →