I built a half-baked prediction markets app to study signup fraud. 650 accounts on one laptop later.

8 min read

I let Google reCAPTCHA run alone on a real product for 4 weeks. 3,000 signups came in. 77% were fraud. One person opened 650 accounts on a single laptop.


Simul Sarker, CEO of DataCops

Last updated May 8, 2026

I'm Simul, and I'm launching DataCops today. It's first-party trust infrastructure for signups and conversions: it runs via a CNAME on your subdomain, replaces a stack of analytics, consent, and bot-detection vendors, and gives you identity context for every visitor and every signup. Three years bootstrapped from Lisbon, incorporated in the UK.

Instead of a normal launch post, here's the research that shaped the product. The 650-account guy is the punchline. Stay until then.

The honeypot: PillarlabAI

I needed real adversarial signup data. Not vendor white papers. Real humans doing real fraud against a real signup form, while I watched.

So I built PillarlabAI. AI research tool for prediction markets. Vibe-coded in 5 days. Real product with paid tiers and Stripe, but the part I cared about was the signup form. The audience: crypto and prediction markets people. Sharp, manipulative, allergic to paying for anything, and roughly 40% of them running automation as a hobby.

Perfect bait. I posted it organically across the prediction market subreddits I had standing in (I run 17 communities, ~9M annual organic impressions). No paid ads. No outreach. Just operator-shaped posts in the right communities at the right times.

3,000 signups in 4 weeks for a 5-day vibe-coded toy with no marketing budget. The fraud arrived on its own, through the same channels real users came in. Which is the entire point.

CAPTCHA was the only line of defense

I put Google reCAPTCHA on the form. Standard implementation, standard threshold. The kind of setup a normal small SaaS team ships on day one and never thinks about again.

I deliberately did not run DataCops on the signup form yet. I wanted to see what CAPTCHA alone would catch in 2026 against a real adversarial audience.

4 weeks later

Dashboard looked great. 3,000+ signups. CAPTCHA scores clean, almost everything returning high "human" confidence.

Meanwhile, credits were draining 6-8x faster than the active user count justified. Someone was burning through 50-credit free tiers and disappearing.

CAPTCHA was telling me everything was fine. The credits dashboard was telling me everything was on fire.

Time to flip the switch.

Turning on DataCops

I added the DataCops script and bulk-scanned the existing 3,000 signups. Email domain reputation, IP class. New signups would also get device fingerprinting in real time.

I expected ~30% to come back as suspect.

Of the ~3,000 signups, only 730 came back as real humans. The remaining ~2,300 hit critical signals: throwaway domains, datacenter IPs, device fingerprint clusters, or all three.

These are the results from the first six hours after DataCops went live with real-time fingerprinting.

77% of my "users" were fraud. CAPTCHA had passed every single one of them.
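That bulk scan boils down to a few checks per account. Here's a minimal sketch of the triage, with made-up signal lists and a hypothetical Signup record; none of this is DataCops' actual implementation, just the shape of the logic:

```python
from dataclasses import dataclass

# Hypothetical signal lists; real reputation data comes from live feeds.
THROWAWAY_DOMAINS = {"zzuux.com", "yomail.info", "xfavaj.com"}
DATACENTER_ASNS = {"AS16509", "AS14061"}  # e.g. AWS, DigitalOcean

@dataclass
class Signup:
    email: str
    ip_asn: str
    fingerprint: str

def critical_signals(s: Signup, fingerprint_counts: dict[str, int]) -> list[str]:
    """Return the critical signals a signup trips: throwaway domain,
    datacenter IP, or membership in a device fingerprint cluster."""
    hits = []
    if s.email.split("@")[-1] in THROWAWAY_DOMAINS:
        hits.append("throwaway_domain")
    if s.ip_asn in DATACENTER_ASNS:
        hits.append("datacenter_ip")
    if fingerprint_counts.get(s.fingerprint, 0) > 5:
        hits.append("fingerprint_cluster")
    return hits
```

A signup that trips one or more of these lands in the ~2,300; an empty list puts it with the 730 real humans.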

I sat reading my own dashboard like a wildlife documentary. I had built it to surface this exact pattern. I had never actually seen it operate at full speed against real adversaries. It was hypnotic.

The 650-account guy

I let DataCops keep running. Real-time signups, now under fingerprinting. After ~4,500 total signups, I sorted my fingerprint database by related_email_count descending.

One device fingerprint had 650 accounts attached to it.

One person. One laptop. Six hundred and fifty free trial signups in roughly a week. Same canvas hash, same WebGL renderer, same audio DAC, same font list, same screen resolution. Across 650 distinct signups using rotating throwaway email domains.
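Conceptually, the query that surfaces him is just a group-by on the device fingerprint hash. A toy sketch with made-up data, where the fingerprint stands in for the stable hash of canvas, WebGL, audio, fonts, and screen signals:

```python
from collections import defaultdict

def related_email_counts(signups):
    """Group signups by device fingerprint and count distinct emails
    per device, sorted descending -- i.e. sort by related_email_count."""
    by_device = defaultdict(set)
    for email, fingerprint in signups:
        by_device[fingerprint].add(email)
    return sorted(((fp, len(emails)) for fp, emails in by_device.items()),
                  key=lambda pair: pair[1], reverse=True)

signups = [("a@zzuux.com", "dev1"), ("b@yomail.info", "dev1"),
           ("c@gmail.com", "dev2")]
print(related_email_counts(signups))  # → [('dev1', 2), ('dev2', 1)]
```

Rotating email domains does nothing here: the emails differ, the device doesn't, so the cluster only grows.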

No bot. Form completion times were variable in a way that scripts usually aren't. This was a human. One human, manually creating 650 accounts on Pillarlab to farm 32,500 free AI credits.

That's just the post-fingerprinting window. He was almost certainly active during the CAPTCHA-only period too. The real number is higher, I just can't prove how much higher.

CAPTCHA had passed every single one of his 650 attempts.

I don't know what he was doing with the credits. Reselling them. Running them through some other tool. Maybe he just enjoyed it. The crypto-adjacent ecosystem has incentive structures I will never fully understand.

The fraud breakdown

Once I knew what to look for, the patterns were embarrassingly obvious:

60% throwaway domain farmers. Custom-registered throwaway email domains specifically to bypass standard disposable blocklists: zzuux.com, yomail.info, xfavaj.com, xehop.org, x1ix.com, vbbsc.store, upphim.net, dozens more. Not on Mailinator, not on any public disposable list. Some registered the same week as the accounts. Whoever was running this had infrastructure.

20% mid-tier farmers. Same playbook as the 650 guy, smaller scale. 21 accounts here, 47 there, 100 over there. Each one a human running the same loop with less commitment.

15% IP-rotators. Clean throwaway emails (Gmail, ProtonMail) but datacenter or VPN IPs from Frankfurt, Singapore, Virginia. Humans behind VPNs, possibly from regions where VPN use is mandatory.

5% actual bots. Headless Chrome, Puppeteer, sub-1.2-second form completion times. Almost a rounding error.
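A rough sketch of how those four buckets fall out of the raw signals. The 1.2-second form time comes from the numbers above; the check order and everything else is illustrative, not how the classification actually ran:

```python
KNOWN_PROVIDERS = {"gmail.com", "protonmail.com", "outlook.com"}

def classify(email_domain: str, on_public_blocklist: bool,
             ip_is_datacenter: bool, form_seconds: float,
             accounts_on_device: int) -> str:
    """Bucket a fraudulent signup into one of the four groups above."""
    if form_seconds < 1.2:
        return "bot"                      # headless Chrome / Puppeteer speed
    if not on_public_blocklist and email_domain not in KNOWN_PROVIDERS:
        return "throwaway_domain_farmer"  # custom domain, off every blocklist
    if accounts_on_device > 1:
        return "mid_tier_farmer"          # human, reused device
    if ip_is_datacenter:
        return "ip_rotator"               # clean email, datacenter/VPN exit
    return "unclassified"
```

Note what the ordering implies: only the first branch needs the signup to be automated. The other three are humans.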

95% of the fraud was humans. The bots, the thing I had spent three years building detection for, were the smallest threat. The real attackers were people at laptops, possibly being paid pennies per account, definitely undeterred by CAPTCHA.

What works

CAPTCHA was built in 1997 to stop scripted crawlers. It was never designed to stop a human who has decided to cheat. And the human who has decided to cheat is the actual adversary in 2026.

Email validation, rate limiting, and basic bot detection were all useless against this. The 650-account guy was invisible to every individual signal. He was only visible at the device fingerprint layer.

What works is identity context at the moment of signup, all signals at once: email domain reputation, device fingerprint linkage, IP class, behavioral clustering. None individually proves fraud. Together they make it impossible to hide.
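One way to picture "none individually proves fraud, together they make it impossible to hide" is to treat each signal as weak independent evidence and stack it. The weights here are illustrative, not DataCops' model:

```python
def identity_risk(signals: dict[str, bool]) -> float:
    """Combine weak signals into one risk score. Each weight is the
    evidence a single signal carries on its own (illustrative values);
    stacked signals combine as 1 - product of (1 - w)."""
    weights = {
        "throwaway_domain": 0.35,
        "datacenter_ip": 0.25,
        "fingerprint_cluster": 0.45,
        "behavioral_cluster": 0.30,
    }
    survival = 1.0  # probability the signup is clean despite the evidence
    for name, hit in signals.items():
        if hit:
            survival *= 1.0 - weights[name]
    return 1.0 - survival
```

Any single signal leaves plenty of doubt; a signup that trips all four scores above 0.8, which is the "impossible to hide" effect in miniature.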

What I built: SignupCops

Here's where I want to back up to CAPTCHA itself for a second.

CAPTCHA was supposed to be a gate against automated bots. That's the original 1997 thesis. What it actually became in 2026 is a screen that asks the human you just paid to acquire to identify traffic lights or pick all the bicycles in a 4x4 grid before they're allowed into your product.

Think about what that actually means.

You ran a paid campaign. You bid against your competitors. You paid Meta or Google or LinkedIn $4-$25 per click for a high-intent visitor who clicked through your landing page, read your value prop, hovered over the CTA, decided to actually try your product, and clicked Sign Up.

And then you make them solve a puzzle.

This is the gate you're putting in front of the human you just paid $15 to acquire. Roughly 40% of users abandon when CAPTCHA appears in the funnel. On mobile it's closer to half. You paid for that traffic. You earned that click. And then a 1997-era anti-bot mechanism made them squint at a 4x4 grid of motorcycles, and most of them walked away.

Meanwhile, the 650-account guy didn't squint at anything. He breezed past every CAPTCHA on his way to 650 free trial signups. The throwaway domain farmers passed too. The IP-rotators passed too. The actual bots also passed (they outsource the solve to humans for fractions of a cent, or use vision models that handle it in 0.3ms).

CAPTCHA punishes the user you paid for. It does not punish the adversary.

That's the thing I wanted to fix.

So the thesis for SignupCops is simple: don't gate. Don't decide for the application. Hand the application the full identity context per signup and let the application decide.

Because the right decision depends on your business, not ours. Here's what I mean.

A VPN-routed signup from Frankfurt might be a French enterprise user behind a corporate VPN. Real, high-LTV. Block them and you lose a $50K ARR deal. Or it might be a fraudster in a region you don't even ship to, gaming free tiers. Same signal. Totally different decisions depending on whether you sell B2B SaaS or B2C consumer credit.

Geographic price arbitrage: if you charge $9 in India and $49 in the US, a US user signing up through an Indian IP and payment method combination is not fraud. It's a CRO problem. SignupCops surfaces the mismatch, your business decides whether to enforce regional pricing, ask for ID verification, or let it through because you don't actually mind.

Integration is the easy part. Drop the DataCops script in your <head>. Open the dashboard, find the integration guide:

Hit "Copy for AI." Paste into Claude or Cursor. Describe what your signup logic should do for your business. Your AI writes the gating code for your stack.

You get back the full identity picture per signup: risk score, IP class, email domain reputation, device fingerprint hash, count of other accounts on that fingerprint, related emails on the same device. Check takes under 200ms. Real users walk straight into your product. The 650-account guy gets caught at attempt #6, silently, before he ever sees a confirmation email.
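The gating code your AI writes against that payload ends up being a handful of lines. A hypothetical sketch; the field names (risk_score, ip_class, related_email_count) are assumptions for illustration, not the documented API shape:

```python
def allow_signup(check: dict) -> bool:
    """Server-side gate using the per-signup identity context.
    Field names and thresholds are assumptions, not the real API."""
    if check["related_email_count"] >= 5:
        return False  # a 6th account on the same device is silently rejected
    if check["risk_score"] > 0.9 and check["ip_class"] == "datacenter":
        return False
    return True

# The 650-account guy on attempt #6: five accounts already on his laptop.
print(allow_signup({"risk_score": 0.2, "ip_class": "residential",
                    "related_email_count": 5}))  # → False
```

The point is that the threshold of 5 is your decision, not ours; a B2B product with shared office machines might set it at 20, a credit-farming target at 2.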

No CAPTCHA. No 40% mobile drop. No black-box risk score deciding for you. You control the gate. We just hand you the keys.


joindatacops.com/signup-cops is public today. Your first 500 signup verifications are free. Try it now.

PillarlabAI is still running. Real customers, real Stripe charges, real prediction-market analytics. Just also the most instrumented signup funnel I've ever built.


Live traffic quality (last 24h)

Visits: 487
Real users: 358 (73.5%)
Bots, auto-filtered: 129 (26.5%)

Without filtering, 26.5% of your reported traffic is bot noise inflating dashboards and draining ad spend.

Don't trust your analytics!

Make confident, data-driven decisions with actionable ad spend insights.

Setup in 2 minutes. No credit card required.