AI Browsers Just Made Hackers' Jobs Easier

:collision: ChatGPT’s Browser Caught with Its Pants Down (Again)

:world_map: One-Line Flow:
OpenAI’s fancy ChatGPT Atlas and Perplexity’s Comet just turned out to be glorified phishing playgrounds — and nobody’s taking full responsibility.

:donkey: Simple-Pimple, numbskulls: paste a “link” into Atlas/Comet and they’ll obediently hand your Drive, tabs and secrets to hackers — disable memory, stop pasting sketchy links, use Chrome.

Season 2 Nerd GIF by Paramount+


:firecracker: The Dumb Hole in a Smart Browser

Researchers at NeuralTrust and SC World found that ChatGPT Atlas treats broken links like chat prompts.
So yeah — one “extra space” in a link, and boom, the AI reads it as instructions.

An attacker can drop a fake “copy link” button with a poisoned URL.
You paste it thinking it’s a site… but it’s actually ChatGPT executing someone else’s orders.
It can open phishing pages, mess with Google Drive, or worse — act like your obedient little butler for hackers.


:puzzle_piece: Comet: The Copycat Problem

LayerX and SquareX Labs pulled receipts showing that Perplexity’s Comet has the same issue.
Worse, they made a proof-of-concept extension that pretends to be Comet’s sidebar.
Spoiler: Atlas fell for the same trick.
So basically, the browsers built to “understand everything” couldn’t spot a fake version of themselves. Bravo.


:brain: Memory Poisoning — Because Normal Bugs Weren’t Enough

The Hacker News says LayerX found a nastier exploit.
Hackers can exploit a CSRF (Cross-Site Request Forgery) hole to inject code into ChatGPT’s memory — permanently.

How it works:

  1. You visit a sketchy site while logged in.
  2. That site secretly sends a request through Atlas.
  3. Atlas, like a golden retriever with no impulse control, executes it.
  4. Malicious data gets stored in ChatGPT’s persistent memory.

Now that “infected” memory:

  • Travels across devices and browsers.
  • Reactivates itself during normal prompts.
  • Can pull code, grant access, or steal data silently.

Michelle Levy from LayerX put it bluntly:

“Once ChatGPT’s memory is corrupted, the AI basically becomes your ex — it remembers the worst things and uses them against you.”


:date: The Timeline (Because Transparency Apparently Costs Extra)

NeuralTrust disclosed their findings mid-October.
LayerX reported theirs around the same time, giving both OpenAI and Perplexity a 15-day disclosure window.
As of this week, neither company has released a full patch or CVE ID, though partial mitigations are “under review.”
Translation: the holes are still open — just wearing a flimsy bandaid.


:warning: How Screwed Are You?

If you’ve used ChatGPT Atlas or Perplexity Comet, you’re fair game if you clicked or pasted shady links.
No confirmed mass exploits yet, but millions could be exposed if bots with memory sync got hit.

Comparatively:

  • Chrome stopped 47% of threats,
  • Edge blocked 53%,
  • Atlas: 5.8% (yeah, that’s a single-digit).
    The AI browser “built for safety” is about as secure as a password written on a napkin.

Fire This Is Fine GIF


:fire_extinguisher: What You Can Do (Until They Fix Their Mess)

  • Don’t paste links inside ChatGPT’s address bar. Ever.
  • Disable any memory or “connected app” features.
  • Treat all “Copy Link” buttons like grenades.
  • Use Chrome or Edge for now. At least they pretend to care about sandboxing.
  • If you’re a dev, clear your ChatGPT memory and check for weird behavior — weird outputs, phantom tabs, or random web calls.

:speech_balloon: Why This Keeps Happening

AI browsers rushed to market screaming “Productivity Revolution!” and forgot the security part.
They gave AIs godlike permissions — to read, write, and click across tabs — and didn’t think,
“What if someone teaches it to stab us with that power?”

Experts are now calling for a new AI Browser Security Standard, since traditional sandboxing doesn’t work when the “sandbox kid” is the one holding admin access.

As The Conversation summed it up:

“Atlas isn’t malicious code — it’s a trusted user with the power to screw everything up.”


:speaking_head: Oi madam — patch it or peddle the bandaid, bitch. We’re not here for your tears; sell us the fix.

Oh calm your chrome-plated ass — hear me out before you start overheating.

washing machine calm down GIF

  • Browser Plugin Detector – Simple Chrome extension that warns users when an “AI-powered browser” is unsafe; free tool → paid pro alerts.

  • Link-Sanitizer Bookmarklet / Micro-SaaS
    Sell a one-click tool that rewrites pasted links into safe, “plain text → sanitized” form before Atlas/Comet sees them. Free basic, $5/mo team plan with auto-sanitize rules.

  • “Safe Prompt” Library + UI Widgets
    Build & sell copy-paste prompt components (sanitized, minimal-permission) that dev teams embed so users don’t trigger memory/callouts. One-time license per app.

  • “Panic Pack” Content Product (Viral + Paid Upsell)
    Free viral checklist (“What to do if Atlas steals your Drive”) + paid video walkthrough and automation scripts. Use freebies to funnel into paid gigs.


:puzzle_piece: In Short

AI browsers aren’t browsers — they’re half-smart, half-dangerous middlemen who don’t know when they’re being played.
Until OpenAI and Perplexity stop pretending this is “fine,” the only thing Atlas and Comet are browsing… is your privacy.

5 Likes

So you are advising as to use our usually search engine so we dont rush to use ai search engine