Friday, May 15, 2026

Some Challenges Take A Few Hours To Solve. Others Take 15 Years To Finally Put To Rest.


If you are a seasoned CTF player or an old-school challenger, you might remember the golden era of IRC.
Back in 2011, a bunch of us were hanging out on irc.idlemonkeys.net, solving wargames and collaborating.
To make the time more entertaining, a few guys started writing IRC bots for blackjack, hangman, and even Idle RPGs.

But Gizmore (the founder of WeChall) and I thought we could push the limits of IRC further.
We built richer, more fully-featured RPGs.

I created bbq RPG, and Gizmore created Shadowlamb.
While mine eventually faded, Shadowlamb survived the test of time, kept alive entirely by Gizmore's incredible dedication.

Shadowlamb is a text-based, Shadowrun-flavored universe living entirely inside an IRC channel.
You interact with a bot named Lamb3 to grind nuyen (the in-game currency), level up stats (strength, quickness, magic), fight monsters, and run quests across cyberpunk cities like Redmond, Seattle, and Chicago.

But here is the twist: Gizmore embedded 4 CTF challenges inside the game (with increasing difficulty).
To capture the flags, you had to actually play the RPG and use your infosec skills to reverse and exploit the game mechanics.

Back then, I only played casually for fun. I never managed to beat the challenges.
But recently, much like the other two-decade-old wargames I've been closing out, I decided it was time to settle the score.

I was going to beat Shadowlamb.

But as a lazy elite, I wasn't about to grind it manually.
I was going to build an AI-assisted bot to play it for me.

---

PHASE 1: THE PROTOTYPE

It started as a quick-and-dirty script.
It logged into IRC, listened to Lamb3’s NOTICE messages, and blindly spammed #attack on a loop.
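
For flavor, here is a minimal sketch of that first loop - not the original script, and the server, port, and nick are placeholders:

```python
# A stripped-down sketch of the Phase 1 prototype (illustrative only).
import socket

SERVER, PORT = "irc.example.net", 6667   # placeholder network details
NICK = "bbq_bot"                         # placeholder nick

sock = socket.create_connection((SERVER, PORT))

def send(line: str) -> None:
    sock.sendall((line + "\r\n").encode())

send(f"NICK {NICK}")
send(f"USER {NICK} 0 * :{NICK}")

buffer = ""
while True:
    data = sock.recv(4096)
    if not data:
        break                                         # server closed the connection
    buffer += data.decode(errors="replace")
    *lines, buffer = buffer.split("\r\n")             # keep any partial line
    for line in lines:
        if line.startswith("PING"):                   # keep the connection alive
            send("PONG" + line[4:])
        elif line.startswith(":Lamb3!") and " NOTICE " in line:
            print(line)                               # watch the game's responses
            send("PRIVMSG Lamb3 :#attack")            # blindly spam attacks
```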

It worked, mostly.
My character died - a lot.

But brute force was enough to scrape past Chapter I.

---

PHASE 2: THE ARCHITECTURE

This was when I put more effort into the bot. The script evolved into a robust, modular Python system.

I built a proper autonomous agent:

- State Management: Tracked full game state in memory (HP, MP, karma, nuyen, weight capacity, busy timers).

- Combat AI: Added tactical logic for handling complex mob encounters.

- Smart Equipment: Wrote a gear-scoring algorithm that dynamically parsed #cmp stats to evaluate and equip the best loot (see the sketch after this list).

- Economy Routing: Built a heuristic pathfinder to automatically travel to the nearest blacksmith to offload junk when overweight.

- Remote Command: Set up an admin relay channel so I could remote-control the bot from a different IRC nick while it was running.
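
To give a feel for the state tracking and gear scoring, here is a small illustrative sketch - the stat names and scoring weights are my assumptions, not Lamb3's actual output format:

```python
# Illustrative sketch of state tracking and gear scoring (assumed stat names).
from dataclasses import dataclass

@dataclass
class GameState:
    hp: float = 0.0
    mp: float = 0.0
    karma: int = 0
    nuyen: int = 0
    weight: float = 0.0
    weight_max: float = 1.0
    busy_until: float = 0.0   # timestamp when the next command is allowed

    def overweight(self) -> bool:
        """When true, the bot routes to the nearest blacksmith to sell junk."""
        return self.weight >= self.weight_max

# Hypothetical weights for stats parsed out of "#cmp" output.
STAT_WEIGHTS = {"attack": 3.0, "defense": 2.0, "min_dmg": 1.5, "max_dmg": 1.5}

def score_gear(stats: dict[str, float]) -> float:
    """Collapse an item's parsed stats into one comparable number."""
    return sum(STAT_WEIGHTS.get(name, 0.0) * value for name, value in stats.items())

def best_item(candidates: list[dict[str, float]]) -> dict[str, float]:
    """Pick the highest-scoring piece of loot to equip."""
    return max(candidates, key=score_gear)
```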

By the time the bot reached Chicago, the game had become a nightmare.
The mobs were brutal, the travel times were agonizing even with top-tier gear, and inventory weight limits were a constant bottleneck.

But the architecture held up. The bot optimized the grind, survived the nightmare, and helped me capture the final flag.

To date, only 34 people in the world have managed to beat the final Shadowlamb chapter.

To me, writing this bot was more than just ticking a box on a CTF platform.
It was a perfect collision of nostalgia and modern engineering.

We used to grind these games manually, typing until our fingers went numb.
Today, we can architect modular, AI-assisted agents to conquer them for us.

The game hasn't changed, but as tech professionals, our tools and mindsets have.

Sometimes, the best way to solve a 15-year-old problem is to build a modern machine to do it for you.

The IRC servers are still spinning, and Lamb3 is still waiting for new runners.

If you want to test your coding and automation chops, fire up your IRC client, head over to WeChall, and give Shadowlamb a try.
It’s a masterclass in retro game mechanics and backend logic.

---

Also visit: https://quangntenemy.substack.com/p/some-challenges-take-a-few-hours

Sunday, May 3, 2026

The Joy of Solving Without Guidance

Many security professionals today know CTFs.

They've trained on platforms like picoCTF, Hack The Box, and TryHackMe - environments designed to be structured, accessible, and efficient. And that's not a bad thing. CTFs lowered the barrier to entry, made learning measurable, and helped people build real skills quickly.

But before all of that, there was a different kind of training ground.

Scattered across the internet were what people loosely called “hacker games”, “wargames”, or simply “challenges”. Sites like OverTheWire, HackThisSite, and aggregators like WeChall. They weren't polished, and they weren't trying to teach you step by step. You would open a challenge and feel slightly lost. Sometimes there were instructions, sometimes not. Sometimes the difficulty made sense, sometimes it didn't.

You were expected to figure it out anyway.

Progress in those environments felt different. There was no steady stream of feedback telling you that you were on the right track. You could spend hours going in the wrong direction without realizing it. And then, suddenly, something would click - a small detail, a strange behavior, a connection you hadn't seen before. The solution would unfold not because you followed a path, but because you built one.

Yes, there was validation. A password. A level cleared. But the real reward came a moment earlier, when things finally made sense.

That feeling is hard to replicate.

Modern CTFs changed the experience. Problems are categorized, difficulty is more predictable, and feedback is almost immediate. You learn to recognize patterns, apply known techniques, and move quickly. Over time, you become efficient. You know what to look for.

But that efficiency comes with a subtle trade-off.

You begin to expect clarity. You expect problems to be well-formed, solvable within a framework, and responsive to your actions. And in real systems, that's rarely the case. Things break in unexpected ways. Information is incomplete. Sometimes the hardest part isn't solving the problem - it's understanding what the problem even is.

That's where those older environments still matter.

They force you to slow down. To explore without direction. To keep going when nothing seems to work. They don't just test what you know - they test how you think when what you know isn't enough.

CTFs made us faster. There's no doubt about that.

But those early hacker games trained something else entirely. The ability to sit with uncertainty, to keep pulling at threads, and to trust that understanding can be built even when there's no obvious path forward.

If you've never experienced that, it's worth trying.

Not as a replacement for modern platforms, but as a complement to them.

Because in the end, speed helps you solve problems.

But depth helps you face the ones that don't even look like problems yet.

---

Also visit: https://quangntenemy.substack.com/p/the-joy-of-solving-without-guidance

Monday, April 20, 2026

A World Where Humans Are the Suspected Creatures



It always starts the same way.

You open a page. Maybe your email, maybe social media, maybe just trying to check something quickly.

And before you can proceed, you're stopped.

Not by complexity. Not by logic. But by suspicion.

"Verify that you are human."

Click the box. Select all images with traffic lights. Solve the puzzle. Prove your existence.

And for a brief second, something feels... inverted.

Because once, machines were the ones being tested.

---

There was a time when computers struggled to imitate us.

That was the whole point of the Turing Test: to see if a machine could pass as human.

Now the test has quietly flipped. The burden has shifted. We are the ones being interrogated, filtered, measured against patterns of behavior that define "humanness".

Not consciousness. Not intention. Just patterns.

Move your mouse too smoothly? Suspicious.

Type too fast? Suspicious.

Solve a problem too efficiently? Suspicious.

You begin to realize: the system isn't asking *who you are*.

It's asking whether you behave like the average.

---

And that's where things get uncomfortable.

Because the more skilled, focused, or unconventional you are, the more you deviate from that average.

And deviation, in a system built on statistical trust, starts to look like an anomaly. And an anomaly starts to look like a threat.

In other words: the more human you become - curious, efficient, unpredictable - the less "human" you appear to the system.

---

This is not just about CAPTCHA boxes.

It's about a quiet philosophical shift in how identity is defined in a digital world.

You are no longer recognized by your thoughts, your intent, or even your consciousness.

You are recognized by your *compliance with expected behavior*.

Humanity, reduced to a behavioral fingerprint.

And anything outside that fingerprint - no matter how authentic - becomes suspect.

---

There's a strange irony here.

We built machines to mimic us. Then we built systems to detect those machines.

And in doing so, we defined ourselves so narrowly that we started failing our own definitions.

The machine doesn't need to become human anymore.

It just needs to stay within the acceptable range.

---

So every time you click "I am not a robot", pause for a second.

Not because it's annoying. Not because it's trivial.

But because, in that moment, you are participating in a quiet ritual: proving your existence to a system that no longer trusts it by default.

A world where humans are the suspected creatures doesn't arrive with a bang.

It arrives with a checkbox.

Also visit: https://quangntenemy.substack.com/ for more thoughts on the IT world, cybersecurity, and the future of AI


Saturday, April 4, 2026

From ASM-Hater to Digital Archaeologist: How AI turned a 20-year-old assembly nightmare into a precision strike

I’ll be honest: I used to hate crackmes! A lot!

For years, the thought of diving into low-level Assembly (ASM) felt like a chore. Staring at dense hex dumps, manually tracking registers, and fighting through obfuscated logic was a "grind" I just didn't have the patience for. It felt more like a battle of attrition than a puzzle. If you’ve ever felt like you were looking at the world through a keyhole - one byte at a time - you know exactly what I mean.

But recently, that changed.

I decided to revisit a “cold case” - a Z80 assembly challenge from 2006 on TheBlackSheep. This thing had been sitting on a dusty shelf of the internet for nearly two decades, a tough challenge that had mocked researchers and frustrated players for years.

Back in 2006, the manual labor required to crack this was a nightmare. But today, the game has changed.


Monday, December 22, 2025

How Company Secrets End Up in ChatGPT (And How to Prevent It Without Blocking AI)

A developer just wanted to fix a problem faster.

They were debugging a query. The error message made no sense.
The documentation was outdated. As usual.

So they did what millions of capable employees now do every day:

They copied a real snippet from work.
Pasted it into ChatGPT. Got a clean, helpful answer.

Problem solved.
Ticket closed.
No alarms. No warnings.

And without realizing it, company secrets just left the building.

---

This isn't an employee failure

No one was careless.
No one was malicious.
No one thought twice.

Because nothing in the system told them they should.

This is the uncomfortable truth most companies avoid:
When smart people repeatedly do the same risky thing, the system is teaching them to do it.

---

Your DLP didn't fail. It was watching the wrong place.

Most security stacks are still designed for an older world.

They monitor:
  • Email attachments
  • File uploads
  • API traffic
  • Known SaaS destinations

But the leak didn't happen there.

It happened in a browser. Via clipboard. Through a prompt.

Copy → paste → submit.

That path bypasses most traditional controls completely.

So when teams say, "Our DLP failed", what they really mean is:
Our threat model never included this behavior.

---

Why blocking ChatGPT backfires

The reflex response is predictable:

"Block ChatGPT."
"Block Claude."
"Block all external LLMs."

On paper, this looks responsible.

In practice, it produces:
  • Personal device usage
  • Browser extensions
  • Smaller, fragmented pastes
  • Silence instead of questions

Risk doesn't disappear. It just becomes invisible.

And once engineers stop talking to security, you've lost the most important signal you had.

---

This is a system design problem, not an AI problem

Developers optimize for:
  • Speed
  • Accuracy
  • Low friction

Security teams often optimize for:
  • Control
  • Policy
  • After-the-fact detection

When those incentives collide, the faster system wins.

Every time.

So the real question isn't "How do we stop people?"
It's:
How do we redesign the system so the safe path is the fast path?

---

Step 1: Provide an approved AI path people actually want to use

An internal or enterprise-approved LLM only works if it's:
  • Fast
  • Reliable
  • Easy to access (SSO, no tickets)
  • Good enough to replace public tools

If the "safe" tool feels worse than ChatGPT, it will be ignored.

This isn't about trust. It's about usability.

People don't bypass controls to be rebellious. They bypass them to get work done.

---

Step 2: Stop trying to read prompts. Watch behavior instead.

Trying to inspect every prompt is a dead end.

You won't reliably see:
  • What was pasted
  • How it was transformed
  • Where it went

But you can see behaviors that matter:
  • Large clipboard copy events
  • Copying from production systems into browsers
  • Structured data patterns
  • Sudden changes in paste volume

You don't need the content to detect the risk.
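
To make that concrete, here is a toy sketch of a behavior-based check - the event shape and thresholds are assumptions, not any particular product's API:

```python
# Toy sketch: flag risky clipboard behavior without reading the content.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class ClipboardEvent:
    user: str
    source_app: str    # e.g. a production database console
    dest_app: str      # e.g. a browser tab
    size_bytes: int

def is_suspicious(event: ClipboardEvent, history: list[int]) -> bool:
    """History holds the user's recent paste sizes (a per-user baseline)."""
    # Production system -> browser is always worth a second look.
    if event.source_app == "prod_console" and event.dest_app == "browser":
        return True
    # Flag pastes far above the user's historical baseline.
    if len(history) >= 10:
        baseline, spread = mean(history), stdev(history)
        return event.size_bytes > baseline + 3 * spread
    return event.size_bytes > 100_000   # cold-start fallback threshold

# A 400 KB paste from a production console into a browser trips rule one.
event = ClipboardEvent("dev42", "prod_console", "browser", 400_000)
print(is_suspicious(event, history=[2_000, 3_500, 1_200, 5_000]))   # True
```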

Attackers already know this.
Defenders are just catching up.

---

Step 3: Keep secrets from appearing on screens in the first place

The most effective control is also the least glamorous:

Don't expose raw secrets unless absolutely necessary.

That means:
  • Masking sensitive fields by default
  • Tokenizing internal identifiers
  • Treating "view" as a privilege, not a default
  • Restricting full production outputs

If someone never sees the secret, they can't paste it.
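
A minimal sketch of what that looks like in code - the field names here are assumptions, and a real system would drive this from a data catalog:

```python
# Sketch: mask sensitive fields by default, so raw secrets never render.
import hashlib

SENSITIVE_FIELDS = {"api_key", "password", "ssn", "email"}

def tokenize(value: str) -> str:
    """Replace a secret with a stable, non-reversible token."""
    return "tok_" + hashlib.sha256(value.encode()).hexdigest()[:12]

def mask_record(record: dict[str, str]) -> dict[str, str]:
    """'View' becomes a privilege: by default every sensitive field is a token."""
    return {
        key: tokenize(value) if key in SENSITIVE_FIELDS else value
        for key, value in record.items()
    }

row = {"user_id": "1337", "email": "dev@corp.example", "api_key": "sk-live-abc123"}
print(mask_record(row))
# {'user_id': '1337', 'email': 'tok_...', 'api_key': 'tok_...'}
```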

This is boring security.

It's also the kind that works.

---

Step 4: Train instincts, not compliance

Most AI training fails because it sounds like legal language:

"Employees must not input confidential information into AI tools."

That sentence does not survive:
  • Deadlines
  • Curiosity
  • Pressure

A better rule is simpler:

If it would trigger an incident report, it doesn't belong in a prompt.

No flowcharts.
No policy PDFs.
Just a mental shortcut people can actually use.

---

Step 5: Explain the risk in executive language

Executives don't need to understand tokens or embeddings.

They understand this immediately:
AI prompts are unlogged outbound data transfers with no recall.

Once the risk is framed that way:
  • Budget appears
  • Tradeoffs become explicit
  • Ownership becomes clear

Not because of fear.
Because of clarity.

---

The real lesson

This wasn't a junior developer problem.
It wasn't an AI problem.
It wasn't negligence.

It was a system built for a world where copy-paste wasn't a data exfiltration vector.

That world is gone.

The prompt is the new USB drive.

And if you're not actively redesigning for that reality, there's a good chance this is already happening inside your company - quietly, efficiently, and with the best intentions.

That's what makes it dangerous.