Sunday, May 3, 2026

The Joy of Solving Without Guidance

Many security professionals today know CTFs.

They've trained on platforms like picoCTF, Hack The Box, and TryHackMe - environments designed to be structured, accessible, and efficient. And that's not a bad thing. CTFs lowered the barrier to entry, made learning measurable, and helped people build real skills quickly.

But before all of that, there was a different kind of training ground.

Scattered across the internet were what people loosely called “hacker games”, “wargames”, or simply “challenges”. Sites like OverTheWire, HackThisSite, and aggregators like WeChall. They weren't polished, and they weren't trying to teach you step by step. You would open a challenge and feel slightly lost. Sometimes there were instructions, sometimes not. Sometimes the difficulty made sense, sometimes it didn't.

You were expected to figure it out anyway.

Progress in those environments felt different. There was no steady stream of feedback telling you that you were on the right track. You could spend hours going in the wrong direction without realizing it. And then, suddenly, something would click - a small detail, a strange behavior, a connection you hadn't seen before. The solution would unfold not because you followed a path, but because you built one.

Yes, there was validation. A password. A level cleared. But the real reward came a moment earlier, when things finally made sense.

That feeling is hard to replicate.

Modern CTFs changed the experience. Problems are categorized, difficulty is more predictable, and feedback is almost immediate. You learn to recognize patterns, apply known techniques, and move quickly. Over time, you become efficient. You know what to look for.

But that efficiency comes with a subtle trade-off.

You begin to expect clarity. You expect problems to be well-formed, solvable within a framework, and responsive to your actions. And in real systems, that's rarely the case. Things break in unexpected ways. Information is incomplete. Sometimes the hardest part isn't solving the problem - it's understanding what the problem even is.

That's where those older environments still matter.

They force you to slow down. To explore without direction. To keep going when nothing seems to work. They don't just test what you know - they test how you think when what you know isn't enough.

CTFs made us faster. There's no doubt about that.

But those early hacker games trained something else entirely. The ability to sit with uncertainty, to keep pulling at threads, and to trust that understanding can be built even when there's no obvious path forward.

If you've never experienced that, it's worth trying.

Not as a replacement for modern platforms, but as a complement to them.

Because in the end, speed helps you solve problems.

But depth helps you face the ones that don't even look like problems yet.

---

Also visit: https://quangntenemy.substack.com/p/the-joy-of-solving-without-guidance



Monday, April 20, 2026

A World Where Human Is the Suspected Creature



It always starts the same way.

You open a page. Maybe your email, maybe social media, maybe just trying to check something quickly.

And before you can proceed, you're stopped.

Not by complexity. Not by logic. But by suspicion.

"Verify that you are human."

Click the box. Select all images with traffic lights. Solve the puzzle. Prove your existence.

And for a brief second, something feels... inverted.

Because once, machines were the ones being tested.

---

There was a time when computers struggled to imitate us.

That was the whole point of the Turing Test: to see if a machine could pass as human.

Now the test has quietly flipped. The burden has shifted. We are the ones being interrogated, filtered, measured against patterns of behavior that define "humanness".

Not consciousness. Not intention. Just patterns.

Move your mouse too smoothly? Suspicious.

Type too fast? Suspicious.

Solve a problem too efficiently? Suspicious.

You begin to realize: the system isn't asking *who you are*.

It's asking whether you behave like the average.

---

And that's where things get uncomfortable.

Because the more skilled, focused, or unconventional you are, the more you deviate from that average.

And deviation, in a system built on statistical trust, starts to look like an anomaly. And an anomaly starts to look like a threat.

In other words: the more human you become - curious, efficient, unpredictable - the less "human" you appear to the system.

---

This is not just about CAPTCHA boxes.

It's about a quiet philosophical shift in how identity is defined in a digital world.

You are no longer recognized by your thoughts, your intent, or even your consciousness.

You are recognized by your *compliance with expected behavior*.

Humanity, reduced to a behavioral fingerprint.

And anything outside that fingerprint - no matter how authentic - becomes suspect.

---

There's a strange irony here.

We built machines to mimic us. Then we built systems to detect those machines.

And in doing so, we defined ourselves so narrowly that we started failing our own definitions.

The machine doesn't need to become human anymore.

It just needs to stay within the acceptable range.

---

So every time you click "I am not a robot", pause for a second.

Not because it's annoying. Not because it's trivial.

But because, in that moment, you are participating in a quiet ritual: proving your existence to a system that no longer trusts it by default.

A world where humans are the suspected creatures doesn't arrive with a bang.

It arrives with a checkbox.

Also visit: https://quangntenemy.substack.com/ for more interesting thoughts on IT world, cybersecurity and future of AI


Saturday, April 4, 2026

From ASM-Hater to Digital Archaeologist: How AI turned a 20-year-old assembly nightmare into a precision strike

I’ll be honest: I used to hate crackmes! A lot!

For years, the thought of diving into low-level Assembly (ASM) felt like a chore. Staring at dense hex dumps, manually tracking registers, and fighting through obfuscated logic was a "grind" I just didn't have the patience for. It felt more like a battle of attrition than a puzzle. If you’ve ever felt like you were looking at the world through a keyhole - one byte at a time - you know exactly what I mean.

But recently, that changed.

I decided to revisit a “cold case” - a Z80 assembly challenge from 2006 on TheBlackSheep. This thing had been sitting on a dusty shelf of the internet for nearly two decades, a tough challenge that had mocked researchers and frustrated players for years.

Back in 2006, the manual labor required to crack this was a nightmare. But today, the game has changed.

Monday, December 22, 2025

How Company Secrets End Up in ChatGPT (And How to Prevent It Without Blocking AI)



A developer just wanted to fix a problem faster.

They were debugging a query. The error message made no sense.
The documentation was outdated. As usual.

So they did what millions of capable employees now do every day:

They copied a real snippet from work.
Pasted it into ChatGPT. Got a clean, helpful answer.

Problem solved.
Ticket closed.
No alarms. No warnings.

And without realizing it, company secrets just left the building.

---

This isn't an employee failure

No one was careless.
No one was malicious.
No one thought twice.

Because nothing in the system told them they should.

This is the uncomfortable truth most companies avoid:
When smart people repeatedly do the same risky thing, the system is teaching them to do it.

---

Your DLP didn't fail. It was watching the wrong place.

Most security stacks are still designed for an older world.

They monitor:
  • Email attachments
  • File uploads
  • API traffic
  • Known SaaS destinations
But the leak didn't happen there.

It happened in a browser. Via clipboard. Through a prompt.

Copy → paste → submit.

That path bypasses most traditional controls completely.

So when teams say, "Our DLP failed", what they really mean is:
Our threat model never included this behavior.

---

Why blocking ChatGPT backfires

The reflex response is predictable:

"Block ChatGPT."
"Block Claude."
"Block all external LLMs."

On paper, this looks responsible.

In practice, it produces:
  • Personal device usage
  • Browser extensions
  • Smaller, fragmented pastes
  • Silence instead of questions
Risk doesn't disappear. It just becomes invisible.

And once engineers stop talking to security, you've lost the most important signal you had.

---

This is a system design problem, not an AI problem

Developers optimize for: Speed, Accuracy, Low friction

Security teams often optimize for: Control, Policy, After-the-fact detection

When those incentives collide, the faster system wins.

Every time.

So the real question isn't "How do we stop people?"
It's:
How do we redesign the system so the safe path is the fast path?

---

Step 1: Provide an approved AI path people actually want to use

An internal or enterprise-approved LLM only works if it's:
  • Fast
  • Reliable
  • Easy to access (SSO, no tickets)
  • Good enough to replace public tools
If the "safe" tool feels worse than ChatGPT, it will be ignored.

This isn't about trust. It's about usability.

People don't bypass controls to be rebellious. They bypass them to get work done.

---

Step 2: Stop trying to read prompts. Watch behavior instead.

Trying to inspect every prompt is a dead end.

You won't reliably see:
  • What was pasted
  • How it was transformed
  • Where it went
But you can see behaviors that matter:
  • Large clipboard copy events
  • Copying from production systems into browsers
  • Structured data patterns
  • Sudden changes in paste volume
You don't need the content to detect the risk.

Attackers already know this.
Defenders are just catching up.
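
The content-free signals above can be combined into a very small heuristic. This is a minimal sketch under stated assumptions: the event shape, field names (`source_app`, `dest_app`, `chars`), and the thresholds are all illustrative, not any particular monitoring product's API. The point is that size, destination, and deviation from a per-user baseline are enough to flag risk without ever reading the pasted content.

```python
# Minimal sketch: flag clipboard events whose size is anomalous versus a
# per-user baseline. Event fields and thresholds are illustrative assumptions.

from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class ClipboardEvent:
    user: str
    source_app: str   # e.g. "psql", "vscode"
    dest_app: str     # e.g. "browser"
    chars: int        # size of the copied text


def is_suspicious(event: ClipboardEvent, history: list[int],
                  min_chars: int = 2000, z_cutoff: float = 3.0) -> bool:
    """Content-free heuristic: a large copy into a browser, far above baseline."""
    if event.dest_app != "browser" or event.chars < min_chars:
        return False
    if len(history) < 5:
        # Not enough baseline yet: the absolute size alone decides.
        return True
    mu, sigma = mean(history), stdev(history)
    return event.chars > mu + z_cutoff * max(sigma, 1.0)
```

In practice you would feed this from an endpoint agent's clipboard telemetry and tune `min_chars` and `z_cutoff` per team, but the structure stays the same: behavior in, risk signal out, no prompt inspection required.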

---

Step 3: Keep secrets from appearing on screens in the first place

The most effective control is also the least glamorous:

Don't expose raw secrets unless absolutely necessary.

That means:
  • Masking sensitive fields by default
  • Tokenizing internal identifiers
  • Treating "view" as a privilege, not a default
  • Restricting full production outputs
If someone never sees the secret, they can't paste it.

This is boring security.

It's also the kind that works.
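
"Mask by default, view as a privilege" can be sketched in a few lines. This is a hypothetical example, not a specific framework's API: the sensitive field names and the boolean privilege check are assumptions standing in for whatever your access-control layer provides.

```python
# Minimal sketch of "mask by default": a display formatter that redacts
# sensitive fields unless the caller holds an explicit view privilege.
# Field names and the privilege flag are illustrative assumptions.

SENSITIVE = {"password", "api_key", "ssn", "email"}


def mask(value: str, keep: int = 2) -> str:
    """Show only the first `keep` characters; redact the rest."""
    if len(value) <= keep:
        return "*" * len(value)
    return value[:keep] + "*" * (len(value) - keep)


def render_record(record: dict, can_view_sensitive: bool = False) -> dict:
    """Return a display copy; secrets never reach the screen by default."""
    return {
        k: (v if can_view_sensitive or k not in SENSITIVE else mask(str(v)))
        for k, v in record.items()
    }
```

Because the UI only ever receives the output of `render_record`, the raw secret never appears on screen, so it can never end up on a clipboard.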

---

Step 4: Train instincts, not compliance

Most AI training fails because it sounds like legal language.

"Employees must not input confidential information into AI tools."

That sentence does not survive:
  • Deadlines
  • Curiosity
  • Pressure
A better rule is simpler:

If it would trigger an incident report, it doesn't belong in a prompt.

No flowcharts.
No policy PDFs.
Just a mental shortcut people can actually use.

---

Step 5: Explain the risk in executive language

Executives don't need to understand tokens or embeddings.

They understand this immediately:
AI prompts are unlogged outbound data transfers with no recall.
Once the risk is framed that way:
  • Budget appears
  • Tradeoffs become explicit
  • Ownership becomes clear
Not because of fear.
Because of clarity.

---

The real lesson

This wasn't a junior developer problem.
It wasn't an AI problem.
It wasn't negligence.

It was a system built for a world where copy-paste wasn't a data exfiltration vector.

That world is gone.

The prompt is the new USB drive.

And if you're not actively redesigning for that reality, there's a good chance this is already happening inside your company - quietly, efficiently, and with the best intentions.

That's what makes it dangerous.

Thursday, December 18, 2025

A small moment that meant more than expected

A friend lost her phone.
As many of us know, a phone today isn't just a device - it's access to photos, messages, work tools, banking apps, and daily routines.

I helped her lock things down.
Passwords were changed, accounts secured, risks contained.
We remotely erased all data on the device and locked it completely.
Whatever was lost, it won't be misused.

The good news: her data is secure.
The difficult part: what was on that phone can't be recovered.
Safe doesn't always mean reversible.

Later, she gave me a gift.
A small ceramic piece - simple, thoughtful, made by hand.

It was a quiet reminder.

We work in a fast, digital world where systems can usually be fixed.
But trust, care, and real human gestures still matter just as much.

Sometimes the most meaningful outcomes aren't measured in recovery -
but in knowing the right steps were taken, at the right time.

Also visit: https://quangntenemy.substack.com/p/a-small-moment-that-meant-more-than