You might be thinking: what a strange title for an article. But stick with me—I promise by the end it will make sense.
Just as we divide physical activities into “easy” and “hard”—walking is easy, pull-ups are hard—mathematics and computer science also have their own way of sorting problems into these two baskets.
For example, computers find some tasks very easy: multiplying numbers, or sorting a list of numbers from smallest to largest. But there are other tasks that are super hard even for the fastest supercomputers.

Let’s go back to pull-ups. If you’ve ever tried them, you know the pattern: the first rep is doable, the second is tough, the third feels impossible, and the fourth… forget it. The effort doesn’t grow steadily—it skyrockets. Walking, in contrast, gets harder in a much more predictable, gradual way.
This is exactly how mathematicians think about computational problems: not just whether a single task is hard, but how the difficulty grows as the task gets bigger:
If the effort only increases steadily (like walking), these are called P problems. They’re considered “easy” for computers: even as the input grows, the workload grows at a manageable, predictable rate.
But then there are problems where the effort explodes, just like with pull-ups (math people call it exponential).
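To see just how different these two kinds of growth are, here is a tiny sketch. The formulas are illustrative stand-ins: n squared for a steady, walking-style workload, and 2 to the power n for a pull-up-style one.

```python
# Illustrative comparison: polynomial ("walking") vs exponential ("pull-up")
# growth. n**2 stands in for an easy P-style task, 2**n for one whose
# effort explodes.
for n in [10, 20, 30, 40]:
    print(f"n={n:>2}  steady (n^2): {n**2:>5}  explosive (2^n): {2**n:>15,}")
```

By n = 40 the steady column has reached 1,600 while the explosive column has passed a trillion. That gap is the whole story of this article.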
Imagine you are doing parcel deliveries. Three parcels, three addresses, and one goal: find the route that minimizes your driving so you finish sooner. The possible routes are: 1-2-3, 1-3-2, 2-1-3, 2-3-1, 3-1-2, 3-2-1. That’s 6 possibilities, so you can simply check them all and pick the best. But with 5 parcels, there are 120 routes. With 10, the number jumps to over 3 million. At 20, there are over 2,400,000,000,000,000,000 possible routes. That’s the point where you realise brute force won’t work, no matter how fast your computer is. Problems like this fall into a class called NP problems.
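The check-them-all approach for three parcels can be sketched in a few lines. The distance table below is made up purely for illustration; 0 is your starting point and 1, 2, 3 are the delivery addresses.

```python
from itertools import permutations

# Made-up pairwise distances between the depot (0) and three addresses.
dist = {
    (0, 1): 4, (0, 2): 7, (0, 3): 3,
    (1, 2): 2, (1, 3): 6, (2, 3): 5,
}

def d(a, b):
    # Distances are symmetric, so store each pair once.
    return dist[(min(a, b), max(a, b))]

def route_length(route):
    # Start at the depot, then visit every address in the given order.
    stops = (0,) + route
    return sum(d(a, b) for a, b in zip(stops, stops[1:]))

# Brute force: try all 3! = 6 orderings and keep the shortest.
best = min(permutations([1, 2, 3]), key=route_length)
print(best, route_length(best))
```

The catch is that `permutations` hands back n! orderings, so this exact code that breezes through 3 parcels would grind to a halt long before 20.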

But there’s an even more curious twist. What NP problems have in common is that they’re tricky to solve but easy to check once you have an answer. And within NP sits a special group, the hardest of them all: the NP-complete problems. Take Sudoku. Solving a puzzle can take ages (and a lot of erasing), but checking whether a finished grid is correct is almost effortless. That’s the hallmark of these problems: verifying is simple, but finding the solution is super hard. And Sudoku isn’t alone: protein folding, a problem central to understanding diseases like cancer, also falls into this class.
Note: 1. We’re not talking about the usual 9×9 Sudoku, but about how the difficulty grows with bigger and bigger grids. 2. All NP-complete problems share the same level of difficulty, so efficiently solving one would effectively solve all of them!
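The “checking is effortless” half of the story is easy to demonstrate. Here is a minimal verifier sketch for a finished grid; it uses a 4×4 example (the smallest interesting size) so the grid fits on screen, but the same function works for 9×9 and beyond.

```python
from math import isqrt

def is_valid_sudoku(grid):
    """Check a completed n x n Sudoku in one quick pass: every row,
    column, and box must contain each of 1..n exactly once."""
    n = len(grid)
    b = isqrt(n)                      # box size: 2 for 4x4, 3 for 9x9
    target = set(range(1, n + 1))
    rows = all(set(row) == target for row in grid)
    cols = all(set(col) == target for col in zip(*grid))
    boxes = all(
        {grid[r + i][c + j] for i in range(b) for j in range(b)} == target
        for r in range(0, n, b) for c in range(0, n, b)
    )
    return rows and cols and boxes

# A made-up, already-solved 4x4 grid for illustration.
solved = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]
print(is_valid_sudoku(solved))
```

Notice the asymmetry: this checker scans the grid once and is done, while actually *filling in* a large empty grid may force you to backtrack through an exploding number of possibilities.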
Here’s the million-dollar question—literally: how do we actually know Sudoku is hard for computers? The truth is, we don’t. What we can say is that nobody, despite decades of effort, has discovered a shortcut algorithm to solve it efficiently. Does that mean no shortcut exists? Or just that humanity hasn’t been clever enough yet? We honestly don’t know. This open puzzle, whether NP problems can ever be solved efficiently, is one of the legendary Millennium Prize Problems set by the Clay Mathematics Institute. Crack it, or prove it can’t be cracked, and you walk away with a million-dollar prize.

Still, the fact that so many brilliant minds have failed to find a faster algorithm strongly hints that these problems may be genuinely hard, just like pull-ups are for humans.
So should we give up? Not at all. What else can we do? That’s a story for the next episode—stay tuned!