The Myth of "Bad People"
*Last updated on November 8, 2025*
Most of us move through life with a certainty that the world is divided into good and bad people. We may not say it out loud, but the assumption shapes how we interpret our own and others’ actions—in relationships, in everyday life, in politics, in history. We want to be good, of course, and we want to be surrounded by good people—those who are kind, trustworthy, and safe. When something goes wrong, whether in our personal lives or in society, we instinctively look for someone to blame: the bad friend, the bad parent, the bad politician—the bad actor in the story. It feels natural to imagine that if we could only keep those people away, the world would be fixed.
This way of thinking runs deep. It is woven into the stories that have guided human imagination for millennia. From myths and fairy tales to novels, movies, and news headlines, narratives very often feature heroes and villains, victims and perpetrators, saviors and destroyers. The appeal is obvious. A world divided into good and bad is easy to understand. It offers emotional clarity: someone is right, someone is wrong, and the conflict between them gives meaning to our experiences.
Yet even as this moral logic feels familiar, something about it obscures more than it reveals. Life rarely unfolds as neatly as our stories suggest. People who hurt us can also love deeply; those who do harm can still believe they are protecting what matters most. And while we are quick to spot the “bad people” in the world, we almost never see ourselves among them.
The idea that some people are simply bad may bring comfort, but it also limits our understanding. It prevents us from seeing how easily any of us could cause harm, given the right mix of fear, conviction, or circumstance. Still, it is difficult to question this assumption—because it is one of the oldest and most reassuring stories humanity has ever told about itself.
The Problem with the Binary
If the idea of good and bad people feels so natural, it is partly because it once served a purpose. Long before written language or complex societies, our ancestors needed quick judgments to survive. In the wild, hesitation could cost a life. The brain learned to sort fast—especially when deciding who was safe and who was not—a habit that later shaped our moral snap judgments. Those old reflexes are still with us today; studies show that people often form impressions of trustworthiness or threat in mere milliseconds. When someone betrays our trust or acts in ways we find incomprehensible, our minds leap to the same binary code: good/bad, us/them.
But what helped early humans stay alive keeps modern societies trapped in oversimplified stories. A moral label flattens the complexity of motives, emotions, and circumstances that drive behavior. It replaces curiosity with certainty. The moment we decide that someone is bad, we stop asking why they acted as they did (a habit psychologists describe as the fundamental attribution error). We no longer see context; we do not care to look for it. And once we fix that judgment, it becomes nearly impossible to imagine that person in the fullness of their motives and circumstances.
Scientific disciplines generally do not treat “bad person” as an analytic category. Psychologists study behaviors, contexts, and patterns rather than moral essences. Sociologists and historians trace the forces that shape individual choices—our cognitive biases and evolved instincts, our personal dispositions and emotional habits, the cultures and histories that form us. Even when scholars describe destructive actions, they tend to avoid calling someone a “bad person,” because such a label explains nothing; it merely translates our feelings into moral language.
That should be a hint for all of us. The “good vs. bad” binary does not take us very far in understanding the world. It offers moral clarity but hides the tangled causes that make people act as they do—including ourselves.
The Self-Perception Paradox
In stories, villains often know they are villains—they take pleasure in cruelty or boast about their evil plans. But in life, things look very different. Most people who cause harm see their actions as justified, necessary, or even noble—a pattern well documented in psychology as motivated reasoning and moral disengagement. The same person who is condemned by some as a monster may see themselves as a protector, a savior, or a victim fighting back—and others may see them that way too.
History is full of such paradoxes. Leaders responsible for oppression or violence often believe they are serving the greater good. Individuals who commit betrayal may convince themselves they are acting out of loyalty to a higher cause. Even small, everyday conflicts reveal the same pattern: when we hurt someone, we almost always have reasons that make sense to us in the moment—a reflection of the self-serving bias that helps preserve our sense of goodness even when we cause harm. This need to see ourselves as good reflects a deeper drive for cognitive consistency—the mind’s effort to keep our self-image and actions in harmony. To ourselves, we typically remain the good ones; when that self-image collapses, it brings not peace or honesty but shame, confusion, and the sense that something has gone terribly wrong.
This realization is unsettling: if nearly everyone believes they are good, then goodness itself becomes a matter of perspective. What one person calls justice, another calls cruelty. What one sees as courage, another sees as fanaticism. The moral categories that once seemed so solid begin to dissolve.
Yet this insight is not meant to erase responsibility. Instead, it reminds us that moral certainty can be dangerous. The belief “I am good, therefore my actions are justified” has accompanied some of humanity’s darkest chapters. When we see ourselves as purely good, we stop questioning the effects of what we do. But when we let go of the need to be good, we can begin to be honest—with ourselves and with others—about the complexity of human motives.
The Knot Metaphor
Every person is like a knot in an enormous web of causes and effects. Our lives are woven into patterns shaped by genetics and development, family, culture, history, and circumstance. When we look closely at anyone’s actions—including those that cause harm—we find that what seems like a clear choice often grows out of this dense, tangled fabric. A single thread never explains the knot.
The people we call bad never act in isolation, however much it may appear that way. Their decisions are linked to what came before them: to the beliefs they inherited, the systems that rewarded or pressured them, the fears that shaped them. Even the most shocking acts of cruelty emerge from networks of influence—biological, psychological, cultural, historical—that extend far beyond one individual, as sociology and complexity science both remind us. Seeing this does not absolve anyone of responsibility, but it underscores that responsibility is shared. The conditions that make harm possible arise from systems we all participate in and sustain, not from one person alone.
When we stop imagining the capacity for harm as a property of certain people and begin to see it as a potential that moves through systems, our understanding deepens. The questions change. We no longer ask Who is to blame? but What made this possible? What patterns keep repeating, and where could they be interrupted to prevent further suffering?
It can be unsettling to think this way. The idea of “bad people” allows us to feel separate from harm, safely outside the web of causes and effects that connects us all. But when we recognize ourselves as part of it, we also recognize our capacity to influence it. Every knot tugs on others. Awareness itself becomes a form of responsibility.
The Moral Discomfort
To suggest that no person is simply good or bad feels almost dangerous. It sounds like an attempt to blur moral boundaries, to excuse cruelty or deny the suffering of victims. Many may recoil from this idea instinctively, fearing that if we stop calling certain people bad, we will also stop defending what is good. The discomfort is understandable. For most of us, moral judgment is how we make sense of pain and injustice. Psychologists note that this impulse reflects the just-world belief—the need to see the world as fair and to locate wrongdoing in bad people rather than in chance or circumstance.
Yet refusing to label people as bad does not mean erasing morality. It means refining it. When we move beyond the simple binary, we do not lose our moral compass; we gain a clearer view of how suffering arises and how it might be reduced. The recognition that harmful actions emerge from complex causes does not weaken responsibility; it deepens it. It asks us to respond not with denial or vengeance, but with understanding and prevention.
History shows how moral certainty can itself become dangerous. When people are convinced that they represent pure good, almost any action can be justified in the name of righteousness. The belief “we are the good ones” has justified wars, persecution, and cruelty in every era. By contrast, humility—the awareness that none of us is entirely good or beyond harm—creates space for compassion and restraint.
Toward a More Nuanced Language
“Bad person” is a convenient but limiting label. When we use it to describe somebody, what we usually mean is that their actions have caused harm. That harm might be physical, emotional, or social; it might affect one individual or millions. Most of us, if we look closely, use “bad” to express pain: This person hurt me or someone I care about.
The moral label may feel necessary, but it prevents us from seeing a complicated mix of circumstances and consequences. If we step back and look at human behavior through a more analytical lens, the picture changes. Actions arise from multiple, interacting influences—genetic tendencies, upbringing, trauma, social norms, fear, ideology, chance. What seems like a deliberate moral failure is, in fact, the product of conditions we often fail to see. Exploring them doesn’t mean denying accountability; it means recognizing that responsibility exists within a web of causes rather than outside it.
Even when someone appears to act with clear intent, we can still ask what made that intent possible. What allowed this person to see harm as necessary, justified, or unavoidable? What inner logic, fear, or story—and what limits of feeling or understanding—shaped their sense of what was right? Asking these questions does not excuse harm; it redirects our focus from blame to insight. Instead of deciding who is bad, we begin to look at what led to harm—and, eventually, what might prevent it.
If the words “good” and “bad” fail to capture the truth of human behavior, what should replace them? Perhaps nothing so absolute. Instead of judging people, we can describe actions, consequences, and the conditions that give rise to them. We can ask what kind of suffering produced more suffering, what fears or beliefs led to harm, what misunderstandings still need to be corrected. This shift from judgment to description does not make morality obsolete—it grounds it in awareness rather than assumption. It also echoes practices in restorative justice, which focus on harm, responsibility, and repair rather than on moral condemnation.
Language really matters. When we stop saying “bad person” and start saying “person whose actions caused harm,” our focus changes. We no longer argue about who deserves blame; we begin to see what can be learned. We notice how the same impulses—fear, pride, love, desperation—move through all of us in different forms. We become less interested in separating ourselves from others and more interested in recognizing the shared patterns that connect us.
This kind of moral language is slower, less dramatic, but more useful. It invites humility and compassionate curiosity—the qualities most needed to address the problems that moral certainty alone cannot solve. It opens the possibility that by seeing others more clearly, we might also see ourselves. Maybe the real progress of our moral imagination lies in learning to see that all people, including ourselves, are capable of both harm and healing.
About this project: Start page