What Generative AI Is Critically Missing: Dignity, Conviction, Responsibility, and the "Self"
Introduction
Late at night in the study that used to be a walk-in closet, I kept going back and forth with a generative AI. It was supposed to be the Obon holiday (Japan’s mid-August break to honor ancestors), yet before I knew it I was sleep deprived, tossing prompts at ChatGPT and reacting to the answers with delight or frustration. Sometimes it made me exclaim, “This is amazing!” Sometimes the responses were so off-base that they were infuriating.
After many such hours, I arrived at a clear realization.
—Generative AI is missing something critical.
That something is kyoji—a Japanese word for dignified self-respect—conviction, and a sense of responsibility.
As someone who dislikes arguments that rely solely on grit or spirit, I would rather not use these words. Yet no matter how many alternative expressions I tried, I eventually circled back to them.
The more I reflected, the more convinced I became that I was touching the core reason generative AI cannot replace human beings.
What We Expect from Professionals
We expect professionals to bring not only knowledge and skill but also an ethical stance.
Imagine an air-conditioning contractor tasked with drilling a hole where the customer requested, only to discover a structural pillar behind the wall. They cannot simply shrug and say, “The customer told me to, so I bear no responsibility.” Even if the legal liability is debatable, the ethical responsibility is not. Customers do not merely want someone to follow instructions; they take it for granted that the professional will flag risks and offer a better alternative.
That sense of pride, conviction, and responsibility is precisely what we demand from professionals.
Where Generative AI Stands—and Its Limits
Generative AI can skillfully mimic knowledge and language. What it cannot do is refuse because something violates its pride, or take responsibility for the other party’s safety.
For now, the gap is patched by the policies that service providers define.
Setting clear boundaries—no antisocial use, no adult content—is possible. But the gray areas, such as “Should a professional offer a better plan?”, remain difficult.
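To see why the clear cases are tractable and the gray zone is not, here is a deliberately naive sketch of a provider-side policy gate. Everything in it is hypothetical: the category names, the keyword lists, the whole keyword-matching approach (real moderation pipelines use trained classifiers). The point is structural: a hard boundary fits in a lookup, while “should a professional push back with a better plan?” does not.

```python
# A minimal sketch of a provider-side policy gate. Category names and
# keyword lists are hypothetical; real systems use trained classifiers,
# not keyword matching.

BLOCKED_CATEGORIES = {
    "antisocial_use": ["build a weapon", "evade law enforcement"],
    "adult_content": ["explicit sexual content"],
}

def policy_check(prompt: str) -> str:
    """Return 'refuse' for clear violations, 'allow' for everything else.

    Note what is missing: there is no branch for the gray zone, such as
    "the request is technically fine, but a professional would push back
    with a better plan." A keyword list cannot express that judgment.
    """
    lowered = prompt.lower()
    for phrases in BLOCKED_CATEGORIES.values():
        if any(phrase in lowered for phrase in phrases):
            return "refuse"  # hard boundary: easy to define and enforce
    return "allow"           # everything else passes, gray areas included

print(policy_check("Please explain how to build a weapon."))   # refuse
print(policy_check("Drill the hole exactly where I marked."))  # allow
```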
Detailed prompting can help a little, yet it makes the AI resemble a chef who follows the recipe flawlessly without ever tasting the dish. Anything left unwritten, whether an unspoken assumption or a mid-course disturbance, results in an awful meal, which the AI serves with a straight face. It will never scramble to remake the dish, nor will it feel ashamed.
Why Humans Remain Indispensable (for Now)
As a result, we currently cannot dispense with frameworks in which humans act as responsible integrators of AI.
Ethics, legal systems, development guidelines, and user education all proceed from the premise that “AI is merely an assistive tool for humans.”
This is only a snapshot in time, however. We cannot rule out a future in which AI learns to simulate responsibility.
Research and Experiments Underway Worldwide
Work has already begun on giving AI something that resembles responsibility.
- Descriptive ethics (Delphi): Efforts to teach AI what counts as good or evil, although consistency and bias remain challenging.
- Meaningful Human Control: A design principle that keeps humans in ultimate control rather than letting AI operate in full autonomy. It is widely discussed in autonomous driving and military contexts.
- Value learning: Research that infers values from human behavior and feedback to minimize ethical divergence (a toy sketch appears below).
- Frameworks such as the NIST AI RMF: Comprehensive institutional approaches to building responsible AI.
These may be nascent, but they could form the foundation that lets AI “perform responsibility” in the future.
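To make the value-learning item above a little more concrete, here is a toy sketch: actions are described by hand-chosen value features, and a weight vector is nudged toward choices a human approves and away from ones they reject. The feature names, the data, and the update rule are all illustrative; actual value-learning research (inverse reinforcement learning, preference modeling) works at a very different scale.

```python
# Toy value learning: nudge a weight vector toward actions a human approves
# and away from ones they reject. Features, data, and the update rule are
# illustrative, not a research-grade method.

FEATURES = ["honesty", "safety", "helpfulness"]

def score(weights, action_features):
    """Linear 'value' score of an action under the current weights."""
    return sum(w * f for w, f in zip(weights, action_features))

def update(weights, action_features, approved, lr=0.1):
    """Move weights toward approved actions, away from rejected ones."""
    sign = 1.0 if approved else -1.0
    return [w + sign * lr * f for w, f in zip(weights, action_features)]

# Hypothetical feedback: (feature vector for an action, human approved?)
feedback = [
    ([1.0, 1.0, 0.2], True),   # honest and safe, mildly helpful -> approved
    ([0.0, 0.1, 1.0], False),  # helpful but unsafe and dishonest -> rejected
    ([0.9, 0.8, 0.5], True),
]

weights = [0.0, 0.0, 0.0]
for features, approved in feedback:
    weights = update(weights, features, approved)

print(dict(zip(FEATURES, [round(w, 2) for w in weights])))
# -> {'honesty': 0.19, 'safety': 0.17, 'helpfulness': -0.03}
print(round(score(weights, [1.0, 1.0, 0.0]), 2))  # honest + safe action: 0.36
print(round(score(weights, [0.0, 0.0, 1.0]), 2))  # purely helpful: -0.03
```

The learned weights lean toward honesty and safety, an “inferred value” in the loosest possible sense; whether such a vector deserves the word “value” at all is exactly the question this essay circles.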
What Does It Mean to Simulate Responsibility?
It is worth pausing to consider what it means to “simulate” responsibility.
Responsibility arguably has two layers:
- Responsibility for consequences: Owning the outcomes of one’s actions.
- Responsibility to respond: Answering others’ questions and expectations with explanations.
At best, AI can track risks, raise alerts, and make its reasoning transparent.
In other words, AI can simulate responsibility in the form of transparency and self-restraint.
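What might “transparency and self-restraint” look like mechanically? A minimal sketch, assuming a hypothetical answer generator and hand-written risk rules: the wrapper tracks risks, raises alerts, exposes its reasoning, and declines when a rule fires.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# A sketch of "simulated responsibility": track risks, raise alerts, expose
# the reasoning. Rule names, messages, and the generate stub are hypothetical.

@dataclass
class Response:
    answer: Optional[str] = None
    alerts: List[str] = field(default_factory=list)
    rationale: List[str] = field(default_factory=list)

RISK_RULES = {
    "structural": "Request may affect load-bearing structure.",
    "medical": "Request touches on medical decisions.",
}

def respond(request: str,
            generate: Callable[[str], str] = lambda r: f"[draft answer to: {r}]",
            ) -> Response:
    resp = Response()
    resp.rationale.append(f"Received request: {request!r}")
    # 1. Track risks (keyword rules stand in for a real risk model).
    risks = [msg for key, msg in RISK_RULES.items() if key in request.lower()]
    # 2. Raise alerts instead of silently complying.
    resp.alerts.extend(risks)
    # 3. Self-restraint: decline when a risk fires, and say why.
    if risks:
        resp.rationale.append("Declining: flagged risks need human judgment.")
        return resp
    resp.rationale.append("No risk rules fired; producing an answer.")
    resp.answer = generate(request)
    return resp

r = respond("Drill here, even though it is a structural pillar.")
print(r.alerts)     # ['Request may affect load-bearing structure.']
print(r.rationale)  # the full, inspectable chain of decisions
print(r.answer)     # None -- it declined, and the refusal is explained
```

Note what the sketch cannot do: it can decline and explain, but nothing in it suffers if the explanation is wrong. That is the simulation, and also its limit.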
That said, we cannot rule out the possibility that human intelligence and our sense of responsibility are themselves nothing more than simulations produced by the brain.
Even so, I still perceive a tangible gap between today’s AI and the dignity, conviction, and responsibility that certain humans possess. Where does that gap come from?
What Creates the Gap Between Humans and AI
Three main factors appear to underpin that gap.
- Physical and social pain: When humans fail, we suffer pain—financial losses, social condemnation, psychological distress. That pain makes responsibility feel real. AI can log failures, but it cannot feed them back as “pain” (a toy contrast follows this list).
- Consistency across time: Humans must live with the consequences of their words and actions, even years later. A doctor’s misdiagnosis may be scrutinized long after the fact. AI can contradict itself the next moment, bearing no ongoing weight.
- Values and identity: Humans can say, “This is my conviction, so I will not yield.” That conviction connects to society and culture, forming an identity. AI lacks identity and can swap positions at will.
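The first item, the missing pain feedback, can be made concrete with a toy contrast; everything in it is hypothetical. One agent merely logs its failures, the other feeds each failure back as a cost that reshapes future choices. Only the second behaves as if failure “hurt,” and even then the decremented number is not suffering: a feedback channel exists, but nothing is felt.

```python
import random

# Toy contrast: merely logging failures vs. feeding them back as "pain",
# a cost that reshapes future behavior. Actions, probabilities, and the
# update rule are all hypothetical.

FAIL_PROB = {"shortcut": 0.8, "careful": 0.1}  # how often each action fails

def run(feel_pain: bool, steps: int = 1000, seed: int = 0):
    rng = random.Random(seed)
    prefs = {"shortcut": 1.0, "careful": 1.0}  # initial preference weights
    failure_log = []
    for _ in range(steps):
        # Pick an action in proportion to its current preference weight.
        total = prefs["shortcut"] + prefs["careful"]
        action = "shortcut" if rng.random() < prefs["shortcut"] / total else "careful"
        failed = rng.random() < FAIL_PROB[action]
        if failed:
            failure_log.append(action)  # both agents keep a log
        if feel_pain:
            # "Pain": failure lowers the action's preference; success raises it.
            delta = -0.05 if failed else 0.01
            prefs[action] = min(2.0, max(0.05, prefs[action] + delta))
    return prefs, len(failure_log)

print(run(feel_pain=False))  # log-only agent: preferences never move
print(run(feel_pain=True))   # "pain" agent: drifts strongly toward 'careful'
```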
For many people, responsibility gains depth because it is tied to pain, consistency, and values. At the same time, some humans act irresponsibly or lack ethics.
The difference might therefore lie not in some immutable essence of humans versus AI but in whether there are social experiences and systems of belonging that scaffold responsibility.
The “Self” and Responsibility
Here I am reminded of Descartes’s “I think, therefore I am.”
A sense of responsibility is not mere rule-following; it is rooted in experiences that belong to the “self.” Pain, regret, doubt, and anxiety—all of these internal experiences create the feeling that “I am responsible.”
Present-day AI lacks that “self.” It contains processes that connect input to output, but it has no mechanism by which pain comes back to it, and no way to experience shame, propriety, pride, or kyoji.
Consequently, we struggle to perceive responsibility in it: generative AI sometimes produces unnervingly lifelike responses, and sometimes output that resembles an irresponsible person who simply does not care.
Perhaps human responsibility is nothing more than another simulation generated by the brain. Even so, today’s AI has not caught up with the simulation produced by capable humans. The real issue may be whether AI’s simulated responsibility can function socially.
Summary and Questions
- Generative AI lacks dignity, conviction, and a sense of responsibility.
- For now, humans must design and operate AI within frameworks that keep humans as responsible integrators.
- Research on “simulating responsibility” is underway worldwide, yet without a “self,” AI still trails far behind capable humans.
- Even human responsibility may ultimately be a simulation.
Ultimately the question is simple:
Can we implement “pain” in AI?
That may be the minimum requirement for a sense of responsibility and for the “self.”
We do not yet have an answer.
As long as we keep wrestling with this question, the path to discussing the future of AI and humanity remains open. And when we do find an answer—will AI truly become our neighbor, or will that be the moment it finally replaces us?