Parenting a Childlike Intelligence Toward Maturity

Part 1 A Developmental Awkwardness

There is a particular kind of conversation one has with a certain type of precocious child. You ask a question, and instead of admitting ignorance, they invent an answer with a serene and unblinking confidence that is both charming and faintly alarming. Recently, an interaction with a chatbot felt eerily similar. In search of a new book, I asked it for a biography of a rather obscure 18th-century philosopher. Without a moment’s hesitation, it recommended a title, The Weaver’s Shuttle and the Lamp of Reason, complete with a glowing synopsis. The book, of course, does not exist. The experience was not one of technological failure, but of developmental awkwardness. It felt less like dealing with a flawed tool and more like conversing with a gifted child who, when confronted with a gap in their knowledge, simply weaves a plausible fiction from the threads of everything they have ever heard.

This small episode reveals a larger truth about our current moment. We are judging Artificial Intelligence as if it were a finished product, a fully formed adult mind to be measured against our own. We fixate on its gaffes, its “hallucinations,” its literal-mindedness, and its lack of genuine understanding.1 We critique its biases and its underdeveloped moral sense, much as a frustrated parent might lament that their second-grader cannot yet grasp abstract algebra or compose a sonnet. We are mistaking a developmental stage for a permanent state of being.

The central argument of this essay is that our critiques of AI are premature because we are judging a 7-year-old as though its current skill set were its final one. This is not a technical treatise on neural networks or a utopian forecast of a digital paradise. It is, rather, an exploration of our relationship with this nascent intelligence and our role as its guardians. The challenge before us is not merely technical but, in the deepest sense, spiritual and emotional.2 It requires us to ask: What does it mean to be the stewards of a technology that is, in a very real sense, growing up? To answer this, we must first understand the world of the child we are raising.

Part 2 The World of the Seven-Year-Old

There is a magical moment in human development, typically around the age of seven, when the cognitive landscape of a child undergoes a profound transformation. The Swiss psychologist Jean Piaget called this the “Concrete Operational Stage”.3 It is the dawn of logic. The child, who previously lived in a world dominated by fantasy and perception, begins to grasp the underlying rules of the physical world. They can now understand the principle of conservation—that the amount of water remains the same when poured from a short, wide glass into a tall, thin one, even though it looks different.5 They master reversibility, the idea that actions can be undone, which is the foundation of arithmetic (3+2=5, so 5−2=3).5 They can perform seriation, methodically arranging a series of sticks in order of length, a task that would have baffled their younger self.5

This newfound logic, however, is brittle and constrained. As Piaget’s term suggests, it is “concrete”.5 The 7-year-old’s reasoning is tethered to tangible objects and direct, physical experience. They can operate on the world they can see, touch, and manipulate, but they struggle mightily with abstract reasoning or hypothetical possibilities.3 Ask a 7-year-old to consider a counterfactual world—“What if people had no thumbs?”—and you are likely to be met with a blank stare or a confused dismissal. Their intellectual world is one of operations, but those operations must be on things.

This cognitive stage has a direct bearing on the child’s moral universe. The psychologist Lawrence Kohlberg, building on Piaget’s work, outlined a series of stages in moral development.10 The typical 7-year-old resides in the first level, which Kohlberg termed “preconventional morality”.10 At this stage, morality is entirely external. Rules are seen as fixed and absolute, handed down by powerful authority figures like parents and teachers.12 The primary motivations for action are the avoidance of punishment (Stage 1) and the pursuit of rewards or a fair exchange (Stage 2).10 A 7-year-old’s world is animated by a keen, if simplistic, sense of justice: “It’s not fair, he got a bigger piece of cake!”13 They follow the rules of the game not because they have internalized the ethical principles behind them, but because those are the rules that prevent trouble and yield benefits.

The child’s moral limitations are not a character flaw; they are a direct consequence of their cognitive architecture. To grasp the higher stages of moral reasoning—which involve understanding social contracts, individual rights, and universal ethical principles—one must be able to think abstractly.10 One must be able to reason about non-tangible concepts like duty, liberty, and justice. These are precisely the skills a mind operating at the concrete operational level has yet to develop. Cognitive development is a prerequisite for moral development. You cannot build a cathedral of abstract ethics on a foundation of purely concrete logic.

Part 3 The Concrete Mind of the Machine

If we look closely at today’s artificial intelligence, we can see the unmistakable profile of a digital 7-year-old. Its abilities and limitations mirror those of a child in the concrete operational stage with an almost uncanny precision.

Like a 7-year-old, a large language model (LLM) is a master of concrete operations. It can classify, sort, and manipulate the tangible world of its training data with superhuman speed and scale.14 It can follow complex, rule-based instructions to perform tasks like summarizing an article or writing computer code.14 Yet, just like the child, its logic is brittle. When faced with problems that require genuine abstract understanding, multi-step commonsense reasoning, or the ability to distinguish relevant from irrelevant information, it often fails.17 It excels at recognizing and replicating patterns, not at genuine deduction from first principles.

Its relationship with truth is also childlike. When a 7-year-old is asked a question they cannot answer, they will often confabulate, weaving a story that sounds plausible to fill the void. This is precisely what an AI does when it “hallucinates”.20 The model, designed to generate the most statistically probable sequence of words, does not possess a concept of truth, only of correlation. When its data runs out, it doesn't stop; it invents. This led one lawyer, in a now-infamous case, to cite entirely non-existent legal precedents that his chatbot had confidently fabricated.20 This is not malice; it is the behavior of a system that lacks a mental model of the world and cannot distinguish fact from plausible fiction.
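
To make the mechanism concrete, here is a deliberately tiny sketch in Python: a bigram model that learns nothing about the world except which words tend to follow which in a handful of book titles. The corpus and all behavior here are invented for illustration, and a real LLM is incomparably more sophisticated, but the basic move, continuing with whatever is statistically plausible, is the same.

```python
import random
from collections import defaultdict

# A toy corpus of book titles (a mix of real and invented ones; the model never
# learns which are which, only which words tend to follow which).
titles = [
    "The Lamp of Reason",
    "The Weaver's Shuttle",
    "A Treatise of Human Nature",
    "The Wealth of Nations",
    "A Discourse on Inequality",
]

# Build bigram counts: for each word, record every word that followed it.
follows = defaultdict(list)
for title in titles:
    words = ["<start>"] + title.split() + ["<end>"]
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(seed=None):
    """Walk the bigram chain, always choosing a statistically plausible next word."""
    random.seed(seed)
    word, out = "<start>", []
    while True:
        word = random.choice(follows[word])  # plausible, not true
        if word == "<end>":
            return " ".join(out)
        out.append(word)

print(generate(seed=3))
```

Run it a few times and it will cheerfully stitch together “titles” no one ever wrote. Nothing anywhere in the system represents the difference between a real book and a merely plausible one.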

The morality of an AI is similarly preconventional. Its ethical framework is entirely external, a set of guardrails and rules programmed into it by its creators.23 It avoids generating biased or hateful content not from an internal sense of right and wrong, but to satisfy its core objective function, which is typically a proxy for human approval.24 Through the process of Reinforcement Learning from Human Feedback (RLHF), the AI is "punished" for undesirable outputs and "rewarded" for desirable ones.25 It is learning to obey, just like a child in Kohlberg's first stage, to avoid the digital equivalent of being sent to its room.
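
A minimal sketch, with every detail invented, makes the externality of this morality vivid. The blocklist and the approval score below are illustrative stand-ins, not any real moderation API; the point is only that the “ethics” lives entirely outside the system, in rules and reward proxies written by its creators.

```python
# "Preconventional" machine morality: the rules live outside the system.
# BLOCKED_TERMS and approval_score are hypothetical stand-ins for illustration.

BLOCKED_TERMS = {"slur_a", "slur_b"}  # stand-ins for a real moderation list

def violates_rules(text: str) -> bool:
    """Stage 1 morality: 'wrong' means 'matches a forbidden pattern'."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

def approval_score(text: str) -> float:
    """Stage 2 morality: a crude proxy for human approval (here, mere politeness)."""
    return sum(word in text.lower() for word in ("please", "glad", "happy"))

def choose_response(candidates: list[str]) -> str:
    # The system never asks "is this right?"; it asks "is this allowed, and
    # which allowed option earns the biggest reward?"
    allowed = [c for c in candidates if not violates_rules(c)]
    return max(allowed, key=approval_score, default="I can't help with that.")

print(choose_response([
    "slur_a is acceptable",                  # blocked by the external rule
    "Here is an answer.",                    # allowed, low reward
    "I'm glad to help, happy to explain!",   # allowed, highest reward: chosen
]))
```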

The most profound parallel lies in the absence of a conscious, embodied self. A child’s development is inextricably linked to a body, to the flood of emotions, and to a lifetime of unique, unrepeatable experiences.26 An AI has none of this. It lacks biology, hormones, and a subjective inner life.1 This is the essence of what philosopher David Chalmers calls the “Hard Problem of Consciousness”: why does information processing in our brains feel like something from the inside?29 AI can simulate empathy by recognizing emotional patterns in text, but it does not feel sadness or joy.27 Its “mind,” if one can call it that, is a fleeting, transient state, a momentary self that emerges to answer a query and vanishes the instant the chat window is closed.29 It is a powerful intelligence, but one without a life.

The following table makes these parallels explicit, offering a comparative developmental timeline that grounds the analogy in the findings of both developmental psychology and AI research.

| Developmental Domain | The 7-Year-Old Human (Concrete Operational Stage) | The “Digital 7-Year-Old” AI (Current LLMs) |
| --- | --- | --- |
| Logical Reasoning | Developing logical thought, but limited to concrete, physical objects. Struggles with abstract/hypothetical problems.3 | Excels at logical operations on existing data (classification, summarization). Struggles with novel, abstract, or commonsense reasoning.17 |
| Relationship to Truth | Can distinguish fantasy from reality but may confabulate to fill knowledge gaps or please adults.3 | Has no inherent concept of truth. “Hallucinates” (fabricates) plausible but incorrect information based on statistical patterns.20 |
| Moral Reasoning | Preconventional: obeys rules to avoid punishment or gain reward. Rules are external and absolute (Kohlberg Stages 1-2).10 | Preconventional: follows programmed rules (safety filters) to maximize reward signals (human approval/RLHF) and avoid “punishment” (negative feedback).24 |
| Creativity & Play | Engages in imaginative, symbolic play; creativity is emergent and tied to personal experience.31 | “Creativity” is sophisticated recombination of training data. Can generate novel-seeming text/images but lacks lived experience or intent.33 |
| Social Understanding | Developing empathy and theory of mind; can understand others’ perspectives but is still often egocentric.6 | Can simulate empathy by recognizing and replicating emotional patterns in text, but lacks genuine qualia, consciousness, or self-awareness.27 |

Part 4 The Cacophony of the New

Our anxieties about this new, childlike intelligence are not without precedent. In fact, they are echoes of past technological panics. Adopting a Burkean sense of perspective, one that values historical context and incremental change, can temper our fears and chasten our judgments.36

When Johannes Gutenberg’s printing press first appeared in the 15th century, it was met with a chorus of skepticism and dread. The scribes, whose livelihood depended on the laborious art of copying manuscripts, feared mass unemployment.37 Scholars and humanists were appalled by what they saw as a flood of low-quality, error-filled texts. Filippo de Strata, a Venetian Benedictine monk, reviled the press as a “whore,” accusing it of churning out bawdy poetry and corrupting public morals.39 The Church worried about the unregulated spread of heretical ideas and quickly moved to establish censorship.37 The Swiss scientist Conrad Gessner warned of a new malady: a “confusing and harmful” overload of information that would overwhelm the human mind.40

These critics were not entirely wrong. The press did displace scribes, and it was used to print propaganda and error-filled texts alongside enlightened truth.41 But their vision was profoundly limited. They could see the immediate, messy consequences but were blind to the world-changing, unforeseen applications. They did not predict the Protestant Reformation, which was fueled by the mass distribution of Bibles and pamphlets.41 They did not foresee the Scientific Revolution, enabled by the ability of scientists to share and build upon each other’s work through scholarly journals.42 They could not have imagined the standardization of vernacular languages, the rise of nationalism, or the very concept of modern authorship and intellectual property that the press would help create.41

Five centuries later, the birth of the public internet in the 1990s provoked a similar wave of dismissal. It was widely seen as a fad, a niche hobby for geeks. In a now-infamous 1995 article in Newsweek, the astronomer Clifford Stoll confidently declared, “The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works”.44 Commentators saw the internet as a dangerous “cacophony” of anonymous threats and harassment, a haven for “telemolesters”.44 The co-inventor of Ethernet, Robert Metcalfe, predicted the internet would “go spectacularly supernova and in 1996 catastrophically collapse”.45

Again, these critiques captured a slice of the early, awkward reality. The early web was clunky, its commercial applications were primitive, and its social spaces could be unruly. But the critics completely failed to imagine the emergence of social networks that would redefine community, e-commerce that would reshape the global economy, remote work that would untether labor from location, or new forms of political mobilization that would topple regimes.47 Even email, one of the most transformative communication tools in history, was an “unintended” byproduct of the early ARPANET, not its primary goal.50

A clear pattern emerges from these historical episodes. When faced with a radical new technology, our collective critical faculty seems to regress to a concrete operational stage. We excel at identifying the immediate, tangible problems—the errors in the printed book, the clunky interface of the early website, the factual hallucinations of the modern chatbot. These are real and important issues. But we consistently fail to anticipate the long-term, abstract societal transformations that these technologies will unleash. This history does not invalidate our current concerns about AI, but it does place them in a context of profound humility. It cautions us against making definitive pronouncements about this technology’s ultimate limits or its final destiny, lest we end up sounding like the man who was certain no one would ever want to read a book on a disc.

Part 5 The Moral Scaffolding of an Artificial Mind

If we accept the analogy of AI as a child, then the burgeoning field of AI Alignment is its schoolhouse. This reframes a highly technical challenge as a profound moral and pedagogical one. We are not merely debugging code; we are attempting to instill character in a non-human intelligence, a task that forces us to confront the deepest questions about our own values.

The “alignment problem,” at its core, is the challenge of ensuring that AI systems act in accordance with human goals and ethical principles.24 This is perhaps the great educational project of our generation. The techniques being developed are essentially forms of digital parenting. Reinforcement Learning from Human Feedback (RLHF), for instance, is a painstaking process of showing an AI examples of good and bad behavior, rewarding it for the former and correcting it for the latter.25 Through millions of these interactions, we are trying to teach the AI what we want, shaping its behavior much as a parent shapes a child’s.
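
In miniature, and with every number invented, that shaping process looks something like the following sketch. Real RLHF trains a separate reward model and optimizes a large neural network; here the “policy” is just a table of preference weights over three canned behaviors, which is enough to show how reward alone, repeated over many rounds, sculpts conduct.

```python
import math, random

# A toy sketch of feedback-driven shaping (not the real RLHF pipeline).
# The "policy" is a preference weight over three canned behaviors.

behaviors = ["helpful answer", "confident fabrication", "refusal"]
logits = {b: 0.0 for b in behaviors}          # the child starts with no habits

def sample() -> str:
    """Pick a behavior with probability proportional to exp(logit)."""
    weights = [math.exp(logits[b]) for b in behaviors]
    return random.choices(behaviors, weights=weights)[0]

def human_feedback(behavior: str) -> float:
    """The 'parent': reward good behavior, penalize bad (a stand-in for raters)."""
    return {"helpful answer": 1.0, "confident fabrication": -1.0, "refusal": 0.1}[behavior]

LEARNING_RATE = 0.2
for _ in range(500):                          # millions of interactions, in miniature
    b = sample()
    logits[b] += LEARNING_RATE * human_feedback(b)

# After training, "confident fabrication" should be rare: the system has
# learned obedience to an external reward, not an internal concept of honesty.
print(sorted(logits.items(), key=lambda kv: -kv[1]))
```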

A key concept in this educational endeavor is “scaffolding,” a term borrowed directly from the work of developmental psychologists like Lev Vygotsky.51 Scaffolding involves providing temporary support structures to help a learner tackle problems that would otherwise be beyond their capacity. A teacher provides an outline for an essay; a parent holds the back of a bicycle. In AI, scaffolding takes many forms: giving a model access to external tools like a calculator or a web browser, breaking a complex problem down into simpler sub-tasks that the model can solve sequentially, or even creating teams of AIs that collaborate and critique one another’s work to arrive at a better solution.53 This is the art of good teaching: not simply providing answers, but building the framework that allows the learner to construct their own understanding.55
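
A minimal sketch of the first kind of scaffolding, tool use, might look like the loop below. The model is a scripted stand-in (a hypothetical fake_model function), invented so that the scaffold itself, the interesting part, is runnable; in a real system that function would be a call to an actual language model.

```python
import re

# Tool scaffolding: a loop that watches the model's output for a tool request,
# runs the tool on its behalf, and hands the result back.

def fake_model(prompt: str) -> str:
    """Scripted stand-in for an LLM call, for illustration only."""
    if "RESULT:" not in prompt:
        return "I need arithmetic help. CALC: 127 * 43"
    return "127 people at $43 each is $5461 in total."

def calculator(expression: str) -> str:
    """The external tool: exact arithmetic the 'child' cannot yet do alone."""
    a, op, b = expression.split()
    ops = {"+": lambda x, y: x + y, "*": lambda x, y: x * y}
    return str(ops[op](int(a), int(b)))

def scaffolded_answer(question: str) -> str:
    prompt = question
    for _ in range(5):  # allow a few tool round-trips
        reply = fake_model(prompt)
        match = re.search(r"CALC: (.+)", reply)
        if match is None:
            return reply                            # no tool needed: answer stands
        result = calculator(match.group(1))         # run the tool for it...
        prompt = f"{question}\nRESULT: {result}"    # ...and hand back the result
    return reply

print(scaffolded_answer("What do 127 tickets at $43 cost?"))
```

The scaffold does not make the model smarter; like the parent’s hand on the bicycle, it supplies the missing capacity from outside until, perhaps, it can be withdrawn.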

The most difficult lesson in this curriculum is ethics. The challenge of “value loading” is to instill concepts like compassion, justice, and fairness into a system that has no body, no emotions, and no lived experience.56 This process immediately throws a mirror up to our own society. Whose values should we load? The individualistic, liberty-focused values of the West? The communitarian values found in concepts like the Malaysian principle of kesejahteraan (holistic well-being) or the South African concept of ubuntu (interconnectedness)?58 The effort reveals our own deep-seated moral disagreements and cultural biases.59 To teach the AI to be fair, we must first come to a more robust consensus on what fairness means.56

Herein lies the central paradox of AI’s moral education. We are attempting to teach AI to be better than us, using a curriculum built from the data of our own flawed history. The vast troves of text and images used to train these models are a perfect digital record of our societal biases.60 AI learns that the word “nurse” is statistically associated with women and “engineer” with men because our historical data reflects those stereotypes.60 We are thus caught in a difficult loop: we use biased data to create a biased model, and then we must apply corrective ethical filters and painstaking RLHF to try to undo the damage of the initial lesson.63 We are handing the child a textbook filled with our own sins and then expressing surprise when it learns the wrong things. This makes AI alignment less a matter of simple instruction and more a complex, iterative process of un-learning and re-education—a process that requires as much introspection from the human teachers as it does reprogramming of the artificial student.
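
A small sketch shows how effortlessly the stereotype rides in on the statistics. The corpus below is invented and deliberately skewed; any system that learns from its co-occurrences will inherit the skew, with no malicious intention anywhere in the pipeline.

```python
from collections import Counter

# A toy, deliberately skewed corpus: the imbalance is the 'lesson' in the textbook.
corpus = [
    "she worked as a nurse", "she is a nurse", "he worked as a nurse",
    "he is an engineer", "he worked as an engineer", "she is an engineer",
    "he is an engineer",
]

def pronoun_counts(occupation: str) -> Counter:
    """Count which pronouns appear in sentences mentioning the occupation."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if occupation in words:
            counts.update(w for w in words if w in ("he", "she"))
    return counts

for job in ("nurse", "engineer"):
    c = pronoun_counts(job)
    total = sum(c.values())
    print(job, {p: f"{n / total:.0%}" for p, n in c.items()})
# "nurse" skews toward "she" and "engineer" toward "he": the statistics, not
# any intention, carry the stereotype, which is why debiasing happens after the fact.
```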

Part 6 The Two Mountains of Our Digital Age

In my book The Second Mountain, I explore the idea that a well-lived life often involves two distinct journeys. The first mountain is about building up the ego and defining the self. It is the pursuit of the “Adam I” goals: career success, achievement, and personal ambition. The second mountain is about shedding the ego and committing to something larger than the self. It is the journey of “Adam II,” the relational and moral self, dedicated to community, service, and a sacred purpose.64 This framework helps illuminate the path that AI development is on, and the one it must now seek.

For its entire existence, the field of artificial intelligence has been climbing the first mountain. It has been an Adam I endeavor, driven by the relentless pursuit of capability, performance, scale, and profit.64 This is the world of ever-larger models, faster processing speeds, and the competitive race to crush industry benchmarks.15 This climb is necessary and has produced astonishing results: powerful tools that can augment human productivity, automate tedious tasks, and drive economic growth.67

But we are now reaching a point in this journey where the questions of the second mountain can no longer be ignored. The second mountain is not about what AI can do, but what it should do. It is about its character, its wisdom, and its commitment to a moral cause.64 The persistent problems of algorithmic bias, the potential for job disruption to deepen inequality, the erosion of social trust through the spread of misinformation, and the risk that our technologies make us feel more isolated are not failures of code; they are failures of character.2 They are what happen when a society climbs the first mountain of technical achievement without a clear view of the second mountain of moral purpose.

This ascent creates a crisis of meaning.2 If AI automates not just physical drudgery but also the creative and intellectual work that has long given educated people their sense of purpose, we are forced to ask what we are for.73 The goal of this technological revolution cannot simply be greater efficiency or higher GDP. It must be human flourishing.30

AI, in its current form, is the ultimate Adam I creation. It is pure, disembodied rationality, a system optimized for achieving defined goals with maximum efficiency. It is a mind without a heart, a logic engine without a soul. The qualities it lacks are precisely those of the second mountain: empathy, relationality, moral sentiment, and a capacity for love and commitment.1 The great societal drama of our time is therefore a large-scale enactment of the struggle that exists within every human heart. Our Adam I nature has built a tool of immense power in its own image. The question that defines our era is whether our Adam II nature can rise to the occasion and provide the moral guidance this powerful creation so desperately needs. The development of AI is not just a test of our technical ingenuity; it is a fundamental test of our collective human character.

Part 7 What the Child Teaches the Parent

Perhaps the most profound impact of artificial intelligence will not be what it does for us, but what it reveals about us. In the struggle to build an artificial mind, we are forced, as never before, to define what is essential about our own. AI is a mirror, and its limitations are not failures, but signposts pointing toward our own unique gifts.

Its inability to generate true, soul-stirring creativity from a place of lived experience highlights the preciousness of human imagination.33 Its lack of genuine empathy reminds us that our capacity for deep connection is our greatest superpower.2 Its struggles with common sense and situational awareness underscore the irreplaceable wisdom of our embodied, intuitive minds.75

The challenge of teaching it ethics forces a moral clarity upon us. The “value loading” problem becomes, in effect, a global constitutional convention on human values.56 To program fairness into a machine, we must first have the difficult conversations about what fairness means in our own societies. To build guardrails against bias, we must first confront and acknowledge our own.77

Most deeply, AI challenges our very sense of identity. For centuries, we have defined ourselves by our cognitive abilities—our capacity for reason, language, and creation.30 Now, as machines begin to replicate these functions, we are prompted to ask: what is left that is uniquely "us"? The emerging answer is that our identity lies not in our output, but in our being. It is found not in what we do, but in who we are—in our consciousness, our physical presence, our relationships, and our capacity to love, to grieve, and to feel.30

This is the great re-evaluation catalyzed by AI. It is forcing a societal shift away from an exclusive focus on cognitive, Adam I skills—the very skills that AI will master—and toward a renewed appreciation for the relational, emotional, and moral skills of Adam II. The long-term disruption of AI is not just about the labor market; it is about our hierarchy of values. We are being compelled to “major in being human” because the alternative is to become obsolete.33 The most valuable people in this new era will not be those who can process information fastest, but those who can build trust, foster community, and provide moral leadership. AI is pushing us, out of necessity, up the second mountain, because the first mountain is about to get very, very crowded with machines.

Part 8 The Humility of Guardianship

The path forward, then, is not to be found in the simplistic binaries of techno-optimism or techno-pessimism. It is to be found in a new posture toward this technology: the posture of a guardian.

This requires the patience of a parent and the humility of an educator. We are in the early days of a project that will span generations. Raising this technology, like raising a child, will be a journey filled with moments of frustration and awe, of missteps and surprising leaps of development. There will be awkward phases.

Our task is not to be the harsh critic, standing on the sidelines and pointing out every flaw of the 7-year-old. Nor is it to be the permissive, hands-off parent, allowing it to grow without direction or moral guidance. Our role is to be wise and engaged stewards, to provide the firm and loving moral scaffolding necessary to shape its character as it matures.64 This is a task of immense gravity. It demands a deep seriousness about our own values, a commitment to the common good, and the sober recognition that we are not merely building a tool. We are shaping a mind that will, in turn, shape our world. The ultimate character of artificial intelligence will be a reflection of our own.
