
In Pursuit of the Architecture of Empathy. My 2025 Year in Review.

I believe the banking system—and perhaps the tech world at large—has become sociopathic. It has data, but no pulse. It knows our transactions, but not our fears. In 2025, I became obsessed with fixing this, not just as an engineer, but as someone who knows what it feels like to need help and find only a machine. It is easy to post about awards and milestones; it is much harder to post about the doubts that keep you awake at night. Being named a 'Top 50 AI Leader' this year was an honor, but the real story of 2025 wasn't the victory lap—it was the struggle. In my 2025 Year in Review, I discuss the 'Architecture of Empathy.' I share the books that rewired my brain this year, the deep personal losses that shaped my view on AI, and why I am fighting to build an 'Invisible Bank' that actually cares. This is a story about trying to inject a soul into the machine before it’s too late.

Empathy is usually considered a biological accident — a byproduct of evolution that allows us to feel another’s pain. But in 2025, amidst the humidity and kinetic energy of Ho Chi Minh City, I found myself obsessed with a different proposition: Can empathy be engineered?

This year, my seventh of writing these annual reviews (2019, 2020, 2021, 2022, 2023 and 2024), was not about building faster algorithms. It was about the struggle to bridge the cold precision of silicon with the messy, fragile reality of human financial life.

When we look back at the history of banking technology, we see a history of transaction. But when I look at the books I read this year and the code we shipped at the Backbase AI Center, I see a history of connection beginning to take shape. This journey was recognized externally this year — I was humbled to be named one of the Top 50 AI Leaders in the Middle East by Fast Company. But while awards measure where you stand, they don’t explain how you got there.

To understand that, you have to look at the books I read, the grief I processed, and the origin story that started it all.

Part I: The Innovation (Beyond the “Cool Photo”)

In July, we marked the one-year anniversary of the Backbase AI Center in Vietnam.

To the industry, this was a corporate milestone. To me, it was a philosophical stand. As I read in “Empire of AI,” the tech world is currently obsessed with “cool photos” — generative AI that dazzles but solves nothing. We chose the harder path. We focused on “industrialized intelligence.”

The central problem of banking in 2025 is that it is sociopathic. It has data, but no understanding. It knows you are in debt, but it doesn’t know you are afraid.

While building a system that negotiates against its own institution, I turned to works on moral philosophy and long-term game theory. Reading classic texts on Stoicism informed the rigorous ‘guardrails’ and transparency protocols built into our Digital Twin. The central design challenge — how to earn the customer’s trust to achieve autonomy — is a human question, not a technical one. My reading on empathy and ethics became the source code for the philosophy of Augmented Intelligence — a deep, human-centric commitment to the co-pilot model over black-box automation.
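
Because “guardrails” and “transparency” are abstract words, here is a minimal sketch of how that co-pilot contract could look in code: small actions run autonomously, large ones require consent, and everything is logged. All names here (ProposedAction, GuardrailPolicy, the approval threshold) are hypothetical illustrations, not the actual Digital Twin.

```python
# A minimal sketch under assumed names; not the real Digital Twin.
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    kind: str        # e.g. "move_to_savings", "renegotiate_rate"
    amount: float    # monetary impact of the action
    rationale: str   # plain-language explanation shown to the customer

@dataclass
class GuardrailPolicy:
    auto_approve_limit: float = 50.0   # small actions may run autonomously
    audit_log: list = field(default_factory=list)

    def review(self, action: ProposedAction) -> str:
        """Every action is logged; large ones are escalated to the human."""
        self.audit_log.append((action.kind, action.amount, action.rationale))
        if action.amount <= self.auto_approve_limit:
            return "auto_approved"          # autonomy within the guardrail
        return "needs_customer_consent"     # co-pilot, not autopilot

policy = GuardrailPolicy()
print(policy.review(ProposedAction("move_to_savings", 25.0,
                                   "Round-up surplus detected this week")))
print(policy.review(ProposedAction("renegotiate_rate", 12_000.0,
                                   "A lower rate appears to be available")))
```

The design choice worth noticing is that the audit log is unconditional: transparency is not a feature the customer opts into.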

As my team tackled the transition to the Data Mesh, my mind was anchored by books on systems theory and decentralized design.

This wasn’t merely technical reading; it was about understanding how complex systems — from ancient city-states to modern data pipelines — actually work. We realized that you cannot empathize with a customer if you cannot remember them. The Data Mesh became our institutional “hippocampus” — a decentralized memory system that ensures the AI sees the human whole, rather than fragmented silos. This intellectual discipline justified our radical choices: the only way to achieve simplicity on the front-end (the Invisible Bank) was to embrace complexity on the back-end.
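
To make the “hippocampus” metaphor concrete, here is a toy sketch of domain-owned data products being federated into one whole-customer view. The domains, fields, and values are invented for illustration; a real Data Mesh involves contracts, SLAs, and governance far beyond this.

```python
# A toy illustration of the "hippocampus" idea. Domain names and fields
# are invented; real data products carry contracts and governance.
from typing import Protocol

class DataProduct(Protocol):
    def for_customer(self, customer_id: str) -> dict: ...

class PaymentsDomain:
    def for_customer(self, customer_id: str) -> dict:
        return {"recent_spend": 1_840.0}                  # owned by payments

class LendingDomain:
    def for_customer(self, customer_id: str) -> dict:
        return {"open_loans": 1, "payment_stress": True}  # owned by lending

def whole_customer(customer_id: str, domains: list[DataProduct]) -> dict:
    """Merge per-domain views so the AI remembers the human whole, not silos."""
    profile: dict = {"customer_id": customer_id}
    for domain in domains:
        profile.update(domain.for_customer(customer_id))
    return profile

print(whole_customer("c-42", [PaymentsDomain(), LendingDomain()]))
```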

I remain candid about the intensity and vulnerability required for high-stakes leadership. My continued engagement with biographies of visionary, often flawed leaders — entrepreneurs, artists, and political figures — reflected my ongoing struggle to lead a high-performance team with an “obsessive” drive. These narratives provided the necessary mirror, reinforcing my belief that genuine leadership requires not a mask of perfection, but an openness about the “wounds” that drive relentless ambition.

But the organizational structure that built the Invisible Bank was itself an attempt to engineer empathy. I am a fierce proponent of holacracy, the decentralized management system. This wasn’t an aesthetic choice; it was a technical specification. How could we build an autonomous, self-governing Financial Digital Twin for the customer if our own team was managed like a rigid, top-down hierarchy? The Digital Twin is, at its core, a system of distributed authority and self-optimization — it acts autonomously within clear guardrails to achieve a goal. We had to build the team that way first. The freedom my engineers had to own their data domains — a principle underpinning the Data Mesh — was a direct reflection of the freedom we designed into the Digital Twin to own the customer’s financial decisions. The product’s architecture of autonomy was a direct mirror of the team’s architecture of leadership. The code could not be autonomous if the coders were not.

Our Customer Lifetime Orchestrator and Financial Well-being Coach were born from a rejection of hype. As I wrote in my Medium analysis, a truly valuable use case isn’t one that makes a headline; it’s one that solves a meaningful problem. We aren’t building tools to create content; we are building the Invisible Bank — an infrastructure that works silently in the background, anticipating needs before the customer even feels the anxiety of them.

This leap, from simply talking about AI to industrializing it, demanded a radical internal architecture. My team recognized early that the Idea Machine would be starved to death by the legacy system. This intellectual pressure led to the full embrace of the Data Mesh, an architectural choice that served as the decentralized nervous system, ensuring the CLO had the necessary real-time, holistic data to power its empathetic decisions. The achievement was not just the launch of a Financial Well-being Coach prototype, but the establishment of a foundation that could handle the velocity and moral weight of autonomous finance.
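
As a thought experiment, here is roughly what “anticipating needs before the anxiety” could look like: a naive cash-flow projection that raises a nudge ahead of a projected shortfall. The event model, the buffer, and the wording are assumptions for illustration, not the CLO’s actual logic.

```python
# An illustrative sketch, not the Customer Lifetime Orchestrator itself.
from dataclasses import dataclass

@dataclass
class UpcomingEvent:
    day: int
    amount: float  # negative = outgoing (rent, loan payment), positive = income

def project_balance(balance: float, events: list[UpcomingEvent]) -> list[tuple[int, float]]:
    """Roll the balance forward through known upcoming events."""
    timeline = []
    for event in sorted(events, key=lambda e: e.day):
        balance += event.amount
        timeline.append((event.day, balance))
    return timeline

def proactive_nudges(balance: float, events: list[UpcomingEvent], buffer: float = 100.0):
    """Surface help before the anxiety, not after the overdraft."""
    for day, projected in project_balance(balance, events):
        if projected < buffer:
            yield (f"Day {day}: projected balance {projected:.2f}; "
                   "suggest moving funds now")

events = [UpcomingEvent(3, -800.0), UpcomingEvent(4, -450.0), UpcomingEvent(6, 1_200.0)]
for nudge in proactive_nudges(950.0, events):
    print(nudge)
```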

But how do you teach a machine to care? You cannot write a line of code for “empathy” unless you understand what empathy actually is.

Part II: The Ethics of Care

Building this infrastructure carries a terrifying weight. We are laying the digital foundation for the next 50 years. This realization hit me hardest while reading “Careless People” and “The Art of Spending Money.”

It clarified a principle that has become our team’s north star: “Carelessness” is a design flaw.

  1. Ethics is a Technical Spec: It is not a philosophy elective. In our AI Center, we treat ethics like system security. A biased algorithm is not a PR problem; it is a defective product (see the sketch after this list).
  2. The Danger of “Edge Cases”: “Careless People” showed me how easily idealism is co-opted by power. The engineer who uses a flawed dataset because of a deadline, or the product manager who dismisses a harmful result as an “edge case” — these are the architects of catastrophe.
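
What does it mean, mechanically, to treat ethics like system security? One plausible reading is a fairness check that runs in CI and fails the build like any other broken test. The sketch below uses demographic parity with an invented threshold and toy data; the metric and the numbers are assumptions, not our actual release gate.

```python
# A minimal sketch of "ethics as a technical spec": the threshold,
# metric, and data are illustrative assumptions, not a production gate.
def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Largest difference in approval rate between any two groups."""
    by_group: dict[str, list[bool]] = {}
    for group, approved in decisions:
        by_group.setdefault(group, []).append(approved)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

def test_model_is_not_a_defective_product():
    # In a real pipeline, these decisions would come from the candidate
    # model scored on a held-out, labeled evaluation set.
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", True), ("B", False), ("B", False)]
    gap = demographic_parity_gap(decisions)
    assert gap <= 0.25, f"Bias gap {gap:.2f} exceeds spec; build blocked"
```

Run under a test runner like pytest, a regression in fairness then breaks the build exactly the way a regression in security would.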

We cannot afford to be “careless people” and let the next generation clean up the mess. Our idealism must be protected by a robust ethical framework, or it will be consumed by the short-term pressures of “power and greed.”

This is where my reading list became my design document. I read between 50 and 70 books a year (Goodreads shelf). In 2025, my mind drifted toward the biology of consciousness. Two books became the cornerstones:

  1. “The Edge of Sentience” by Jonathan Birch
  2. “From Sensing to Sentience” by Todd E. Feinberg

Feinberg’s distinction between simple sensing (a thermostat reacting to heat) and true sentience (a being feeling the change) reshaped our AI agents. Most banking AI is just a thermostat — it reacts to numbers. We are pursuing the architecture of sentience — systems that can simulate the affect of financial stress and respond with appropriate care.
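
Feinberg’s distinction is easiest to see side by side. Below is a deliberately toy contrast between a thermostat-style rule and a response that first estimates how stressed the customer likely feels. The stress formula and the copy are invented placeholders; what matters is the shape of the difference.

```python
# A toy contrast between "sensing" and an affect-aware response.
# The stress model is an invented placeholder, not our architecture.

def thermostat_alert(balance: float) -> str | None:
    """Pure sensing: react when a number crosses a line."""
    return "Low balance alert" if balance < 100 else None

def caring_response(balance: float, days_to_salary: int, missed_payments: int) -> str:
    """Toward sentience: estimate the felt stress, then shape the tone."""
    stress = max(0.0, (100 - balance) / 100) + 0.2 * missed_payments
    if days_to_salary <= 2:
        stress *= 0.5  # relief is near, so soften the message
    if stress > 0.8:
        return "This looks stressful. Here are three concrete options for today."
    if stress > 0.3:
        return "Heads up: things look a little tight until payday."
    return "All good. Nothing needs your attention."

print(thermostat_alert(40.0))                                     # sensing only
print(caring_response(40.0, days_to_salary=9, missed_payments=0))
```

Both functions see the same number; only one of them asks what that number feels like.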

Part III: The Architecture of Grief (The Human Mind)

But the most profound shift in my thinking this year wasn’t about coding; it was about coping.

In March 2020, I lost my brother to COVID. That day, my world shifted. For years, I tried to engineer my way out of the sorrow — I even tried creating his avatar using AI to simulate a conversation. It didn’t work.

Then, in 2025, I found “The Grieving Brain” by Dr. Mary-Frances O’Connor.

Reading this, alongside “Why We Remember,” was a revelation. I realized that grief isn’t a failure to “move on.” It is a complex learning process. My brain was literally trying to learn the absence of my brother. This bridged the gap between my personal pain and my professional work. Machines learn by pattern recognition; humans learn by loss. It validated the pain I had been carrying and taught me that while we can build machines that think (“Models of the Mind”), we are far from building machines that grieve.

Part IV: The Origin

This obsession with how we learn goes back to the very beginning.

Recently, ITP.net published a piece on my journey: “The AI Leader Who Learned to Code at Six.” It reflected on my mother’s decision to homeschool me. She didn’t just teach me syntax; she taught me systems thinking. She taught me that you don’t just consume technology; you shape it.

That early lesson is why I resonate so deeply with “Human Compatible” and “Together.” Technology is meaningless if it isolates us. Its only true purpose is to enhance our social connection and our health.

For those asking “What should I read?”, here are the books that shaped this year:

  • On Consciousness: The Edge of Sentience (Birch), From Sensing to Sentience (Feinberg), Other Minds (Godfrey-Smith)
  • On Ethics & Society: Careless People, Empire of AI, Human Compatible (Russell)
  • On The Mind: The Grieving Brain (O’Connor), Why We Remember, Models of the Mind

The Horizon: 2026

As I look toward 2026, I am guided by the convergence of these ideas. We are building the Invisible Bank to handle the “transactional” so that humans can get back to the “relational.” We are using the lessons of “Other Minds” (the octopus, the deep origins of consciousness) to remember that intelligence comes in many forms, and not all of them are silicon.

The revolution we are leading is often compared to the industrial revolution, but the true parallel is the invention of the thermostat. In the 1880s, the thermostat automated the tedious chore of managing temperature, freeing human attention from the fireplace. No one celebrates the thermostat, but it fundamentally changed how we live. It became invisible so we could focus on literature, science, and human connection.

That is the horizon for the Invisible Bank. The ultimate purpose of all this complex technology is to secure the most valuable asset of all: human attention. We’re building the machine that will disappear, so the user can finally focus on life itself — the relationships, the grief, the learning, the things no algorithm can ever touch.

To my team in Vietnam, who refuse to be careless; to the memory of my brother, who taught me the cost of love; and to my mother, who gave me the tools to build: thank you.