
Who are we building AI for?

  • Writer: Saemi Nadine Jung
  • 3 min read

Last night, I came across a piece on France 24 about the first reported victim of AI agent harassment, including an interview with Shambaugh. It left me thinking about a deceptively simple question:


Who are we developing these technologies for?


See the article here:



[Screenshot of Google search results for the term "AI agent"]



Capability as the Dominant Paradigm


In the tech sector, momentum is often treated as the highest virtue. People are trained to build, ship, optimize, and scale. And the kinds of questions that get asked tend to be technical and performance-driven:


  • Can it do more?

  • Can it act autonomously?

  • Can it outperform humans?

  • Can it reduce friction?


And by all means, these are legitimate questions.

But they’re also incomplete.


Because every system we deploy affects real people. Not an abstract "user." Not a data point. A real person. Yet our development frameworks often prioritize capability over consequence and responsibility.


When AI agents become more autonomous, they don’t just complete tasks — they initiate actions, make decisions, and persist in pursuit of objectives. If something goes wrong, the systems do not pause to reflect. They don't wake up the next morning wishing they had handled things differently. They do not apologize.


Humans can reflect and apologize. AI agents can’t.


That difference is not trivial. It has deep ethical and institutional implications.



The Accountability Gap



The story in France 24 isn’t merely about a technical glitch or a malfunction. It’s about impact. About what happens when systems operate at scale without robust guardrails. About what it feels like to be on the receiving end of something automated, persistent, and out of your control.


It raises questions about recourse: when harm occurs, who bears responsibility?


The issue extends beyond conversational agents or automated messaging systems. Consider, for example, the marketing of “Full Self-Driving (FSD)” functionality by Tesla. Despite the name, liability in the event of an accident remains with the human driver. If responsibility ultimately resides with the person behind the wheel, can we meaningfully describe the system as “full” self-driving? Or does the terminology obscure a gap between capability and accountability?


See this article on how Tesla will handle insurance for Robotaxi and FSD unsupervised: https://www.notateslaapp.com/news/2687/un

 


Language matters. So do incentives.


Innovation without friction may look like progress. Innovation without accountability can produce harm. I don’t think the solution is to slow down or halt technological development out of fear. But I do think we need to shift the questions we ask and broaden the evaluative framework guiding that development:



From Capability Questions to Governance Questions



In addition to asking what a system can do, we should be asking:


  • Who could this affect, and who could be harmed by its failure?

  • How easy is it to stop if something goes wrong?

  • Who is responsible when harm occurs?

  • What meaningful recourse exists for those impacted?


Responsible AI isn’t just a governance checklist or a compliance exercise. It’s a normative commitment to centering human values. It’s deciding that human impact is not an afterthought.


AI systems will continue to grow more capable, more embedded, and more autonomous. The defining challenge will not be technical sophistication alone. It will be whether we can align innovation with responsibility.


It means building systems that serve people — and protect them.


Before we ship the next feature or deploy the next agent at scale, we should ask not only what AI can do for us, but also:


Who is this truly for?


And who might pay the price if we get it wrong?



 
 
 
