Researcher: Rosita Dita Bergman
Affiliation: Ambient Logic Research Lab
Publication: Working Paper Series | Vol. 1, No. 9
Classification: Urban Governance (G); Social Sustainability (S); Human-Computer Interaction (HCI)
Smart cities are no longer a "science experiment": they are becoming the foundation of everyday urban life. Yet as cities automate, a structural problem has emerged: most "smart" systems are optimized for efficiency, not fairness. The result is a world built for the "Standardized Citizen" (tech-savvy, moving at a prescribed pace) while everyone else is left behind.
This paper introduces the Inclusion Algorithm: a design approach that uses Agentic AI (AI that can sense and adapt in real time) to support everyone, including the elderly, people with disabilities, and those who do not own a smartphone.
We are moving toward "Post-Anthropocentric" design: building cities that adapt to us rather than forcing us to behave like machines. The goal is to ensure that the "Digital Twin" of the city respects the rights of every citizen, so that technology supports human diversity instead of enforcing a single, standardized way of living.
In 2026, the "Smart City" is no longer a futuristic concept but a functional reality. However, the operationalization of these cities has largely relied on Geometric Ghosts—models that track physical assets (pipes, wires, traffic) while remaining biologically silent (Bergman, 2025).
The central problem of 2026 urbanism is the Standardized Citizen Bias. Most urban AI models are trained on datasets derived from "digitally active" populations. This creates a feedback loop where the city optimizes for those it can "see," effectively rendering the elderly, the marginalized, and the non-digital citizen invisible to the city’s automated services.
"We are building the most powerful persuasion tools in human history... the question is: who trains the trainer?"
— Tristan Harris
2.1 The Trust Gap in Autonomous Governance
Thaichon et al. (2025) identify a widening "Trust Gap" in virtual and augmented urban environments. As AI systems move from reactive assistants to Agentic Managers (managing energy grids and the distribution of social services), the lack of "Affective Transparency" creates a friction point: if citizens do not understand the logic behind an autonomous nudge, they reject the system, regardless of its efficiency.
2.2 The Quiet Arrival of AGI and Agency Erosion
Yudkowsky (2024) and Kokotajlo (2025) argue that the arrival of Artificial General Intelligence (AGI) is not a single "event" but a slow process of Agency Erosion. In the urban context, this manifests as the Convenience Trap: humans stop making choices because the AI makes them "easier," leading to a gradual loss of spatial sovereignty.
2.3 Neuro-Spatial Resonance
Recent studies in Neuro-Urbanism suggest that standardized urban environments, optimized for machine-speed throughput, induce chronic stress (cortisol spikes) in human inhabitants. The Reality Gap is the mathematical difference between what a sensor records and what a human nervous system experiences.
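The Reality Gap is described here only qualitatively. As a hedged illustration, one way to operationalize it is as a normalized distance between the conditions a sensor optimizes for and the conditions a human nervous system tolerates comfortably. The function name, inputs, and the walking-speed figures below are illustrative assumptions, not definitions from the paper.

```python
def reality_gap(sensor_flow: float, comfort_flow: float) -> float:
    """Hypothetical Reality Gap metric: relative distance between the
    throughput a sensor network optimizes for and the pace a human
    can sustain without chronic stress. Both values in m/s."""
    if comfort_flow <= 0:
        raise ValueError("comfort_flow must be positive")
    return abs(sensor_flow - comfort_flow) / comfort_flow

# A corridor tuned to a machine-optimal 1.5 m/s pedestrian flow,
# measured against an elderly walker's comfortable 0.8 m/s:
gap = reality_gap(sensor_flow=1.5, comfort_flow=0.8)
print(round(gap, 3))  # 0.875
```

A gap of zero would mean the sensed optimum and the lived optimum coincide; larger values flag environments tuned to the machine rather than the inhabitant.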
To address the Standardized Citizen Bias, I propose the Inclusion Algorithm. This is not a single piece of code but a design philosophy that shifts the goal of urban AI from optimization to accommodation.
Pillar I: Inclusive Ambient Sensing
Instead of relying on biometric "Digital Handshakes" (which require smartphones or IDs), we utilize Volumetric LiDAR and Gaussian Splatting. This allows the city to sense a "human presence" and respond to its needs (e.g., slowing down traffic for an elderly walker) without needing to know "who" that person is. This protects Anonymity while ensuring Visibility.
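The sensing principle in Pillar I can be sketched in miniature. The source names Volumetric LiDAR and Gaussian Splatting but specifies no interface, so the track representation, threshold, and signal-timing rule below are illustrative assumptions. The point is the design contract: the system reacts to a slow anonymous presence without ever knowing who that presence is.

```python
from dataclasses import dataclass

@dataclass
class AnonymousTrack:
    """A short-lived, identity-free point-cloud track: one spatial
    coordinate (metres) sampled at fixed intervals (seconds).
    No face, no device ID, no biometric handshake."""
    positions: list
    dt: float

    def mean_speed(self) -> float:
        if len(self.positions) < 2:
            return 0.0
        distance = abs(self.positions[-1] - self.positions[0])
        return distance / (self.dt * (len(self.positions) - 1))

def crossing_signal_time(track: AnonymousTrack,
                         base_seconds: float = 20.0,
                         slow_walker_mps: float = 1.0) -> float:
    """Extend the pedestrian signal for slow walkers. The system
    learns only that 'a slow presence is crossing', preserving
    anonymity while guaranteeing visibility."""
    speed = track.mean_speed()
    if 0 < speed < slow_walker_mps:
        return base_seconds * (slow_walker_mps / speed)
    return base_seconds

# An elderly walker crossing at roughly 0.4 m/s gets a longer signal:
elderly = AnonymousTrack(positions=[0.0, 0.4, 0.8, 1.2], dt=1.0)
print(round(crossing_signal_time(elderly), 1))  # 50.0
```

Because the track carries no identity, the accommodation is granted to the behaviour (a slow crossing), not to a profiled person, which is exactly the Anonymity-with-Visibility trade the pillar describes.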
Pillar II: The Sovereignty Audit
A recurring review of autonomous urban agents that checks for "Hidden Nudging" and the agency erosion described in Section 2.2, preserving each citizen's spatial sovereignty.
Pillar III: Post-Anthropocentric Urbanism
We must move beyond a human-only view of the city. A "Post-Anthropocentric" city uses AI to balance the needs of human social vibrancy, biophilic ecosystems, and autonomous infrastructure. It treats the city as a Living Organism rather than a machine.
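Pillar III calls for balancing human, ecological, and infrastructural needs but prescribes no mechanism. A weighted multi-objective score is one possible (assumed) encoding of that balance; the objective names and weights below are hypothetical, chosen only to show that no single objective, such as raw infrastructure efficiency, can dominate a plan.

```python
# Hypothetical objective weights for a Post-Anthropocentric planner.
OBJECTIVES = {
    "human_wellbeing": 0.4,   # social vibrancy, accessibility
    "biophilic_health": 0.3,  # canopy cover, soil moisture, habitat
    "infrastructure": 0.3,    # grid load, transit throughput
}

def city_score(readings: dict) -> float:
    """Blend normalized (0-1) readings across all three objective
    families, so an efficiency-only plan cannot win outright."""
    assert abs(sum(OBJECTIVES.values()) - 1.0) < 1e-9
    return sum(w * readings[name] for name, w in OBJECTIVES.items())

# A balanced plan beats a plan that maximizes infrastructure alone:
plan_a = {"human_wellbeing": 0.9, "biophilic_health": 0.7, "infrastructure": 0.5}
plan_b = {"human_wellbeing": 0.4, "biophilic_health": 0.3, "infrastructure": 1.0}
print(city_score(plan_a) > city_score(plan_b))  # True
```

The design choice worth noting is that "Living Organism" framing enters through the weight vector: ecological health is a first-class objective, not a constraint bolted onto an efficiency target.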
The integration of the Sovereignty Audit and the Inclusion Algorithm provides a twofold protection for the future city:
Defense: The Sovereignty Audit protects against the erosion of human choice and "Hidden Nudging."
Growth: The Inclusion Algorithm ensures that the city’s central nervous system is trained on the full spectrum of human diversity.
The future of urbanism is not found in more hardware, but in better Logic. We must code for Diversity as rigorously as we once coded for Efficiency. By operationalizing Spatial Justice, Ambient Logic ensures that the Digital Twin becomes a tool for human flourishing, bridging the Reality Gap and reclaiming the city for all its citizens.
Harris, T., & Raskin, A. (2025). The Race for Intimacy: Ethical Implications of Agentic Social Design. Center for Humane Technology.
Kokotajlo, D. (2025). Steerability and Goal Alignment in Autonomous Urban Agents. Journal of AI Governance, 14(2), 88-105.
Thaichon, S., et al. (2025). Agentic AI and the Trust Gap in Virtual Environments. International Journal of AI Ethics & Spatial Behavior, 8(1), 102-118.
Yudkowsky, E. (2024). The Quiet Arrival: Detecting AGI through Agency Erosion. Intelligence Research Institute Reports.