Among the powerful shifts that AI has brought, one unexpected transformation has largely gone under the radar. In the U.S., the historically chaotic, under-managed, and often dangerous curb is becoming one of the most dynamic levers for improving safety, reducing traffic congestion, raising sustainability standards, and supporting local commerce.
The curb is where most street-level activity intersects: cars, delivery vans, cyclists, scooters, and pedestrians, among others. Caring for this slice of urban centers' most valuable real estate has never been as essential as in the post-pandemic world.
When the Spanish flu tore through the U.S. in 1918, the last pandemic to mirror Covid in scope and devastation, Wisconsin had just become the first state to mark its highways with numbers; New England inaugurated its Interstate Routes four years later; and the federal government had only started to fund the development of a national road network with private-sector technology support. Roadways as spaces of social connectivity were only starting to emerge.
Bottom line: the U.S. has never experienced the intersectional cohesion that the curb provides today, when markets have reopened, mobility remains critical for economic development, and every inch of urban space is expected to serve competing purposes at once.
Before this renewed examination of how Americans use urban space, Los Angeles-based startup Automotus had already identified an unprecedented opportunity: leveraging emerging technologies, from AI to computer vision, to modernize curb operations at scale.
StartupBeat spoke to Ganesh Vanama, a computer vision engineer at the company, about how current technology converges with policy design and real-time data to reshape how cities across the U.S. move and flourish.
What drew you to computer vision, and how did that path lead you to Automotus’ mission at the curb?
I’ve always been fascinated by how we perceive and adapt to our visual environment. Embedding that knowledge into a machine—computer vision—has unlocked transformative opportunities across industries. My path led me to Automotus because its mission uses this revolutionary technology to solve critical urban challenges: managing the curb to eliminate traffic fatalities, significantly reduce congestion, and cut CO2 emissions per zone.
What problem at the curb felt most urgent to solve when you joined Automotus?
The most urgent issue was the massive data blind spot that prevented cities from effectively managing safety and congestion.
Traditional methods couldn’t consistently capture critical violations like double-parking or blocked bike lanes, which are direct causes of fatalities and gridlock. Our mission was to provide the real-time, objective data necessary to inform proactive policy and enable automated enforcement.
How does computer vision at the curb measurably change city mobility and curb behavior?
Computer vision creates a positive feedback loop that fundamentally changes curb behavior. By accurately detecting and documenting traffic violations that cause congestion and safety hazards, like double-parking, we enable cities to enforce their rules consistently and at scale.
This accountability encourages compliant driver behavior, leading to measurable impacts: reduced congestion, better flow for all modalities, and significantly safer curbs for pedestrians and cyclists.
Which metrics best capture “curb health” before and after deployment, and how do you track them?
The health of the curb is best captured by measuring the reduction in two key metrics before and after deployment.
The first is the number of double-parking events, a direct measure of safety hazard and traffic obstruction. Pittsburgh, for example, saw a 97% reduction in these incidents after we provided stakeholders with this data.

The second is the average dwell time in loading zones, a measure of turnover and efficiency. The average stay in Pittsburgh decreased by 23%, meaning more vehicles can use these spaces and park legally.
We track these by integrating our vision data with feedback loops from the cities on their broader data points.
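To make those two metrics concrete, here is a minimal sketch of how they could be computed from curb detection events. The event fields, units, and helper names are hypothetical and not Automotus's actual schema or pipeline.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical curb event emitted by a vision pipeline; field names are illustrative.
@dataclass
class CurbEvent:
    vehicle_id: str
    zone_id: str
    arrived: datetime
    departed: datetime
    double_parked: bool  # flagged when the stop happened outside a legal space

def curb_health(events: list[CurbEvent]) -> dict:
    """Summarize the two 'curb health' metrics for one reporting period."""
    double_parks = sum(1 for e in events if e.double_parked)
    dwell_minutes = [
        (e.departed - e.arrived).total_seconds() / 60
        for e in events
        if not e.double_parked
    ]
    return {
        "double_park_events": double_parks,
        "avg_dwell_minutes": mean(dwell_minutes) if dwell_minutes else 0.0,
    }

def percent_change(before: float, after: float) -> float:
    """Relative change between pre- and post-deployment periods."""
    return (after - before) / before * 100 if before else 0.0
```

Comparing the output of `curb_health` for a pre-deployment window against a post-deployment window is the kind of before/after calculation behind figures like Pittsburgh's 97% and 23% reductions.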
What feedback loops from cities, fleets, and drivers matter most for judging success?
The most critical feedback loops confirm the core objectives are being met. From cities, the key signal is the quantifiable impact on Vision Zero goals and their willingness to commit long-term, which we see in extended contracts or new legislation enabling automated enforcement.
From commercial fleets and drivers, success is judged by improved operational efficiency. Positive feedback from large delivery partners reporting increased delivery efficiency and reduced driver stress is key. Requests for more zones also confirm the value proposition.
And finally, from local businesses, the feedback loop is measured by improved access and turnover, which they report as directly facilitating merchandise delivery and improving customer access.
Which computer vision tasks are the hardest at the curb, and how do you address them?
The consistently hardest task is handling dynamic and complex occlusions. This means identifying and maintaining a track on a vehicle when the camera's line of sight to it is temporarily blocked by large passing vehicles like buses or trucks.
Our solution precisely determines the vehicle’s presence, location, and violation status, even with partial or interrupted visibility.
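One common way to keep a track alive through a brief occlusion is to let it "coast" on a motion prediction for a bounded number of frames instead of dropping it the moment the detector loses sight of the vehicle. The sketch below shows that idea with a simple alpha-beta filter; it is illustrative only and not Automotus's production tracker.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    track_id: int
    x: float          # position along the block face, in metres
    vx: float = 0.0   # estimated velocity; near zero for a parked vehicle
    missed: int = 0   # consecutive frames with no matching detection

MAX_MISSED = 30  # roughly one second at 30 fps; tuned to typical occlusion length

def step(track: Track, z: Optional[float],
         alpha: float = 0.5, beta: float = 0.1) -> Optional[Track]:
    """Advance the track by one frame; z is None while the vehicle is occluded."""
    x_pred = track.x + track.vx  # constant-velocity prediction (dt = 1 frame)
    if z is None:
        # Occluded: coast on the prediction instead of dropping the track.
        track.x = x_pred
        track.missed += 1
        return None if track.missed > MAX_MISSED else track
    residual = z - x_pred
    track.x = x_pred + alpha * residual   # alpha-beta filter correction
    track.vx = track.vx + beta * residual
    track.missed = 0
    return track
```

Because a double-parked or loading vehicle barely moves, even a crude constant-velocity coast is usually enough to bridge the few seconds a bus blocks the view.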
How do cities differ from a data perspective, and how do these discrepancies shape your approach?
Cities differ primarily in their policy and legal readiness for automation. This legal readiness, or lack thereof, largely determines whether the city's laws permit the use of automated technology to issue tickets or citations to violators through the mail.
In cities with low legal readiness, we focus on data-driven advocacy. For this, we deploy cameras to quantify the problem and provide real-time alerts for directed manual enforcement, building the evidence needed to change state law for full automation.
In cities with high legal readiness, we focus on iterative policy evolution: we use the data to design and refine dynamic policies, such as progressive pricing and full automated enforcement via citation, to achieve immediate and maximum behavior change.
What edge cases persist, and how do you mitigate them?
While occlusions are the hardest computer vision task, the persistent real-world edge cases are low-visibility conditions like night and severe weather.
To mitigate them, we adopt a layered approach. The first layer is robust hardware: high-dynamic-range cameras with IR illumination ensure high-quality data capture in low light and adverse conditions. The second is extensive training: our AI models are rigorously trained on massive, diverse datasets that include extreme weather and lighting scenarios from cities across the country, ensuring reliable 24/7 accuracy.
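As a rough illustration of the training side, models can also be exposed to synthetically darkened and noisier frames so that night-time conditions are well represented. The parameters below are illustrative; Automotus's actual augmentation pipeline is not public.

```python
import numpy as np

def simulate_low_light(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """img: HxWx3 uint8 frame. Returns a darkened, noisier copy for training."""
    frame = img.astype(np.float32) / 255.0
    frame *= rng.uniform(0.2, 0.6)                # global dimming
    frame = frame ** rng.uniform(1.2, 2.0)        # gamma crush of shadow detail
    frame += rng.normal(0.0, 0.03, frame.shape)   # approximate sensor noise
    return (np.clip(frame, 0.0, 1.0) * 255).astype(np.uint8)
```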
What evidence shows automated enforcement shifts driver behavior, and what still resists change?
In cities like Pittsburgh, we see a 97% reduction in double-parking, and critically, 87% of first-time violators never receive a second citation, proving that the system encourages rapid, sustained behavior change.
The primary resistance, however, is not technical but behavioral and systemic. This involves overcoming decades-old habits and the initial friction of adoption.
While adoption is growing, with over 7,700 registered vehicles in Philadelphia, a portion of drivers still resists registration, leading to mailed citations. This is a matter of ongoing education and ensuring the user experience is as seamless as possible.
Which non-technical interventions amplify impact the most?
The most effective non-technical levers are clear physical signaling and progressive pricing policy.
Visually distinct curb treatments, like the "purple curbs" in Pittsburgh, immediately signal to drivers that the space is different and camera-enforced. This is amplified by a robust, multi-channel public education campaign to build trust and explain the "why."
Progressive policy tweaks that use data to incentivize short-term use, such as a progressive rate structure or a two-hour maximum, dramatically amplify impact by driving higher turnover and efficiency.
How do you collaborate with policy, legal, and city operations to deploy responsibly?
Responsible deployment hinges on iterative development and cross-departmental alignment.
Through cross-departmental coordination, we facilitate continuous meetings between policymakers and the enforcement or operations teams to ensure alignment on legal authority, contracts, and system integration.
We also use pilot data to provide the evidence and language necessary for city partners to successfully draft and pass local legislation, ensuring the automated system is legally sound before citations are issued.
And, by collaborating with legal teams, we ensure our license plate readers adhere strictly to a 'privacy-by-design' principle: only collecting data required for payment or citation, and never storing or sharing personally identifiable information.
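To illustrate what data minimization of that kind can look like in practice, here is a minimal sketch in which a plate read is reduced to a salted hash used only to key a payment or citation record, with the raw text and image crop discarded. The field names and flow are hypothetical, not Automotus's actual system.

```python
import hashlib

def record_session(plate_text: str, zone_id: str, salt: bytes) -> dict:
    """Return the only fields persisted for billing or citation purposes."""
    plate_token = hashlib.sha256(salt + plate_text.encode()).hexdigest()
    return {
        "plate_token": plate_token,  # not reversible without the salt
        "zone_id": zone_id,
        # no raw plate text, no image crop, no driver identity stored
    }
```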
What does a reimagined curb look like by 2030? How will computer vision evolve to support it?
By 2030, the curb will be a fully dynamic, data-driven, and multimodal asset governed by ‘policy as software’.
It will look like a seamless extension of the street, where rules, time limits, and pricing adjust automatically in real time based on fluctuating demand, converting paid parking to loading zones as needed. This will also help cities virtually eliminate congestion and achieve Vision Zero goals.
Computer vision will evolve into the real-time operating system for the curb. It will provide near-perfect, 24/7 reliability in classifying all vehicles and activities, enabling the city to manage every inch of curb space dynamically and with complete equity. This will maximize the public value of the most precious resource in the urban core.
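"Policy as software" can be pictured as curb rules expressed as data and evaluated in real time against what the cameras observe. The sketch below shows one hypothetical zone with time-of-day uses and a progressive rate; every name, rate, and threshold is invented for illustration.

```python
from datetime import datetime, time

# Hypothetical curb policy for one block face: time windows, permitted use,
# a free grace period, and a per-minute rate beyond it.
CURB_POLICY = {
    "zone": "MAIN-ST-0100",
    "windows": [
        # (start, end, use, free_minutes, rate_per_extra_minute)
        (time(7, 0),  time(11, 0), "loading", 15, 0.50),
        (time(11, 0), time(22, 0), "parking",  0, 0.10),
    ],
}

def price_session(arrived: datetime, minutes_parked: int) -> float:
    """Progressive pricing: a free grace period, then a per-minute rate."""
    for start, end, _use, free_minutes, rate in CURB_POLICY["windows"]:
        if start <= arrived.time() < end:
            return max(0, minutes_parked - free_minutes) * rate
    return 0.0  # outside managed hours in this sketch
```

Because the rules live in data rather than on signs alone, a city could, in principle, swap a window from "parking" to "loading" or adjust the rate in response to observed demand without touching the enforcement pipeline.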
Disclosure: This article mentions a client of an Espacio portfolio company.