Neural Net Car Removal
Chris Harris / May 06, 2019
5 min read
Why?
In part this is a response to the global movements rallying to mitigate climate change (Extinction Rebellion among them). In part it stems from my own abhorrence of the obnoxiousness of cars in what should be pedestrian-friendly environments, and from the desire to live in greater harmony with nature. I dream of being rid of the pollution and the obstacles, but most of all, as someone who is highly sensitive, of the cacophony and jarring relentlessness of a constant flow of traffic. The future of cities is good public transport, bikes and walkable streets. I see no need for the regular use of cars inside densely populated cities.
The Tech
The A.I. I use here isn't itself novel; I've adapted the code from two open-source repositories on GitHub. The vehicle detection is the same type used in self-driving cars. I detect and then remove the vehicles using the code from a recent paper called Globally and Locally Consistent Image Completion. That model is trained on the Places2 dataset from MIT, which contains a lot of images of outdoor places, so I hoped it would bias the filling-in of the cars towards more natural scenes. The implementation is very simple at the moment. I've toiled away for long periods on work that went unnoticed in the past, and recently a friend and collaborator suggested the maxim 'don't do any projects that take longer than a day', which really hit home (at least as a starting point). This is the first of those projects. Despite the unfinished nature of the result, I think it showcases the struggle quite well: the struggle we all face to try and make our environment more hospitable to life.
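For the curious, here is a minimal sketch of what that per-frame pipeline looks like. It's illustrative rather than the exact code I run: torchvision's pretrained Faster R-CNN stands in for the vehicle detector, and inpaint_model is a placeholder for the generator from the image-completion repository, whose real loading code and interface differ.

```python
# Minimal sketch of the per-frame pipeline: detect vehicles, build a mask,
# hand the masked frame to an inpainting model to fill the holes.
# Assumptions: torchvision's COCO-pretrained Faster R-CNN as the detector,
# and `inpaint_model` as a stand-in for the image-completion generator.
import numpy as np
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# COCO class ids that roughly correspond to vehicles: car, motorcycle, bus, truck
VEHICLE_CLASSES = {3, 4, 6, 8}

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
detector.eval()

def vehicle_mask(frame_rgb, score_threshold=0.6):
    """Return a binary (H, W) mask covering detected vehicles in the frame."""
    with torch.no_grad():
        preds = detector([to_tensor(frame_rgb)])[0]
    mask = np.zeros(frame_rgb.shape[:2], dtype=np.uint8)
    for box, label, score in zip(preds["boxes"], preds["labels"], preds["scores"]):
        if score >= score_threshold and int(label) in VEHICLE_CLASSES:
            x1, y1, x2, y2 = box.int().tolist()
            mask[y1:y2, x1:x2] = 1  # mark the bounding-box region for removal
    return mask

def remove_vehicles(frame_rgb, inpaint_model):
    """Mask out vehicles and let the completion model fill in the scene."""
    mask = vehicle_mask(frame_rgb)
    # The completion network sees the frame with vehicle pixels zeroed out,
    # plus the mask telling it which regions to hallucinate.
    return inpaint_model(frame_rgb * (1 - mask[..., None]), mask)
```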
It currently runs at about 2 fps on commodity hardware, but could definitely run in real time on dedicated GPU/TPU hardware in the future, with adjustments and performance improvements. With enough work, it's certainly possible it could run in real time on high-end mobiles. I first saw this technique used on people in video by Michail Rybakov in his work "Deleting Human Bodies". When cars show up and contribute to the glitchy effect, it's the model not recognising them well enough, as everything is currently done frame by frame. There's lots of room for improvement by integrating knowledge of prior frames, both to do object tracking (making the detection of vehicles more robust) and to learn what the scene looks like without vehicles.
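Here is a rough sketch of that frame-by-frame loop, again illustrative rather than my exact code. It assumes a remove_fn that turns a single RGB frame into a vehicle-free one (like the hypothetical remove_vehicles above), and the file names are just placeholders.

```python
# Rough sketch of the frame-by-frame loop, using OpenCV for video I/O.
# `remove_fn` is whatever maps one RGB frame to a vehicle-free RGB frame
# (e.g. the hypothetical remove_vehicles sketch above).
import cv2

def process_video(remove_fn, in_path="street.mp4", out_path="street_no_cars.mp4"):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        # Each frame is treated independently; a missed detection in one frame
        # is what produces the glitchy reappearing cars. Tracking detections
        # across frames would make this far more stable.
        filled = remove_fn(frame_rgb)
        writer.write(cv2.cvtColor(filled, cv2.COLOR_RGB2BGR))

    cap.release()
    writer.release()
```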
The Vision
I'd had this idea for a while, becoming increasingly distraught by the number of trucks and cars around my walking route and the effect on my well-being. I had tried to imagine them away: roads becoming expanses of grass or flowing rivers, passing cars becoming birds, crumbling pavements becoming mountain trails. It kinda worked if I concentrated hard enough. I thought that if the hardware advancements became commonplace enough, perhaps this imagined biophilic world could be something I could make for myself, and for others too.
I envisioned an AR app that augments the cityscape to be more environmentally friendly, specifically with more biophilia, while still allowing safe navigation: replacing cars with something less obnoxious, perhaps a flowing river, perhaps a flock of birds. Originally I dismissed the idea, thinking it would not actually contribute to the overall good of the physical world we do inhabit.
I picked the thought back up after a tweet from climate advocate Christine Larivière, who suggested a similar idea and proposed that these kinds of tools could at least help depict and encourage the change we want to see in the world.
The challenge to keep in mind with this kind of technology, as succinctly depicted by Samim, is to make sure we use technology for the betterment of our real lives and be wary of tech for tech's sake.
If this experiment were to grow into the more elaborate AR vision, I would like proceeds from the app to go towards improving the real-world location it's used in, or some other means of creating a positive feedback loop.
Encouraged also by digital artist M Plummer-Fernandez, we're planning a number of other experiments using technology for meaningful causes, seeking to bring greater harmony between us humans, our peers and the natural world.
There's a growing collective of people, including those mentioned and many more, who are no longer waiting for big tech or political players to do this sort of work; we are stepping in to do it ourselves, starting from what humble beginnings we can.
Feel free to keep up to date with more experiments in imagining and manifesting better human harmonies by following me on Twitter.