Stumbling upon old photos of a place often evokes wonder and nostalgia. One can’t help but ask what Times Square looked like in the ’40s. How many of these buildings stood here back then, and was the skyline always this crowded? There are endless questions, but modern mapping technologies such as Google Street View can only go back so far. But what if we had something like that for even older days?
Raimondas Kiveris, a software engineer at Google Research and a 20-year New York resident, has seen the city change. “It was a completely different place, a different town,” he says. That made him wonder what the city looked like before he got there, or before he was born. With no ready answer available, he had to devise a solution himself. “There’s really no easy way to find that information in any organized way. So I was starting to think, can we somehow enable this kind of virtual time travel?” he said.
Now, three years after the idea’s conception, his project, a sort of virtual time-traveling app, is starting to come together, and it’s open-source. The app, called rǝ, lets users enjoy the scenery in both street-level and bird’s-eye views. It has a slider that lets you go back and forth in time and watch neighborhoods being built.
The map is limited to cities in the U.S. and goes back to the year 1800. Most of the 3D building models are automatically generated. The main source of data for the project is historical fire insurance maps. These archived materials contain precise information on building ages, heights, sizes, materials, and even roof shapes.
Kiveris wants his map to be more than a crude outline of America’s buildings, which is why the project is open-source. It allows the integration of photographs, which are analyzed with neural networks; the inferred properties are then used to improve the blocky building models.
He explained, “If we have photos of a building showing the facade in some detail, we can do much more,” adding, “We can essentially do semantic parsing of that facade and figure out this area here is a window, this area is a cornice, this is a stair, this is a door.”
Several areas such as the Chelsea neighborhood in Manhattan are already furnished with parsed images. Check it out.