Where All the World’s a Sunny Day
I’d love to have been at the meeting at Google where they decided to do Street View, the feature in Google Maps where you can see photos taken by cars driving down the street.
I imagine Larry Page, Sergey Brin and Eric Schmidt sitting around a table when one of them said, “Hey, other two Google guys, let’s send a fleet of cars up and down every street in the world taking high-resolution images in every direction. It will take a long time, cost a small fortune, and maybe there will be some lawsuits over privacy violations. 18 months from now, when digital technology has improved, we can chuck out all the old photos and do it all again.”
“What would we use it for?” one of the other guys might have asked.
Perhaps the first guy then replied, “Well, um, you could look at a picture of your house…”
And then maybe there was this long pause and the other two guys threw their hands into the air and shouted, “We love it!”
I realize that there are other companies in the world offering similar services. For example, the Norwegian company Finn.no’s Kart has a feature called ‘Gatebilder’ (Street Pictures), which is essentially the same offering as Google’s Street View. Bing has ‘Streetside’. My question is: Why? Why is Google spending so much effort to have a year-old, blurry digital image of every place in the world?
One obvious answer is: because other companies are and Google doesn’t want to get left behind.
But, what can Google (or anyone else) do with this information that they couldn’t do otherwise? I asked that very question on Quora: “What is the purpose of Google Street View?” Paul Tourville, author of the Ursus Pacificus blog, offered a couple of suggestions: “I think it’s … for their brand identity and market penetration… and to offer a tool and let the users figure out what it’s good for.”
One thing to realize is that the Google car doesn’t just take photos, it also has a laser range finder which estimates the distance from the car to the closest object in each direction, thus creating a 3D map of the local environment. And, the car also has a device for detecting WiFi networks and collecting information from them (as Google has recently gotten itself into hot water over).
I’d like to suggest a couple of possible answers to my own question:
One possible use for all of this information is to power ‘augmented reality’, the idea that soon we’ll be able to point our cellphone camera at the side of a building and see that up on the second floor there’s a dance studio. That might be helpful information. But, from what I can tell, you don’t really need photos to do that. The cellphone has a camera, so you can just show the camera’s image on the screen and then overlay a little label saying ‘Emma’s Dance Studio’. (I could be completely wrong about this: Perhaps collecting all of that laser range data and WiFi network data is critical to getting that little label to appear properly.)
Another possibility is that the images themselves and the process of gathering these images is key to developing self-driving cars, another pet project that Google executives have mentioned on occasion. People waste a lot of time driving. Making self-driving cars would free up time for doing more important things, like searching on Google and clicking the ads. So, Google could be post-processing the images they collect to see whether or not the road surface has two marked lanes or only one, to read street signs to find the speed limit on each segment of road, and so forth. Presumably, then, the self-driving Google cars have video cameras on board, to help stay on the road, and radar, to avoid hitting other cars or objects.
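To give a flavor of what that lane-marking post-processing might look like: here is a toy sketch (entirely my own invention, not anything Google has described) that counts painted stripes in a single row of grayscale pixel intensities taken across a road image, relying on the fact that lane paint reflects much more light than asphalt.

```python
def count_lane_markings(row, threshold=200):
    """Count bright painted stripes in one row of grayscale pixels (0-255).

    A 'marking' is a contiguous run of pixels brighter than the threshold,
    since lane paint reflects far more light than dark asphalt.
    """
    markings = 0
    in_stripe = False
    for pixel in row:
        if pixel > threshold and not in_stripe:
            markings += 1          # entered a new bright stripe
            in_stripe = True
        elif pixel <= threshold:
            in_stripe = False      # back on plain asphalt
    return markings

# Simulated scanline: asphalt (~60) with two painted lines (~250)
scanline = [60] * 40 + [250] * 5 + [60] * 40 + [250] * 5 + [60] * 40
print(count_lane_markings(scanline))  # -> 2
```

A real pipeline would of course work on full images with perspective correction and something like a Hough transform, but the underlying classification — one marked lane or two — reduces to counting stripes much like this.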
What else might be on the Google car? It could have a light meter on it, to detect which images were taken in full sunlight and which were cloudy. Notice how most of the images in Street View look like they were taken on a sunny summer day. Do they not run the Google car in the rain? Why is there no ‘Night View’ option, or ability to switch between seasons?
Google Earth displays the type and placement of trees. (This is not a generic icon representing ‘a tree is located here’, but the actual species of each tree, for 50 different species.) I’ve heard that laser range finders can be used to identify types of vegetation, but it would make sense for the Google car to be taking UV photos as well, to aid this process.
What other things might there be on each Google car? If I were outfitting the Google car, I’d probably throw an infrared camera on there too. This would let me know which buildings were losing abnormally large amounts of heat, in case I wanted one day to show them ads for insulation. (It would also help identify which houses had sizable grow operations going on in the basement, which would be handy to know next time Larry Page and Sergey Brin drove out to Burning Man.)
I’ve heard that GPS is bad at determining elevation, so adding some sort of inclinometer or even altimeter would let me record the elevation at each point on the Earth’s surface.
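The inclinometer idea is just dead reckoning: each segment of road contributes its length times the sine of its grade to the running elevation. A minimal sketch, assuming a hypothetical feed of (distance, incline) pairs from the car’s odometer and inclinometer:

```python
import math

def elevation_profile(segments):
    """Integrate (distance_m, incline_deg) readings into elevation.

    Returns the elevation in metres, relative to the starting point,
    after each segment. Each segment adds distance * sin(incline).
    """
    elevation = 0.0
    profile = []
    for distance_m, incline_deg in segments:
        elevation += distance_m * math.sin(math.radians(incline_deg))
        profile.append(round(elevation, 2))
    return profile

# 100 m flat, 200 m up a 3-degree grade, 100 m down a 1.5-degree grade
print(elevation_profile([(100, 0.0), (200, 3.0), (100, -1.5)]))
# -> [0.0, 10.47, 7.85]
```

Errors accumulate with this approach, which is presumably why you would periodically re-anchor it against known benchmarks or (coarse) GPS altitude.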
A thermometer and barometer would eventually give lots of weather-related details from far more locations than is now possible. Many weather stations are located in urban areas, so temperature data must be corrected to account for the urban heat island effect. Running a fleet of Google cars all over planet Earth for a decade or two would not only provide an unprecedented number of measurements for studying global warming, but also help contribute directly to climate change as well.
Similarly, a CO detector, a pollen detector, and other measures of air quality might be useful. A microphone outside the vehicle could measure ambient sound levels, so that I could make a ‘sound map’ of the noisiest and quietest areas. I don’t know what I’d do with this.
A phased plasma rifle in the 40-watt range could be useful for offing errant Facebook executives, particularly if the cameras on the Google car had automatic face detection / recognition.
An accelerometer would help to determine road surface quality and the presence of potholes.
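In its simplest form, pothole detection from an accelerometer is just spike detection: flag any sample where vertical acceleration deviates sharply from gravity. A toy sketch (my own illustration, with made-up thresholds):

```python
def find_potholes(z_accel, baseline=9.8, spike=4.0):
    """Flag sample indices where vertical acceleration deviates
    sharply from gravity -- a rough proxy for hitting a pothole.

    z_accel: vertical accelerometer samples in m/s^2.
    spike: deviation from the ~9.8 m/s^2 baseline treated as a jolt.
    """
    return [i for i, a in enumerate(z_accel) if abs(a - baseline) > spike]

# Smooth road, then a jolt: sample 3 drops into the hole, sample 4 bounces out
samples = [9.7, 9.9, 9.8, 3.1, 16.2, 9.8, 9.9]
print(find_potholes(samples))  # -> [3, 4]
```

A production system would need to account for speed, suspension, and sampling rate, but paired with GPS even this crude version would yield a map of rough pavement.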
Google Maps has a feature called Google Map Maker, in which a small ‘Edit’ button will appear for some locations. Clicking on this button lets users ‘mark up’ the map: the locations of important buildings, the surface type of a given road (asphalt, dirt, etc.), the suitability of the road for bicycles, the presence or absence of lane dividers, and so forth.
I’ve noticed that this ‘Edit’ button only appears in countries which have not yet been visited by the Google Street View car. So, for example, I am allowed to mark a segment of road in Pyongyang, North Korea, as a marked two-lane road made of asphalt, but not the road out in front of my own house.
Presumably then, the Google car (which has driven by my house) has already collected this information for my segment of road, so they don’t need me (or want me) to input that information into Google Map Maker. But in places like North Korea, where they have not yet visited, they allow users to mark road features and buildings.
There’s actually quite a bit Google can be doing with post-processing of the images they collect. Detecting faces gives an estimate of pedestrian density. Text detection and recognition means that they could find, for example, areas with large Hispanic populations based on the language on signs and billboards. They could be monitoring the advertisements on billboards.
Looking at facial symmetry would allow them to estimate the density of hot girls in a given area.
At this point, I guess the question isn’t what is Google doing with Street View data, it’s what couldn’t they do? And what should they do? And what shouldn’t they?