These past weeks, I’ve been scouring the country, and especially my hometown of Groningen, doing interviews and spending time with people for the ‘Funda’ case. In exploring the various practices that sustain and rely on Funda, a database of real estate in the Netherlands, I’ve been looking at how visual material about real estate is produced, used, and re-used.
The aim is to understand how photos (but also floor plans, 3D plans, and videos) contribute to the constitution and circulation of knowledge about real estate objects, a kind of everyday knowledge that is strongly visual. Through participant observation, I follow how material is produced by real estate agents, how it is used by house-buyers, and how Funda, as a web-based information infrastructure, plays a role in shaping this. I’ve also been tracing how other sources of visual material get used by house-buyers, and how the various sources relate to each other.
By the way, I’m still looking for users of Funda or potential house-buyers to talk to, so get in touch if you’re willing to talk to me about your experiences (email@example.com).
A new application for high-end mobile phones called Layar features the real estate database Funda (one of our four cases). By panning one’s mobile phone and looking through its built-in camera, one sees information displayed about the objects encountered. For example, by pointing the phone towards a row of houses, any property found in Funda is highlighted on the screen, and its database entry can then be consulted. This site has a demo of Layar, which Sara Kjellberg kindly brought to our attention. Layar is currently directed at the Netherlands, and the Funda application will be launched at the beginning of July.
As the name hints, one can layer information onto the images on the mobile phone, so that one’s presence at a geographical point in space becomes the interface to the database (together with the right mobile phone and subscription!). If I ever get my hands on this, I’ll be very curious to see how the linking between objects viewed through the camera and those in the database is done. I can imagine that GIS data could be used to link the phone’s location to database entries for the general area, but this is rather coarse. Does image recognition enter the picture to refine the selection of information to be presented? At first blush, that seems like an overly complex computational challenge to perform on the fly on a mobile phone. So how does it work?
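As a back-of-the-envelope sketch of the coarse, location-only approach I’m imagining, a few lines of Python can filter a set of geocoded entries down to those within range of the phone and inside the camera’s field of view, using only GPS position and compass heading. To be clear, everything here is hypothetical: the property list, coordinates, and parameters are made up for illustration, and I have no idea whether Layar actually works this way.

```python
import math

# Hypothetical, made-up sample of geocoded database entries (lat, lon in degrees).
PROPERTIES = [
    {"id": "funda-001", "lat": 53.2194, "lon": 6.5665},  # central Groningen
    {"id": "funda-002", "lat": 53.2200, "lon": 6.5700},  # a few hundred metres east
    {"id": "funda-003", "lat": 52.3702, "lon": 4.8952},  # Amsterdam, far away
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees, 0 = north) from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360

def properties_in_view(phone_lat, phone_lon, heading, radius_m=200, fov=60):
    """Return ids of entries within radius_m metres of the phone that also
    fall inside the camera's horizontal field of view (fov degrees wide,
    centred on the compass heading)."""
    hits = []
    for p in PROPERTIES:
        if haversine_m(phone_lat, phone_lon, p["lat"], p["lon"]) > radius_m:
            continue
        # Signed angular difference between bearing and heading, in (-180, 180].
        diff = (bearing_deg(phone_lat, phone_lon, p["lat"], p["lon"])
                - heading + 180) % 360 - 180
        if abs(diff) <= fov / 2:
            hits.append(p["id"])
    return hits
```

For instance, standing in central Groningen and facing roughly east (heading about 76 degrees), `properties_in_view(53.2195, 6.5666, heading=76, radius_m=300)` selects only the nearby entry in that direction. Even so, this only narrows things down to a cone of space; telling adjacent houses in the same row apart would still need something finer, which is where my image-recognition question comes in.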