Last fall, while traveling around Australia, I was lucky to spend a couple of days at the Queensland Public Library Association Conference. Speaking on a panel with me, my Learning 2.0 project co-investigator Warren Cheetham, and David West, senior manager at Moreton Bay Region Libraries, was Paul Hagon from the National Library of Australia. The ideas he shared intrigued me, and I attended his mashup session later that day. Hearing Paul speak about combining Google Maps with the library’s digital collections was AMAZING. It was the first time that the true power of mashing up a library’s collection and sending it outside the library walls for people to access via their mobile devices made total sense to me.
I asked Paul if he’d appear with me here on TechSource to talk about what he's been doing and he agreed. A bit of time has passed, but we recently managed to fit in a conversation:
Michael Stephens: I’m intrigued by your bio, Paul. You worked at the Australian War Memorial doing Web development and now at the National Library. How did you find your way to library work?
Paul Hagon: I started my career with no thought of working in libraries. I studied industrial design and began by designing office furniture. In the mid-'90s I started doing some web design work, ended up working for a museum, and my entry into the cultural world began. I really like working with the wealth of information that is available. You can't do anything without good content, and these institutions have endless amounts of incredible content. I enjoy the problem-solving aspect of finding the best, most different, and most interesting ways we can interact with this information.
MS: You once wrote: "Recently I've been thinking of more and more ways that museums and libraries can expose their collections via other methods besides typing a search term into a search box - yawn..." I can't help but agree, and the examples I saw at the Queensland Public Library Association meeting were outstanding. Would you share some of your favorite ideas and implementations for taking digital collections to the "street"?
PH: There is so much happening nowadays that we're always on the verge of something new and exciting. I'm a huge fan of location-based services. Give me the information that is relevant to my immediate surroundings. What images in our collections are related to my location? What journals do you have that are relevant to it? GPS chips or a camera could be the new search box.
How can we easily provide this information for others to use? There are too many applications for us to build them all. I think this is the important part: we should just be the 'pipes' that provide the information. We should build APIs to provide our data to others under appropriate licenses (e.g., Creative Commons) and let others build upon it. Let the tourism industry incorporate our data into a specific application. Let a school incorporate our data into an education program. Imagine teaching an ecology class and being able to build a location-based curriculum where your students could go on a field trip and instantly pull up historical photos and the latest scientific journals directly related to the ecosystem they are studying.
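To make the "pipes" idea concrete, here is a minimal sketch of the kind of location-based lookup Paul describes. Everything in it is hypothetical: the sample records, the `nearby_items` function, and the five-kilometre radius are my own illustrative assumptions, not the National Library's actual API.

```python
import math

# Hypothetical collection records; a real API would query a catalogue database.
RECORDS = [
    {"title": "Sydney Harbour Bridge under construction, 1930",
     "lat": -33.8523, "lon": 151.2108},
    {"title": "Flinders Street Station, Melbourne, c. 1910",
     "lat": -37.8183, "lon": 144.9671},
    {"title": "Parliament House, Canberra, 1927",
     "lat": -35.3075, "lon": 149.1244},
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_items(lat, lon, radius_km=5.0):
    """Return collection records within radius_km of the given point."""
    return [rec for rec in RECORDS
            if haversine_km(lat, lon, rec["lat"], rec["lon"]) <= radius_km]

# A phone's GPS fix in central Sydney surfaces only the Sydney image.
print([rec["title"] for rec in nearby_items(-33.8688, 151.2093)])
```

A licensed endpoint exposing exactly this query is what would let a tourism app or a school curriculum pull in collection items for wherever its users happen to be standing.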
The other part of the equation is the devices. It's been less than two years since the iPhone (via the App Store) became a viable interface. We now have the iPad. Internet-enabled TVs are just starting to appear. We are interacting with these using gestures rather than through a textual interface. Imagine if your TV had gesture recognition and you interacted with it by waving your arms about, smiling for yes and frowning for no. How could we be accessing our collections using these methods?
I think that recently released devices like the iPad have the potential to become the modern-day coffee table book. How easy would it be to build a 'dynamic coffee table book' for this device: one that showcased our collections (I'm particularly thinking of images) based on external influences like the news or weather, where you swiped to move between photos and rotated the device to expose the traditional text-based metadata behind the image? How much more engaging is that than clicking on a few underlined links?
MS: Tell us about the mashup demo you did at the VALA Conference in February. From the tweets, the conference chatter, and your conference papers, it seems as though you wowed the crowd.
PH: VALA was a fantastic conference. My presentation showed off some work I have been experimenting with. We are all familiar with the concept of 'full-text search' for items like books. What I was exploring was the concept of full-text search for different collection types, like images. How can we extract information directly from the patterns of pixels that make up an image?
I have been researching facial detection and facial recognition within our image collections, so we could automatically categorise image types (e.g., portraits) based on whether a face was detected in an image. I also looked at colour analysis and built an application that lets you interact with the National Library of Australia's image collection according to colour (http://ll04.nla.gov.au). It's a really interesting way of looking at a collection. Beyond the novelty factor, it's quite amazing when you can pick the colour of a particular medium, like canvas or aged yellow paper, and suddenly all these images of that medium, or containing a particular colour of oil paint, appear. It's something that is unlikely to occur using traditional searching processes.
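The colour-browsing idea can be sketched in a few lines of Python. This is a toy illustration, not Paul's implementation: the pixel lists standing in for decoded scans, the function names, and the 64-value bucket size are all my own assumptions. Each image is reduced to a dominant colour by quantising its pixels into coarse RGB cells, and a query colour then ranks images by how close their dominant colour sits to it.

```python
import math
from collections import Counter

def dominant_colour(pixels, bucket=64):
    """Quantise (r, g, b) pixels into bucket-sized cells and return the
    centre of the most common cell as the image's dominant colour."""
    counts = Counter((r // bucket, g // bucket, b // bucket) for r, g, b in pixels)
    (br, bg, bb), _ = counts.most_common(1)[0]
    half = bucket // 2
    return (br * bucket + half, bg * bucket + half, bb * bucket + half)

def nearest_images(query_rgb, index, k=1):
    """Rank indexed images by Euclidean distance between their dominant
    colour and the query colour; return the k closest names."""
    return sorted(index, key=lambda name: math.dist(index[name], query_rgb))[:k]

# Hypothetical pixel data standing in for decoded scans: a yellowed
# manuscript and a mostly blue harbour painting.
images = {
    "aged_manuscript.jpg": [(210, 190, 120)] * 90 + [(40, 40, 40)] * 10,
    "harbour_painting.jpg": [(30, 80, 200)] * 80 + [(240, 240, 240)] * 20,
}
index = {name: dominant_colour(px) for name, px in images.items()}

# Clicking a "yellowed paper" swatch surfaces the manuscript first.
print(nearest_images((220, 200, 130), index))
```

At collection scale the index would be precomputed once per image, so a swatch click is just a nearest-neighbour lookup rather than a scan of every pixel.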
MS: Fascinating! Clicking the various colors yielded some interesting results, almost like serendipitous browsing. Maybe if I'm feeling "blue," I might want to see a slideshow of the library's "blue" images. I can imagine a world where we might search on all manner of descriptors. The future is certainly coming fast. What's next for libraries, mobile access, and the Web?
PH: If I knew that, I probably wouldn't be working in libraries; I would be rich! Mobile offers great potential for libraries, but at the moment we're just not seeing it, at least not at the National Library of Australia. We currently get less than 0.2% of our web visits from mobile devices. Maybe it's a little different in the US and Australia is playing catch-up. I believe that simply making a mobile-friendly version of our catalogues won't cut it. What things are unique to mobile devices? GPS, camera, accelerometer, maybe RFID. I've already mentioned incorporating GPS as a search tool. Could you extend my search by colour to use the camera and return items matching the colour you are looking at? Could you navigate our collection using an accelerometer, tilting a device to steer through subject headings? It might be a bit clunky, but you get the idea of what we need to be experimenting with.
I recently read, "We spent $860,000 rebuilding our intranet. The most popular page is still the cafeteria menu!" Our most popular page from mobile devices, apart from our homepage, is still our opening hours! I really want to move in a different direction from that.
MS: What's the first step to move in that direction?
PH: I would love to see libraries open up their information for others to use more easily. Currently, libraries are both the custodians of information and the custodians of how you access that information (e.g., through a traditional catalogue/search interface). Libraries can still be the custodians of the information and retain the authoritative role that they should have, but they shouldn't necessarily decide how people access that information. There are a lot of smart people out there building clever applications. Let them use the information how they want. Good ideas come from anywhere.
Paul’s Blog: http://www.paulhagon.com/
Paul was named a 2010 Mover and Shaker by Library Journal.