Wednesday 16 September 2009

In Collaborative Search, is Perception Everything?

After reading a fascinating series of tweets this morning between Jeremy Pickens, Sharoda Paul, Brynn Evans, and Gene Golovchinsky on what constitutes success in collaborative information seeking (I woke up some hours after the discussion), it struck me how important the difference between actual and perceived information need is in collaborative searching activities.

In scenarios where a group of friends is working together to organise a holiday, for example, every member of the group is working on a perceived collective need. If several people are helping one person solve their problem, then the central person is (hopefully) working on an actual information need, but all their helpers are working on a perceived version of that person's need.

Sharoda Paul has been studying collaborative searching behaviour in medical environments. I haven't asked her directly about it, but imagine a whole team working to solve a patient's need. All the medical staff are working on perceived information needs, with many, I would suspect, working on perceived versions of other people's perceived information needs. A nurse might be working to what she thinks the doctor needs to solve the problem they think the patient has.

So what do we know about the difference between actual and perceived information needs? I picked it up in Järvelin and Ingwersen's 2004 paper that preceded their big book 'The Turn'. The book discusses it further, but concludes that it's relatively underexplored. It also appears to be a commonly used term in medical papers about how patients view their illnesses. Related topics, however, have been popular, such as sensemaking and the elements of communication in collaborative search. Sharoda presented some fascinating work at CHI2009, after her time at MSR, on how people make sense of previous collaborative searches. Nikhil Sharma has also presented some fascinating work on sensemaking during handovers, such as between shifts in hospitals.

I'm sure the topic has been broached in papers, and is being addressed in part by these related topics, but collaborative information seeking seems like a great opportunity to study perceived and actual information needs, and to feed insights back into collaborative search efforts. I'm looking forward to more collaborative search and sensemaking workshops to come! Any at CHI2010?

Tuesday 7 April 2009

Google does, apparently, test everything

I was recently interested by a debate about why Google sticks its facets, and now its query refinements etc., at the bottom of the search results. The basic assumption proposed was that you only need to refine your results if you didn't get what you wanted in the first 10 results, which you probably did anyway, right?

I thoroughly enjoyed a good chat with Daniel Russell today about this decision. I can reveal that it is a very, very well-tested decision, not just a random design choice, as I perhaps naively assumed. Apparently, they even tested 5px variations of it on the x and y axes, as well as placing it above and below the first result of 10, and many more options combined. And their high-volume studies concluded that right there, not 5px to the side, was best.

Apparently it doesn't stop there. Even the height, and shade of blue, of the horizontal bar above the results has a dramatic effect. The colour blue has been carefully chosen.

In some respects, I feel like my research ideas and focus have just been completely shattered into tiny shards. But I guess I am now all the better for knowing (or believing I know) how purposeful Google's design is. And it's just like Daniel Russell said in the recent IEEE special issue: there are some things that you can only study at their scale, including tiny UI changes.

Not that they only test small changes, it seems.

Saturday 4 April 2009

What is the smallest sensemaking problem?

Daniel Russell opens the CHI2009 Sensemaking Workshop with a question:

What is the smallest sensemaking problem? What is the very minimum that counts as sensemaking?

A lot of the challenges in this area are group planning problems, handoffs in hospitals, writing essays. These range from big to massive. What's a small sensemaking problem?

Thursday 2 April 2009

CHI2009 Planning tool

Yesterday I released a planning tool for attending CHI2009. It's had 50 people actually use it in the first 24 hours, and I suspect many more visitors (I should really be counting).

I've had plenty of feedback already, some improvements to make, but much praise too:

"Excellent Max! - Thanks a lot!"

"This is super! ... just having such a planner is a relief! I commend you for such a straight-forward solution."

"This looks great, can't wait to get home and put it on my PowerBook and iPod! Thanks so much for doing this!"

and many, many more. I'm quite surprised by the response I've had. It's, of course, quite generic and easy to apply to another conference, so do contact me if you'd like to use it. Otherwise enjoy, and I'll see some of you at CHI!

Thursday 26 March 2009

Google tests more text with each snippet

Daniel Tunkelang has brought my attention to another blog entry about some of the tests that Google is carrying out at the moment. As well as letting you view timelines, and a 'wonder wheel' of connections, the options it lets you test include adding thumbnails to each search result (something that Ask.com has been doing for a while) and also allowing you to see more than 2 lines of text per result.

This last point is the one that seems rather interesting to me. I've heard many a search engine representative talk about getting as many results as possible above the fold (the point where you'd have to scroll to keep reading), and therefore finding the best trade-off between context and space. Tim Paek et al., at Microsoft Research, studied the idea of flexible snippet lengths back at CHI2004, so it's been a long time coming. I proposed at a SIGIR workshop in 2007 that we just let people choose the size of each snippet in their preferences, and see how often people change it, and to what. Maybe now we'll see.
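To make the preference idea concrete, here's a toy sketch in Python: the snippet length becomes a user setting, and the renderer simply trims to it. The function and parameter names are my own invention for illustration, not any engine's actual API.

```python
import textwrap

def render_snippet(text, preferred_lines, width=80):
    """Trim a result snippet to the user's preferred number of lines.
    A sketch of the 'snippet length as a preference' idea: wrap the
    summary text to the display width, then keep only the first
    preferred_lines lines."""
    lines = textwrap.wrap(text, width)
    return "\n".join(lines[:preferred_lines])

# A user who wants more context above the fold just raises the setting:
summary = "word " * 60   # stand-in for a long document summary
print(render_snippet(summary, preferred_lines=2))
print(render_snippet(summary, preferred_lines=4))
```

Logging how often users change `preferred_lines`, and to what value, would give exactly the data I was asking for in 2007.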

Interestingly, in IEEE Computer in March 2009, Daniel Russell, of Google, wrote an article saying that, for some research, only big corporations with thousands of processors and millions of users can really test small UI changes, among many other things. Well I'm glad that Google is testing this - and I hope we see some results from it too.

Friday 6 March 2009

Has google.com been giving term suggestions?

How long has google.com been providing this term suggestion on its interface, along with the number of results it's going to provide? They don't do it on google.co.uk. Fun fun.

Thursday 5 March 2009

What separates query refinement, clustering, and faceted search?

I've been thinking recently about what separates out the different interactive information retrieval techniques, a term I am using loosely for now. There's interactive query refinement or expansion (IQE), which is often used to suggest potential changes to a query to explore sub-groups of the results. There's clustering, which analyses the results for clusters in order to help users explore sub-groups of the results. And there's faceted search, which provides many different types of categorisation over the results, again to help users explore sub-groups of the results.

Each of these can be used to explore groups in the results, and they mainly differ by the back-end system that is used to label the sub-groups. They each also come with a typical interaction model. IQE usually sends a new query to the server and returns a new set of results. Clustering interfaces, like Clusty.com, typically allow users to choose one cluster at a time to view. Faceted browsers, like Flamenco or mSpace, typically allow users to apply and unapply a series of filters.
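To illustrate the contrast in interaction models, here's a minimal Python sketch over an invented toy result set: re-querying replaces the result list wholesale, while faceted browsing applies and unapplies filters over the same set. The data and class names are mine, not from any of the systems mentioned.

```python
# Toy result set: each result carries facet values (all data invented).
RESULTS = [
    {"title": "Jaguar repair manual", "topic": "cars", "format": "book"},
    {"title": "Jaguar habitat study", "topic": "wildlife", "format": "article"},
    {"title": "Jaguar XK brochure", "topic": "cars", "format": "brochure"},
]

def new_query(query):
    """IQE-style interaction: each refinement is a fresh query that
    returns a brand-new result list (here, a naive title match)."""
    return [r for r in RESULTS if query.lower() in r["title"].lower()]

class FacetedBrowser:
    """Faceted-style interaction: filters are applied and unapplied
    over the *same* result set, in the manner of Flamenco or mSpace."""
    def __init__(self, results):
        self.results = results
        self.filters = {}          # facet -> required value

    def apply(self, facet, value):
        self.filters[facet] = value
        return self.current()

    def unapply(self, facet):
        self.filters.pop(facet, None)
        return self.current()

    def current(self):
        return [r for r in self.results
                if all(r.get(f) == v for f, v in self.filters.items())]

browser = FacetedBrowser(RESULTS)
print(len(browser.apply("topic", "cars")))    # narrows to 2 results
print(len(browser.unapply("topic")))          # back to all 3
```

The point of the sketch is that the back-end labelling (here, hand-written facet values) is independent of the interaction model wrapped around it, which is exactly the separation I'd like to see studied.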

My question is how much of the effect is down to the back-end method, and how much is down to the interaction model. Marti Hearst wrote a great article in the CACM that highlighted the advantages of faceted exploration over clustering, but the majority of her points concern the quality of the data produced, such as the completeness of the categories.

It would be interesting to compare the specific effect of interaction style, such as allowing users to apply and unapply a series of interactive query refinements, rather than sending off new queries as a new starting point. The nearest research I can think of is the work by Hoeber, which allows users to turn query refinement filters on and off over the list of results. The aim of such a study would be to analyse the benefit of implementing increasingly complicated back-ends, versus simply improving the interactivity of the search interface and the range of search tactics it supports.

Friday 27 February 2009

Concert for the deaf?

One of the most amazing people I have ever had the pleasure of working with is putting on a multi-sensory concert for the deaf. Her work on modelling the human cochlea is being tested as part of an audio-responsive chair in a live concert designed for the hearing impaired.

I'm sure it will be an amazing experience for the gig-goers, the bands, and the researchers alike, seeing their creative work in action!


Thursday 26 February 2009

Is Web-based Exploratory search on the increase?

I read an interesting paper by Vakkari's team on the different queries submitted to libraries via an online form between 1999 and 2006. The trends are quite interesting, and one of the conclusions is that topic-related searches have reduced in libraries because they are, instead, being performed more on the web. This raises two questions about topic-searching on the web:

1) Many HCIR-style papers assume that this is hard to do on the web, but this research suggests it's happening more anyway. This is perhaps because it's more convenient to access the web now than it is to drive across town. The service they analysed, however, was an online library query service (in Finland).

2) This is surely motivation for providing better exploratory search interfaces on the web, to help people explore and learn topics. Why has it only dropped from 57% to 47%? Why not further?

They also conclude that people still turn to librarians for difficult searching problems. This really is motivation for providing better exploratory search interfaces, so that a) the number of topical searches to libraries goes down even more, and b) the number of difficult questions goes down instead of up!

Monday 16 February 2009

search interaction is short

*warning - read the comments below before you read the article discussed here*

I came across an interesting article which is, to some extent, both a challenge for interactive information retrieval and a blow to the idea that search should be like a conversation (rather than guessing a searcher's intentions). One of its notable findings is that the average search session is 2.9 interactions long. It's nice to see session length considered not in terms of time (a common metric, but not always applicable during information seeking), but in the number of interactions. This is something in the vein of my own research.

This finding really only allows for 1) an initial search, 2) an interactive refinement and/or a scroll, and 3) a selection. Since 2.9 is less than 3, it also implies that one of these is optional, and it's unlikely to be the searching or the selecting. I want to go over the paper in more detail, but it's certainly interesting.

ambiguous query terms

Since my last entry, on what to do with more generic query terms, I have come across a few sources on the subject. First, I happened to review a paper on the topic, which I of course can't say more about. Second, I have happened upon an interesting journal article on identifying ambiguous terms. It's by no means the only research to attempt this, but their recent work found that only around 16% of online queries are what they define as ambiguous.

Finally, an interesting blogger has mentioned an alternative search engine called DuckDuckGo, which, I'm pleased to say, does almost exactly what I discussed in my previous entry. As you can see with the standard ambiguous example of 'apple', it breaks the results down into groups that cover a range of the term's different domain relations, which can be used for interactive query expansion. Give it a try; they have a nice list of their defining features. I'm now using it as my default search engine too.

Tuesday 10 February 2009

generality of query terms

An excellent blogger named Daniel Tunkelang recently brought up a discussion, within a discussion, on determining the exploratory nature of a query, and questions whether it is worth doing. This is a very interesting focus. Ryen White has published a number of papers on determining exploratory-style queries, and on the effect of expertise on search style, which are very interesting and certainly related to this challenge.

The excerpt that Daniel refers to, from another blog, is on how search engines should react to the terms 'vietnam travel' in comparison to 'vietnam population'. For the former, Yahoo, Google, and Live all bring up different top results, but all based around travel guides. For the latter, Google and Live try to answer the question directly, and all three link to the Wikipedia page on Vietnamese demographics.

Term generality is an interesting case in this example. The term 'travel' has a broader network in WordNet than 'population'. Naturally, the more generic a term like 'travel' is, the more broadly it will be used on the web, and so generic terms bring out less specific web results, or greater variation in the highest-ranked pages. Live Search provides 'related queries' for both: the travel query gets many query expansions, while the population query gets a series of sibling queries, like 'korean population'.

Determining the generality of a term, or its breadth of use on the web, is instead a good opportunity to directly and intentionally provide diversity in search results. That is, instead of letting the breadth of use on the web naturally lead to variance in the results, to specifically expose that variance and aim to cover it.
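As a toy illustration of intentionally covering the variance, here's a Python sketch that interleaves ranked results from each sense group round-robin, so the top of the list spans every group. The sense labels and result titles are invented for the example; this is one simple diversification strategy, not any engine's actual method.

```python
from itertools import chain, zip_longest

def diversify(grouped_results):
    """Interleave the ranked results of each sense group round-robin,
    so the head of the combined list intentionally covers all groups.
    grouped_results maps a sense label to its own ranked result list."""
    rounds = zip_longest(*grouped_results.values())   # one result per group per round
    return [r for r in chain.from_iterable(rounds) if r is not None]

# The standard ambiguous example: two senses of 'apple' (data invented).
senses = {
    "apple (fruit)":   ["Apple nutrition", "Apple recipes"],
    "Apple (company)": ["Apple iPhone", "Apple stock"],
}
print(diversify(senses))
# ['Apple nutrition', 'Apple iPhone', 'Apple recipes', 'Apple stock']
```

Each round of the interleaving is effectively one key result from each recommended query expansion, which is the interaction difference I'm getting at below.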

From an exploratory search perspective, it's interesting because it would differ from interactive query expansion: the search engine would be providing key results from each of the recommended query expansions, and so the interaction would be different. It may instead convert to exploratory behaviour, rather than directed re-querying.

Tuesday 13 January 2009

Tracking people with phones: an example of good

There was both a wonderful result and a brilliant information-integration story in the news today. A girl was found by combining approximate GPS signal locations with Google Maps Street View. First, brilliant that she was found safely. Second, this is exactly the sort of monitoring that people are scared of being unduly subjected to; here, though, is a case where you really do want the right people to be able to find you through nothing more than the signal of your mobile phone. I like the additional detail that Street View was important in seeing which buildings, such as a hotel, were in the area. Although they knew roughly where she was, Street View helped them stake out possible locations that would fit with the abductor's travelling.

Anyway. Good all round, I think.