Wednesday, 16 September 2009

In Collaborative Search, is Perception Everything?

After reading a fascinating series of tweets this morning between Jeremy Pickens, Sharoda Paul, Brynn Evans, and Gene Golovchinsky on what constitutes success in collaborative information seeking (I woke up some hours after the discussion), it struck me how important the difference between actual and perceived information need is in collaborative searching activities.

In scenarios where a group of friends is working together to organise a holiday, for example, every member of the group is working on a perceived collective need. If several people are helping one person solve their problem, then the central person is (hopefully) working on an actual information need, but all their helpers are working on a perceived version of that person's need.

Sharoda Paul has been studying collaborative searching behaviour in medical environments. I haven't asked her directly about it, but in that setting everyone is ultimately working to solve a patient's need. All the medical staff are working on perceived information needs, with many, I suspect, working on perceived versions of other people's perceived information needs. A nurse might be working towards what she thinks the doctor needs in order to solve the problem they think the patient has.

So what do we know about the difference between actual and perceived information needs? I picked the distinction up in Jarvelin and Ingwersen's 2004 paper that preceded their big book 'The Turn'. 'The Turn' discusses it further, but concludes that it is relatively underexplored. It appears to be a commonly used term in medical papers about how patients view their illnesses. Related topics, however, have been popular, such as sensemaking and the elements of communication in collaborative search. Sharoda presented some fascinating work at CHI2009, after her time at MSR, on making sense of previous collaborative searches. Nikhil Sharma has also presented some fascinating work on sensemaking during handovers, between shifts in hospitals for example.

I'm sure the topic has been broached in papers, and is being addressed in part by these related topics, but collaborative information seeking seems like a great opportunity to study perceived and actual information needs, and to feed insights back into collaborative search efforts. I'm looking forward to more collaborative search and sensemaking workshops to come! Any at CHI2010?

Tuesday, 7 April 2009

Google does, apparently, test everything

I was recently interested by a debate about why Google sticks its facets, and now its query refinements and so on, at the bottom of the search results. The basic assumption proposed was that you only need to refine your results if you didn't find what you wanted in the first 10 results, which you probably did anyway, right?

I thoroughly enjoyed a good chat with Daniel Russell today about this decision. I can reveal that it is a very, very well-tested decision, not just a random design choice, as I perhaps naively assumed. Apparently, they even tested 5px variations of its position on the x and y axes, as well as placing it above and below the first result of 10, and many more options combined. Their high-volume studies concluded that right there, not 5px to the side, was best.

Apparently it doesn't stop there. Even the height, and the shade of blue, of the horizontal bar above the results has a dramatic effect. That colour blue has been carefully chosen.
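As a minimal sketch of how such a high-volume comparison might be judged: below is a two-proportion z-test on click-through rates for two hypothetical placement variants. The bucket sizes and click counts are entirely made up for illustration; I have no idea what methodology Google actually uses internally.

```python
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is variant B's click-through rate
    significantly different from variant A's?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis of no difference.
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Hypothetical counts: refinements in the tested spot (A)
# versus the same widget nudged 5px to the side (B).
z = two_proportion_z(9_800, 1_000_000, 9_500, 1_000_000)
# |z| > 1.96 would mean the difference is significant at the 5% level.
```

With a million impressions per bucket, even a 0.03-percentage-point drop in click-through rate shows up as significant, which is exactly why this kind of testing is only available to sites with enormous traffic.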

In some respects, I feel like my research ideas and focus have just been completely shattered into tiny shards. But I guess I am now all the better for knowing (or believing I know) how purposeful Google's design is. It's just as Daniel Russell said in the recent IEEE special issue: there are some things, including tiny UI changes, that you can only study at their scale.

Not that they only test small changes, it seems.

Saturday, 4 April 2009

What is the smallest sensemaking problem?

Daniel Russell opens the CHI2009 Sensemaking Workshop with a question:

What is the smallest sensemaking problem? What is the very minimum that counts as sensemaking?

A lot of the challenges in this area are group planning problems, handoffs in hospitals, writing essays. These range from big to massive. What's a small sensemaking problem?

Thursday, 2 April 2009

CHI2009 Planning tool

Yesterday I released a planning tool for attending CHI2009. It's had 50 people actually use it in the first 24 hours, and I suspect many more visitors (I should be counting, really).

I've had plenty of feedback already, some improvements to make, but much praise too:

"Excellent Max! - Thanks a lot!"

"This is super! ... just having such a planner is a relief! I commend you for such a straight-forward solution."

"This looks great, can't wait to get home and put it on my PowerBook and iPod! Thanks so much for doing this!"

and many, many more. I'm quite surprised by the response I've had. It is, of course, quite generic and easy to apply to another conference, so do contact me if you'd like that. Otherwise, enjoy, and I'll see some of you at CHI!

Thursday, 26 March 2009

Google tests more text with each snippet

Daniel Tunkelang has brought my attention to another blog entry about some of the tests that Google is carrying out at the moment. As well as letting you view timelines and a 'wonder wheel' of connections, the options it lets you test include adding thumbnails to each search result (something that others have been doing for a while) and allowing you to see more than two lines of text per result.

This last point is the one that seems most interesting to me. I've heard many a search engine representative talk about getting as many results as possible above the fold (the point where you'd have to scroll to keep reading), and thereby finding the best trade-off between context and space. Tim Paek et al., at Microsoft Research, studied the idea of flexible snippet lengths back at CHI2004, so it's been a long time coming. I proposed at a SIGIR workshop in 2007 that we just let people choose the size of each snippet in the preferences, and see how often people change it, and to what. Maybe now we'll see.
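To illustrate the context-versus-space trade-off, here is a toy calculation of how many results fit above the fold as snippet length grows. Every pixel value is a made-up round number for illustration, not a measurement of any real results page.

```python
def results_above_fold(fold_px=600, header_px=120, title_px=20,
                       snippet_line_px=17, url_px=17, gap_px=10,
                       snippet_lines=2):
    """How many whole results fit above the fold for a given
    snippet length? All pixel values are illustrative guesses."""
    result_px = title_px + snippet_lines * snippet_line_px + url_px + gap_px
    return (fold_px - header_px) // result_px

two_line = results_above_fold(snippet_lines=2)   # shorter snippets, more results
four_line = results_above_fold(snippet_lines=4)  # longer snippets push results down
```

Under these assumed numbers, doubling the snippet from two lines to four drops you from five results above the fold to four: more context per result, less of the ranking visible without scrolling.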

Interestingly, in IEEE Computer in March 2009, Daniel Russell, of Google, wrote an article saying that, for some research, only big corporations with thousands of processors and millions of users can really test small UI changes, among many other things. Well I'm glad that Google is testing this - and I hope we see some results from it too.

Friday, 6 March 2009

been giving term suggestions?

How long has it been providing this term suggestion on its interface, along with the number of results it's going to provide? They don't do it on Fun, fun.