My wife and I have a particularly naughty cat at home. Like many urban dwellers we find it necessary to place those little Raid roach discs around to keep unwanted visitors from parading through our kitchen area. This cat is obsessed with the discs – if they are not placed somewhere very inaccessible she will invariably locate one and bat and bite it to her heart’s content.
The first time we found a disc in the middle of the floor full of bite marks we freaked out, convinced that she had poisoned herself. While my wife called the vet I did what any right-thinking idiot would do and looked up the roach trap ingredients online to see what pet interaction warnings there were. To my relief, while not recommended, the discs would not kill her outright or even make her sick if she wasn’t regularly chowing them down whole.
In retrospect though, perhaps the interwebs wasn’t such a great place to turn to for feline health advice. After all, when it comes to human health the Internet is anything but infallible.
For instance, a new study shows that Wikipedia’s entries on specific prescription drugs are more often than not incomplete. This comes as no surprise – after all, drug companies are heavily regulated in terms of how, when and what they can tell consumers and patients about a drug. It’s unlikely that the entries on Wikipedia were generated by the people who have the best information on them.
Interestingly the same study showed that a free peer-reviewed site, the Medscape Drug Reference, is far more complete, though it receives far fewer visitors. It also ranked below Wikipedia in my tests of search engine results for drug names and the term “side effects” in Google.
The flipside of all this is that while Wikipedia had many more errors of omission, and lacked information on many drugs that MDR was able to bring up, MDR actually had more outright factual errors than Wikipedia, which had none.
Meanwhile Microsoft has completed an exhaustive study on the causes of Cyberchondria, in which users who stub their toe are led through their Internet search results into fearing that their entire leg may need to be amputated immediately.
What happens online is that people can confuse a search engine’s notion of relevancy for actual relevancy. As The New York Times notes:
“The Microsoft researchers noted that reliance on the rankings of Web search results contributes a … bias to the judgments people make about illness.”
Here the cart is leading the horse, with the engine influencing people’s conclusions when it ought to be the other way around.
The interesting nugget in the Times article is this:
“The researchers said they had undertaken the study as part of an effort to add features to Microsoft’s search service that could make it more of an adviser and less of a blind information retrieval tool.”
Microsoft recognizes that search results can sometimes push information rather than pull results that are needed. Relevancy does not equal accuracy. Google is aware of this too and I suspect it’s one of the main drivers behind their new SearchWiki tool.
What SearchWiki does, if on a limited basis for now, is make people responsible for their own search results. In a sense it’s a demystification of search for the layperson who doesn’t think about algorithms and query structure the way search marketers do. Here are results that may be relevant, but not necessarily the most relevant to you, right now.
There are two thorny problems with this. First, this simply isn’t the experience most people want from search engines. They want to put in their search terms and get results without much thought involved.
Secondly, as we saw with the MDR and Wikipedia, users themselves don’t always have a way of distinguishing what’s relevant and even professionals will allow incorrect data to slip through from time to time.
The user is like the Duke Brothers from Trading Places relying on Clarence Beeks to bring them the frozen orange juice crop report accurately and ahead of time. Only in the darkened parking lot of their search reply page, they can’t tell a real Clarence Beeks from Eddie Murphy in a trenchcoat, and a fake crop report from the real deal.
What the Dukes needed was the flashlight of authority.
Ultimately Google and other search engines do take a site’s “authority” into account, but there are a lot of variables at play there, not least of which are links and traffic. The problem is that the free-for-all of social media interactions means that some bad crop reports rise to the top of results from high-traffic sites like Wikipedia or Facebook – sites that may not, and should not, be expected to have the accuracy of some sites with less traffic or worse SEO.