The first Sherlock Holmes story I read (excluding the condensed for-kids versions) was The Hound of the Baskervilles. Revisiting my childhood reads, I expected to examine more closely the cause of my early awe and fascination with the detective mind, but was instead surprised to find it missing.
Some pages later I (think I) located the cause. “We are coming now rather into the region of guesswork,” said Dr Mortimer. “Say, rather,” Holmes replied, “into the region where we balance probabilities and choose the most likely. It is the scientific use of the imagination, but we have always some material basis on which to start our speculation. Now, you would call it a guess, no doubt, but I am almost certain that this address has been written in a hotel.”
(The classic) Sherlock is an algorithmic/logical mind, primed to generate hypotheses from correlated observations, and famously bored by a lack of interesting cases. Probabilistically he was of course often right, but what struck me more was that he was never wrong. Therein lies the power of fiction and the privilege of the author: to pick one’s battles and dictate their outcomes. But I do not intend here to take Conan Doyle down a notch, nor to mar the famous detective’s record—there are plenty of other accounts to read for that. I bring this up because we have a real-world parallel, a chimeric Sherlock, who makes his own hypotheses, often invisibly.
If you use Google Now and it is working well for you, the feeling should be familiar. Legible lives are easy to forecast and simple to monetise. But there are also plenty of people to whom these attempts at “reading one’s fortunes” feel exactly like actual carnival fortune-telling encounters: often a sense of “I can see how you got to that conclusion, but nice try”, and occasionally “I wish I could see into your brain to find out how you made that crazy logic leap”.
If you’re one of the latter, Google’s data collection policies are unlikely to faze you: all your data is meaningless to someone who does not understand how you live. If you are like me, it is more likely you will be wondering how much more data Google needs before it can start giving you useful search results.
Bruce Sterling, speaking at the 2012 Turing Centenary Symposium, refers to Turing’s original idea of AI as artificial feminism.
“Siri can talk. Siri is a system that pretends to be a woman. Siri can answer questions, but Siri is also answering data-mining questions that no individual woman can ask. Siri’s also answering thousands of questions at once.” The Wikipedia equivalent of Siri, let’s call her “Vicky Pedia” [sic], “is not a thinking crowbar. She should be approached with the same tenderness, respect and consideration that we devote to other dynamic instantiations of genius, such as the City of Paris, or the English common law, or the interstate highway system. We don’t put lipstick on such things, but we don’t dismiss them as mere machines, either.”
How, then, should we approach “Gayle” (I can’t think of a female name for Google)? What kind of relationship should we aim to forge with an algorithmic forecasting engine? (Because we will forge one, subconsciously or otherwise.) Treat her with extreme mistrust? Like a carnival fortune-teller? A learning child AI?
I treat Gayle as a struggling personal assistant, one who tries to make helpful suggestions but usually fails at it, all the while hoping that one day she might become more competent and helpful. But what happens then?