Saturday, April 30, 2011

Of Dogs, Intelligence, and the Internet

[This needs a bit of smoothing and fleshing out in places . . .]


Yesterday, my college's Philosophy Club made a short road trip to Newman University to meet up with students and faculty in their fledgling Philosophy program (they're only now getting a major off the ground there--something of a surprise to me, given the deep historical connections between Catholic thinkers and Western philosophy).

The topic was artificial intelligence and its myriad implications for human beings. IBM's Watson was our starting point: we noted that Watson is a big storehouse of information plus some algorithms for sorting that information, but it shows no capacity for learning from its own mistakes or from those of the Jeopardy! contestants it played against. Chris Fox, of Newman's faculty, mentioned that the capacities for reflection and self-awareness have to figure into questions of intelligence. "Computers don't laugh at themselves," he said. His example was dogs: they seem to have a kind of intelligence in that they immediately recognize other dogs as dogs, no matter their size or appearance, but (so far as we know) they don't reflect on their own (or other dogs') dog-ness.

Along these lines, Jeff Jarvis of Buzz Machine (via Andrew Sullivan this morning) has a recent post called "In a dog's net." Apropos of a "CBC Ideas series about how (we think) dogs think," Jarvis takes the idea that dogs "think in maps informed with their smell" and that they thus "have a different sense of 'now'" and muses,

It strikes me that the net — particularly the mobile net — is building a dog’s map of the world. Through Foursquare, Facebook, Google, Twitter, Maps, Layar, Goggles, and on and on, we can look at a place and see who and what was here before, what happened here, what people think of this place. Every place will tell a story it could not before, without a nose to find the data about it and a data base to store it and a mind to process it.

On the same show, canine Boswell Jon Katz argues that dogs respond to changes in their map: “hmmm, those sheep aren’t usually there and don’t usually do that and so I’d better check it out to (a) fix it or (b) update my map.” Dogs deal in anomalies. So do data-based views of the world: we know what happened in the past and so we know what to expect in the future until we don’t. Exceptions and changes prove rules.


As long-time readers of this blog know, I frequently try to talk about how the Internet, and technology more broadly, mediates between us and knowledge of the world, simplifying and distorting our experience of that world if we use it uncritically. Jarvis' comments, it seems to me, bear this out. The phrase "a mind to process it" is the small but crucial one here: maps are, or should be, something like a modelling of the mind that produces them--and, indeed, they direct our thinking along the lines of that modelled mind. I've not yet seen the program that serves as Jarvis' jumping-off point, but the impression I get from his remarks is that dogs make no judgment about the scents they collect and sort--a scent is a scent is a scent. There's the association of a specific scent with the specific location where the dog encounters it, but (apparently) no linkage among, say, the various locations where the dog encounters that same scent. There's no sense of spatial relations between or among these places. (GPS devices, it occurs to me, are more sophisticated than this only in their machinery: dogs, because they have better noses, don't require satellites and disembodied voices to guide them through space.)
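(A toy illustration of the distinction I'm gesturing at, in a few lines of Python, with made-up place names and nothing drawn from the CBC program itself: the first structure below merely associates scents with the places where they were found, which is roughly what I imagine a dog's map to be; only the second also records how those places relate to one another in space.)

# A dog's map, as I'm imagining it: bare scent-to-place associations.
scent_log = {
    "fire hydrant": ["rabbit", "other dog"],
    "back fence": ["rabbit"],
}

# A human map: the same places, plus the spatial relations between them.
distances_in_metres = {
    ("fire hydrant", "back fence"): 40,
}

# The first structure can answer "have I smelled rabbit at the back fence before?" ...
print("rabbit" in scent_log["back fence"])                   # True

# ... but only the second can answer "how far apart are the two places
# where I've smelled rabbit?"
print(distances_in_metres[("fire hydrant", "back fence")])   # 40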

Humans long ago created a technology that models the world as the (human) mind apprehends it: maps. Maps show their reader at a glance where things are located in space; moreover, by choosing what to show and not show, their makers have made judgments about what is significant for their reader to know and not know. We are free to argue the value of those judgments, of course, but the implicit message of maps is clear: some things are more worth knowing than others. Indeed, once upon a time, maps quite literally oriented their readers in accordance not with a physical direction but with a worldview. Surely that is a more-or-less basic description of the human mind, too: we experience, we forget, we remember, we interpret and re-interpret, we decide what matters.

The internet-as-map doesn't do any of those things. By implicitly presenting all data as equivalent in value, it can lead to reductive, uncritical thinking in its users. It shapes our thinking in accordance with its modelling of the world and not the other way around, and most of us are at best dimly aware that this is so.

Put another way: some of my students think their smartphones are, in fact, smart.

A dog's way of mapping the world is perfectly fine--if you're a dog! If the 'Net maps the human world in the way that dogs map theirs, I'm not so sure that's entirely a good thing for people.

UPDATE: Somewhat apropos of the above is this post about a couple of programmers teaching a computer how to recognize double-entendres (specifically, otherwise innocent sentences and phrases that one can follow, "heh heh"-like, with "That's what she said"). Unless I'm missing something, this seems to raise the same question as John Searle's Chinese Room thought-experiment: does the computer really understand that it's making these jokes? Is it "in" on them as it makes them?
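(I don't know the details of how those programmers actually built their joke-spotter, so the few lines of Python below are purely my own crude, hypothetical sketch, with a hand-picked word list I invented for the occasion. The point is that even something this dumb would get a few of these jokes "right" without understanding anything at all, which is exactly what gives Searle's question its bite.)

def thats_what_she_said(sentence):
    # A crude, hypothetical detector: flag a sentence if it contains any
    # word from a short, hand-made list of "suggestive" words. This is my
    # own toy sketch, not the actual programmers' method.
    suggestive = {"hard", "long", "wet", "big"}
    words = sentence.lower().rstrip(".!?").split()
    return any(word in suggestive for word in words)

print(thats_what_she_said("Wow, this problem is really hard."))  # True
print(thats_what_she_said("Class is cancelled on Friday."))      # False

The machine flags the pattern; whether it is "in" on the joke is another matter entirely.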

2 comments:

Pam said...

I need to get out more (in the blogosphere that is)... and especially here! I've been experiencing a prolonged reading-blogs-burnout phase, and I keep thinking I'll snap out of it. (I think it's just stress related).

John B. said...

Well, Pam, thanks for thinking this place worthy of your visits. I'll do what I can to reduce--or at least not add to--your stress.

And thanks, again, for those beautiful flower pictures at your place.