Showing posts with label technology. Show all posts

Friday, July 27, 2012

Ghosts in Our Own Machines: A Review of Naqoyqatsi

It's been pretty quiet 'round these parts as the summer begins to wind down and August and the new semester approach. But I recently posted the following review on Amazon, and I thought I'd post it here (with some minor editing) as well.

Breughel's The Tower of Babel, which also serves as the opening image from Naqoyqatsi. Image found here.

This film, the third and last of the Qatsi trilogy, is every bit as visually and sonically spectacular as its predecessors. But, though it clearly belongs with them, it is finally a less hopeful film than the first two. I suspect, though, that that's part of director Godfrey Reggio's point. Any film that opens with an image of the Tower of Babel is probably not going to be very hope-filled.

In Koyaanisqatsi ("Life Out of Balance") and Powaqqatsi ("Life in Transformation"), the two realms being compared and contrasted (respectively, natural and urban spaces, and indigenous and Western ways of living) were given fairly equivalent amounts of screen time, suggesting (to me, at least) the possibility of an equilibrium being achieved between the two--if not within the space of the film, then among viewers as they ponder how best to live. Perhaps that is why I prefer the first two films. Naqoyqatsi, released 14 years after Powaqqatsi, seems to suggest that that possibility of equilibrium has been lost: Technology, as signified in the film by its recurring sequences of strings of binary numbers, not to mention the digital generation and/or alteration of the vast majority of what we see on the screen, has ceased being only a tool by and through which we interact with nature. It has become, in significant ways, our surrogate for nature, blurring our traditional notions of what is "natural" and what is "artificial." The short sequence in which we see the head of Dolly the sheep (image found here)
encapsulates this idea for me: as she moves her head from side to side, the image blurs, doubling and tripling, posing in a visual way the philosophical questions raised by our ability to clone animals.

Technology, this film seems to argue, has produced the worst sort of dystopia: one that we don't entirely realize we live in, because we can no longer be sure whether what we see is the world as it is or whether it's been tweaked to our liking or convenience.

As if in counterpoint to all this, though, Philip Glass's score floats over all of what we see; it's scored for a small orchestra and isn't as heavy (or heavy-handed) as was his music for the first two films. Lovers of cello will want to hear in particular Yo-Yo Ma's elegant performances.

I will be showing this film to my students this fall. I'm hoping one of them will note the film's now-outdated computer graphics and say that we have better graphics these days. "Better? In what way?" "Ours are more realistic." "Yes--and? Is a near-invisible line between the real and the computer-generated necessarily a good thing?"

Read More...

Saturday, September 10, 2011

"The vocabulary of machines": a coda on politics

True, I hadn't considered this as a possibility, either . . . (Image found here)

In my earlier posts on Arendt's The Human Condition and the apparent goal of interconnectivity for its own sake, the topic of politics--which is to say, how best to govern--never came up (except for the fact that Arendt is clearly interested in those questions). We've all run across commentary on how the 'Nets are affecting our politics for good or for ill, but my experience with those kinds of pieces has been that they are often more interested in the short term: that is, in electoral politics (though over the past few months there have been some good posts (by Republicans as well as Democrats) on our nation's shifting demographics toward a minority-majority population and the need for Republicans to respond to those realities with something other than fear and/or loathing).

But of course, that fact itself raises a question that my colleagues and I have asked with regard to academic work: whether these various media and our new ways of accessing and engaging with information interfere with (or perhaps are by their nature antithetical to) long-term, extended, "deep" thinking--the kinds of thinking we say we value in college and seek to encourage in our students. (In my own classes this semester, I've come up with some research projects that, I hope, will get students to use all this technology against its (and students') tendency to want to Cuisinart information; I'll let you know how it goes.) If, in the realm of the political, our elected officials cannot (or, I suspect sometimes, don't care much to) see much beyond the next election, then good public policy (whatever form that may take) and, well, the Good of the Nation will inevitably suffer. In the early days of his administration, as he was giving a pep talk to the Democratic Caucus during the stimulus package debates, President Obama told them, in an oh-by-the-way manner, as if it were (still) common knowledge, "Good policy is good politics." Sorry I don't have a link--I saw it on some YouTube somewhere (yeah, yeah, Mr. Tech-critic, I hear you say: Pot, meet Kettle)--but that line has always stuck with me as being not only the encapsulation of what I take to be Obama's approach to governance but also about as incontrovertible a statement about governance in a democratic society as there is, no matter one's politics--assuming, of course, one accepts the premise that government in its essence is necessary and, it follows, might as well do some actual good (whether through its actions or through choosing not to take certain actions) since we have to have it. Would that more of the politically-engaged among us took that to heart, no matter their allegiances. Some stuff might actually get through the Senate, more people would be happier with a half-a-loaf health care bill, etc., etc., etc.

Anyway. I'm posting all this because of my friend Russell Arben Fox's most recent post at his most excellent blog, In Medias Res. Russell teaches political science at Friends University here in town, and so he has much to say on those sorts of issues, but his interests also range toward popular culture and various issues regarding the Church of Jesus Christ of Latter-Day Saints (of which Russell is a member). His starting point in the present post is a new videogame of the kill-all-the-zombies sort, but the zombies in question are Tea-Party zombies: various not-undead conservative politicians and their benefactors. Russell was called up by one of the local TV stations to comment on this because, I assume, this being Wichita, a reporter was pursuing the Local Angle: one of the zombies is a two-headed representation of the Koch brothers. Russell's post addresses that immediate matter, of course, but also engages in some reflection, some of it directed at himself, on the more general question of violence-inflected rhetoric in our politics (which people all across the political spectrum engage in, as we all know) and some worrying over its longer-term consequences for governance. The complicating factor in all this, Russell notes below, is that whereas in the past (that is, before about 20 years ago) this kind of nastiness tended to stay out of the public eye, relatively speaking, our new interconnectedness makes that, sooner or later, impossible. The links and italics, by the way, are Russell's; I've added the cartoon (image found here).

The best thing I said to the reporter--which, of course, also didn't make it into the piece--was a few thoughts drawing on things which Cass Sunstein and Jay Rosen have both argued at length: that the internet has mostly resulted in our ways of sharing and receiving information being broken apart, atomized, sealed off into separate bubbles. We live, too many of us anyway, in various blog-anchored echo chambers, chatting endlessly on Facebook with our selectively chosen friends. Of course everyone has always created in-groups and out-groups; that's nothing new. But the internet has really ramped it up...and if you combine that with all the stresses and breakdowns our democracy is currently experiencing (an almost wholly dysfunction[al] Senate, major parties that no longer share much of any kind of incentive to actually govern responsibly, etc.), then it's not hard to suspect that there has been an increas[e] in violent rhetoric in American politics, because it's just so easy for everyone in all their little bubbles to continually egg each other on, say the same jokes ever more loudly and ever more fervently, develop a shorthand of humor and rhetoric that is perhaps completely innocent but nonetheless, in retrospect, perhaps is also thorough[ly] dehumanizing, angry, and contemptible.

[snip]

ask yourself this question--is anything on the internet capable of remaining private for long? Also a negative answer, except in that case I don't think there's any "perhaps" to it.

Some liberals, at least, are trying to get ahead of the usual cycle of blame (which, I confess again, I've been part of before!), and calling for boycotts of the game. Good for them, though I wonder what difference it will make. The genie--a genie of anger and contempt, fed by a technology that simultaneously encourages people to act out within their little boundaries as well as makes certain no boundaries truly last--is out of the bottle, and [I] fear that, absent a profound political change which leads people to accept that our democracy can function, that government can listen, and that the rules and procedures and methods of elections and parties can be taken seriously, there's nothing that will get it back in. I'm as much at fault as anyone, I suppose. But I can at least refuse to play the game. We could all do that much, at least.

Amen. As always, you should go and read the whole thing.

Something that struck me as I read this--and I'll just say right now that I'm not smart enough to pursue it further--is this: since, in an electronic sense at least, there is no longer any truly private communication--all is public, potentially, sooner or later--I wonder whether this fact will ultimately be a good or a bad thing for our politics. Will it a) cause all of us to be more self-policing and, thus, more civil in ALL our discourse, no matter our audiences; or b) cause more and more of us to despair as the rhetorical, um, misstatements and errant keystrokes of even our very best politicians reveal all of us to be flawed human beings (a persistent myth being that those in politics are, or should be, "better" than the rest of us)? And, further, no matter whether a) or b) comes to pass, what will be the fate of impassioned, well-intentioned political discourse meant not to demean but to insist strenuously on the rightness of some policies and the wrongness of others? Will such rhetoric become bland, uninspiring talk that reveals as little as possible (out of fear that the speaker will seem to be assaulting his/her opposition, or will lose his/her base or an election), or will it lead to more genuinely healthy debate? Well: none of us--especially in this brave new world of interconnectivity--is a passive bystander in these matters; we vote with our eyeballs, we can choose to repeat what we say we value, and we can choose to decry what we say is destructive rhetoric.

Language matters more than ever now, there being so much more of it that we have access to. The old horse-led-to-water aphorism no longer quite works: we have almost no choice but to drink. But we do have choices in what we drink, not to mention choices in what to say about it. That--that capacity enabled by this instantaneous interconnectivity--certainly is a good.

* * *

OT from this post, but of a piece with things we've been discussing here of late: I don't know much about Cathy N. Davidson's new book, Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn, except that it's getting a fair amount of attention and that, judging from the blurbs, it seems to be something like a rebuttal to Nicholas Carr's famous 2008 essay, "Is Google Making Us Stupid?" I'll be on the lookout for it.

Read More...

Monday, September 05, 2011

"The vocabulary of machines": Three kinda-related things on technology and human connectedness

Yves Jeason, The Noosphere Sculpture. Image found here.

First, a couple of things from the Technium section of Kevin Kelly's website. Kelly is the author of What Technology Wants, the title of which I think is a pretty succinct way of capturing both the promise and the peril we saw Hannah Arendt (and me) going on about a few days ago; though I'd heard him interviewed last year on NPR and immediately added his book to my Amazon Wishlist, it was only today that I bumped into his website via Andrew Sullivan's blog. I've had a couple of hours of pleasure this morning reading around his place. Kelly's great appeal to me is that he's very clear-eyed about technology. He's not just extraordinarily knowledgeable; he really has thought about it, as I hope a rather lengthy passage below will show.

The first bit I'd like to share is "Theological Chatbots," in which Kelly interviews the Cornell researchers who had the bright idea of having two chatbots converse with each other and came up with this video, now making the rounds (be sure to watch it if you haven't already):



The interview itself is relatively short; here, for me, is the interesting part:

What about all the talk about God? And why are the bots so quick to call the other a liar?

"We think this is because the database of replies in Cleverbot is compiled from the questions and responses of human users, and apparently, humans will often accuse the bots of lying, or will query the bots about their origins, so when they start talking to each other, they mimic what humans say to them."

Our bots ask theological questions because we do. So far, our bots are made in the image of their creators.

This is an odd little moment, a healthy reminder that we too are machines that process and act on information from our environment . . . except these chatbots' information is second-hand. They "know," if that's the right word, what they've been told, as opposed to what they've independently observed and wondered about. So, that little "so far" creates space for a question: if/when we have genuine AI devices that can create genuinely original thought, will these machines also have a capacity for wonder when encountering something unknown; will they have an aesthetic sensibility or a sense of transcendence? Or will AI have no capacity for pondering or experiencing the irrational? Some would say that our capacity for such things gets us into trouble, more often than not, and I would agree in part, but surely the irrational also plays some role in shaping intelligence of the human variety.

Or, alternately, I can hear a committed atheist say that perhaps religious people, just parroting what belief and cultural custom have to say about such matters without ever investigating to determine their truth, are no wiser in their questions about God than these chatbots are.
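Incidentally, the mechanism the Cornell researchers describe is simple enough to sketch. What follows is a toy illustration of my own (the canned replies are invented, and Cleverbot's real system is of course far more elaborate): two retrieval-style bots whose only possible utterances are things humans once said to them, so that when they talk to each other they can't help but echo human preoccupations.

import random

# A toy stand-in for a Cleverbot-style reply database: every line here is
# something a *human* once typed at the bot, which the bot later reuses.
HUMAN_SOURCED_REPLIES = [
    "Are you a robot?",
    "You are lying.",
    "Do you believe in God?",
    "What is God to you?",
    "I am not a robot, you are.",
    "No, you are a liar.",
]

def reply(_incoming: str) -> str:
    """Pick a reply from the human-sourced database.

    A real system would score candidates against the incoming line;
    random choice is enough to show where the words come from."""
    return random.choice(HUMAN_SOURCED_REPLIES)

# Two identical bots in conversation: every utterance either one produces
# was originally a human's question or accusation.
line = "Hello."
for turn in range(6):
    line = reply(line)
    speaker = "Bot A" if turn % 2 == 0 else "Bot B"
    print(f"{speaker}: {line}")

The theology and the accusations of lying show up in the transcript only because humans put them in the database first--which is just the researchers' point, restated in a dozen lines.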

Here is a bit from another of Kelly's Technium pieces, a meditation on our ever-increasing interconnectedness and the resulting exponentially-increasing rate of invention and innovation, "Why the Impossible Happens More Often":
I think we'll be surprised by how many things we assumed were "natural" for humans are not really, and how many impossible ideas are possible. "Everyone knows" that humans are warlike, and like war, but I would guess organized war will become less and less attractive over time as new means of social conflict and social conflict resolution arise at a global level. Not that people will cease killing each other; just that deliberate ritualistic battle over territories will be displaced by other activities -- like terrorism, extreme sports, subversion, mafias, and organized crime. The new technologies of social media will unleash whole new ways to lie, cheat, steal and kill. As they are already doing. (Nefarious hackers use social media to identify corporate network administrators, and their personal off-time hobbies, and then spoof a gift of a cool new product from their favorite company, which when opened, takes over their computer and thence the network they are in charge of.) Yes, many of the impossible things we can expect will be impossibly bad.

They will be beyond our imagining because the level at which they are enabled is hard for us to picture. In large groups the laws of statistics take over and our brains have not evolved to do statistics. The amount of data tracked is inhuman; the magnitudes of giga, peta, and exa don't really mean anything to us; it's the vocabulary of machines. Collectively we behave differently than individuals. Much more importantly, as individuals we behave differently in collectives.

This has been true a long while. What's new is the velocity at which we a[re] headed into this higher territory of global connectivity. We are swept up in a tectonic shift toward large, fast, social organizations connecting us in novel ways. There may be a million different ways to connect a billion people, and each way will reveal something new about us. Something hidden previously. Others have named this emergence the Noosphere, or MetaMan, or Hive Mind. We don't have a good name for it yet.


Along these lines, here's something from Adam Frank's piece on NPR.org, "Fear of the TwitterBook: When to Adopt or Reject New Tech", in which he compares the ubiquity of social media to the relatively quick adoption of mechanical clocks in late-medieval Europe:
It is the open-ended brilliance of Facebook and (as I am learning) Twitter in creating ever-shifting, ever-nested webs of connection that take them beyond themselves. Both sites may eventually be replaced by something newer. But by creating technological norms for a particular kind of connectivity, the electronic social networks they embody are transforming our historical moment as completely as mechanical time metering changed life in [the] 15th century.

Culture sees itself and the cosmos as a whole through the lens of its technological capabilities. That fact may explain when adoption grows beyond mere choice. Once a technology settles in to the point where it begins shaping the dominant metaphors of a society (the 17th century's "clockwork universe" for example), then there is no going back, no opting out. You and everyone you know will be assimilated.

There's much to say about this, but I've taken you away from your Labor Day long enough. Go and have some fun.

Read More...

Saturday, April 30, 2011

Of Dogs, Intelligence, and the Internet

[This needs a bit of smoothing and fleshing out in places . . .]


Yesterday, my college's Philosophy Club made a short road trip to Newman University to meet up with students and faculty in their fledgling Philosophy program (they're just getting a major off the ground there--something of a surprise to me, given the deep historical connections between Catholic thinkers and Western philosophy).

The topic was artificial intelligence and its myriad implications for human beings. IBM's Watson was our starting point: we noted that Watson is a big storehouse of information plus some algorithms for sorting that information, but it shows no capacity for learning from its own mistakes or from the mistakes of the Jeopardy! contestants it played against. Chris Fox, of Newman's faculty, mentioned that the capacities for reflection and self-awareness have to figure into questions of intelligence. "Computers don't laugh at themselves," he said. His example was dogs: they seem to have a kind of intelligence in that they seem immediately to recognize other dogs as dogs, no matter their size or appearance, but (so far as we know) they don't reflect on their own (or other dogs') dog-ness.

Along these lines, Jeff Jarvis of Buzz Machine (via Andrew Sullivan this morning) has a recent post called "In a dog's net." Apropos of a "CBC Ideas series about how (we think) dogs think," Jarvis takes the idea that dogs "think in maps informed with their smell" and that they thus "have a different sense of 'now'” and muses,

It strikes me that the net — particularly the mobile net — is building a dog’s map of the world. Through Foursquare, Facebook, Google, Twitter, Maps, Layar, Goggles, and on and on, we can look at a place and see who and what was here before, what happened here, what people think of this place. Every place will tell a story it could not before, without a nose to find the data about it and a data base to store it and a mind to process it.

On the same show, canine Boswell Jon Katz argues that dogs respond to changes in their map: “hmmm, those sheep aren’t usually there and don’t usually do that and so I’d better check it out to (a) fix it or (b) update my map.” Dogs deal in anomalies. So do data-based views of the world: we know what happened in the past and so we know what to expect in the future until we don’t. Exceptions and changes prove rules.


As long-time readers of this blog know, I frequently try to talk about how the Internet, and technology more broadly, mediate between us and knowledge of the world, simplifying and distorting our experience of it if we use them uncritically. Jarvis' comments, it seems to me, bear this out. The phrase "a mind to process it" is the small but crucial one here: maps are, or should be, something like a modelling of the mind that produces them--and, indeed, a map directs our thinking along the lines of that modelled mind. I've not yet seen the program that serves as Jarvis' jumping-off point, but the impression I get from his remarks is that dogs make no judgment about the scents they collect and sort--a scent is a scent is a scent. There's the association of a specific scent with the specific location where the dog encounters it, but (apparently) no linkage made between, say, that location and the other locations where s/he also encounters the same scent. There's no sense of spatial relations between/among these places. (GPS devices, it occurs to me, are no more sophisticated than this; it's just that dogs, because they have better noses, don't require satellites and disembodied voices to guide them through space.)
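A toy model of my own (not anything from Jarvis or the CBC program) makes the flatness of this kind of map easy to see: a lookup table of place-to-scent associations, a comparison, an update--and nothing at all about how one place relates to another.

# A toy model of the "dog's map": flat place -> expected-scents associations.
# Nothing here encodes how places relate to one another in space.
dogs_map = {
    "back fence": {"rabbit", "neighbor's cat"},
    "sheep pen": {"sheep", "mud"},
    "porch": {"mail carrier"},
}

def check_place(place: str, scents_now: set[str]) -> str:
    """Jarvis's 'dogs deal in anomalies': compare what is smelled now
    against what the map says is usually there."""
    expected = dogs_map.get(place, set())
    surprises = scents_now - expected
    if surprises:
        # (a) check it out, then (b) update the map
        dogs_map.setdefault(place, set()).update(surprises)
        return f"anomaly at {place}: {', '.join(sorted(surprises))} -- investigate"
    return f"{place}: nothing new"

print(check_place("sheep pen", {"sheep", "coyote"}))
print(check_place("sheep pen", {"sheep", "coyote"}))  # map now updated; no alarm

There's the anomaly-handling Katz describes--check it out, update the map--but no judgment about which scents matter and no sense of where the sheep pen sits relative to the back fence.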

Humans long ago created a technology that models the world as the (human) mind apprehends it: maps. Maps show their reader at a glance where things are located in space; moreover, by choosing what to show and not show, their makers have made judgments about what is significant for their reader to know and not know. We are free to argue the values of those judgments, of course, but the implicit message of maps is clear: some things are more worth knowing than others. Indeed, once upon a time, maps quite literally oriented their readers in accordance not with a physical direction, but a worldview. Surely that is a more-or-less basic description of the human mind, too: we experience, we forget, we remember, we interpret and re-interpret, we decide what matters.

The internet-as-map doesn't do any of those things. By implicitly presenting all data as equivalent in value, it can lead to reductive, uncritical thinking in its users. It shapes our thinking in accordance with its modelling of the world and not the other way around, and most of us are at best dimly aware that this is so.

Put another way: some of my students think their smartphones are, in fact, smart.

A dog's way of mapping the world is perfectly fine--if you're a dog! If the 'Nets map the human world in the way that dogs map their world, I'm not too sure that that's entirely a good thing for people.

UPDATE: Somewhat apropos of the above is this post about a couple of programmers teaching a computer how to recognize double-entendres (specifically, otherwise innocent sentences and phrases that one can follow, "heh heh"-like, with "That's what she said"). Unless I'm missing something, this seems to raise the question raised by John Searle's Chinese Room thought-experiment: does the computer really understand that it's making these jokes? Is it "in" on them as it makes them?

Read More...

Saturday, August 21, 2010

A stretch of river LIX: On the difficulty of conceiving

Thoreau's shaque d'amour? Naah--but he did do some conceiving of Walden here.

I tried talking to Scruffy about all this on our morning walk, but he was more unresponsive than usual. Sometimes, every once in a while, blogs are man's best friend.

On Monday at 9:00 a.m., I'll meet a class of around 15 unsuspecting freshmen in English Comp. I, and my 18th year as a college professor will begin in earnest. Beyond passing out the syllabus, engaging in some sort of let's-get-acquainted activity and, in some cases, making a first assignment, I don't know what my colleagues do on the first day of class. Aside from our deans' insistence that we have a syllabus ready to pass out on the first day, we're not obligated to do anything. Early on in my career at my previous school, though, I got it in my head that it's a good idea to offer up my version of an "Aims of Education" talk, in which I try to convey, in some way that I hope will be accessible and at the same time intellectually challenging, my sense of what we are talking about when we are talking about Education (as opposed to Training, which, alas, has become the default setting for thinking about undergraduate education). This talk changes from year to year, but for the past couple of semesters it has begun this way:

Before anyone arrives, I go to the classroom and write the following on the board:

"Let us spend our lives in conceiving then."


The rest is below the fold.

While we're doing the housekeeping stuff of the first day, I don't say anything about that statement; I just let them think they know what it means, along with whatever attendant lascivious thoughts may come to their minds (I'm not accountable (yet) for what or how they're thinking); if anyone asks about it, I tell them that we'll discuss it later. Then, the housekeeping done, I tell them that that statement is from Thoreau's Walden and that he, a life-long bachelor, wasn't talking about making babies. Rather, he was talking about those other, less-familiar meanings of conceive, "to understand" and/or "to imagine." Then I provide them with the following passage from near the end of chapter 2 of Walden, "Where I Lived, and What I Lived For," a bit of writing that never fails to move me:

Men esteem truth remote, in the outskirts of the system, behind the farthest star, before Adam and after the last man. In eternity there is indeed something true and sublime. But all these times and places and occasions are now and here. God himself culminates in the present moment, and will never be more divine in the lapse of all the ages. And we are enabled to apprehend at all what is sublime and noble only by the perpetual instilling and drenching of the reality that surrounds us. The universe constantly and obediently answers to our conceptions; whether we travel fast or slow, the track is laid for us. Let us spend our lives in conceiving then. The poet or the artist never yet had so fair and noble a design but some of his posterity at least could accomplish it.

. . . . Let us settle ourselves, and work and wedge our feet downward through the mud and slush of opinion, and prejudice, and tradition, and delusion, and appearance, that alluvion which covers the globe, through Paris and London, through New York and Boston and Concord, through Church and State, through poetry and philosophy and religion, till we come to a hard bottom and rocks in place, which we can call reality, and say, This is, and no mistake; and then begin, having a point d'appui, below freshet and frost and fire, a place where you might found a wall or a state, or set a lamp-post safely, or perhaps a gauge, not a Nilometer, but a Realometer, that future ages might know how deep a freshet of shams and appearances had gathered from time to time. If you stand right fronting and face to face to a fact, you will see the sun glimmer on both its surfaces, as if it were a cimeter, and feel its sweet edge dividing you through the heart and marrow, and so you will happily conclude your mortal career. Be it life or death, we crave only reality. If we are really dying, let us hear the rattle in our throats and feel cold in the extremities; if we are alive, let us go about our business.(source)


(That second paragraph is just amazing, isn't it?)

But now comes the "difficult" part: no matter which meaning of conceive we choose, each has in common the dynamic of two unalike entities combining to make something that had not previously existed. So far, so good. But even under the best of circumstances, there's more than a little luck involved in that dynamic. Estimates vary widely, but scientists say that anywhere from 30% to well over half of all fertilized human eggs will never result in a full-term baby, only a very small percentage of those "failures" being the result of some sort of human intervention. As all of us who have ever dreamed up some idea or theory that literally or figuratively blows up in our face can attest, the success rate with the non-biological kinds of conceiving is probably not very high, either: speaking from experience, most of my ideas don't even make it to the stage where they have a chance to blow up. In retrospect, that's probably for the best.

In the realm of intellectual conceiving, information-storage and -retrieval devices always mediate this dynamic of unalikes meeting and creating something new--especially the Internet, with its ability to deliver vast quantities of data (see Neil Postman's work, in particular his notion of "information") and, as Nicholas Carr provocatively argues, its effects on how we assess the quality of all that information: specifically, what the implications of the 'Net are for those many subjects which don't translate easily to web-friendly environments.

It's here that the first-day talks will diverge. Comp I students will get to hear the "spell-checks only check spelling!" speech, along with the story of why "defiantly" has become such a common typo for "definitely"; I'll tell them flat out that such errors mean they are truly not reading their own work. I want them to think about the concept that good writing (read: conceiving) occurs when the subject becomes more important than the writer, and that that requires patience and focus and attention. Comp II students will get the "we need to think not just about the information from the source but also about the source itself" speech: we'll look at and talk about the medieval maps you see here (each reflecting the faith of its maker); a 1530 map of the Western Hemisphere (part fairly accurate, part complete guesswork) that I'd link to if I could find it online; and, at the complete opposite end of the spectrum, a GPS device. I'll try to say some things about context (or its lack, in the case of the GPS device) that will make sense with regard to understanding, mastering, being able to make observations about, and writing well about a subject. All that, too, is part of conceiving.

As I said, Scruffy was unresponsive as I talked with him about all this. Perhaps my students will be a bit more engaged.

Read More...

Tuesday, August 17, 2010

Back to school

Faculty meetings for the new semester began yesterday, and in today's presentation our guest speaker, yet another in a long procession of guest speakers over the years whose job it has been to tell us about Kids These Days, told us about Kids These Days. One of the things about Kids These Days: They have all these gadgets whose chief purpose seems to be to enable their tendencies toward ADD-ness. Moreover, as we know, their most frequent encounters with written language occur not via paper but via electronic screens of various sorts.

I get that, and I am comfortable with that. Or I thought I was until, via The Daily Dish, I ran across this article by Alan Jacobs, in which he announces that he's begun to read David Foster Wallace's Infinite Jest on Kindle.

Here's the bit that tripped me up:

So I bought the Kindle version. All the above problems [chiefly, the paperback's bulk] solved . . . but . . . I found that I was missing the visual cues that codexes offer. I don't often miss them, or not all that much anyway, but in this case I miss them. Wallace goes off on these long riffs, but on the Kindle it’s hard to tell how long they are; whereas when holding the codex I could flip ahead to see how long I should be prepared to keep my concentration before I can expect a break.


In case you didn't catch it, Jacobs does not use the word book; he uses the word codex. To see why this pulled me up short, have a look below at the definition I know for codex, along with a picture of one:

co·dex (kō′dĕks)
n. pl. co·di·ces (kō′dĭ-sēz′, kŏd′ĭ-)
A manuscript volume, especially of a classic work or of the Scriptures.
[Latin cōdex, cōdic-, tree trunk, wooden tablet, book, variant of caudex, trunk.]
Word History: Latin cōdex, the source of our word, is a variant of caudex, a wooden stump to which petty criminals were tied in ancient Rome, rather like our stocks. This was also the word for a book made of thin wooden strips coated with wax upon which one wrote. The usual modern sense of codex, "book formed of bound leaves of paper or parchment," is due to Christianity. By the first century b.c. there existed at Rome notebooks made of leaves of parchment, used for rough copy, first drafts, and notes. By the first century a.d. such manuals were used for commercial copies of classical literature. The Christians adopted this parchment manual format for the Scriptures used in their liturgy because a codex is easier to handle than a scroll and because one can write on both sides of a parchment but on only one side of a papyrus scroll. By the early second century all Scripture was reproduced in codex form. In traditional Christian iconography, therefore, the Hebrew prophets are represented holding scrolls and the Evangelists holding codices. (Thanks, Free Dictionary; image found here.)


Add to this my recent reading in which Aztec codices get mentioned with some frequency and, well, maybe you can see why seeing a novel published in 1996 referred to as a codex was a bit startling. You can gather that this usage is brand new to me. Is it for you as well?

But more to the point, I found myself wondering about the implications of this term's application to an object that's usually not called a codex. Books are indeed an ancient technology, but are books themselves ancient--which is to say, passé? Is the choice to call them codices meant to honor them or to draw attention to their jalopy-ness? And what might be implied here regarding those of us who still prefer to read off paper rather than off screens? Are we just slightly-hipper versions of these guys?



As you no doubt have determined by this point, I have no conclusions one way or another about this, aside from the usual truisms: Usages change. But it's hard not to be tempted to read this particular one as a kind of commentary on the position printed text now holds in our culture, that now some (many?) consider it to be on some sort of par with hand-written and illuminated manuscripts. That is a strange thing to contemplate as I once again face the necessity of explaining to students why having a book for the class is a good, if quaint, notion.

Read More...

Thursday, January 15, 2009

Wherever you go, there you are . . . but where is that?




From top: A "T-in-O" map from a 1472 edition of the Etymologies of St. Isidore (7th cen., Seville) (image found here; a map created by Muslim geographer Al-Idrisi from the 12th century (image found here; a copperplate facsimile of the Western Hemisphere of a globe by Johann Schöner, 1520 (Image found here); a Tom-Tom One XI (image found here)

As I blogged about at this same time last year, I'm interested in the relationship between technology and my students: to put it a bit crudely, whether they really use it or it seduces them into thinking they're really using it. And during our in-service this week this issue got raised for some of us, this time from the faculty/administration side of things. In the past couple of years we've made enormous investments in that big thing called "computers"--not just in new hardware and software, but in the "meta" side of things: courses and equipment intended to teach students how to set up and manage databases and servers, and how to keep them secure. We've also spent a lot of money acquiring technology for classrooms that teachers can use when presenting material.

So, there was this gee-whiz haze that my English department colleagues and I walked through on Tuesday as the entire college's faculty took a morning tour of all that. But in our department meeting that afternoon, one of us spoke to something I'd been thinking about that day and yesterday, too: There for a little while, Wichita seemed to have escaped the worst effects of the recession, but this week Cessna announced that it would be laying off 3,000 workers in March at all salary levels (for some perspective, the metropolitan area has around half a million people). Given the general downturn in aircraft manufacturing, other layoffs or work slowdowns are sure to come. And as goes aircraft, so goes this town. So my colleague's question was, in effect, how do we in the very un-sexy English department make or keep ourselves relevant to our students, many of whom are already living at or below the poverty level?

Good question. And here's my answer: On the first day, I think I'm gonna talk about really old maps and GPS units.

I got to musing about GPS units during that tour, for some reason, and it struck me that at some essential level the user doesn't have to know where s/he is relative to anything else. If you can turn left and right when told to do so, that's all you need to know. There's no assertion of the user's will on the device once the coordinates are entered--you don't get to choose from multiple routes; you get The Way, the One True Route ("No one comes to the Grocery Store (or wherever) but by me"). Most of the time, that's not a problem, but we've all heard of instances when one of these devices has given directions that don't correspond to the physical world: "You can't get there from here!" Also, I've already mentioned that the device doesn't propose alternative routes. In either case, if something goes awry and you truly don't know where you are in physical space and you either don't have a conventional map or can't locate where you are on it, well, as I am fond of saying, You can only know what you know.

Enter the old maps. As I wrote about this past summer, the top two maps' depictions of space are determined at least as much by ideology--specifically, religion--as by a desire to represent the earth's surface in a useful manner. It's easy enough to see that these competing ideologies produce two very different maps of the same geographical space. But, at least for the pre-Renaissance Christian, there was no distinction between sacred and secular knowledge. The Bible did more than reveal God's Will for His people; it was also the one accepted source of knowledge about the world--"world" in those days referring specifically and only to the then-known landmasses mentioned in the Bible. That which was not in some way accounted for in Scripture either could not be, was in error, or was simply dismissed because of its (pagan) provenance.

That was the cultural world Columbus lived in and under whose assumptions he undertook his voyage to Asia in 1492. Add to that the complicating factor that he thought the world's circumference was actually about 1/3 of what most people estimated it to be, and it's easy to understand why he died, in 1506, insisting he had found Asia, even though by that time most people realized that he had found something else instead. In a sense, the presence of these landmasses was no problem for Columbus: he simply said that what he had found was a part of Asia that no one had known about before. But the growing number of people who realized otherwise had a very real problem: how to talk about this Something Else--something clearly NOT accounted for in the Bible--without calling the Bible's accuracy into question? Simply saying the Bible was mistaken was not an option--these were the days of the Inquisition, recall. And consider that the problem created by this other land mass was simple compared to the problem posed by the people found there: Who were they? What were they?--that is, they appeared to be humans, but were they fully human? (read: Did they have souls?)

This was a problem that neither simple observation nor technology could solve--indeed, technology (here, the ability to sail across an ocean) had created this problem. The problem was more than one of simple ignorance. It was a problem of conceptualization, of coming up with a new way of thinking about not just this new place but about the place we had come from (something often forgotten is how Columbus' voyages changed Europe's understanding of itself, too). In other words, this was a problem that only language could solve.

Language is a tool, too. The work it performs is the most important work of all: the work of explaining and making sense of our life and our place in the world--and, if we're both good and lucky with language, shaping and influencing our respective corners of the world. Seen in this way, the term New World, coined by Peter Martyr, is one of the most powerful tools ever made.

Mastery of machines is a crucial skill to have. But more important, if we don't want to feel like people who know no more about where they are than the fact that a GPS unit has guided them there, is looking up from our machines and seeing how what we do fits with the world beyond them. A map is (still) pretty useful for doing that sort of thing, and writing--another sort of map-making--is, too. In Comp I, we'll work on learning how to make better maps than the ones we're making now.

Read More...

Tuesday, January 15, 2008

". . . when the menstrual sings a song . . . "

The title for this post, taken from an actual sentence in a paper on the Odyssey that I received, reminds me that one of the (unintentional, I'm sure) effects of spell-checks is that they can produce unexpected but often delightful moments of humor for instructors as they read student papers.

In that vein, I point you in the direction of Taylor Mali's "The Impotence of Proofreading," with thanks to Cordelia who mentioned this in her comment on the previous post.

There's one vulgar term; the title of this post should suggest the direction in which the language does go, so take heed.

Read More...

Sunday, January 13, 2008

The electronic lever and fulcrum

Archimedes rocks your world.

My colleague Larry the movie guy has noted odd things in the answers on tests he gives in his physics class, in which students are allowed to use calculators (Larry keeps a slide-rule around to show his students how things used to be done): students who get the more complicated math right but who end up with the wrong answers because of mistakes in simple arithmetic; people whose calculations of a bullet's muzzle velocity yield answers that could be true only in a world whose physical laws would let you or me, walking briskly, out-walk that bullet (Cool! We could go back to swinging jawbones of asses, then!); etc. "What are they thinking??" he sometimes exclaims.

My version of Larry's problem is analogous: During the course of the semester, as has been the case throughout the Microsoft Word years, I will have occasion--several, in fact--to say things like "Spell-checks only check spelling--not usage," and I will see my students nod sagely and smile at the inanity of someone's thinking otherwise, and yet, come the succeeding sets of papers, I will see yet more raft-loads of there/their/they're and two/to/too and weather/whether and accept/except confusions. I'd list yet more, but you get the idea. And don't get me started on apostrophe usage.
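For anyone who wants the point made mechanically, here is a bare-bones sketch of my own (not what Word actually does, which is more sophisticated, though not in the way that matters here) of a dictionary-only spell check. Usage errors assembled from correctly spelled words pass through untouched.

# A bare-bones spell check: flag only tokens that aren't in the dictionary.
# Homophone confusions built from correctly spelled words sail right through.
DICTIONARY = {
    "their", "there", "they're", "two", "to", "too",
    "weather", "whether", "accept", "except",
    "going", "the", "store", "you", "it", "or", "not",
}

def spell_check(sentence: str) -> list[str]:
    """Return the misspelled words; usage errors are invisible here."""
    words = sentence.lower().replace(",", "").split()
    return [w for w in words if w not in DICTIONARY]

print(spell_check("Their going two the store, weather you accept it or not"))
# -> []  (every word is spelled correctly, so nothing gets flagged)

Every word in that last sentence is spelled correctly, so the checker reports nothing--which is all "spell-checks only check spelling" means.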

It's tempting to blame students and/or the impoverished state of their grade-school education for these problems, but I don't think that's entirely fair. I may be falling prey to "in my day"-type thinking, but it does seem to me that such errors have become more common as computers have supplanted typewriters or, more precisely, as spell-checks have become a feature of word-processing software. I would lay some of the blame, therefore, at the straw-man feet of Steve Jobs and Bill Gates. Their software has so reduced the mental work involved in producing a text that it easily creates the illusion that, once the words appear on the screen, there's nothing left to do.

Randall of Musings from the Hinterland posts on a different version of this here: some of us have become so accustomed to thinking of computers as labor-saving devices that, as you'll see when you read his post, our insistence on using them as such can lead some of us to become inefficient--the Old Ways of Doing Things would often take considerably less time than we end up expending while hunting for a nifty widget to perform the same task. A different version of destroying the village in order to save it.

Larry and I talked about all this last semester, and Archimedes came to mind. Or, more precisely, tools and how we think about them.

At a fundamental level, a computer is a tool, just like Archimedes' lever and fulcrum are. Both are instruments that perform work. But this particular comparison breaks down rather quickly. With the lever and fulcrum, the end results of the labor expended are immediately evident: you've pried the rock up and moved it, or you haven't and have to do it some more or reposition the fulcrum or what have you. Despite the obvious labor required, not much in the way of intellectual engagement is required to use this particular tool--nor, for that matter, emotional investment. Did you succeed in prying up the rock? Good. Next! At a certain level, then, the working of the lever and fulcrum is a passive activity. Physics doesn't care what you think.

Neither does a computer, come to think of it--all the more reason to keep in mind the old programmers' credo, "Garbage in, garbage out." The great strength and the great weakness of contemporary word-processing programs are one and the same: they make it very easy to produce documents very fast. As the mechanics of producing a text become easier, most people's desire to be finished with a writing task as quickly as possible, I suspect, becomes enabled by that same ease, thus leading to the problems mentioned above. Somewhere along the line, how we perceive writing has changed: it has always been work, but as the technologies used to produce it have changed, so also has our felt connectedness to its output. Which is to say that, as it has become easier to produce a physical text, we have become more intellectually and emotionally detached from the actual results of that production. Maybe part of the problem also is the language we use for these instruments and their software: computer. word processing. As though texts are akin to mathematical calculation, language as lunch meat.

There's also the fact that more of us have to write as part of our work, which not only makes practical its further technologizing but also leads to its devaluing as a skill worth doing well (again, because of the emotional detachment most of us tend to feel from the labors our employment requires of us). It does not help that the machine's ultimate indifference to what it produces is actually antithetical to those attributes of Good Writing: clear, effective thinking and what the French would call le bon mot--which, unless we're especially gifted, take time.

By way of contrast to all this, consider Shelby Foote's use of a crow-quill pen to write his massive three-volume narrative of the Civil War. Without worrying here about the quality of Foote's writing, consider for a moment the mechanics of producing that text: a couple of short words, perhaps one word or a part of a word, written per dip of the quill in the ink; and always, always, the frequent few-seconds' pauses when the quill is raised from the page and toward the well and then back again, during which there is time to try out and retry and try again the next few words and phrases--in the inner ear rather than on the screen. There is no way to do this sort of writing fast.

This isn't an argument that we go back to quill pens or Royal manuals when we need to write anything worth writing (it is interesting, though, that Darren Wershler-Henry, author of The Iron Whim: A Fragmented History of Typewriting, noted in this NPR interview that a soundless typewriter marketed in the '40s failed because, apparently, people liked the sound conventional typewriters produced). It's only an observation, and not a very profound one. But it's one that bears some consideration on my part as a teacher of writing, seeing as good writing more often than not gets produced by in some way(s) going against the grain of the very technology most of us use to produce writing nowadays.

Read More...

Sunday, November 04, 2007

Returning to the biggish screen: The moving picture in the age of digital reproduction

Image via Cartoonstock.com

Recently over at Clusterflock, Deron Bauman asks a simple question: "What movie(s) have you seen the most?" As I read over the comments and contributed my own, I was reminded of something I said in this post, specifically the following:

I'd like to argue that we can apply [Walter Benjamin's] concept of aura1 to the not-so-old days of moviegoing, in which the only way the vast majority of people could see films was at the moviehouse with hundreds of other people. True, scores of copies of these films had to be made, but the audience desirous of seeing a film more than once had no choice but to go to the moviehouse again, in obedience to its--not the audience's--schedule, and then, when its run ended, would most likely never see that film again. Sure: film can endlessly reproduce images of reality; but surely the aura of film [by which I mean more precisely "moviegoing"] lay to a large extent in the fleeting and arbitrary nature of its exhibition. More: each successive viewing of the same film back in those days would itself have been an irreproducible experience. The individual viewer would notice different things each time, think differently about the same things . . . but just as important would be the different composition of the audience each time.

The corollary to that, I say elsewhere in that post, is that our current ability to see films pretty much whenever we want to, and via ever-expanding and ever-more-convenient and portable delivery systems, signifies several fundamental changes in how we experience film now.

Well, okay: in that post, I identify just one fundamental change: that whereas before, the communal space of the moviehouse was part of the experience of movie-going, that communal space has become devalued via the simple fact that it's no longer a prerequisite of . . . well, not "movie-going" any longer (since we no longer have to "go" anywhere, not even away from our computers, to see a film), but "movie-watching." Indeed, I speculate there that movie-going shares in some ways the dynamic of church-going and may fulfill, in a secular manner, the idea of churchgoing-as-communion (in "communion"'s broadest sense).

[aside: the big program-oriented non-denominational churches are analogous to multiplexes, come to think of it . . . ]

But the Clusterflock question--specifically, its querying of multiple viewings of films--got me to wondering whether the delivery systems for films shape how we regard them. This is something that partly involves saying, "You should really see [name of film] on a biggish screen," but it also involves asking, How do the easy availability of films and all these different delivery systems shape how we perceive certain actors, certain films or, for that matter, the experience of watching films?

I have some hazy ideas about these matters that I hope will assume more definite form when I also have a moment to blog about them.
_________
1Discussed here (from "The Work of Art in the Age of Mechanical Reproduction"):
Unmistakably, reproduction as offered by picture magazines and newsreels differs from the image seen by the unarmed eye. Uniqueness and permanence are as closely linked in the latter as are transitoriness and reproducibility in the former. To pry an object from its shell, to destroy its aura, is the mark of a perception whose “sense of the universal equality of things” has increased to such a degree that it extracts it even from a unique object by means of reproduction. Thus is manifested in the field of perception what in the theoretical sphere is noticeable in the increasing importance of statistics. The adjustment of reality to the masses and of the masses to reality is a process of unlimited scope, as much for thinking as for perception.

Read More...