9/25 – The Informatic Habitus

In Uncategorized on September 25, 2008 at 4:47 pm

Reading: Datacloud: Toward a New Theory of Online Work

Discussed: Articulation Theory; Cloud Computing; The Communism of Capital; Computers & Composition (field); Computers & Composition (journal); Formal and Real Subsumption; Habitus; Information Architecture; The New Spirit of Capitalism; Sarah Palin; The Postmodern Condition: A Report on Knowledge; $upercapitalism; Symbolic-Analytic Work; “There Will Come Soft Rains”

Information as Habitus: …how we live has changed: We have come to work with information as a primary environment and resource. Whereas the industrial age focused on the production of information, in this epoch, information workers do not merely use information, they inhabit it. (3)

Bridging the Critical/Practical Divide: Articulation theory foregrounds resistant, political stances that many (although not all) symbolic-analytic workers would vehemently protest. Yet that contradiction is also the reason to force them together: Articulation theory provides a method for politicizing symbolic-analytic work, whereas symbolic-analytic work provides vocational training for what too easily can devolve into liberal posturing. Datacloud, in one limited sense, is a job ad for information age cultural workers. (19)

Interface as Surface: As we move toward more graphical interfaces, the location of working and learning information begins to shift; learning is buried in the interface (in online help and tutorials), but increasingly the interface – the surface – provides users with suggestions and hints about how to work. In other words, learning and work increasingly take place at the surface of the computer. The interface captures multiple, overlapping spaces that support an increasingly complex array of tasks: word processing, e-mail, graphics design, page layout, presentation of design, video, and more (e.g., the analysis and manipulation of symbols). (45)

Deskilling in the Information Age: Paradoxically, increased ease of use also worries me; it is now much more likely that people will create Web pages without a broader learning context – without understanding anything about interactivity, screen layout, and information design. […] In a sense, the ease of use tends to move the task of constructing Web pages away from a symbolic-analytic skill and toward a routine production or in-person service skill. (49)

The Spime of Symbolic-Analytic Work: In interfaces that relate to information work, we begin to see a split: Programs that support information consumption tend to prioritize the line and time, whereas those that support information production tend to prioritize space (and often colonize time by turning it into space). Some programs support both information production and consumption, with a small class (which I will return to later) supporting production and consumption as both temporal and spatial activities – in other words, disarticulation and rearticulation. These activities suggest new directions for support for symbolic-analytic work. (102)

Deconstructive Architecture: Whereas traditional architecture (and information architecture) would deem contingency and chaos as negative aspects, deconstructive architecture recognizes the productive and creative potential in those characteristics. Just as important, deconstructive architecture frequently involves social commentary in ways commensurate with the social project of articulation theory. So we might consider the work of deconstructive architecture as something akin to Stuart Hall’s understanding of the place of postmodernism in relation to articulation theory – not a rejection, but a recuperation and reclaiming of terrain and power. The fragmenting impulses contained (and exploded) by these gestures do not obliterate opportunities for meaning; they make new meanings possible. (126)

  1. After emerging from the datacloud, I came to realize that I am in the midst of a postmodern love affair. Before I can explain this, I have to think about what Johndan Johnson-Eilola was really trying to accomplish in his text, Datacloud. Rather than a warning against the dangers of our culture mutating from an overexposure to data (one suffering from a systemic overload of information), I believe he is mapping the path of cultural and social progress brought to us by our dawning multimedia age. I say dawning because I do not believe that we have yet begun to interface with the world of technology on the scale that our future holds. For Johnson-Eilola, the results of this progress come with mostly positive manifestations of man’s adaptation.

    The rate of progress, just in the last decade, is mindboggling compared to that of the centuries that have come before. Dystopian literature and visual media warn us of the potential for a collapse. Just today in my English 10 class, my students and I finished reading a short story by Ray Bradbury entitled “There Will Come Soft Rains,” about a fully mechanized home functioning even after the family and the rest of mankind have been destroyed by nuclear warfare. The disturbing elements of the fully functioning environment tailored to fit a family that no longer exists haunt the reader as the house goes on about its day tending to the needs of individuals who are no more. It isn’t hard to get caught up in the imagery of the perfect environment stained with the charred outlines of the family evaporated by the explosion, sitting in the middle of desolation. Of course, my jaded high school students failed to see the connection between their attachment to their iPods, smartphones, PSPs, laptops, TiVos, webcams, and virtual spaces and those environments and technical amenities created in the story. The old saying, “It is hard to see the forest for the trees,” couldn’t be any truer. I too, as I have most recently discovered, am no different. Like my students, I fail to see the relationship man has been developing with technology as one of gloom and doom. After all, my very marriage was spawned in the midst of the datacloud that Johnson-Eilola discusses in his text.

    My husband and I met in the vast recesses of cyberspace by chance one evening over some files we were trading with one another anonymously in Napster. The very nature of my antiquated computer led me to make contact with the keeper of vast music files in desperation. I simply could not have another file interruption as I attempted to download music files (a feat that once took all night for two songs).

    Our relationship began with a simple instant message: “Please, don’t turn off your computer.” I never before gave much thought to the “odd” beginnings of our relationship. Socially, it is an odd beginning because it is different from the traditional beginnings of the modern age. Instead of a bar, I met my husband in a virtual room for “chatting.” I wasn’t struck by his appearance in any way; I had no idea what he looked like at all. The feelings that manifested for him came from within and moved out, starting in the mind and ending with the body. In fact, moving to the body connection was an odd and disorienting experience for us both. When he first came to visit, it was difficult for us to talk to one another. Despite thousands of hours spent IM-ing each other, the presence of the face and the injection of body language proved difficult. Even today, more than seven years since our initial meeting, we still find it easier to talk via our computers when things are difficult to discuss face to face.

    The way that Johnson-Eilola introduces the datacloud is to paint a picture of technological waves of information that could potentially drown a person, and to argue that man has adapted to this overstimulation by processing information differently. The postmodern man takes bits and pieces of information from multiple locations and mashes them together into a collage of ideas and thoughts. The extra information is discarded to save room for more information that serves a higher value in the hierarchy of what is necessary. Johnson-Eilola attempts to show how people have adapted to this way of managing information through his examples of people who utilize this type of cutting and pasting of information in their daily work. What he fails to adequately show is the difficulty that others may have with entering the datacloud. Surely, the transition for everyone from modern thought and critical informational examination to surface-value thought and informational examination has some glitches. I certainly have found myself in the middle of the problematic transition from one way of thinking to the other. I have become a channel flipper of ideas within my areas of interest. I love to scan for ideas from multiple sources and pick up bits of interesting information to satisfy my curiosity, but because of this I have never developed any real expertise in any one subject. Like me, our culture is propagating a society of millions of jacks-of-all-trades who are proficient at many things but lack the expertise to move beyond. I recognize this flaw in myself, and I am currently trying to rectify it by focusing on one or two areas to build up a knowledge base. Part of me, however, wants to find instant gratification in this process – to take what I “really” need and to move on.
    What I am left with are questions. What do we do with a population of people who fail to see the importance of looking at things with a more critical eye? Is the way that man has adapted to the informational datacloud through sampling going to prove dangerous to our cognitive development? How can educators train their students, who have adapted to this type of environment, to dig deeper into information and to become selective? Can we teach them to tune out the mash of noise from that information around them? If not, what do we do? Is it important to look at information the way we did before, or is this adaptation just the mark of progress and the discarding of an antiquated machine that downloads too slowly?

  2. In The Will to Technology and the Culture of Nihilism, Arthur Kroker grapples with how our burgeoning digital society is adapting to the ethical challenges of technology that is becoming more and more powerful at an increasingly alarming rate. In order to legitimize this technological growth, Kroker presents a concept he calls “reverse engineering,” which he defines in brief as the “…decompiling and disassembly of redistributive codes” (206). Through digital art and cinema, Kroker states, we are able, essentially, to prepare for future bumps in the technological road simply by invoking decades of past research and aesthetics: the past is the future.

    In a similar fashion to Kroker, Johndan Johnson-Eilola describes in Datacloud our transition from a culture that produces material things to one that produces ideas. It seems to me that Kroker’s reverse engineering fits with what Eilola calls the shift toward “knowledge economy” (11) and articulation theory in which myriad fragments of history, pop culture, technologies, and personal experience are stitched together to form something new and completely innovative. While none of the parts are new, the final product is. And thus, it is not just an issue of ethics, but of potentiality as well. How much does our reverse engineering affect the progress of technological culture? I’m not saying that 2001: A Space Odyssey and the like were Kubrickian harbingers of the Y2K hullabaloo and our reactions to it. I am only speculating, of course, but it seems plausible. Whether optimistic or dystopian, futuristic films act as a sort of dress rehearsal or rough draft of what could be to come. Cinema-of-the-future allows us to say “Hey, we’ve seen this before.” The artistic mind has no limit, and thus reverse engineering has infinite possibilities. We cannot truly be shocked by the biotech future, because everything old is new again. But what is interesting is that our potential is based squarely on creativity.

    To put both Kroker’s and Eilola’s ideas into different words, I’d like to suggest my own terminology: creative invention. Just as oral invention exercises helped young Greeks to prepare for a career in the agora and written exercises in our composition classrooms prep students for their future careers, so too do digital activities like creating websites and even surfing the Web act, like the films referenced by both Kroker and Eilola, as practice, as possibilities, to add to technological consciousness.

    Eilola echoes this idea of practice in Datacloud. In the “Other Stories, Other Texts” chapter, he describes how DJs play with fragments from different sources to create a new sound: “The ability to drop interesting cuts relies on building their repertoire” (112). “Building a repertoire” is an important choice of words for Eilola here, as it reshapes centuries-old techniques and practices for building oratorical skill. DJs collect LPs; we collect YouTube videos, Googled JPEGs, hyperlinks, and fragmented texts on JSTOR and Project MUSE for papers, projects, or pure entertainment. Thankfully, we have the external memory of our computers to remember these fragments for us, and the effect remains the same: even our web surfing has the potential to be the start of creative inventions, rearticulations, and reverse engineering.

    At the end, here, I feel as if I’m returning to the question Conor posed in his response to Technics and Time last week: what IS composition, exactly? It is so much more than fingers-to-computer-keys or pen-to-paper writing, of course, as any music composer would tell us. However, now that we are enmeshed in digital culture, the umbrella becomes even larger.

    Question: what does it mean for a project to be finished, then? Is it possible to truly “finish” a digital project? If we can edit information indefinitely and our work can be appropriated in partial form for other uses, then what good does it do to say, “This paper is finished”? The “end” as we call it is arbitrary; the paper, the line of thought, even the Internet can—and perhaps will—continue on forever. How could it not?

  3. Crystal Starkey
    Datacloud: Toward a New Theory of Online Work
    Johndan Johnson-Eilola

    Learning has indeed shifted in relation to computers in that computers affect the location as well as the type of learning with which we engage. Further, new media have established numerous and various ways that information is manipulated, filtered, sorted, and transformed (97). Yet, as Johnson-Eilola notes, “[a]lthough interfaces, along with social contexts and other forces, tend to articulate linear, difficult to analyze communications…this should not be surprising given that users are not encouraged to do otherwise by the software or context they are in” (99). I wonder, though, how users (i.e., students) could know what “otherwise” could entail without some kind of alternate direction. Also, are software programs designed to be linear because we think linearly? Johnson-Eilola notes that people who produce information architecture think more spatially than those who consume it (104). So, then, is it that we are taught to think linearly? If so, then is this part of his earlier critique that higher ed has done a poor job thus far of teaching the complex layers of new media? Another question, then: would Johnson-Eilola say that society normalizes the use of new media, in a similar fashion to the way we normalize human bodies?

    As it stands now, information is either something we produce or consume. Students are rarely taught that information is something to be analyzed, questioned, and interacted with. Rather, they are taught that information is something to be received as opposed to used (101). According to Johnson-Eilola, “contemporary cultures are primarily involved in the control of space” (101). And I would argue for the addition of presentation as well. Due to my interest in Disability Studies, this concern with space and presentation in new media can also be seen in relation to the social awkwardness so often associated with students with High-Functioning Autism and Asperger’s Syndrome (HFA/AS). Students with HFA/AS would agree that society is more concerned with space and presentation; this focus is foreign for them and exploits their differences even further. These students would argue that content is much more important than the amount of/control over space or space’s presentation. Connecting this to the social aspects I spoke of, students with HFA/AS are often criticized for not making eye contact and not being able to read non-verbal cues such as body language. This often occurs because students with HFA/AS are often focusing on what someone is verbally saying so intently that the other forms of communication (communicating attention/alertness/interest through eye contact, and reading sarcastic undertones or expressions like eye-rolling or hand gestures) go unnoticed. For people with HFA/AS, the specific message spoken is what matters most, not the space in which it is delivered nor the presentation that accompanies it.

    Johnson-Eilola notes that “architecture relies on a careful balance of form and function, with emphasis placed primarily on function and purpose” in alignment with the “modernist maxim: ‘form follows function’” (120). Am I wrong in thinking that composition, especially the composition course (and processes) with students who have cognitive diversities, must also negotiate this careful balance? Another connection I saw to composition is when Johnson-Eilola writes about alternate ways of understanding information as “…creativity articulated not as the creation of unique information in a vacuum, but as involving manipulation of preexisting pieces of information in space…[through addressing] the symbolic-analytic work issues as a way to orchestrate temporal fragments, constructing a line from heterogeneous, disjointed spaces” (109). Since composition, by definition, speaks to the order and uniformity of an arrangement or organization (and this is a very linear way of thinking), would Johnson-Eilola agree with Robert McRuer in that composition might be better named de-composition?

    (Side note: McRuer asks, “What would happen if…we continually attempted to re-conceive composing as that which produced agitation— to re-conceive it, paradoxically, as what it is?” (148). For McRuer, then, “…composition, as it is currently conceptualized and taught in most U.S. colleges and universities, serves a corporate model of efficiency and flexibility…[where] ‘critical thought’ is re-conceptualized through a skills-based model ultimately grounded in measurement and marketability, or measurement for marketability” (148). McRuer argues that “composition, as it is often processed, digested and taught in contemporary universities, undergirds heteronormativity and heteronormativity undergirds composition” (150). Indeed, in corporate universities, composition remains focused on an ideal (thus unreachable), measurable, final product in which we are “…forgetting the messy composing process and the composing bodies that experience it” (152). The ways in which writing has been conceptualized leaves revision as mandated, forbidden even, because we only want revision that is “…safe, contained, composed; the corporate university…seeks immunity from authentic revision, from writing generated by unruly queer/crip subjectivities, from de-composition” (168).)

  4. After struggling with reading Stiegler’s Technics and Time, Datacloud seemed to be too simplistic. There were only two theories present, the language was extremely informal in spots, some of the photos and content were archaic, and author Johndan Johnson-Eilola cited websites for his research. This was hardly the type of scholarly writing this class has been reading in the past two weeks. One point of language I have to point out is when he mentions how he would like to have a classroom set up: “dammit, one with chairs and desks we can arrange in a circle and just, you know, talk to each other without distractions, like we used to when I was a TA. This all makes me feel so freaking old” (24). However, on a second read-through I focused on the content of the two theories – articulation and symbolic-analytic – and noted that the recent unveiling of Sarah Palin fits Johnson-Eilola’s charge. I will show how articulation regarding Palin has changed since she burst onto the national scene less than one month ago, when she was tapped to be the Republican vice presidential candidate, using the symbolic-analytic work within my own datacloud to manipulate, synthesize, and analyze the various bits of information.

    Johnson-Eilola lists five key aspects of articulations, the first being that they are open to change. As a relative newcomer to the national political scene, Palin was an unknown—so just about everything had to be defined and was therefore open to change. The media was not told much more than her name and current occupation, so reporters swarmed around her small town, family, and friends looking for a definition beyond the “hockey mom” who presented such a rousing speech at the Republican National Convention. What’s now being said about her is so new: 176 of the 211 references on her Wikipedia page have been written since Aug. 30. The second point is that change involves struggle among competing forces, which we know happens between politicians in an election year. The Republicans’ primary argument against Barack Obama was that he lacked foreign policy experience; Palin’s experience is similarly lacking. She has ten years of government experience, including eight years as a small-town Alaskan mayor and two years as the state’s governor. So the struggle to define her includes whether she would be ready to step in as president if the crotchety McCain croaked; the Democrats say no. A third point states that “articulations are never separate from the forces that construct them: the articulations are formed through the differential sets of other articulations” (28). The Republicans and Democrats, Palin, her friends, and the media are all adding to the dialogue—many, many articulations are shaping Palin’s definition. In her first public interview, with ABC journalist Charlie Gibson, Palin failed to effectively identify the Bush Doctrine. The media had a field day Friday and Saturday over the stumble; Palin was even the butt of a Saturday Night Live skit featuring comedian Tina Fey; and using the “new media” of YouTube, Google, and the AOL News Bloggers, you will find articulations for and against her:

    One blogger said: “I am terrified at the prospect of this inexperienced undereducated woman representing the United States perhaps someday sitting across the table from Nouri Maliki. (Incase she doesn’t know he is the Prime Minister of Iraq). He would eat her for lunch. She is fine as the governor of the huntin and fishin State of Alaska but she is way over her head on the national level let alone the international level” (C.A. Wren).

    Another blogger had an opposite stance: “Sarah Palin is a straight shooter. I have grown to have absolute contempt for the media, and I believe that this election may maybe a referendum against the media. The twisted version of everything she says by the media is contemptible” (Misty).

    The fourth key aspect mentions that definitions will be different, using the example that social programs continue to have different meanings for Republicans and Democrats. The same goes for the term “Sarah Palin.” The fifth and final key aspect of articulation is that fragmentation and rupture are not necessarily debilitating—this is what the media is doing now as they try to define Sarah Palin. It was announced that her unwed teenage daughter is pregnant—something that in the past would have eliminated a candidate from such a high office. Instead, that situation has been rearticulated to point out that the teenager and her boyfriend will be married soon. As the media continues to uncover more information about Palin, the articulation will be further defined; but at what point should the media stop? When will the articulation be complete?

  5. Johndan Johnson-Eilola’s Datacloud examines the shift in human/user activity in online and offline mediums.  He notes that “this sort of activity – contingent, experimental, loosely goal-driven, playful – in an increasing number of situations – not only games…is due to the dramatic increase in the amount of information we deal with on a daily basis…Rather than establishing frameworks and ground rules early on, users in these environments learned – and created – rules on the fly.  Rather than understanding creativity as the inspired production of solitary genius, these users manipulated preexisting data, filtering, cutting, pasting, and moving” (3).  Eilola’s understanding of online user activity as “dynamic processes of ongoing construction and reconstruction” raises a persistent concern within the Composition field as to what obligation we have as instructors to engage students with the malleable, shifting, and transforming power of language: To what extent do academic genres limit this “playful” and “experimental” activity? If information overload is crushing our preconceived notions about what constitutes ‘knowledge’ in various discourses, then why are we holding on to traditional generic forms at the expense of allowing our students to create writing products “on the fly”?
    Christopher Nealon’s article, “The Poetic Case,” discusses the historical debate surrounding poetry and whether or not “poem making is a kind of labor” or if perhaps poetry “illuminates the unimportance or even pointlessness of all human labor” (688). Nealon’s examination of poetry and language production as a site for determining “use value” or as something of a “higher purpose” seems relevant when discussing the site of the composition classroom. Poetry’s significance, as Nealon and other “poetry defenders” (as he calls them) suggest, “…is not captured by the language of making or purpose but that it is a type of activity that puts pressure on the social meanings of both. And as the meaning of the social develops ever-greater complexity, relentlessness, and intensity, this demurral from instrumentalization opens up a space of bewilderment about the present that is potentially critical” (689). Poetic activity and online user activity denote types of practices that open up critical spaces through a performative spirit of experimentation. As Geoffrey Sirc writes in English Composition as a Happening, “What I want is simply to reconsider a group of artists and compositionists who wondered why texts couldn’t be new, interesting, and transformative. Why they couldn’t experiment with new materials and forms, blur disciplines and boundaries, and subsume the whole with a life-affirming humor. Mostly these artists wondered why their compositions couldn’t strive for a sublimity in the participants that might, in some small ways, change the world” (30-31). Sirc’s contemplations about avant-garde writing practices in relation to academic institutions reinforce questions surrounding composition practices that teachers and students must “construct and reconstruct” as technology determines new activities and definitions of creativity.
    Socially determined practices, definitions, and frameworks of meaning-making are reflected upon by Elie Wiesel. From the preface of Night, Wiesel offers a meditative confession on his difficulty of using language, of utilizing words to bear witness to atrocities from a socially established vocabulary that had been forever changed, perverted, and destroyed by the ‘enemy.’  His dissatisfaction, struggle, and ultimate inability to effectively express his memories leaves him with an amorphous “it” essence, presence, and active question that remains unanswered and unresolved; for “it” is a process that is actively being negotiated – perhaps a ‘pre-emergence’ (“active and pressing but not yet fully articulated” (126) as Raymond Williams writes) – that involves a complex process of cultural production.  Below is an excerpt from his preface:
                Convinced that this period in history would be judged one day, I knew that I must bear witness. I also knew that, while I had many things to say, I did not have the words to say them. Painfully aware of my limitations, I watched helplessly as language became an obstacle. It became clear that it would be necessary to invent a new language. But how was one to rehabilitate and transform words betrayed and perverted by the enemy?
               Hunger – thirst – fear – transport – selection – fire – chimney: these words all have intrinsic meaning, but in those times, they meant something else. Writing in my mother tongue – at that point close to extinction – I would pause at every sentence, and start over and over again. I would conjure up other verbs, other images, other silent cries. It still was not right. But what exactly was “it”? “It” was something elusive, darkly shrouded for fear of being usurped, profaned. All the dictionary had to offer seemed meager, pale, lifeless. (ix)
      This haunting and conflicting question of “it” for Wiesel captures the daunting task of acknowledging that language is always an “active and changing experience; a dynamic and articulated social presence in the world” (37-38).  Its organic nature is a “persistent kind of creation and recreation” (37), evident in the text above in its acknowledgment that a word such as chimney came to signify death and the crematoria, for example.  Even his mother tongue of Yiddish had socially begun to fade after the war, externally representing his internal conflict with words and language and finding the means necessary to express what on many levels cannot be expressed via words, even words that had not been “perverted by the enemy.”  Furthermore, the dictionary represents a form that does not deliver answers or a sense of comfort or clarity but becomes something hollow and useless and “lifeless.”  According to Williams, “There are the experiences to which the fixed forms do not speak at all, which indeed they do not recognize” (130).  The Holocaust, Auschwitz, and this particular period in history cannot be expressed or interpreted from a predetermined, fixed form, such as the dictionary, because it is a form that does not yet recognize these experiences. 
      In the “Structures of Feeling” section from Marxism and Literature, Williams speaks of ‘practical consciousness’ and how it is “…observed in the history of language.  In spite of substantial and at some levels decisive continuities in grammar and vocabulary, no generation speaks quite the same language as its predecessors.  The difference can be defined in terms of additions, deletions, and modifications, but these do not exhaust it” (131).  Change in language becomes an “open question” that results in changes in “structures of feeling” or “structures of experience.”

  6. The more I think about it, the more I think Johnson-Eilola’s book is actually something kind of radical. Not like, “bodacious, dude!” radical, but dangerous, scandalous, turbulent, subversive—a molotov cocktail thrown into the crowded field of computers and composition. Look out, watch it, be cool, get down—here comes the Datacloud.
    Granted, on a first read, Datacloud seems somewhat tame; in particular, the persona Johndan establishes is far from a radical firebrand: he reminisces about his Atari, he watches his daughter learning how to play Per.Oxyde, he grumbles about students IMing during his class. And really, much of the content of this book seems hardly political at all; if other readers are like me, they might even be finding their attention drifting a little as Eilola centers his arguments around interfaces and information architecture—neither one the sexiest nor the most provocative object of study.
    However, in the last chapter (excepting the coda), Eilola makes a claim that is worth considering on its own merits and that prompts a reconsideration of the work that’s gone before. One of the four strategies Eilola argues for insists “information is always associated with political and cultural meanings. This is not to say that pieces of information come to use with instructions on how to position and use them, but that they do possess tendential forces suggesting ways to articulate them. Moreover, even when we contest those meanings, our own use of those pieces is bound up in political and social acts” (134).
    Even still, we might think that Eilola’s political stance here is conservative (little-c, not big-C) or moderate: perhaps Eilola is calling for equal opportunity for information work across gender lines (36-8) or recycling the trope that ubiquitous computing can be read as a “democratization of technology” (47). I want to push harder, though, and draw attention to Eilola’s early claim that “the computer participates in broad social changes” by influencing how people work (34); in particular, I want to argue that Eilola is in fact forcing a radical confrontation between three forces: the symbolic-analytic work appropriate to the information-saturated datacloud and the postmodern culture it supports/is supported by; the “forces of productivity,” here understood as industrialized material production and the forms of cultural consumption it engenders; and, stuck in the middle, the university. Without insisting unduly on a Marxist reading of Eilola (given his own apparent misgivings about the project of such critiques), what Eilola offers here is an analysis of the digital means of production and a social structure whose future is contingent upon the outcome of the confrontation among these three forces.
    Consider. According to Eilola’s citation of Reich, the twenty-first century post-industrial economy is structured around service work and symbolic-analytic work (28). Unlike service workers, who “complete routine, relatively undervalued tasks”, symbolic analysts are now “the most valued workers”, enjoying higher pay, social status, professional opportunity, and geographic mobility. The shift from industrial manufacturing work to postindustrial service and symbolic work—although Eilola opts not to emphasize the point—is potentially as radical a shift as that from a primarily agrarian pre-industrial society to a mechanized, modern industrial society.
    Despite the burgeoning promises of the information economy, the forces of industrial production remain strong. Eilola suggests that while computer and information management skills have become widespread—and with them the hope of a broader distribution of economic agency and access to opportunity—they have at the same time become colonized by industrial production. Although now anyone can create a webpage using basic software packages (or, more recently, blogging software or wikis), the fact that users are divorced from the need to know the underlying coding languages limits the sorts of tasks and works they can perform with these forms of production; as Eilola states, “the ease of use tends to move the task of constructing Web pages away from a symbolic-analytic skill and toward a routine production or in-person service skill” (49). Thus made complacent, users neither see nor feel a need to learn the concepts and frameworks beneath the interface, and the threat they pose to industrial production (and to the gendered symbolic-analytic elite) is mooted: “the tendential forces demanding productivity militate against broader forms of learning for many users” (51)—that is, at the expense of learning the deep structures of symbolic-analytic work and, with that, the possibility of infiltrating the new social elite.
    (I shall leave aside here the related question that Eilola raises about the consumption and production of Web pages being left separated and isolated from one another (102), even to the extent that they are conducted in separate programs, except to underscore the argument implied above that ease-of-use, the abjuring of the need for broad learning, is an easy parallel to the mystification of production and capital in classical Marxism.)
    Where, then, does that leave the university? As Eilola would have it, the university is left making the most of a bad lot. On one hand, the university is committed to teaching “useful skills” (72): word processing, databasing, spreadsheets, professional presentation software. On the other, though, these skills “tend to prioritize more traditional forms of work,” and as a result of our own misunderstanding of symbolic-analytic work “we have yet to do an effective job of helping people learn” the information management skills necessary for life and work in the datacloud. Moreover, the material constraints placed on the institution—Eilola here discusses the common “goal of maximizing space usage” (82)—make instruction in these sorts of productive skills difficult to achieve in anything more than small, ad hoc groups. So without institutional support for learning the skills of symbolic analysis, and with self-directed learning discouraged within professional settings either colonized by or still working through the paradigms of industrial production, where can the next generation of learners acquire the skills needed for competitive performance in the information economy?
    Eilola leaves this crisis unresolved, and it is there that perhaps we might establish a series of questions to further the discussion:
    1. As education professionals, how do we understand our responsibility to our students in the terms described here? Is it our task (either within the university as a whole or particularly within English departments) to understand the demands of this economy and help our students develop the skills necessary for it—or is our goal less mercenary and something more traditionally humanistic?
    2. Can we best understand the relationship between industrial and postindustrial work/production/society as one of crisis, confrontation, evolution, mutation, or some other form of connection? Can we (thinking back a couple of weeks) understand the industrial-to-postindustrial shift as a dialectical engagement of two forms of work?
    3. How might the forms of information work described here (and perhaps demonstrated in the Coda) inform or influence our own scholarly production?

  7. Tell me you’re surprised that the section in Datacloud that I found the most striking was the one using predominantly human-based interactions to demonstrate the need for different computer-based interfaces to facilitate symbolic-analytic work. I found his discussion of Turntablism and the sound experiments conducted by the Flaming Lips to be especially illuminating and illustrative of how our interactions with computer interfaces and the manipulation of information have changed in the face of new technologies. However, even though these examples are strongly indicative of the direction information architecture and interface design need to go in order to better facilitate this type of symbolic-analytic work, an equally strong picture, I believe, has been painted for the continued synthesis of man, machine, and space to create a conducive symbolic-analytic environment.

    In his discussion of DJs, sound experiments, and the office spaces of Brent Faber and David Dies, Johnson-Eilola describes environments that combine computers, MIDI keyboards, and other electronic devices with such old-school “technologies” as blackboards, white boards, notebooks, turntables, boom boxes, cassettes, and car stereos. But it is in the synthesis of these media that the truly interactive space is created and the symbolic-analytic work is done. Faber’s idea boards and Dies’ notebooks provide additional space in which ideas can be generated and fleshed out before they are imported into the digital environment for manipulation, and the digging for vinyl in dingy record store basements provides the Turntablists with the data they need for their programming and mixing, thus creating an “emerging space for symbolic-analytic work, one richer than most of us experience on a daily basis …” (114). Although the technology exists to conduct the work that a Turntablist does in an entirely digital manner—downloadable MP3 files, CD players that are actually “scratchable”—can the “richness” of this symbolic-analytic environment be recreated in a digital realm, or is it dependent upon the previously described man-machine interaction? I believe that anyone who has played, on a home stereo system, a mix CD produced by a DJ, as opposed to seeing the DJ live, can attest to the fact that something is indeed lost in the translation. Or, had David Dies designed all of his sound shapes by direct interaction with the computer, instead of experimenting with MIDI keyboards and rack-mounted processors and taking notes in a notebook, would the vibrancy of the end product have lost some of its sheen?

    In the example of the Flaming Lips’ sound experiments, in order for them to be properly conducted (no pun intended), it is necessary to have a collaboration and synthesis between the technology, the artists, and the audience. People need to gather in a physical space and interact with each other and the present technology, which in this case is very low-tech, to create a mutually shared experience—one that is valuable to participants, conductors, and audience alike. Contrast this with the somewhat frightening vision of what an entirely computer-based symbolic-analytic work system may look like: “During one class several years ago, my department chair sat in the back of the room while I and 15 PhD students in a seminar sat in front of our keyboards typing madly away, interacting with each other in our MOOspace. The only sound in the room was the clattering of keyboards interspersed with muffled laughs and periodic interjections of ‘Yes!’ and ‘Ha!’ as participants in the onscreen conversations grappled with the readings assigned for that week” (95). Although my own experience with the use of Instant Messenger systems in the workplace has been extremely positive—allowing more interoffice interaction to take place without as much disruption of workflow, and also allowing more slacking off to take place without as much disruption to remaining employed—there is still something to be said for gathering in a room to brainstorm, discuss ideas, and scribble on a blackboard, white board, or even a SmartBoard; gathering in a room to type stuff to each other seems to be taking things a bit far.

    If there is value in the synthesis of digital space, physical space, and the human-machine interaction that takes place within this context, is it desirable or even possible to create interfaces through which time and space can be manipulated in the ways that Faber, Dies, and DJ Jazzy Jay manipulate time and space in both their physical and technological environments?

    Also, see the notes section on page 58 for the requisite mention of Neuromancer.

  8. Sorry for the delay…somehow, I thought I had already posted this:

    The following is a response both to Johnson-Eilola’s Datacloud as well as a relatively brief speech that he gave immediately preceding the release of the aforementioned publication. Johnson-Eilola’s speech, entitled “Datacloud: Expanding the Roles and Locations of Information,” is mainly concerned with his conception of “documentation,” though parallels can also be found to his discussion of flatness and interface overflow in chapters three and four of Datacloud. In both the speech and text, Eilola attempts to engage with the fundamental characteristics of computer interaction especially as these characteristics relate to workplace practice (9-10, 34).

    As Eilola argues, the meaning of documentation has shifted quite dramatically over the past several decades. This shift, of course, is directly related to significant periods of technological innovation and development. As we move from print-based to online formats, significant shifts in tech-education and social activity occur. Here, Eilola attempts to sketch a history of computers as “technologies for work” by examining computer interfaces. These interfaces, as Eilola denotes, range from the command-line to the spatial hybrids that emerge later.

    Although the predominant thrust of the initial portion of this speech is to argue that a certain aspect of human interaction is in decline – specifically, the master/apprentice relationship characteristic of earlier tech-education practices – more redeeming arguments arise as the speech and his work progress. In later portions of the speech, Eilola suggests that computer interaction might actually encourage a return to the social and a dramatic reinstitution of the spatial.

    Here, things become a bit more complicated. If the predominant thrust of Eilola’s initial argument is to suggest that computer interaction confines the person by limiting interpersonal contact, and that computer processing flattens information (chapter 3), this notion is dramatically disturbed as Eilola continues. He argues that whereas the older interface confined the computer-user, contemporary work practices demonstrate that the interface encourages outward movement; information, as Eilola argues in chapter four, spills over the very edges of the interface. Already, Eilola’s imagery invokes the physical: The computer user is compelled to move about the office, through the bounds of space, while using the computer. Drawing on the examples of several colleagues, he argues that the computer stimulates movement through both virtual and physical realms. Notes taken from a blackboard are refined on paper and then transferred to the computer interface.

    Of course, in even more contemporary contexts, better examples of such movement come to mind. One might consider Wii Fit to exert a similar effect. It is, in essence, a rather complicated computer technology which encourages the user to move throughout space. Though the purposes are different, this example validates Eilola’s suppositions.

    So, perhaps, Eilola can be understood as suggesting that a return has been enacted; that something of the original character of personal/technological interaction has reappeared. Though, as Eilola notes, this type of social interaction never really disappeared: “new types of micro-context do not completely erase previous ones – people continue to work in apprenticeship systems and use print manuals to this day.” What Eilola describes here is strikingly similar to the model that Stiegler proposes in Technics and Time. Whereas Stiegler suggests that technical innovation should be understood as a series of interlinked and interdependent technical systems, Eilola picks up on the layered nature of the technical micro-context.

    It is at the interstice of the fourth and fifth chapters that Eilola’s motivations become a little confused. Eilola’s nostalgic move in chapter four is an attempt to locate some redemptive merit in contemporary technology by relocating space. Here, partially, he is working towards an understanding about how workspace might be used more effectively. In chapter five though, it seems that Eilola is hailing the almost schizophrenic nature of the “Datacloud”: “rather than a single conversation, I had a small Datacloud, a relatively unstructured mass of conversation in which multiple conversations occurred on the same general plane. Like most users, I developed skills at navigating this space…” (94-95).

    Here though, I guess one aspect of my confusion rests with Eilola’s work in relationship to surface and embedded systems. Part of Eilola’s nostalgia arises in relationship to embedded or buried systems. Here, Eilola suggests that such systems force the user to learn; that the user must learn how to search out the answers to various inquiries. What ultimately becomes problematic with more contemporary surface systems is that they give the user answers far too easily. According to Eilola, the general accessibility of information contributes to the greater disablement of the user. So, if the Datacloud can be interpreted as a surface system, is Eilola finally conceding that this system has its educational benefits as well, as he seems to be suggesting in chapter five? I am just confused about the more general oscillations that occur in the text.
