Friday, May 9, 2008

Assignment #4 - Code in the Linux subculture

As technology continues to integrate itself into our daily lives, we are confounded by the translation of language into its digital parallel: code. Code complicates language by mediating modes of communication and what they aim to communicate. I aim to examine how the Linux community exploits this coded mediation to a utilitarian purpose. As Hayles argues, code exceeds speech and writing in its capability because it possesses characteristics beyond a representative sign or a functional signifier; code can represent the relationship that exists between the two.1 This notion of code serving a dual (or multiple) purpose was previously alluded to by cultural theorist Dick Hebdige, who examined social codes as formative for subcultures, which utilize these codes to recognize and verify authenticity. The obvious problem is that these codes are then often broken by their very identification and translation. Linux, as a digital subculture, stands to reconcile this contradiction. Because of its unifying open source ethos, Linux is able to identify its own codes, both cultural and binary, as well as adapt to the translation into other languages, thereby transcending the inherent contradictions in the relationship between sign and signifier that traditionally undermine subcultural models. The utility of code is what enables Linux to proliferate; that proliferation reinforces code as functional language.

The presence of code in daily life often goes unnoticed; digital mediation is nearly assumed present in oral and written communications. From Saussure’s claim that “the spoken word alone constitutes the object”2 through Derrida’s assertion that it exists as but a signifier to the actual sign itself, observable problems arise in the development of the relationship between speech and writing. Code functions to reconcile the two by acting as both sign and signifier, both interpretable and applicable—but as this reconciliation is translated to code through digital mediation, the in-between becomes truly revealing. Katherine Hayles states the need for “nuanced analyses of the overlaps and discontinuities of code with the legacy systems of speech and writing, so that we can understand how processes of signification change when speech and writing are coded into binary units.”3 If code can theoretically assume roles of both speech and writing, how can it do this practically? What are the implications of code as a hybrid language system?

Alexander Galloway points out that “code is the only language that is executable,”4 in reference to computer codes; but this evaluation is not entirely complete. Not dissimilar to the way varying dramatic, legal and sacred texts can be performative, code can be executable when its functions parallel speech or writing. Subcultures exemplify this execution of code. Subcultures exploit the disconnect between sign and signifier to encrypt the underlying meaning. “In this way, its very taken-for-grantedness is what establishes it as a medium in which its own premises and presuppositions are being rendered invisible by its apparent transparency,”5 writes Stuart Hall. This act of rendering invisible the essence of code, code as dual meaning, is itself the execution of code as language.

Yet as Hebdige approaches this subcultural appropriation of code, there is an immediate contradiction in bestowing authority upon constitutive, authenticating signs and signifiers to serve as an alternate parallel language to speech or writing. Citing Barthes’ cultural appropriations of the linguistic method, Hebdige interprets that “it was hoped that the invisible seam between language, experience and reality could be… rendered meaningful and, miraculously, at the same time, be made to disappear.”6 However, this is indeed a hope. The codes adopted by subcultures as significant pose an inherent problem in execution: to execute subcultural code is to activate a signifier by identifying its sign, which breaks the code by oversimplifying the relationship between the two.7 Trying to execute the unexecutable is possible, but defeats the purpose.

Linux inverts this subcultural model by taking advantage of this oversimplification. The Linux community evolved in direct response to dominant operating systems, such as Mac and Windows8, whose code was all closed source. The initial desire for modifiable open source code led to an interactive collective of programmers who were not rebelling against, but adapting to, the programming of Mac and Windows. Yet in its adaptation of computer code, Linux indirectly outlined social codes identifying and structuring itself as a digital subculture, codes that emphasized the unifying facet of Linux as code translatable to every person and operating system in order to be truly utilitarian. This essential tenet, from which Linux’s subcultural identity stems, is that open source code is free9 for everyone to use, modify, reprogram, republish, and distribute.

By making these links within its functioning transparent, Linux not only identifies the function of its subcultural codes but also exploits the utility of coded language by adapting to the detrimental sign/signifier relationship of speech and writing; it does not exist parallel to, but interactive with, speech and writing, transcending inherent barriers and absorbing the linguistic exchanges into its function. Essentially depending on coded language for both adaptation and cultural identity, Linux thereby inverts the traditional subcultural model that is undermined by translation because the language of code is adaptive enough to do so.

The rift between speech and writing, sign and signifier, is deproblematized with code because it is able to function as both. Florian Cramer summarizes: “Read as a net literature and a net culture, Free Software [like Linux] is a highly sophisticated system of self-applied text and social interactions. No other net culture has invented its computer code as thoroughly, and no other net culture has acquired a similar awareness of the culture and politics of the digital text.”10 Whether Linux can endure upon this awareness of codes is yet to be seen, but as it exists, its codes are its essence.




1. “The exchanges, conflicts, and cooperations between the embedded assumptions of speech and writing in relation to code would be likely to slip unnoticed through a framework based solely on networked and programmable media, for the shift over to the new assumptions would tend to obscure the ways in which the older worldviews engage in continuing negotiations and intermediations with the new… [in] the reverse operation of trying to fit the speech and writing systems into the worldview of code… here too I expect the discontinuities to be as revealing as the continuities.” Hayles, Katherine. Speech, Writing, Code: Three Worldviews, My Mother Was A Computer. Chicago: Univ. of Chicago Press, 2005. p45.
2. As cited by Hayles, p42.
3. Hayles, p39.
4. As cited by Hayles, p50.
5. Hall, Stuart (1977) as cited by Hebdige, Dick, From Culture to Hegemony, Subculture: the Meaning of Style. New York: Routledge Publishing, 1979. p11. Hebdige continues: “Notions concerning the sanctity of language are intimately bound up with ideas of social order. The limits of acceptable linguistic expression are prescribed by a number of apparently universal taboos. These taboos guarantee the continuing ‘transparency’ (the taken-for-grantedness) of meaning.” [p91]
6. Hebdige, p10.
7. Hebdige uses punk as a prime example. When you identify something (spiked hair, nose piercing, etc.) as being “punk” or replicate it as punk, its authentic quality (“punkness”) is reduced.
8. Raymond notes that Linux code was written to operate on PCs, yet its open source nature is also inclusive and adaptable. Raymond outlines “lessons” in Linux, examples that demonstrate this: “2. Good programmers know what to write. Great ones know what to rewrite (and reuse).” “7. Release early. Release often. And listen to your customers.” “10. If you treat your beta-testers as if they’re your most valuable resource, they will respond by becoming your most valuable resource.”
9. “Free as in speech, not as in beer.” Torvalds, www.fsf.org.
10. Cramer, Florian. Free Software as Collaborative Text. Berlin: Freie Universität Berlin, 2000.

Assignment #4: Information Access


COMMENTS

Texts Used: “As We May Think” by Vannevar Bush; “Return to Babel” by Boast, Bravo, & Srinivasan

The next time you’re on YouTube, take a look at what might be the most important part of the page (sans the video): the tags. The tags, which facilitate YouTube’s search feature, determine who sees the video; they play off of interests, or popular searches. It’s not uncommon to see users tag their videos with popular buzzwords, often just to catch the eye of more viewers – relevancy is optional. When Vannevar Bush first conceptualized the MEMEX in 1945, he saw, in fact, a relatively simple system which played off of “the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain,[1]” much in the same way as these tags do. In creating databases, hyperlink maps, and indexes, we create “management tools, not access tools[2]” - but management tools may be all that we need. Although the ED2 project seeks to “make the Web a true knowledge resource[3]” by claiming a greater emphasis on temporality, spatiality, authorship, & contention from information objects, changing the system “to recognize and accommodate the negotiated, narrative, emergent, and incommensurable nature of knowledge production and use[4]” can be, depending on the user, superfluous.

The acquisition of “knowledge” in the sense that Srinivasan describes it is not necessarily the primary aim of the Internet, nor would it be its most beneficial function for mankind as a whole. It may be the nature of today’s culture, but complete understanding of a subject is, most likely, a bit much for the average user. “…the knowledge of a particular topic… rarely can be uncovered within a single description or descriptive trope.[5]” Why should it be? Bush’s concept of the MEMEX was based upon the instantaneous accessibility of knowledge, “provision for the consultation of the record by the usual scheme of indexing.[6]” Greater depth of understanding via narrative and personal perspective should be (and certainly is) accessible to those who seek it – not forced upon a curious individual simply seeking an introduction to a topic.

(There are, however, limitations to the MEMEX concept that were eventually overcome by the modern nature of information technologies. First, storage limitation: although the MEMEX could hold a great deal of information, it was ultimately finite. The presence of information over a worldwide distributed network [i.e. the Internet] allows for the compilation of even more data, the “process of tying two items together.[7]” Secondly, there seems to be an over-reliance on codes. By using codes to pull up certain material, one has to know each code specifically [the “mnemonic” bookmarking for frequently used codes notwithstanding] to pull up a certain text. There’s also no way to look for a certain passage. The emergence of “search engines” has served to alleviate many of these concerns; further discussion of this phenomenon, however, is beyond the scope of this posting.)

ED2 claims to concern itself primarily with three issues: “temporality/spatiality, authorship, & contention from information objects.[8]” It cites the concept that “Knowledge claims are of a time and place... Information objects are translations of these authored stories to timeless abstractions.” While the first statement may be valid, the second is not necessarily true; many “information objects” such as website postings and wiki entries have information (either externally visible or embedded) on who authored or edited the piece of information, when (date and time), and even where (IP information). As for the contentions of information objects, i.e. the constant discourse & debate over the validity of information, the claim that “the transformative processes that create information remove these dynamics[9]” is more than a bit misleading. At least in terms of mainstream information (encyclopedias, news articles, etc.), information is subject to constant revision, most visibly in the discussions over information found throughout Wikipedia and among its many contributors. Additionally, scientific resources such as Nature, although static in individual publication, are dynamic in their constant revision of knowledge through the publication of many information objects on one subject over a period of time.

Ultimately, the question of how information should be labeled, organized, indexed, and distributed is one of ontology – “the way in which a certain community negotiates the conceptualization & organization of its knowledge and information.[10]” For the indigenous groups described in “Return to Babel,” a greater contextual understanding is certainly desirable. However, for the “mass audiences” of society, to which the Internet predominantly caters, the current system of information organization & retrieval is more than sufficient. As mankind’s store of information grows with time, its ease of accessibility may grow as well – it all depends upon the systems in place which govern & facilitate our own inquisitive minds, “blazing trails” through the past, present, and future.



[1] Bush, 6.

[2] Srinivasan, 1.

[3] Srinivasan, 9.

[4] Srinivasan, 9.

[5] Srinivasan, 6.

[6] Bush, 7.

[7] Bush, 7.

[8] Srinivasan, 9.

[9] Srinivasan, 10.

[10] Srinivasan, 6.


Freedom In The Internet Era

COMMENTS please!

Since the beginning, the Internet has been intimately linked to the notion of freedom--freedom from authority, freedom to move, and freedom to create. As the Internet has become an increasingly larger player in the economic realm, freedom in the context of the Internet has changed. Today, Internet freedom means interactive and collective activity free of cost. Tiziana Terranova gestures towards the economic aspects of the Net’s freedom in her essay “Free Labor: Producing Culture for the Digital Economy.” Entwined with these economic implications is the aspect of Internet freedom Julian Dibbell examines in his book My Tiny Life: its communal spirit. However, Dibbell’s argument, which is made possible by the phenomenon Terranova writes about, also brings up another important point to consider. While these free communal activities can be positive, they can also be perilous, making users extremely vulnerable to the ill intentions of other users who exploit this two-fold system of freedom.

When Dibbell’s and Terranova’s texts are combined, the definition of “free” doubles. Put the two together, and freedom means both without cost (Terranova) and without boundaries (Dibbell). The combination of these two definitions has profound implications for the kinds of activities that will occur on the Internet in the future. At the end of her essay, Terranova claims that the Internet “is dispersed to the point where practically anything is tolerated” (Terranova 53). Terranova continues, stating that the Internet produces a “digital economy that cares only tangentially about morality” (Terranova 53). The cyber rape Dibbell chronicles in My Tiny Life corroborates Terranova’s claim. It also happens to be a result of this new and hybridized notion of freedom.

The two types of freedom, gratis and without strictures, are closely connected. Terranova’s definition of digital labor, which is necessarily free and collective, is a key place to start.

Simultaneously voluntarily given and unwaged, enjoyed and exploited, free labor on the Net includes the activity of building Web sites, modifying software packages, reading and participating in mailing lists, and building virtual spaces on MUDs and MOOs. (Terranova 33)

The important part of this explanation is when Terranova mentions that MUDs and MOOs are products of free digital labor. This labor is enjoyable because in exchange for the work, the worker gets the pleasure of communicating with others in the new system. LambdaMOO says as much about itself on its site: “LambdaMOO is a new kind of society, where thousands of people voluntarily come together from all over the world” (qtd. in Dibbell 11).

Julian Dibbell goes on to describe LambdaMOO as “a very large and very busy rustic mansion built entirely of words” (Dibbell 11). However, this world of words is dangerous. As Dibbell observes, “what transpires between word-costumed characters within the boundaries of a make-believe world is, if not mere play, then at most some kind of emotional laboratory experiment” (Dibbell 23) whose results can have graver consequences than anticipated.

In accordance with Terranova’s definition of digital labor, LambdaMOO is a place created by users and for users. This setup makes LambdaMOO a free speech utopia. Users are liberated (free) to say whatever they please. However, Dibbell also sees the negative connotations of this free-for-all. In this word-costuming is “the power of anonymity and textual suggestiveness to unshackle deep-seated fantasies” (Dibbell 16). This seductive combination compels some users to take actions they would never dream of performing in RL, or Real Life. In fact, it was precisely these conditions that facilitated the cyber rape Dibbell wrote about.

The Bungle Affair, as Dibbell calls it, comes full circle to Terranova’s observations that the digital economy is only minimally concerned with morality (Terranova 53). Almost anything is tolerated in the dispersed expanse of cyberspace (Terranova 53). The problem arises when something happens that users collectively agree is wrong. In this new society free from rules and regulations, what happens when somebody crosses the line? And, for that matter, in this doubly free society, where is the line, anyway? These tricky questions were the dilemmas the members of LambdaMOO had to navigate in the wake of the Bungle Affair.

Together, these texts by Terranova and Dibbell paint a picture of the future. As free digital labor becomes more widespread, this confusion over where freedom ends and boundaries begin will arise over and over again. As Internet users share more and more of themselves with each other in this newly open and free environment, the more vulnerable they will become. For generations, parents have warned children eager for independence that with freedom comes great responsibility. As Internet users gain even more freedom through labor and expression, the stakes just keep getting higher, and responsibility for this new swelling of freedom is not to be taken lightly.

Works Cited 

Dibbell, Julian. My Tiny Life: Crime and Passion in a Virtual World. New York: Henry Holt & Co, 1998.

Terranova, Tiziana. “Free Labor: Producing Culture for the Digital Economy.” Social Text 18.2 (2000): 33-58.

Sharing documents (makeup post for last week of class)

When I first read the GNU FAQ, I was very unimpressed. The philosophy seemed a little elitist in suggesting that normal people should just drop everything and use free software. It's been a long time since cars were simple enough that people fixed their own; now people go to the shop even for an oil change.

But then I wandered around for a while and discovered a page (under Philosophy) about why sending .doc attachments in e-mails is bad for everybody. Again, it seemed a little too sure of itself, but the point is sound: sending a proprietary format attachment, which Microsoft changes so that programs like OpenOffice have a hard time reading it, is a little rude. It assumes that your recipient has Word, which some people don't want to or can't spend the money on.

So maybe this affects us all more than we think. Another corporation has hooked people on its product. But how is the average computer user supposed to resist buying the latest version of Microsoft Word when she knows that all her friends and colleagues will be sending documents she can't read, instead of text, rich text or portable document formats? It's a question I certainly can't answer, and it remains to be seen whether regular people who don't know what goes on inside the box will ever care about where their software comes from.

Make-Up Blog (Week of March 31 - April 3)

Part of an e-mail to Ramesh after his Skype visit to class:
In "Indigenous, Ethnic, and Cultural Articulations of New Media" and other articles, your new media models and interventions seem to attempt to form bridges across disconnected reservations. Have there been any physical effects as a result of these information systems? Have you seen any movement or migration of people between these reservations, and can any of it be attributed to (or simply compared with) the (electronic) structure of the networks you've implemented? Also, do you see any possibilities for your ideas to apply to small, marginalized, nondiasporic communities who have common ties apart from race and ethnicity?

Make-up Blog (March 17-19)

Agre’s discussion of surveillance casts it as a model, “a set of metaphors,” which maintains an “identification with the state . . . with consciously planned-out malevolent aims of a specifically political nature” (Agre 743). He also notes the conflation of human bodies with their constituent parts or with objects metonymically associated with them (e.g., “a system that tracks trucks can generally depend on a stable correspondence . . . between trucks and their drivers”) (742). This made me think of biopower as part of the state’s political aims under conditions of near or imagined “total surveillance” (737). Agre agrees, to a paranoid extent, with Foucault about disciplinary/surveillance societies simultaneously forcing and enforcing compliance with their mechanisms of organization and ideological aims. By objectifying the body and its parts to make it trackable, institutions impose “a moral influence over behavior” (Foucault 210) and regulations of the body with eugenic implications.

Thursday, May 8, 2008

(very) Late Post on Surveillance (Week of 3/17-3/21)


I recently came across this video surveillance game called Vigilance 1.0. The player is tasked with maintaining morality and order by monitoring surveillance cameras and denouncing any transgressions. The player is rewarded for punishing anyone caught in the act of "robberies, pocket-pickings, burglaries, shop-lifts, breaches of the highway code, trash-abandoning, drug dealing, solicitation on a public place, procuring, drunkenness, sexual harassment, adultery, incest, pedophilia, zoophilia, necrophilia, etc."

The art-game's description and gameplay are intentionally tongue-in-cheek. There is no winning, as the game continues indefinitely and the system arbitrarily assigns point values to crimes (+2 points for prostitution, +10 for bagsnatching, -1 for false allegations). The game's developer claims that for the player, the game is "At the end, the denunciation of a controlled society, the total visibility and spying, putting him in a position of self-denunciation." Ostensibly the game aims to question the role of people in controlling and capturing in society. The player is cast not as a criminal avoiding constant surveillance but as a security guard faced with the responsibility of upholding justice.
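The arbitrariness of that point system is easy to see if you sketch it out; the values below are the ones cited above, while the offense names and the function itself are purely illustrative (the game's actual scoring rules are not published in this form):

```python
# Illustrative sketch of Vigilance 1.0's arbitrary scoring.
# Point values are those cited above; offense labels are paraphrases.
SCORES = {
    "prostitution": 2,
    "bagsnatching": 10,
    "false allegation": -1,
}

def denounce(offense: str) -> int:
    """Return the points awarded (or deducted) for denouncing an offense."""
    return SCORES.get(offense, 0)

# A session of denunciations simply accumulates, with no end state:
total = denounce("bagsnatching") + denounce("false allegation")  # 10 - 1 = 9
```

Laid out this way, the moral ledger looks exactly like what it is: a lookup table with no internal logic, which is presumably the artist's point.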

Through Agre, Foucault, Deleuze and Virilio, our discussions of surveillance have focused on the structures and networks of "control society" and the "REALTIME" of the contemporary world. Examining this game also makes me wonder about the possibility for individuals to decide whether or not they want to enforce these networked, mediated systems of control, or whether, as this game seems hopefully to suggest, control depends as much on the system (the structures, networks and media) as on the agency of the people enacting it.

Tuesday, May 6, 2008

(very) late post on jameson

Jameson suggests in “Cognitive Mapping” (from Marxism and the Interpretation of Culture) that within the space of the postmodern era, all voids and gaps are filled. As “the truth of experience no longer coincides with the place in which it occurs” (349), the individual who experiences this new space becomes schizophrenic. (This schizophrenia is similar to the disorientation of the subject within Virilio's visual crash, so the solution for which Jameson searches may also help to prevent, or at least soften, that crash.) Jameson seems to hope that his aesthetic of cognitive mapping will intensify the individual subject's sense of place in the global system and rescue him from his schizophrenia. Manovich's navigable space of new media is then a symptom of the fragmentation and schizophrenia that Jameson speaks of, because the navigator must jump from one discrete object to another to move through it.

Jameson's political agenda here may also relate on some level to Deleuze, who insists that “the crisis of the institutions, which is to say, the progressive and dispersed installation of a new system of domination” calls for “new forms of resistance against the societies of control.” (Postscript on the Societies of Control, p. 7) However, Jameson's desire for totality seems reactionary and perhaps a bit conservative because he assumes that, in the postmodern era, the individual may be completely detached from the local and consumed in the global experience, and if so, that this is a bad thing which must be met with a solution. Unlike Haraway, who embraces the evolution toward the cyborg and the pastiche of postmodern experience, Jameson calls for a return to a more traditional local sense of place within the global system, rather than the saturated global sense.