My mother grew up with the brutality of the Vietnam War airing each evening on the nightly news. In my generation, the nightly news broadcast brutality and atrocities from Reagan’s Central America, the Pinochet regime in Chile, and, of course, Apartheid-era South Africa. I recall watching thousands upon thousands of Black South Africans linked arm in arm, dancing and chanting in protest in the barren townships, then running and trying to disperse as police fired live rounds indiscriminately into the crowds. I watched footage of police brutally beating Black South Africans, dragging them into trucks and hauling them off for “interrogations.” This was a regime supported for years by the U.S. and by transnational capital and its interests. South African activists asked for international support, and the righteous around the globe responded, putting pressure on the terrible Apartheid government to free Mandela and, ultimately, to end Apartheid.
Nelson Mandela’s liberation and the downfall of Apartheid were some of the most incredible events I have witnessed in my life. The world has lost an incredible leader and advocate for social justice for all people. We need more people to rise up and stand up for what is right. There could be no greater legacy that we could live in his honor.
Rest in power, Mr. Mandela.
I came across a disturbing case from the Languedoc region of France today, while perusing headlines on Salon.com. A 14-year-old girl who had been repeatedly victimized sexually by her father had reportedly caught the abuse by employing her computer’s webcam. The key to the most disturbing aspects of this extremely upsetting story lies in the fact that the local police had informed the young woman that they could not (would not) take any action without “evidence” of the abuse. In other words, a 14-year-old girl had to arrange a sting operation of her own father/abuser, and then put herself in harm’s way not only to be abused again, but to have evidence of that abuse be digitally captured – a format that is easily copied, transferred and transmitted, and that will never lose its data integrity. As many law enforcement officials and abuse victim advocates explain, a victim who knows that evidence of his or her abuse is being seen over and over again can often feel that each viewing is yet another victimization. In this case, the new instance of assault caught on camera could have been avoided altogether had police or other authorities removed the child from the home when she first brought her allegations forward.
The onus that this 14-year-old child was under to do the police work that should have been undertaken by adults is a burden no one should ever have to face. And yet it is one that, in an era of “pix or it didn’t happen,” appears to be becoming the status quo for those who have been victimized at the hands of others; two years ago, I wrote about a case of a Texas teen having caught her physical abuse on video and then using YouTube to seek justice against her father, a local judge.
Yet more disturbing still, even when abuse allegations can be proved via what appears to be unequivocal evidence (warning: link contains photos), such as in the case of the Steubenville rapists who documented their sexual assault of an unconscious female throughout a night, the evidence does not necessarily register as conclusive or clear-cut in the eyes of a jury or the public. In that case, the images made their way through social media and into the mainstream; even if one wanted to avoid viewing them, it was almost impossible not to when following the trial. In the case of the French child, the father’s lawyer is already trying to find a way to minimize the impact of what ought to be damning evidence; the first excuse that has been offered is that the father was dealing with a period of unemployment. Okay.
When the onus for protection is shifted in this way onto an abuse victim and/or onto a minor child, a disturbing precedent is set. What of those who can’t or do not want to record and document their abuse in this way? Are they then to have access to some lesser form of justice, or perhaps, no justice at all?
Greetings to all listeners who just heard the interview with me on today’s All Things Considered. If you’d like to learn more about CCM and my research about the practice and people who do it, please hop over to this page for an overview and detailed discussion. For those who missed the story, you may find it by following this link to NPR’s archive of it, and the text version that accompanies it.
Thank you for your interest, and welcome!
My article on the problematic and ubiquitous university/state employee “ethics test” can be found in an edited version at the Chronicle of Higher Ed’s “Conversation” website. Please visit and add your voice! One commenter described the tone as “whiny and PC,” so I must be doing something right…
Hearty congratulations to Dr. Miriam Sweeney on the successful defense of her dissertation, “Not Just a Pretty (Inter)Face: A Critical Analysis of Microsoft’s ‘Ms. Dewey’,” at the University of Illinois Graduate School of Library and Information Science (GSLIS). Dr. Sweeney is an Assistant Professor at SLIS, University of Alabama. The defense was lively, entertaining and deeply interesting! Well done and congratulations, Miriam!
Her committee included: Linda Smith, Chair; Lisa Nakamura (Screen Arts & Cultures and American Cultures, Michigan; formerly at Illinois); André Brock (Communication Studies, Michigan); Allen Renear, Dean, Graduate School of Library and Information Science, University of Illinois.
If you are an employee of a higher education institution, you can likely set your watch by it: the dread annual ethics test. Usually presented as a self-paced, online “learning module,” the test is designed, ostensibly, to measure your ability to deal with complex workplace situations. Some of the situations require use of your best judgment, while for others the best responses are dictated by state or federal law, or university policy. Yet one thing has become clear over the years: many of these ethics tests bear very little resemblance to the kinds of ethical problems those of us working in university environments are attempting to confront, usually through our own teaching and research.
Instead, these tests are exercises in disciplining employees to a particular kind of logic: one that reinforces the supremacy of the administration, the need to unquestioningly follow rules, the mandate to surveil and report on coworkers, and a focus on “ethics” at such a micro level (e.g., don’t misuse office supplies; don’t seek reimbursement for a non-business luncheon) as to render the whole process a joke, were it not simultaneously so fundamentally insulting.
Consider the case of the University of Illinois, where all employees, including graduate student teaching assistants, are required to refresh their ethical skills each academic year. In this particular test, employees are introduced to a cavalcade of characters, representing various cultural and ethnic affiliations from a stock art company somewhere, who are confronted with ethical dilemmas to which employees must respond. A wrong answer leads to an explanation of why there is another, better answer, and how the employee should behave when encountering similar tricky situations. Through this test, we get to know Keisha and Amelia and a number of their ethically challenged friends and coworkers and learn, through their foibles, what we ought to do in similar cases.
Such tests are thoroughly commonplace in most higher ed workplaces, but the one for the workers at the University of Illinois comes with an extra dose of irony. This institution has made headlines in past years for ethical problems of its own. Admissions scandals involving influence peddling, cushy appointments for disgraced high-level administrators, continued resistance on the part of the administration to meet the terms of their graduate employees’ contract, living-wage battles for campus food service employees and graduate employee strikes have marred the integrity and any claims to ethical leadership the U of I may have, at one time, possessed.
Indeed, these training guides appear to be in the service of presenting an alternate reality – one that denies the issues described above by their glaring absence and focuses, instead, on comparative ethical minutiae and on redressing the actions of a few bad actors, rather than examining or even acknowledging the existence of systemic inequities. Further, the tests are visually and culturally mapped into a post-racial discourse of multiculturalism and diversity whose underlying logic wholly negates the lived realities of social inequality, an exploding wealth gap, minority scholars fleeing this campus, students living below the poverty line, racialized crime profiling, and domestic abuse on campus. In short, to take the ethics test every year is to experience a profoundly cynical feeling of cognitive dissonance. In the world of the university ethics test, one office worker making $30,000 per year can stem the tide of a university budget deficit. In the ethics test universe, one groundsworker can tattle on his boss for taking a lunch with a potential service provider, and this somehow competes with cases of graft and corruption at echelons far beyond those of the workers depicted in the modules. In fact, the ethics training is a mechanism for the administrative elites to control and manage employee behavior and maintain the status quo, using technological systems and scientific management techniques (e.g., standardized tests) to do so.
Consider this year’s addition of the case of Amelia:
Amelia, an employee at the university, takes on a teaching job at another state school and is reprimanded when her supervisor (presumably told about this by a coworker of Amelia’s?) learns that Amelia is using her university-issued computer to complete the work. There are two possible choices from which to pick in order to answer the question regarding Amelia’s situation, but neither of them asks the questions so obvious to my colleagues and to me: why does Amelia need to take on a second job to make ends meet? Why doesn’t the university pay her enough so that that isn’t necessary? And what do we know about the terrible, and often tragic, precarity experienced by people who adjunct full-time? More than the makers of the ethics test, it would seem. Is it any wonder that these ridiculous questions become the punchline to social media posts, or fodder for frustrated blog posts?
The truth is that the time is ripe for a large-scale discussion about ethics. Many are happening right now, within the walls of the very institutions in which employees are subjected to that _other_ kind of ethical discussion. But these questions tend to focus on police brutality, racism, global inequality, endless war, human exploitation, environmental destruction, the perverse concentration of capital among a few. These questions don’t have easy answers to be plucked from a multiple-choice computer module. These issues have responses that are likely to challenge the status quo, insist that change be made and put tough questions to university administrators and all those in power. Where is our ethics test about these issues? Where can we take a learning module, or MOOC, that will expose our institutions’ ties to corporations, organizations and governments responsible for some of the grossest exploitation of people and resources?
Contrary to what these tests and learning modules attempt to instill, “ethics” and “the best interest of the employer” are not synonyms. Let’s stop lending credence to these ridiculous and insulting exercises in our own self-policing. Let’s decide on our own from where to draw our ethical inspirations, and let that inspiration be more about addressing inequities and injustices than avoiding litigation or embarrassment for our employers.
Our own integrity, and ethics, demand nothing less.
It’s been a whirlwind of a week in Dublin, Ireland, as I’ve been visiting with colleagues and participating in IAMCR13. The conference has been time well spent, with a critical mass of critical media and communications scholars assembled in one place to talk about very real issues. At the fore has been continued economic crisis, austerity and related topics – topics quite relevant in Ireland today, as a massive banking scandal and attendant fallout rock the country, with very little accountability to be had on the part of the bankers responsible. Meanwhile, the Irish Times included a guide to debt in the Thursday issue I picked up.
I was pleased, therefore, to present today at this conference alongside three very esteemed friends and colleagues: Ergin Bulut (University of Illinois), Miriam Sweeney (University of Alabama), and Victor Pickard (Annenberg School, Penn). I shared work on commercial content moderation, while Ergin presented a fascinating aspect of his dissertation work on gaming companies: the feminized invisible labor of game developers’ spouses. Miriam shared her work on anthropomorphized virtual agents (AVAs), troubling design and HCI practices that often present themselves as value-neutral and demonstrating that they are sites of deep instantiation of cultural, racial and gender norms and stereotypes. Victor shared his historical analysis of the newspaper journalism of the 1930s and 1940s, not as the halcyon days so often juxtaposed with today’s journalism in crisis, but as a contested time for print journalism, when the laissez-faire relationship so often presumed between government and journalism was not necessarily the case.
It was a pleasure to have a nice turnout for the panel; in the audience were several familiar faces, including that of Christian Fuchs, who prompted the panelists to find a theoretical thread or theme that might tie all the papers together. The panel responded as a group, and the audience also brainstormed on the question, with obfuscation, dismantling mythologies, issues of power and control, unveiling the human in infrastructures and systems (this from Lisa Nakamura) all contenders. In the end, the panel fielded many engaged and provocative questions from the lively participants in the audience, and the experience was marvelous. Thanks to those who attended; we look forward to sharing our work further with you. On a personal note, this conference marks the last time that I will be in attendance as an affiliate of the University of Illinois; as of Monday, July 1st, I am very pleased to take up my post as Assistant Professor in the Faculty of Information and Media Studies at Western University.
The time has come for me to share some big news. I am honored and thrilled to report that I will be joining the Faculty of Information and Media Studies (FIMS) at the University of Western Ontario, at the rank of Assistant Professor, this summer. Thanks to all who supported me through this process. I look forward to the coming adventure at this wonderful institution!
Today, the IEEE Computer Society reported, via its Facebook page, on the 20th anniversary of NCSA Mosaic. This web browser, developed at the University of Illinois’ National Center for Supercomputing Applications, was distributed free of charge, and its graphical interface was largely credited with sparking widespread interest in the Web.
As I reflect on my own 20-year anniversary as an Internet user, and as an almost-grad of the University of Illinois myself, this story caught my eye. Indeed, I experience something of a rush every time I have occasion to go over to NCSA, where a decorative plaque commemorates Mosaic and its contributions to computing.
My own first experience with Mosaic was somewhat inauspicious, however. In early 1994, I was working as a computer lab specialist in the Memorial InfoLab at the University of Wisconsin-Madison, the campus’s largest and busiest computer lab at the time. One day, I reported in to work and caught up with my colleague, Roger, as we strolled the floor of the lab. We stood in front of a row of Mac Quadras (the “pizza box” form factor), as they churned and labored to load something on their screens. It was an interface consisting of a greyish background and some kind of icon in the upper corner that seemed to indicate loading was in progress, but nothing came about (NB: this was more likely due to a lack of content or a choked network, given the time, than anything else inherent to Mosaic). Turning to Roger, I asked, “What _is_ that?”
“That,” he replied, “is NCSA Mosaic. It’s a World Wide Web browser. It’s the graphical Internet!”
My response was as instantaneous as a reflex as I sputtered out a disdainful reply. “Well,” I scoffed, “that’ll never take off. Everyone knows the Internet is a purely text-based medium.”
And the rest, they say, is history. Happy birthday, Mosaic.
Sarah T. Roberts: Stephanie, thanks so much for taking time to talk with the HASTAC community about your excellent new book, Cached: Decoding the Internet in Popular Culture. I think this book will speak to many different kinds of people involved with HASTAC, given its interdisciplinary nature and its locus at the intersection of policy, history, technology and culture studies. One thing you discuss early on in the introduction to the book is your goal to “recover the underlying assumptions involved in the adoption…” of the Internet and concomitant technologies. Can you talk to us a little about what those underlying assumptions are, and how they relate to what you describe as a tendency to be “erased or at least diminished by our present uses and accepted memories of the internet”?
Stephanie Ricker Schulte: I am glad you asked this question because it spotlights a central tension in the way I have chosen to tell my story about the internet as well as a central tension with internet technology itself. Rather than one set of underlying assumptions fueling the internet’s growth and adoption, there were actually several overlapping and conflicting sets of assumptions about the technology as well as what it did (or could) mean. In this book, I ask, “How did conflicting voices—such as advertisers, journalists, users, and policymakers—help establish common sense notions about the internet?” So one of my goals was to write about the ways competing origin stories of the internet and competing visions of the technology’s present and future played out in decisions about the technology itself. In a sense, this book is designed around conflict.
Cached investigates a number of “common senses” about the internet that occurred sometimes simultaneously and often paradoxically with one another between the 1980s and 2000s. For example, the internet was considered a war project, a toy for teenagers, an information superhighway, a virtual reality, a player in global capitalism, and a leading framework for comprehending both globalization and the United States. I could have written a book that chose a side, that tried to settle these historical conflicts about how to understand the internet by arguing for the “correctness” of one of these positions over another. Instead, I chose to show how the tensions in these internet narratives—their interrelatedness and the conflicts between them—are what shaped the internet’s use and development as well as the policy decisions around the technology. It is less that there was a single common sense about the internet or a “best” way of approaching it, than there was (and is) a continuous struggle to name and understand the internet’s implications. The struggle that produced the internet is, I think, what makes the best story.
And, as you referenced in your question, some of these tensions and transitions between different “common senses” resulted in the cultural erasure of internet technology itself. To use a rather simple example, in the 1980s we talked about “computer networking”—a phrase that immediately conjured up images of wires physically connecting machines—but in the 1990s we talked about “surfing the world wide web.” As more and more of the public (not just military and university employees) began using the internet, they were also reimagining it, focusing more on the experience of using the internet and less on the technologies that made it possible. Clearly this shift had a lot to do with the evolving capabilities of technology itself. It is hard to imagine ourselves “surfing” over a world-wide virtual landscape until the internet became visual as well as textual. But this shift was also worked out outside technology itself, in advertising campaigns and political strategies that linked the internet to America’s global and economic dominance. So I make the case that, in some ways, the internet’s narrative structures helped erase its physical ones. Instead of the internet imagined as a bunch of plugged-in computers, the computer became imagined as a gateway to the internet. The internet lost its body, so to speak. As such, we could more easily imagine it as a global yet also deterritorialized space or as an experience rather than a product of infrastructure.
STR: In the first chapter of your book, you anchor your discussion of early networked social computing (by this, I mean networked computing outside the confines of the academy or the military context) around the 1983 film War Games – certainly an important cultural artifact for anyone of a certain age at the time of its release. Using this example, and other related moments from the news media and other sites and examples of mainstream cultural discourse, you demonstrate the profound relationship between these representations and subsequent policy and legal regimes. Would you share with the readers of this interview how you’ve theorized the interplay between these phenomena?
SRS: For me, it is crucial to always remember that the internet is much more than the sum of its technological parts. Along with other scholars like Manuel Castells and Lisa Gitelman, I approach the internet as cultural. To say that the internet is a culturally-constituted, historical object is to say that qualities essential in the technology itself have always mattered, but they do not alone determine the ways the internet evolved or how it has been and is understood—either as a technology or as part of our lives. War Games is a fun example and was a great way to start the book because it’s a clear moment showing how circumstances in history and culture mattered, how circumstances helped shape the internet and the way a range of people understood it. Filmmakers, software designers, marketers, journalists, military researchers, and U.S. Senators, with all their varying motives, joined in explicit, competing and sometimes overlapping conversations about what the internet was and should be. War Games and its reception worked as the site where these different visions were expressed, collided and altered one another. Together, these complex cultural understandings of the internet guided its political, commercial and technological future in the United States and elsewhere.
In light of this example and others that illustrate a range of forces shaping the ways people engage the internet, I argue that technology is culturally flexible and not fixed by its material parameters. As a result, I tell its technological history alongside the history of mediated and political narratives about the technology, to provide insight into the ways the internet transformed in accordance with larger changes in culture and policy as much as with the intrinsic capabilities of technology itself. This cultural history approach demonstrates that it is impossible to speak separately about policy, technology and culture. Cultural values produced in, through and about communication technologies shape policy debates and help determine policy outcomes. Policy, as a cultural actor, shapes technological innovation and development as well as validates and legitimizes particular cultural values over others, amplifying them in the cultural sphere. The technology does not and cannot happen in a vacuum outside the spheres of culture and politics, so it cannot on its own dictate its cultural and political future. So this approach helps me demonstrate in very concrete terms how representations of the internet both relied on and helped formulate larger cultural assumptions about the nation, the state, democracy, public space, consumption, and capitalism.
STR: One thing you have done with this book is to really surface up, and then problematize, a lot of the received wisdom that permeated early discussions of the internet – politically, as a site of commerce, and as a manifestation of American-ness – with specific American narratives foregrounded. Can you talk about some of the notions you wanted to critically examine with this book? What do you think we can learn from these past framings, and what do they suggest about contemporary narratives?
SRS: The internet operates transnationally, yet is based in national infrastructures and is subject to national legal and economic systems. So one of my goals with this book was to begin teasing out the complexities of that, to contrast cultural and political approaches to the internet in the United States, Europe, and the Middle East in a way that captures the dynamic ways cultural, political, and economic power operate. On the one hand, I focused on alternative visions of the internet available outside the United States that help reveal the hidden national flavor in our domestic visions of the technology. On the other, I focused on how and why American visions ultimately dominated the cultural history of the internet, how and why the internet was imagined as self-evidently American, democratic, and capitalist. I show how voices in cultural representations and policy initiatives—originating both in the United States and abroad—helped to construct the internet as inevitable and as naturally regulated by free markets and corporate forces. So, I acknowledge the United States’ dominant and formative power in the internet’s history, but I don’t take that historical outcome for granted. I simultaneously recognize the technology’s internationality, the influence of international actors, and the variable models for the internet offered by other countries and cultures. Ultimately, the comparative elements of my book reinforce the ways the internet was a central location for key national and international debates about globalization, democracy, public space, consumption, capitalism, and America’s place in the world.
In doing this, I hope to disrupt today’s uncritical debates about the internet’s development and to prompt us to collectively recognize the internet not only as a technology, but also as a cultural and political formation. This recognition is especially important as new technologies appear. I want us to collectively and critically examine our common senses about technology, their historical legacies, political implications, and unacknowledged cultural assumptions. All of those things matter to our technological practices, our innovation strategies, and our policymaking processes. Acknowledging the roles our national identities and transnational activities play in our understandings of the internet—in particular the ways we imagine technological development as naturally American, democratic, capitalist, and inevitable—is key as we shape communications technologies increasingly central to daily life around the globe.
STR: Finally, what has your own relationship to the internet been? I am observing my own 20th anniversary this year of my life online, and am feeling extremely self-reflective about that. To be sure, my life online changed my life, in general, in very real and tangible terms. What was your own trajectory with computing and the internet? What was the interplay of the genesis of this research with your own experience living through this period of time?
SRS: The first time I experienced computer networks was watching my best friend MUD in the 1980s. I observed as she dialed up and typed, traversing the complexities required in that historical moment. I didn’t understand what she was doing, let alone how she was doing it. To me, she was a teenaged Neo from The Matrix. I remember asking her question after question, and she didn’t seem to understand why I was asking many of them. For her, computing was already intuitive. Many of her assumptions, of course, became the naturalized mainstream visions of the internet that I ultimately wrote about. In some ways, this book is my attempt to regain the fresh perspective I had in the late 1980s and apply it in a more systematic fashion to the assumptions that have been codified in culture, policy, and technology itself.
STR: Thanks, Stephanie, so much for your time in talking with me about Cached and about the cultural meaning of the Internet, more broadly. It’s really been a pleasure.
SRS: Sarah, I’m honored to speak with you today. I have long enjoyed HASTAC, one of the most forward-thinking and diverse consortiums around. Many claim interdisciplinarity, but HASTAC actually provides it.
Stephanie Schulte is Assistant Professor of Communication at the University of Arkansas, where she researches communication technologies, media history, media policy, popular culture, and transnational cultural exchanges. Her work has appeared in the Journal of Television and New Media, Mass Communication and Society, the Journal of Transnational American Studies, and Feminist Studies. She is the author of Cached: Decoding the Internet in Global Popular Culture (New York University Press, 2013) and is currently working on a second book-length project, which investigates technology and notions of the public good, as well as an interdisciplinary edited collection focused on media and citizenship.
A version of this post appears at hastac.org.
Acknowledging the passing of my best pal, Chesterfield “Chester” Roberts. Born November 1997 in Chapel Hill, North Carolina. Chester passed away peacefully at home surrounded by his mom, his uncle, and his feline sister and littermate, Lucy. We’ll miss him a whole bunch.
Just a brief note to share that my colleagues Miriam Sweeney, of GSLIS, Ergin Bulut, of the ICR, and I have had our panel proposal accepted for IAMCR 2013 Dublin, to be held June 25-29th at Dublin City University. This year’s conference theme is “Crises, ‘Creative Destruction’ and the Global Power and Communication Orders,” and the conference received over 2,400 submissions. The three of us are honored to have had our panel, entitled “Demystifying the ‘Digital Economy’: Critical Interventions in Online Moderation, Anthropomorphized Virtual Agents and Gaming,” selected by the Political Economy section of the conference.
It’s not every day that one’s wildest sci-fi inspired dreams are achieved, but yesterday was such a day.
Over the past few years, I have watched the quiet development of 3D printing unfold. It has been evolutionary rather than revolutionary, with innovations taking place in the rarefied domains of university R&D centers and labs, or in the basements and garages of tinkerers, hackers and makers – the kinds of gadgeteers and engineers willing to bang their heads against a problem until they triumph with a solution, and know that the headbanging part is the actual fun of it. As these printers (in reality, they’re something more akin to giant glue guns that can read instructions and move rapidly along x, y and z axes to produce objects by stacking layers measured in millimeters of thickness – here’s a good layman’s explanation of a few different types of printers available) have come down in price and increased in terms of ease and reliability, they have, predictably, been turning up in more and more places. They are coming out of the garages and into the light, and they are finding homes in community hackerspaces, fab labs, and even libraries.
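To make the “giant glue gun” image above a bit more concrete, here is a toy sketch (purely illustrative, not real slicer code) of the layer-stacking idea: a solid object becomes a stack of thin slices, and the print head retraces a 2D path at each z height before stepping up by one layer. The function name and layer height are my own invented examples, not anything from an actual 3D printing toolchain.

```python
# Toy illustration of layer-by-layer 3D printing: an object of a given
# height is built as a stack of thin slices, with the nozzle stepping
# up in z by one layer thickness after finishing each 2D pass.

def layer_heights(object_height_mm: float, layer_mm: float = 0.2):
    """Return the z height (in mm) of each successive printed layer."""
    heights = []
    z = layer_mm
    # Small epsilon guards against floating-point drift as z accumulates.
    while z <= object_height_mm + 1e-9:
        heights.append(round(z, 3))
        z += layer_mm
    return heights

# A 10 mm tall piece at 0.2 mm per layer takes 50 passes of the head --
# which is why even my small game tile took the better part of an hour.
layers = layer_heights(10.0, 0.2)
print(len(layers), layers[0], layers[-1])  # 50 0.2 10.0
```

The slow part in practice is not the math, of course, but the physical travel of the extruder over every one of those layers.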
My own city has benefited greatly from the presence of a fantastic collaborative hackerspace, open to the public via frequent classes, events and a monthly membership. Last night, Sector67 offered an intro to 3D printing, so, for $20 and two hours of time, I was treated to a five-person whirlwind tour of the state of the art by none other than Sector’s founder and extremely knowledgeable 3D printer nut, Chris Meyer. During our two hours, we were given an overview of all of the extant 3D printing material options: powder, ABS, PLA (a corn-based plastic that previously mostly functioned as a support system in commercial ABS product manufacturing). We walked through the various 3D printers out there, ranging from the ridiculously DIY (the RepRap, made from 3D-printed parts – a weird, amoeba-like thing to think about) to the very expensive and more functional than hackable (the MakerBot Replicator 2). We examined the output of each, discussed the potential faults and pitfalls of working with the printing control software, ReplicatorG, and – joy of joys! – my idea for production of a print was chosen by Chris as the example we would watch be created before our eyes while talking over the finer points of “jitter” and Skeinforge. We would be printing out a 3D version of a game tile from the popular German boardgame, the Settlers of Catan – with my apologies to the guy who wanted a camera case for his GoPro HeroHD camera.
I downloaded the plans for my gamepiece from the crowdsourced 3D site Thingiverse, hitting a potential stumbling block when I discovered that the CAD-like STL files I had previously drooled over were nowhere to be found. Tabling that mystery for a moment, I quickly found an alternative. We downloaded the files, picked one component (an “ore tile,” for any Settlers nerds out there), and started printing. Forty-nine minutes later, I had one lukewarm tile in my hot hands.
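The layer-stacking idea is easy to see in the G-code that slicing software like Skeinforge ultimately hands to the printer: the head traces each layer’s outline in X and Y, then steps up in Z by one layer height and does it again. Here is a minimal toy sketch of that structure; the 0.3 mm layer height, 20 mm square outline and feed rates are my own illustrative assumptions, not the actual settings from the Sector67 print.

```python
# Toy G-code generator: trace a 20 mm square, layer by layer.
# Layer height, outline and feed rates are illustrative assumptions only.
LAYER_HEIGHT = 0.3  # mm per layer (typical FDM range: 0.1-0.3 mm)
LAYERS = 10
SQUARE = [(0, 0), (20, 0), (20, 20), (0, 20), (0, 0)]  # outline in mm

def square_gcode(layers=LAYERS):
    lines = ["G21 ; millimeters", "G90 ; absolute positioning"]
    for n in range(1, layers + 1):
        z = n * LAYER_HEIGHT
        lines.append(f"G1 Z{z:.2f} F300 ; step up one layer")
        for x, y in SQUARE:  # trace the layer's outline
            lines.append(f"G1 X{x} Y{y} F1500")
    return lines

gcode = square_gcode()
```

A real slicer also computes extrusion amounts (the E axis), infill, travel moves and temperatures, but the layer-by-layer skeleton is the same – which is why a 49-minute print is mostly the head patiently retracing thin slices of the model.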
And here we get to the sci-fi dreams: look, simply put, watching this thing be created before my eyes was incredible. Anyone who has had a chance to come face to face with a functional version of one of these machines has undoubtedly gone away mesmerized, with visions of what they could do with one of these things in the context of rapid prototyping, proof-of-concept testing, materials development (think: plastic textiles, printed on demand), and just plain fun. Where could any harm lie?
Well, it turns out that there is a potential dark cloud on the horizon for 3D printing. When I got home, I did some poking around, looking for the Settlers of Catan 3D boardpiece files that had initially piqued my interest in 3D printing as a technology with personal significance a couple years ago. I even remembered the username of the user on Thingiverse who had created them: Sublime. While I found many references to Sublime and his/her awesome game pieces, all links led me to a big, fat, dead end – and possibly the best 404 I’ve ever seen. What had happened to Sublime, and to the Catan plans? Well, it looks like a cold wind blew into the 3D printing universe: the chilling specter of copyright.
And while I couldn’t find Sublime anywhere, I did find this Public Knowledge post that suggested that (a) Sublime was getting nervous about potential infringement and (b) there was likely no infringement going on. After all, as PK pointed out, “…the pieces themselves are not even distributed. Instead, if you want the pieces you need to download the files, boot up your 3D printer, and make them yourself.” This post is related to a larger whitepaper that PK authored, aptly entitled, “It Will Be Awesome if They Don’t Screw it Up.” This contribution delves further into specifics of original product creation, the making of copies, the nature of patent, trademark and other relevant issues. Yet the title says it all: given that we are still dealing with an ongoing culture and legal war over what constitutes ownership of IP in digital material contained in computer files and composed of 1s and 0s, it seems unlikely that the introduction of the ability to create tangible, functional 3D objects – or, more to the point, to _replicate_ extant ones – will have clear-cut solutions or yield easily divined answers. Further, it’s not as if the legal posturing and wrangling of the past 15 years has slowed the trade in copyrighted materials in the slightest. How long until we see the flip, dark side of Thingiverse, a Pirate Bay for files pulled for copyright infringement, illicit materials, weapons? The latter is not pure conjecture; a smart-alecky law student type from Texas has made his share of headlines with his dream to freely distribute handgun blueprints for DIY arsenal-builders.
Given this, are we likely to, as PK puts it, spoil everything by screwing it up? As 3D printing technology is poised on the verge of making a leap from the esoteric to the commonplace and from the rare to the ubiquitous, questions about the technology will invariably shift from “Can we do this?” to “Should we?” Meanwhile, I plan to book as much time at Sector67 as I can to get my board printed out before it’s too late.
A special shout-out to the students of LIS 502LE, visiting this blog at the end of their hard work in the inaugural intersession LEEP Foundations in LIS course. Congrats on a job well done, everyone!!
My blog posting has been on the wane of late, but for a good reason. Work on my dissertation has continued apace, which means I’ve been putting the vast majority of my writing efforts towards it and fewer of them here. That having been said, I relish this space as a great starting point to help me work out my thoughts and capture issues as they unfold – your comments and participation are a great help to me, to that end, and I greatly appreciate the participation of those of you who read this site. I look forward to our conversations in the coming year. Thank you!
One of the few things I’ve been able to give time to that is not directly tied to my dissertation work has been to switch a good portion of my computing to an open source platform. I’ve been a Mac user since the late 1980s and online (on the Internet) for just about 20 years. In that time, I’ve watched with interest as the open source software movement, in general, and Linux, in particular, have gained momentum and a following. My own attempts at using Linux span pretty much its entire existence, and I’ve tried more distros – and BSD cousins – than I can remember: Red Hat, NetBSD/FreeBSD and Ubuntu come easily to mind. Because spare PC hardware on which to run the Linux flavors was often out of my grasp, or the technical acumen required to run the OSes lost out to my ease and familiarity with my everyday OS of choice, MacOS, many of my attempts to work Linux into my own computing life ended prematurely.
My increasing frustration with the restrictions being placed on Mac OS, and its creeping iOS-ization, as well as my disdain for both the experience of Windows and the practices of its maker, led me to put out a call to my friends for the headlines on the state of the art of Linux computing. The call was met with a unanimous response: check out Linux Mint. This elegant, aesthetically lovely project, led by Clément Lefebvre and teams of many other volunteers and based on a branch of the Ubuntu distro, was also reported to be user-friendly and accessible (at least as far as Linux goes). Ready to take the plunge, a friend and I installed Linux Mint 14 on the refurbed Lenovo laptops we keep for Windows emergencies (when we are forced to run Windows for some task or other). He was a Linux newbie and I more of a veteran, though one facing a steep uphill climb to get my chops back. To our delight, the OS installed with ease and we were up and running, and using, Mint – and abandoning Windows – almost immediately.
Part of my joy in this process has been discovering the open source analogues to so many of the software packages and processes that everyone I know, myself included, has come to rely upon. Many I already knew of and had used in the past (e.g., Gimp), but so many more of them have come so far even since my last attempt at getting serious with Ubuntu, about five or six years ago, that it’s been a great pleasure to find out what is truly possible while running under Linux. All of my everyday necessities are working nicely: productivity software, Zotero (and its hooks into other apps), browsers and net utilities, graphics and audio apps, and so on. And there is great pleasure, too, in being able to get under the hood and really crank around in the file system from the command line (I remember my thrill when I was first able to score a Unix shell account at the University of Wisconsin in my freshman year, by joining up with a computer club pretty much in name only).
I also appreciate so much the politics of what Linux, specifically, and many open source projects, in general, represent: another model, and an alternative way of doing things that challenges the status quo and the conventional wisdom that major projects like this can only succeed when driven by a profit motive. Mint, and other projects like it, relies on a healthy community of developers and users who engage in mutual aid and assistance, and welcome newcomers. My hat goes off, for example, to the fellow who stayed up with me into the wee hours of the morning a number of days ago, as we worked together to troubleshoot a particularly tricky dual-boot issue that challenged my knowledge and solo skillset. I thought about this in particular today while waiting on endless hold to get a hold of someone at Microsoft in order to “unlock” the OS (Windows 8) that I already paid for, and yet can’t fully use.
With the increase in tethered devices (e.g., smart phones; tablet computers) and a philosophy of closed, proprietary computing only increasing in prevalence, my switch to Mint has brought with it a surprising feeling of freedom and of possibility – the same kind I used to have when I first ventured online in the early 1990s, and imagined what could be in the new world of information and social interaction that I discovered there. If you, like me, are feeling constrained by the artificial blocks, locks and relationships being imposed on you by your reliance on commercial OSes and all that those relationships entail – financial obligation, limitations on use, surveillance, etc. – then I urge you to give Linux Mint – or any flavor of an alternative OS – a try. Report back and let me know how it goes. I’ll be eager to hear what you have to say.
Happy new year to all!
On September 20th, I had the pleasure of traveling to the Illinois Institute of Technology to deliver the first talk in IIT’s fall series, “Defining Boundaries and Goals in the Digital Humanities.” My talk, entitled, “Digital Humanity: Foregrounding Human Traces in Technological Systems (and Why We Should Care),” was followed by a lively and engaging Q&A session with faculty, grad students and staff. In addition to discussing the current state of, and the potential for, the digital humanities to highlight and unveil human traces in digital technologies, we talked about platforms that provide the potential for humanizing digital tools and creating space for alternative perspectives in technical systems; indeed, the Raspberry Pi that I brought along was a particular hit. The abstract for the talk follows below; thanks to Dr. Marie Hicks and all those at IIT who made my visit such a treat.
Taken from the perspective of the academy’s long view, the “digital humanities” as a concept is nascent and its precise definition remains a moving target, with a variety of methods, disciplinary perspectives and approaches finding a home under its ample umbrella. Yet the fluidity around its precise meaning affords opportunities for scholars to apply the critical lens of the humanities to the study of the digital, and to ask questions about who benefits, how and why, in the context of an ever-increasingly networked, computerized and digitally enclosed world.
In this talk, I will discuss current research in and several practical applications of technology that foreground the humanity in the digital and that offer and model alternatives. In some cases, these examples unveil hidden or obfuscated traces of humans within digital systems, literally and in the abstract – in labor, representations, and by other means – and the implications that such erasures engender. I will also highlight practical examples of platforms, systems and tools that endeavor to challenge existing paradigms extant in many mainstream or instantiated technical systems. This talk is intended as interactive dialog with opportunity for the audience to offer their own experiences, tools and solutions for discussion and inspiration.
As reported by Reuters and picked up in the Huffington Post, Facebook today released a confusing infographic ostensibly designed to shed light on the cryptic route that reported content takes through the company’s circuit of screening.
According to the company, content flagged as inappropriate, for any one of myriad reasons, makes its way to “…staffers in several offices around the world to handle the millions of user reports it receives every week about everything from spam to threats of violence.” Reasons cited in the infographic that may cause material to be reported include content that is sexually explicit, involves harm to self or others, depicts graphic violence, contains hate speech, and so on.
What the infographic and accompanying statement from Facebook fail to do is to suggest how much content is routed through this circuit, and how much of a problem addressing problematic user-generated content (UGC) routinely tends to be. Reviewing the infographic, with its lack of real information about the workers, the nature of this issue and the nature of the content being flagged, left me thinking of the old disparaging computing phrase “security through obscurity”; the infographic offers protection to Facebook by revealing very little of import. It is obscurity through ostensible transparency.
Critically lacking, for example, is any discussion of the working conditions for the “staffers…around the world” who contend with this material as a major function of their jobs. Are these staffers full-time Facebook employees, afforded the status and benefits commensurate with their positions? As other reporting has already indicated, and as I have discussed in another entry on this site, Facebook indeed employs micro-work sites such as oDesk and others to conduct these moderation and review practices. The workers engaged in the digital piecework as offered on micro-work sites are afforded no protections or benefits whatsoever, and do not even benefit from the ability to commiserate with other workers about the content they view as a condition of their work.
In this way, Facebook benefits from the lack of accountability that comes with introducing secondary and tertiary contracting firms into the cycle of production – a fact that is critically absent from the infographic above. Workers engaged as moderators through digital piecework sites are isolated, with few (if any) options for connecting – for emotional support as well as for labor organizing – with other workers in similar conditions, and without any real connection to the worksites of origin from which the content emanates. While the micro-work sites and the major corporations that engage them may tout the ability to draw on expertise from a global labor marketplace, in practice even the New York Times notes that these temporary work relationships result in lost payroll tax revenue for countries such as the US when labor is outsourced, and notes that increases in these kinds of labor pools are significant in Greece and Spain, countries devastated by economic crisis and crippling “austerity” measures. Notably absent, however, from the NYT piece is any discussion of the bargain-basement rates that drive the value of the labor down to the lowest bidder, by design. The connection between economic crisis in a region and an increase in the availability of competent labor that is exceedingly cheap should not be lost here.
Of course, one cannot reasonably expect Facebook or any other company to ease the way for workers to organize and push back against unpleasant work conditions and unfair labor arrangements; this, after all, is one of the features of outsourcing and using intermediaries to supply the labor pool in the first place, along with the lack of regulation and oversight that these arrangements also offer. In response, non-traditional organization among workers in these sectors is taking place, such as in India, where UNITES Professionals have issued a charter for IT and call center workers, and the Precarious Workers’ Brigade, whose focus is educational and cultural workers, but whose model and scope could certainly conceivably be extended to workers engaged in screening and moderation.
I’m back in Sweden, this time in Uppsala, on the campus of the university of the same name, to attend the 4th meeting of ICTs and Society. Convened by Christian Fuchs and colleagues, this fascinating lineup features timely discussions of, among other things, global capitalism, information and knowledge labor/labor in ICT, organization, theories of “the information society,” surveillance, privatization, policy and activism – so many topics near and dear to my heart and at the center of my own intellectual endeavors. Vincent Mosco and Graham Murdock set the stage in this morning’s plenary with their “reloading” (as Christian Fuchs describes it) of Marx by highlighting the ongoing relevance of Marx today against the backdrop of global labor, social movements, uprisings and crises – with the latter’s relationship to emergent and extant ICT certainly up for discussion.
Early into the first paper session and I’ve bumped into numerous colleagues and friends from past conferences, as well as other scholars whose work has proven illuminating to me (Christian Fuchs, Trebor Scholz, Nick Dyer-Witheford and Will Peekhaus among many). I am looking forward to a challenging and provocative few days, and am tweeting my observations and particularly salient insights @ubiquity75 using the hashtag #CDP21. Do follow along if you’re interested.
This Friday, the University of Wisconsin-Milwaukee’s School of Information Studies (SOIS), along with co-conveners School of Library and Information Studies (SLIS), UW-Madison, and the Graduate School of Library and Information Science (GSLIS) at the University of Illinois at Urbana-Champaign, will come together to present, “Out of the Attic and into the Stacks: Feminism in LIS,” an unconference (March 9-11, Milwaukee, WI).
Why feminism in LIS now? Simply put, the situation for women hasn’t felt this dire in years. As a divisive and acerbic Republican primary season has gripped the country, women have taken center-stage in a resurgence of the culture wars reminiscent, in tone, of the early 90s and, in substance, of perhaps a few decades before that. And while so-called ”women’s issues” have dominated the headlines, the climate has extended to other easy targets. Last fall, a Virginia-based “think tank” specializing in the eradication of race-based admissions preferences in colleges descended upon the University of Wisconsin-Madison, eager to pick a fight and sow division on campus. Throwing conservative “states’ rights” values out the window in order to meddle with the inner workings of the state’s flagship university, the visiting director of the center dished out arguments that seemed directly ripped from the pages of The Bell Curve, a book I thought long ago discredited, in a debate I attended along with hundreds of others. Like being trapped in some kind of time machine or Twilight Zone, I remarked to a friend that all that was needed was an appearance by Dinesh D’Souza sporting Hammer pants, and the return to 1990 would be complete.
Yet, in 2012, it’s as if the past years of social gains and progress in the arena of the standing of women never happened, either. Enter “Out of the Attic and into the Stacks,” in which participants will gather together to talk about the current climate for all women, using the perspective and lens of LIS to inform and ignite the conversation.
From my own perspective, I certainly see the issues facing women today from multiple fronts with many intersections. Pragmatic issues such as lack of access to key resources, women and children living in poverty, lack of educational and reasonable employment prospects, and so on, are at the fore on many of our minds, as are the situations and issues of particular relevance to women of color and LGBT-identified women, all of which the unconference plans to bring into the discussion. From a political perspective, too, I hope to get to grips alongside my unconference colleagues with the current scapegoating and targeting of women, using historical and theoretical frameworks that are applicable. Multiple feminisms will be key to these discussions, and many exciting resources have been identified on the unconference’s wiki, to which all participants may contribute.
I also view this situation through an informational lens. Not only is women’s access to health care, reproductive and pre-natal care, equal pay, and a host of other hard-earned rights being threatened or rescinded, full stop, but, crucially, women’s access to information about their rights and the services available to them is also disappearing. State legislatures have been busily curtailing or otherwise interfering with what women can know and when they can know it about abortion services, contraception and other information vital to their reproductive and overall health; similar debates have raged at the federal level and are featuring in the Republican presidential primaries. All of this offers a backdrop conducive to a general cultural climate in which Rush Limbaugh thought it would be fine to refer to a Georgetown Law student seeking birth control access in a hearing before Congress as a “slut” or a “prostitute” over 50 times – as if such a status should render women ineligible for health care or the most basic common courtesy. At least he seems to have misfired on this particular episode, but as Sandra Fluke (the target of his misogynistic outbursts) and others point out, the real issue is not Limbaugh’s attention-seeking behavior, but the legislative and other political maneuvers that lie behind it, and other anti-women actions and sentiment that are their outcome.
For those of us LIS students, practitioners, and scholars who will be taking part in the unconference this weekend, both hope and energy are running high. With time spent together discussing the collective state of feminism, women and social justice topics, in general, my hope is to emerge with some concrete (re)dedications and linkages of the role of and opportunities for LIS to the social issues and deficits that are plaguing our society – with some more than others bearing the heavy burden of the disturbing trends I’ve outlined. Seeing that the unconference will be taking place in Milwaukee, a once-vibrant, now devastated Midwestern urban center and one of the country’s leaders in infant mortality, the stakes could not be higher. This is about so much more than women. This is about us all.
In the past few days my inbox has seen an influx of forwards from friends and colleagues, all sharing links with me covering the recent revelation that Facebook outsources some of its dirtiest work, and that those firms handling Facebook’s outsourced labor pay exploitatively low wages for some of the most psychologically damaging digital work imaginable: the screening of user-uploaded content (posts, images and videos) to Facebook. My colleagues sent these links my way for good reason: this topic has been the primary subject of my own academic research for the past year and a half, ever since I discovered these content moderation practices through a small news story in the New York Times. After reading it, I became riveted both by the workers and the industry it portrayed, as well as by the implications of this practice in the greater digital media/social media ecology. How do these practices change our collective notions of participatory media and understandings of the costs – financial and human – to use said media? What does it mean about the nature of our online participation, at one time heralded as a great direct-access equalizer, to know that content undergoes screening by unknown agents, who are often low-paid and low-status? What is it about the nature of social media that may encourage the creation and uploading of prurient, shocking or just-this-side-of-bearable content to be shared? Who benefits from such material? Who is put at risk? I wanted to explore, too, the impetus to conceal or render invisible these labor practices, virtually unknown to those outside the industry and yet an integral part of the production chain of user-generated digital media. These were just a few of a veritable laundry list of questions I generated based on my initial research on this topic.
Since then, I have been documenting and writing about these labor practices and the workers involved, mapping them both in terms of their material nature as well as from a theoretical perspective, in my dissertation, Behind the Screen: The Hidden Digital Labor of Online Content Moderators.
Meanwhile, the latest chapter in the popular press’s up-until-now scant coverage of the story transpired just last week, when Gawker’s Adrian Chen filed his post entitled, “Inside Facebook’s Outsourced Anti-Porn and Gore Brigade, Where ‘Camel Toes’ are More Offensive Than ‘Crushed Heads’.” Chen’s story focused on practices at Facebook which, he discovered, take place largely via the outsourcing and micro-labor marketplace oDesk (see Brett Caraway‘s 2010 article referenced below for a nice overview of that company’s practices). Chen’s article is remarkable in a number of ways: first, he was able to focus on real-world examples shared with him by the workers themselves, most of whom are no longer working for Facebook via oDesk, and many of whom are located outside the US and in the so-called “Global South.” The workers’ accounts give concrete examples of both the kinds of egregious and trauma-inducing material they were exposed to, on the one hand, while on the other being paid wages that would seem to be nowhere near reasonable given the hazards of the work. Here it is interesting to note that much of the outsourced labor that takes place at sites like oDesk or at Amazon’s Mechanical Turk is undertaken on a per-item basis, so that workers are paid based on the number of items they are able to screen; I have taken to describing this practice as “digital piecework.” Secondly, Chen was able to provide the Gawker readership, thanks to the workers he interviewed, with a number of internal documents from oDesk, used for training and quality control by the content screeners. This type of material is generally not available for public view and is considered insider business knowledge; not making it public allows a company to maintain ambiguity about its screening and censoring practices via more general “user guideline”-style statements that give it plenty of room in which to operate when making subjective content screening decisions.
This angle was another particular focus of Chen’s piece, where he pointed out the strange hierarchy of material, and how it is to be adjudicated by the screeners. While Chen’s piece, and subsequent takes on it in the blogosphere and in other sensationalistic coverage online, focus on the admittedly disconcerting nature of the material Facebook rejects, the more compelling facts rest just below the surface.
The website/online lab R-Shief has announced a three-day data visualization hackathon designed to create opportunities for new ways of viewing and understanding #occupy movements worldwide. R-Shief is offering its data sets of #occupy-related Tweets to anyone wishing to participate in using it between December 9-11 in order to create and share data visualizations based on the data sets.
Participants must agree to a commitment to social justice and promise not to use the data sets to nefariously monitor activists in order to gain access to the Tweets. Read more in the press release below and at R-Shief’s website.
R-SHIEF SHARES ITS #OCCUPY TWEETS IN A COLLECTIVE 3-DAY EFFORT TO #OCCUPYDATA
FOR IMMEDIATE RELEASE:
LOS ANGELES, October 26, 2011 – 3 Days, 30 Twitter hashtags, and countless ways to understand the occupy movement. From 09 December 2011 to 11 December 2011, R-Shief, a lab that collects and analyzes Middle East content from the Internet, will hold its first hackathon with satellite locations throughout the world. The aim of this event is to give activists data collected from Twitter, as well as R-Shief’s machine learning analytics, in a collective effort to offer a public and shared repository for data and visualizations about the Occupy Movements.
In solidarity with protestors around the world, #OccupyData is meant to serve as an intervention by offering experts and activists means to work together and think critically about the movement, its messages, and goals. Register and receive open access to export four CSV files for each hashtag – (1) stats by day, (2) stats by hour, (3) stats by minute, and (4) the raw data itself. (These files are automatically updated hourly.) We encourage all participants to post links or images of the work that comes out of this to R-Shief’s blog رشيف | Blog or Visualize It section رشيف | Data Visualizations. Reports from this event will also be featured in Jadaliyya.
Register @ R-Shief | #OccupyData
Live graphs @ R-Shief Twitterminer
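For anyone planning to hack on those per-hashtag exports, a first pass might be as simple as finding the busiest hour in a “stats by hour” file. A minimal sketch follows; note that the column names (“hour”, “tweets”) and the sample values are my own assumptions for illustration, since R-Shief’s actual export schema may differ.

```python
# Sketch: find the busiest hour in an R-Shief-style "stats by hour" CSV.
# Column names ("hour", "tweets") and sample data are assumptions only --
# check the real export's header row before adapting this.
import csv
import io

sample = """hour,tweets
2011-12-09T14:00,1250
2011-12-09T15:00,2310
2011-12-09T16:00,1980
"""

def busiest_hour(csv_text):
    """Return (hour, tweet_count) for the row with the most tweets."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    top = max(rows, key=lambda r: int(r["tweets"]))
    return top["hour"], int(top["tweets"])

hour, count = busiest_hour(sample)
```

From there it is a short hop to feeding the per-minute files into a plotting library to produce the kinds of visualizations the hackathon is after.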
2011-2012 marks my third year participating in the HASTAC program, an international collaboratory of scholars who meet primarily online to discuss issues of new media, teaching and learning in all of the breadth and depth implied. As such, the opportunity to meet people from across the world with whom one has interacted is to be relished, and so I am very much looking forward to traveling to Ann Arbor, Michigan, tomorrow to take part in HASTAC V, the group’s exciting annual conference. As this trip will represent a rare opportunity to participate without having the burden of presentation-related anxiety, I am hoping to file reports here, on the HASTAC site, and on Twitter (@ubiquity75) on the events as they unfold. Check out the conference program here. And I’ll be sure to pack my appropriate outerwear, as I’ve been told that “winter is coming.”
The paradox of digital material is its ability to disappear: despite a potentially infinite lifetime and none of the degradation of quality suffered over time by their analog media counterparts, digital objects are only as good as the ability to find them – to avoid, in essence, digital ephemerality. These are themes that are not unfamiliar to those who work in digital archives or in LIS, in general, and those attuned to such issues who have also been active in the recent digital-media-informed new social protest movements have seen this digital ephemerality for the problem it is. For example, the Wisconsin Union (#wiunion on Twitter) protests of early 2011 produced a wealth of born-digital documents and material, subsequently scattered across the digital landscape and subject to the personal archival practices of the people who created it. You can find this material on YouTube, Vimeo, Flickr, or on the private Facebook accounts of any number of the multitude of protestors involved in the events – as long as you know where to look. In the latter case, if you’re like me, your collection is behind a privacy barrier and in a heavily-curated account, where only “friends” can access the material. In the worst cases, the digital video and photos are still sitting on a flash card in someone’s Flip cam or iPhone, waiting to be uploaded but frozen in stasis and on the To-Do list that never gets done.
In yet other cases, there is a plethora of physical material – hand-outs, flyers, posters, etc. – that is not yet widely or easily available or may not exist in digital form at all, and runs the risk of disappearing altogether if it is not curated and digitized. I have saved countless handouts from the TAA, the WEA and other organizations that fit this bill, and have anxiously eyed them stacked in a forlorn corner of my desk, wondering when and how I’ll get around to dealing with them.
Enter the Wisconsin Uprising Archive, a collaboration of UW-Madison School of Library and Information Studies graduate and librarian Keely Merchant and WYOU Community Television, under the supervision of that organization’s Board member Luciano Matheron and longtime political activist Barbara Vedder, a Dane County Supervisor. According to the Archive’s mission statement, “The mission of the Wisconsin Uprising Archive is to collect and preserve materials related to the democratic uprising that started in Wisconsin in February 2011. Examples of materials include, but are not limited to, videos, photographs, pamphlets, and audio. The materials will be accessible to all online with the goal of advancing the awareness of events to the general population as well as for educational uses by teachers and students of all ages. A further goal is to aid the production of documentaries about the events of the time by becoming a permanent repository in partnership with other institutions.”
Indeed, just as Wisconsin’s uprising of the spring served as a prescient springboard for the social justice protests that have since spread around the country, so, too, does this Archive serve as a forward-thinking and necessary companion to the protests as they happen. Not only does it document the vast array of people-created media from the on-the-ground activities, allowing researchers and other interested parties to work with primary-source materials on projects related to the events, but it also offers a rare non-corporate outlet for people to contribute and house their materials. This is no small feat in an era when most everyone’s go-to distribution channel of choice is a deeply corporate enterprise whose privacy and other practices are outside the control of the users, with voracious intellectual property appetites that often demand the surrender of user ownership of material in perpetuity. Is that truly the best outlet to document social resistance movements? Furthermore, with user-generated social media increasingly thrust into the spotlight as one of the few power-leveling mechanisms available to protestors, being able to house and preserve digital media in this way will only grow in importance. This week’s shocking video from UC-Davis capturing campus police using pepper spray on seated Occupy students (and the subsequent powerful video of Chancellor Linda Katehi walking past throngs of silently protesting students without comment) is only the latest example in which on-the-ground, organic media created by participants in resistance movements continues to send shockwaves around the world. Indeed, this particular clip sparked outrage throughout the country, and the chief of the UC-Davis police has been put on leave.
As the #OWS movement grows and other social justice movements continue to document their struggles using participant-generated digital media, the need for projects like the Wisconsin Uprising Archive will continue to grow. With luck and with coordination, this project can serve as a model for other people and movements around the country, who undoubtedly have a similar need to preserve and document this history in the making.
One minute, ten seconds.
That’s how long I withstood a viewing of the video, posted on October 27th and now approaching two million views, of Hillary Adams, aged 16 at the time, being viciously beaten by her father, Aransas Co. family court Judge William Adams. In 2004, Hillary Adams was caught accessing content online for which she hadn’t paid, an act that enraged her father and prompted Hillary to turn on a camera she had hidden in her room to capture just such an event (apparently the beating caught on video in this incident was not without precedent).
Indeed, Judge Adams unleashes a torrent of verbal and physical abuse so profoundly violent, disturbing and out of proportion in any case, much less given the circumstances of this one as reported by his daughter, that I was unable to take any more after only 70 seconds. Hillary Adams endured the beating for seven minutes. According to published reports across the Web, the video carries on for the entirety of that beating, during which time Judge Adams threatens to hit his daughter in the face with a belt, enlists his (now ex-)wife to assist in the abuse (not atypical behavior in family abuse situations in which a tyrannical adult holds an entire family hostage) and actually leaves the room only to come back for a second round with another belt and possibly a board.
And while this tragic and sickening event may not have been without precedent in the Adams home – by all accounts, an upper-middle class, suburban arrangement in a town on Texas’s Gulf Coast – the fact that such a video a. has gone viral and b. was posted by the victim depicted within it certainly seems to be. That Hillary Adams enlisted YouTube as her distribution channel for the video has not been lost on many commentators around the Web, who have noted with sad irony that it was Adams’ use of the Internet in the first place that brought the wrath of her father upon her – not that any child can be held to blame for the violent actions of an adult. And as is abundantly clear in the brief moments I was able to stomach of this video, there is no behavior imaginable so heinous as to merit the vicious sadism of Judge Adams’ attack.
I’m in Seattle right now, enjoying the great ambiance that is the annual conference of the Association of Internet Researchers (AoIR). This is my third year at AoIR (the conference is in its 12th), and it’s always a pleasure to come to this conference, both for the people and for the insightful and exciting research they are doing.
This year I am participating on a panel with my colleague, Annette Vee, of the University of Pittsburgh, and another colleague, Matt Gaydos, of the University of Wisconsin-Madison, on some research projects regarding the Wisconsin labor protests (hereafter known as #wiunion). In our panel, entitled “Cheeseheads Rise Up! Social Media and/as Resistance in Wisconsin,” we will provide significant context for the events of February and March 2011 in Madison in reaction to Governor Scott Walker’s notorious “Budget Repair Bill.” This will include insider perspectives, as well as the situating of the events in a theoretical context and in relation to other resistance movements, past and present.
My own contribution will be primarily to discuss the events as they unfolded, as well as to present a nascent research project on the use of personal digital media (such as digital photos and video) and social media platforms (such as Twitter) to document and disseminate information about the events for both internal/local and external publics. This investigation focuses on the impetus and motives for such media creation and dissemination, and documents the practices of their production and curation. I am looking forward to a lively discussion, and was heartened earlier today when a colleague from Sweden shared his interest in attending the talk but confessed that he didn’t know many of the particulars about the events that were being shorthanded elsewhere. On, Wisconsin!
Since February 12th, I have been involved in participating in and documenting the protests against Wisconsin Governor Scott Walker’s “budget repair bill,” underway at the State Capitol in Madison, WI. As an academic engaged with issues of both labor as well as critical media scholarship, I have been keenly aware of the peculiar situation of being both directly involved in the protests while attempting to think about them in the context of my academic work, and in terms of larger-scale sociocultural movements of the past 30+ years. Throughout the past three weeks, I’ve found myself routinely returning to a position of negotiation between my public and private, political and professional, student, academic and grassroots self. Of course, the binarisms of these juxtapositions are false from the get-go, but perhaps the negotiation process has been made more apparent and more acute as I’ve found myself, moment-to-moment, simultaneously making decisions, documenting, responding to developments online and off, and simply facing the challenge of extended time periods in very cold weather.
Radical author/artist/activist/zinester Sloan Lesbowitz contacted me and asked me if I’d be willing to talk to her about what has been going on in Madison, in part, in the context of the online technologies and media (e.g., Twitter; Facebook) at the center of so much attention and activity in Madison and elsewhere in the world. Her questions were so thoughtful and provoked so much reflection in me that I asked her if I might share it with others. With Sloan’s permission, the conversation is posted below, with a few modifications as needed (and the original can be found here and here). I hope it is of interest.
In his article, “Surveillance in the Digital Enclosure,” scholar Mark Andrejevic takes on the task of questioning the often-idyllic and largely positive rhetoric frequently used to describe the various types of ubiquitous, cloud and always-on computing. In so doing, he invokes the sci-fi visionary of the 1980s, William Gibson, who imagined many characteristics of the modern networked computing environment before it actually existed, and reminds us that that vision was hardly idyllic. Rather, it was a dystopian near future that Gibson portrayed, characterized by surveillance and control.
Which narrative is more realistic, in the context of the brave new world of cloud computing? Andrejevic suggests that, despite the rhetoric of convenience and untetheredness, the Faustian bargain into which users enter in order to gain the convenience of access to their information and the suite of applications cloud and ubiquitous computing provisioners offer comes at a great, yet unseen cost: the profound recentralization, consolidation and subsequent commodification and control over both the content users upload to the cloud and their habits and behaviors that can be turned into valuable data, mined, extracted and sold.
If you haven’t come across it before, the Democracy Now! program is an excellent resource for the kind of in-depth, globally focused reporting that is notably absent from today’s mainstream infotainment options dominating cable and network TV and the Internet.
Host Amy Goodman frequently brings guests on to discuss contemporary issues such as net neutrality, media conglomeration, access to information, governmental transparency and accountability and other related issues. She’s been one of the go-to journalists staying on top of the WikiLeaks story, and the other day, she hosted a very interesting debate from two people. One is Steven Aftergood, a “transparency activist,” who is dedicated to some of the same principles WikiLeaks espouses, but who feels WikiLeaks will ultimately do more harm than good to open information principles. The other is Constitutional scholar and writer Glenn Greenwald, who is in favor of WikiLeaks and a frequent contributor to DemNow! and The Nation, among other outlets.
The nuances and standpoints in this debate are very interesting, and go well beyond the kind of black-and-white soundbites you might hear on network news, for example. Check it out if you have a few minutes.
Goodman frequently scoops major media outlets, as well, due to the in-depth reporting that is done for DemocracyNow, and the range of guests they invite on. I heard Assange’s UK legal representative, for example, confirm that Assange is in the UK, whereas CNN’s article on WikiLeaks today stated that they “could not confirm” his whereabouts. They needed only watch or listen to the interview from several days ago, in which the attorney states unequivocally that he is in Great Britain (“Attorney Confirms WikiLeaks Founder Julian Assange in Britain, Responds to U.S. Attacks,” Dec. 2, 2010).
Here is the link to the debate, which you can also read as a transcript on the same site.
On Monday, November 8th, the Information in Society Speaker Series welcomed Dr. Eden Medina of Indiana University to campus. Medina’s talk, “The Slipperiness of Socio-Technical Engineering,” focused on her work on Project Cybersyn, the 1970s-era cybernetics project envisioned to support and inform the economic agenda, and many nationalized industries, under the Chilean government of President Salvador Allende – a presidency abruptly ended by a bloody CIA-supported coup in 1973. Dr. Medina, whose own dissertation, published works and forthcoming book, Cybernetic Socialism, deal with the complexities and paradoxes of the Cybersyn project (known in Chile as “Synco”), gave an hour-long talk to an engaged audience of representatives from across the disciplines and from the community on the theoretical basis for cybernetics, its main proponents (e.g., Norbert Wiener), the background of those involved with Cybersyn, such as the English polemical iconoclast Stafford Beer who served as chief architect for the project, and the actual historical record of what the system achieved – and all that it did not. For her research, Medina traveled to Chile on multiple occasions to interview principals in the project, and also interviewed Beer before his death in 2002.
Medina’s own background in engineering and computing also gave her technical insight into the system’s cybernetics underpinnings and technical parameters, and the ways in which it did – and did not – ever work. The system was a combination of four distinct components: the Telex network called “Cybernet,” the software suite known as “Cyberstride,” an economic simulator that could be used for projections and scenarios known as “Futuro” and, most famously, the OpsRoom, which took an interior design cue from the set of Kubrick’s “2001”. The project was viewed with suspicion from both the right and the left, with alternate claims of Soviet-style totalitarianism and dehumanization being levied at various times from the different sides. In the end, Cybersyn was a victim of a combination of technological barriers, a politically-motivated coordinated campaign of bad press from the right, and the problematic nature of Beer’s own efforts to publicize the project.
(Curiously, as evidence of an ongoing lack of understanding around Cybersyn, Medina revealed to a stunned audience that a potboiler of a novel and attendant film on the “Synco” phenomenon had been released in the past few years in Chile, to some buzz. The multimedia clip we viewed in the course of the talk featured an imagined dystopian future in which Allende had survived and collaborated in a power-sharing arrangement with Augusto Pinochet to work as dictators controlling informational flow and everyday life using the Big Brother-like Synco system. A Chilean colleague of mine described the imagery and premise for the novel/film as being “in poor taste.” I certainly agreed.)
After the talk, we opened up the floor to one of the richest and most fruitful discussions yet in our Info in Society series. We had several provocative questions posed by audience members who included scholars of Chilean history, cybernetics and AI scholars, and even a man who had been a member of the original Cybersyn project in Chile, and had helped to wire the OpsRoom, among other things.
Eden Medina is assistant professor in the School of Informatics and Computing and adjunct assistant professor in the Department of History at Indiana University – Bloomington. Her research bridges the history of technology and the history of Latin America and asks how studies of technology can enrich our understanding of broader historical processes. She received her Ph.D. in the history and social study of science and technology from MIT in 2005 and completed an interdisciplinary dissertation on the history of Chilean computing and its relationship to state formation.
She is the recipient of a 2007-2008 National Science Foundation Scholar’s Award and the 2007 Institute of Electrical and Electronics Engineers Life Member’s Prize for the best article of the year in electrical history. In 2005, she transformed her research into a multipart installation at the ZKM Center for Digital Art and Media as part of the “Making Things Public” exhibition curated by Bruno Latour and Peter Weibel. Dr. Medina is currently associate editor for the IEEE Annals of the History of Computing. It was our pleasure to host her at GSLIS as a part of our series.
Medina, Eden. “Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende’s Chile.” Journal of Latin American Studies 38, no. 3 (2006): 571-606.
With travel out of the way and just a moment to breathe before turning back to piled up work demanding my attention, I have just a few moments to reflect upon AoIR 11.0 in Göteborg. As is often the case with these sorts of activities, so much of the richness of the conference came from the synergistic encounters with others in the hallways and in post-panel discussions; it was a pleasure to meet many I’ve known online (in some cases, for years) in person in this context.
To mention just a few of the many great panels and papers I saw, I was especially excited about the Friday afternoon “Google This: How Knowledge and Power Work in a Culture of Search,” chaired by Ken Hillis of UNC. This intriguing and oft-times highly philosophical panel provoked an explosion of engaged and engaging questions that enticed the session-goers to stay into the break – always the sign of a good session. The Q&A brought up issues of Google’s recent and very public exit campaign from China – accurately framed as a massive PR stunt and ultimately highly meaningless as a political act by the audience and panel alike. My work on contextualizing resistance to Google in an historical framework has me quite interested in this particular chapter in recent Google history and so I was glad to have a forum to engage in addressing some of my thoughts on the topic with fellow-travelers, having just come off a junket of news clip-watching highlighting Google’s extraction from China.
I’m in lovely (and cold and rainy) Göteborg, Sweden for the annual AoIR conference, 11.0 (and tweeted about as #ir11). I plan to participate in a pre-conference workshop, then I’ll be presenting on Thursday on an historical revisit to Minitel – its roots, the policy dimensions surrounding it, the political context for its creation and implementation, French industrial policy from the Post-War period on, and a discussion of how to read it in the context of contemporary attempts, in France, to push back on technological hegemony from the United States and elsewhere (Yahoo! and Google Book Search, anyone?). All that in 10-15 minutes! Naturally, to cover it all is impossible, so I’ll be hitting the highest of the high points on this talk and will look forward to delving deeper in the Q&A and in other fora with anyone interested; I want to definitely err on the side of being timely and not encroach on my fellow panelists’ presentations.
This conference has a well-deserved reputation of hosting some of the kindest and most-engaged academics around. I’m looking forward to the excellent workshops, panels, roundtables, and papers to come, and the great serendipities of the hallway chats and the impromptu meet-ups over coffee/cocktails.
We are live with our first pre-release of Volume #0: “What is the Cyborg Subject?” Tackling issues in fields as diverse as music, ecology, network localities, and psychoanalysis, this volume attempts to define one of today’s central philosophical issues: the subject in the age of posthumanity.
If ‘dark matter’ is what is unaccounted for in the universe, and is seen to be potentially dangerous, then dark leisure is what people do that can be seen to be disturbing or troubling.
We would like to invite contributions for a proposed edited collection looking at the ways that people spend their leisure time pursuing online activities that might be labelled unusual, dark, or deviant; for example, about dogging and swinging, pro-anorexia and cutting, suicide, death camps, and terrorism. This might include discussion boards, email lists, chat rooms, advice sought and given, photographs or videos shared, and events publicised.
Chapters should be empirically based, around 6000 words in length, and written in an accessible style suitable for an interested, intelligent general audience as well as for an academic readership in gender/cultural/media studies and sociology/anthropology. An examination of your ethical and methodological issues is required as these are obviously sensitive issues. We are also interested in research which prioritises issues of gender and sexualities.
Abstracts will form part of a book proposal to be submitted to an interested publisher.
Send abstracts of up to 250 words by October 31st and including a brief bio to: Julie Harpin email@example.com
Please forward this email to your networks and any colleagues who might be interested. Thanks.
Julie Harpin & Samantha Holland
Leeds Metropolitan University, UK
Information Science at Cornell (www.infosci.cornell.edu) is an interdisciplinary department within the Faculty of Computing and Information Science (www.cis.cornell.edu), bringing together from across the campus those interested in studying information systems in their social, cultural, economic, historical, legal, and political contexts. We are seeking to fill a tenure-track faculty position, broadly in the area of information policy. Areas of interest may include:
- the sociology or anthropology of information policy;
- contemporary debates (e.g., privacy, net neutrality, security);
- the interactions and tensions between the legal and the technological;
- the politics and/or economics of information institutions;
- the implications of information policy for design or practice.
The University of Michigan’s School of Information (SI) seeks an outstanding tenure-track faculty member at the Assistant Professor level to help establish a vigorous program of research and teaching in Digital Environments/Digital Humanities. New technologies and digital environments offer transformative opportunities for the humanities. At the same time, they bring unheralded challenges for accountability, authority, representation, intelligibility, and the assessment of value. Candidates for this position should have a demonstrated research record investigating topics of concern in the digital humanities. Potential areas of research include (but are not limited to) virtual collaboration in the humanities; design of interactive humanities-related media; credibility and authority of digital content; ethnography or history of digital culture; and curation of digital resources.
This position is part of a Digital Environments faculty cluster aimed at transforming humanities scholarship and engaging faculty and students in new modes of research, teaching, and learning. The Digital Environments cluster represents a partnership between the School of Information; the departments of English Language and Literature and Communication Studies; and the Program in American Culture, each of which is hiring a new faculty member through independent searches. Candidates for the School of Information position will engage with these new faculty as well as colleagues across the university, through such venues as research projects, a speaker series, reading groups, and teaching initiatives.
A version of this essay appeared at http://www.hastac.org/blogs/sarahr/some-musings-labor-culture-industry on February 9, 2010.
Theodor Adorno’s primary critiques in the selections brought together in Routledge’s The Culture Industry focus on what can generally be termed mass culture (or, to use the term he coined along with Horkheimer, “the culture industry”): those artifacts which are mass-produced, reproduced, and distributed, both as the means and the end to advertise, promote and consume the products.
The result is that what was once the province of cultural output such as artistic expression is reduced instead to artifacts and emblems of products and commodities; this then becomes the common cultural currency. Advertising stands in for art, and cultural objects are created expressly for consumption – by necessity, as a result of their mass-production – and to generate capital.
There is a flattening of the culture which, “while eliminating tension…abolishes art along with conflict” (Adorno 77). Devoid of meaning except for the most superficial, obvious and apprehensible on a large scale, the culture industry/products become a site of and for control. Adorno tells us that products of mass culture (such as sport, for example) have been used to reinforce, glorify and exalt modes of material production. Evidence of autonomy or creativity, such as in works of art, is eliminated (Adorno 99).
MIT’s Program in Comparative Media Studies in the School of Humanities, Arts and Social Science is seeking a tenure-track assistant professor of media studies to start in the Fall of 2011. Candidates should have a Ph.D. with a record of significant publication (or the promise thereof), research activity and/or experience relevant to civic media. Relevant areas of specialization include the contemporary practice, history, or theory of one or more of the following: user-generated content; forms of civic engagement such as citizen journalism, journalism and new media, and location-based social networks; innovative uses of media technology; media and democracy; youth culture and media literacies. Fluency in a broader array of theories, histories and practices associated with media studies will be considered a plus. Applicants should have teaching experience. Please send a letter of application, C.V., three letters of recommendation, and hard copy samples of your research and publications to Prof. James G. Paradis, Interim Director, Program in Comparative Media Studies, Room E15-331, Massachusetts Institute of Technology, Cambridge, MA 02139. Electronic submissions may be sent to firstname.lastname@example.org. The application deadline is December 9th, 2010. MIT is an affirmative action, equal opportunity employer.
The Communication, Culture & Technology (CCT) M.A. program at Georgetown University focuses on the relationship between new computational technologies of communication and areas such as science, scholarship, culture, government, media, business, journalism, and the arts. The program is developing a new lab, which will be a hub of technology knowledge, discovery and research, connecting CCT and Georgetown to the larger world of practice and innovation in all sectors where technology is central. In particular, the lab will provide a means for CCT to create partnerships with leading private sector information organizations developing innovations in digital media, knowledge management, and Internet applications; to remain at the forefront of research by creating relationships with initiatives in the Digital Humanities and the Information Schools; and to push forward the boundaries of knowledge through external support by agencies and foundations such as NSF and Mellon.
A version of this essay originally appeared at http://www.hastac.org/blogs/sarahr/exploring-platform-studies on February 9, 2010.
The concept of the “platform” has been around for as long as computing, and computer gaming, has existed, underneath, and underpinning, our video games, digital art, electronic literature, and other forms of expressive computing. In the recent past, digital media researchers and scholars have begun to approach computer language, or “code,” as a theoretical starting point to situate computers and computing in the culture, but there have been fewer attempts to go even deeper, to investigate the basic hardware and software systems upon which programming takes place, that are the foundation for computational expression and that define our interaction in digital contexts (2).
Just as Alex Galloway has called for studying the meaning and import of decisions made around protocol (2009), platform studies proposes similar inquiries around the hardware, on its own and as it interacts with operating systems, as the foundational environment in which we engage with digital media and particularly with games, for it is these constructs and systems that dictate our interactions with the machines and the worlds they propose to us. This encompasses the worlds the games invite us into, as well as their physical form. When examined from this perspective, it becomes clear that there is much to be (un)covered, discovered, and included under the rubric of “platform studies.”
Contemporary 3D virtual worlds are expansive, taking up the equivalent of thousands and thousands of miles of real-world space. The worlds they render on our screens are highly detailed, with every last shadow, ambient sound, ray of light and potential player interaction calculated and accounted for. Worlds are open to exploration; movement can take place on any vertex.
Actual screenshot of gameplay in “Assassin’s Creed 2,” XBox 360, 2009
As for me, I am old enough to remember what we called the Atari 2600 or, more simply, the Atari, in its first iteration (actually, I remember Pong, too, although I admittedly had access to a 2600 first). When I played Space Invaders or Tank, or any other of the earliest Atari games, I was captivated by my ability to affect movement and interaction with the TV screen, for the Atari was hooked to the family TV as its video output device. My physical movements with what now seem like absolutely primitive joysticks and paddles took on a mystical, magical and very powerful aura to my child self. Locating the joystick properly in real physical space directly impacted the pixelated battle on the screen; agility and speed were key. I often struggled to direct the missiles to their proper targets, but I was nonetheless entranced by the 4-bit sound and the rich colors displayed on the screen.
The iconic Atari 2600 joystick, a cultural phenomenon in and of itself.
Originally posted at http://www.hastac.org/blogs/sarahr/vast-world-vast-narratives-fandom-and-participatory-culture on March 22, 2010
What makes a narrative vast, according to the contributors to the recent MIT volume Third Person? Based on the varied content, spread across multiple media, covered by the book, vast narratives receive their designation not only due to the interior nature of the narrative itself, which may span unusual lengths when measured in years, in the amount of content produced, or in the number of media in which the world is present, among other features (Harrigan and Wardrip-Fruin 2).
Yet the volume is also vast, as in catholic, given its broad interpretation of what constitutes a narrative: consider outsider artist/author Henry Darger‘s inclusion alongside other constructed worlds and universes of comic books (Ford and Jenkins), traditional paper and pen gaming (Laws), video games, television programs whose mythologies extend beyond the reach of traditional broadcast and into transmedia, such as in the case of Lost (Lavery). (In the interest of full disclosure: Lost is of particular interest to me at present, as I only discovered it last semester, watching five seasons on Netflix while I read about the show elsewhere.)
Alternate reality games extend the Lost world beyond the confines of the original television medium; endless clues and the constant suggestion of deeper meaning in the show’s symbols, along with comic book-like world- and story-building (some characters even read comic books on the show), allow viewers a sense of interactivity with/in the narrative. Is a fantasy or sci-fi setting more easily adaptable to a vast narrative? Is it because of the pliability of the rules, so to speak, of physics, time, space and who can populate the narratives in these genres? Is it due to the relative rigidity of their Dorothy-like structure – Oz vs. Alice’s Wonderland (Bartle)? Is it some combination of the two?
These settings and protocols have begun to seep into our understandings of possibility and potentiality for narrative structure, as well as what is doable (Bartle 107). They have developed into understood sets of rules that become so entrenched in cultural material that they are no longer questioned, nor their origins traced. Purchase a new fantasy game for Xbox or PS3 and be asked to create a character who is a magic user, fighter, or healer. Choose armor and weapons and prepare for a quest after learning about the character’s world and its complex culture and mythology. These processes are routine and mundane, and the masses have now become conversant in their operation, mechanisms and tropes.
A version of this essay was originally posted at http://www.hastac.org/blogs/sarahr/digital-labor-cold-war-roots on February 9, 2010.
Doing some reading over the past week, I was prompted to think about, then comment on, a chapter by Friedrich Kittler on Cold War computing technology and the implicit (and explicit) ways in which an examination of so-called “defense technology” comes into direct contact with, and within the purview of, media studies, information studies and labor studies.
Specifically, I am interested in uncovering the history of these technologies and their development, particularly when many defense technologies have been considered value-neutral or even beneficial (and perhaps were, particularly when they moved from the province of military applications to consumer or mass-market ones). Additionally, the process of uncovering the hidden labor embedded in digital and computing technologies and processes is inextricably tied to the critically important task of uncovering their hidden agendas, applications and roots within the military-academic-industrial complex.¹
“The SAGE radar display console seen here presents a picture of the air defense situation within its assigned geographic area. Using buttons and switches on the console, the Air Force Airman First Class who is operating the console can request information to be displayed such as speed, altitude and weapons availability and location, and he can direct action to be taken against an attacker. With the light gun in his right hand, the operator selects radar tracks for identification and display on the SAGE Direction Center’s summary board.” Photo Credit: IBM online archive.
Fred Turner, in a talk a few weeks ago at the University of Illinois, referenced SAGE, for example, one of the first interlinked computer systems and part of the U.S. military’s DEW (Distant Early Warning) system. Kittler notes, in the same writing, that the Semiautomatic Ground Environment air defense system was conceived as an answer to the Soviet atomic fleet, and it brought us everything “today’s computer users have come to love: from the monitor to networking to mass storage” (182). Many of these military innovations have found direct applications and homes in the civilian sector, a “spin-off called information society [that] began with the building of a network that connected sensors (radar), effectors (jet planes), and nodes (computers)” (182). Not only, therefore, has the technology developed by the military, in conjunction with partners in academe and industrial R&D, made its way into daily life, but so, too, have basic concepts of organization, processes and structures. Any study endeavoring to examine these organisms must therefore examine their ties to other systems, projects and goals, particularly during the technological boom of (and promulgated by) the Cold War.
I recently undertook a preliminary (to me) study of a state information system in late 20th-century France that was developed for civilians and laypeople.² While this system, popularly known as the Minitel, was fundamentally implemented for the populace at large, by tracing the policy development and goals at the root of its creation, I quickly discovered that military and national sovereignty concerns were, in fact, at the core of this massive national technology project. Indeed, a desire to be able to calculate nuclear strikes and impacts in simulation on IBM mainframe computers drove then-president and erstwhile war hero Charles de Gaulle to institute a state information policy where previously there had been none. To this end, Kittler’s comment that, since 1941, wars “no longer needed men, whether as heroes or as spies,” but were “victories of machines over other machines” (182) does not seem like much of a reach at all.
The Third Graduate Student Conference on the History of American Capitalism: “Capitalism in Action”
Sponsored by the David Howe Fund for Business and Economic History at Harvard University.
Keynote Speaker: Jackson Lears
Discussions of American capitalism often uncritically rely on loaded but abstract terms, from “markets” to “capital.” This conference aims to bring together emerging scholars who are interested in interrogating the nitty-gritty details of how capitalist systems have been imagined, constructed, maintained, altered, and challenged by an array of different historical actors in the United States and across the globe. What does “the economy” look like once we shift our focus from intangible market models towards the concrete workings of capitalist society and culture? In this conference, we hope to expand our understanding of American history by analyzing many different moments of “capitalism in action.”
We welcome papers by fellow graduate students from many different fields, such as cultural, social or business histories of capitalism. We encourage papers on a range of diverse topics. Possible paper subjects could include anything from mortgage-backed derivatives, land speculation and the geography of garbage to corporate personhood, consumer branding and the political economy of baseball. We welcome the submission of panels as well.
Interested graduate students should submit a C.V. and a 750-word abstract of their paper (description, significance, sources, current status) to:
History of Capitalism Conference
Charles Warren Center
4th Floor Emerson Hall
Cambridge, MA 02138
The submission deadline is Nov 1st, 2010. Those selected to present will be notified by Nov 19th and receive a stipend towards travel costs.
For additional information, please see: www.fas.harvard.edu/polecon or email email@example.com. For the websites of previous conferences, please see www.fas.harvard.edu/~polecon/conference/ and www.fas.harvard.edu/~histcap/.
Faculty supervisor: Professor Sven Beckert
Organizers: Nikolas Bowie, Eli Cook, Jeremy Zallen and Caitlin Rosenthal
History of the Present, a Journal of Critical History is a new peer-reviewed journal published by the University of Illinois Press. The editors (Joan Wallach Scott, Andrew Aisenberg, Brian Connolly, Ben Kafka, Sylvia Schafer and Mrinalini Sinha) invite submissions that approach history as a critical endeavor for publication in volume 2 number 1 (summer 2012). We are particularly interested in essays that press the boundaries of history’s disciplinary norms. In that spirit, we also seek submissions from scholars thinking through the past in fields outside of history.
We welcome articles that:
-examine the historical construction of categories of knowledge.
-analyze how relationships of power are established and maintained, and how history has served to legitimize or challenge them.
-are explicitly theorized without being restricted to the discipline’s conventional categorizations of method and subject (i.e. social, cultural, intellectual, legal, or political history).
Manuscript submissions and queries to: firstname.lastname@example.org
The Department of Communication Arts at the University of Wisconsin-Madison seeks applicants for a tenure-track position at the rank of Assistant Professor in Media and Cultural Studies, to begin August 2011. Ph.D. in a related field required prior to start of appointment. Candidates will be expected to conduct research, develop and teach courses, and supervise graduate students in the critical/cultural analysis of television and electronic media with a specialization in at least one of the following: global media, gender and/or identity studies, or industry/production studies. Candidates must show potential for excellence in scholarly research and teaching. See also http://commarts.wisc.edu. Please submit a CV and a letter detailing interests and capabilities, and arrange to have three letters of reference sent, to Professor and Chair Susan Zaeske, Media and Cultural Studies Search, Department of Communication Arts, University of Wisconsin-Madison, 821 University Avenue, Madison, WI 53706. Electronic applications will not be accepted. The deadline to assure full consideration is November 14, 2010. EOE/AA. Employment may require a criminal background check.
Unless confidentiality is requested in writing, information regarding the applicants must be released upon request. Finalists cannot be guaranteed confidentiality. The Department of Communication Arts is committed to building a culturally diverse intellectual community and strongly encourages applications from women, ethnic minorities, and other underrepresented groups. Questions about the search may be directed to Professor Mary Beltrán at email@example.com.
The Women’s and Gender Studies Department, in collaboration with the Institute for Research on Women (IRW) at Rutgers University, is pleased to announce a two-year postdoctoral fellowship supported by the Andrew W. Mellon Foundation. The selected fellow will receive a stipend of $50,000 each year as well as an annual research allocation of $2,000 and Rutgers University health benefits. The fellow will pursue research and teach three courses in the Women’s and Gender Studies Department during the two-year term of her/his appointment. The fellow also will participate in seminars and other activities organized by the IRW.
The Women’s and Gender Studies Department (http://womens-studies.rutgers.edu/) has particular interest in scholars of Asian-American Feminist Studies; Feminist Science Studies; New Media, Arts and Technology; Religion, Sexuality, and Gender; and Gendered Violence, but welcomes applications from all scholars who feel that their work would benefit from affiliation with our department and with the IRW.
Herbert Schiller’s chapter “Data Deprivation,” from his 1996 work Information Inequalities, focuses on the great shift in power and control from state to private actors, resulting in a massive consolidation of power in the corporate sector, particularly over the control and dissemination of communication and information (43). Now almost 15 years old, this essay draws out the peculiar contours of this new power structure and highlights the disturbing characteristics of that shift, including the crystallization of already-underway processes (in the United States and, by extension, abroad, wherever the transnational influence of these companies reaches) such as media conglomeration (44), leaving important informational functions, vital to a vibrant democracy, in the hands of a relatively elite few with considerable agendas of their own.
The results of the shift from state to private hands have immense and critically important ramifications, Schiller convincingly argues. One major arena of this transformation is the technologically facilitated disappearance of some information (such as that at the federal level in the context of changing administrations) (48), and the lack of transparency and accountability under new privatized paradigms where private corporations stand in for the government/state. Using techniques such as privatization, contracting and deregulation, corporate contractors have taken on the process of creating, managing, storing and disseminating (or hiding, in some cases) vast amounts of information. Indeed, the recent “Top Secret America” report in the Washington Post reveals that there are now over 2,000 private firms engaged in data analysis for the purposes of national security alone, with little, if any, public redress available to learn more about or understand what these firms do.
Meanwhile, as the government cedes control over the production and dissemination of material to corporations that treat it as a commodity (46) and are then under no obligation to be transparent, the corporations themselves have seen a major rise in their own profile, to the point that, as Schiller describes, “corporate speech has become a dominant discourse, nationally and internationally…”, forcing individual speech aside or drowning it out completely (45). This trend has recently reached its apex in a Supreme Court ruling that codifies the “right” of a corporation to “speak” politically (and monetarily) at a scale no individual citizen could ever reasonably hope to attain (cf. the Citizens United case of 2010).
Schiller’s view of the near future he did not live to see may, at first blush, seem unusually prescient. Yet his clairvoyance stems simply from tracing to their logical conclusions the tendencies he had identified, in the mid-90s and much earlier: consolidation, conglomeration, and shifting control across the military-academic-industrial complex. Many of these tendencies have yet to fully play out and continue today, particularly in the context of the Internet.
Robert Darnton is a historian and the Director of the Harvard University Library whose work has focused on the history of the book, primarily in 18th-century France, on which he is an expert. As such, he takes a long view of books and book history as they pertain to the culture. His interest in and ability to dissect the complexities of the Google Book Settlement (GBS) make this article particularly helpful to those trying to get a big-picture view of Google’s voracious program of digitizing the contents of the major academic libraries in the United States and elsewhere.
As the program has developed, critics have become concerned about its size and scope, its lack of transparency, and its poorly articulated long-term plans. The issue of so-called “orphan works” – works still under copyright whose rights holders cannot be identified or located – as well as hints of monopoly practices have been of particular concern. Darnton also highlights a more generalized concern about Google itself: its lack of commitment to the public good on any long-term basis. Despite a company motto that espouses do-gooding as the primary mission statement (“Don’t be evil”), Google remains a for-profit private entity: “as a commercial enterprise, Google’s first duty is to provide a profit for its shareholders, and the settlement leaves no room for representation of libraries, readers, or the public in general” (Darnton 2009).
As Google Books has turned its eye offshore, some of its potential target markets and sites of content, France and Germany, responded from a state level, with typically high-brow/high-culture arguments appealing to each country’s long history of resisting the commodification and control of its cultural output by others. The case of France is particularly reminiscent of another attempt by that nation to resist U.S. corporate digital hegemony, constructing the Minitel, a major national digital communications infrastructure and platform, in large part to resist the encroachment of IBM.
Yet the US government has finally, albeit weakly, gotten into the anti-GBS act. Instead of the high-culture protectionist rhetoric of Germany and France, the Department of Justice preferred an appeal to free markets. Darnton finds irony in “foreign governments defending a European notion of culture against the capitalistic inroads of an American company.”
One of the DoJ’s main concerns centered on the issue of orphan works, which, under Google’s original plans, would simply be sucked up into the GBS, with the potential to be sold back via the subscription service planned around the project, rather than relegated to a place in the commons. The new solution in GBS 2.0 does little to resolve the issue: the proposed opt-in paradigm remains unenforced, and the current opt-out paradigm stands. Google can obtain content unless the creator(s) opt out; if a creator cannot be located, or does not know to object, the material is considered fair game for inclusion, and Google can happily digitize, repackage and sell it. This turns the GBS situation into one of obligatory produsage, where the seeming nuance of “opt-in” vs. “opt-out” actually becomes the key factor. And, once again, the regime of contracts seems to trump other regimes and operate as de facto law.
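The practical weight of that seeming nuance can be made concrete. In a hypothetical sketch (the function names are mine, not Google’s or the settlement’s), the only difference between the two regimes is the default applied when a rights holder is silent or cannot be found – which is to say, exactly the case of the orphan work:

```python
# Hypothetical sketch of why "opt-in" vs. "opt-out" is the key factor.
# A rights holder's stance may be True (affirmative consent),
# False (affirmative objection), or None (silent, unreachable,
# or an orphan work).

def included_under_opt_out(stance):
    # Opt-out (the settlement's paradigm): a work is included
    # unless the rights holder affirmatively objects.
    return stance is not False

def included_under_opt_in(stance):
    # Opt-in: a work is included only with affirmative consent.
    return stance is True

# The two regimes agree whenever a rights holder speaks up...
assert included_under_opt_out(True) and included_under_opt_in(True)
assert not included_under_opt_out(False) and not included_under_opt_in(False)

# ...and diverge exactly on silence: orphan works are swept in
# under opt-out, excluded under opt-in.
print(included_under_opt_out(None))  # True
print(included_under_opt_in(None))   # False
```

The entire population of orphan works sits in that `None` case, which is why the choice of default, and not any individual act of consent, determines the fate of the corpus.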
Darnton, writing in November of last year, proposed state intervention, in the form of a national digital library, as a viable alternative to the GBS. Reading it now, just slightly less than a year later – in the midst of economic meltdown, the Gulf oil disaster, unrelenting war and the biggest release of digital state documents in American history, an event which will undoubtedly sour the government on any new forays into facilitating digital data access by the people – such a notion seems like a naive and distant dream. Meanwhile, Google will chug merrily along, ingesting unquantifiable amounts of material into its insatiable maw.