The time has come for me to share some big news. I am honored and thrilled to report that I will be joining the Faculty of Information and Media Studies (FIMS) at the University of Western Ontario, at the rank of Assistant Professor, this summer. Thanks to all who supported me through this process. I look forward to the coming adventure at this wonderful institution!
Today, the IEEE Computer Society reported, via its Facebook page, on the 20th anniversary of NCSA Mosaic. This web browser, developed at the University of Illinois’ National Center for Supercomputing Applications, was distributed free of charge, and its graphical interface was largely credited with sparking widespread interest in the Web.
As I reflect on my own 20-year anniversary as an Internet user, and as an almost-grad of the University of Illinois myself, this story caught my eye. Indeed, I experience something of a rush every time I have occasion to go over to NCSA, where a decorative plaque commemorates Mosaic and its contributions to computing.
My own first experience with Mosaic was somewhat inauspicious, however. In early 1994, I was working as a computer lab specialist in the Memorial InfoLab at the University of Wisconsin-Madison, the campus’s largest and busiest computer lab at the time. One day, I reported in to work and caught up with my colleague, Roger, as we strolled the floor of the lab. We stood in front of a row of Mac Quadras (the “pizza box” form factor) as they churned and labored to load something on their screens. It was an interface consisting of a greyish background and some kind of icon in the upper corner that seemed to indicate loading was in progress, but nothing ever materialized (NB: given the time, this was more likely due to a lack of content or a choked network than to anything inherent to Mosaic). Turning to Roger, I asked, “What _is_ that?”
“That,” he replied, “is NCSA Mosaic. It’s a World Wide Web browser. It’s the graphical Internet!”
My response was as instantaneous as a reflex as I sputtered out a disdainful reply. “Well,” I scoffed, “that’ll never take off. Everyone knows the Internet is a purely text-based medium.”
And the rest, they say, is history. Happy birthday, Mosaic.
Sarah T. Roberts: Stephanie, thanks so much for taking time to talk with the HASTAC community about your excellent new book, Cached: Decoding the Internet in Global Popular Culture. I think this book will speak to many different kinds of people involved with HASTAC, given its interdisciplinary nature and its locus at the intersection of policy, history, technology and culture studies. One thing you discuss early on in the introduction to the book is your goal to “recover the underlying assumptions involved in the adoption…” of the Internet and concomitant technologies. Can you talk to us a little about what those underlying assumptions are, and how they relate to what you describe as a tendency to be “erased or at least diminished by our present uses and accepted memories of the internet”?
Stephanie Ricker Schulte: I am glad you asked this question because it spotlights a central tension in the way I have chosen to tell my story about the internet as well as a central tension with internet technology itself. Rather than one set of underlying assumptions fueling the internet’s growth and adoption, there were actually several overlapping and conflicting sets of assumptions about the technology as well as what it did (or could) mean. In this book, I ask, “How did conflicting voices—such as advertisers, journalists, users, and policymakers—help establish common sense notions about the internet?” So one of my goals was to write about the ways competing origin stories of the internet and competing visions of the technology’s present and future played out in decisions about the technology itself. In a sense, this book is designed around conflict.
Cached investigates a number of “common senses” about the internet that circulated between the 1980s and 2000s, sometimes simultaneously and often in paradoxical relation to one another. For example, the internet was considered a war project, a toy for teenagers, an information superhighway, a virtual reality, a player in global capitalism, and a leading framework for comprehending both globalization and the United States. I could have written a book that chose a side, that tried to settle these historical conflicts about how to understand the internet by arguing for the “correctness” of one of these positions over another. Instead, I chose to show how the tensions in these internet narratives—their interrelatedness and the conflicts between them—are what shaped the internet’s use and development as well as the policy decisions around the technology. It is less that there was a single common sense about the internet or a “best” way of approaching it than that there was (and is) a continuous struggle to name and understand the internet’s implications. The struggle that produced the internet is, I think, what makes the best story.
And, as you referenced in your question, some of these tensions and transitions between different “common senses” resulted in the cultural erasure of internet technology itself. To use a rather simple example, in the 1980s we talked about “computer networking”—a phrase that immediately conjured up images of wires physically connecting machines—but in the 1990s we talked about “surfing the world wide web.” As more and more of the public (not just military and university employees) began using the internet, they were also reimagining it, focusing more on the experience of using the internet and less on the technologies that made it possible. Clearly this shift had a lot to do with the evolving capabilities of technology itself. It is hard to imagine ourselves “surfing” over a world-wide virtual landscape until the internet became visual as well as textual. But this shift was also worked out outside technology itself, in advertising campaigns and political strategies that linked the internet to America’s global and economic dominance. So I make the case that, in some ways, the internet’s narrative structures helped erase its physical ones. Instead of the internet imagined as a bunch of plugged-in computers, the computer became imagined as a gateway to the internet. The internet lost its body, so to speak. As such, we could more easily imagine it as a global yet also deterritorialized space or as an experience rather than a product of infrastructure.
STR: In the first chapter of your book, you anchor your discussion of early networked social computing (by this, I mean networked computing outside the confines of the academy or the military context) around the 1983 film War Games – certainly an important cultural artifact for anyone of a certain age at the time of its release. Using this example, and other related moments from the news media and other sites and examples of mainstream cultural discourse, you demonstrate the profound relationship between these representations and subsequent policy and legal regimes. Would you share with the readers of this interview how you’ve theorized the interplay between these phenomena?
SRS: For me, it is crucial to always remember that the internet is much more than a sum of its technological parts. Along with other scholars like Manuel Castells and Lisa Gitelman, I approach the internet as cultural. To say that the internet is a culturally-constituted, historical object is to say that qualities essential in the technology itself have always mattered, but they do not alone determine the ways the internet evolved or how it has been and is understood—either as a technology or as part of our lives. War Games is a fun example and was a great way to start the book because it’s a clear illustration of how circumstances in history and culture mattered, how circumstances helped shape the internet and the way a range of people understood it. Filmmakers, software designers, marketers, journalists, military researchers, and U.S. Senators, with all their varying motives, joined in explicit, competing and sometimes overlapping conversations about what the internet was and should be. War Games and its reception worked as the site where these different visions were expressed, collided and altered one another. Together, these complex cultural understandings of the internet guided its political, commercial and technological future in the United States and elsewhere.
In light of this example and others that illustrate a range of forces shaping the ways people engage the internet, I argue that technology is culturally flexible and not fixed by its material parameters. As a result, I tell its technological history alongside the history of mediated and political narratives about the technology, to provide insight into the ways the internet transformed in accordance with larger changes in culture and policy as much as with the intrinsic capabilities of technology itself. This cultural history approach demonstrates that it is impossible to speak separately about policy, technology and culture. Cultural values produced in, through and about communication technologies shape policy debates and help determine policy outcomes. Policy, as a cultural actor, shapes technological innovation and development as well as validates and legitimizes particular cultural values over others, amplifying them in the cultural sphere. Technology does not and cannot happen in a vacuum outside the spheres of culture and politics, so it cannot on its own dictate its cultural and political future. So this approach helps me demonstrate in very concrete terms how representations of the internet both relied on and helped formulate larger cultural assumptions about the nation, the state, democracy, public space, consumption, and capitalism.
STR: One thing you have done with this book is to really surface up, and then problematize, a lot of the received wisdom that permeated early discussions of the internet – politically, as a site of commerce, and as a manifestation of American-ness – with specific American narratives foregrounded. Can you talk about some of the notions you wanted to critically examine with this book? What do you think we can learn from these past framings, and what do they suggest about contemporary narratives?
SRS: The internet operates transnationally, yet is based in national infrastructures and is subject to national legal and economic systems. So one of my goals with this book was to begin teasing out the complexities of that, to contrast cultural and political approaches to the internet in the United States, Europe, and the Middle East in a way that accesses the dynamic ways cultural, political, and economic power operate. On the one hand, I focused on alternative visions of the internet available outside the United States that help reveal the hidden national flavor in our domestic visions of the technology. On the other, I focused on how and why American visions ultimately dominated the cultural history of the internet, how and why the internet was imagined as self-evidently American, democratic, and capitalist. I show how voices in cultural representations and policy initiatives—originating both in the United States and abroad—helped to construct the internet as inevitable and as naturally regulated by free markets and corporate forces. So, I acknowledge the United States’ dominant and formative power in the internet’s history, but I don’t take that historical outcome for granted. I simultaneously recognize the technology’s internationality, the influence of international actors, and the variable models for the internet offered by other countries and cultures. Ultimately, the comparative elements of my book reinforce the ways the internet was a central location for key national and international debates about globalization, democracy, public space, consumption, capitalism, and America’s place in the world.
In doing this, I hope to disrupt today’s uncritical debates about the internet’s development and to prompt us to collectively recognize the internet not only as a technology, but also as a cultural and political formation. This recognition is especially important as new technologies appear. I want us to collectively and critically examine our common senses about technology: their historical legacies, political implications, and unacknowledged cultural assumptions. All of those things matter to our technological practices, our innovation strategies, and our policymaking process. Acknowledging the roles our national identities and transnational activities play in our understandings of the internet—in particular the ways we imagine technological development as naturally American, democratic, capitalist, and inevitable—is key as we shape communications technologies increasingly central to daily life around the globe.
STR: Finally, what has your own relationship to the internet been? I am observing my own 20th anniversary this year of my life online, and am feeling extremely self-reflective about that. To be sure, my life online changed my life, in general, in very real and tangible terms. What was your own trajectory with computing and the internet? What was the interplay of the genesis of this research with your own experience living through this period of time?
SRS: The first time I experienced computer networks was watching my best friend MUD in the 1980s. I observed as she dialed up and typed, traversing the complexities required in that historical moment. I didn’t understand what she was doing, let alone how she was doing it. To me, she was a teenaged Neo from The Matrix. I remember asking her question after question, and she didn’t seem to understand why I was asking many of them. For her, computing was already intuitive. Many of her assumptions, of course, became the naturalized mainstream visions of the internet that I ultimately wrote about. In some ways, this book is my attempt to regain the fresh perspective I had in the late 1980s and apply it in a more systematic fashion to the assumptions that have been codified in culture, policy, and technology itself.
STR: Thanks, Stephanie, so much for your time in talking with me about Cached and about the cultural meaning of the Internet, more broadly. It’s really been a pleasure.
SRS: Sarah, I’m honored to speak with you today. I have long enjoyed HASTAC, one of the most forward-thinking and diverse consortiums around. Many claim interdisciplinarity, but HASTAC actually provides it.
Stephanie Schulte is Assistant Professor of Communication at the University of Arkansas, where she researches communication technologies, media history, media policy, popular culture, and transnational cultural exchanges. Her work has appeared in the Journal of Television and New Media, Mass Communication and Society, the Journal of Transnational American Studies, and Feminist Studies. She is the author of Cached: Decoding the Internet in Global Popular Culture (New York University Press, 2013) and is currently working on a second book length project, which investigates technology and notions of the public good, as well as an interdisciplinary edited collection focused on media and citizenship.
A version of this post appears at hastac.org.
Acknowledging the passing of my best pal, Chesterfield “Chester” Roberts. Born November 1997 in Chapel Hill, North Carolina. Chester passed away peacefully at home surrounded by his mom, his uncle, and his feline sister and littermate, Lucy. We’ll miss him a whole bunch.
Just a brief note to share that my colleagues Miriam Sweeney, of GSLIS, Ergin Bulut, of the ICR, and I have had our panel proposal accepted for IAMCR 2013 Dublin, to be held June 25-29th at Dublin City University. This year’s conference, whose theme is “Crises, ‘Creative Destruction’ and the Global Power and Communication Orders,” drew over 2,400 submissions. The three of us are honored to have had our panel, entitled “Demystifying the ‘Digital Economy’: Critical Interventions in Online Moderation, Anthropomorphized Virtual Agents and Gaming,” selected by the Political Economy section of the conference.
It’s not every day that one’s wildest sci-fi inspired dreams are achieved, but yesterday was such a day.
Over the past few years, I have watched the quiet development of 3D printing unfold. It has been evolutionary rather than revolutionary, with innovations taking place in the rarefied domains of university R&D centers and labs, or in the basements and garages of tinkerers, hackers and makers – the kinds of gadgeteers and engineers willing to bang their heads against a problem until they triumph with a solution, and who know that the headbanging part is the actual fun of it. As these printers (in reality, they’re something more akin to giant glue guns that can read instructions and move rapidly along x, y and z axes to produce objects by stacking layers a fraction of a millimeter thick – here’s a good layman’s explanation of a few different types of printers available) have come down in price and increased in ease and reliability, they have, predictably, been turning up in more and more places. They are coming out of the garages and into the light, and they are finding homes in community hackerspaces, fab labs, and even libraries.
My own city has benefited greatly from the presence of a fantastic collaborative hackerspace, open to the public via frequent classes, events and a monthly membership. Last night, Sector67 offered an intro to 3D printing, so, for $20 and two hours of time, I was treated to a five-person whirlwind tour of the state of the art by none other than Sector’s founder and extremely knowledgeable 3D printer nut, Chris Meyer. During our two hours, we were given an overview of all of the extant 3D printing material options: powder, ABS, PLA (a corn-based plastic that previously mostly functioned as a support material in commercial ABS product manufacturing). We walked through the various 3D printers out there, ranging from the ridiculously DIY (the RepRap, made from 3D-printed parts – a weird, amoeba-like thing to think about) to the very expensive and more functional than hackable (the MakerBot Replicator 2). We examined the output of each, discussed the potential faults and pitfalls of working with the printing control software, ReplicatorG, and – joy of joys! – my idea for a print was chosen by Chris as the example we would watch be created before our eyes while talking over the finer points of “jitter” and Skeinforge. We would be printing out a 3D version of a game tile from the popular German boardgame, the Settlers of Catan – with my apologies to the guy who wanted a camera case for his GoPro HeroHD camera.
I went to download the plans for my gamepiece from the crowdsourced 3D design site Thingiverse, and hit a potential stumbling block: the CAD-style STL files I had previously drooled over were nowhere to be found. Tabling that mystery for a moment, I quickly found an alternative. We downloaded the files, picked one component (an “ore tile,” for any Settlers nerds out there), and started printing. Forty-nine minutes later, I had one lukewarm tile in my hot hands.
And here we get to the sci-fi dreams: look, simply put, watching this thing be created before my eyes was incredible. Anyone who has had a chance to come face to face with a functional version of one of these machines has undoubtedly gone away mesmerized, with visions of what they could do with one of these things in the context of rapid prototyping, proof-of-concept testing, materials development (think: plastic textiles, printed on demand), and just plain fun. Where could any harm lie?
Well, it turns out that there is a potential dark cloud on the horizon for 3D printing. When I got home, I did some poking around, looking for the Settlers of Catan 3D boardpiece files that had initially piqued my interest in 3D printing as a technology with personal significance a couple years ago. I even remembered the username of the user on Thingiverse who had created them: Sublime. While I found many references to Sublime and his/her awesome game pieces, all links led me to a big, fat, dead end – and possibly the best 404 I’ve ever seen. What had happened to Sublime, and to the Catan plans? Well, it looks like a cold wind blew into the 3D printing universe: the chilling specter of copyright.
And while I couldn’t find Sublime anywhere, I did find this Public Knowledge post that suggested that (a) Sublime was getting nervous about potential infringement and (b) there was likely no infringement going on. After all, as PK pointed out, “…the pieces themselves are not even distributed. Instead, if you want the pieces you need to download the files, boot up your 3D printer, and make them yourself.” This post is related to a larger whitepaper that PK authored, aptly entitled, “It Will Be Awesome if They Don’t Screw it Up.” This contribution delves further into the specifics of original product creation, the making of copies, the nature of patent, trademark and other relevant issues. Yet the title says it all: given that we are still dealing with an ongoing cultural and legal war over what constitutes ownership of IP in digital material contained in computer files and composed of 1s and 0s, it seems unlikely that the introduction of the ability to create tangible, functional 3D objects – or, more to the point, to _replicate_ extant ones – will have clear-cut solutions or yield easily divined answers. Further, it’s not as if the legal posturing and wrangling of the past 15 years has slowed the trade in copyrighted materials in the slightest. How long until we see the flip, dark side of Thingiverse, a Pirate Bay for files pulled for copyright infringement, illicit materials, weapons? The latter is not pure conjecture; a smart-alecky law student type from Texas has made his share of headlines with his dream to freely distribute handgun blueprints for DIY arsenal-builders.
Given this, are we likely to, as PK puts it, spoil everything by screwing it up? As 3D printing technology is poised to make the leap from the esoteric to the commonplace and from the rare to the ubiquitous, questions about the technology will invariably shift, from “Can we do this?” to “Should we?” Meanwhile, I plan to book as much time at Sector67 as I can to get my board printed out before it’s too late.
A special shout-out to the students of LIS 502LE, visiting this blog at the end of their hard work in the inaugural intersession LEEP Foundations in LIS course. Congrats on a job well done, everyone!!
My blog posting has been on the wane of late, but it has been for a good reason. Work on my dissertation has continued apace, which means I’ve been putting the vast majority of my writing efforts towards it and fewer here. That having been said, I relish this space as a great starting point to help me work out my thoughts and capture issues as they are unfolding – your comments and participation are a great help to me, in that end, and I appreciate greatly the participation of those of you who read this site. I look forward to our conversations in the coming year. Thank you!
One of the few things I’ve been able to give time to that is not directly tied to my dissertation work has been switching a good portion of my computing to an open source platform. I’ve been a Mac user since the late 1980s and online (on the Internet) for just about 20 years. In that time, I’ve watched with interest as the open source software movement, in general, and Linux, in particular, have gained momentum and a following. My own attempts at using Linux span pretty much its entire existence, and I’ve tried more distros than I can remember – Red Hat and Ubuntu come easily to mind, along with forays into the BSDs (NetBSD/FreeBSD). Because spare PC hardware on which to run these OSes was often out of my grasp, or because the technical acumen they required lost out to the ease and familiarity of my everyday OS of choice, MacOS, many of my attempts to work Linux into my own computing life ended prematurely.
My increasing frustration with the restrictions being placed on Mac OS, and its increasing iOS-ization, as well as my disdain for both the experience of Windows and the practices of its maker, led me to put out a call to my friends for the headlines on the state of the art of Linux computing. The call was met with a unanimous response: check out Linux Mint. This elegant, aesthetically lovely project, led by Clément Lefebvre and many other volunteers and based on a branch of the Ubuntu distro, was also reported to be easy to use, user-friendly and accessible (at least as far as Linux goes). Ready to take the plunge, a friend and I installed Linux Mint 14 on the refurbed Lenovo laptops we keep for Windows emergencies (when we are forced to run Windows for some task or other). He was a Linux newbie and I more of a veteran, albeit one facing a steep uphill battle to get my chops back. To our delight, the OS installed with ease and we were up and running, and using, Mint – and abandoning Windows – almost immediately.
Part of my joy in this process has been discovering the open source analogues to so many of the software packages and processes that everyone I know of, including myself, has come to rely upon. Many I already knew of and had used in the past (e.g., Gimp), but so many more of them have come so far even since my last attempt at getting serious with Ubuntu, about five or six years ago, that it’s been a great pleasure to find out what is truly possible while running under Linux. All of my everyday necessities are working nicely: productivity software, Zotero (and its hooks into other apps), browsers and net utilities, graphics and audio apps, and so on. And there is great pleasure, too, in being able to get under the hood and really crank around in the file system from the command line (I remember my thrill when I first was able to score a Unix shell account at the University of Wisconsin in my freshman year, by joining up with a computer club pretty much in name only).
I also appreciate so much the politics of what Linux, specifically, and many open source projects, in general, represent: another model, an alternative way of doing things that challenges the status quo and the conventional wisdom that major projects like this can only succeed when driven by a profit motive. Mint, and other projects like it, relies on a healthy community of developers and users who engage in mutual aid and assistance, and welcome newcomers. My hat goes off, for example, to the fellow who stayed up with me into the wee hours of the morning a number of days ago, as we worked together to troubleshoot a particularly tricky dual-boot issue that challenged my knowledge and solo skillset. I have thought about this today in particular while waiting on endless hold to get a hold of someone at Microsoft in order to “unlock” the OS (Windows 8) that I already paid for, and yet can’t fully use.
With the increase in tethered devices (e.g., smart phones; tablet computers) and a philosophy of closed, proprietary computing only increasing in prevalence, my switch to Mint has brought with it a surprising feeling of freedom and of possibility – the same kind I used to have when I first ventured online in the early 1990s, and imagined what could be in the new world of information and social interaction that I discovered there. If you, like me, are feeling constrained by the artificial blocks, locks and relationships being imposed on you by your reliance on commercial OSes and all that those relationships entail – financial obligation, limitations on use, surveillance, etc. – then I urge you to give Linux Mint – or any flavor of an alternative OS – a try. Report back and let me know how it goes. I’ll be eager to hear what you have to say.
Happy new year to all!
On September 20th, I had the pleasure of traveling to the Illinois Institute of Technology to deliver the first talk in IIT’s fall series, “Defining Boundaries and Goals in the Digital Humanities.” My talk, entitled, “Digital Humanity: Foregrounding Human Traces in Technological Systems (and Why We Should Care),” was followed by a lively and engaging Q&A session with faculty, grad students and staff. In addition to discussing the current state of, and the potential for, the digital humanities to highlight and unveil human traces in digital technologies, we talked about platforms that provide the potential for humanizing digital tools and creating space for alternative perspectives in technical systems; indeed, the Raspberry Pi that I brought along was a particular hit. The abstract for the talk follows below; thanks to Dr. Marie Hicks and all those at IIT who made my visit such a treat.
Taken from the perspective of the academy’s long view, the “digital humanities” as a concept is nascent and its precise definition remains a moving target, with a variety of methods, disciplinary perspectives and approaches finding a home under its ample umbrella. Yet the fluidity around its precise meaning affords opportunities for scholars to apply the critical lens of the humanities to the study of the digital, and to ask questions about who benefits, how and why, in the context of an increasingly networked, computerized and digitally enclosed world.
In this talk, I will discuss current research in, and several practical applications of, technology that foreground the humanity in the digital and that offer and model alternatives. In some cases, these examples unveil hidden or obfuscated traces of humans within digital systems, literally and in the abstract – in labor, representations, and by other means – and the implications that such erasures engender. I will also highlight practical examples of platforms, systems and tools that endeavor to challenge the paradigms extant in many mainstream or already-instantiated technical systems. This talk is intended as an interactive dialog, with opportunity for the audience to offer their own experiences, tools and solutions for discussion and inspiration.
It is with great pleasure that I announce I am one of the recipients of the 2012 Beta Phi Mu Eugene Garfield Doctoral Dissertation Fellowship. Thank you to the Committee for this honor! GSLIS has drafted a release regarding this award, which can be found here.
As reported by Reuters and picked up in the Huffington Post, Facebook today released a confusing infographic ostensibly designed to shed light on the cryptic route that reported content takes through the company’s circuit of screening.
According to the company, content flagged as inappropriate, for any one of myriad reasons, makes its way to “…staffers in several offices around the world to handle the millions of user reports it receives every week about everything from spam to threats of violence.” Reasons cited in the infographic that may cause material to be reported include content that is sexually explicit, involves harm to self or others, depicts graphic violence, contains hate speech, and so on.
What the infographic and accompanying statement from Facebook fail to do is suggest how much content is routed through this circuit, and how much of a problem addressing problematic user-generated content (UGC) routinely tends to be. In reviewing the infographic, the lack of real information it provided about workers, the nature of this issue and the nature of the content being flagged left me thinking of the old disparaging computing phrase “security through obscurity”; the infographic offers protection to Facebook by revealing very little of import. It is obscurity through ostensible transparency.
Critically lacking, for example, is any discussion of the working conditions for the “staffers…around the world” who contend with this material as a major function of their job. Are these staffers full-time Facebook employees, afforded the status and benefits commensurate with their positions? As other reporting has already indicated, and as I have discussed in another entry on this site, Facebook indeed employs micro-work sites such as oDesk and others to conduct these moderation and review practices. The workers engaged in the digital piecework offered on micro-work sites are afforded no protections or benefits whatsoever, and do not even benefit from the ability to commiserate with other workers about the content they view as a condition of their work.
In this way, Facebook benefits from the lack of accountability that comes with introducing secondary and tertiary contracting firms into the cycle of production – a fact that is critically absent from the infographic above. Workers engaged as moderators through digital piecework sites are isolated, with few (if any) options for connecting – for emotional support as well as for labor organizing – with other workers in similar conditions, and without any real connection to the worksites of origin from which the content emanates. While the micro-work sites and the major corporations that engage them may tout the ability to draw on expertise from a global labor marketplace, in practice even the New York Times notes that these temporary work relationships result in lost payroll tax revenue for countries such as the US when labor is outsourced, and that increases in these kinds of labor pools are significant in Greece and Spain, countries devastated by economic crisis and crippling “austerity” measures. Notably absent from the NYT piece, however, is any discussion of the bargain-basement rates that drive the value of the labor down to the lowest bidder, by design. The connection between economic crisis in a region and an increase in the availability of competent labor that is exceedingly cheap must not be lost here.
Of course, one cannot reasonably expect Facebook or any other company to ease the way for workers to organize and push back against unpleasant work conditions and unfair labor arrangements; this, after all, is one of the features of outsourcing and using intermediaries to supply the labor pool in the first place, along with the lack of regulation and oversight that these arrangements also offer. In response, non-traditional organization among workers in these sectors is taking place, such as in India, where UNITES Professionals have issued a charter for IT and call center workers, and the Precarious Workers’ Brigade, whose focus is educational and cultural workers, but whose model and scope could certainly conceivably be extended to workers engaged in screening and moderation.