Below are some of the research projects I’ve been working on lately, with their abstracts. Works in progress appear first; forthcoming and published materials follow, with links to PDF copies or the relevant journal page. If something is unpublished, please ask before citing.

The Promise of Access: Hope and Inequality in the Information Economy

In 2013, a series of posters began appearing in Washington, DC’s Metro system. Each declared “The internet: Your future depends on it” next to a photo of a middle-aged black Washingtonian and an advertisement for the municipal government’s digital training resources. This hopeful story is so familiar as to be common sense. Behind it lies the threat of a ‘digital divide’ between the high-skilled knowledge workers of the future and the low-skilled service workers condemned to the past.

But why do we keep trying to solve poverty with technology? And how did we learn that we need to learn to code—or else? In The Promise of Access, I show that the problem of poverty became a problem of technology in order to manage the contradictions of a changing economy. This political common sense is embraced and circulated by the organizations that fight poverty, themselves threatened by crises of austerity and legitimacy.

This book explains where the idea of the digital divide comes from and why, despite decades of research and advocacy, it is so hard to get rid of the idea—which I call the access doctrine—that poverty can be solved with the right tools and the right skills. The neoliberal political revolution replaced labor market interventions with skills training opportunities—and carceral threats—and in this context the persistent poverty of the information economy was refigured as a digital divide between those with internet access and the skills to make something of it, and those without. The book explores these ideas ethnographically in different but connected parts of Washington, DC, showing how the access doctrine, despite all evidence against it, persists because it provides the organizations that fight poverty with new clarity, new resources, and new legitimacy. First, I show how startups and their workers embrace uncertainty and their ability to survive it as an organizational and sectoral identity. I then show how the public libraries and charter schools that manage the problem of poverty start to look more like startups as they pursue technology transfer and skills training missions. These new missions often alienate these organizations from the very people they try to serve. Drawing connections between these different field sites, I uncover the organizational pressures that lead these organizations to embrace the access doctrine. Finally, I trace the political alliances that can be created within schools and libraries, alliances that can change the political terms on which these cities operate and create new imaginaries for our economy and our technologies.

Data Everyday: Data Literacy Practices in Division I Sports

This project is an ongoing collaboration with my iSchool colleague Tamara Clegg, UNC sports scientist Erianne Weight, graduate student Nathan Beard, and undergraduate student Jasmine Brunson. College athletes are often forced to choose between formal education and high-level athletics, but in their sports play and training they often undertake significant informal learning using the vast array of data collected from and about them: from body-fat percentages to performance trends to speed and activity maps. Our goal is to better understand what data is collected from student-athletes and how they interact with it, in order to help them engage with that data and formalize their amateur data science skills. This project brings in ideas from the learning sciences, sports science, organizational studies, HCI, and the sociology of work.

Our first publication is in the 2020 CHI proceedings, where we received an honorable mention.

“Data Everyday: Data Literacy Practices in a Division I Sports Context”

Data analysis is central to sports training. Today, cutting-edge digital technologies are deployed to measure and improve athletes’ performance. But too often researchers focus on the technology collecting performance data at the expense of understanding athletes’ experiences with data. This is particularly the case in the understudied context of collegiate athletics, where competition is fierce, tools for data analysis abound, and the institution actively manages athletes’ lives. By investigating how student-athletes analyze their performance data and are analyzed in turn, we can better understand the individual and institutional factors that make data literacy practices in athletics meaningful and productive—or not. Our pilot interview study of student-athletes at one Division I university reveals a set of opportunities for student-athletes to engage with and learn from data analytics practices. These opportunities come with a set of contextual tensions that should inform the design of new technologies for collegiate sports settings. PDF.

Landlords of the Internet

This article is in its early stages. I presented on it at the 2019 Maintainers conference in DC. Here is the abstract:

At the physical core of the newest sectors of the economy is one of the oldest: the noble landlord. This paper maps the real estate market for globe-spanning, critical internet infrastructure: Tier 1 backbone, Internet Exchange Points, carrier hotels, and data centers. Particularly in the US, the firms building and managing this core infrastructure are not organized as traditional ‘tech’ companies, much less as utilities or nonprofits. Rather, they are real estate investment trusts (REITs) backed by private equity and in search of the ideal locations on which they can grow their facilities and solicit business from major purveyors of internet traffic (e.g., telecommunications) and major content providers (e.g., the search and social media giants, but also large firms in finance and health). They see their mission–as a recent IRS ruling confirmed–as leasing a particular kind of space to a very specific class of tenants. Their competition is not Microsoft or Facebook, but other REITs that specialize in other sorts of spaces–whether that’s storage units or malls. Following the privatization of the NSFNET backbone, the firms I call ‘internet landlords’ cemented a new model of rentiership into the most important physical infrastructure of the information economy. These enigmatic figures often started their careers in commercial real estate (e.g., malls, offices), and took that business model onto the internet. Today, their renting decisions undergird the transmission, internetworking, and storage of everything from Netflix binges to medical records. In both urban and rural settings, this dynamic reshapes the physical and economic landscape of the industrial era, and traps internet infrastructure in feedback loops that jeopardize secure communications, universal service, and safe neighborhoods.

Platforms at Work: Automated Hiring Platforms and Other New Intermediaries in the Organization of Work (with Ifeoma Ajunwa)

This article lays out a research agenda in the sociology of work for a type of data and organizational intermediary: work platforms. As an example, we employ a case study of the adoption of Automated Hiring Platforms (AHPs) in which we distinguish between promises and existing practices. We draw on two main methods to do so: Critical discourse analysis (CDA) and affordance critique. We collected and examined a mix of trade, popular press, and corporate archives; 135 texts in total. Our analysis reveals that work platforms offer five core affordances to management: 1) structured data fields optimized for capture and portability within organizations; 2) increased legibility of activity qua data captured inside and outside the workplace; 3) information asymmetry between labor and management; 4) an ‘ecosystem’ design that supports the development of limited-use applications for specific domains; and 5) the standardization of managerial techniques between workplaces. These combine to create a managerial frame for workers as fungible human capital, available on demand and easily ported between job tasks and organizations. While outlining the origin of platform studies within media and communication studies, we demonstrate the specific tools the sociology of work brings to the study of platforms within the workplace. We conclude by suggesting avenues for future sociological research not only on hiring platforms, but also on other work platforms such as those supporting automated scheduling and customer relationship management.

This article appeared in Research in the Sociology of Work. Here’s a PDF.

We also prepared some research on AHPs and online job search for the Data & Society Research Institute’s amicus brief for the US Supreme Court case Carpenter v United States.

Against Ethics: The Context of, Rules for, and Alternatives to Ethical Design in Artificial Intelligence (with Anna Lauren Hoffmann and Luke Stark)

It seems like we can’t talk about artificial intelligence and machine learning without talking about ethics these days. Bodies convened to develop ethical principles for design sprout like weeds. Declarations on ethical behavior for developers proliferate. Calls to introduce ethics into CS curricula or corporate R&D abound. But what does ethical design mean? What obligations does it impose? How did this become the lingua franca of technological reform, instead of the libertarianism we’re used to or the abolitionist alternatives we see in social movements like Black Lives Matter? And why is it happening right now? In this paper with Anna Lauren Hoffmann and Luke Stark, we use frame analysis to examine recent high-profile values statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). Guided by insights from values in design and the sociology of business ethics, we uncover the grounding assumptions and terms of debate that make some conversations about ethical design possible while forestalling alternative visions. Vision statements for ethical AI/ML co-opt the language of some critics, folding them into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work. It appeared in the Proceedings of the 52nd Hawaii International Conference on System Sciences.

“Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning”

Because Privacy: Defining and Legitimating Privacy in Mobile Development (with Katie Shilton)

For a long time, we’ve known that iOS is a more privacy-sensitive mobile platform than Android for most consumers. Figuring out why required understanding how app developers for iOS and Android, who generally don’t work for Apple or Google, learn what privacy means and how it works. So we spent months hanging out in developer forums, following conversations around design, privacy, and work practices. This resulted in two open access articles, reflecting our different interests in ethical deliberation and distributed work, respectively.

The first, with Katie as lead author, has been published in the Journal of Business Ethics:

“Linking Platforms, Practices, and Developer Ethics: Levers for Privacy Discourse in Mobile Application Development”

Privacy is a critical challenge for corporate social responsibility in the mobile device ecosystem. Mobile application firms can collect granular and largely unregulated data about their consumers, and must make ethical decisions about how and whether to collect, store, and share these data. This paper conducts a discourse analysis of mobile application developer forums to discover when and how privacy conversations, as a representative of larger ethical debates, arise during development. It finds that online forums can be useful spaces for ethical deliberations, as developers use these spaces to define, discuss, and justify their values. It also discovers that ethical discussions in mobile development are prompted by work practices which vary considerably between iOS and Android, today’s two major mobile platforms. For educators, regulators, and managers interested in encouraging more ethical discussion and deliberation in mobile development, these work practices provide a valuable point of entry. But while the triggers for privacy conversations are quite different between platforms, ultimately the justifications for privacy are similar. Developers for both platforms use moral and cautionary tales, moral evaluation, and instrumental and technical rationalization to justify and legitimize privacy as a value in mobile development. Understanding these three forms of justification for privacy is useful to educators, regulators, and managers who wish to promote ethical practices in mobile development.

And the second, with me as lead author, has been published in New Media & Society:

“Platform Privacies: Governance, Collaboration, and the Different Meanings of ‘Privacy’ in iOS and Android Development”

Mobile application design can have a tremendous impact on consumer privacy. But how do mobile developers learn what constitutes privacy? We analyze discussions about privacy on two major developer forums: one for iOS and one for Android. We find that the different platforms produce markedly different definitions of privacy. For iOS developers, Apple is a gatekeeper, controlling market access. The meaning of “privacy” shifts as developers try to interpret Apple’s policy guidance. For Android developers, Google is one data-collecting adversary among many. Privacy becomes a set of defensive features through which developers respond to a data-driven economy’s unequal distribution of power. By focusing on the development cultures arising from each platform, we highlight the power differentials inherent in “privacy by design” approaches, illustrating the role of platforms not only as intermediaries for privacy-sensitive content but also as regulators who help define what privacy is and how it works.

Discovering the Divide: Technology and Poverty in the New Economy

This article uses archival materials from the Clinton presidency to explore how the ‘digital divide’ frame was initially built. By connecting features of this frame for stratified internet access with concurrent poverty policy discourses, the ‘digital divide’ frame is revealed as a crucial piece of the emergent neoliberal consensus, positioning economic transition as a natural disaster only the digitally skilled will survive. The Clinton administration framed the digital divide as a national economic crisis and operationalized it as a deficit of human capital and the tools to bring it to market. The deficit was to be resolved through further competition in telecommunications markets. The result was a hopeful understanding of ‘access’ as the opportunity to compete in the New Economy. In the International Journal of Communication 10 (2016): 1212-1231.

Not Bugs, But Features: Towards a Political Economy of Access

This short chapter on the future of research into stratified access to the internet and the skills to use it, the ‘digital divide’ research program, was written in response to a call from the Partnership for Progress on the Digital Divide’s 2014 Twenty Years of the Digital Divide symposium, at the International Communication Association Annual Conference in Seattle. It is forthcoming in an ebook of the same title, where authors stake out different positions on the future of the digital divide and research into it. I argue that digital divide scholarship has missed an opportunity to lead the conversation on inequality in the information economy by focusing on bugs in contemporary capitalism rather than features of technological change driving stratification. A research program centered on ever more carefully refined measures and spectra of who has which skills or tools and what rewards they receive from them at best gives tacit approval to the pernicious myth of a skills gap. At worst, it acts as an institutional cargo cult: Assuring ourselves that the good life will emerge if the symbols of it (i.e., ICT and related skills) are present. I argue that the field should instead shift its focus from informational poverty to informational inequality by developing a ‘political economy of access’ that focuses not on degrees of poverty but on its production in relationship to wealth, not gaps but the power to make them. Three potential research areas for this program are offered: the over- or under-valuing of certain technical skills in certain geographies and labor markets (i.e., who is invested in the ‘skills gap’ story and why); the redefinition, movement, or erosion of ‘good jobs’ through information technology; and the design of online job applications as a screening process for enterprises, and as a black-boxed filter and a digital poll tax for applicants. PDF

The Digital Spatial Fix (with Daniel Joseph)

This article brings distinct strands of the political economy of communication and economic geography together in order to theorize the role digital technologies play in Marxian crisis theory. Capitalist advances into digital spaces do not make the law of value obsolete, but these spaces do offer new methods for displacing overaccumulated capital, increasing consumption, or accumulating new, cheaper labor. We build on David Harvey’s theory of the spatial fix to describe three digital spatial fixes, fixed capital projects that use the specific properties of digital spaces to increase the rate of profit, before themselves becoming obstacles to the cycle of accumulation: the primitive accumulation of time in the social Web, the annihilation of time by space in high-frequency trading, and affect rent in virtual worlds. We conclude by reflecting on how these digital spatial fixes also fix the tempo of accumulation and adjust the time-scale of Marxian crisis theory. In TripleC 13.2 (2015): 223-247.

Drone Vision

What does the drone want? What does the drone need? Such questions, posed explicitly and implicitly by anthropomorphized drones in contemporary popular culture, may seem like distractions from more pressing political and empirical projects addressing the Global War on Terror (GWOT). But the artifacts posing these questions offer a different way of viewing contemporary surveillance and violence that helps decouple the work of drones from justifications for drone warfare, and reveals the broader technological and political network of which drones are the most immediate manifestation. This article explores ‘drone vision’, a globally distributed apparatus for finding, researching, fixing and killing targets of the GWOT, and situates dramatizations of it within recent new materialist theoretical debates in surveillance and security studies. I model the tactic of ‘seeing like a drone’ in order to map the networks that support it. This tactic reveals a disconnect between the materials and discourses of drone vision, a disconnect I historicize within a new, imperial visual culture of war distinct from its modernist, disciplinary predecessor. I then explore two specific attempts to see like a drone: The drone art of London designer James Bridle and the Tumblr satire Texts from Drone. I conclude by returning to drone anthropomorphism as a technique for mapping the apparatus of drone vision, arguing that the drone meme arises precisely in response to these new subjects of war, as a method to call their diverse, often hidden, materials to a public accounting. In Surveillance & Society 13.2 (2015): 233-249.