Below are some of my research projects. Works in progress are on top and forthcoming and published materials follow, with links to PDF copies or the relevant journal page. If something is unpublished, please ask before citing.

Data Everyday: Data Literacy Practices in Division I Sports (with Tamara Clegg, Erianne Weight, and Nathan Beard)

College athletes are often forced to choose between formal education and high-level athletics, but in their sports play and training they often undertake significant informal learning using the vast array of data collected from and about them: from body-fat percentages to performance trends to speed and activity maps. The goal of this project is to better understand what data is collected from student-athletes and how they interact with it, in order to help them engage with that data and formalize their amateur data science skills. We draw on ideas from the learning sciences, sports science, organizational studies, HCI, and the sociology of work.

Our first publication is in the 2020 CHI proceedings, where we received an honorable mention.

“Data Everyday: Data Literacy Practices in a Division I Sports Context”

Data analysis is central to sports training. Today, cutting-edge digital technologies are deployed to measure and improve athletes’ performance. But too often researchers focus on the technology collecting performance data at the expense of understanding athletes’ experiences with data. This is particularly the case in the understudied context of collegiate athletics, where competition is fierce, tools for data analysis abound, and the institution actively manages athletes’ lives. By investigating how student-athletes analyze their performance data and are analyzed in turn, we can better understand the individual and institutional factors that make data literacy practices in athletics meaningful and productive—or not. Our pilot interview study of student-athletes at one Division I university reveals a set of opportunities for student-athletes to engage with and learn from data analytics practices. These opportunities come with a set of contextual tensions that should inform the design of new technologies for collegiate sports settings. PDF.

Landlords of the Internet

This article is currently under review. I last presented on it at the 2019 Maintainers conference in DC. The abstract and preprint are provided below.

The physical assets at the core of the internet, the warehouses that store and transmit data and interlink global networks, are owned not by technology firms like Google and Amazon, but by commercial real estate barons who compete with malls and property storage empires. While the infrastructural turn has brought data centers and transcontinental cables further into view in technology studies, the economic life of these massive projects has come under less scrutiny. Granted an empire by the US at the moment of the internet’s commercialization, these internet landlords and their particular business model shaped how the network of networks that we call the internet physically connects, and how personal and business data is stored and transmitted. Under their governance, internet exchanges, colocation facilities, and data centers take on a double life as financialized real estate assets that circle the globe even as their servers and cables are firmly rooted in place. This article relates the history of the landlords of the internet, showing how a fundamental reconsideration of the business model at the heart of the internet changes how we understand internet governance and contemporary capitalism writ large. Preprint PDF.

Platforms at Work: Automated Hiring Platforms and Other New Intermediaries in the Organization of Work (with Ifeoma Ajunwa)

This article lays out a research agenda in the sociology of work for a type of data and organizational intermediary: work platforms. As an example, we employ a case study of the adoption of Automated Hiring Platforms (AHPs) in which we distinguish between promises and existing practices. We draw on two main methods to do so: Critical discourse analysis (CDA) and affordance critique. We collected and examined a mix of trade, popular press, and corporate archives; 135 texts in total. Our analysis reveals that work platforms offer five core affordances to management: 1) structured data fields optimized for capture and portability within organizations; 2) increased legibility of activity qua data captured inside and outside the workplace; 3) information asymmetry between labor and management; 4) an ‘ecosystem’ design that supports the development of limited-use applications for specific domains; and 5) the standardization of managerial techniques between workplaces. These combine to create a managerial frame for workers as fungible human capital, available on demand and easily ported between job tasks and organizations. While outlining the origin of platform studies within media and communication studies, we demonstrate the specific tools the sociology of work brings to the study of platforms within the workplace. We conclude by suggesting avenues for future sociological research not only on hiring platforms, but also on other work platforms such as those supporting automated scheduling and customer relationship management.

This article appeared in Research in the Sociology of Work. Here’s a PDF.

We also prepared some research on AHPs and online job search for the Data & Society Research Institute’s amicus brief for the US Supreme Court case Carpenter v. United States.

Against Ethics: The Context of, Rules for, and Alternatives to Ethical Design in Artificial Intelligence (with Anna Lauren Hoffmann and Luke Stark)

It seems like we can’t talk about artificial intelligence and machine learning without talking about ethics these days. Bodies convened to develop ethical principles for design sprout like weeds. Declarations on ethical behavior for developers proliferate. Calls to introduce ethics into CS curricula or corporate R&D abound. But what does ethical design mean? What obligations does it impose? How did this become the lingua franca of technological reform, instead of the libertarianism we’re used to or the abolitionist alternatives we see in social movements like Black Lives Matter? And why is it happening right now? In this paper with Anna Lauren Hoffmann and Luke Stark, we use frame analysis to examine recent high-profile values statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). Guided by insights from values in design and the sociology of business ethics, we uncover the grounding assumptions and terms of debate that make some conversations about ethical design possible while forestalling alternative visions. Vision statements for ethical AI/ML co-opt the language of some critics, folding them into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work. It appeared in the Proceedings of the 52nd Hawaii International Conference on System Sciences.

“Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning”

We also published a follow-up to this article in the groundbreaking collection The Cultural Life of Machine Learning: An Incursion into Critical AI Studies, edited by Jonathan Roberge and Michael Castelle. Our chapter is entitled “Critical Perspectives on Governance Mechanisms for AI/ML Systems.” PDF.

Because Privacy: Defining and Legitimating Privacy in Mobile Development (with Katie Shilton)

For a long time, we’ve known that iOS is a more privacy-sensitive mobile platform than Android for most consumers. Figuring out why required understanding how app developers for iOS and Android, who generally don’t work for Apple or Google, learn what privacy means and how it works. So we spent months hanging out in developer forums, following conversations around design, privacy, and work practices. This resulted in two open access articles, reflecting our different interests in ethical deliberation and distributed work, respectively.

The first, with Katie as lead author, has been published in the Journal of Business Ethics:

“Linking Platforms, Practices, and Developer Ethics: Levers for Privacy Discourse in Mobile Application Development”

Privacy is a critical challenge for corporate social responsibility in the mobile device ecosystem. Mobile application firms can collect granular and largely unregulated data about their consumers, and must make ethical decisions about how and whether to collect, store, and share these data. This paper conducts a discourse analysis of mobile application developer forums to discover when and how privacy conversations, as a representative of larger ethical debates, arise during development. It finds that online forums can be useful spaces for ethical deliberations, as developers use these spaces to define, discuss, and justify their values. It also discovers that ethical discussions in mobile development are prompted by work practices which vary considerably between iOS and Android, today’s two major mobile platforms. For educators, regulators, and managers interested in encouraging more ethical discussion and deliberation in mobile development, these work practices provide a valuable point of entry. But while the triggers for privacy conversations are quite different between platforms, ultimately the justifications for privacy are similar. Developers for both platforms use moral and cautionary tales, moral evaluation, and instrumental and technical rationalization to justify and legitimize privacy as a value in mobile development. Understanding these three forms of justification for privacy is useful to educators, regulators, and managers who wish to promote ethical practices in mobile development.

And the second, with me as lead author, has been published in New Media & Society:

“Platform Privacies: Governance, Collaboration, and the Different Meanings of ‘Privacy’ in iOS and Android Development”

Mobile application design can have a tremendous impact on consumer privacy. But how do mobile developers learn what constitutes privacy? We analyze discussions about privacy on two major developer forums: one for iOS and one for Android. We find that the different platforms produce markedly different definitions of privacy. For iOS developers, Apple is a gatekeeper, controlling market access. The meaning of “privacy” shifts as developers try to interpret Apple’s policy guidance. For Android developers, Google is one data-collecting adversary among many. Privacy becomes a set of defensive features through which developers respond to a data-driven economy’s unequal distribution of power. By focusing on the development cultures arising from each platform, we highlight the power differentials inherent in “privacy by design” approaches, illustrating the role of platforms not only as intermediaries for privacy-sensitive content but also as regulators who help define what privacy is and how it works.

Discovering the Divide: Technology and Poverty in the New Economy

This article uses archival materials from the Clinton presidency to explore how the ‘digital divide’ frame was initially built. By connecting features of this frame for stratified internet access with concurrent poverty policy discourses, the ‘digital divide’ frame is revealed as a crucial piece of the emergent neoliberal consensus, positioning economic transition as a natural disaster only the digitally skilled will survive. The Clinton administration framed the digital divide as a national economic crisis and operationalized it as a deficit of human capital and the tools to bring it to market. The deficit was to be resolved through further competition in telecommunications markets. The result was a hopeful understanding of ‘access’ as the opportunity to compete in the New Economy. In the International Journal of Communication 10 (2016): 1212-1231.

Not Bugs, But Features: Towards a Political Economy of Access

This short chapter on the future of research into stratified access to the internet and the skills to use it, the ‘digital divide’ research program, was written in response to a call from the Partnership for Progress on the Digital Divide’s 2014 Twenty Years of the Digital Divide symposium, at the International Communication Association Annual Conference in Seattle. It is forthcoming in an ebook of the same title, where authors stake out different positions on the future of the digital divide and research into it. I argue that digital divide scholarship has missed an opportunity to lead the conversation on inequality in the information economy by focusing on bugs in contemporary capitalism rather than features of technological change driving stratification. A research program centered on ever more carefully refined measures and spectra of who has which skills or tools and what rewards they receive from them at best gives tacit approval to the pernicious myth of a skills gap. At worst, it acts as an institutional cargo cult: assuring ourselves that the good life will emerge if the symbols of it (i.e., ICT and related skills) are present. I argue that the field should instead shift its focus from informational poverty to informational inequality by developing a ‘political economy of access’ that focuses not on degrees of poverty but on its production in relationship to wealth, not on gaps but on the power to make them. Three potential research areas for this program are offered: the over- or under-valuing of certain technical skills in certain geographies and labor markets (i.e., who is invested in the ‘skills gap’ story and why); the redefinition, movement, or erosion of ‘good jobs’ through information technology; and the design of online job applications as a screening process for enterprises, and as a black-boxed filter and a digital poll tax for applicants. PDF.

The Digital Spatial Fix (with Daniel Joseph)

This article brings distinct strands of the political economy of communication and economic geography together in order to theorize the role digital technologies play in Marxian crisis theory. Capitalist advances into digital spaces do not make the law of value obsolete, but these spaces do offer new methods for displacing overaccumulated capital, increasing consumption, or accumulating new, cheaper labor. We build on David Harvey’s theory of the spatial fix to describe three digital spatial fixes, fixed capital projects that use the specific properties of digital spaces to increase the rate of profit, before themselves becoming obstacles to the addictive cycle of accumulation: the primitive accumulation of time in the social Web, the annihilation of time by space in high-frequency trading, and affect rent in virtual worlds. We conclude by reflecting on how these digital spatial fixes also fix the tempo of accumulation and adjust the time-scale of Marxian crisis theory. In TripleC 13.2 (2015): 223-247.

Drone Vision

What does the drone want? What does the drone need? Such questions, posed explicitly and implicitly by anthropomorphized drones in contemporary popular culture, may seem like distractions from more pressing political and empirical projects addressing the Global War on Terror (GWOT). But the artifacts posing these questions offer a different way of viewing contemporary surveillance and violence that helps decouple the work of drones from justifications for drone warfare, and reveals the broader technological and political network of which drones are the most immediate manifestation. This article explores ‘drone vision’, a globally distributed apparatus for finding, researching, fixing and killing targets of the GWOT, and situates dramatizations of it within recent new materialist theoretical debates in surveillance and security studies. I model the tactic of ‘seeing like a drone’ in order to map the networks that support it. This tactic reveals a disconnect between the materials and discourses of drone vision, a disconnect I historicize within a new, imperial visual culture of war distinct from its modernist, disciplinary predecessor. I then explore two specific attempts to see like a drone: The drone art of London designer James Bridle and the Tumblr satire Texts from Drone. I conclude by returning to drone anthropomorphism as a technique for mapping the apparatus of drone vision, arguing that the drone meme arises precisely in response to these new subjects of war, as a method to call their diverse, often hidden, materials to a public accounting. In Surveillance & Society 13.2 (2015): 233-249.