Research

Below are some of my research projects. Works in progress are on top and forthcoming and published materials follow, with links to PDF copies or the relevant journal page. If something is unpublished, please ask before citing.

Data Everyday: Data Literacy Practices in Division I Sports (with Tamara Clegg, Erianne Weight, and Nathan Beard)

College athletes are often forced to choose between formal education and high-level athletics, but in their sports play and training they often undertake significant informal learning using the vast array of data collected from and about them: from body-fat percentages to performance trends to speed and activity maps. The goal of this project is to better understand what data is collected from student-athletes and how they interact with it, in order to help them engage with that data and formalize their amateur data science skills. We draw on ideas from the learning sciences, sports science, organizational studies, HCI, and the sociology of work.

Our first publication is in the 2020 CHI proceedings, where we received an honorable mention.

“Data Everyday: Data Literacy Practices in a Division I Sports Context”

Data analysis is central to sports training. Today, cutting-edge digital technologies are deployed to measure and improve athletes’ performance. But too often researchers focus on the technology collecting performance data at the expense of understanding athletes’ experiences with data. This is particularly the case in the understudied context of collegiate athletics, where competition is fierce, tools for data analysis abound, and the institution actively manages athletes’ lives. By investigating how student-athletes analyze their performance data and are analyzed in turn, we can better understand the individual and institutional factors that make data literacy practices in athletics meaningful and productive—or not. Our pilot interview study of student-athletes at one Division I university reveals a set of opportunities for student-athletes to engage with and learn from data analytics practices. These opportunities come with a set of contextual tensions that should inform the design of new technologies for collegiate sports settings. PDF.

Our second publication, which Tammy Clegg led, is in the Journal of Research in Science Teaching.

“Data Everyday as Community-driven Science: Athletes’ Critical Data Literacy Practices in Collegiate Sports Contexts”

In this article, we investigate the community-driven science happening organically in elite athletics as a means of engaging a community of learners—collegiate athletes, many of whom come from underrepresented groups—in STEM. We aim to recognize the data literacy practices inherent in sports play and to explore the potential of critical data literacy practices for enabling athletes to leverage data science as a means of addressing systemic racial, equity, and justice issues inherent in sports institutions. We leverage research on critical data literacies as a lens to present case studies of three athletes at an NCAA Division I university spanning three different sports. We focus on athletes’ experiences as they engage in critical data literacy practices and the ways they welcome, adapt, resist, and critique such engagements. Our findings indicate ways in which athletes (1) readily accept data practices espoused by their coaches and sport, (2) critique and intentionally disengage from such practices, and (3) develop their own new data productions. In order to support community-driven science, our findings point to the critical role of athletics organizations in promoting athletes’ access to, engagement with, and agency over data practices on their teams. Journal page.

Our third publication, which I led, is forthcoming in Big Data & Society.

“The Visible Body and the Invisible Organization: Information Asymmetry and College Athletics Data”

Elite athletes are constantly tracked, measured, scored, and sorted to improve their performance. Privacy is sacrificed in the name of improvement. Athletes frequently do not know why particular personal data are collected or to what end. Our interview study of 23 elite US college athletes and 26 staff members reveals that their sports play is governed through information asymmetries. These asymmetries look different for different sports with different levels of investment, different racial and gender makeups, and different performance metrics. As large, data-intensive organizations with highly differentiated subgroups, university athletics are an excellent site for theory building in critical data studies, especially given that the most consequential data collected from us, with the greatest effect on our lives, is frequently a product of collective engagement with specific organizational contexts like workplaces and schools. Empirical analysis reveals two key tensions in this data regime: Athletes in high-status sports, more likely to be Black men, have less freedom to see or dispute their personal data, while athletes in general are more comfortable sharing personal data with people further away from them. We build from these findings to develop a theory of collective informational harm in bounded institutional settings such as the workplace. The quantified organization, as we term it, is concerned not with monitoring individuals but with building data collectives through processes of category creation and managerial data relations of coercion and consent. Open-access journal page.

Landlords of the Internet

Who owns the internet? It depends where you look. The physical assets at the core of the internet, the warehouses that store the cloud’s data and interlink global networks, are owned not by technology firms like Google and Facebook but by commercial real estate barons who compete with malls and property storage empires. Granted an empire by the US at the moment of the internet’s commercialization, these internet landlords shaped how the network of networks that we call the internet physically connects, and how personal and business data is stored and transmitted. Under their governance, internet exchanges, colocation facilities, and data centers take on a double life as financialized real estate assets that circle the globe even as their servers and cables are firmly rooted in place. The history of internet landlords forces a fundamental reconsideration of the business model at the base of the internet. This history makes clear that the internet was never an exogenous shock to capitalist social relations, but rather a touchstone example of an economic system increasingly ruled by asset owners like landlords.

This article appears in Social Studies of Science. Here’s a PDF.

Platforms at Work: Automated Hiring Platforms and Other New Intermediaries in the Organization of Work (with Ifeoma Ajunwa)

This article lays out a research agenda in the sociology of work for a type of data and organizational intermediary: work platforms. As an example, we employ a case study of the adoption of Automated Hiring Platforms (AHPs) in which we distinguish between promises and existing practices. We draw on two main methods to do so: Critical discourse analysis (CDA) and affordance critique. We collected and examined a mix of trade, popular press, and corporate archives; 135 texts in total. Our analysis reveals that work platforms offer five core affordances to management: 1) structured data fields optimized for capture and portability within organizations; 2) increased legibility of activity qua data captured inside and outside the workplace; 3) information asymmetry between labor and management; 4) an ‘ecosystem’ design that supports the development of limited-use applications for specific domains; and 5) the standardization of managerial techniques between workplaces. These combine to create a managerial frame for workers as fungible human capital, available on demand and easily ported between job tasks and organizations. While outlining the origin of platform studies within media and communication studies, we demonstrate the specific tools the sociology of work brings to the study of platforms within the workplace. We conclude by suggesting avenues for future sociological research not only on hiring platforms, but also on other work platforms such as those supporting automated scheduling and customer relationship management.

This article appeared in Research in the Sociology of Work. Here’s a PDF.

We also prepared some research on AHPs and online job search for the Data & Society Research Institute’s amicus brief for the US Supreme Court case Carpenter v. United States.

Against Ethics: The Context of, Rules for, and Alternatives to Ethical Design in Artificial Intelligence (with Anna Lauren Hoffmann and Luke Stark)

It seems like we can’t talk about artificial intelligence and machine learning without talking about ethics these days. Bodies convened to develop ethical principles for design sprout like weeds. Declarations on ethical behavior for developers proliferate. Calls to introduce ethics into CS curricula or corporate R&D abound. But what does ethical design mean? What obligations does it impose? How did this become the lingua franca of technological reform, instead of the libertarianism we’re used to or the abolitionist alternatives we see in social movements like Black Lives Matter? And why is it happening right now? In this paper, we use frame analysis to examine recent high-profile values statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). Guided by insights from values in design and the sociology of business ethics, we uncover the grounding assumptions and terms of debate that make some conversations about ethical design possible while forestalling alternative visions. Vision statements for ethical AI/ML co-opt the language of some critics, folding them into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work. It appeared in the Proceedings of the 52nd Hawaii International Conference on System Sciences.

“Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning”

We also published a follow-up to this article in the groundbreaking collection The Cultural Life of Machine Learning: An Incursion into Critical AI Studies, edited by Jonathan Roberge and Michael Castelle. Our chapter is entitled “Critical Perspectives on Governance Mechanisms for AI/ML Systems.” PDF.

Because Privacy: Defining and Legitimating Privacy in Mobile Development (with Katie Shilton)

For a long time, we’ve known that iOS is a more privacy-sensitive mobile platform than Android for most consumers. Figuring out why required understanding how app developers for iOS and Android, who generally don’t work for Apple or Google, learn what privacy means and how it works. So we spent months hanging out in developer forums, following conversations around design, privacy, and work practices. This resulted in two open access articles, reflecting our different interests in ethical deliberation and distributed work, respectively.

The first, with Katie as lead author, has been published in the Journal of Business Ethics:

“Linking Platforms, Practices, and Developer Ethics: Levers for Privacy Discourse in Mobile Application Development”

Privacy is a critical challenge for corporate social responsibility in the mobile device ecosystem. Mobile application firms can collect granular and largely unregulated data about their consumers, and must make ethical decisions about how and whether to collect, store, and share these data. This paper conducts a discourse analysis of mobile application developer forums to discover when and how privacy conversations, as a representative of larger ethical debates, arise during development. It finds that online forums can be useful spaces for ethical deliberations, as developers use these spaces to define, discuss, and justify their values. It also discovers that ethical discussions in mobile development are prompted by work practices which vary considerably between iOS and Android, today’s two major mobile platforms. For educators, regulators, and managers interested in encouraging more ethical discussion and deliberation in mobile development, these work practices provide a valuable point of entry. But while the triggers for privacy conversations are quite different between platforms, ultimately the justifications for privacy are similar. Developers for both platforms use moral and cautionary tales, moral evaluation, and instrumental and technical rationalization to justify and legitimize privacy as a value in mobile development. Understanding these three forms of justification for privacy is useful to educators, regulators, and managers who wish to promote ethical practices in mobile development.

And the second, with me as lead author, has been published in New Media & Society:

“Platform Privacies: Governance, Collaboration, and the Different Meanings of ‘Privacy’ in iOS and Android Development”

Mobile application design can have a tremendous impact on consumer privacy. But how do mobile developers learn what constitutes privacy? We analyze discussions about privacy on two major developer forums: one for iOS and one for Android. We find that the different platforms produce markedly different definitions of privacy. For iOS developers, Apple is a gatekeeper, controlling market access. The meaning of “privacy” shifts as developers try to interpret Apple’s policy guidance. For Android developers, Google is one data-collecting adversary among many. Privacy becomes a set of defensive features through which developers respond to a data-driven economy’s unequal distribution of power. By focusing on the development cultures arising from each platform, we highlight the power differentials inherent in “privacy by design” approaches, illustrating the role of platforms not only as intermediaries for privacy-sensitive content but also as regulators who help define what privacy is and how it works.

Discovering the Divide: Technology and Poverty in the New Economy

This article uses archival materials from the Clinton presidency to explore how the ‘digital divide’ frame was initially built. By connecting features of this frame for stratified internet access with concurrent poverty policy discourses, the ‘digital divide’ frame is revealed as a crucial piece of the emergent neoliberal consensus, positioning economic transition as a natural disaster only the digitally skilled will survive. The Clinton administration framed the digital divide as a national economic crisis and operationalized it as a deficit of human capital and the tools to bring it to market. The deficit was to be resolved through further competition in telecommunications markets. The result was a hopeful understanding of ‘access’ as the opportunity to compete in the New Economy. In the International Journal of Communication 10 (2016): 1212-1231.

The Digital Spatial Fix (with Daniel Joseph)

This article brings distinct strands of the political economy of communication and economic geography together in order to theorize the role digital technologies play in Marxian crisis theory. Capitalist advances into digital spaces do not make the law of value obsolete, but these spaces do offer new methods for displacing overaccumulated capital, increasing consumption, or accumulating new, cheaper labor. We build on David Harvey’s theory of the spatial fix to describe three digital spatial fixes, fixed capital projects that use the specific properties of digital spaces to increase the rate of profit, before themselves becoming obstacles to the addictive cycle of accumulation: the primitive accumulation of time in the social Web, the annihilation of time by space in high-frequency trading, and affect rent in virtual worlds. We conclude by reflecting on how these digital spatial fixes also fix the tempo of accumulation and adjust the time-scale of Marxian crisis theory. In TripleC 13.2 (2015): 223-247.

Drone Vision

What does the drone want? What does the drone need? Such questions, posed explicitly and implicitly by anthropomorphized drones in contemporary popular culture, may seem like distractions from more pressing political and empirical projects addressing the Global War on Terror (GWOT). But the artifacts posing these questions offer a different way of viewing contemporary surveillance and violence that helps decouple the work of drones from justifications for drone warfare, and reveals the broader technological and political network of which drones are the most immediate manifestation. This article explores ‘drone vision’, a globally distributed apparatus for finding, researching, fixing, and killing targets of the GWOT, and situates dramatizations of it within recent new materialist theoretical debates in surveillance and security studies. I model the tactic of ‘seeing like a drone’ in order to map the networks that support it. This tactic reveals a disconnect between the materials and discourses of drone vision, a disconnect I historicize within a new, imperial visual culture of war distinct from its modernist, disciplinary predecessor. I then explore two specific attempts to see like a drone: the drone art of London designer James Bridle and the Tumblr satire Texts from Drone. I conclude by returning to drone anthropomorphism as a technique for mapping the apparatus of drone vision, arguing that the drone meme arises precisely in response to these new subjects of war, as a method to call their diverse, often hidden, materials to a public accounting. In Surveillance & Society 13.2 (2015): 233-249.