Below are some of the research projects I’ve been working on lately, along with their abstracts. Works in progress are on top (some include a draft manuscript); forthcoming and published materials follow, with links to PDF copies or the relevant journal page. If something is unpublished, please ask before citing.

The Promise of Access: Hope and Inequality in the Information Economy

In 2013, a series of posters began appearing in Washington, DC’s Metro system. Each declared “The internet: Your future depends on it” next to a photo of a middle-aged black Washingtonian and an advertisement for the municipal government’s digital training resources. This hopeful story is familiar, as is the threat behind it: a ‘digital divide’ between the high-skilled knowledge workers of the future and the low-skilled service workers condemned to the past. For at least two decades, we’ve been warned that our livelihoods depend on getting the right digital tools and digital skills, that we must learn to code, or else risk being left behind. But problems arise when we ask what the jobs of the future actually are and what skills they demand. We know that projected job growth is concentrated in low-wage service industries that require few coders. And common sense tells us that changing a jobseeker’s skills or tools can’t alter the structural conditions dictating how many jobs are available, where they’re available, or how well they pay. But still the message persists, remaining a powerful, hopeful driver of policy and institutional reform. This book project explores where that hopeful message comes from, why it remains so powerful, and how “The internet: Your future depends on it” transforms the problem of poverty into a problem of technology.

The Promise of Access, under contract with the MIT Press, investigates the deployment of information technology and the stories we tell about it in different institutions that are trying to solve the problem of persistent poverty in the information economy. The Clinton administration’s ‘digital divide’ policy program popularized this hopeful discourse about personal computing powering social mobility, positioned internet startups as the ‘right’ side of the divide, and charged institutions of social reproduction such as schools and libraries with closing the gap and upgrading themselves in the image of internet startups. After introducing the development regime that builds this idea into the urban landscape through what I call the ‘political economy of hope’, and tracing the origin of the digital divide frame, I draw on seventy interviews and three years of comparative ethnographic fieldwork in startups, schools, and libraries to explore how this hope is reproduced in daily life. I situate my fieldwork within political-economic analyses of a city amid a tech boom to explore how digital divide thinking fits into an era of skyrocketing inequality. I show how public institutions facing crises of austerity (there’s not enough money to fund everything a school must do) or crises of legitimacy (who needs a library when everything’s online?) end up embracing the idea that poverty can be overcome with the right upgrades. Such techno-cultural reforms help organizations defend their mission, secure new resources, and make overwhelming social problems more manageable. I trace the movement of ideas, technology, and people between these organizations to show how this embrace of digital divide thinking in schools and libraries helps them survive.

But prioritizing the entrepreneurs of the future risks leaving today’s public behind. As public institutions start looking more like startups, they move further away from their public service missions. By combining fine-grained ethnographic detail with comparative institutional analysis and political-economic scope, we see how schools and libraries, starved of resources and support and overwhelmed by problems they are not equipped to handle, turn to digital divide thinking to shore up their foundations and manage crises of urban poverty. Schools and libraries garner not only new technologies—overdue renovations, laptops for each student—but new staff and new curricula meant to help the people they serve enter the ranks of the office-dwelling middle class. But a conflict soon arises. Because the survival of the institution depends on digital divide thinking, it becomes imperative to protect that mission and its tools at all costs. So homeless library patrons are kept away from the technologies and spaces meant to empower them, and students are forced onto a narrow path of professional success and sanctioned for deviating from it. The system is broken, but it is also working just fine.

Landlords of the Internet

This article is in its early stages. I presented it at the 2017 American Association of Geographers annual meeting in Boston, alongside colleagues addressing the theme of Real Estate Technologies. Here is the extended abstract:

CEO Jeff Markley is an internet landlord. His 1 Summer Street complex, a nondescript ten-story building in downtown Boston, is one of the largest private consumers of electricity in Massachusetts. With Macy’s on the ground floor, you’d be forgiven for mistaking this for a shopping mall or another cubicle farm. But Macy’s is only one tenant, left over from when The Markley Group purchased the building in the 1990s. The main tenants are upstairs, in the million square feet of server space and fiber-optic networks: Logan Airport, Comcast, CenturyLink, Netflix, Boston Medical Center, Harvard, the Broad Institute, and many more. Markley has the right to put whatever garish sign he’d like on the building’s façade, but he’s always declined: his customers pay for security and reliability, and publicity doesn’t help. Besides, owning the piece of New England where internet service providers and content delivery networks physically interconnect with each other and the transcontinental cables that make up the backbone of the internet means that Markley is, if not the only game in town, certainly the safest bet in town for high-dollar, mission-critical digital real estate. Laser sensors ring the server rooms to detect individual smoke molecules. Thumbprint scanners protect every door above the first floor. Eight massive diesel generators sit on the roof, waiting to be engaged in an emergency.

Before he started providing internet infrastructure, Markley was in commercial real estate. And really, he still is. He is one of the internet’s landlords. This presentation maps these enigmatic figures and their business model. Drawing on Marxist theories of rent—particularly the work of Anne Haila—I build a political economy of the rentiership that (literally) undergirds the transmission, internetworking, and cloud storage of everything from Netflix binges to electronic medical records. Comparing the development of US internet landlords in urban, exurban, and rural settings, I describe how changes in the market for digital real estate reshape the physical geographies in which these massive fixed capital projects emerge: using immense amounts of water and electricity, drawing other high-tech firms nearby, constructing imposing, high-security fortresses, and re-purposing productive (e.g., mills, railroads) or consumptive (e.g., malls) spaces of the industrial era for the transmission and storage of digital information.

In the US—ownership models differ elsewhere—I identify internet landlords as private firms managing critical internet infrastructure as commercial real estate, specifically Tier 1 backbone, Internet Exchange Points, carrier hotels, and data centers (which often overlap on the same site). These include The Markley Group but also major competitors with a more global footprint, such as Equinix, Digital Realty, and CoreSite. Following the full privatization of the NSFNET backbone in 1995, these firms built a new model of rentiership on top of the most important physical infrastructure of the information economy, with major implications for the environment, internet governance, finance, and consumer privacy. Internet landlords may appear to be a fundamentally new development, but these firms respond to many of the same pressures as traditional real estate operators: the scarcity of land and its relative productivity, convenient (or dangerous) features of the natural and built environment, location relative to state and corporate power, and more. They are also paradigmatic examples of the contemporary tendency to treat land and property as pure financial assets, with many of the largest players organized as Real Estate Investment Trusts. The market dynamics at the core of this sector run orthogonal to both public needs for secure communications (because oligopoly begets a race to the bottom) and public needs for safe, open cities (because of extreme demands for power and the ability to outbid those who might have other uses for land). At the physical core of the newest sectors of the economy is thus one of the oldest: The noble landlord.

What's in the Black Box?: Automated Hiring and Labor Market Stratification (co-authored with Ifeoma Ajunwa)

This project is in its early stages. It is a collaboration with management scholar Ifeoma Ajunwa that will result in a series of articles on a mundane technology that mediates the life outcomes of millions: The online job application. We’re approaching this collaboration as a way to map out some of the conceptual and historical concerns that will be addressed from different angles in our subsequent book projects, and as a way to answer some questions that have been bugging both of us, as scholars and advocates with backgrounds in re-entry: What do employers get out of online job applications that they didn’t get out of paper applications, and how do these technologies affect power relations in the labor market? Since these technologies do much more than just take down your work history, we refer to them as ‘automated hiring platforms’ or AHPs.

We gave a plenary talk on AHPs at WORK2017, the biennial work and organization studies conference in Turku, Finland. Here’s our abstract:

Private firms began to embrace automated hiring platforms (AHPs) in the late 1980s. Since then, corporations have increasingly sought to automate the process of soliciting, analyzing, screening, and even interviewing jobseekers. Among other things, automated hiring platforms promise to minimize cost-per-hire and reduce turnover. AHPs are remotely controlled and frequently eliminate local managerial discretion. These capacities offer the private firm the possibility for the centralization and homogenization of hiring decisions, thus enabling the creation of a more uniform corporate culture for higher efficiency gains.

The rise of AHPs, which analyze existing employee data to predict future hiring needs, also suggested to private firms the opportunity to streamline the workforce by creating labor pools of applicants who may only be called upon as needed. Thus, the introduction of AHPs into the labor market may be read as the beginning of the march toward the gig or on-demand labor market. In fact, exploring the history of automated hiring and the relationships between vendors, employers, and jobseekers reveals a corporate push for on-demand labor that is far broader and longer-lived than the contemporary, app-driven gig economy. An examination of the history of automated hiring platforms represents an opportunity to explore the ur-technology that ushered in such apps as TaskRabbit, Instacart, etc.

This paper explores the design history of online job applications and the infrastructure for automated hiring, particularly for hourly workers in the US, in order to map out how these technological intermediaries reconfigure the hiring process and the power relations between hirers and jobseekers. First, we examine the discrepancies between the promises and the reality of AHPs by focusing on one major actor in the automation of hiring and the design of automated hiring platforms: Unicru. Founded as Decision Point Systems in 1987, Unicru was acquired by workforce analytics giant Kronos in 2006. Drawing on a diverse set of archives that include financial disclosures, court cases, mainstream and trade press coverage, instruction manuals, and policy guidance from human resource professionals, we reveal important tensions and contradictions in the discourse around how AHPs are marketed and their true functions. For example, for most of the systems active in the AHP ecosystem, it is not necessarily hiring that is being automated; rather, it is the swift rejection of job applicants perceived as risky. Thus, the AHP is first and foremost a culling mechanism: a sieve that sifts for malleable and dependable labor while discarding those considered not a “fit” for the job task or for the corporate culture based on quantifiable skills and attributes. While humans remain in the loop to make the final hiring decisions, the pool of who is afforded the opportunity for an in-person interview is pre-selected. Applicants conduct the initial screening themselves, responding to various assessments for skill, work history, and personality—with the correct answers set by the AHP on the hirer’s behalf.
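The culling logic described above can be sketched in a few lines. This is a purely illustrative toy, not code from Unicru or any actual AHP; the assessment questions, answer key, and cutoff score are all hypothetical:

```python
# Illustrative sketch of an AHP "sieve": score an applicant's assessment
# answers against a hirer-defined key and cull anyone below a cutoff.
# Questions, key, and cutoff are hypothetical, not from any real platform.

def score_applicant(answers, answer_key, cutoff=0.7):
    """Return the applicant's score and whether they pass the sieve."""
    matched = sum(1 for q, a in answers.items() if answer_key.get(q) == a)
    score = matched / len(answer_key)
    # Applicants below the cutoff are rejected before any human
    # ever reviews the application.
    return score, score >= cutoff

# The hirer sets the "correct" answers, including for personality items.
answer_key = {"weekend_shifts": "yes",
              "prior_terminations": "no",
              "personality_q1": "agree"}

applicant = {"weekend_shifts": "yes",
             "prior_terminations": "no",
             "personality_q1": "disagree"}

score, passes = score_applicant(applicant, answer_key)
```

Here the applicant matches two of three hirer-defined answers and falls below the cutoff, so no human ever sees the application; the “correct” personality answer, like the cutoff itself, is invisible to the applicant.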

We also prepared some research on AHPs and online job search for the Data & Society Research Institute’s amicus brief for the US Supreme Court case Carpenter v. United States. This included an early draft of our first paper, which you can read here.

How AI Sees Crime

I am collaborating with data scientist and computer vision researcher Genevieve Patterson on a popular piece about accountability mechanisms for artificial intelligence used in public welfare systems like health, urban planning, and policing. We focus on policing because, outside of advertising and credit reporting, it seems to be the place where the general public is currently most affected by artificial intelligence systems, and the one putatively under democratic control. Specifically, we explore how dominant body camera manufacturer Axon (née Taser) plans to automate the work of writing police reports, and thus build the archive of police activity, through the use of machine learning systems trained on body camera footage. We explain why, technically, their plans to offer these AI services within the next year are almost certain to fail, being far beyond the state of the art, and what the nature and consequence of those failures will be. As we show, the recent AI renaissance has been built on public, collaborative efforts to improve these systems through debugging and unit testing. Axon’s AI solution, the precise means by which they classify body camera footage and write police reports, is a trade secret and so will not be subject to the sort of professional scrutiny that has powered recent AI advances. Drawing on recent journalism from outfits like ProPublica and recent research from the Fairness, Accountability, and Transparency in Machine Learning community, we show how we could potentially debug and unit test police surveillance technologies and give control of public AI back to the community that is paying for it and submitting its data to it.

Because Privacy: Defining and Legitimating Privacy in Mobile Development (co-authored with Katie Shilton)

This is a project I began with Katie at the EViD Lab and carried on into my time as a postdoc. We wanted to know how app developers for iOS and Android, who generally don’t work for Apple or Google, learn what privacy means and how it works. So we spent months hanging out in developer forums, following conversations around design, privacy, and work practices. This resulted in two open access articles, reflecting our different interests in ethical deliberation and distributed work, respectively.

The first, with Katie as lead author, has been published in the Journal of Business Ethics:

“Linking Platforms, Practices, and Developer Ethics: Levers for Privacy Discourse in Mobile Application Development”

Privacy is a critical challenge for corporate social responsibility in the mobile device ecosystem. Mobile application firms can collect granular and largely unregulated data about their consumers, and must make ethical decisions about how and whether to collect, store, and share these data. This paper conducts a discourse analysis of mobile application developer forums to discover when and how privacy conversations, as a representative of larger ethical debates, arise during development. It finds that online forums can be useful spaces for ethical deliberations, as developers use these spaces to define, discuss, and justify their values. It also discovers that ethical discussions in mobile development are prompted by work practices which vary considerably between iOS and Android, today’s two major mobile platforms. For educators, regulators, and managers interested in encouraging more ethical discussion and deliberation in mobile development, these work practices provide a valuable point of entry. But while the triggers for privacy conversations are quite different between platforms, ultimately the justifications for privacy are similar. Developers for both platforms use moral and cautionary tales, moral evaluation, and instrumental and technical rationalization to justify and legitimize privacy as a value in mobile development. Understanding these three forms of justification for privacy is useful to educators, regulators, and managers who wish to promote ethical practices in mobile development.

And the second, with me as lead author, has been published in New Media & Society:

“Platform Privacies: Governance, Collaboration, and the Different Meanings of ‘Privacy’ in iOS and Android Development”

Mobile application design can have a tremendous impact on consumer privacy. But how do mobile developers learn what constitutes privacy? We analyze discussions about privacy on two major developer forums: one for iOS and one for Android. We find that the different platforms produce markedly different definitions of privacy. For iOS developers, Apple is a gatekeeper, controlling market access. The meaning of “privacy” shifts as developers try to interpret Apple’s policy guidance. For Android developers, Google is one data-collecting adversary among many. Privacy becomes a set of defensive features through which developers respond to a data-driven economy’s unequal distribution of power. By focusing on the development cultures arising from each platform, we highlight the power differentials inherent in “privacy by design” approaches, illustrating the role of platforms not only as intermediaries for privacy-sensitive content but also as regulators who help define what privacy is and how it works.

Discovering the Divide: Technology and Poverty in the New Economy

This article uses archival materials from the Clinton presidency to explore how the ‘digital divide’ frame was initially built. By connecting features of this frame for stratified internet access with concurrent poverty policy discourses, the ‘digital divide’ frame is revealed as a crucial piece of the emergent neoliberal consensus, positioning economic transition as a natural disaster only the digitally skilled will survive. The Clinton administration framed the digital divide as a national economic crisis and operationalized it as a deficit of human capital and the tools to bring it to market. The deficit was to be resolved through further competition in telecommunications markets. The result was a hopeful understanding of ‘access’ as the opportunity to compete in the New Economy. In the International Journal of Communication 10 (2016): 1212-1231.

Not Bugs, But Features: Towards a Political Economy of Access

This short chapter on the future of research into stratified access to the internet and the skills to use it, the ‘digital divide’ research program, was written in response to a call from the Partnership for Progress on the Digital Divide’s 2014 Twenty Years of the Digital Divide symposium, at the International Communication Association Annual Conference in Seattle. It is forthcoming in an ebook of the same title, where authors stake out different positions on the future of the digital divide and research into it. I argue that digital divide scholarship has missed an opportunity to lead the conversation on inequality in the information economy by focusing on bugs in contemporary capitalism rather than features of technological change driving stratification. A research program centered on ever more carefully refined measures and spectra of who has which skills or tools and what rewards they receive from them at best gives tacit approval to the pernicious myth of a skills gap. At worst, it acts as an institutional cargo cult: Assuring ourselves that the good life will emerge if the symbols of it (i.e., ICT and related skills) are present. I argue that the field should instead shift its focus from informational poverty to informational inequality by developing a ‘political economy of access’ that focuses not on degrees of poverty but its production in relationship to wealth, not gaps but the power to make them. Three potential research areas for this program are offered: the over- or under-valuing of certain technical skills in certain geographies and labor markets (i.e., who is invested in the ‘skills gap’ story and why); the redefinition, movement, or erosion of ‘good jobs’ through information technology; and the design of online job applications as a screening process for enterprises, and as a black-boxed filter and a digital poll tax for applicants. PDF

The Digital Spatial Fix (co-authored with Daniel Joseph)

This article brings distinct strands of the political economy of communication and economic geography together in order to theorize the role digital technologies play in Marxian crisis theory. Capitalist advances into digital spaces do not make the law of value obsolete, but these spaces do offer new methods for displacing overaccumulated capital, increasing consumption, or accumulating new, cheaper labor. We build on David Harvey’s theory of the spatial fix to describe three digital spatial fixes, fixed capital projects that use the specific properties of digital spaces to increase the rate of profit, before themselves becoming obstacles to the addictive cycle of accumulation: the primitive accumulation of time in the social Web, the annihilation of time by space in high-frequency trading, and affect rent in virtual worlds. We conclude by reflecting on how these digital spatial fixes also fix the tempo of accumulation and adjust the time-scale of Marxian crisis theory. In TripleC 13.2 (2015): 223-247.

Drone Vision

What does the drone want? What does the drone need? Such questions, posed explicitly and implicitly by anthropomorphized drones in contemporary popular culture, may seem like distractions from more pressing political and empirical projects addressing the Global War on Terror (GWOT). But the artifacts posing these questions offer a different way of viewing contemporary surveillance and violence that helps decouple the work of drones from justifications for drone warfare, and reveals the broader technological and political network of which drones are the most immediate manifestation. This article explores ‘drone vision’, a globally distributed apparatus for finding, researching, fixing and killing targets of the GWOT, and situates dramatizations of it within recent new materialist theoretical debates in surveillance and security studies. I model the tactic of ‘seeing like a drone’ in order to map the networks that support it. This tactic reveals a disconnect between the materials and discourses of drone vision, a disconnect I historicize within a new, imperial visual culture of war distinct from its modernist, disciplinary predecessor. I then explore two specific attempts to see like a drone: The drone art of London designer James Bridle and the Tumblr satire Texts from Drone. I conclude by returning to drone anthropomorphism as a technique for mapping the apparatus of drone vision, arguing that the drone meme arises precisely in response to these new subjects of war, as a method to call their diverse, often hidden, materials to a public accounting. In Surveillance & Society 13.2 (2015): 233-249.