I have always enjoyed fixing computers. This is not because of the challenges presented by the process of computer repair (although there is a certain amount of enjoyment to be found there as well) but because it is interesting to hear how people feel about their computers, both in their normal functioning and in their malfunctioning. People seemed to have come up with a near-infinite number of ways to make sense of the functioning (or malfunctioning) of these machines. I came to think of these quirky approaches to grappling with the black box of computational devices as little rituals. Cultural anthropologist Victor Turner describes rituals as symbolic actions, grouping them alongside other forms of symbolic action such as social drama and metaphor (4). However, I did not have a concrete definition of what a technological ritual was; I just knew it when I saw it.
Fundamental to these forms is the idea that rituals are activities that occur in the material world but have some importance beyond their material qualities. Metaphor has become an important tool for helping users understand the otherwise complex functioning of digital devices (e.g. 1). Digital technology also has its share of social drama: a Facebook relationship status, for instance, is one way to solidify a romantic engagement between two people. Even ritual itself has been discussed in the context of computation. One study has examined how “ritualized interactions often play a major role in the performance and experience of the art or performance work” (2), while another has looked at how ritual activities could make virtual characters seem more like real characters (3). However, art performances hold a kind of lofty ambition, and giving virtual characters rituals focuses on representing people to make them easier to interact with. I wonder how looking at the more everyday practices of people as they relate to technology could lead to a better understanding of both people and the technology they use. As an example of how to look at technological interactions in terms of ritual, I point to Merlin Mann’s Inbox Zero.
It is common to hear people complain about having too much email. It takes a lot of time to sort through all of one’s messages, it causes problems with missed communication, and it can make people feel overwhelmed by the amount of information they are receiving. As an answer to this problem, Merlin Mann describes Inbox Zero (http://inboxzero.com/), a way of handling email overload. At one level, this is a prescription of simple actions: sorting, removing, and addressing the demands presented in a person’s inbox. However, it is also a set of small actions that in combination hold a certain higher personal and social value. The empty inbox described by the process’s name not only reduces distractions when new email comes in, it also serves as a symbol of technological well-adjustment. It is social in the sense that the person’s relations to others are kept in check. The material of Inbox Zero is an empty inbox; its meaning is control of technology in a way that also incorporates interactions with other people.
This idea of ritual, as it pertains to technology, is still quite rough. However, as HCI has focused more on experiences and the designing thereof, the kind of duality of meaning that comes from ritual acts may prove to be a valuable way of understanding the relationships between the form and function of artifacts and the meanings that people ascribe to them. Looking at interactions as rituals may point to better understandings of digital artifacts and the people who interact with them.
1. Blackwell, A. F. (2006). The reification of metaphor as a design tool. ACM Transactions on Computer-Human Interaction (TOCHI), 13(4), 490–530.
2. Loke, L., Khut, G. P., & Kocaballi, A. B. (2012, June). Bodily experience and imagination: designing ritual interactions for participatory live-art contexts. In Proceedings of the Designing Interactive Systems Conference (pp. 779–788). ACM.
3. Mascarenhas, S., Dias, J., Afonso, N., Enz, S., & Paiva, A. (2009, May). Using rituals to express cultural differences in synthetic characters. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (Vol. 1).
4. Turner, V. W. (1975). Dramas, fields, and metaphors: Symbolic action in human society. Cornell University Press.
Calling into question design’s ability to solve problems: a quick look at micromanagement technologies for low-wage service jobs
In academia, we often talk about technology becoming increasingly pervasive (or ubiquitous) in daily life, referring to technologies that move beyond the personal computer and into multiple locations. Technologists often herald this vision of pervasiveness as a positive change: having more technology opens up new spaces for design to explore and new problems to solve. While new pervasive technologies can address problems in more innovative ways, these new forms create as many problems as they purport to “solve”. In the case we examine today, new technologies do not so much solve problems as displace burdens from one set of people to another.
This article from the New York Times outlines the plight of retail and wholesale service workers (e.g., cashiers, cooks, stockers, etc.). Newly adopted time-management technologies micromanage workers’ hours to such a degree that it impacts their non-work lives. From one perspective, these technologies solve employers’ problems, such as creating new ways to deal with peak customer demand and getting the most out of workers in four-hour periods. This may be beneficial for the employers, but in the process of creating efficiencies and responsiveness to economic pressures and trends, the new technologies have essentialized human beings as parts of algorithms. By understanding what these new technologies are doing to low-wage service employees, we understand that this time-management software is not solving a problem; it is shifting a burden.
“We’re seeing more and more that the burden of market fluctuations is being shifted onto the workers, as opposed to the companies absorbing it themselves” – from the article
By using these neoliberal micromanagement technologies, employers gain access to a flexible, on-demand workforce without the responsibility (or cost) of officially placing individuals on-call. In more skilled jobs, companies often have to pay for the privilege of having a person “on-call” (meaning they can request that you come in to work). This is not the case for these service workers, which indicates that the introduction of these new practices and technologies also shifts workers’ workplace expectations.
This article leaves me with a few thoughts:
To be clear, I don’t think shifting burdens happens in every case of design, but it becomes likely in cases where design enrolls multiple parties and stakeholders with unequal positions of power. In this scenario, employers and employees are both impacted by the novel micromanagement technology, but employees are made to bear the responsibility of being responsive to market pressures.
These new micromanagement technologies create new ways for employers to understand their workforce and efficiently allocate their human and non-human resources. These technologies create different types of visibility and understanding of these resources, but we do not entirely understand their potential impacts, or those of their accompanying practices, on employers and employees. If anyone has links to relevant research on the impact of such technologies on low-wage service jobs, I would welcome suggested readings.
As I’ve argued, designers and technologists are not always “solving problems” through their innovations; in their efforts to solve problems, they are also creating new problems by displacing and shifting burdens onto others. This leaves unanswered questions about how design might better account for shifting burdens and about the processes by which these shifts actually happen. It also presents a new occasion for design to create opportunities for these low-wage service workers. Prior research documents how rarely new technologies disrupt power structures, but it is not impossible. At the end of the article, the author points to workers’ diminished power to collectively organize and form unions as part of why such technologies exist and why low-wage service jobs without much mobility may increasingly become the norm. This point presents an opportunity for design to help low-wage service workers better understand how technology impacts their everyday working experiences, as well as to design new methods for collectively organizing for better treatment, wages, and working conditions. This leaves open questions: how can design change and help improve low-wage service workers’ situations? What kinds of new technologies, visibilities, practices, and norms would need to be established and/or supported to help low-wage service workers collectively produce action?
It is important to note that new micromanagement technologies relying on creative and novel forms of algorithmic thinking and data collection will continue to pervade the lives of low-wage service workers. This leaves open areas of research exploring the relationships among these technologies, workers, and market forces.
In the New York Times today there is an article about Google X, the top-secret lab for big ideas at Google. According to the article, the future being imagined here is “a place where your refrigerator could be connected to the Internet, so it could order groceries when they ran low. Your dinner plate could post to a social network what you’re eating. Your robot could go to the office while you stay home in your pajamas. And you could, perhaps, take an elevator to outer space.”
This is indeed a compelling vision… maybe. Am I the only one who finds this future a little underwhelming, maybe even problematic and dysfunctional? For one thing, aren’t there already enough what-I-had-for-lunch tweets without plates getting in on the action? And what if the plate (because of course it has artificial intelligence) decides to chime in with some commentary: ‘pizza leftovers again?! @John’sMom are you seeing this?’
And while staying at home in pajamas does sound pretty attractive, how does sending your robot into the office help? Does it make typing noises at your computer so people think you’re there? Does it go to meetings for you? Does it make decisions for you? What if it messes up? Could you really relax at home in your pajamas knowing that your robot might create a huge mess (bureaucratic or physical) that you will need to clean up? What if your robot knows how you really feel about your coworker and gets into a fight with your coworker’s robot? Could your robot be fired? Could your robot get you fired? Could it get promoted? Who would be held responsible for its actions: you, the robot, the robot’s designer? Would the robot have a moral compass, and if so, whose? Would everyone send their robots in for them, so the workplace would be entirely robots? Would it be all the same to them if the lights and heat were shut off to save electricity? Would there be robot unions to protest this mistreatment?
And then there’s the grocery-ordering refrigerator. This seems to be one of the most common images of a digital future of pervasive computing, no doubt inspired by a moment of watching the last few drops of milk drip onto still-dry cereal and thinking ‘man, I wish the refrigerator could have just taken care of that.’ But what kind of groceries would it order? It stands to reason that a digital refrigerator would need to deal in SKUs, which would make it easy to order more frozen pizza but more difficult to order ‘the best-looking local in-season fruit’. Also, what infrastructure would this require? In addition to the refrigerator, an ordering system would need to be in place on the grocery store’s end, along with, perhaps, a delivery service. It’s hard to imagine smaller markets being able to invest in this, and vendors at the local farmers’ market would be out of the loop entirely. This would undoubtedly seem unproblematic to many people, but it is significant that such biases could be encoded in technical systems, encouraging already-existing (unhealthy) habits to become even more entrenched.
As Langdon Winner has argued, technologies shape forms of life: technology design is ultimately about choosing ways of living, of ordering the world around us and our activities in it. While geeky technophiles tend to do a pretty good job of dreaming up some very cool and labor-saving technologies, they are less good at envisioning the forms of life that they might institute.
This is where more nuanced and critical approaches like Social Informatics might be useful. As scholars who study social dimensions of technologies we are used to teasing apart their various social, cultural, philosophical, historical, political, and ethical aspects, and looking at them critically. These aspects are just as much, if not more, important than technical feasibility, yet they are discussed far less frequently (if at all) during technology development and assessment. Maybe one of the reasons for this is that our existing critical approaches focus on technologies that already exist, not ones that have yet to be implemented.
But why should geeks working at big corporations with deep pockets be the ones who get to decide what our (digital) future should look like? What sorts of futures might Social Informatics scholars envision? And as we’re imagining futures, could we also maybe move past our own laziness to consider how we might build a future with less inequality and more justice, less stress and more health, less poverty and excess and more true wealth and happiness?
All of these may sound like unattainable goals. But imagining a future in which they are true would be a first step toward making them a reality. And I would take that over a ‘smart’ refrigerator any day.