How Should We Respond to Injustice in a Culture of Outrage? Part II

After several weeks of hiatus, I am back! The middle of the semester proved to be a busy time for me, but now it is winding down. I wanted to write a brief post about what we as tech-makers can do to work against outrage culture and towards meaningful, empathetic interactions with other human beings. Much of this post is really a compilation of great things other people have said on this subject, gathered in one place. 🙂

Mike Monteiro, in his Medium post “A Designer’s Code of Ethics,” claims that designers should “value impact over form,” and that their work should be evaluated based on its impact in a system, not as if it were designed in a vacuum — because, obviously, it wasn’t. Tech should not be treated by its designers like a theoretical physics experiment, sealed off from real-world consequences. We are responsible for what it does and how it is used, even if it is being used against our “intention” for it.

Anil Dash has written extensively on this subject (it’s where I got the name “humane tech” from). Similar to Monteiro, he says:

We need to challenge our definitions of success and progress, and to stop considering our work in solely commercial terms. We need to radically improve our systems of compensation, to be responsible about credit and attribution, and to be generous and fair with reward and remuneration. We need to consider the impact our work has on the planet. We need to consider the impact our work has on civic and academic institutions, on artistic expression, on culture.

We also have to know when to say no to certain projects. Monteiro also points out that an object designed to harm people cannot be “well-designed” because to design it well is to design it to harm other people. This sentiment is related to my first post on this blog — if we are to be ethical designers, there are some assignments that we cannot take.

So, how does this apply to our accommodation of outrage culture? Dash’s “8 Steps for Preventing Abuse in a Web Community” is a great place to start. A lot of it boils down to accountability: are members of the community held seriously accountable for the way they participate in it? Is the community built in a way that discourages abuse, whether through moderation, reporting, or even stigma and norms?

Ultimately, it is up to those who create and maintain these online spaces to bear responsibility for the culture of that community. This is a big investment on their part, but a necessary one. As community makers and maintainers, we can and should set rules for what a community is for and the expectations we have for members of that community.

Working Towards a Quakerish Ethic of Work and Technology

On The Atlantic’s Facebook post of this video about technology and ethics, I saw a comment suggesting that technology itself is neutral; it’s a tool that people can use for good or evil, and so cannot itself be ethical or unethical. The problem is, technology is more than a tool. It’s not a hammer or a pencil; increasingly it is part of ourselves, and we have begun entrusting more and more of ourselves to it. Additionally, we’ve had the capability to make unethical or even evil technology for decades. Aside from obvious examples like drones and computer viruses, there are lurking evils within technology itself that, if not caught and addressed, could run amok beyond the control of its human masters. Zeynep Tufekci’s TED talk about an AI-fueled dystopia is a good place to start on this topic.

There’s been a lot written on this subject, so I won’t try to re-hash it here. My concern is that, as a senior computer science major, I feel as if many (not all!) of my classmates are cavalier about what their code might be used for. A week ago, my AI professor asked us whether we would bear any responsibility if, say, drone software we helped write were used in military drones attacking civilians. Several people said no. They want to be able to create anything and wash their hands of it if it is used for evil, but I’m sure they would want some credit if it were used for good.

And I understand; a few years ago I would have felt the same way. What does it matter to me if someone deliberately misuses something I made? Especially if mine was a relatively minor contribution in a millions-of-lines codebase! But as a Quaker, I now have a different perspective. Any decision I make, from what I consume to what I produce, should be carefully thought through. Who and what will my choices affect? Where am I in the supply chain, and what are the ethical implications at every step along it? If I am producing something, how can it be misused – in the cybersecurity sense as well as in a moral or ethical sense, as in the drone example?

One of my Quaker mentors once gave some advice that has helped me with this quandary. To sum up a lengthier and more in-depth talk: choose actions that have clear, direct effects over those with indirect effects, and avoid actions with unclear effects whenever possible. This is one of the best ways to curb any evil effects of your own actions. Of course, as an American at the end of many unethical supply chains, with little knowledge of the effects my actions have on the rest of the world, this can be difficult to follow. But in terms of decision-making – should I work at company X, should I contribute to project Y – it is invaluable. It’s part of what pushed me to commit to pursuing accessible and assistive technology as my research interest for my summer internship, graduate school, and beyond. I am passionate about the truly good things we can do with technology and am excited to better the world around me, but I am also wary of adopting an attitude that I can do whatever I want and then try to renarrate it as serving Christ later on.

I remember listening with sadness in my heart when one professor I respect said that he had worked for Monsanto for a while, and did not seem too concerned about the ethical implications of that. I know of another man who has worked for Lockheed Martin for perhaps decades, and who once remarked that he has had bad experiences working with female engineers because they have a more personal connection to their work (and therefore might feel more morally convicted of, or culpable for, what their work might be used for). I myself am concerned about the implications of working for any large corporation or institution – my work could have more of an impact and reach more people, but at what cost? That is something I think about constantly. In the end, it is hard to say what impact your work will have, but I hope that we will become more thoughtful and more deliberate about what work we choose to do, and why we choose to do it.