Activism in Technology

One of the things that first attracted me to Quakerism was its legacy of social activism. As a disillusioned “Exvangelical,” I was frustrated with the ways in which I felt many churches actively supported the status quo instead of questioning and resisting it. Many of these churches do this not by what they say or do but by what they do not say and do not do. In remaining silent, however, they do not merely imply complicity but realize it (consider the “Good German” phenomenon). The thing is, this is not just a quirk of some conservative churches. This is something that many of us do, because it is easier to do nothing than to do something. And this, of course, extends to tech.

I know lots of people in tech who think that evil technology (and evil uses of “neutral” technology) is inevitable. “If we don’t build it, someone else will” is a sentiment I’ve heard in the classroom as well as on the Internet. This was the topic of my first post, in which I thought about what a “Quakerish ethic” in our work and in technology would look like.

In the past few weeks, I’ve seen some heartening examples of what it looks like when this kind of ethic is realized. It recently came out that, at the beginning of this year, a group of nine (nine!) Google employees protested Google’s military contract work by refusing to build a key security feature geared to help Google win such contracts. This act of rebellion was allegedly a catalyst for the larger employee movement in April to end Project Maven, which was also successful.

More recently, employees at huge tech companies have responded to their employers’ contracts with ICE: Amazon and Microsoft (including recently acquired GitHub) employees have signed open letters to their respective employers, threatening to leave if the ICE contracts continue. Just today, Buzzfeed reported that Salesforce employees have signed a petition to end Salesforce’s contract with US Customs and Border Protection.

Jackie Luo, an engineer I follow on Twitter, pointed out that when this kind of activism happens, the “If we don’t build it, someone else will” argument falls apart:

This. Works. Tech employees don’t often realize how much power we have, especially in big corporations where you can feel replaceable, one minuscule part in a massive machine. Alphabet, Inc., Google’s parent company, reported having 88,110 employees in 2017. It only took nine of them, situated in a key area, to block Google from winning a military contract. There were 4,000 signatures on the petition against Project Maven, which is only about 5% of Google’s full-time employees.

Let’s continue to take responsibility for what we create and think about the consequences of our actions. My hope is that this will bleed into the rest of tech, past the hot-button government contract issues. While these are so, so important, I also hope that we will begin to wrestle with the more insidious problems, like the unethical smartphone supply chain, at this level as well.

How Should We Respond to Injustice in a Culture of Outrage? Part II

After several weeks of hiatus, I am back! The middle of the semester proved to be a busy time for me, but now it is winding down. I wanted to write a brief post about what we as tech-makers can do to work against outrage culture and towards meaningful, empathetic interactions with other human beings. A lot of this post is really a compilation of great things other people have said on this subject that I wanted to gather in one place. 🙂

Mike Monteiro, in his Medium post “A Designer’s Code of Ethics,” claims that designers should “value impact over form,” and that their work should be evaluated based on its impact in a system, not as if it were designed in a vacuum (because, obviously, it wasn’t). Designers should treat tech like a theoretical physics experiment: we are responsible for what it does and how it is used, even if it is being used against our “intention” for it.

Anil Dash has written extensively on this subject (it’s where I got the name “humane tech” from). Similar to Monteiro, he says:

We need to challenge our definitions of success and progress, and to stop considering our work in solely commercial terms. We need to radically improve our systems of compensation, to be responsible about credit and attribution, and to be generous and fair with reward and remuneration. We need to consider the impact our work has on the planet. We need to consider the impact our work has on civic and academic institutions, on artistic expression, on culture.

We also have to know when to say no to certain projects. Monteiro also points out that an object designed to harm people cannot be “well-designed,” because to design it well is to make it better at harming people. This sentiment relates to my first post on this blog: if we are to be ethical designers, there are some assignments that we cannot take.

So, how does this apply to our accommodation of outrage culture? Dash’s “8 Steps for Preventing Abuse in a Web Community” is a great place to start. A lot of it really boils down to accountability: are members of the community held seriously accountable for the way they participate in it? Is the community built in a way that discourages abuse, whether through moderating, reporting, or even stigma and norms?

Ultimately, it is up to those who create and maintain these online spaces to bear responsibility for the culture of that community. This is a big investment on their part, but a necessary one. As community makers and maintainers, we can and should set rules for what a community is for and the expectations we have for members of that community.

Working Towards a Quakerish Ethic of Work and Technology

On The Atlantic‘s Facebook post of this video about technology and ethics, I saw a comment suggesting that technology itself is neutral; it’s a tool that people can use for good or evil, so technology itself cannot be ethical or unethical. The problem is, technology is more than a tool. It’s not a hammer or a pencil; increasingly, it is part of ourselves, and we have begun entrusting more and more of ourselves to it. Additionally, we’ve had the capability to make unethical or even evil technology for decades. Aside from obvious examples like drones and computer viruses, there are lurking evils within technology itself that, if not caught and taken care of, could run amok beyond even their human masters. Zeynep Tufekci’s TED talk about an AI-fueled dystopia is a good place to start on this topic.

There’s been a lot written on this subject, so I won’t try to rehash it here. My concern is that, as a senior computer science major, I feel as if many (not all!) of my classmates are cavalier about what their code might be used for. My AI professor asked us a week ago whether we would bear any responsibility if, say, drone software we helped write were used in military drones attacking civilians. Several people said no. They want to be able to create anything and wash their hands of it if it is used for evil, but I’m sure they would want some credit if it were used for good.

And I get it: a few years ago I would have felt the same way. What does it matter to me if someone deliberately misuses something I made? Especially if it was a relatively minor contribution in a millions-of-lines code base! But as a Quaker, I now have a different perspective. Any decision I make, from what I consume to what I produce, should be carefully thought through. Who and what will my choices affect? Where am I in the supply chain, and what are the ethical implications at every step along it? If I am producing something, how can it be misused, in the cybersecurity sense as well as in a moral or ethical sense, as in the drone example?

One of my Quaker mentors once gave some advice that has helped me with this quandary. To sum up a lengthier and more in-depth talk: choose actions that have clear, direct effects over those with indirect effects, and avoid actions with unclear effects if possible. This is one of the best ways to curb any evil effects of your own actions. Of course, as an American at the end of many unethical supply chains, with little knowledge of the effects many of my actions have on the rest of the world, this can be difficult to follow. But in terms of decision-making (should I work at company X? should I contribute to project Y?) it is invaluable. It’s part of what pushed me to commit to pursuing accessible and assistive technology as my research interest for my summer internship, graduate school, and beyond. I am passionate about the truly good things we can do with technology and am excited to better the world around me, but I am also wary of the attitude that I can do whatever I want and then try to renarrate it as serving Christ later on.

I remember listening with sadness in my heart when one professor I respect said that he had worked for Monsanto for a while and did not seem too concerned about any of the ethical implications of that. I know of another man who has worked for Lockheed Martin for perhaps decades and once remarked that he has had bad experiences working with female engineers because they have a more personal connection to their work (and therefore might feel more morally convicted or culpable for what their work might be used for). I myself am concerned about the implications of working for any large corporation or institution: my work could have more of an impact and reach more people, but at what cost? That is something that I am thinking about constantly. In the end, it is hard to say what impact your work will have, but I hope that we will become more thoughtful and more deliberate about what work we choose to do, and why we choose to do it.