Activism in Technology

One of the things that first attracted me to Quakerism was its legacy of social activism. As a disillusioned “Exvangelical,” I was frustrated with the ways in which I felt many churches actively supported the status quo instead of questioning and resisting it. Many of these churches do this not through what they say and do but through what they leave unsaid and undone. By remaining silent, however, they do not merely imply complicity; they realize it (consider the “Good German” phenomenon). The thing is, this is not just a quirk of some conservative churches. This is something that many of us do, because it is easier to do nothing than to do something. And this, of course, extends to tech.

I know lots of people in tech who think that evil technology (and evil uses of “neutral” technology) is inevitable. “If we don’t build it, someone else will” is a sentiment I’ve heard in the classroom as well as on the Internet. This was the topic of my first post, in which I thought about what a “Quakerish ethic” would look like in our work and in technology.

In the past few weeks, I’ve seen some heartening examples of what it looks like when this kind of ethic is realized. It recently came out that, at the beginning of this year, a group of nine (nine!) Google employees protested Google’s military contract work by refusing to build a key security feature geared to help Google win such contracts. This act of rebellion was allegedly a catalyst for the larger employee movement to end Project Maven in April — which was also successful.

More recently, employees at huge tech companies have responded to their employers’ contracts with ICE: Amazon and Microsoft (including recently acquired GitHub) employees have signed open letters to their respective employers, threatening to leave if the ICE contracts continue. Just today, BuzzFeed reported that Salesforce employees have signed a petition to end Salesforce’s contract with US Customs and Border Protection.

Jackie Luo, an engineer I follow on Twitter, pointed out that when this kind of activism happens, the “If we don’t build it, someone else will” argument falls apart:

This. Works. Tech employees don’t often realize how much power we have, especially in big corporations where you can feel replaceable, one minuscule part in a massive machine. Alphabet, Inc., Google’s parent company, reported having 88,110 employees in 2017. It only took nine of them, situated in a key area, to block Google from winning a military contract. There were 4,000 signatures on the petition against Project Maven, fewer than 5% of Google’s full-time employees.

Let’s continue to take responsibility for what we create and think about the consequences of our actions. My hope is that this will bleed into the rest of tech, past the hot-button government contract issues. While those are so, so important, I also hope that we will begin to wrestle, at this same level, with the more insidious problems, like the unethical smartphone supply chain.

How Should We Respond to Injustice in a Culture of Outrage? Part I

When searching for an appropriate definition of “outrage culture” online, I came across this one by Reddit user headless_bourgeoisie in /r/OutOfTheLoop:

Outrage Culture refers to the idea that a large number of people in “western” society seem to crave being offended and actively seek out things that will offend them and create controversy where there is none (presumably in an effort to claim the figurative “moral high ground”). This is perhaps a product of the social media age since everyone with an internet connection now has a potentially gigantic audience for their opinions. “Clickbait” sites like Buzzfeed and Jezebel perpetuate this phenomenon by jumping on any controversy, no matter how flimsy, in order to amass precious website traffic.

People across the political spectrum have been guilty of participating in and perpetuating outrage culture (my personal favorite, if you can call it that, is still Hannity fans destroying their own Keurig coffee makers in support of an alleged child molester). What I find interesting is headless_bourgeoisie’s point that this is a product of the “social media age.” I think outrage culture has in some sense been around for much longer, but it has flourished in the broader Internet culture of the 2000s and 2010s. Social media is a hotbed for perpetuating this culture of outrage, because “outraged” people will like, share, and comment more than their calmer counterparts. This creates the infamous comments sections that fester beneath practically anything on the Internet, and it also serves to stifle real conversation, as many people have noted before.

Expressing outrage at a real or perceived injustice is a natural way to show our Facebook friends or Twitter followers that we belong in the ideological tribe we’ve chosen. It’s also a nice form of catharsis. As Christians, we have a moral obligation to oppose injustice and speak up when we see it (check out this article for some highlights of this in Quaker history). However, I believe we also have an obligation to do so in love, and in a way that will not just oppose injustice but promote the loving, restorative justice of Godde.

When I see an article about the latest thing the President did or said, I need to question my motivations for reading it and evaluate my response once I do. I generally don’t share or repost a lot on social media, but on platforms like Twitter I like a lot of tweets that perpetuate outrage culture, and I know my followers can see that I like them. So I have come up with some questions to ask myself before engaging with a post on social media. By “engaging” I also mean assimilating that post’s perception of the world into my own, which is often more dangerous and insidious than merely liking or sharing it.

  1. What is the rhetoric of the headline or blurb? What emotional response is it trying to elicit from me? What basic assumptions of mine is it playing to?
  2. What is the effect of this news, etc., on real people? Is that effect unjust? If I am not sure, what more information do I need?
  3. What are other (reasonable) responses to this content? Is there a place for me to contribute to the conversation to better understand others participating in it?
  4. If this content is about an injustice, what can I best do to directly promote justice? Is it responding in love to others in the conversation? Is it looking for ways to donate or volunteer for a cause? What can I do that will have a real effect on victims of injustice?

Obviously, I don’t methodically go through these questions for every post I see on Facebook. Instead, I try to keep these kinds of questions present in my mind as I engage with social media, and doing so has helped me respond more lovingly. I know that participating in outrage culture is a weakness I am constantly succumbing to, but acknowledging that is the first step on my journey to love.

With this in mind, my next post will focus on how people in tech can work to circumvent our current propensity towards outrage culture through humane design and development.

Silent Worship with Caroline Stephen

I was inspired to read some of Caroline Stephen’s book, Quaker Strongholds, after taking a class on Virginia Woolf last semester (Stephen is Woolf’s aunt). I wanted to share a section on silence that I bookmarked and have been thinking about for the last few days.

Take a deep breath.

Let it out.

Take another one.

Release.

Now, read with me.

It seems to me that nothing but silence can heal the wounds made by disputations in the region of the unseen. No external help, at any rate, has ever in my own experience proved so penetratingly efficacious as the habit of joining in a public worship based upon silence. Its primary attraction for me was in the fact that it pledged to me nothing, and left me altogether undisturbed to seek for help in my own way. But before long I began to be aware that the united and prolonged silences had a far more direct and powerful effect than this. They soon began to exercise a strangely subduing and softening effect upon my mind. There used, after a while, to come upon me a deep sense of awe, as we sat together and waited — for what? In my heart of hearts I knew in whose name we were met together, and who was truly in the midst of us. Never before had his influence revealed itself to me with so much power as in those quiet assemblies…

Take a deep breath.

Release.

Consider the sounds around you. I hear my computer whirring, my keyboard keys being tapped as I write this, birds chirping outside, my chair squeaking, and the washing machine outside my door spinning. My phone just vibrated with a message from my sister.

I’ve been thinking about silence, meditation, and mindfulness lately with respect to programming. I saw this piece on mindful code exercises on Twitter recently, and I love it so much. But Stephen’s words on “united and prolonged silences” struck me. While bookending coding sessions with meditations or repeating mantras while writing code reviews (I love that one) are great, I get stuck thinking it’s okay to keep slotting mindfulness activities into my daily work instead of setting aside time to be silent with other people. I have found that centering myself with guided meditation YouTube videos in my room has been really good for my anxiety, but, for me at least, I also need that deeper centering that comes from silent worship in a group setting. I also, frankly, do not have the self-discipline yet to do silent meditation for more than ten minutes by myself; that’s where it helps to be in an hour-long unprogrammed meeting. 🙂

Working Towards a Quakerish Ethic of Work and Technology

On The Atlantic‘s Facebook post of this video about technology and ethics, I saw a comment suggesting that technology itself is neutral: it’s a tool that people can use for good or evil, so technology itself cannot be ethical or unethical. The problem is, technology is more than a tool. It’s not a hammer or a pencil; increasingly, it is part of ourselves, and we have begun trusting more and more of ourselves to it. Additionally, we’ve had the capability to make unethical or even evil technology for decades. Aside from obvious examples like drones and computer viruses, there are lurking evils within technology itself that, if not caught and taken care of, could run amok beyond even their human masters. Zeynep Tufekci’s TED talk about an AI-fueled dystopia is a good place to start on this topic.

There’s been a lot written on this subject, so I won’t try to rehash it here. My concern is that, as a senior computer science major, I feel as if many (not all!) of my classmates are cavalier about what their code might be used for. My AI professor asked us a week ago whether we would bear any responsibility if, say, drone software we helped write were used for military drones attacking civilians. Several people said no. They want to be able to create anything and wash their hands of it if it is used for evil, but I’m sure they would want some credit if it were used for good.

And I understand; a few years ago I would have felt the same way. What does it matter to me if someone deliberately misuses something I made? Especially if it was a relatively minor contribution to a millions-of-lines code base! But as a Quaker I now have a different perspective. Any decision I make, from what I consume to what I produce, should be carefully thought through. Who and what will my choices affect? Where am I in the supply chain, and what are the ethical implications at every step along the chain? If I am producing something, how can it be misused, in the cybersecurity sense as well as in a moral or ethical sense, as in the drone example?

One of my Quaker mentors once gave some advice that has helped me with this quandary. To sum up a lengthier and more in-depth talk: choose actions that have clear, direct effects over those with indirect effects, and avoid actions with unclear effects if possible. This is one of the best ways to curb any evil effects of your own actions. Of course, as an American at the end of many unethical supply chains, with little knowledge of what effects many of my actions have on the rest of the world, this can be difficult to follow. But in terms of decision-making (should I work at company X? should I contribute to project Y?), it is invaluable. It’s part of what pushed me to commit to pursuing accessible and assistive technology as my research interest for my summer internship, graduate school, and beyond. I am passionate about the truly good things we can do with technology and am excited to better the world around me, but I am also cautious of adopting an attitude that I can do whatever I want and then try to renarrate it as serving Christ later on.

I remember listening with sadness in my heart when one professor I respect said that he had worked for Monsanto for a while, and did not seem too concerned about any of the ethical implications of that. I know of another man who has worked for Lockheed Martin for perhaps decades, and who once remarked that he’s had bad experiences working with female engineers because they have a more personal connection to their work (and therefore might feel more morally convicted or culpable for what their work might be used for). I myself am concerned about the implications of working for any large corporation or institution: my work could have more of an impact, reach more people, but at what cost? That is something I think about constantly. In the end, it is hard to say what impact your work will have, but I hope that we will become more thoughtful and more deliberate about what work we choose to do, and why we choose to do it.