On The Atlantic's Facebook post of this video about technology and ethics, I saw a comment suggesting that technology itself is neutral: it's a tool that people can use for good or evil, and so technology itself cannot be ethical or unethical. The problem is, technology is more than a tool. It's not a hammer or a pencil; increasingly it is part of ourselves, and we have begun entrusting more and more of ourselves to it. Additionally, we've had the capability to make unethical or even evil technology for decades. Aside from obvious examples like drones and computer viruses, there are lurking evils within technology itself that, if not caught and addressed, could run amok beyond even its human masters. Zeynep Tufekci's TED talk about an AI-fueled dystopia is a good place to start for this topic.
There's been a lot written on this subject, so I won't try to rehash it here. My concern is that, as a senior computer science major, I feel as if many (not all!) of my classmates are cavalier about what their code might be used for. My AI professor asked us a week ago whether we would bear any responsibility if, say, drone software we helped write were used in military drones attacking civilians. Several people said no. They want to be able to create anything and wash their hands of it if it is used for evil, but I'm sure they would want some credit if it were used for good.
And I understand; a few years ago I would have felt the same way. What does it matter to me if someone deliberately misuses something I made? Especially if it was a relatively minor contribution to a millions-of-lines codebase! But as a Quaker I now have a different perspective. Any decision I make, from what I consume to what I produce, should be carefully thought through. Who and what will my choices affect? Where am I in the supply chain, and what are the ethical implications at every step along the chain? If I am producing something, how can it be misused – in the cybersecurity sense as well as in a moral or ethical sense, as in the drone example?
One of my Quaker mentors once gave some advice that has helped me with this quandary. To sum up a lengthier and more in-depth talk: choose actions that have clear, direct effects over those with indirect effects, and avoid actions whose effects are unclear whenever possible. This is one of the best ways to curb any evil effects of your own actions. Of course, as an American at the end of many unethical supply chains, with little knowledge of what effects my actions have on the rest of the world, this can be difficult to follow. But in terms of decision-making – should I work at company X, should I contribute to project Y – it is invaluable. It's part of what pushed me to commit to pursuing accessible and assistive technology as my research interest for my summer internship, graduate school, and beyond. I am passionate about the truly good things we can do with technology and am excited to better the world around me, but I am also wary of adopting an attitude that I can do whatever I want and then try to renarrate it as serving Christ later on.
I remember listening with sadness in my heart when one professor I respect said that he worked for Monsanto for a while, and did not seem too concerned about any of the ethical implications of that. I know of another man who has worked for Lockheed Martin for perhaps decades, and who once remarked that he's had bad experiences working with female engineers because they have a more personal connection to their work (and therefore might feel more morally convicted or culpable for what their work might be used for). I myself am concerned about the implications of working for any large corporation or institution – my work could have more of an impact and reach more people, but at what cost? That is something I am thinking about constantly. In the end, it is hard to say what impact your work will have, but I hope that we will become more thoughtful and more deliberate about what work we choose to do, and why we choose to do it.