Little Listicles #2


This is the second in a weekly series of “little listicles,” though this week it’s a single “listicle” and really more like a think-aloud article. I strongly dislike the title of this series, but I haven’t figured out a better alternative yet. You can read more about my commitment to this kind of writing here.

1 evolving “irk”

Note: The intended audience for this post is the general public. This is a free-write with minimal editing – these thoughts will surely change with time and new information, but it’s helpful to me to record them publicly.

This is an observation or idea about a potential research direction that I’ve discussed in conversation a few times, and I’m going to attempt to write it out here. I am still new to all of this, and there are surely some pieces missing from what I’m about to say.

From what I know so far, there is a flurry of recent research about ethics in computer science education, and more recently, what is called “critical consciousness” in computing, or critical computing education. The idea of “critical consciousness” comes from a Brazilian educator named Paulo Freire who wrote about the emancipatory potential of education in Pedagogy of the Oppressed (which I haven’t read in its entirety; I’ve only read things descended from it). In my understanding, this kinda-mostly boils down to the idea that through education, people who experience oppression learn about their oppression, question it, and organize to change it. Critical consciousness is the outcome of this education, where a critically conscious person is able to look at a system or something happening in the world and see where systems of oppression are produced and reproduced, kind of like “seeing through the matrix” to use a popular culture reference.

What does this have to do with computing? Well…oppression is enforced and reinforced through computational systems, and by “computational systems”, I mean anything that has to do with computers. We mostly hear about this oppression in terms of biased machine learning algorithms and biased datasets, but this can extend to digital technology in general: who has access to these technologies, who is included/excluded in the design of the technologies, who is surveilled by them, their impact on the environment, and who is most affected by this impact.

At the root, I understand this to be the result of capitalism. Capitalism works by exploiting groups of people for the gain of other groups. Today’s powerful Western world is founded on exploitation/capitalism, and innovations in computing allow capitalism to operate at warp speed. The majority of digital technology is created with capitalist intentions, so by this logic, these technologies are exploitative by default because they are capitalist. There are certainly people researching alternative paradigms for digital technology, e.g., how technology can be liberatory or how technology can be feminist, but the fact remains that the majority of digital technology / computational systems we encounter today are exploitative at some level, often in ways we can’t really see without careful contemplation or a wealth of background knowledge.

To bring this back around, my understanding is that critical consciousness in computing is the ability to see how computing systems are exploitative.

The thing I don’t understand is what is expected to happen once people are critically conscious of computing. What I am noticing, through the lens of my own biases, is a dangerous assumption that by teaching the software engineers of the future to be critically conscious about computing, they will refuse to implement exploitative systems or will propose alternatives to them. A drastically simplified version of this narrative is something like: The critically conscious developer at Big Tech Company is asked by their manager to implement a biased algorithm, and they say “no”. This seems very far-fetched to me. Would a critically conscious developer even work at Big Tech Company? How would they know the algorithm they’ve been asked to implement is biased? Even if they do suspect it to be biased, what if they are accessing the algorithm via a third-party API and cannot access its details? Are they willing to lose their job over it? Won’t the manager just find someone else to do the work?

I guess I’m coming to appreciate that educating critically conscious engineers who will purportedly make more responsible decisions at work is important, but it’s not the whole story. I wonder what other narratives are out there about what is expected to happen when developers are taught to be critically conscious of computing.

I’m learning how to see “gaps” in research, i.e. when there is a topic missing from “the literature”. Not all gaps are important to fill, but as a new researcher, I understand it to be a generally positive thing to find a “gap” that is exciting to you because that means it’s a place you can contribute. Maybe I’m seeing a gap in computing literature…or I’m using the wrong search terms and looking in the wrong places.

A bit of great advice I got from a Ph.D. student I spoke with when I was working on my applications was to keep track of the things that irk me, that is, things that cause me to feel a certain way and keep coming up. These irks will be meaningful to read later once I understand a lot more about the topic. So, this post is about one of those irks (I have a private list of many more!).


A few sources that contributed directly and indirectly to my ability to write this post: