How important is word choice?

We’re in a semantic age, when battles are fought over the words we use to refer to people or experiences, and lines are drawn through vocabulary. In a non-political arena, I’m having a similar fight over the specific words historians use. How important are these distinctions, really?

I’ve been in a fight with one of my dissertation readers about whether I can use the word science to talk about the 12th century. Their argument is that science, as a word, conjures up a whole host of societal associations specific to life from the turn of the 20th century onward, so it is misleading to use it to talk about something so far in the past. As a result of this debate, I now have a section of my introduction dedicated to explaining the etymology and use of the word science in the Middle Ages, including the words scientia (Latin), ‘ilm (Arabic), and episteme (Greek). I was mostly done with this argument (I’m not going to lie, I’m still salty about it; science is literally in the title of my dissertation, and this reader has known that for 4 years), but then I got an email from that same reader this morning inviting me to a talk on the art of alchemy. And in a moment of vindictiveness, I fantasized about emailing them to tell them they should not use the word art. Honestly, this is another semantic argument I had already had with myself before writing any of my dissertation. All of my art history training taught me that art is as complex a word as science, with very specific connotations in the present. There is no art in the 12th century as we know it – there is craft, there is ornament, there is manipulation of matter, but there is no creative expression as such. As a result, I don’t use the word art in my dissertation (except as a translation of the Latin word ars, and even then only when I’m repeating someone else, because I think that translation is wrong too; feel free to ask me about it).

Ok, but does any of this matter? Don’t we all know what we mean anyway? No one thinks that people in the 12th century are straight up doing science – we know there is no scientific method, no laboratories, no control protocols. We know they’re not doing art either, that patronage is what determines artistic output and so everything we see is an expression of layers of input and not the emotional interpretation of one person. Why argue over the language we use to name these things, then?

It matters precisely because of the associations we have with words. The meaning of a word, especially in English, is highly dependent on context. And we need to signal the context we are using as much as possible to keep the people we are talking to focused on the point, and not trailing off into other associations. English is crazy this way – just ask anyone who has had to learn it as a second language. I was thinking recently about how necessary context is to understanding meaning in English, in relation to the word wound. What does wound mean? Does it mean the place where someone has been hurt? Or does it mean the act of twisting something around something else? Those are two completely different words, and it is impossible to know which one you mean without context. So, as much as we can, we need to signal context.

For my dissertation, this signaling matters insofar as it indicates to my readers how much of an expert I am on the subject I’m writing about. That helps my reputation, sure. But it also means that someone in Art History who picks up my dissertation (ha, as if anyone ever reads dissertations!) can understand what I’m talking about, because I’m using language they’re familiar with in the way they would use it. I don’t want to create too much of a barrier to entry to reading my work by requiring my reader to learn a whole new vocabulary. When we’re talking about semantics, these are the functions we have to keep in mind: do the words I’m using pose a barrier to entry, do they signal that I know what I’m talking about, and to what extent do they label me as belonging to a particular group? These are all matters of context that can change the implicit meaning of what I’m saying.

We have these semantic debates a lot in the socio-political sphere in the context of political correctness. What is the most respectful way to refer to someone as a member of a group? Rather than sifting through the muck of talking about racial identity, because that’s way too complicated, we can see this playing out more simply in the shift toward person-first language. Person-first language is naming someone as “a person who does x” rather than “an x person”. For example, I have asthma. The person-first approach would be to say “I am a person with asthma” or “I am a person suffering from asthma” as opposed to “I am (an) asthmatic”. The argument for using person-first language is that it emphasizes that even though we are singling someone out because of the modifier attached to them, they are still a person, and their feelings and rights should still take precedence. You may have noticed this shift in the harm reduction approach to drug use, which refers to people as “those who use drugs” rather than “addicts”. Terminology like “addict” is dehumanizing, reducing the person to their disease and obscuring any life they have outside of their drug use. The person-first language prioritizes their existence as a reminder that when we make decisions about how to deal with drug use, we are talking about people and their behaviors, not inanimate objects.

But there are other contexts where person-first language is not the preferred approach. Autistic activists, for instance, prefer identity-first language, because, in the words of the Autistic Self Advocacy Network (ASAN), “we understand autism as an inherent part of a person’s identity”. Autistic people don’t want you to forget what they are – it’s not a disease that can be cured in them, like drug addiction, but an aspect of their personalities, and person-first language implies that it is separable and, ultimately, “fixable”.

Using the right words in these situations is a matter of respect, but it’s also a matter of signaling. Saying “a person who uses drugs” implies that you understand the principles of harm reduction. Saying “an autistic person” signals that you won’t donate money to Autism Speaks, an organization that seems to take advantage of familial anxieties about autism. In context, these word choices are not just about being respectful in the presence or in the spirit of the people they describe, they’re also about indicating where you stand on an issue. And that’s where semantics really start to grind people’s gears.

The flip side of signaling is gatekeeping. By using a word with layers of implicit meaning, you build a barrier to entry around yourself. You may be happy to learn that person-first language is humanizing when talking about disease, but you may also be frustrated to learn that by not using that language you could be perceived as insensitive or uncaring. The onus is on you to educate yourself, but if you didn’t know that person-first language could even be an issue, you might understandably feel that you’ve been placed on one side of the battle line without doing anything at all. Battles over semantics have an “if you’re not with us, you’re against us” mentality. If you know, you know; if you don’t know, it’s treated the same as knowing and choosing not to use that knowledge. For people who don’t keep up to date with these kinds of social issues, semantics become a threat to their ability to even begin to engage with them. Boomers especially feel that if they use the wrong language, they’re going to be maligned before they’ve made themselves known. The signaling has overtaken the content.

The problem with that line of thinking is that these semantic distinctions exist for a reason, and the people they are meant to guard against like to use subtlety as plausible deniability. Woke language isn’t there to catch 65-year-olds in a “racist” trap; it’s there to make it clear that the context it is being used in is distinctly not one of fascism or white supremacy. Well-meaning baby boomers are the dolphins caught in the tuna nets left for actual racists – people who actively oppose structural changes to promote equality. White supremacists love dog whistles – subtle cues in language that signal the bigger context – because they want people to get used to their way of thinking. Let’s take an example from back in the day. In the 1950s, Americans weren’t walking around talking about the benefits of the Aryan ideal. But they were saying things like “Gentlemen Prefer Blondes”.

Even explaining the blonde beauty standard falls into pernicious white supremacist mythology. A Teen Vogue article trying to unpack the myth said:

The connection between beauty and spirituality took a more prescriptive turn in the Middle Ages with the rise of Christianity in Europe. “Now [women] were exhorted to appear pure and virginal, forever young,” writes Mark Tungate in Branded Beauty: How Marketing Changed the Way We Look. Light features, like blonde hair, blue eyes, and fair skin, were believed to be physical manifestations of “the light of God.” Starting around the 15th century, “colonizers went to Africa, Asia, and Latin America and introduced the idea that whiteness is good, that nothing is better than white,” Adawe says. “If you were white, you had better economic well-being, you had good employment and education attainment.”

How White Supremacy and Capitalism Influence Beauty Standards | Teen Vogue

Oh, I see. We used to think the Aryan ideal was preferable because Christianity told us so, and that became colonialism. But now we’re enlightened and post-colonial, so we don’t have to think that way anymore? Here are a few big problems with that argument. Christianity isn’t a European religion, so why didn’t that Aryan beauty standard exist in other Christian regions, like Eritrea or Egypt? Moreover, the association of blonde hair, blue eyes, and pale skin with one another is an entirely modern construct, and treating these features as representations of virtue, while it might have some conceptual links to the Middle Ages, is not part of medieval European Christianity. Finally, 15th-century colonialism wasn’t the first time Europeans encountered dark-skinned people, and they didn’t impose a hierarchy on them because of this ideology. The ideology developed as a way to justify the hierarchy that colonizers wanted to impose, in order to subjugate native peoples, use them for slave labor, and take their land and resources.

So, even though this Teen Vogue article is trying to dispel the myth of white beauty, it’s feeding into exactly the same foundational myth. It’s hard to break free of the subtle white supremacy ingrained in our society, even in seemingly innocuous frivolity like “the most glamorous musical ever made”, without being really explicit about what you’re doing. The point of being careful with language is not so much to signal that you know who your friends are, but to actually do the work of changing the implicit biases baked into our social thinking and, as a result, our language. Signaling helps with that – reinforcing this behavior in other people, pointing it out to others, and explaining it all make that effort widespread. But gatekeeping does nothing to help the effort. I want people who don’t follow social trends to know what the words they use without thinking actually mean. I want them to be the biggest advocates for changing that behavior. And I want them to know a white supremacist when they hear one talk.