TW: vague discussions of mental illness, specifically triggers.
Trigger warnings are one of the best ways for people to engage with sensitive content in a healthy way. They enable someone to create the safest space possible for consuming the material.
For some people, this means only reading upsetting materials when they have a supportive friend or partner nearby, to hold them and tell them it’s okay. For others, this means reading upsetting materials alone, because the presence of a partner will feel threatening and make the fear worse. For some people, this means reading materials that hit close to home with a partner in the house, so they won’t be tempted to self-harm, attempt suicide, abuse substances, or engage in other harmful behaviors. Others may need to be alone, because the strain of acting “normal” when they’re upset will grind the pain in worse, until they lash out.
For some people, this means avoiding upsetting materials because they can’t be consumed in a healthy way, and for others, it simply means picking the right time and setting.
See, triggers affect everyone differently, depending on really minute aspects of their lives and mental state. For some people, a trigger might set off a panic attack that leaves them unable to leave the house or get through their to-do list. For others, it might make them aggressive toward those around them who are unknowingly making the trigger worse.
Forewarning and awareness are a survivor’s best friends. When a trigger sneaks up on someone, it can be too late to back away before the detrimental effects have already set in. Content warnings actually enable people to deal with more traumatic material than they otherwise could, because they are able to do it safely. They also prevent consumer dissatisfaction, because they help consumers know up front what they’re in for.
Vague product warnings like “steamy material” or “too hot for your kindle” don’t provide the cues sensitive readers need to tell whether a book contains themes they actively try to avoid. If Amazon is truly interested in being all about the customer, then building better labeling tools should be one of its top priorities, not filtering out authors who use trigger warnings or censoring specific words. Is it any wonder that without these warnings, more people are blindsided, reacting negatively, and bending Amazon’s ear with direct complaints or bad reviews?
One of those tools could be a form allowing authors to label common strong themes, such as sexual violence, paired with a button on the site letting readers opt out of seeing content with those themes. This wouldn’t restrict a book’s visibility in lists, searches, and so on. It would simply be an optional filter, similar to the existing option that lets readers filter books by length when seeking shorter reads, or even just a general adult-content toggle like the one their competitor Smashwords uses.
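To make the idea concrete: the opt-out filter described above amounts to hiding any book whose theme labels overlap a reader’s personal blocklist, while leaving everyone else’s results untouched. Here is a minimal sketch in Python; the book records, theme names, and function are illustrative assumptions for this post, not Amazon’s actual catalog schema or API.

```python
# Illustrative sketch of an opt-out theme filter.
# The data shape and theme labels are assumptions for this example,
# not any real retailer's schema.

books = [
    {"title": "Quiet Harbor", "themes": set()},
    {"title": "After the Storm", "themes": {"sexual violence"}},
    {"title": "Long Road Home", "themes": {"substance abuse"}},
]

def filter_books(catalog, excluded_themes):
    """Return only the books with no theme the reader opted out of.

    Nothing is hidden from other readers' lists or searches;
    the filter applies only for readers who switch it on.
    """
    excluded = set(excluded_themes)
    return [book for book in catalog if not (book["themes"] & excluded)]

visible = filter_books(books, {"sexual violence"})
print([b["title"] for b in visible])  # ['Quiet Harbor', 'Long Road Home']
```

A reader who opts out of nothing sees the full catalog unchanged, which is the whole point: the labels only ever narrow results for the person who asked.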
Hell, this is a niche that readers already desire, and that other sites attempt to cater to on a small scale: a Google search for “clean reads” turns up a host of blogs, publishers, and groups that specialize in helping readers find books without such mildly offensive things as curse words. One such effort, the much-maligned app Clean Reader, came under fire last year for scrubbing swear words and offensive content from books entirely… without the authors’ consent. Authors came together to protest, declaring that their aesthetic could not be honored by replacing those words, and that even if a word made some people uncomfortable, it had been chosen for a specific reason (Does that sound familiar?). They removed their books from the retailers who supply Clean Reader in protest. But even though Amazon’s policy is far more destructive, arbitrary, and pervasive, Amazon holds too much power for authors to mount an effective protest, as they could against one fairly small app. So we are stuck with whatever Amazon says, no matter how unfair, problematic, or contradictory. And all of that is getting off the topic of trigger warnings as a needed classification tool.
Keywords such as “clean romance” and “sweet romance” help readers who prefer to avoid sexual content entirely build a community and a niche around that preference. Could you imagine if we were similarly allowed to label and build communities around fiction with other love-it-or-hate-it themes, too?
Put simply, Amazon’s practice is bad, in every way. Bad for consumers, bad for authors, bad for survivors and people struggling with PTSD, and bad for society as a whole.