The Clubhouse Chronicles: What is Safety, and Can We Avoid Scaling Grudges?
While I had originally intended to write only a single definitive piece on Clubhouse, it has become too fascinating a platform, and more importantly a phenomenon, to let go of, so I’m starting a series of pieces on it entitled ‘The Clubhouse Chronicles’. The subject of this piece is intimacy, grudges, and how both are affected when a platform scales. Core to these concepts is safety, and how different communities define it.
A word that is often used in relation to Clubhouse is intimacy. How do we define it? If one were to consult the tubes of the internet, there are multiple definitions. The one that I think is most appropriate, however, is this:
a close, familiar, and usually affectionate or loving personal relationship with another person or group.
On the surface, it would seem inherently contradictory that a platform that more or less foists strangers into conversation with one another could meet that definition, but I think it’s the familiar situation of pandemic isolation that fosters intimacy on Clubhouse. Everyone, to a degree, has had their usual methods of socialization cut off, so isolation is the situation we are all familiar with, and that familiarity breeds intimacy with the individual members of the community.
Intimacy also tends to foster open, honest, and candid conversations. The early days of Clubhouse, when the user count was lower, fostered intimacy more easily, and while I still think intimacy is possible and even promoted by the platform, it has at least become different. The reasons behind this are intimately tied to platform governance. I’ve written previously on this through the lens of Trust between users and the platform, and now I’d like to talk about Safety, the other main component of a Trust and Safety department.
As with intimacy, the crux of the matter is how we define safety. In particular, when it comes to platform governance, we hear a lot about keeping users safe, or addressing situations where users don’t feel safe.
Before I dive into this, though, it’s important to acknowledge my somewhat unique position as of late. Politically, I’m somewhere between a left-wing market anarchist and a minarchist. When it comes to intersectionality, I’m a queer neurodiverse transwoman. At times I feel not unlike Captain Kirk in the episode where he is split into the ‘good’ half and the ‘bad’ half.
What does safety mean to me? It’s an environment where transphobia isn’t given a pass, where bad faith misgendering is called out for what it is, and a platform that won’t tolerate hateful individuals playing semantic games with ever-changing community guidelines. The big ‘however’ is that I also consider safety to mean a platform that isn’t naive about its political position, and one whose moderation doesn’t devolve into humans building bureaucracies of power over those they disagree with. Working in government relations, I have become intimately familiar with the creatures that these bureaucracies eventually morph into.
The inescapable conclusion is that for better or worse, platform governance is a bureaucracy of the technocrats.
The concept of safety on a platform is often rooted in emotions, which is why, as guidelines get more formulaic, safety on the platform tends to get worse. In particular, the phrase ‘I don’t feel safe’ seems simple on the surface but is a road to trouble paved in the sparkly glitter of good intentions.
I was conflicted about the phrase ‘I don’t feel safe’ and community-generated blacklists before transitioning. I am about 3000% more conflicted about them now.
I am very cognizant of the fact that these lists, or whisper networks, are often the only thing standing between predators and their potential victims. The allegations against a prominent cannabis activist were known for far too long before they were published; in the interim, his list of victims grew. Current predators in the cannabis and psychedelics industries will likely be shielded from justice for a long time, so these lists and networks are, again, the only band-aid that we have against the further infliction of trauma. They are valuable, and we need them.
On the other hand, a core component of the transphobia being whipped up by a well-known author and feminists who exclude transgender people centers on the statement ‘I don’t feel safe’, in particular bathroom panic around transwomen who haven’t had reassignment surgery. The concept of a society committed to safety can easily be turned on its head, like many other principles with good intentions, to ostracize marginalized populations. As such, I believe the tendency of platform governance to try to satisfy all groups isn’t tenable under a heavy top-down moderation approach like the one we see with Twitter.
Which brings me to Twitter and community-generated blacklists. In 2020, there are still people I’ve never interacted with who have me blocked due to group blocklists. Grudges from years prior effectively act as artificial walls to interaction. Blocklists tend to lack nuance (though not always): those blocked for inflicting serious trauma may be treated the same as those blocked for espousing a particular viewpoint.
It’s not really the existence of these blocklists that I take issue with, as we all have our internal ones. There have been accusations of defamation against some of these lists, but those cases are rather few at this point and limited to specific reputational damage. I think the question that Clubhouse has to answer is whether decreasing the friction of creating these lists, possibly in-application, is valuable or damaging to the platform.
My take is that community blocklists have the potential to be both an effective shield against abuse and a political weapon for creating large social stratifications on the platform at the behest of ideologues. The amount of friction around their creation on Clubhouse will determine which way they tend to fall.
Circling back to safety, there are obviously other viewpoints than my own. Some view safety through the lens of the Chatham House Rule: that their words won’t be taken out of context to score political points on Twitter or for some other purpose. This, in my view, has already happened on Clubhouse, so for some I think the platform is inherently unsafe. I occupy a fairly privileged position in terms of being able to speak my mind on social media, so I am not actually that concerned for myself, but for others in the community who may feel reticent or even fully silenced.
There is also the view that safety means never running into situations that might make one feel uncomfortable or unsafe. The distinction between comfort and safety is at the core of many philosophical arguments, so I don’t want to delve too deeply into it here; instead, I suggest the core question is whether platform users have a right to never be offended by the subject matter of a discussion. I speak in these broad terms because everyone has a different line, but it is important to underscore that situations such as harassers, or those who have committed actual crimes, stalking women go far beyond simple offense and are not what I’m referring to here.
In my opinion, a creator-first approach and the Burning Man ethos are not compatible with an entitlement to never be offended. Additionally, that entitlement is incredibly easy to weaponize, as transphobes have done to roll back transgender rights movements. As I said before, being ‘good at Twitter’ means threading the unspoken line of whose sensibilities take priority, and many of Twitter’s governance decisions are, in my view, entirely reactionary and incredibly biased in favour of those making up the platform elite sometimes referred to as ‘Bluechecks’.
Through the lens of safety, I expect that if I create a room to discuss a serious, controversial, or sensitive topic, I will provide safety to the platform’s users as a whole by clearly marking it as such. I also expect that the platform will provide me and my guests in the room safety by not allowing trivial weaponization of platform governance, in the form of someone mashing the report button while saying only ‘this discussion makes me feel unsafe.’
To summarize everything I’ve said: the platform has to define both a broad Overton window and the Overton windows that creators are able to create within it. In that process, if they are going to be legitimately able to say that they operate on a creator-first ethos, they must also put in safeguards to prevent ideologues from performing a smash-and-grab through that window.