September life-drawing: not safe for whose work?

[Image: three new life-drawings by Paul Watson]

Three life-drawings from last weekend. Model’s pronouns: they/them.

I managed to fit in a life-drawing session this past weekend; the three drawings from it are displayed above.

As you can probably tell, I was unsure of the best medium or style to work in that day, but I was generally pleased with the results, and the session let me convince my nagging brain that I’d been “productive”.

As usual I posted the life-drawings to Instagram and, fairly unsurprisingly, Instagram once again flagged my account as “ineligible for recommendation”, meaning that it wouldn’t be suggested or visible to non-followers in functionality like “explore”, “search”, “suggested users”, “reels”, or “feed recommendation”.

This has happened before, and I managed to resolve it by deleting a small number of life-drawings (despite the fact that non-sexual nudity in artwork such as drawing or painting is allowed under Instagram’s terms and conditions).

What was surprising was that this time when I half-heartedly clicked the “appeal this decision” link — without deleting or changing anything — Instagram agreed with my appeal, and my posts were once again “eligible for recommendation”. This has never happened before!

Update, 18 October 2023: Instagram have now flagged exactly the same post, which they previously manually reviewed as OK, as “against our guidelines on sexual activity or nudity”. Once again they have flagged my account as “ineligible for recommendation”. Once again I clicked the button to appeal the decision…

I’m guessing that an algorithm mistook my life-drawings for photographs and flagged them as breaching Instagram’s terms and conditions on nudity. As mentioned above, Instagram explicitly allows non-sexual nudity in drawing or painting, so this is clearly not a particularly good algorithm.

Maybe, just maybe, there was a real human being reviewing it this time?

The whole problem — and this is a wider problem than just Instagram — is about US corporate culture. Actually, no, it’s US culture in general. It’s present here in the UK as well, but not in quite such a puritanical form.

One problem is the misleading phrase “Not Safe For Work” which, for brevity and out of common custom, I’ll henceforth refer to as NSFW.

NSFW, as far as I can remember (and a quick search seems to support this), was a ground-up invention on the old internet forums: a courtesy people used to alert others that a particular link might not be the most appropriate thing to open on their screen while their boss was standing behind them.

But NSFW is not a universal constant.

If you work in an art gallery, then what counts as “safe for work” — whether you’re a security guard wandering around the galleries where work hangs on the walls or is stood on plinths, or a curator or researcher going through the gallery’s archives — is going to include drawings, sculptures, and paintings of nude people as part of normal day-to-day work. Similarly if you’re the cashier taking payment for postcards of the gallery’s artwork in the little shop near the exit.

If you work with any sort of scholarly medical content (as I sometimes do in my day-job) you’re going to see photos of naked people, including close-up photographs of human genitals, often displaying symptoms of some disease, as a regular part of your work.

If you work as an artist (as I do) or as a doctor or nurse, then you’re probably going to see real people without their clothes on, not just images of them, as part of your normal work.

If you’re a sex worker (and we all agree that sex work is work, right?), then what you’ll see during the normal course of your work is going to stretch the definition of “safe for work” even further.

So, while the label NSFW was invented out of a common spirit of goodwill — added before a link as a courtesy to other people, to prevent them accidentally getting into trouble with their employers — it is thoroughly grounded in what counted as “not safe for work” for the typical early users of Slashdot, Fark, et al., namely people who worked in corporate IT.

The concept of NSFW was eagerly leapt on by HR departments everywhere — with all the grasp of degree and nuance commonly found in HR departments — and its definition instantly became as broad and draconian as possible.

This is why we’ve had all the usual debacles on all the obvious platforms, with everything from pictures of babies being breastfed to Michelangelo’s David being classed as “sexual content”.

Most platforms have made a U-turn on the two examples mentioned above because, in the eyes of all but the US far-right, they make the platforms look stupid and reactionary.

Some platforms have implemented a voluntary feature that a user can select to indicate that viewer discretion is required (which I always use where present). Nearly all of these features, however, wrongly conflate nudity and sexual content — especially when, as in the case of posting my life-drawings on Instagram, the initial “policing” of the platform’s rules is done by an algorithm that hasn’t been programmed to tell the difference.

There are two exceptions: Mastodon and newcomer Bluesky.

Mastodon offers users a way to flag their post with a content warning (CW), which hides the post behind a “reveal” feature and blurs any images unless someone who sees the post decides to reveal it after reading the CW.

Different Mastodon instances have different rules (or no rules!) on what should be hidden behind a CW, ranging from extreme pornography to “direct eye contact” in a photo.

The Mastodon user posting decides the text that is displayed on the CW (for example, I will typically type “Artistic nudity - life drawings”), which means it can be specific enough to let any other user make a clear and informed choice about whether they want to view the post (or, conversely, misleading enough to “trick” users into viewing something they’d rather not - in which case the “report” or “block” functionalities are your friends on Mastodon, albeit as an after-the-fact remedy).
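For the technically curious: a CW isn’t some separate moderation layer, it’s just a couple of fields on the post itself. Here’s a minimal sketch in Python of posting an image with a CW via Mastodon’s REST API (the instance URL, access token, and filename are placeholders you’d substitute yourself):

```python
import requests

# Placeholders: your own instance and an access token with
# "write:media" and "write:statuses" scopes.
INSTANCE = "https://mastodon.example"
AUTH = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

# Upload the drawing first; Mastodon returns a media ID to attach.
with open("life-drawing.jpg", "rb") as f:
    media = requests.post(
        f"{INSTANCE}/api/v2/media",
        headers=AUTH,
        files={"file": f},
        data={"description": "Charcoal life-drawing of a nude figure"},  # alt text
    ).json()

# "spoiler_text" is the CW shown to readers; "sensitive" blurs the
# attached image until the reader chooses to reveal it.
requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers=AUTH,
    data={
        "status": "Three life-drawings from this weekend's session.",
        "spoiler_text": "Artistic nudity - life drawings",
        "sensitive": "true",
        "media_ids[]": media["id"],
    },
)
```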

Bluesky currently offers three “Adult Content” options for someone posting: Suggestive (“Pictures meant for adults”, but presumably not including nudity), Nudity (“Artistic or non-erotic nudity”), and Porn (“Sexual activity or erotic nudity”). I presume there’s also a detection algorithm running that can auto-tag posts that it thinks the poster has not correctly tagged.
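As far as I can tell, those options end up stored as “self-labels” on the post record itself. A rough sketch of what that looks like when talking to the atproto HTTP API directly (the handle, app password, and post text are placeholders, I’ve left out the image upload for brevity, and the label values are my reading of the public lexicon):

```python
from datetime import datetime, timezone
import requests

PDS = "https://bsky.social"

# Placeholders: your handle and an app password from Bluesky's settings.
session = requests.post(
    f"{PDS}/xrpc/com.atproto.server.createSession",
    json={"identifier": "you.bsky.social", "password": "app-password"},
).json()

# A post record carrying a "nudity" self-label ("Artistic or non-erotic
# nudity"); "sexual" and "porn" appear to be the other two values.
post = {
    "$type": "app.bsky.feed.post",
    "text": "Three life-drawings from this weekend's session.",
    "createdAt": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
    "labels": {
        "$type": "com.atproto.label.defs#selfLabels",
        "values": [{"val": "nudity"}],
    },
}

requests.post(
    f"{PDS}/xrpc/com.atproto.repo.createRecord",
    headers={"Authorization": f"Bearer {session['accessJwt']}"},
    json={
        "repo": session["did"],
        "collection": "app.bsky.feed.post",
        "record": post,
    },
)
```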

Bluesky also offer Show/Warn/Hide options for what you want to see from other people, which include the three categories above and also: Violent/Bloody, Hate Group Iconography, Spam, and Impersonation (I’m presuming these latter four are detected by algorithms or user reports, since I suspect that very few spammers will agree to tag their spam with a “spam” CW!).

I’m hoping that the idea of graduated levels of what you might want to see in your timeline - as per the two approaches from Mastodon and Bluesky - gets adopted more widely.