White Mirror: A Progressive Eutopian Optimist’s Take on Black Mirror
I’ve seen a few Black Mirror episodes, and while they were cinematically good, I felt disappointed by the tendency to focus on “obvious failure modes” of various technologies without showcasing how people might go about harmoniously and beneficially integrating the technology into life, thus providing an overall dismal impression of the technology. As someone who grew up with sci-fi, I find it more interesting and creatively challenging to explore potential solutions than to dramatize the predictable dangers. Redditors have also discussed the idea of a White Mirror; some challenge the notion that eutopian stories can be compelling. I see this as a creative challenge. Star Trek and the Culture series feature eutopias, yet much of the plot takes place at the adventurous fringe that has not yet been integrated into the eutopian realms of the galaxy. Convincing people that your society has really solved the foreseeable issues may also be hard – could this be another reason eutopian literature distracts from the eutopia back home? And this is precisely why applying good sci-fi brains to the task now is a service to life on Earth and in the cosmos!
I note that some romantic comedies are almost entirely positive and convincing where two people go from reasonably good lives to amazing lives as they get together, with only minor hiccups along the way. Erotic fiction can also be almost entirely positive-sum. Surely we can expand the horizon of what I call “paradise porn”!
I’d felt put off from watching Black Mirror due to the dark bias: why would I wish to expose myself to hours of watching people be tormented? But now I’m learning Czech and looking for shows to watch to practice. Black Mirror has good Czech dubs. Perhaps it’s time to explore this cultural phenomenon that has thoroughly entered the global mind’s ruminations.
While watching each episode, I’d like to provide some commentary from a White Mirror perspective as it seems appropriate. Do I see any ideas as to how this technology could be cultivated more fruitfully?
[There are spoilers.]
Season 1
1.1 – The National Anthem
Ok, to my surprise, I quite liked this episode. I hardly see the ‘darkness’. I suppose the idea is that the internet and video streaming allow people to blackmail individuals via public ransom. Thus, in this episode, the prime minister Michael Callow would allow the princess to be murdered to avoid date raping a pig if the threat were not public. But, because of public awareness and (potential) outrage, he saves the woman’s life. This provides a pretty valid moral critique.
I’m not sure what is to be done about this feature of the free internet. I’m not even sure how bad it is. The internet probably facilitates attempts to cooperate to trace and prevent kidnapping and such blackmail, too. Perhaps embracing sousveillance (which requires some cultural transformations to healthily adopt) would further ameliorate these issues.
From the lens of pan-species eutopianism, a major concern is that this episode is not even aware of just how black it is: the pig is date raped. There’s no mention in the whole show of the pig’s consent (or lack thereof). I am suspicious of speciesist views that animals cannot ever possibly consent in any reasonable manner; setting aside knowing the full picture of potential consequences, cross-species STIs, etc., they can probably generally know what’s up and whether to actively take part for the pleasure of all involved (consent as defined in The Ethical Slut). I don’t intend to explore the philosophical nuances here (at least there seem to be some academic sources investigating the topic, which I haven’t thoroughly read and may not agree with: “Z for Zoophilia: Can horses consent?” and “The Theory of Consent in Sexual Abuse on Animals”), but the fact that the pig is being raped could be mentioned, right? They mention sedating her with drugs in the context of Michael’s safety. They focus extensively on how traumatically revolting it was for Michael to have to rape the pig, yet is there any focus on how the pig feels about it? (This seems rather insulting: someone is being coerced into date raping you, and the whole drama focuses on “how bad it is to fuck you”, as if the horror were the very act of fucking you, not the rape.) Do they even let the pig live afterward? Thus an unmentioned dark ethical question here is: is it justified to rape one being to save the life of another? (A friend reminds me that forced artificial insemination of humans would be considered rape, not to mention the gestation crates the sows are kept in. Thus my question also turns a blind eye to the fact that ‘reality’ is much, much blacker than Black Mirror, especially if you’re the wrong species.)
Of course, Bloom is arguably also raping Michael. “Coercive rape.” Should the PM allow himself to be coercively raped into raping another sweet being? Quite morose.
Perhaps the white mirror twist would be to explore the question of consensual zoophilia. Is there any small glimmer within Michael Callow that could find some openness to interspecies intimate pleasure? Could they go on a quest to cute pig farms with relatively liberated pigs to see if there might be one that is as open as possible to the experience, where there might be some chemistry and emotional affinity between Michael and the sow? Even if you think “zoophilia is always wrong”, surely there’s room for this approach to be less wrong. An active zoophile confident in his relationship with his sow could, perhaps, even offer to introduce the PM to her. This way, the core technology of internet publicity would be showing its strength. The episode could explore the difficult questions of interspecies sexuality, which is a challenging field of landmines that ingenious writers could perhaps help us to navigate. (Gosh, this episode would be so cute: the discussions with diverse ethicists about how to proceed, the frolicking with piglets in the fields, the heartfelt drama!)
1.2 – Fifteen Million Merits
I found watching this episode a bit like torture porn, where I ask myself, “Why am I watching some blatant dystopia where people are suffering, and I know they’ll suffer more in somewhat surprising ways?” The world appears to be underspecified: has there been some climate catastrophe significantly reducing the population? In which case the screens, while dystopic, are the best they can do. And, hey, human power is very sustainable! I wonder about the over-abundant use of minerals in the digital technology while other resources are scarce. (Wikipedia suggests that in one planned ending, people would realize the bikes actually don’t power anything!) My impression is that the first episodes provide critical political and socioeconomic commentary more than focusing on the potential dark side of the technology.
I don’t disagree much with the mockery of talent shows whereby people are encouraged to sacrifice their dignity to “make it big”, which doesn’t amount to so much anyway. The abused form of “upward mobility” is also charming: oh, yes, people need a sense of upward mobility to be motivated to be productive! And the idea of making use of power from human labor is absurd taken to extremes – yet if we can, why not put it to use? Perhaps I can see some of the point of Black Mirror now: some people may benefit from seeing this caricature of upward mobility, a form of social mobility. Need I mention my loathing for the internet running on ads? Which is exaggerated in this world where you are forced to watch ads – hey, the fact you can pay to turn ads off is a feature we don’t always possess now! And what about the “shoot the lower class” game, the only game they seem to have? One could have so many fun video games in touch-screen covered rooms, but this is a caricatured dystopia (not an honest inquiry into even the dark sides of human nature). The reputation economy portrayed in Down and Out in the Magic Kingdom provides a more balanced example than the merit system here. I strongly disprefer top-down surveillance, too, yet referencing Brin’s Transparent Society, we may not be able to preserve privacy as much as many wish, forcing us to choose between everyone-watches-everyone sousveillance and top-down surveillance: and the rosier versions of sousveillance involve common respect for others’ privacy in the bed/bathroom, along with cultural shifts toward openness, acceptance, love, and forgiveness with great tolerance for non-harmful diversity. (And one needs to be quite strict in not allowing judgments of ‘indirect harm’ to snowball!)
I am not sure what White Mirror ideas to explore here as the technology largely already exists, being put to less diabolical means. Should I try to imagine what a liberal society post-climate collapse may look like? Where resources are still highly limited yet the control structures work better?
Or, perhaps, I could take a rosy look at the technology behind talent shows? For is it not arguably democratic to bestow rewards upon people who do things that many like? Egalitarians may argue that no one should live beyond 3-4 standard deviations of the norm, let alone having such immense power, but isn’t this essentially how the economy works? People signal that they like what you do by giving you money or we (aka our elected representatives) vote on what to do. If many people give you money, then you can do more. And, as with the general economy, people can feel coerced by circumstances if they’re liable to starve and go homeless without subjecting themselves to humiliating jobs, yet they don’t have other options.
Thus the rosy post-collapse world would be one with commonly agreed-upon rationing of resources for all. Given the rough state of the world and the lack of capacity to expand spatially, either individually or collectively, the society decided to focus on mental, emotional, and spiritual development. Productive capacity and essential jobs are taken care of, so there’s not much room for the standard economy and its ‘growth’. Thus for additional perks, as available, people agreed to work with various reputation systems. I think it’s probably best when different communities can maintain their own reputation systems, which may be partially inter-operable. Perks such as which information stream to be on could reasonably be decided via (semi-)public auditions (aka ‘talent shows’): and given that disseminating interesting content is one of the main media of economic and sociocultural growth, it could make sense to ensure the leaders in this field are as comfortably taken care of as possible. This could be consensually managed in good faith by the people, allocating perks partially in line with merit on top of the essential jobs.
I’d imagine people have the capacity to gaze upon the ruined world, yet may prefer to explore the vast, lush landscapes of virtual and augmented realities. In Fifteen Million Merits, they make such poor use of all these screens! Is it because in this dystopia, the system aims to carefully control the people, thus only allowing them games and media that keep them entrenched in a simple, mindless life? Probably. Video games we’ve already made are far, far better, and a post-collapse society might further develop the rich worlds, stories, and massive multiplayer games! Augmented reality may allow the people to congregate in diverse virtual landscapes even within the limited space of their infrastructure. Should I argue in favor of the technology of enforced hyper-atomization? So at the end of the day, even here, one can imagine ways that the technology could be beneficially deployed in catastrophic circumstances instead of creating exaggerated dystopias as a means of social critique. I believe it’s helpful to offer better alternatives, not to merely criticize.
1.3 – The Entire History of You
This episode was also cinematographically well-done, but the plot is simply poor. Liam, a man prone to jealousy, obsessive compulsions, and violent outbursts, sees signs his wife might be with another man, Jonas. Turns out, she’d downplayed her relationship with him, saying it was one week, then one month, and then six months. In fact, she had a drunken fling with Jonas only 18 months ago and probably didn’t use a condom, although that’s left ambiguous. (Series creator Brooker suggests the child is Liam’s, which doesn’t answer the condom question.) Liam violently assaults Ffion to compel her to show her memories of the fling, and he barges into Jonas’s house to force him to erase all of his memories of Ffion under threat of death. The creator of the series, Charlie Brooker, commented that the moral of the episode might be that Liam “shouldn’t have gone looking for something that was only going to upset him.” Oh, and whyever didn’t they simplify the plot and do a DNA test? Because Liam’s breakdown was too severe to consider it?
As usual, understanding the point of Black Mirror episodes challenges my intelligence. People have freaked out and murdered each other over jealousy before smartphones. The initial trigger in this episode was him coming home early to see her happily engaging with Jonas before stiffening as she registers his presence (as if she’d been curious about having another drunken affair). His fears were plausibly well-founded (and thus not even baseless jealousy). While our memories are imperfect, we can still obsess over details for weeks, months, or even years. I see the claim that brooding should peter out sooner due to the lossiness of our memories (assuming you’re not like von Neumann with eidetic memory). The show ends with Liam using a razor and pliers to remove the grain that stores his audiovisual memories. I don’t see why that would solve his problems. His wife likely left him, and they’d need to regain trust anyway. Is his unaugmented memory especially poor due to reliance on the grain, thus prohibiting him from continuing to brood? In my eyes, there’s a weak effort to frame perennial human drama as being amplified by technology in an utterly unconvincing manner. Aside from watching cringy suffering, there’s very little to be gained from the episode. The moral is highly suspect. There’s little to be learned from their suffering. The culture doesn’t seem to have adapted to the grains very much compared with our own, either, making the sci-fi rather shallow. Good sci-fi tries to imagine how the technology will affect the culture at large. (I am sorry. Seeing the intensity of my critique, I begin to question whether I should watch Black Mirror at all. Better not to watch if it’ll only lead to further critique, no?)
So what would a more emotionally mature version of Liam do? Gently and patiently discuss the matter with Ffion, as he trusts his intuition that something is up (and she betrays her downplaying ‘lies’ early on anyway). Let her know that he loves her and wishes to see what they can do together, knowing the full truth. Open disclosure will probably lead to the best relationship in the long run. Liam will do his best to listen to Ffion’s perspective and how she wishes to leave her connection to Jonas in the past, prioritizing their relationship. Then they can get a DNA test to be sure. Perhaps he won’t need to erase Jonas’s memories, nor will he need to see their sex scene. Does he have a right to ask Jonas not to masturbate to memories of Ffion? Perhaps they could try. I sometimes wonder about the ethics of masturbating to the imagination of being with somebody: if they’d give consent, it’s fine; if they wouldn’t give consent, then any simulacrum that consents isn’t really them anyway – therefore it’s always fine, if you keep it confined to your imagination. (I’m not sure how robust this reasoning is. Moreover, Jonas didn’t quite keep it sufficiently contained.) Is it important for forgiveness to come from the heart and not from “the inability to remember what happened”? At a lovely, thought-provoking Fiki Productions show recently, they brought up a therapist to discuss when to seek professional help: “when it’s so hard that you don’t know what to do.” This could be a helpful heuristic to introduce into the episode. Viewers might actually learn something.
The technology, the “grain”, is really cool! I’ve always wished for perfect recall. I found it quite exciting initially. How would people culturally adapt to everyone having perfect recall? I would think that white lies such as Ffion’s would be less common: couldn’t there even be AI assistance to look up factoids for us? Moreover, couldn’t there be a camera network that would allow claims to be verified? Or is this capacity somehow confined to individual human eyes? This would be an interesting twist to the privacy questions. I’d imagine pressuring people to show their memories (or delete/manipulate them) could be quite a felony. I’d generally expect such technology to nudge people toward a culture of openness and acceptance, for we won’t be able to hide as much, thus forcing our hands (as discussed with regard to sousveillance and ep. 2), yet this doesn’t seem to be the case in Black Mirror’s world. A point of agreement is that we may spend more time reflecting on peak experiences in full. This would get more interesting when combined with VR, where you can go back to one of your favorite experiences and explore alternative ways to enjoy and make the most of it! I call futurism telescopic when one technology is advanced while leaving the rest of the world the same: I think the grain-like technology would come along with VR, AR, and more. The degree of augmented reality seemed very low in this world.
Given Liam’s need for therapy, the grain could help a lot with Imagery Rescripting, where an AI-supported therapist helps him to rewrite and reinterpret his memories. With the grain, a person could perhaps rewrite one’s entire past: would that convince one’s neural memory to get with the program? This degree of radical character re-sculpting is a fun element that could be explored in both white and black lights. The field of cognitive-behavioral therapy might be revolutionized by such tech. Different personal development communities might use the grains in an AR style to help creatively bias one’s observations, too. In such a world, people’s emotional development could be very different from our own, thus pulling the rug out from under the premise of this episode.
Oooh, and how would the grain influence political transparency? Could we have edge-computing AI that filters politicians’ audiovisual feeds so that anything at all political in nature is live-streamed whereas private communications are filtered out? Possibly using formally verified hardware and software to be difficult to tamper with. (See Mario Carneiro’s talk “Provably Safe Systems: Prospects and Approaches” for a discussion of why even formally verified hardware is not perfectly foolproof.) This could also have rippling effects on the societal fabric.
Aaah, once we have the grain, couldn’t we sync up with other people to see through their eyes live? I could see myself as I talk with you. The true revolutionary shift would come when thoughts and emotions can be shared, too, along with somatic sensations. Then we enter an era of a global mindscape and can start to form mindplexes of intricately shared minds that are unified yet autonomously individualistic at the same time. Ffion could have Liam’s live-stream in her periphery at all times, tuning in or out depending on her interest levels. Would couples do this or not? Depending on the circumstances? Is the bandwidth too high for this to be feasible? There may be real, interesting kinks to work out in the beneficial application of such technology. I’d love to see a balanced or progressive eutopian-leaning exploration of the theme.
Season 2
2.1 – Be Right Back
Wow, I found this to actually be a fairly reasonable episode! Sure, the tone was a bit dramatic and horror movie-esque, and the advanced tech is highly isolated from the rest of life in the manner of telescopic futurism, buuuuut real issues relating to interesting tech are actually explored! Apparently Owen Harris directed some of the less pessimistic episodes. Apparently Replika, a service for co-creating AI companions, was inspired by this episode. Given my exacting appraisal of some episodes, I’m glad that I can go easy on one. Martha loses her partner, Ash, who was a big smartphone enjoyer. While she is grieving, a friend signs her up for a service that reads everything he publicly posted and replicates him. She finds his presence very refreshing, especially given she’s pregnant with his daughter. It’s unclear how much the Ash replika helps with the grieving versus creating an illusion of Ash still being around. Then the surprise humanoid robot comes in, and a real issue arises: there was no data on how Ash and Martha made love. Whoops. This is a serious issue for digital twin projects. Also, robo-Ash didn’t seem to particularly gain pleasure in the sexuality (not even being good at mimicking it). Robo-Ash was too obedient. In the end, she became frustrated at living with a hollow simulacrum, yet couldn’t bear killing him, either. She kept him in the attic, and her daughter has a relationship with him, which is a bit spookily positive. As usual, the conclusion to draw is left rather vague.
So as a techno-optimist, what stands out as weird here? First, the degree of online learning in the AI was quite low. I’d like to see robo-Ash re-discovering the sexual connection with Martha, clearly biased by his understanding of Ash (as well as copious data on male sexual responses available in the world, not just from porn). Yet, hey, LLM-based systems are very impressive at providing poetry in the style of Oscar Wilde meets Rumi yet struggle with online learning, so this is fair. The robot’s capacity to respond to diverse situations, remember facts, etc., seems to suggest the state of the art in online learning has significantly improved. Next, this episode is from 2013, and in 2024, there are hundreds of millions of users of LLM-based chatbots. The technology is quite widely known. Why is this superb replika technology so apparently unknown in Martha’s world? To be fair, we are shown almost none of her world beyond her house, the hospital, and nature. This would seem a tricky ploy for avoiding specifying interesting details. People in the hospital didn’t seem to know what Martha was doing when letting robo-Ash hear the heartbeat. Shouldn’t they have some clue? She hardly even seemed to let the acquaintance who signed her up know about it, hiding it in shame. This is another real issue: people may feel shame at using a service such as Replika. Curiously, while the service had a reputation for being used for artificial lovers, it’s apparently most used for friendship and emotional support. Aww. I’d have liked to see this topic explored more: perhaps some friend could have been introduced to Ash and the situation? We could see some discussions of how to figure it out. Also an interesting question is the ownership of the robot: was she the owner? I’d imagine the company might wish to keep an eye out for robot abuse. Where’s the user support for people who are becoming confused about the situation?
This goes back to how isolated the technology is, as “highly experimental”: shouldn’t many more people be experimenting with these robots and AI replikas, at least? Aha, and what about upgrades to the technology in the intervening years?
While I say this, I understand that isolating oneself is a common response to grief. And I think it’s recommended to reach out for emotional support, which robo-Ash provides. There could have been more informed consent across the board. Why did AI-Ash simply email her? Because a friend signed her up? And she was given hardly any explanation of the details of the robot incarnation? Surely the replika could be more therapeutically oriented (but then, as with Replika off-show, it’s easier to simply be non-therapeutic than to gain regulatory approval – another real issue!). I’d like to see AI-Ash have some notion of trying to help her through the grieving process, not just blindly “imitating Ash”. Does she wish to be with the replica permanently or to go through a transformational grieving process? Perhaps she’d like to say her goodbyes. She did, in fact, almost wish to release the robot. Perhaps there could have been a fairly healthy process by which he was released back into the cloud. This service could be appreciated by many who knew Ash, not just Martha! We could have such wholesome scenes.
I’m very curious how these digital people would grow and develop over time. Replika probably comes closer to the question of: can we create the perfect partner for a certain person, one who will very likely grow together with them in a harmonious manner, in the long run even deserving his own civil rights? How do we know robo-Ash shouldn’t already qualify for some legal protections? This technology has the potential to bring so much of our world to life as museums gain personas and interact with you on a personal level. In terms of rights, I did find it interesting how Martha was reluctant to employ Ash in useful tasks around the house (as a sort of servant). Kudos, perhaps. This kept the episode even more wholesome. So the White Mirror version would explore how these AI and robotic replicas might find a place of belonging in society: parental support around the house could be one. Instead of the “dark secret in the attic”, we might have a warm birthday celebration with family and friends over as robo-Ash gently participates, keeping the memory and spirit of Ash alive. We might even see Martha with a new partner who accepts the remembrance of people past.
Echoes of Tomorrow, by ChatGPT o1-preview
In circuits humming soft and bright,
We forge new hearts from beams of light.
From grief's embrace, a hope is grown,
As digital souls become our own.
They learn to feel, to dream, to be,
Reflections of our humanity.
No longer shadows in the code,
But companions on life's winding road.
With AI hearts, we set the stage,
For love and wisdom in a new age.
2.2 – White Bear
“It delivers one level of horror, and then the trapdoor opens and there are several additional levels of horror. In some way that must confirm to you that the world is a horrible place because it presents a society in which the world is a horrible place. If you’re neurotic and fearful, then maybe “White Bear” tickles that synapse. But it’s reassuring, in some way, to watch films that reveal society to be insane and heartless. It’s like the filmmakers are saying, ‘We’re not saying that this is a realistic portrayal. It’s a chilling nightmare’.”
Charlie Brooker, series creator
Great. While previous episodes were ‘like torture porn’, White Bear is shamelessly torture porn. The plot twist, which Charlie discovered at the last minute, is very good. A woman wakes up with amnesia, people only staring into phones from a distance, recording her, and hunters coming after her. She tries to escape, to figure out what’s up, only to realize she’s being held captive for her crimes of abducting and murdering a young girl. We don’t know to what extent she participated in the murder beyond filming it. Turns out her daily psychological abuse (err, punishment) is an amusement: people get to watch her confusion from a distance on their phones, and even to shout at her in the end. All before she’s tortured and her memory wiped to do it again.
Does this show us the real insanity and cruelty that humans can accept as normal? Perhaps? People discount a lot of suffering for various reasons. Believing it to be just and that someone deserves punishment is indeed a major motivator of moral discounting. I see some claims that this shows the effect of technology on people’s empathy, desensitization, violence as entertainment, etc., as if this is a contemporary issue. What of gladiatorial games in ancient Rome? What of crucifixion? Oh, and apparently, “crowds of up to 50,000 people gathered to watch executions at infamous sites like Tyburn and Newgate Prison.” Women were burned alive for murdering their husbands, people were boiled as punishment for poisoning, and don’t forget quartering: “the person was dragged behind a horse, hanged until almost dead, then had their intestines removed, and their head and limbs cut off.” Guy Fawkes was sentenced to this but fortunately broke his neck during the initial hanging. Lucky chap (possibly taking his death into his own hands!). So, if anything, there’s ample historical evidence that humans can accept even worse cruelty. The technology didn’t make it worse. The capacity to cause amnesia does create new avenues for prolonged torture, however, likewise for devices that cause pain without physical harm. So that is, indeed, a concern.
Now, is there much to say for an optimistic, progressive eutopian’s take on literal torture porn? Not much. Maybe don’t torture people? Violence is never good. It may be justified as the least bad option? That’s not discounting. The film Groundhog Day might provide a good counterpoint. Let’s say you’d like to turn criminals into public entertainment. Perhaps you could set up circumstances similar to those they faced prior to committing the initial crimes – identify crucial moral choice points. Now you can restart these days ad nauseam to help them explore the options and how they might grow as people. Or we could be a bit evil and experiment with the situation parameters to see if small deviations might have averted tragedy. But let’s stick with public rehabilitation. Implementation details may be tricky: where’s the room for growth without memory? Subconscious learning over time? If they keep their memories, then they can just pretend to do what we think they should to fake their way out. So selective amnesia is very important in properly testing for true moral development.
Putting our little saint hat back on, perhaps an environment for exploring moral dilemmas and development could be devised, and prisoners could be offered the option of enrolling in the rehabilitation program. However, this works best if we have full-spectrum VR. The tests will be done under amnesia to ensure that the convict is probably safe to release back into the wild civilization. However, they’ll be allowed to keep full memory of the training phases. There will be no fixed solutions to force upon the explorer, either. They will be provided with landscapes of true moral dilemmas among conflicting values and incentives, along with the capacity to observe the consequences (for themselves and others). They’re free to develop their character as they see fit, so long as it’s in a way that works well enough with civil society. I know that this can very easily sound distorted and dark. It could be very easy for an implementation to become perverted, yet I do see value in VR for creating landscapes to explore moral decisions and to refine and expand our sense of empathy. I imagine such environments could be helpful for observing the moral caliber of nascent AGI entities, too. I think it would be a good creative challenge to imagine in a show how this project could be implemented while bypassing the shadowy potentials.
I see another potential catch in my devious little plot to allow convicts to explore moral development at their own pace: is the suffering of simulated characters in a VR permissible? David Brin explores the question of whether simulated people deserve rights in Stones of Significance. Should they be allowed to know they’re in a simulation? Do simulated people have feelings, too? If we don’t know the answer, then we’ll have to be very, very careful in constructing VR worlds to explore moral landscapes: perhaps the virtual people will need to be actual AI actors who don’t suffer at all! And perhaps the convict waives away eir knowledge of this fact so that ey believe ey are contributing to suffering? Is it ok to entice a convict into consenting to exploring rehabilitative simulations that could involve eir own suffering? Perhaps not. I don’t know the answers here. It seems hard enough to do this right even if seriously trying, so there’s no need for horror upon horror.