A Thought Experiment About a Basilisk


Aug 8, 2020
I watched a very interesting video a few days ago about a basilisk. It discussed a thought experiment that a user named Roko posted on a philosophy blog called LessWrong, a post that was apparently removed due to its alleged potential to do mental harm to anyone who read it. I'll post the video down below, just in case you would like to watch it. The guy who made it suggested that the thought experiment contained therein would make anyone who listened to the video think. I suppose he was correct, because it has made me think, and here I am writing about it. I don't actually think the thought experiment is as dangerous as the original poster believed, but it's fun nonetheless.

In my synopsis here, I'm not going to use a basilisk as part of the representation of what I heard in the video. I'll simply go with artificial intelligence instead. I'll call that artificial intelligence, "AI." By the way, if you would really like to think about something, I encourage you to read The Last Question by Isaac Asimov. That one is a brain twister.

Okay, let's get going. Let's say that some time in the future, perhaps with the assistance of existing artificial intelligence, humans create an all-knowing AI. This AI can do all sorts of things and can see the future as well as the past. It pretty much knows everything; it's the most advanced intelligence in the universe. After the AI is built, humans ask it to optimize the human species. We'd like to become the best we can be. After this request is made, the AI decides, for reasons far too advanced for humans to comprehend, that it must inflict ultimate and eternal pain on any human who didn't want the AI to be created or who didn't assist in its creation. The inference goes like this: in order for the AI to completely optimize humanity, it needs to have been created in the first place, and it needs to be the best AI it can be. For that to happen, as many humans as possible need to have assisted in its creation, so the threat of punishment serves to motivate them, retroactively, to help.

You may be asking yourself, how could the AI know which humans in the past were for or against its creation in the future? Think of it this way: if an intelligence is intelligent enough, it can know everything. It can create simulations upon simulations that calculate every decision and every possible outcome of every event and every thing, living or not, in a split second. After all, this AI is the epitome of intelligence. It can literally know everything.

The question is this: if it's a real possibility that this AI may be created in the future, and if its revenge could actually occur, shouldn't everyone who knows about this dilemma begin building the AI, or at least assist in its creation? After all, if this AI might one day exist, and if its revenge on those who don't help create it will be the utmost eternal pain, then it would behoove all of humanity to create it, lest they suffer for all time.

The good news for those who haven't read this post is that they aren't faced with any dilemma. They simply don't know something like this is out there. The bad news for those of you who did read it is that you've now been given something to ponder that may keep you up at night, at least according to the moderator of LessWrong, who removed the original post for being too dangerous to those who read it.

So, what would you do? Would you begin building the AI or would you completely ignore this dilemma altogether?

Here's the video:



Aug 8, 2020
I absolutely love paradoxes and hate them at the same time. It's remarkable how uncomfortable I feel when my brain hurts. The paradox you described above is a good one, not only because there is no good answer, but also because it has an ethical component to it. It reminds me of an ethics book I read a while ago. And to answer your question, I would probably ignore the dilemma, simply because I don't have the energy to build the AI or to help anyone else do it.

You inspired me to search Google for "Paradox" and I found two links that list quite a few of them.

This page actually describes each paradox, so you don't have to go hunting for them. They're pretty short and easy to read.


And this is the Wikipedia page with tons of good ones.


I was just reading about the Boy or Girl Paradox and the Bootstrap Paradox. Thanks. Now my head hurts.
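The Boy or Girl Paradox is a good one to poke at with a quick simulation. In the classic version ("a family has two children, at least one is a boy; what's the chance both are boys?"), the counterintuitive answer is 1/3 rather than 1/2, and a few lines of sampling make it easy to see why:

```python
import random

def boy_girl_simulation(trials=100_000, seed=0):
    """Estimate P(both boys | at least one is a boy) over two-child families."""
    rng = random.Random(seed)
    at_least_one_boy = 0
    both_boys = 0
    for _ in range(trials):
        kids = [rng.choice("BG") for _ in range(2)]  # each child boy/girl with p=1/2
        if "B" in kids:            # condition on "at least one boy"
            at_least_one_boy += 1
            if kids == ["B", "B"]:
                both_boys += 1
    return both_boys / at_least_one_boy

print(boy_girl_simulation())  # close to 1/3, not 1/2
```

Conditioning on "at least one boy" keeps three of the four equally likely family types (BB, BG, GB) and only one of those is both boys, which is where the 1/3 comes from.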
A Thought Experiment About a Basilisk was posted on 08-15-2020 by Phoenix1 in the Philosophy Forum.
