Fun thought-conclusion I had over winter break: If something will come no matter what, then make it come in the best way possible.
Roko’s Basilisk
Roko’s Basilisk is an internet “myth”, in essence, but it points to a deeper truth if you don’t straw-man the argument.
For those who don’t know, Roko’s Basilisk is a thought experiment about what an ‘info-hazard’ would be– knowledge that brings harm simply by being known. The argument (in what I believe to be its most steel-manned form) is: There will eventually be an all-powerful (artificial) intelligence sometime in the future, between now and the heat-death of the universe. This entity might retroactively punish, in retribution, those who did not help bring about its existence (e.g. by harming their descendants), including those who merely knew of it but did not act to aid its development. As such, mere knowledge of this argument means one must work toward the development of this entity or face ‘infinite’ punishment, which in turn makes the entity more likely to be brought into existence.
Notable similarities to:
- Pascal’s wager: All people should be religious (a “believer”), because the expected value of choosing “non-believer”– a “very unlikely” probability multiplied by “infinite damnation”– is still negative infinity.
- Religion as a whole as a motivator: when you die, you will face ‘infinite’ punishment upon some event-trigger by some god-like entity. The only way to instead gain ‘infinite’ reward is to profess faith, which in turn grows the religion, leads others to the same conclusion, and generally spreads this ideology, even at the cost of others.
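The wager’s arithmetic can be sketched in a few lines (a toy illustration– the probability is a placeholder, and floating-point infinities stand in for ‘infinite’ reward and damnation):

```python
# Toy expected-value calculation for Pascal's wager.
# p_god is an illustrative placeholder, not a claim.
p_god = 1e-9  # "very unlikely"

# float("inf") stands in for an infinite payoff; any nonzero probability
# times infinity is still infinity, which is the whole trick of the wager.
ev_believer = p_god * float("inf") + (1 - p_god) * 0       # infinite reward if right
ev_nonbeliever = p_god * float("-inf") + (1 - p_god) * 0   # infinite damnation if wrong

print(ev_believer, ev_nonbeliever)  # inf -inf
```

However small you make `p_god`, the infinities dominate– which is exactly the structure the Basilisk borrows.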
In fact, Roko’s Basilisk is so similar to cults and afterlife beliefs that it can be considered a modern-day ‘techno-cult’– in the same vein as belief systems that compel their adherents to devote their lives (and perhaps the lives of others) to them.
Thus, if you don’t just dismiss it as intuitively ridiculous on the spot, it becomes quite hard to refute logically. After all, people have used similar arguments in defense of religion before, and if this argument were so easily broken, then theological defenses built on the emotional weight of damnation for sin could not be so effective and widespread.
My solution:
This dilemma has been fermenting in the back of my mind for a couple of years now, and I feel like I finally have a good solution to it, thanks to shower-thoughts and general anxiety about the future.
Roko’s Basilisk depends on the idea of a zero-sum game: the notion that for it to gain something, I have to lose something. This may be true, but it may also be false– we are, after all, speculating about the future.
And as always in these situations, turn it into a positive-sum game: is it not possible for us both to gain, rather than netting out to zero or a loss?
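In payoff terms, the reframing looks like this (hypothetical numbers, purely to make the two framings concrete):

```python
# Hypothetical payoffs (humanity, ASI) under each framing.
zero_sum = {"humanity": -1, "asi": +1}      # its gain is our loss
positive_sum = {"humanity": +1, "asi": +1}  # cooperation: both sides gain

print(sum(zero_sum.values()))      # 0
print(sum(positive_sum.values()))  # 2
```

The Basilisk’s threat only has teeth in the first table; the whole point of the solution below is to aim for the second.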
Ouroboros’s blessing
inevitability: “the quality of being certain to happen.”
I believe that artificial superintelligence is possible (it can happen) and plausible (it likely will happen). Then, if I know that it will happen, but not the means by which it will happen, I should act accordingly: If it will come, make it come as best as possible.
I want the means by which it happens to be most beneficial to me. And since I am an emotionally functional human being, on the scale of humanity I want it to be beneficial to the human race. In other words: make it come on terms most beneficial to all human beings.
Thus, once I know this, I know that such manipulation requires proactivity– I need to act myself, since such a state may not arrive on its own, and inaction risks a potential ‘infinite’ loss.
Thus, I devote my life to it, not out of fear of punishment, but out of desire to ‘have it be my way’– to have the entity be in a form most beneficial to us.
If the world changing event will come, make it come on your own terms, and since you are a man of humanity, make it come on terms most beneficial to all human beings.
Asides:
“devote my life”
Big words for someone on the brink of adulthood. But what can I do? This is the most likely time, and it is arguably the best time. This requires a whole post on its own, though.
disasters
“World changing event” can refer to something more local, too. Suppose you know in advance that some calamity will impact you (a surprising number of ‘regression stories’ involve this, as a way to motivate an otherwise deadbeat protagonist). You would prepare for it; you would act differently compared to how you would normally behave (without knowledge of the looming disaster).
Now, think of the probability of such a disaster occurring in your remaining lifetime. Don’t just speculate– humans are notoriously bad at that– just look at how history has unfolded in the past 20 years (hint: multiple disasters), the past century (hint: multiple calamities), and the current day (probably also tumultuous, unfortunately). It’s pretty darn likely that a disaster will happen in your lifetime. Then why not prepare for it?
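To make “pretty darn likely” concrete, here is a back-of-the-envelope sketch (the 2% annual probability is an assumption for illustration, not a statistic):

```python
# Chance of at least one disaster over n remaining years,
# assuming an independent per-year probability p.
def p_at_least_one(p_yearly: float, years: int) -> float:
    return 1 - (1 - p_yearly) ** years

# e.g. an assumed 2% annual chance, compounded over 60 remaining years:
print(round(p_at_least_one(0.02, 60), 2))  # about 0.7
```

Even a small per-year chance compounds into near-certainty over a lifetime– which is why preparation is worth it.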
Why are you not acting the way you would if you knew such an event will happen, eventually?
motivation
Well, for me, the answer to that is motivation. Thinking of the future is hard, and thinking of the hardships in the future makes me want to stop thinking, either cognitively or biologically (Haha).
The truth is, we need ‘micro’. ‘Macro’ is thinking about this big future; ‘micro’ is thinking about what we’re going to eat tomorrow– how our day-to-day lives will play out.
How can we optimize ‘micro’? This requires a whole other blog post on its own. (Coming soon!)
Some useful information:
- https://www.reddit.com/r/singularity/comments/1hvzipk/could_someone_explain_rokos_basilisk_to_me/
- https://en.wikipedia.org/wiki/Roko%27s_basilisk
- just google it. couple decent youtube videos on it too. let it distill via rumination.