The difference between a real explanation and a curiosity stopper.
An enormous bolt of electricity comes out of the sky, and the Norse tribesfolk say, “Maybe a really powerful agent was angry and threw a lightning bolt.” The complexity of anger, and indeed the complexity of intelligence, were glossed over by the humans who hypothesized Thor the thunder-agent. To a human, Maxwell’s Equations feel much more complicated than Thor.
*Taken from Less Wrong, with light editing.*
The human mind has a special module for simulating other minds. We needed it to understand tribal politics — keeping track of friends and enemies, knowing whom to trust, and so on. That module lets us unconsciously simulate anything as a mind with its own desires and goals, whether it’s Thor, water “wanting” to flow downhill, or the Coca-Cola corporation “deciding” what to sell.
When we hear an explanation involving an intentional agent (that is, someone or something that acts with an intent), we use that mind-simulating module. It’s unconscious, so we don’t realize how complex “Thor” is. In general, explanations that invoke intentional agents feel simple, and feel like a very likely explanation, even when they’re incredibly complex and incredibly unlikely.
I can’t tell if this flaw in human reasoning is immediately obvious. If it’s not, read this article from my favorite philosophy of science blog, then come back for the magick discussion.
I’ve been reading about servitors because I’m thinking of renaming “systems” as “universal servitors.” (I’m also considering “ethereal software” and “intelligent forces.”) The articles I’ve read describe servitors as intelligent, causative agents. Essentially, servitors are minds. You make a servitor by focusing your mind on what you want the servitor to do, imbuing it with life, and sending it out to do its job.
Say that out loud and you’ll feel like “How do servitors work?” is a question that’s already been answered.
But try to break each step down into its constituent parts, then simulate that all in your mind, like you would a series of chess moves, the operation of a car engine, or the execution of a piece of software. I can’t do it. I can’t go from “the servitor is an intelligent agent” to a step-by-step explanation of what it does, any more than I can go from “Thor is angry” to Maxwell’s Equations.
Invoking a mind produces a curiosity-stopper, rather than a path to a systematic explanation of how magick works.
Does that matter? Well, if you just want to produce magickal results using standard techniques, then a curiosity-stopper is fine. But if your goal is to understand how magick works under the hood and create a magickal equivalent of Maxwell’s Equations, then you need to be hungry for real answers, not fake-satisfied with a curiosity-stopper.
Note: Quick post today since I’m working on a series on the essence of direct magick, which hopefully starts next week.