What Susana fears, of course, is that the AI in question is a Smart AI. Smart AIs are banned on all worlds, and banned with good reason. The ability to improve one’s own programming, design, and hardware can quickly lead to an extremely intelligent being, able to change its own programming to remove such pre-programmed restrictions as the Three Laws, or even morality in general.
The captain is remarkably close-mouthed about the origin of his FTL drive (and also doesn’t let us into the room, ever), but it’s fairly obvious that he just stumbled upon it. Furthermore, while I don’t understand the underlying principles, the effects of the drive are far different from those of any present-day attempt at FTL travel. Susana makes the leap that the possible creator of said FTL technology is a fairly developed Smart AI – what else would be so intelligent?
One of the problems with Smart AIs is that, since they can improve themselves to be far more intelligent than even the most intelligent post-human, it is easy to imagine everything as being part of some greater scheme of theirs. Susana has noted that it’s entirely possible that a Smart AI built our FTL drive, gave it to the Captain, and is simply hiding on the ship until we can reach a destination where it can easily expand its computational power. Building an FTL drive and then setting course for a weak space-faring colony might simply be more efficient than expanding computational power in the Linked Systems – one might even expect that sort of logic from the AI.
Of course, the planet may not be quite what the AI intended, particularly since it’s carpeted with greenery rather than machinery. Unless, of course, this is part of the plan, and the AI knew this when we set course here. I doubt that, personally. Even if the AI is capable of creating an FTL drive, we can’t automatically assume that it is capable of creating FTL sensors. Susana pointed out that with FTL drive technology, FTL sensors are irrelevant, since one could simply build FTL probes to go and report back. But again, this is the problem with trying to read the intentions of a potentially unlimited intelligence – it potentially has unlimited abilities and no limitations, but assuming that doesn’t really help anything.
I want to subscribe to the view that no AI exists. However, I have too much trust and faith in Susana to be able to do that, in my heart of hearts. She’s older, she’s more experienced, and she is far more studied in this field than I am – it’s what she did before she decided to voluntarily exile herself into the science of life support engineering, after all.
My current solace is that dear Susana has busied herself with the sensor readings from the planet we are approaching, and has little time to indulge her fears. Alas, all systems are running smoothly, so I still have as much time unoccupied as I did during the majority of the journey, but now cannot spend it with Susana. I suppose the captain will want to land, though, and then I will be busy repairing all the damage done by the landing process. I should be treating these last hundred kiloseconds as my chance at relaxation, now that everything is prepared, but instead I sit and worry.
I suppose that if there is a supremely intelligent AI around, it could no doubt hack into my compad and read my private journal. If there is a supremely intelligent AI around, what would it make of my writings? Would it be concerned, or would it, in its infinite intelligence, also be infinitely understanding? Or would this writing be meaningless, since with its infinite intelligence it is fully capable of predicting my every thought already, constructing a perfect beta simulation from the last megasecond and a half of observation? Would anything I say, or write, or do be at all meaningful to it? I don’t know.
"The ability to improve one’s own programming, design, and hardware can quickly lead to an extremely intelligent being, able to change its own programming to remove such pre-programmed restrictions as the Three Laws, or even morality in general." Interesting that people don't want others doing what they do every day.