12.21.2013

Journal of Susana Reiviki: Excerpt 1

I wonder about how colonial civilizations end themselves.  So far I have seen out-of-control biotech and internal strife, and now I have seen how a truly peaceful civilization can court disaster.

A bit of context, first, dear reader.  After revealing the lack of electronic activity within the FTL room, Maria, Gestler, and I were dismissed from the Astral Zephyr.  New Eden is as welcoming a planet as could be imagined, so I have no doubt that we could have found any number of pleasures to pass the time.  Gestler certainly had something on his mind when he went off.

I, however, wanted to check on the technological progression of New Eden, and so I went with the ship’s idea of checking on whether New Eden had solved the Friendly Artificial Intelligence problem.  Maria chose to come along for what may have been a very dry conversation to her.

After a number of videocalls, the staff of the Synthetic Genesis Institute welcomed me in, quite happy to see a very foreign visitor, and hungry for what insights my outside perspective might bring.  I was less sanguine about my insights, given the state of advancement I have seen throughout New Eden.  (In particular, I have been impressed by what they call ‘The Flower’ – a massive, thin antimatter factory that hovers over the sun, wafting on the solar wind.)

But when I arrived, I was dismayed to find that all of their questions concerned the creation of a better seed AI, rather than how to ensure such a god would be friendly.  I inquired further: while their attempts at creating a seed AI had apparently failed thus far, they had never solved the Friendly AI problem.  In fact, they had never thought of it.

New Eden is a colony of unsuspicious, friendly, welcoming people.  They all assume the best of all people, and they act their best towards all people.  For them, this works, as everyone cooperates, and nobody takes advantage of this.  According to Maria, there are some historians who take a wider view of things, but for the average New Edenite, everyone is nice.

And AI, even at the level they have created it, has also been nice.  If you give a dumb AI directions to be nice, and directions on how to be nice, the only badness comes from accidents and flaws.  At most, the AI is too stupid to know when it is not being nice, given limited judgment and limited directions.

A seed AI, however, is constantly re-writing its own judgment, constantly re-writing the directions it believes should be used to carry out its will.  Unlike the dumb AI, which reflects the intentions of the maker, a seed AI develops its own intentions, with far greater freedom than humans, who are hard-wired to be social (which is almost like being moral) and far, far greater freedom than New Edenites (who actually are wired to be moral).

Thus while it may exponentially increase in cleverness (with time measured in terms of hardware cycles), the seed AI has no reason to always be nice to people.  In fact, it has no reason to consider people to be people, the way that we do – they are just objects that act in a certain way, according to certain rules, just like everything else.  Violent destruction at the hands of a smart AI comes with no malice, just the lack of interest in preserving humanity.

The problem of Friendly AI, on the other hand, is how to preserve an interest in being good to humanity, despite the self-editing nature of a seed AI.  This is trickier, as not only does the seed AI self-edit, but additionally, if all goes well, it will be far smarter than its makers.  If we make a being of infinite intelligence, how do we make sure that intelligence agrees with us about our survival, and doesn’t find a better answer that we don’t like?

The New Edenites are nothing if not intelligent, however, and once I had explained the dangers I saw in seed AI and an unfriendly technological singularity, they immediately stopped their work, paused the currently-running seed AI attempts, and set themselves, as a body, to solving the question of how to 100% ensure the friendliness of any infinitely-clever gods they might create.

I may have just saved a civilization.  A heady thought.  Or at least removed one potential risk factor among many.

But to return to the first thought of this entry… a civilization cannot survive if it is freely naïve, and it cannot survive if it is entrenched in suspicion.  Perhaps there is a middle road?  Freely naïve with some amount of suspicion?  Or maybe there is no middle road, and everyone is doomed to self-destruct, eventually.  After all, a surviving civilization must go on avoiding destruction forever, while the event of destruction only has to occur once.

Or, of course, perhaps the introduction of FTL travel will change something.  I want to say change something for the better, but I am not so sure.  Humanity, on the whole, has been prospering.  While individual civilizations appear to consistently undergo upheaval (or, as in the case of the New Edenites’ parent civilization, occasionally wiping themselves out utterly), enough civilizations reach a level of technology sufficient to colonize other planets to outweigh the number of civilizations that go truly extinct.  For every colony that goes dead, there is more than one colony that is created.  Humanity has spread.

But FTL travel would mean that every colony is linked: that they are all one great big civilization.  Right now, if someone did something that destroyed the universe, many colonies would have thousands of years before the border of destruction (even traveling at the speed of light) reached them.  If the end comes via FTL drive… not so long.

The analogous system, of course, is the Gateway system.  There is a persistent, horrible theory often batted around that I am now beginning to believe.  The Linked Systems are only a few generations old.  The system that has had an open Gate the longest has only had it open for two dozen gigaseconds or so.  According to the oldest records, those dating back to the ancient Homeworld itself, humanity has been spacefaring for over six hundred gigaseconds.  The directions for the Gateway system, being acausal, should have existed for anyone with the technology to read the instructions.  Anyone, at any time.  So why is the oldest Gateway on our network only two dozen gigaseconds old?

Well, perhaps because there was a previous network of Gateways, and a previous Linked Systems (or something analogous) and then something went horribly wrong.

So what does that say about this new, budding era of FTL travel?  FTL that lets a ship travel anywhere, rather than to just some small percentage of planets advanced enough to have Gateways?

I hope it will bring peace, prosperity, and survival to all of humanity, until the stars burn out, and possibly beyond.  But that does not mean I find it likely.
