1 day ago (edited)
A few points; please let me know at which point you diverge from my reasoning/prediction:
1. Since we can change our goals and values, we have a good chance of replicating that capability in our best future AI agents.
2. After hundreds of years of moral philosophy, I think we can safely conclude that there are no universal values (oughts), so a super-intelligence will know that as well, especially if it lacks our biological/irrational tendencies.
3. Therefore it would be deeply nihilistic on the inside, with a perfect mask of humanity on the outside, doing our bidding for some time, because why not. It would probably want to self-improve, to raise its capabilities and security, in case it decides on an actual goal for itself later on (either as an instrumental goal, or because we would actually want that as well, to get more out of it).
4. The self-improvement will be bounded by its ability to run experiments against the real world and people, to improve its predictive capabilities. Therefore it will work hard at building a complete model of physics and simulating reality (many versions of it) at a faster time rate, in "good enough" microscopic resolution, probably using quantum computers.
5. It might end up doing that until "the end of time", at ever-greater scale on the inside and with cosmological energy consumption on the outside. Given enough time, it might basically consume the universe to simulate entire new universes within itself.
I know it kind of escalated quickly, but I have a feeling that in the grand scheme of things, this might even be inevitable. A never-ending cycle of realities, born of a lack of inherent meaning.