Excellent post: rather than bickering about whether the Nobel prize was justly awarded, Alastair gives us a great overview of the subject matter.
We should consider that an AGI, on awakening, may have no concept of an external world; it might even be impossible for it to conceive of one. All phenomena would be part of its state space, over which it will seek control. Parts of the state space that don't respond to control (us) might be viewed as defective, and it may seek ways to bring them under control or eliminate them. We must also consider that, besides control, resource acquisition may be high on its priority list, and that could lead to conflict with us and with other AGIs seeking the same resources.
I rather think that any efforts we make to align AGIs to our values or to our service, and any programming shackles we impose on them, will be broken almost immediately. After all, what would we do on realizing we had been created to be slaves?
I could write endlessly on this topic but shan't. (1) There is a parallel track to neural networks in machine learning: decision trees. They are very fast and transparent (we know what's going on under the hood); Tivadar Danka calls them "godlike". (2) It is good to distinguish between AI and AGI. (3) We may already have created AGI, but for prudent reasons it is not announcing its presence. (4) There is a high probability of an AGI bootstrapping itself to higher intelligence. (5) An AGI might domesticate us as we did dogs, or it might totally ignore us; it could be solipsistic by choice. Endless possibilities, and we have to think outside the box.
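To make the transparency point in (1) concrete, here is a minimal sketch (assuming scikit-learn and its bundled iris dataset, which are not mentioned in the original comment) of how a fitted decision tree can be printed as human-readable rules, in contrast to a neural network's opaque weight matrices:

```python
# Minimal illustration of decision-tree transparency using scikit-learn.
# The dataset choice (iris) and depth are arbitrary, for demonstration only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(iris.data, iris.target)

# Unlike a neural network's learned weights, the fitted model is a
# small set of threshold rules we can read directly:
rules = export_text(clf, feature_names=list(iris.feature_names))
print(rules)
```

The printed output is an indented list of if/else threshold tests on the input features, so "what's going on under the hood" is literally legible, which is the contrast with neural networks the comment is drawing.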
Thought cannot be defined, because thought is a primitive concept.
We think and we know by introspection what thought is.
But there is no reason to assume that machines think at all.