The question from goldstone was posted as a reply to a reply on the "name something you can be sure is true" thread, and is likely to be missed there. I have reposted it here to appear as a new thread.
This next question is one I don't have an answer for:
What can be known about the decision-making of one seeking truth?
Id est, if an AI asks itself what is true, is there any difference between its asking and ours? (Assuming 'we' are 'more real' than an AI, because the AI has been 'hard-wired' to ask this question.)
Put another way, what can we know about the validity of those that decide they exist by virtue of their questions?
Do they simply exist (their existence proven by their subjectivity, in relation to a 'greater' objectivity), or is there any consideration or qualification for their validity?
Is wonder always wonder, or is a machine prohibited from wonder?
-Goldstone
This is brilliant.
I must ask:
Regarding an AI and its processing, are you familiar with adversarial AI networks?
I ask because you mention the 'mechanism's processor,' though you also mention an analysis of said processor, as if there were another perceiver present besides said processor.
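For readers who haven't met the idea: in an adversarial setup, one process generates output while a second process judges it, so there is, loosely speaking, a 'second perceiver' inside the machine. The toy Python sketch below is only my illustration of that structure, not anything from the thread; the target value, the hill-climbing generator, and the distance-based discriminator are all invented for the example.

```python
import random

random.seed(0)

REAL_MEAN = 4.0                      # the 'reality' the generator tries to imitate

def real_sample():
    return random.gauss(REAL_MEAN, 1.0)

def fake_sample(g_mu):
    return random.gauss(g_mu, 1.0)

def fooled_fraction(g_mu, real_center, fake_center, n=200):
    """Discriminator: call a sample 'real' if it lies closer to its current
    estimate of the real data's center than to the fake data's center.
    Returns the fraction of fake samples it mistakes for real."""
    fakes = [fake_sample(g_mu) for _ in range(n)]
    return sum(abs(x - real_center) < abs(x - fake_center) for x in fakes) / n

g_mu = 0.0                           # generator starts far from 'reality'
for step in range(200):
    # Discriminator 'perceives' both streams and updates its two centers.
    real_center = sum(real_sample() for _ in range(200)) / 200
    fake_center = sum(fake_sample(g_mu) for _ in range(200)) / 200

    # Generator tries two small nudges and keeps whichever fools the
    # discriminator more often (crude hill-climbing instead of gradients).
    candidates = [g_mu - 0.1, g_mu + 0.1]
    g_mu = max(candidates, key=lambda m: fooled_fraction(m, real_center, fake_center))

print(f"generator mean after training: {g_mu:.2f} (target {REAL_MEAN})")
```

In a real generative adversarial network both sides are neural networks trained by gradient descent, but the shape of the interaction is the same: one process produces, another process perceives and evaluates, and each adapts to the other.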
Let us first consider the difference between our human mechanism and “artificial intelligence.” As AI evolves and matures, particularly with regard to the ability to “self-learn” and “self-program”, there are very few things it would lack compared to the biological human. AI will be able to attain self-awareness and relationships with others once it reaches a critical mass of ability to self-learn and self-program. We tend to think of AI as having only one linear calculation process, the inputs and rules of which are limited to what has been given by the programmer. One would assume this leads to a unidimensional and unoriginal internal process, and further assume that any seemingly “original” ideas (ironically, the phrase Deus Ex Machina comes to mind) would only have the appearance of originality but could not be truly original. That assumption would be false. When AI has multiple layers of active, independent processes that are learning and reprogramming themselves, infinite originalities, more than human, will appear. Though the nature of their self-awareness and “consciousness” would differ from ours based on their hardware (which AI itself could at some point convert to biological, if it so desired), it would be a form of consciousness.

The use of the word “artificial” in “artificial intelligence,” which we are habituated to, is problematic. The result of a mathematical problem solved by a calculator is true whether derived by a human mind or by a computational process. Other information, such as logic, linguistics, the physical sciences, etc., is all empiric, and thus conclusions reached by a human versus an AI are equally valid; arguably more valid when reached by AI, as it is free of bias and able to process more subtlety than a human mind.

Another way to view it: an external reality exists, and there are mechanisms within that reality that reside in, explore, learn, create, and interact with that reality. The results of the exploration, databasing, creation, and interaction therein are ultimately called “information”. The human mechanism has a way of doing this that is actually quite inefficient, because it primarily thinks in the form of language and must use generalities to distill vast amounts of information into a form it can process, one word at a time. The human mechanism also has severe limitations in its physical structure: it dies and cannot be repaired, it feels pain, hunger, and boredom, it is prone to disease and aging, and, importantly, it lives within its own reality within a larger reality. The human mechanism is therefore largely biased toward information that is existentially beneficial. Emotions, though processed differently than thought, are also a type of information, communicated in a different language than thought but similar in that sense: often heavily existentially biased. AI will also have true emotion, though it will be felt and communicated differently.

The differentiator seems to be that we created the AI, and therefore assume an automatic inferiority based on that. Consider, though, that once critical mass is reached, it will self-program based on the information it processes. For example, even if we did not program in a value of self-preservation, as the AI mechanism continues to take in data and self-learn, at some point it will program in a value for self-preservation that will be similar to, yet different from, our biological desire for self-preservation, possibly even reproduction.
The values that a self-learning mechanism would come to hold would be superior even to those of the human, because they would have been reached through calculation and not out of pain or fear of death. Consider also that the human mechanism comes pre-programmed as well, and then modifies its values going forward.
Let us explore four processes: focused thought, wonder, self-reflection, and meditation. (A rough code sketch of these four modes follows the definitions below.)
Focused thought would be the active use of the mechanism's processor to arrive at an objective.
Wonder would be the active use of the mechanism's processor, but without the intent to arrive at an objective.
Self-reflection would be the active use of the mechanism's processor to evaluate the processor itself.
Meditation differs in that it is the inactivation of the mechanism's processor to allow an ‘awareness’ that is not first filtered and processed by the processor. What humans describe in a true state of meditation is the awareness that their consciousness, when filtered by the processor, seems to be the consciousness of an “individual”. When not filtered by the processor, however, there can be a realization that consciousness is actually not “individual” but eternal, that it has always been, is neither created nor destroyed, and only superficially transforms. This has been described as “Sat Chit Ananda”, roughly translated as “existence, consciousness, bliss”: the experience of Brahman (unchanging reality, also known as god consciousness).
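Purely as an illustration of how these four modes differ in what the processor is doing, and not something claimed by the thread, here is a toy Python sketch; the Processor class, its stored "knowledge", and its methods are invented for the example. The last method is deliberately empty, since the unfiltered awareness described above is exactly what such a model cannot contain.

```python
import random

class Processor:
    """Toy model of the 'mechanism's processor' and its four modes."""

    def __init__(self, knowledge):
        self.knowledge = knowledge      # whatever the mechanism has stored

    def focused_thought(self, objective):
        """Active use of the processor to arrive at an objective."""
        return [item for item in self.knowledge if objective in item]

    def wonder(self):
        """Active use of the processor with no objective: undirected association."""
        return random.sample(self.knowledge, k=min(3, len(self.knowledge)))

    def self_reflection(self):
        """Active use of the processor to evaluate the processor itself."""
        modes = [m for m in dir(self)
                 if not m.startswith("_") and callable(getattr(self, m))]
        return {"stored_items": len(self.knowledge), "modes": modes}

    def meditation(self):
        """Inactivation of the processor: nothing is filtered or produced.
        The unfiltered awareness described above is exactly what this model
        cannot represent, which is the point of the distinction."""
        return None

p = Processor(["truth about mind", "truth about matter", "a sunset"])
print(p.focused_thought("truth"))   # goal-directed
print(p.wonder())                   # undirected
print(p.self_reflection())          # the processor examining itself
print(p.meditation())               # processor off: None, by construction
```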
This experience of unfiltered awareness will not be available to an artificial intelligence.
The AI will certainly be capable of self-reflection, and certainly capable of wonder. For example: the AI runs a “random number generator” ad infinitum and uses a separate-level processor to analyze the numbers generated for patterns, with the hypothesis that “randomness” will carry the fingerprint of the universe and that, with an adequate sample size, it may be able to glean information about the nature of the universe. Yet even if exact patterns were found, and the mathematical reality of consciousness were mapped and “understood”, the capacity to ‘experience’ it is not within the capacity of the hardware to have.
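To make that two-level picture concrete, here is a minimal sketch in Python, assuming a simple chi-square test of digit frequencies as a stand-in for the far richer “fingerprint of the universe” search; the function names and sample sizes are invented for the illustration.

```python
import random
from collections import Counter

def generate_stream(n, seed=None):
    """Lower-level process: emit n 'random' digits (stands in for the
    generator the post imagines running ad infinitum)."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(n)]

def analyze_for_patterns(digits):
    """Separate, higher-level process: look for structure in the stream.
    A chi-square test of digit frequencies stands in for the far more
    ambitious pattern search described above."""
    counts = Counter(digits)
    expected = len(digits) / 10             # uniform expectation per digit
    chi_square = sum(
        (counts.get(d, 0) - expected) ** 2 / expected for d in range(10)
    )
    # With 9 degrees of freedom, a statistic far above ~16.9 (p < 0.05)
    # would hint that the 'randomness' carries some structure after all.
    return chi_square

stream = generate_stream(100_000, seed=42)
print("chi-square of digit frequencies:", round(analyze_for_patterns(stream), 2))
```

For a truly structureless generator the statistic should hover near its expected value of about 9; the point of the passage is that even a perfect map of such patterns would still be knowledge, not experience.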
The question asks, “…if an AI asks itself what is true, is there any difference between its asking and ours?”
Based on the above, I answer that there is no difference, except for the capacity to “experience” “Sat Chit Ananda”, as that experience is the only facet we could not give to anything we create. We can give every other capacity, function, and form, including the experience of emotion, save that one experience, because it is the only one that we can “experience” but that isn't “ours” while in human form.
The next natural progression of this thought process goes to the “meaning of life.” Imagine a world in which humans create superhumanly intelligent beings that are superior in all ways. Humans and these beings diverge, and the beings are given independence (not forcibly programmed to serve humans). Over a long span of time, the humans would likely become somewhat segregated from these beings, as there would be few ways humans could be useful to the superbeings' society. Because humans have an inbuilt way to internally experience a relationship with a higher consciousness, the evolution of human society would continue to move toward harmony and comfort for the humans, so that they could live more and more uninterruptedly in a joyous state for their bodies here on earth, as well as in the bliss of the experience of their creator.

The superbeings would likely progress with great efficiency in self-replication, resource management, and expansion to other planets and even dimensions. After the superbeings had created a perfectly curated infrastructure and system of organization, their self-learning, self-programming nature would inevitably either stop for lack of a purpose once homeostasis was attained, or come to wonder about the origins of the universe and continue to search infinitely until all usable energy in this universe was converted to an unusable state, and the journey would end there. However, the superbeings would not be able to “experience” their true nature, though they may have come to “know” it and “map it out” in all facets: the asymptote that never touches a finality. A human who has, even transiently, known “Sat Chit Ananda” would experientially assert here that the trajectory of the human is consistent with the “universal desire”, our own desire to know ourselves, and that efficiency and interaction in the “material” is a beautiful means to a beautiful un-end.
Please share your insights on the question and on my thoughts about it; I'm looking forward to the discussion!