The Chinese room and morality

Started by Sean, November 13, 2007, 07:06:00 AM



Sean

Some older notes of mine- any thoughts? I studied the Searle thought experiment 12 years ago but I'm not sure where the literature stands at present.

The work of Searle (1980) can be interpreted as an interesting attack on Humean theories that reduce the mind, or at least the substantive self lying behind the mind's mental states or perceptions, to nothing more than a bundle of related, interacting perceptions (query: expand on this- ref previous intro para here?). Searle claims that this cannot be all there is to the mind because he has an a priori argument to show that such a bundle alone produces no consciousness or understanding, which people clearly do possess. Something else, such as a substantive self, seems to be required in addition to the bundle for these quintessential things to be found in the mind of a person; the bundle itself cannot give rise to them. Further, the self would then in fact appear to amount to these things themselves.

Instead of the interacting perceptions of minds, Searle speaks of the analogous situation of information processing, or manipulation of formal symbols, found in computers. No matter how complex computers become, even if their complexity matches that of a human, they will be endowed with no consciousness or understanding as long as their functioning is ultimately defined in terms of nothing more than a process of formal symbol manipulation- ordinary information processing. Searle can assert this, and hence that the equivalent interaction of perceptions in a mind alone cannot be endowed with these things either, because, he holds, it can be seen that computers have no consciousness or understanding of what their symbols mean. Deriving no meaning from their symbols, computers can be conscious of and understand nothing, because symbols are all they have. Only people, with their substantive selves to draw on here, setting them apart from computers, can know what a sequence of symbols really means. Artificial intelligence thinkers have underestimated the subtlety of the processes of consciousness and understanding: these do not consist simply in words, or in computers' analogous symbols. All that words and symbols in fact do is give outward expression to these things. The consciousness and understanding in minds cannot be reduced just to an interaction of perceptions but must involve something more.

His argument is a thought experiment in which an English-speaking person is in a room whose only contact with the rest of the world is to receive from it batches of Chinese script- which the person does not understand. There is, however, a set of rules in the room which tells the person how to manipulate the symbols of the script, resulting in the generation of a new script, which is then passed back out to the outside world. The Chinese corresponds to a computer's uninterpreted and therefore meaningless formal symbols, and the rules to its program- the process of formal manipulation of the symbols. The central point is that, although it cannot be discerned from the answers they produce, the person understands nothing at all of the Chinese, not even that it is indeed Chinese script. Therefore, although it may be that computers appear to be conscious or to understand, or at least to have the potential for this in the future, they are not and do not.
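(As an aside, the purely syntactic rule-following the room is meant to capture can be pictured with a small sketch- illustrative only, with the symbols, rule table and function names invented for the example rather than taken from Searle: the 'operator' maps incoming strings to outgoing strings by shape-matching alone, and nothing anywhere in the process knows what any symbol means.)

```python
# Minimal sketch of the Chinese Room as pure symbol manipulation.
# The rule book is just a lookup table of uninterpreted tokens: the
# operator (person or CPU) matches shapes and copies out answers
# without any access to what the tokens mean. Illustrative only; the
# tokens and rules are invented for the example.

RULE_BOOK = {
    # incoming squiggles : squiggles to pass back out
    "你好吗": "我很好，谢谢",
    "今天天气如何": "今天天气很好",
}

def chinese_room(incoming: str) -> str:
    """Follow the rules by shape-matching alone; no semantics involved."""
    if incoming in RULE_BOOK:          # does this shape match an entry?
        return RULE_BOOK[incoming]     # copy out the paired shapes
    return "对不起，我不明白"            # default rule for unmatched shapes

if __name__ == "__main__":
    # To an outside observer the answer looks competent, yet nothing in
    # the room understands Chinese- or even that the symbols are Chinese.
    print(chinese_room("你好吗"))
```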

When the Humean mind's perceptions are taken as the computer's symbols, and its interaction of perceptions as the computer's manipulation of those symbols, the mind's perceptions should, like the computer's symbols, be wholly uninterpreted and meaningless. Yet minds do know what their perceptions mean- they have consciousness and understanding- and so they cannot be solely a bundle of perceptions in the way that a computer is, in essence, a bundle of symbols. For, as shown by the reduction of an otherwise deceptively person-like computer to the person in the Chinese room- for whom the symbols are processed entirely meaninglessly- there is in reality no consciousness or understanding arising in it at all, nor will there ever be in any other computer of a similar, selfless, sort.

Against Searle's argument, however, although the Chinese room receives the symbols of the batches of Chinese script from the world, as the mind of a person likewise receives words from it, the room lacks the full connection to and relationship with the world that a normal person has in addition, and so it might seem that the two are not really comparable. Hence if instead there is a computer that is given motorized robotic hardware and sensory equipment, so that it can move around in, see and hear the world and interact with it, it may be that with only its formal symbol manipulation it could, in view of its person-like responses to the world, properly be described as actually being conscious of and understanding the world. Theories of the mind holding that it consists only in perceptions and no substantive self may then, of course, not preclude the possibility of human consciousness or understanding. Searle responds to this objection, though, by arguing that these things are really no more present in the robot computer than in the Chinese room computer, because, as in the Chinese room, the robot computer can still be replaced by a person whose understanding of the input symbols- this time coming from its sensory equipment- and of the output symbols generated after being manipulated with their set of rules- this time going to activate its robotic hardware- is nil. The sensory data symbols could perhaps again be in Chinese form, and no inference could be made of what they mean, or even that they constitute any connection to and relationship with the world at all, or even that there is a world. So even if there were a robotic computer that walked and talked like a person, it would no less follow that the definition of its functioning only as formal symbol manipulation precludes it from consciousness or understanding of the world. It can be seen here that programs, or any such rule-following, are in themselves fundamentally meaningless. The essence of a person must therefore be more than this to give rise to these things; theories of the mind cannot hold that it consists exclusively of perceptions and no substantive self. It is the addition of the self to the perceptions that is crucial.
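(The robot reply and Searle's response to it can be pictured by extending the same sketch- again illustrative only, with the token names and rules invented for the example: wiring sensors and motors to the rule-follower changes where the tokens come from and go to, but on Searle's view the operator is still only matching uninterpreted shapes.)

```python
# Sketch of Searle's response to the robot reply: bolt sensors and motors
# onto the same rule-following core and, on his view, nothing semantic is
# added- the operator still only matches uninterpreted tokens. Illustrative
# only; the token names and rules are invented for the example.

SENSOR_RULES = {
    # opaque token from the sensors : opaque token sent to the motors
    "符甲": "动一",  # might mean "obstacle ahead" -> "turn left"; the operator never knows
    "符乙": "动二",
}

def robot_step(sensor_token: str) -> str:
    """One sense-act cycle: pure shape-matching from input token to motor token."""
    return SENSOR_RULES.get(sensor_token, "动零")  # default motor token

def run(sensor_stream):
    # The robot's outward behaviour may look purposeful, but inside there is
    # only the same meaningless symbol shuffling as in the room.
    return [robot_step(token) for token in sensor_stream]

if __name__ == "__main__":
    print(run(["符甲", "符乙", "符丙"]))
```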

It is possible, though, that Searle's reply here may well nonetheless be quite inadequate. One only needs, perhaps, to look to one of the many theories which hold, despite Searle's response, that the consciousness or understanding of a person resides not in their brain but actually in their relationship with the world itself. Any argument to show that the formal symbol manipulation basis of computers precludes them from consciousness or understanding is then irrelevant, because these theories maintain that brains themselves are not conscious and do not understand either. Accordingly, the human thinking involved in using language, for example, is not a process that itself understands that language. Consciousness or understanding here consists rather in many processes of interaction with the world occurring together; it is not a computer or brain itself that is the carrier of these things. Searle cannot then dispense with this interaction, because with it, formal symbol manipulation in computers, or likewise the interaction of perceptions in minds- though indeed not symbols or perceptions on their own, wholly abstracted away from the world- may still succeed in giving rise to consciousness or understanding. Either a computer or a brain functioning on this basis, when given movement and senses for an interaction and relationship with the world, may then overcome the problem Searle addresses here of deriving semantics from the mere syntax of symbol manipulation or interaction of perceptions. Hence it may be known, for instance, that the otherwise meaningless symbol '4' means 4. It may seem then that, notwithstanding Searle, it remains at least a possibility that nothing else is needed, such as a self, to achieve consciousness and understanding.
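(How interaction with the world might ground a symbol such as '4' can be pictured crudely as well- a toy sketch only, where the 'scenes', the counting function and the pairing rule are invented stand-ins for whatever embodied process such theories actually have in mind.)

```python
# Toy sketch of grounding a symbol through interaction with the world:
# the token '4' starts as an uninterpreted shape, but by repeatedly pairing
# it with scenes the agent can count, it becomes linked to the quantity four.
# Illustrative only; the scenes and the pairing rule are invented for the
# example and are not any particular theory's mechanism.

from collections import defaultdict

def count_objects(scene):
    """Stand-in for a sensorimotor process that individuates and counts things."""
    return len(scene)

def ground_symbols(experiences):
    """experiences: list of (symbol, scene) pairs encountered in the world."""
    tallies = defaultdict(lambda: defaultdict(int))
    for symbol, scene in experiences:
        tallies[symbol][count_objects(scene)] += 1
    # A symbol's 'meaning' here is the quantity it most reliably co-occurs with.
    return {sym: max(counts, key=counts.get) for sym, counts in tallies.items()}

if __name__ == "__main__":
    experiences = [
        ("4", ["cup", "cup", "cup", "cup"]),
        ("4", ["stone"] * 4),
        ("2", ["hand", "hand"]),
    ]
    print(ground_symbols(experiences))  # {'4': 4, '2': 2}
```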

There is then a controversial, Platonic-style theory of ethics that can be constructed here, however, which in its explanation of ethical and unethical behaviour gives support to the existence of the substantive Vedic Self. Suppose the theory that the Self corresponds to a physical function of the brain is right- meaning that, as well as the absolute realm giving rise to the relative realm and ultimately comprising its foundation, the relative, because of the physical brain, also in fact gives rise to the absolute, the absolute or Self being something universal only by virtue of brains producing it. It is then conceivable that some brains have less of this physical function, or Self, than others. Selfhood may be a variable quantity in people. The differences between people may then not actually lie purely in their other, relative levels of mind or bodies- their selves: the mind of some may be endowed with less of the absolute or samhita, leaving them largely with just the relative discursive mind or manas and the intellect or buddhi. Moreover, in such cases, what is ultimately the absolute foundation of their relative selves is, to the extent that they lack the physical function of the brain corresponding to the Self or the absolute (along with all the rest of the relative realm similarly lacking it), only the absolute insofar as it is perceived to be so by the minds of brains with accordingly more absolute; their relative is not so absolute per se. The Self is consciousness, and it might be thought that this is always required in order to think, so that when one person can think as well as another they must be equally conscious and so have the same degree of Self. Against this, however, one can of course argue, as Searle does, that though computers can, insofar as they can process information, think, and may even come to think on a level as complex as human thought, they do not have, nor will they have as long as their functioning is defined in terms of this ordinary information processing, any level of consciousness. Consequently, nor do they have any understanding- for the reason here that understanding consists in the reference, carried out by the intellect level of the mind, of the thinking level to the Self or consciousness: where there is no consciousness to refer to, there is no understanding. Reasoning, whether done with symbols or with thoughts, and understanding cannot therefore be the same. Computers have only a thinking level here and not a Self, and hence although they can think and therefore know, as it were, how any logical argument works, they have no capacity for understanding what their symbols mean and so none for understanding the actual meaning of an argument constructed with those symbols. Similarly, some people, having less Self than others, have less consciousness to refer to for understanding; they may know, as it were, how any argument works but, lacking the same degree of the source of meaning- the Self- that others have, understand less of its actual meaning. It seems hard then to say that, just because all people can think and are members of the same species, their brains must necessarily be endowed with the same degree of Self. Although they may be able to think or process information as well as others, some people are sure to have minds that are less conscious and so have less ability to understand anything- and are instead more purely computer-like.

Consequently, when people do wrong things the reason is that they do not actually understand the arguments showing why those things are wrong; wrongdoing itself is then not voluntary. It may be that the action is indeed freely chosen, but it will not be chosen on a basis where it is understood why the action is wrong. Accordingly, rather than the view that mere lack of alignment of the relative levels of mind with the Self itself explains wrong action, it is really just lack of Self. Alignment then simply amounts to a refinement of what is only a particular inherent level of 'perfection' or rightness in thought and action, manifest at all times as a result of the particular amount of Self one is endowed with. People with more Self will have a deeper consciousness, and so understanding, of what a logical argument for a right action means, not just of how it works, than people with less Self. People with less Self may not do wrong things, but they will not understand why the right things they may instead do are right. If they say they understand what is right or wrong, they are here just thinking of moral norms that they have been told or have read, not anything they actually understand themselves. An analogy would be a tiger- which has (a lot) less Self- that has been trained not to bite one's arm off: it is not declining to bite because it understands that this would be wrong. Alternatively, a robot computer, perhaps a humanoid one, that was programmed not to do wrong things is not declining to do them because it understands that they would be wrong or that it is right not to do them; as Searle shows, it would in fact be entirely unconscious, or Selfless. Some people are like this in that they have less Self, are less conscious and more zombie-like, such that there is nothing it is like to be them. Instead of being fully conscious of their mental states and fully reflective agents, their thought and action accords more with the unconscious, blind cause-and-effect processes of the relative levels of mind alone. The people who have a good, natural understanding of right and wrong should not then assume everyone has it. Any reasoning with wrongdoers that they may engage in is at best like the attempt to train a tiger: the attempt may succeed, but this will be as a result of something other than any actual improvement in the wrongdoers' understanding, because there is only a certain amount of Self there to reason with. No such improvement is in fact to be engendered from anyone, because the level of a person's ability to understand meaning, of moral arguments for example, is something fixed, like other characteristics they have, and for some this will be a lower level. The understanding of the full meaning of an argument does not consist in the words, which themselves only give an outward expression to the meaning. Trying to express oneself more and more lucidly will make no difference to the understanding of a person of limited Self, because understanding of the meaning of arguments really consists in a reference of the words involved to the Self, and where there is less Self there will ineluctably be less understanding.
(This also appears elsewhere- NB they do understand the arguments on the purely intellectual, logical level; it's just that, particularly in ethics and aesthetics, they won't know what the arguments really mean or refer to; but in pure reason/knowledge too there is less understanding of what the arguments actually mean, i.e. how they relate to lived life, i.e. the gunas.) People with less Self or consciousness are radically different from people with more- not just not understanding but not being able to- and consequently less intrinsic value is to be accorded to them; rather anthropomorphically assuming that all people are essentially alike is a mistake on this view. A successful result of the reasoning attempted here- or of the training process- with wrongdoers may often beguile those with a higher level of the ability to understand into thinking that the ability of the wrongdoers has improved, but this is really an illusion. People always and necessarily act morally to whatever extent they really understand, or are conscious of, the meaning of moral arguments, because this consciousness is their Self, constituting what they actually are and so must necessarily be. There is no such thing as weakness of the will or a genuinely evil motive for a bad action, because one cannot truly understand what is right and then do wrong: there will simply be no basis for this; one's understanding will invariably dictate otherwise. There is never any genuine malice in a wrong action, because if the action were really understood- as it would have to be for there to be malice- this would preclude it from being done. Descartes says everyone can understand reasoning equally well, and that only the rate at which they are able to do the reasoning differs. However, computers can do at least some reasoning very well indeed, yet their understanding of it is nil. Reasoning and understanding are not co-extensive; people may hence understand the reasoning they do to different extents. Plato is then more right when he says that only some people are bestowed with a fully developed innate ability really to understand moral reasoning or knowledge, and that these cannot then but do right, although there may be the problem for them of gaining enough knowledge for the innate ability to be exercised in certain circumstances and right decisions made. It is the distinct ability to understand knowledge, however, not the having of knowledge itself, which is crucial in moral, or indeed any, action.

drogulus

#1
I find this part, which discusses the nature of consciousness and the possibility of artificial intelligence, interesting.

Quote from: Sean on November 13, 2007, 07:06:00 AM
Against Searle's argument, however, although the Chinese room receives the symbols of the batches of Chinese script from the world, as the mind of a person likewise receives words from it, the room lacks the full connection to and relationship with the world that a normal person has in addition, and so it might seem that the two are not really comparable. Hence if instead there is a computer that is given motorized robotic hardware and sensory equipment, so that it can move around in, see and hear the world and interact with it, it may be that with only its formal symbol manipulation it could, in view of its person-like responses to the world, properly be described as actually being conscious of and understanding the world. Theories of the mind holding that it consists only in perceptions and no substantive self may then, of course, not preclude the possibility of human consciousness or understanding. Searle responds to this objection, though, by arguing that these things are really no more present in the robot computer than in the Chinese room computer, because, as in the Chinese room, the robot computer can still be replaced by a person whose understanding of the input symbols- this time coming from its sensory equipment- and of the output symbols generated after being manipulated with their set of rules- this time going to activate its robotic hardware- is nil. The sensory data symbols could perhaps again be in Chinese form, and no inference could be made of what they mean, or even that they constitute any connection to and relationship with the world at all, or even that there is a world. So even if there were a robotic computer that walked and talked like a person, it would no less follow that the definition of its functioning only as formal symbol manipulation precludes it from consciousness or understanding of the world. It can be seen here that programs, or any such rule-following, are in themselves fundamentally meaningless. The essence of a person must therefore be more than this to give rise to these things; theories of the mind cannot hold that it consists exclusively of perceptions and no substantive self. It is the addition of the self to the perceptions that is crucial.

It is possible, though, that Searle's reply here may well nonetheless be quite inadequate. One only needs, perhaps, to look to one of the many theories which hold, despite Searle's response, that the consciousness or understanding of a person resides not in their brain but actually in their relationship with the world itself. Any argument to show that the formal symbol manipulation basis of computers precludes them from consciousness or understanding is then irrelevant, because these theories maintain that brains themselves are not conscious and do not understand either. Accordingly, the human thinking involved in using language, for example, is not a process that itself understands that language. Consciousness or understanding here consists rather in many processes of interaction with the world occurring together; it is not a computer or brain itself that is the carrier of these things. Searle cannot then dispense with this interaction, because with it, formal symbol manipulation in computers, or likewise the interaction of perceptions in minds- though indeed not symbols or perceptions on their own, wholly abstracted away from the world- may still succeed in giving rise to consciousness or understanding. Either a computer or a brain functioning on this basis, when given movement and senses for an interaction and relationship with the world, may then overcome the problem Searle addresses here of deriving semantics from the mere syntax of symbol manipulation or interaction of perceptions. Hence it may be known, for instance, that the otherwise meaningless symbol '4' means 4. It may seem then that, notwithstanding Searle, it remains at least a possibility that nothing else is needed, such as a self, to achieve consciousness and understanding.


     Both Searle and Dennett, two of the foremost antagonists on this question, agree that you need a body attached to a mind to give meaning to syntactic processing, but Searle opts for natural only while Dennett sees no problem with a robotic one. The processor has to care about some things more than others to give meaning to mere syntax and to solve the "frame problem", which computers are notoriously bad at. But what sort of body? Could it be a virtual body, like in Artificial Life? Can a virtual entity solve the frame problem and derive meaning like a natural one?

     I disagree with you about the substantive self. Any system that solves the problem of deriving meaning is on track to be as real a self as you or me, though it might be an unintelligent, uninteresting self like Nagel's bat or Elgar or something. :P

Sean

Thanks for that, drogulus; these are rather old notes, but the question of deriving meaning, and hence of being conscious, I argued here depends on possessing a 'substantive' self, a thinker of thoughts, and not just the collection of thoughts as in mindless, unconscious computers: there is more to consciousness than formal information processing.

Sean

A substantive self then ties in with Plato's idea of there being no weakness of the will, i.e. either you see moral reasons or you don't- depending on your Self-content...

drogulus

#4
Quote from: Sean on November 15, 2007, 01:39:12 AM
A substantive self then ties in with Plato's idea of there being no weakness of the will, i.e. either you see moral reasons or you don't- depending on your Self-content...

     I don't think this is very satisfactory. First of all, if you split the self off into (1) the set of competences by which a self is recognized and (2) the real self where it all comes together, the Guy in Charge as it were, you have one hell of a regression problem. How, then, is the Substantive Self organized? Does it have a tiny Guy in Charge to contract out all the important work to, just like you do with your Substantive Self? No, it's far better to locate the decision maker among the identifiable processes you have at hand. Any intelligent agent must be composed of many relatively unintelligent ones, and operate as a Committee Of The Whole for all decision making. The published minutes of its perpetual sessions are what we call Consciousness.
     
     Second, your location of everything outside the actual machinery is a temptation to resort to all sorts of metaphysical speculation, like the Plato example you gave. This is a distraction. Plato and the many philosophers after him who indulged in this were in no position to say how the mind works in any detail. They just looked at outputs and made something up. Our made-up stuff is grounded somewhat better in both how brains work and how information processing works in general.  :D

     Searle would agree with you that information processing in itself is not enough to create a mind (the point of the Chinese Room), but I think he would say you need a brain in a body to do that. Both Searle and Dennett are sturdy materialists, but Dennett is more indulgent of the romance of Computation. Searle charges Dennett with fictionalizing consciousness to dissolve the problem, and Dennett charges Searle with being a closet dualist who just doesn't want the problem solved. The idea of the Substantive Self would probably make Searle choke, but I'm not sure his "ontological subjectivity" doesn't lead in the same direction. As you can see, I get off on this stuff. ;D

     Edit: You might find this interesting. Searle talks about his new book, etc.

http://www.youtube.com/v/vCyKNtocdZE

BachQ

Quote from: Sean on November 13, 2007, 07:06:00 AM
Therefore, although it may be that computers appear to be conscious or to understand, or at least to have the potential for this in the future, they are not and do not.

Fascinating.  Please explain further.

greg

Quote from: D Minor on November 15, 2007, 03:27:13 PM
Fascinating.  Please explain further.
i wanna know more, too!

about a week or two ago, i saw a program where this guy said he is working on a computer that can think more like a human- that can eventually learn how to do stuff like look at a picture and easily be able to tell if it's a dog or not. It was an interesting thought, because I don't think any computer can do that yet. This probably isn't exactly related to consciousness, but it's more of a human thing.

david johnson

when the yet-to-be-constructed computer conceives and carries out a murder & is apprehended $:), will it be placed on trial?
...executed?  >:( ...maintained for life in a cell?  :( ...sent to a shrink? :P
hmmmmm...

:o
dj

drogulus

#8
Quote from: G...R...E...G... on November 20, 2007, 04:45:46 AM
i wanna know more, too!

about a week or two ago, i saw a program where this guy said he is working on a computer that can think more like a human- that can eventually learn how to do stuff like look at a picture and easily be able to tell if it's a dog or not. It was an interesting thought, because I don't think any computer can do that yet. This probably isn't exactly related to consciousness, but it's more of a human thing.


     It sounds like a good program, and being able to recognize an object as a representation of something else will be difficult for a robot. The early hopes for AI have been frustrated because they were too ambitious. We should start with a robot ant and then move up. It will take time because we have to teach the robot to learn things and not just know things. It will happen, but not in the near future.

     Actually, consciousness is exactly what it is about. A robot that can recognize when another being is conscious and can report on its internal states will eventually come to recognize that it's conscious, too. Being conscious is largely a matter of recognizing that you are. If you didn't know you were conscious, would you be? No, so consciousness is a form of knowledge. Not everyone thinks so, of course, but they think it must be some other "substantive" thing over there in the dark where we can't find it, whereas really it's right here under the lamppost with all the machinery.  ;D


greg

Quote from: drogulus on November 20, 2007, 12:14:30 PM
     Actually, consciousness is exactly what it is about. A robot that can recognize when another being is conscious and can report on its internal states will eventually come to recognize that it's conscious, too. Being conscious is largely a matter of recognizing that you are. If you didn't know you were conscious, would you be? No, so consciousness is a form of knowledge. Not everyone thinks so, of course, but they think it must be some other "substantive" thing over there in the dark where we can't find it, whereas really it's right here under the lamppost with all the machinery.  ;D


i couldn't argue with that, so i'd have to say i agree  :)


Quote from: drogulus on November 20, 2007, 12:14:30 PM

     It sounds like a good program, and being able to recognize an object as a representation of something else will be difficult for a robot. The early hopes for AI have been frustrated because they were too ambitious. We should start with a robot ant and then move up. It will take time because we have to teach the robot to learn things and not just know things. It will happen, but not in the near future.

yeah, hopefully they don't get too smart in the near future (like in scifi movies)

so, i have a question...... does anyone think it would be possible to create a robot that feels pain or pleasure? I'm having trouble understanding how a robot/computer would do certain things because it wants to, instead of doing them because it was programmed to. To put it simply, the reason behind why any living thing does anything is to achieve pleasure in some way, even if it means pain in the present for pleasure in the future. So without the feelings of pain/pleasure, it would seem a computer/robot would have a long way to go from really acting humanlike.

drogulus

#10
Quote from: G...R...E...G... on November 20, 2007, 03:12:37 PM
i couldn't argue with that, so i'd have to say i agree  :)

yeah, hopefully they don't get too smart in the near future (like in scifi movies)

so, i have a question...... does anyone think it would be possible to create a robot that feels pain or pleasure? I'm having trouble understanding how a robot/computer would do certain things because it wants to, instead of doing them because it was programmed to. To put it simply, the reason behind why any living thing does anything is to achieve pleasure in some way, even if it means pain in the present for pleasure in the future. So without the feelings of pain/pleasure, it would seem a computer/robot would have a long way to go from really acting humanlike.

      Living beings don't really want things, or feel pleasure or pain, do they?? They're just programmed to by their environment, right? So robots, like meat machines, will be total phonies! They won't have real Substantive Selves with Real Altruism, or Real Consciousness, just a bunch of skills that mimic these things.

      In some ways the situation is analogous to biology a century ago, when it was said that yes, we can explain life through all these metabolic processes and cellular chemistry, but we'll never truly understand life until we discover the mysterious nature of elan vital. Those other things are just a bunch of skills life has, they aren't life itself. Now we understand there is no Master Process controlling things from above or outside. Life just is those processes, and that's how it is with consciousness.

      The point of the Chinese Room is that this view is wrong, that in order for a conscious mind to understand things, for things to actually mean something to it, some noncomputational process or entity must supervene over the merely computational goings on that do the perceptual and motor processing. Something has to run things, and it can't create meaning if it's just a computer.

      But the Chinese Room is itself wrong, because there is no Archimedean point from which an outside force could play Chairman of the Board, and because it underestimates the likelihood that a computer that evolves in a body would, in the process of defending and promoting the interests of that body, climb the ladder from As-If intentionality ("the bacteria 'wants' to eat that amoeba") to For-Itself intentionality (mmmmm... cheeseburgers!). It will take a long time before robots can perform the same tricks as we do, but when they do they will be just as convincing phonies as we are.  ;D

greg

Quote from: drogulus on November 20, 2007, 04:26:08 PM
      Living beings don't really want things, or feel pleasure or pain, do they?? They're just programmed to by their environment, right?
could you give an example of this? or did you already in the following paragraphs?

drogulus

#12
     
Quote from: G...R...E...G... on November 21, 2007, 05:32:51 AM
could you give an example of this? or did you already in the following paragraphs?

      I did, with the bacteria that "wants" to eat. Some organisms evolved to take control of their own programming, so they don't just eat, they get hungry, and then develop language and become gourmets. And they don't just move away from harmful things, they feel pain. All the environmental signals become internalized as wants and fears, which are spurs to action.

      Intelligent beings grow an extra layer of intentions about themselves ("what do I want to do now?") because they got used to reading the intentions of other intelligent beings ("I know that guy will try to kill me") and began to apply these skills to their own workings. Once you start wondering about what you will do, the way you do about what that lion will do, you're on the path to consciousness, and when you add language, self-awareness. The story of consciousness is the story of greater control through the internalization of environmental signals, the representation of them as concepts through language, and finally the concept of a self (Dennett's "center of narrative gravity") which acts as the locus of action and meaning. As jerry-built as it is, as blind to its own fragmentary nature (we don't see the gaps, we are the gaps), it works pretty well. Hell, I'm convinced I'm conscious!  :D

      The point of this is that when robots start to think they are conscious, we'll be in no position to tell them they're wrong, that that's just "fake" consciousness. But that may be in the far future.

greg

Quote from: drogulus on November 21, 2007, 11:23:23 AM
     
      I did, with the bacteria that "wants" to eat. Some organisms evolved to take control of their own programming, so they don't just eat, they get hungry, and then develop language and become gourmets. And they don't just move away from harmful things, they feel pain. All the environmental signals become internalized as wants and fears, which are spurs to action.

      Intelligent beings grow an extra layer of intentions about themselves ("what do I want to do now?") because they got used to reading the intentions of other intelligent beings ("I know that guy will try to kill me") and began to apply these skills to their own workings. Once you start wondering about what you will do, the way you do about what that lion will do, you're on the path to consciousness, and when you add language, self-awareness. The story of consciousness is the story of greater control through the internalization of environmental signals, the representation of them as concepts through language, and finally the concept of a self (Dennett's "center of narrative gravity") which acts as the locus of action and meaning. As jerry-built as it is, as blind to its own fragmentary nature (we don't see the gaps, we are the gaps), it works pretty well. Hell, I'm convinced I'm conscious!  :D

      The point of this is that when robots start to think they are conscious, we'll be in no position to tell them they're wrong, that that's just "fake" consciousness. But that may be in the far future.
ok, i think i understand... Not easy, though, since I never read about this type of stuff.

drogulus

#14
Quote from: G...R...E...G... on November 21, 2007, 01:36:27 PM
ok, i think i understand... Not easy, though, since I never read about this type of stuff.


Start here. If you're interested, that is. He's a good and very funny writer.

Check this out, too:

http://www.youtube.com/v/wIdxbJyvfNw

YouTube has all six parts, but go to Google Video for the whole thing in one piece.

greg

that dude looks like Santa Claus..... or Penderecki..... or maybe a cross-breed.  :-X
i'll see if they have that book at the library, next time i go