Wednesday, January 23, 2008

My Solution To Searle's Chinese Room

The Chinese Room Thought Experiment from Wikipedia

In my mind I'm not really qualified to be taken seriously about this, but I do have strong feelings about why Searle's conclusion is wrong. I have not looked into criticisms of this very much, except that Searle's Chinese Room is said to suffer from the "homunculus fallacy," in that it includes the very problem it is trying to solve, or that the analogy is flawed. I also realized later in the day, after posting this, that this approach also solves the mind-body problem.

In a nutshell, the Chinese Room thought experiment was intended to cast doubt on whether a machine could ever have understanding. It goes like this: John Searle is in a room with instructions such that, when someone passes in phrases in Chinese, he uses his references to look up the corresponding reply and pass it out. His argument is that as long as he knows how to match the two phrases, he doesn't have to understand Chinese, and a machine running a program doesn't have to understand Chinese either.
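The room described above can be sketched as a pure lookup table. This is just an illustrative toy, not anything from Searle's paper; the phrases and replies are made-up placeholders:

```python
# A hypothetical Chinese Room as a pure lookup table.
# The man (or machine) matches symbols to symbols; no meaning is involved.
rule_book = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "现在几点？": "我不知道。",     # "What time is it?" -> "I don't know."
}

def room_reply(phrase: str) -> str:
    # Look up the incoming phrase and pass out the corresponding reply.
    # A default reply covers phrases not in the instructions.
    return rule_book.get(phrase, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room_reply("你好吗？"))  # passes out 我很好，谢谢。
```

Nothing in `room_reply` understands Chinese; it only matches one string to another, which is exactly Searle's point.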

I agree, but I also think the analogy is flawed in that it doesn't go far enough. It's missing a representation of the link between the two phrases. The man is the link, but the man needs to be broken down further into the individual actions involved. The experiment needs to solve the problem of where the rules come from that lead to the result of "understanding."

My point is that if we list the two sets of phrases, we can draw lines between them to connect them. These lines represent the rules. We don't really understand them, but we can see the correlation. But how did the lines get there?

If we took each phrase and had visual representations of every nuance of it, then we could mechanically look for similarities between phrases and match them up. We might make mistakes and wrong inferences, but so does the mind. Think of the pictures as individual properties of the representation of the phrase. But then where would the pictures come from?
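The mechanical matching above could be sketched like this. The feature sets and the Jaccard-overlap measure are my own assumptions, standing in for the "pictures" of each phrase's properties:

```python
# Hypothetical sketch: each phrase is represented by a set of properties
# (the "pictures"), and matching is done mechanically by feature overlap.

def jaccard(a: set, b: set) -> float:
    """Similarity as shared properties over total properties (Jaccard)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Toy property sets for an incoming phrase and two candidate replies.
incoming = {"greeting", "question", "second-person"}
candidates = {
    "reply_greeting": {"greeting", "statement", "first-person"},
    "reply_time": {"time", "statement", "negation"},
}

def best_match(features: set, options: dict) -> str:
    # Pick the candidate whose properties overlap most with the input.
    # Wrong inferences happen when the overlap is misleading -- just as
    # the mind sometimes matches on superficial similarity.
    return max(options, key=lambda name: jaccard(features, options[name]))

print(best_match(incoming, candidates))  # -> reply_greeting
```

The point is that the "lines" between phrases no longer have to be handed down in a rule book; they can fall out of a simple, mechanical comparison of properties.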

We have to find a natural algorithm for the simple rules.

How this relates to the brain: since the brain represents the world using an electrochemical storage-and-retrieval mechanism, and it stores this information all over the brain in no specific place, it must have pointers, or some method analogous to computer hard-drive technology, for knowing where the next bit is and where the previous bit was. Electrons naturally travel from one potential to another, so there is probably an equivalent mechanism for chemicals. When the brain gets input, it starts processing, storing, and retrieving information and making associations, presumably using a simple physical algorithm built from chemical or electrical properties such as attraction or similarity. Since the brain is a system of processes, these simpler systems would work together using this simple algorithm, starting from the neuron and the smallest piece of information, to build more complex representations by looking for similarities and associations to memories from experience. In this way we arrive at the more complex illusion of consciousness.

To test this, all we need to do is look at incrementally less capable minds, starting with humans and working our way back through the species. The first step in this experiment should be a field trip to the local bar.
