Wednesday, September 12, 2012

Homework 3: The Chinese Room

    So, I just read the Chinese Room argument and several papers that argue for, support, or explain it and the concept it refers to. The argument is intriguing because it uses a very simple metaphor to show how a computer cannot function like a mind, but can only convincingly mimic one at best. Searle's argument (according to Wikipedia and other articles) rocked the Artificial Intelligence world and remains very controversial.
    To begin, I'd like to explain the metaphor. I find it poorly explained in a lot of the papers, and even though I'm fairly certain the only people who will read this already understand it ( *ahem* Manoj *ahem* ), I'd like to explain it for those who don't. Basically, imagine that you're in a room with a few buckets of Chinese characters written on slips of paper. You have a GIGANTIC rule book written in English, you don't understand Chinese, and in the room there's a slot where papers with various Chinese symbols come through. On the other side of the slot is a man who speaks fluent Chinese and is trying to have a written conversation with you. Now, he passes a slip through that translates to "Hello, how are you?" You consult the GIGANTIC rule book, which says that when they give you x y z, you respond with a b c, and you pass back a paper that reads "I'm fine, and yourself?" To the fluent speaker on the other side, you are responding intelligently, just like a fluent speaker would. But in reality, inside the room, you don't know what he said to you, nor what you responded with. You only followed the rules in the book.

    In this metaphor, the room is the robot, you with the rule book are the processor and its programming, and the man who speaks fluently just represents himself, a real person talking to the system. With this metaphor you realize that even though a robot can perfectly simulate intelligent conversation, it does so without actual understanding: the man in the room does not know what he is saying.


        If you understand the metaphor, then you can easily see what it implies about artificial intelligence. You cannot create the perfect rule book, insert it into a robot, and call it an actual mind. The robot has no real understanding; it's just convincing you that it does. I think this is a really cool argument because it's so simple yet so powerful. It maps closely onto what actually goes on inside a robot, and it makes you wonder: how could you make a mind? What would be the metaphorical equivalent of that? At face value this doesn't directly affect me, but I'm glad I've read it, and I'd strongly recommend it to others. There's a little more to it, such as weak AI vs. strong AI and the arguments against it, but once you've understood the metaphor you've gained what the paper really has to offer.

     Anyways, my reaction? Good writing, good point! But a bad attitude, and it could be written more clearly.

    8/10
