Thursday, February 14, 2008

Machines modeled after / as models for human thought

I think that one of the most interesting things about computers is what they are made to imitate and what they are made to explain. The people who make computers model them, in a way, on the human brain in order to better understand the human brain. What is thought? What is consciousness? How can we give these to a machine, creating “artificial intelligence”? Yet, while we strive to make a machine that acts the way the human brain acts, we also assume that once it is made...it can then teach us more about the human brain. As Katherine Hayles notes, “Humans, who have limited access to their own computational machinery (assuming that cognition includes computational elements...), create intelligent machines whose operations can be known in comprehensive detail (at least in theory). In turn, the existence of these machines, as many researchers have noted, suggests that the complexities we perceive and generate likewise emerge from a simple underlying base; these researchers hope that computers might show us, in Brian Cantwell Smith's phrase, 'how a structured lump of clay can sit up and think'” (Hayles 41). Isn't this, then, a paradox of sorts? How can you model something on something you don't fully understand and then hope that the result will make you understand the original?


What actually got me thinking about this was discussion section last Friday. Some people started talking about Google Street View and face recognition software, as if face recognition were something just around the corner. I don't believe that at all. Last semester, I took Intro to Cognitive Science, where this issue actually came up. The fact is that we don't really know how the human brain recognizes faces; it isn't handled by one set area of the brain. There have been experiments that tested possible ways a machine could recognize faces, to see whether the human brain works the same way, but in the end the problem is just too difficult: there are too many parameters. Because a machine is a machine...it has to be told what to include, what to exclude, all the parameters. If the lighting changes from one photo to another, the machine cannot tell that it is still the same person. Angles, shade, position...it all matters. So I don't really believe that machines and computers should be modeled on human thought processes and then used to understand the brain. They belong in separate categories, especially because we have not yet reached the point of understanding ourselves well enough to produce something that imitates humanity.
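To make the point about parameters concrete, here is a toy sketch in Python (not any real face-recognition system; the brightness numbers and the threshold are made up for illustration). A matcher that compares raw pixel brightness against a hand-picked threshold says “same person” for identical photos, but says “different person” the moment the same face is photographed under brighter light:

# A minimal, hypothetical sketch of the "too many parameters" problem:
# someone had to choose what counts as "close enough", and lighting
# alone is enough to break that choice.

def naive_match(photo_a, photo_b, threshold=20.0):
    """Declare a match if the average per-pixel brightness difference
    falls below a hard-coded threshold -- a parameter a human chose."""
    diffs = [abs(a - b) for a, b in zip(photo_a, photo_b)]
    return sum(diffs) / len(diffs) < threshold

# The "same face" captured twice: the second photo is simply 60 units
# brighter everywhere, as if a lamp had been switched on.
face_original = [100, 120, 90, 110, 130, 95]
face_brighter = [p + 60 for p in face_original]

print(naive_match(face_original, face_original))  # True  -- identical photos match
print(naive_match(face_original, face_brighter))  # False -- lighting alone breaks the match

The point isn't that better systems can't compensate for lighting; it's that every compensation is one more parameter a designer has to specify in advance, whereas we can't even say what parameters the brain is using.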
