
Before taking this class, I never really thought about machines having minds, or about words like “intelligence” and “thought” in general. I used to think AIs were too technical for me to understand, but now I know these problems are also too philosophical for me to grasp. Although the discussions and the project awoke new questions in me, some things did become clear, such as how hard it is to create a working robot.

 

After finishing this LEGO robot building project, I have learned that it takes a lot of thought and many tries just to make the robot do something as simple as a 180-degree turn (literally). Even the line tracking assignment was hard for me: although you have an idea of how you want the robot to move, if you cannot program it the right way, the robot will not follow your instructions.
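
To give an idea of what this kind of program looks like, here is a minimal sketch of a line-following loop, similar in spirit to what the assignment asked for. Everything in it is a hypothetical placeholder (the sensor function, the motor function, and the threshold value), not our actual code:

```python
# Hypothetical line-following sketch. All names below are placeholders,
# not the real API of our LEGO robot.

THRESHOLD = 50  # assumed midpoint between "on the line" and "off the line"

def follow_line(read_light, set_motors):
    """Keep the robot on the edge of a dark line on a light floor.

    read_light(): returns reflected light intensity, assumed 0-100.
    set_motors(left, right): sets the left and right wheel speeds.
    """
    while True:
        if read_light() < THRESHOLD:
            # Too dark: the sensor is over the line, so curve right
            # to move back toward the edge.
            set_motors(40, 10)
        else:
            # Too bright: the robot has drifted off the line, so curve left.
            set_motors(10, 40)
```

Even a loop this simple hides the hard part: finding a threshold and turn speeds that actually work takes exactly the kind of trial and error I mean.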

I do not know if I could say that our robot can think, but if the program is the brain of the robot, then it does have some kind of systematized thought process (I believe). But is it the robot’s mind? Or is it just following someone else’s orders? In this case, I do believe it is the latter. If I am not mistaken, there are people who believe that we humans function in the same way (slides from lecture 11). But the difference between the human brain and this robot’s brain is that it is still me who is “programming” my mind (I hope).

 

I still do not know if I fully understand the idea of “free will”, but I want to believe that humans have it. Could it be that we are simply unaware of the moment when “free will” makes a decision, or does that unawareness mean there is no free will at all? If the human brain is just a more complicated computer program, then what makes people perform a certain action? Where does the will to execute an action begin, and what makes you do one thing rather than another?

When it comes to our robot, those questions are easier to answer, because there is always someone on the outside “deciding” how to start the robot’s actions. I guess that when the decisions are made internally, we start calling them stronger AIs?

 

Although I still do not think that AIs need to be like humans, I do understand that imitating (recreating) humans is the easiest way to compare. Maybe when AIs start to make other AIs, they will “think” and “behave” in ways completely unknown to humans, but far more effective and “logical” in their own way.

Yuki
