Can machines ever become persons? The topic of this entry comes from a “Star Trek” episode entitled “The Measure of a Man”. If you are unfamiliar with it, check out the synopsis at this link. In this world there is seemingly no God, and the agreed laws of science are quite often ignored. In this respect, we must assume that most things are possible. However, Data’s presentation of human behavior does not necessarily mean that Data is human. He was able to mimic our behavior by having desires and inclinations, forming relationships, and seemingly making free choices.
The problem lies in Data’s hardware. That hardware was designed to do these things and programmed in a way that made Data analytical. The idea that Data was ever able to make a “free” choice is simply wrong. He was confined to the laws of science and to the moral standards as understood by the person who created him. Therefore, he was technically unable to be anything more than an android that analyzes data and makes decisions based on prescribed programming.
You may ask what makes a Christian so different. We were created by God, He gave us a moral code to live by, and the laws of science help us form conclusions. All of these are valid points, but consider that we are not initially inclined to serve God. Serving God is a choice we have to make with a free mind and a free heart. What makes our choice in this matter different from Data’s decision making is the uniqueness of our decisions. Data’s thinking is seemingly linear, while ours draws on every situation stored in memory (the mind) that has occurred in our lives. Data may have been able to retain information, but so does a computer, and that does not make a computer a human being. Such hardware is designed to do such things.
Perhaps we can best explain Data’s mimicking of human behavior by his hardwiring. He was able to analyze circumstances, reach a conclusion, and exhibit behavior based on his programming. Another example of this is a Furby. Remember those? You would pet it and it would seemingly purr. You would let it bite your finger to simulate eating, and it would giggle to simulate joy. When it felt a lack of connection, it would call for you. When you put it in a dark closet, it would yell out in fear. All of these examples demonstrated simulated emotion.
Human beings exhibit emotions based on how they actually feel. We are different from machines in that we react to situations differently. Take, as an example, the loss of a family pet with a terminal illness. Perhaps the woman in the house has a deep attachment to the pet, while the man’s attachment is close but not quite as deep. The animal dies, and the woman reacts with grief and sadness, while the man reacts with a sense of relief that the suffering has ended. Both people lived with the animal for the same length of time and developed attachments to it, yet they reacted differently. What makes Data’s case for being “human” so difficult to accept is the fact that he was the only one of his kind: there was no other android with the same technology to compare him with. Recall that in “I, Robot” the robots were also said to have the same abilities, yet they all acted alike when they revolted.
Also take into account the idea of morality. “I, Robot” presented us with the Three Laws of Robotics, a sort of morality for androids. However, that “morality” proved to be non-binding, as the androids revolted anyway. Whereas mankind lives in a society based on morality, androids only adhere to their programmed specifications, or, to put it more simply, to their designer’s sense of morality. Among these are often prohibitions against killing and against bringing bodily harm. Whereas human beings have designated for themselves which actions are unacceptable, androids are merely assigned such values. We have often seen examples in pop culture of androids created to bring harm. They are literally enslaved to their programmed cause and ideas. Any perceived “free thought” is still within the bounds of the designer’s code.
In this episode, Picard argues from the standpoint of materialism. He views Data as more than just a machine and, in effect, as a human being. He argues that Data’s brain and body are one and the same. Hasker addresses this when he raises the question of whether consciousness can only be present in biological systems or whether it can also come about through wiring and microchips (Hasker, 70). Picard emphasizes Data’s use of the word “my” in regard to his rights, freedoms, and choices. His argument rests on the idea that we can clearly see that Data has a brain and a body that work together to form ideas and emotions. Through this view he is able to dismiss the idea that the mind really exists at all, an argument that is quite easy to make simply because the mind cannot be seen.
Maddox seems to argue from the standpoint of dualism. He acknowledges that Data is capable of mimicking emotions and actions, but argues that these are products of the man-made specifications assigned to him. Maddox seems to grant that human beings have a mind that governs their lives and emotions, whereas Data’s actions are simpler and easier to map out and predict, which demonstrates Data’s machine characteristics. His desire to disassemble Data shows that he views Data as having only a physical state, not a spiritual one. If Data had a spiritual state as well, Maddox would be unable to understand him simply by learning about the technology behind him.
I’m going out on a limb here, but I see Maddox as also presenting a pre-established harmony approach. In this instance, the machine’s “god” is the man. Man made the physical state “pre-ordained” to correspond with the mental state, in this case Data’s processing technology. Therein lies Data’s lack of free will, and with it the failure of his case for being a human being.
Book reference: Hasker, W. Metaphysics: Constructing a Worldview. Downers Grove, IL: InterVarsity Press, 1983. ISBN: 0877843414.