The Issue with Morality | Teen Ink

April 4, 2018
By PolkaPens BRONZE, Apple Valley, California

“Have a seat here, Ms. Greene.” The scientist guides me to a reclined seat in front of a touchscreen. I follow her lead, taking a seat and examining the screen. It displays a light blue background, with the words “Touch to start” in bold lettering at the top.

“Just follow the instructions on screen and you’ll be right as rain!” The scientist smiles at me, speaking in a rather chipper tone, but I can’t help stopping her before she turns away fully. She turns back with a patient smile on her lips, and I wonder how many other people like me she has dealt with.

“Could you run me through the process once more? Just, just for good measure,” I mumble sheepishly, trying not to agitate her or wear out her patience. Luckily, it doesn’t anger her, or if it does, it doesn’t show on her face, and she makes her way back over to me.

She takes a deep breath. “Well, you’ve been chosen to partake in this test, which measures what you believe is morally sound. The test has no wrong answers; it’s merely what you believe is right.” She sounds like a customer service worker, repeating lines that have fallen off her lips so many times it’s almost muscle memory. “We will be taking the data we receive and programming it into these new Police A.I.s, like the one you see in front of you.” She gestures with a hand toward a man on the other side of the glass. He stares blankly back at me, his eyes robotic but with a hint of humanity, almost human yet not quite, like glass replacements. It’s actually quite uncomfortable to look at, so I turn back to the scientist. “He’s already been programmed with some of our prior clients’ choices, so he will be participating as well.” She turns back to me and grins with a smile that appears almost forced, again like a customer service worker.

“Thank you, that’s all I needed.” I return to the screen in front of me and watch her leave out of my peripheral vision. After the soft click of the door shutting, I tap the screen. It lights up in a flurry of colors, from yellows to blues to even a few shades of pink. A tutorial appears, repeating everything the scientist just told me in her light, chipper tone.

“I guess I didn’t have to ask after all…” I mutter to myself, watching the tutorial play out before me. Before long it concludes, and the first question is laid out before me. I see the A.I. pick its choice, then settle back to wait for what I assume is my answer. I lean forward, reaching for the screen in front of me, and read the question carefully.

“If a person(s) is shooting at the A.I., what should it do?” it reads. Quite an easy question. I tap the option that reads, “Use taser to stun, then apprehend.” Easy. A loading circle appears above the question, spinning for what seems like forever before moving on to the next question. A grey progress bar with a green section now appears at the top, reading “1 out of 10 answered” in green text.

“If a person(s) is seen with a weapon drawn, what should the A.I. do?” I ponder this question for a few quick moments, letting it stick in the back of my head. The weapon is drawn, but the person, or people, isn’t threatening anyone, so there’s no cause for violence, right? I tap one of the more peaceful options on the screen and look over to see the A.I. gazing directly at me, again with those only half-human eyes. I redirect my stare to the screen almost immediately, finding that the next question has already appeared in front of me.

Question three is a little different from the rest, dealing with a potential danger rather than a definite one. I really take my time on this one, considering which answer I find morally better than the others. I catch motion out of my peripheral vision again, and it grabs my attention. The A.I. looks as if it’s trying to distract me by waving its hand in the air, so I just ignore it, returning to the screen. It must be glitching out or something. I finally decide on an option, tapping it and moving on. This time, the weight of the question almost sticks with me. The warmth of the screen lingers on the pad of my index finger just a bit longer, as if my mind is still pondering the question even though it has already been answered.

The loading circle starts to hold more weight as well, a weight I can feel occupying my mind as I wait for the fourth question to load. Question four fades in, and I notice the pattern the test is following. The questions seem to be getting harder as they go, which isn’t the most original format in the world, but it works well for the test’s purpose. I take my time on this one as well, even if the A.I. is, presumably, waiting for my answer to continue. At least, I think that’s how the test works. I brush off these thoughts, returning my focus to the test in front of me. I finally decide on an answer, but feel a nagging in the back of my head telling me I may be wrong, a sharp anxiety that I’ve just chosen the wrong answer.

I try to push off the fear, reassuring myself that there are no wrong answers. It doesn’t work; my mind wanders to the context of the test. My answers could mean someone’s life. I could end up destroying another human being, and if I choose wrong, I could put several people in danger. My mind races with scenarios where my choices hurt innocent people, and I begin to dread the next question, letting time drag slower and slower, knowing it will only make my state of mind worse.

Question five displays, and my sense of dread was correct. This question is the hardest of the test so far, and I find myself trembling as I try to think of a morally correct answer that would cause the least harm to an innocent person. The point of the police is to serve and protect, which is why they switched to A.I.s in the first place, right? I try to calm myself down, finding that being lost in emotion is hindering my ability to think logically. Logic doesn’t seem to be working for me at the moment, so eventually I just choose the answer that looks best to me.

When question six loads, I notice it resembles question four in a way. It confuses me for a moment, but I press on regardless. Question seven follows the same pattern, handling a hazardous situation like question three. Then the realization hits me: the questions are getting easier now, winding down from the difficulty of just a moment ago. Eventually I reach the last question, and just as I suspected, it’s as easy as the first. I complete it with ease, but still can’t shake the dread from earlier.

The screen flurries into color once more, with text reading “Great job! You may now rise from your chair and interact with your testing partner.” This confuses me. Why is the test referring to the A.I. as if it were another human being? I don’t have much time to dwell on it, as the glass begins to rise and I see the A.I. get up from its seat. It makes its way over and stands in front of the glass. Then it dawns on me.

The A.I. is sweating. I’m not entirely sure how far we’ve come with developing humanoid robots, but I’m sure an A.I. wouldn’t be able to excrete bodily fluids. I watch as the man’s eyes dart between mine; he looks like he can’t tell if I’m real or not, so I take it upon myself to clear things up.

“Looks like we’ve been led on.” I speak up first, rising from my seat and approaching him. He doesn’t say anything, choosing instead to fiddle with his hands. I wonder why he might be so hesitant to interact with me. Could he still think I’m not an actual human? “Sam Greene,” I state, stretching my hand out to shake his. He stares at it, then slowly grasps my hand. Bingo: his hands are clammy and warm. No way this is a robot.

“Juan. You’re really realistic for an A.I., Sam,” he mumbles, and I can’t help but burst out laughing. This startles Juan, and he lets go of my hand and steps backwards.

“Like I just said a moment ago, we’ve been led to believe the other person is an A.I., when we’re both obviously human. I only figured it out when I noticed how sweaty you are.” I chuckle, not bothering to step closer to him. Juan doesn’t seem to have realized he was sweating until now, and he chuckles nervously, wiping his hands on his shirt.

“Gee, I was so scared you were gonna replace me in my life if I answered these too personally…” he sputters. “I’m actually really relieved you’re a human.” I give him a funny look. Replace him? Does he actually believe an A.I., incapable of emotion or anything that strays from its programming, would replace him?

“It’s funny you say that. I was scared I was going to end humanity as a whole if I chose incorrectly,” I state, trying to be understanding, even if he is my complete opposite personality-wise. He shoots me a puzzled look, and I know I have to explain more carefully. “You see, I was led to believe this was a test to help program Police A.I.s, and I was terrified you’d take my wrong answers and go rogue, killing whatever human you saw fit rather than doing your job to ‘serve and protect.’” I watch the confusion leave his face.

“I was told the same thing, actually. Do you think the scientists can see us? I hope they know we figured out their-” Juan is cut off as the same scientist who explained the test to me enters the room, grinning widely.

“Yes, we can see you. I’m surprised you two figured this out, and together, no less,” she states, walking toward us. She’s holding two clipboards, which she hands to us: consent and release forms. I begin to fill mine out, consenting to the lab using my answers as research for the project.

“So if research for a police robot wasn’t the point of the test, what was it really for?” Juan speaks up, and I shift my gaze to the scientist, who is sheepishly rubbing the back of her neck.

“Well, you see, we’re actually trying to find the psychology behind human morality. In our research, we found that different people have different views of what’s right and what’s wrong. Realizing this, we created these tests to see how many different views individual people hold. To do so, we had to create two groups of two different kinds of people. Logical thinkers,” she motions to me, “and creative thinkers,” she motions to Juan. “Seeing as the two types are basically complete opposites, we figured we’d get very different results. You two are actually the sixth pair we’ve tested, and we’ve got quite a few more pairs lined up.” She looks at us both. “So, since you’ve completed your forms while I explained, you’re both free to go.” She turns on her heel and leaves the room, leaving Juan and me to stare in wonder.

“Well, at least no A.I. is gonna take our places,” I joke, grinning for lack of anything better to do. He laughs at this and starts to head for the door. I do the same, relieved that my worries were pointless.

The author's comments:

I hope that people can take whatever they can from this piece. There's a loose message here: morals aren't real laws, and they're totally up to each individual and their view of the world. For example, a serial killer's morals would be much different from those of your typical working citizen. Maybe this story will change your views, I'm not sure. Take what you can from it.
