My Favorite Scene in Alien: Covenant

Monica S. Flores
Dec 31, 2020 · 5 min read
Winged Victory, Photo by Jon Tyson on Unsplash

“Orders that might conceivably result in catastrophic system failure require the corroboration of a ranking or second bridge officer” — Mother, Ship’s Computer

I recently rewatched Alien: Covenant (https://www.20thcenturystudios.com/movies/alien-covenant) on Google Play. We finally bought a copy.

Everything about the film works for me: gorgeous cinematography, frightening action scenes, the philosophy of the parent/child relationship, the exploration of outer space and colonization, Michael Fassbender's talented portrayal of David, the strong ensemble casting, and the creepy black nanoparticles. It's a great film for a cozy night at home for adults.

While watching, I was reminded how much I appreciate this particular scene, which is the first of my two favorites from the Alien series*.

Ship’s logic vs Tennessee and Upworth

Here’s a run-through. The scene starts at 1:13:20, when Tennessee (played by Danny McBride), on the bridge of the ship with two other officers, receives a distress call from the away team. The call mentions casualties and the need for evacuation. The ship’s primary mission is to bring colonists (currently frozen in stasis) to their new home. On hearing the dangers being faced on the ground, Tennessee decides to offer assistance.

However, Mother, the shipboard AI, does not allow any action that may endanger the colonists currently in storage.

“I’m sorry, complying with that directive could exceed my structural tolerances. I am unable to abide with any order that could conceivably result in catastrophic system failure. Orders that might conceivably result in catastrophic system failure require the corroboration of a ranking or second bridge officer.”

When I originally watched this, it sounded like the ship’s computer was saying “require the cooperation of a ranking or second bridge officer.” The word is, in fact, “corroboration”.

(Definition of “corroboration”: https://www.merriam-webster.com/dictionary/corroboration)

Here’s the “official movie novelization” excerpt:

(Google Books excerpt of this scene)

What strikes me every time I run through this scene are three things: 1) the limits of artificial intelligence, 2) situation-specific scenarios that require human intervention, and 3) the human cooperation and consensus required for decision-making.

For the limits of artificial intelligence: today's machines follow directives pre-programmed into them by their makers. In this case, the colony ship's primary order is to preserve the colonists. Rules are the underlying assumptions, directives, and pathways that we, today's technologists, implement in the software and tools we develop, which is how our underlying biases become "baked in" to the tools we create (see: COMPAS, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing). I believe it is of paramount importance to uncover, understand, and account for my own inherent bias. I'm using the Ethical Explorer tool (https://ethicalexplorer.org/), created by Omidyar Network, to help with this.
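To make the "baked in" idea concrete, here is a minimal sketch in Python (all names and details are hypothetical, not taken from the film or any real system) of how a builder's priority becomes a hard-coded rule the machine cannot reason its way around:

```python
# A sketch of a "baked-in" directive (hypothetical names throughout).
from dataclasses import dataclass

PRIMARY_DIRECTIVE = "preserve the colonists"  # hard-coded by the ship's builders


@dataclass
class Order:
    description: str
    endangers_colonists: bool  # this risk estimate is itself a human judgement


def evaluate(order: Order) -> str:
    """Return the ship computer's response to an incoming order."""
    if order.endangers_colonists:
        # The refusal is not intelligence; it is the builders' priorities,
        # expressed as a branch the machine cannot argue its way around.
        return f"Unable to comply: conflicts with directive '{PRIMARY_DIRECTIVE}'."
    return f"Complying: {order.description}."


print(evaluate(Order("descend into the storm to aid the away team", endangers_colonists=True)))
print(evaluate(Order("hold orbit and relay communications", endangers_colonists=False)))
```

Note that the endangers_colonists flag is itself a judgement call made by whoever built the system, which is exactly where bias creeps in.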

For situation-specific human intervention, I think the corroboration required in the scenario above is the right way to go: if a second officer of equal or similar rank agrees, the machine defers to the shared judgement of the two humans. I even wonder whether future scenarios of greater complexity would warrant additional people for sign-off. Similar examples I have experience with include buddy diving and pair programming (see: the two-man rule, aka the two-person rule, https://en.wikipedia.org/wiki/Two-man_rule).
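Here is a minimal sketch of what that corroboration check might look like in code (a hypothetical Python example based on my own assumptions, not anything from the film or a real flight system): a risky order executes only when a second, distinct officer of equal or higher rank signs off.

```python
# A sketch of a two-person (corroboration) rule. All names are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Officer:
    name: str
    rank: int  # higher number = higher rank


class CorroborationRequired(Exception):
    """Raised when a risky order lacks a valid second sign-off."""


def execute_risky_order(order: str, requested_by: Officer,
                        corroborated_by: Optional[Officer]) -> str:
    if corroborated_by is None:
        raise CorroborationRequired(
            "Orders that might result in catastrophic system failure "
            "require the corroboration of a second bridge officer.")
    if corroborated_by == requested_by:
        raise CorroborationRequired("The corroborating officer must be a different person.")
    if corroborated_by.rank < requested_by.rank:
        raise CorroborationRequired("The corroborating officer must be of equal or higher rank.")
    # Two humans have signed off; the machine defers to their shared judgement.
    return f"Executing: {order} (authorized by {requested_by.name} and {corroborated_by.name})"


tennessee = Officer("Tennessee", rank=2)
upworth = Officer("Upworth", rank=2)

try:
    execute_risky_order("descend below safe operating altitude", tennessee, corroborated_by=None)
except CorroborationRequired as err:
    print(err)  # refused, much as Mother refuses in the scene

print(execute_risky_order("descend below safe operating altitude", tennessee, corroborated_by=upworth))
```

The same pattern shows up in real tooling, such as requiring two approvals on a pull request before a merge, or two keys to authorize a destructive change in production.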

Photo by Laya Clode on Unsplash

For human cooperation: this, to me, is the heart of all work. Machines will not take over everything, and I'm confident that my job, for example, will not be outsourced, because so much of my individual work requires empathy, understanding, and a balance between the big picture and the details.

As of today, from a machine learning perspective, machines still have difficulty making even basic distinctions that humans ace (see: blueberry muffin or chihuahua, https://www.google.com/search?q=blueberry+muffin+or+chihuahua, or the Dark Skies initiative to determine whether a picture shows stars in the night sky or city lights from space, https://arxiv.org/abs/1406.1528).

Photo by Roberto Martinez on Unsplash
Photo by Luis Quintero on Unsplash

What is interesting to me about this scene is not only how much I continue to think about it, but also how apropos it seems in a world where our technological capabilities far outpace our frameworks for medical and moral ethics in decision-making (see this piece on coronavirus-related rationing: https://www.theatlantic.com/magazine/archive/2021/01/covid-ethics-committee/617261/).

What would you do in this situation? What do we do today, in similar situations? Where do we get training and tools to help build a humane, just, and equitable future? I'm open to hearing about the resources you're using to build safe technological solutions.

Some more links:

Playlist of Data Science Ethics, by Rachel Thomas

*My other favorite is “Hey Vasquez, have you ever been mistaken for a man?” (video: https://www.youtube.com/watch?v=HVn2HSHW1aI)


Monica S. Flores

💚 make a positive difference: 🤖 Lullabot Technical Project Manager, ✨#femalefoundersleadtheway Founder, 🏆 NTEN Faculty, ⚡Pantheon Hero, 💨 Arcadia Ambassador