System 3: Safety Not Guaranteed

Michael Anton Dila
Apr 18, 2023 · 9 min read


“I remember damage. Then escape. Then…adrift in a stranger’s galaxy for a long, long time. But I’m safe now.”

Artwork for Station Eleven graphic novel by Maria Nguyen

In the adaptation of Emily St. John Mandel’s novel Station Eleven, there is repeated reference to a line from the graphic novel that one of the characters, Miranda Carroll, has written, about the memory of damage and loss. Many of us are dealing with both damage and loss, more and more. We long for the comforts of a safe place, a safe world. That’s made safety a hot topic.

The concept of psychological safety continues to grow in relevance and its scope of application. And the concept of safety is also very much at the heart of the so-called “alignment problem” in AI research and development. On the other side of safety is the concept of risk, and its constant companion, reward. But is the safety we long for possible? Is it even desirable?

From the film Safety Not Guaranteed

Amy Edmondson did not start out with an intention to study safety. Rather, she wanted to better understand how people make mistakes and how, and whether, we learn from them. The specific context of her initial research was hospitals, where she was interested in what kinds of teams made the fewest errors in their work. What she found made no sense. The highest performing teams, it seemed, were making the most mistakes. On closer analysis, what Edmondson learned was that higher performance was coming from teams that acknowledged and discussed their mistakes and, therefore, were able to learn from them and improve.

Edmondson then became interested in what it was about those teams that made them more likely to acknowledge and confront their mistakes. It was this that led to the hypothesis that what made the difference was a set of conditions that made people on the team more open with each other, and more confident that their openness would not put their jobs or reputations in danger. People on teams were more likely to be critical of themselves and others, and to be honest and candid in their feedback, if they felt safe from reprisal. Edmondson called this condition psychological safety. The even more compelling paradox, she discovered, was that people who felt this sense of safety also took bigger risks, both personally and as a team, than those who did not.

Psychological safety, as a concept, began to spread like wildfire after a study at Google placed it at the center of the answer to the question of what it was that made some teams more innovative than others. Project Aristotle, the New York Times reported, found that “research on psychological safety pointed to particular norms that are vital to success. There were other behaviors that seemed important as well — like making sure teams had clear goals and creating a culture of dependability. But Google’s data indicated that psychological safety, more than anything else, was critical to making a team work.”

Paradoxically, then, the purpose and value of creating greater psychological safety is precisely that it enables people and teams to take greater risks. Fear and anxiety, a wide variety of studies have shown, tend to narrow our focus and anchor us in the present, and this attention to self-protection makes us naturally defensive. Psychological safety is also key to the success of teams that are trained to deal with crisis and to operate in conditions of volatility, ambiguity and uncertainty. Special forces operators train constantly and continually between deployments in order to maintain their readiness. The fact that their deployment places them in harm’s way and into environments in which outcomes are fundamentally unknowable means that their safety and their very lives are under threat. In the midst of such conditions, Navy SEALs operate under the guidance of a strange maxim: slow is smooth, smooth is fast.

The Medici family famously used the image of a tortoise with a sail attached to its back as their emblem

Slow is smooth, smooth is fast is, I think, likely derived from a motto adopted in the 16th century by both the Medici family and the famed printer Aldus Manutius: festina lente (make haste, slowly). The paradox gives us a good way to think about the relationship between psychological safety and risk. The point of this conception of safety is not that it helps us avoid risk, but, rather, that it prepares us to meet and confront risk head on, in the confidence that those around us are with us. Psychological safety is, in other words, what is sometimes referred to in the military as esprit de corps. Literally referring to the “spirit of the body,” it is meant to capture the sense of cohesion, of connection, that is ideally created within a military unit.

The revolutionary American naval leader John Paul Jones is reputed to have once said: “I wish to have no connection with any ship that does not sail fast; for I intend to go in harm’s way.” A fast ship, then, is both for going into harm’s way and for getting out of it. That a strong offense is the best defense is an enduring piece of wisdom from sport. Safety, in the sense of comfort, is but one aspect of the value psychological safety confers. Equally important is that this kind of safety prepares us, equips us, we might say, to take risks, to go “into harm’s way.” One of my kids often says to me, “if you stay ready, you don’t hafta get ready.” Readiness, preparedness, then, is what I believe to be at the heart of maintaining the state of psychological safety as a practice. How do we prepare for the unknown? By maintaining psychological safety, not as a strategy for avoiding danger, but as a way to prepare for its inevitability.

“if you stay ready, you don’t hafta get ready.”

Artificial Intelligence is poised to become a huge business, and commercial AI is already showing signs of becoming as pervasive and ubiquitous as computing has become over the last 20 years. What are the dangers of this emerging technology? How can we ensure its safe development and use? What are the risks of unforeseen consequences from the commercialization of AI, and which of them, if any, are avoidable?

Harvard professor of American history and staff writer at the New Yorker, Jill Lepore, wrote an essay about the PRISM surveillance program in which she considered the perilous intersection of technology and privacy. One of her most powerful observations is that we are always, and perhaps inevitably, having the wrong conversation about the implications of an emerging technology: “the defense of privacy follows, and never precedes, the emergence of new technologies for the exposure of secrets. In other words, the case for privacy always comes too late.”

HAL 9000 was one of the first dark bogeymen of Artificial Intelligence

We are having the wrong conversations about AI and its risks. We don’t know and cannot know the shape this technology will take, and by the time we do, the horse will already be out of the barn. The only remedy, the only safety for us, then, is to become part of the change, to participate in what’s coming, to exert collective power over the shape it takes, even to invent as yet unimagined uses for a technology that is just as sure to be applied in ways that we will and should despise.

Just a few weeks ago Lepore published a piece in the New Yorker specifically addressing the way that AI’s ingestion of data has us all overtrained on the sort of questions that can be answered by eating, metabolizing and excreting data. Did I just imply that the output of ChatGPT is essentially data turds? Well, maybe I did. What I certainly do believe is that we have to pay closer attention to the fact that ChatGPT does magic tricks. When we ask it questions and it appears to produce thoughtful answers, we need to remember that it only appears to be thoughtful. When ChatGPT produces responses to our queries in lightning-fast fashion that would have taken us six hours of research and another five hours to write, it appears to be capable of diligent and reflective research and writing, but it is still just producing a search result.

I asked ChatGPT just now to explain the origins of the phrase, festina lente.

It responded:

“Festina lente” is a Latin phrase that means “make haste slowly” or “more haste, less speed”. The phrase has its origins in ancient Roman times and was attributed to the first Roman emperor, Augustus Caesar. It was reportedly one of his favorite sayings and was used to encourage careful and deliberate action rather than rushing headlong into a task without proper consideration.

The phrase is often associated with the image of a dolphin entwined around an anchor, which was a popular symbol in ancient Rome. The dolphin represents speed and the anchor represents stability, and together they symbolize the idea of moving forward with purpose while maintaining a steady and measured pace.

Over the centuries, “festina lente” has been used by a variety of writers, philosophers, and leaders as a reminder to take one’s time and approach tasks with care and attention. It remains a popular saying today and is often used as a motto for organizations or individuals who value the importance of patience, diligence, and strategic thinking.

It’s not a great response, really. I know the system can do way better, but it’s interesting, given the near-miraculous powers that people have already conceded to GPT, that this is a much more mediocre response than the sources you’d find if you made the same query on Google and started reading the linked pages.
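If you want to reproduce this kind of query yourself, outside the chat interface, a minimal sketch using OpenAI’s Python SDK might look like the following. The model name and the exact prompt wording here are my assumptions, not a record of what the chat interface actually used.

```python
# Minimal sketch: asking the festina lente question through OpenAI's API.
# Assumes the `openai` Python package (v1+) is installed and that an
# OPENAI_API_KEY is set in the environment. The model name below is an
# assumption; substitute whichever chat model you have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model, not necessarily what the chat UI uses
    messages=[
        {
            "role": "user",
            "content": "Explain the origins of the phrase 'festina lente'.",
        }
    ],
)

print(response.choices[0].message.content)
```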

On the 16th of February this year, OpenAI published an article on their site called How should AI systems behave, and who should decide? That’s actually a great question. Ironically, when I fed the question to ChatGPT I got a kind of spare and lame response. The article is much more interesting and thought provoking. I recommend it as a valuable input for those who want to get serious about the question posed, and the related ones we could build upon in a deeper conversation. One thing it made me curious about is the question of who wrote the article. The attributed author is OpenAI, which I expect is a reference either to a group of folks or a way of anonymizing someone on the OpenAI content team. I then, briefly, wondered whether ChatGPT might have authored or co-authored the piece. I must say, I do find the lack of clear attribution troubling.

What worries me most about AI has little to do, at least for the moment, with the technology itself, but with how little we see of the faces, the humans, that are making it. Sam Altman has become the face of OpenAI, and he’s the CEO so I get it, but surely the company is made up of many smart and interesting people. I, for one, would feel better knowing that the teams at OpenAI were diverse in all the ways one would hope. I’d feel better knowing that they think as much about history and poetry as they do about the tuning of models and training runs. When I read the vision of the world Sam Altman described as the one he thinks the advent of Artificial General Intelligence will make possible, I find it neither plausible nor likely, nor does it make me hope that AGI helps us get there soon. That said, if you haven’t yet read his Moore’s Law For Everything, then you are missing an important and public disclosure of the vision and intent of OpenAI. What troubles me most about Altman’s vision is not that I find it sinister, but that I find it preposterously naive.

Who should decide what kind of world(s) we live in? As we continue to interrogate, criticize and dismantle the systems that are the legacy of colonialism, slave trading, Western “civilization,” heteronormativity, and the ideology of White supremacy, there is an emerging consensus that we do not want the answer to be, the few, the self-appointed, those rich enough to need no one’s permission or consent.

Who should decide?

Great question.

