Doing Better By Knowing Better

Michael Anton Dila
Dec 27, 2023

What AI might have to teach us about human intelligence.

Alan Turing, British mathematician and forefather of AI

Advances in AI research and development over the last 35 years have seen the emergence of systems capable of modeling mathematically complex human activities, like the games of chess and Go, and besting the most accomplished human players. Machine learning and other computational approaches to “intelligence” have been finding their way into software systems we use daily, from GPS navigation and speech recognition to the search tools we now utterly depend on.

There is a real sense in which we can say that AI systems learn, especially more recently developed ones that are built on deep learning techniques and models of neural networks. ChatGPT, and other LLM-driven systems, have even more recently amazed us with their capacity to deliver what appear to be thoughtful and coherent responses to our queries. We have been drawn into the illusion that these systems “know” something we don’t, but an illusion is precisely what that is.

Knowing is neither a merely computational phenomenon nor a simple or singular one. Knowing is complex. Reflect on the following question: “How do you know?” Think about the things you know and understand and try to remember how it is you came to know them. How do you know the difference between a joke and an insult? How do you know that taking the left at the town square is a better route to the Farmer’s Market? How do you know that you are the same person this morning that you were when you went to bed last night? How do you know you are watching a sunset rather than a sunrise?

There are many senses in which we can say we know things, and many different ways of knowing. Different ways of knowing are suited and adapted to different activities, concerns and goals. Knowing is related to thinking.

Alan Turing’s original question in his 1950 paper, Computing Machinery and Intelligence, was: “Can machines think?”

Humans think so that they can come to know. We want to know so that we can guide our actions, so that we can make better decisions and choices. Thinking comes in as many shapes and sizes as the things we think about. We give our ways of thinking names. Astronomy is what we call our thinking about the stars; astrology, our thinking about the fate in our stars. We think and we seek to know so that we can live beyond the limitations of our natural environment. The minute that human beings started thinking, we began to imagine ways to make our world other than simply as we found it. And in this moment we gave birth to the artificial, and put ourselves on a path to building worlds within worlds.

The Lascaux caves provide some of the earliest evidence of human capacity for representing what we know.

Many people are used to thinking of knowing and knowledge in relation to facts and information. But for most of human history and across cultures, we have talked about ways of knowing in relation to wisdom. In our longest standing traditions of knowing, we seek to learn, to share what we learn and to use what we learn to make our lives better.

When humans started representing things we saw, knew and understood, we began to develop technologies of thinking. The representation of the hunt in the cave paintings at Lascaux offers us early evidence of a system of communication that helped to establish the terms of a common reality and knowledge that could be shared beyond immediate experience. Such systems are our first systems of “artificial intelligence”: they not only record experience, they bring us into a complex system of orienting ourselves toward a future for which we can make plans and have strategies.

From the beginning, the development of tools that were more than simply prosthetics (such as tools with sharpened edges for cutting what our teeth couldn’t, or tools with handles and weights, like hammers, that could be force multipliers for the power of our arms) was focused on representing thought in ways that we could share. Moving from image-based systems to more abstract and symbolic ones, the development of language and mathematics made possible the first systems of information and computation.

Maps are one of the early models of complex computational technology

In his incredible ethnography of navigation systems, Cognition in the Wild, Edwin Hutchins points out that a nautical chart is a computer. It’s a powerful and deep insight. When we start to think about what maps are and how they work, we begin to realize things about them that were never thoroughly explained to us: the way that visual images and representations, geometry, paradigms of scale, cardinal orientation and myriad other elements are brought together so that we can calculate location and bearing and chart a course. This computational power is not autonomous, but powered by human minds and perception, and, in turn, enables the complex coordinated action of many people working in concert.
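To make Hutchins’s point concrete: the bearing a navigator finds graphically on a chart, with parallel rules and a compass rose, is a genuine computation. The sketch below (an illustration of this essay’s point, not something from Hutchins’s book; the coordinates are arbitrary examples) uses the standard great-circle initial-bearing formula to do the same work symbolically.

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees
    clockwise from true north. Inputs are latitude/longitude in degrees.
    This is the calculation a chart lets a navigator perform by eye."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360) % 360

# A point due east at mid-northern latitude bears slightly north of 090,
# because the great-circle route bulges toward the pole.
print(round(initial_bearing(42.36, -71.06, 42.36, -60.0)))
```

The navigator with a chart and the function above are running the same computation in different media, which is exactly the sense in which the chart is a computer.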

Artificial Intelligence is, I would argue, not reducible to what we call AI today, which is, in any case, really a catchall for a diverse collection of techniques, models and systems of computing. AI, in fact, gets much more interesting when we connect it to the broader, deeper and richer history of the human development of systems of intelligence, designed to connect minds and make available powerful artificial systems for knowing and deciding. The three of which I have been a serious student are those of language, science and law. My PhD dissertation was focused on an examination of “technologies of objectivity” that are the engines of these systems of thought.

Language, Wittgenstein taught us, shows us “forms of life.” His idea of language games challenged the reduction of language to logic or mathematical rules: “… the word ‘language-game’ is used…to emphasize the fact that the speaking of language is part of an activity, or of a form of life.” Understanding language as an embedded system, rather than merely symbolic artifacts, provides us with a much richer way of understanding not only what language is, but how it works. Games are simple but powerful systems made up of rules, players and a field (the context, sometimes physical, that defines the boundaries of play).

The U.S. Constitution is the cornerstone of one of the world’s most powerful AI systems

Law provides us with another example of a human system that, like games, is constituted by rules, players, and a field. A legal system, though made of simple components, is not only complex, but exists precisely to help us govern ourselves and each other within the complexity of a social and political order (each, also, an artificial system). Law exists not simply to give us common rules to live by and judge each other against, but to make available a system for reasoning about complex problems that arise in the governance of dynamic social worlds. The legal system isn’t itself intelligent, any more than systems of software and hardware alone are. Rather, the legal system enables a collaborative activity through which we can reach intelligent conclusions about important matters. It is a system that operates so that we can do better by knowing better. When it functions at its best it helps us to better understand how to live well together, to find how to create compassion, fairness and justice.

The point of intelligence is not merely to know better, but to do better. AI that is not informed by our diverse traditions of wisdom and knowing cannot ever attain the status of intelligence, in its most robust sense.

“Can machines think?” Turing asked us. We may be getting closer to making machines that can. If we do, they will join a world already well populated by other forms of artificial intelligence. A world in which things know more, but do no better, will not have gained new intelligence.




Michael is a Design Insurgent and Chief Unhappiness Officer