Picture a cognitive light cone… or the ethics of autopoietic technology
Cognitive light cones, infinite oceans of stress, morphogenesis, ethics, autopoiesis, technology, intelligence, and care? How often do you get a chance to read a paper that weaves these concepts together in a way that, by the end, leaves you looking at yourself, your community, an ancient holy scripture, your gut microbiome, the trees in your local park, the nature of technological partnerships, responsibility, and emerging hybrid human/machine AI systems through an entirely new lens?

The new paper is in a special issue of the journal “Biosystems” dedicated to Autopoiesis: Foundations of Life, Cognition, and Emergence of Self/Other. The paper we’ll review today is called “Toward an Ethics of Autopoietic Technology: Stress, Care, and Intelligence.”
First, a disclosure: I stumbled on this paper because I’m a fan and follower of the work of Michael Levin, and I’d suggest you read any new work that comes out of his and his collaborators’ labs. He is one of the authors of this paper, and he seems to cycle between big-picture theories that challenge many of the current paradigms about life, intelligence, and, at times, the scientific method itself, and hands-on experimentation that has led to some significant and breathtaking results.
Let’s start by dissecting a word that lies at the heart of their theory.
Autopoiesis has at its root the Greek word poiesis, which means to bring into being something that didn’t exist before. The authors use examples like a flower blooming or a butterfly emerging from a chrysalis. Basically, it’s the idea of making something, which in essence is what we do when we create (or discover) some new technology. We (and, I would assert after reading this paper, all intelligences) are toolmakers. The central thesis of the paper is that what we bring into being needs to be more about the path we follow, the values we hold, and the care we take as we become more adept at shaping nature. The authors assert that we need to align our values around care, and to reframe the stresses we face, and the ways we resolve them, from a larger, more encompassing point of view that sees technology as a partner rather than a tool. Autopoiesis adds the idea that entities can be self-making, self-repairing, and in some way possessed of a distinct agency within the world. Francisco Varela and Humberto Maturana developed the theory of autopoiesis fifty years ago, and this paper builds on their ideas in subtle new ways.
You can’t act and remain unchanged.
The first thing that struck me when reading was the assertion that having agency and being changeless are mutually exclusive. You can’t take action in the world and not be changed by it. The authors point out that we (all agential beings) are stressed by what is, and we strive to bring it into alignment with what we care about and how things should be. We are, all of us, emergent multi-scale processes of self-construction. At the most basic level, every cell is another cell’s environment, and every person is shaping another person’s environment. They assert that we are self-less selves, emergent properties of this never-ending process loop, which they call the stress, care, intelligence (SCI) loop.
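To make that loop concrete, here is a minimal sketch of one pass through it. This is my own toy formulation, not the authors’ model: the state variables, setpoints, and care weights are all made up for illustration.

```python
# A toy stress-care-intelligence (SCI) loop -- my own sketch, not the
# authors' formal model. Stress is the mismatch between the sensed world
# and a setpoint; care weights which mismatches matter; intelligence is
# choosing the action that most relieves the weighted stress.

def weighted_stress(state, setpoints, care):
    """Total mismatch with what the agent needs, scaled by what it cares about."""
    return sum(abs(state[k] - setpoints[k]) * care.get(k, 0.0) for k in setpoints)

def sci_step(state, setpoints, care, actions):
    """One pass through the loop: sense stress, act to relieve it, be changed."""
    best = min(actions, key=lambda act: weighted_stress(act(dict(state)), setpoints, care))
    return best(dict(state))  # acting returns a changed world -- and a changed agent

state = {"temperature": 35.0, "food": 0.2}
setpoints = {"temperature": 30.0, "food": 1.0}
care = {"temperature": 1.0, "food": 2.0}  # this agent cares more about food
actions = [
    lambda s: {**s, "food": s["food"] + 0.5},                # forage
    lambda s: {**s, "temperature": s["temperature"] - 3.0},  # seek shade
]
print(sci_step(state, setpoints, care, actions))  # the loop then repeats, forever
```

Relieving one stress leaves the others standing, which is why the loop never terminates: the next pass starts from the changed state.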
Curiously, this resonates with me as a longtime follower of Herb Simon’s concept of the mind (I blur my eyes and think “self”) as being like a pair of scissors, an image that relates to his theory of bounded rationality. You can’t look at a pair of scissors without looking at both blades and how they push against each other. The place where the blades touch is where the cutting happens. Simon posits that one blade of the mind is the intelligence looping through some set of rules. If I’m an ant it might be: “Look for sugar, leave a scent trail in the dirt, and repeat until you find some. If you find sugar, follow your trail back home to share with the colony.”
The other blade is the environment the ant pushes against: the dirt that is the ant’s native medium. The ant is held to the ground by an invariant (at least to ants) called gravity. Time is another feature of the ant’s environment, inexorably moving forward one second per second. The other members of the colony, following those same rules and leaving their tracks in the dirt, are also a feature of the ant’s environment. From the meeting of these two blades, the agent and the environment, we get anthills maintained at a steady temperature, farms of fungus, wars, and the emergence of a form of collective intelligence. If you’re an ant, you’re from a long line of autopoietic agents who have survived and thrived over a vast span of time. After reading the paper, I now wonder if stress is the resistance of the metaphorical paper to being cut, care provides the motive force to squeeze the scissors’ handles, and intelligence is the emergent action that results. In biology, what the ants are doing is called “stigmergic” collaboration, coordinating through marks left in a shared environment, as in the sketch below.
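That rule loop is simple enough to sketch in code. Here is a minimal stigmergy toy covering just the trail-laying and trail-following halves of the rule; the grid size, deposit amount, and evaporation rate are made-up parameters, not anything from the paper or from myrmecology.

```python
# A toy of the ant's rule loop: move toward the strongest scent, deposit
# pheromone where you walk, and let trails evaporate. No ant sees the whole
# colony; coordination emerges through marks left in the shared environment.
import random

SIZE, DEPOSIT, EVAPORATION = 20, 1.0, 0.95
pheromone = [[0.0] * SIZE for _ in range(SIZE)]

def step(x, y):
    """Move to the neighboring cell with the strongest scent (ties broken randomly)."""
    neighbors = [((x + dx) % SIZE, (y + dy) % SIZE)
                 for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
    strongest = max(pheromone[nx][ny] for nx, ny in neighbors)
    return random.choice([(nx, ny) for nx, ny in neighbors
                          if pheromone[nx][ny] == strongest])

ants = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(30)]
for _ in range(100):                    # each tick, every ant follows the same rule
    ants = [step(x, y) for x, y in ants]
    for x, y in ants:
        pheromone[x][y] += DEPOSIT      # the dirt itself is the shared memory
    pheromone = [[p * EVAPORATION for p in row] for row in pheromone]
```

Run it and the ants converge onto shared trails even though no ant knows any other ant exists: the environment is both blade and message board.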

Our own consciousness would be hard to find in any one place. Much like the Ship of Theseus, even though all of our parts are replaced over and over again and we endure countless struggles, our sense of self remains intact. We emerge from all of our past experiences and stresses as we work to repair ourselves and move forward.
I can’t help but also be reminded of a framework developed by Steven C. Hayes called “Relational Frame Theory” and the related efforts Hayes and his collaborators have developed and practice around Acceptance and Commitment Therapy (ACT) and building prosocial worlds. In particular, there is something called the ACT matrix, which is used to help individuals and teams “unhook” from their inner thoughts and past, often painful, early experiences and move toward vitality based on their values.
In that framework there are the inner thoughts (in the head), the values (in the heart), and what you are doing in the world (with your hands and feet) either to move away from pain or toward vitality.
You can imagine this SCI loop moving through spacetime as you take steps forward through your life. My favorite quote from this domain is from E. O. Wilson and David Sloan Wilson: “Selfishness beats altruism within groups. Altruistic groups beat selfish groups. Everything else is commentary.” Their concepts factor into common-pool resources, collective action, and how to help individuals and groups build Elinor Ostrom’s principles into the ways organizations and platforms work.
I explore more on this topic here.

A cognitive light cone of limited or infinite care?
The second thing that struck me was the notion of a cognitive light cone, and further, one that could be either limited or infinite in what it cares about. Just as you’re trying to imagine the SCI loop moving through life, the authors introduce this cone: past stresses shaped our steps forward to the present, the externalities that stress us today shape what we care about now, and our capacity to take action shapes the cone of potential movements forward into the future.
The wonderful and challenging aspects of this paper come to the fore when the authors begin to explore the notion that stress and care drive ALL intelligence. When we think about stress, we might imagine an endless cycle of life, death, and rebirth, wandering through multiple lives in a quest that may seem at times like a Sisyphean task, or, as the authors tie to the Bhagavad Gita’s concept of Samsara, an infinite ocean of stress.

When we think about care, we may think of all the ways we take action to relieve ourselves of that stress: the ways we can’t help but try to solve a problem that vexes us or that mismatches what we want or need, the actions that reduce surprise so we aren’t caught off guard again, or the goals we set for ourselves to thrive and move forward.
In the paper the authors generalize this motive force to all intelligences; at one point they note that care IS intelligence.
If we don’t care, we don’t try to relieve stress, and taken far enough, we step off the wheel of life. Whether we are a person, a group, a plant, or some other form of life, goals drive us.
They point out that in Buddhism, the person on a path toward enlightenment makes a vow: “I shall achieve insight in order to care and provide for all beings, through space and time.” The authors wonder what might happen if we take responsibility for the flourishing of all beings, and whether that might be the path forward for an open-ended expansion of intelligence.
Does that mean that care for more than ourselves should be baked into any technological partnership? It reminds me that in the free energy principle we are trying to reduce surprise, either in what we sense, by shifting our perception (changing our mind about a given input), or in some action we take to change the world outside. I wonder in what ways technology could partner with us to help us change our minds more often, to decide that the higher level of caring means we don’t need to do something, even if we could.
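As a back-of-the-envelope illustration of those two routes, here is a toy in which “surprise” is just a squared prediction error. This is my own simplification, a stand-in for, not an implementation of, Friston’s variational free energy; the numbers are made up.

```python
# Two routes to reducing surprise, in miniature. "Surprise" here is a
# squared prediction error; the values and the 0.5 update rate are made up.

def surprise(belief, observation):
    return (belief - observation) ** 2

belief, world = 20.0, 26.0        # expected vs. sensed room temperature
print(surprise(belief, world))    # 36.0 -- the felt mismatch

# Route 1: perception -- change our mind, nudging the belief toward the evidence.
belief += 0.5 * (world - belief)  # belief is now 23.0
print(surprise(belief, world))    # 9.0

# Route 2: action -- change the world toward the belief (open a window).
world -= 2.0                      # world is now 24.0
print(surprise(belief, world))    # 1.0
```

The interesting ethical question above is which route a technological partner should encourage: acting on the world, or helping us update the belief.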
What might infinite care look like the next time we are passionate enough to want to invent some new technology? What mechanisms might help us think again and yet still be vital?
Technology isn’t a tool but a partner that shares the stress of becoming.
The third idea that struck me in the paper was the concept that technology shouldn’t be thought of as a thing separate from us that we use like a tool, but rather as a partner in an intimate interplay of becoming. A partner that detects stresses for us at times and creates stresses in us at other times, or a partner we can share what we care about with and hand off those challenges to, in the form of goals.
Or a partner who can take action to relieve stress and move us to a new place. Our things aren’t just things. I can use a technology like a calendar to encode a goal for tomorrow, and the calendar can alert me when that time comes. Or I can describe a goal in the form of a prompt for a large language model, and it might return an idea that moves me forward in some way with imagery or narrative.
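In code, that hand-off can be as small as encoding what we care about as data and letting the tool carry the stress of remembering. The Goal shape and nudge() helper below are my own hypothetical illustration, not anything from the paper.

```python
# Handing a goal off to a partner technology, in miniature: the "care"
# travels with the goal, and the calendar returns the stress to us at the
# right moment. Goal and nudge() are a hypothetical sketch.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Goal:
    description: str
    due: datetime
    why_it_matters: str  # what we care about rides along with the goal

def nudge(goal: Goal, now: datetime) -> Optional[str]:
    """The calendar's whole job: hold the goal, then give the stress back."""
    if now >= goal.due:
        return f"Reminder: {goal.description} (because {goal.why_it_matters})"
    return None

goal = Goal("draft the proposal", datetime(2025, 3, 1, 9, 0),
            "the team's work depends on it")
print(nudge(goal, datetime(2025, 3, 1, 9, 5)))
```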

W. Brian Arthur’s seminal book The Nature of Technology comes to mind here. In it he points out that technology is the capture of a phenomenon in the Universe for some use. Note that he didn’t claim it was for human use alone.
When an organism is trying to stay alive, a basic stress might be the need for some sort of food. If it can somehow detect its need for food and use a phenomenon in the Universe, like a bundle of molecules that move like a tiny engine (see motor proteins like dyneins, for example), to propel it toward the food, it survives where other organisms fail.
It’s notable that in this paper, and in Arthur’s work as well, the resolution of stress, whether by moving to an area that lowers it or by harnessing some phenomenon in the Universe and shifting the stress onto a new technology, doesn’t stop all stress. It moves us to a place in our cognitive light cone where we experience (or, as Arthur points out, where technology creates) entirely new stresses. Arthur observes that any new technology solves one problem but creates five, ten, or more entirely new ones. We didn’t have a pandemic of cyberbullying before we had social networks, for instance. Stress is unending and a natural state of life and intelligence. Cue Samsara and all its deeper meanings.
How do the current discussions of ethics and the scare-mongering about AGI map onto a theory that feels like it fits more into the ACI spectrum?
With all the talk about AGI (Artificial General Intelligence) as a technology that strikes fear and excitement (depending on your point of view or place in society) in the hearts of humankind, you’d think it’s the only exponential/existential technology in town related to intelligence.
But this paper seems to be taking another approach to understanding, and potentially building, a greater intelligence that falls into the category of ACI (Augmented Collective Intelligence).
I’ve seen some tantalizing examples of the next wave of hybrid machine/human learning that both build on the current wave of LLMs and factor in Karl Friston’s concepts of the free energy principle, active inference, and Markov blankets to build systems that show marked increases in collective intelligence.
Researchers like those who wrote this paper, and those who publish in collective intelligence journals, explore the fundamental embodiment of intelligence within some sort of environment, and the emergent collective nature of intelligence, as essential to its ethics and potential. They suggest that by seeing technology as a partner in a multi-level feedback loop, no matter the substrate of that collective intelligence, we can derive transformational ethics of mutual stress-transfer and respect.
I think I understand that a part of that is taking the bodhisattva vow seriously and expanding our sense of care to a greater pool of stakeholders. But I didn’t quite get enough of the tangible ways ethics might change for the better, the nuts and bolts of how we might approach artificial intelligence accelerationism at this present moment, or how these insights might shape tangible policy innovations.
I wonder what their next paper will explore, and I feel like I may need to reread this one a few more times and follow their citations back through time a bit to expand my own cognitive light cone, backwards and forwards into the future.