I pull out my phone and wave it at the turnstile as I enter the store. I grab some basics. I continue roaming the aisles. I decide to indulge myself a bit with a few sweet and savory snacks. At the last minute I put a few things back, deciding that I could do without some of those extra calories. Behind the scenes the store is watching me, noticing that one or two of those items ended up back on its shelves. It’s seeing how I give in to temptation and manage to resist various enticements. Observing how long I spend in various sections of the store, noticing my gait, my attention, and trying to predict where I might go next. It’s deciding how it might change things up the next time I visit, to match my goals with its own.
I am walking through a robot masquerading as a store.
You might have been in one as well. When robots move past science fiction and into our lives we tend to stop calling them robots. Roomba is a vacuum cleaner. The clothes washer is a clothes washer, not a robot that happens to detect whether your clothing is still damp, dynamically manage temperature, water, and gas, and spin up a centrifuge inside its robotic belly.
The Amazon Go team spent three years building various parts and pieces of the robot, as well as successive approximations of the entire experience. They built a team that included architects, construction firms, electronics engineers, big data analysts, machine learning scientists, store managers, and user experience designers. They tested various methods to reduce errors and determine, for example, whether the item I chose was reflected in the countertop and should be counted once or twice. They spent time figuring out how to make the experience as seamless as possible while tuning the ways in which the robot could shape my activities each time I visited. Ashley Arhart, the leader of the entire experience design effort at Amazon at the time, noted that they realized as they went along that an entirely new collection of ethical challenges had to be addressed as they built robots we shopped, lived, played, and worked within. As they iterated they recruited ethicists and policy designers to join the team. They began contending with the notion of ambient consent and with ways of protecting privacy to balance the goals of the store with the goals of their customers.
Imagine you’re designing a place that might not only detect buying preferences but whether someone was limping a bit. You couldn’t help but notice if a visitor was shuffling or dragging their feet or sharing other signals that might hint at a sleepless night, a fight with a loved one, or early warning signs of a more serious issue like congestive heart failure.
Here is another example of a place that’s really a robot. The MX3D bridge, designed by Joris Laarman Lab, is slated to be installed in the red light district of Amsterdam in 2021. Not only was it built by robots using additive manufacturing, but it has a sensor network, a nervous system of sorts, in and around the bridge: environmental sensors, sensors that can detect the stretching of the materials as people walk on it, and cameras that can determine if you’re walking across it, standing in the middle, jumping up and down, or waving your hand. The ultimate goal was for the bridge to provide a better understanding of how it is performing and how it is being used. In a city of bridges, if they all began to wake up we could potentially learn more about those 200 square kilometers of town than we have learned from all the numb bridges in the world combined. However, the team saw plenty of examples of ethically dubious technology companies ignoring or holding naive views about privacy and the public good, so they went out of their way to become part of an emerging design manifesto called Tada.City, and to give citizens open, transparent, inclusive control and the ability to be forgotten as well as informed, in ways that fit the balance between city and citizen. For example, they have ambient ways of signaling that data is being captured, and they anonymize information dynamically at the point of capture. Think of these as semantic sensors that know there is someone on the bridge, but can’t tell you who. People look like simple wireframe skeletons. The promise is that we’ll have a pact with our places that honors our rights and provides the city with a digital twin of each bridge (or maybe a digital stunt double) that we can experiment on, without running experiments on people or on expensive pieces of infrastructure. The ultimate dream is to help the city and citizens flourish and close the design, build, test, learn loop to build better places.
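The "anonymize at the point of capture" idea can be sketched in a few lines. Everything below is hypothetical (the MX3D team's actual pipeline isn't described here): a raw camera frame is reduced to pose keypoints on the device itself, and the frame is discarded before anything leaves the sensor.

```python
from dataclasses import dataclass


@dataclass
class SkeletonEvent:
    """What the bridge is allowed to remember: a wireframe, not a person."""
    timestamp: float
    keypoints: list  # (x, y) joint positions, e.g. head, torso


def semantic_capture(raw_frame, pose_estimator, timestamp):
    """Reduce a raw frame to an anonymous skeleton at the point of capture.

    `pose_estimator` stands in for any on-device pose model; only its
    output ever leaves this function -- the frame itself is dropped.
    """
    keypoints = pose_estimator(raw_frame)
    del raw_frame  # nothing identifiable survives this line
    return SkeletonEvent(timestamp=timestamp, keypoints=keypoints)


# A toy "pose model" for illustration: pretends to find one person mid-frame.
def fake_pose_estimator(frame):
    h, w = len(frame), len(frame[0])
    return [(w // 2, h // 4), (w // 2, h // 2)]  # head, torso


event = semantic_capture([[0] * 64 for _ in range(48)], fake_pose_estimator, 0.0)
print(event.keypoints)  # positions only -- no pixels, no identity
```

The design choice worth noticing is that the raw frame never appears in the event type at all: downstream code cannot leak what was never stored.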
Context is decisive.
Our environments can shape our behavior more than most people realize. The store and the bridge might shift walking or shopping patterns dramatically towards a given set of goals. The bridge builders have made a commitment to give ultimate control to the citizens of the city. But we’ll have to stay alert to really understand how much context decides what we do and what we believe we can do. To drive this point home, consider not shoppers in a shopping district, but the most powerful leaders in the most powerful organizations on earth. Surely they are masters at making just the right decision at just the right moment to guide their organizations into the future. Well. We wonder.
Kaye Foster, a C-level executive herself, has taken on her next chapter in recent years as an executive coach and senior advisor. She has learned the hard way that the shaping functions pushing on executives as they climb the ladder often limit their thinking when they enter a new realm. She now works with leaders and teammates to uncover the system of thinking they’re embedded within that limits their growth and potential. She notes quite elegantly, “context is decisive.” Sometimes just her being there, giving the leader a way to have a safe talk with themselves, gives them the capacity to learn something new. When she holds them accountable to their own goals and gives them a chance to build healthy habits over time, her presence shapes their context and their ability to make better decisions, long after she has left the room. She listens to the words they use, which often signal the metaphors and analogies that shape their thinking. As an aside, we have to wonder: will places that end up with mindsets that can shape our capacities need executive coaches when they become too fixed in their ways? It’s hard to change a place’s context. More urgently, we’ve always thought that toolsets could be bought and skill sets learned, but that mindsets were fixed early and hardest to change. Coaching is an emerging superpower and literacy that is gaining traction and becoming automated in its own right. Adam Grant’s book about finding ways to “Think Again” may be particularly apt to review and, in the light of places waking up, think again.
Now let’s take a detour to explore a random sample of humanity from the lands of Europe. If you’re a citizen of the Netherlands you are more than likely not an organ donor; if you’re a citizen of Belgium you are. If you’re a citizen of France you are, but Germany? Not likely. Consider this fascinating study conducted by Eric J. Johnson and Daniel Goldstein (whose original doctoral thesis built on Nobel-winning economist Herb Simon’s work on bounded rationality). They explored why the variance was so significant across country borders. When people see the graph of the variance they often raise deep and significant questions and speculate about why there is such a marked difference between cultures. Was it ethics, education, history, cultural norms, some strange genetic component that drove altruistic behavior, or something else? If you show the variance to citizens in the studied countries, those in countries with high donor rates often believe they chose to become donors because it was the ethical thing to do. When they see that their fellow countrymen and women also chose to donate at much higher rates than a neighboring country, they might note that they cared more about others, that they had a history of giving to the future, or that they believed it would further scientific inquiry and help future generations. But if you ask citizens in countries with very low donation rates, they may note that they feared that, in the event of a near-fatal accident, caregivers might not do whatever it takes to save them, particularly if those doctors or first responders or family members knew that their organs were potentially more likely to save someone else’s life. But what if there were a more likely answer that explained why Belgians were so noble and those from the Netherlands were more cautious?
The researchers discovered something more surprising. There was a strong correlation between countries that had “opt-out” check boxes on their organ donation forms and citizens who decided to donate their organs. Countries where the citizen had to “opt in” to become an organ donor had very low donation rates. What could explain such a simple box, checked or unchecked, leading to such a great disparity in donation rates? It turns out that there is a shortcut our brains use to save resources and deal with complex challenges. We take our cues from the environment. As Herb Simon taught us in part 1 of Infinite U, and Kaye reminded us through her work with leaders, context is decisive. If the checkbox is checked, we leave it checked and trust the default. It is significantly more work to think through all the possible, and risky, futures that might come to pass if we actively choose to opt in. We let the environment decide for us. Our places and things have always been more than JUST places and things; they have always had the power to shape our decisions in ways that we, like those citizens of Belgium, believe we chose for ourselves. This “default bias” may have become a decision-making shortcut for good reasons over the long arc of evolutionary time.
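As a toy illustration of the mechanism (the probabilities below are invented for the sketch, not drawn from the study), a population in which most people simply keep the default produces wildly different donation rates under the two form designs:

```python
import random


def donation_rate(default_is_donor, follow_default_prob=0.85,
                  opt_anyway_prob=0.5, n=100_000, seed=42):
    """Toy model: most people keep the default; the rest actively decide."""
    rng = random.Random(seed)
    donors = 0
    for _ in range(n):
        if rng.random() < follow_default_prob:
            donors += default_is_donor               # leave the box as-is
        else:
            donors += rng.random() < opt_anyway_prob  # think it through
    return donors / n


print(f"opt-out form (donor by default):  {donation_rate(True):.0%}")
print(f"opt-in form  (non-donor default): {donation_rate(False):.0%}")
```

Nothing about the people differs between the two runs; only the default does, and it dominates the outcome.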
A wake up call about our places and things waking up.
Of course complex topics often have multiple causes, but each decision we make as shapers of context adds up. In this case it may make the difference between life and death. A well-meaning or naive designer, or policy maker, is one thing, but if the rise of social media has taught us anything, it’s that machine learning at scale, combined with control over some aspect of our context, can shape our ability, or lack thereof, to think, learn, and dream. Most people have come to realize that the negative impact is already significant on digital platforms. Today, what are considered algorithms of oppression or dark design patterns are largely playing out on purely digital networks and platforms. The mathematician and author Cathy O’Neil points out that the machine learning efforts built into social networks and other tech platforms are far from neutral. She notes that they can “make unlucky people, unluckier.” Pioneers in algorithmic justice like Joy Buolamwini fight against these often subtle and sometimes blatant attempts at unleveling the playing field every day.
This is personal.
There is something deeply personal we have to tackle. In some cultures, individualism and the hero worship of lone innovators or transformative leaders define our societal mythologies. It might be hard, in a world that seems to be telling us all too often that we are masters of our own destiny, if only we’d “pull ourselves up by our bootstraps,” to believe that so much of our thinking is happening outside of ourselves. We will have to get past the philosophical debates about free will, Hollywood super-hero storylines, hagiographic biographies of great (usually) men, and holier-than-thou beliefs that peddle bread and circuses to distract us. We will need to move beyond treating luck and serendipity as signals of our own capacity, and open our eyes to look around and see how much our context has helped us get to where we are, left us asleep at the wheel, or worse, filled the room with knockout gas.
This is urgent.
The recent global pandemic has seen a significant rise in a sort of contextual collapse as “bedzooms” replace not only classrooms but also walks to school with friends, bus rides, study halls, and all the shaping functions humans crave or find traction within to build their capacity. Imagine if, instead of millions of students, parents, and teachers blaming themselves, over-diagnosing symptoms, overmedicating themselves, or worse, in the face of an epidemic of missed or incomplete homework, our places and things could listen for goals and help us change our context to reach them, like a master executive coach. At the more futuristic end of the spectrum, our research teammate Neta Tamir has been exploring the development of a “which watch” that can help you pick which new skills, mindsets, or toolsets you want to build, and when. It then broadcasts those goals to all of the things and places in your “Cognitive Information Model” and enlists their help. She thinks of it as a parametric CIM (or even Sim) for your BIM.
If none of those words or acronyms make much sense to you, let’s look at a simple yet powerful example of a thing with a small bit of goal-oriented agency which, when combined with a powerful learning insight about how we cope with procrastination, can change context and help students reach their goals. Consider the lowly tomato-shaped kitchen timer. By setting a goal of a few minutes of focused work, knowing that you can reward yourself when the timer goes off, you can change your context enough to start making progress. Barbara Oakley, an engineer, teacher, and leader in helping people learn how to learn and in translating learning research into practical approaches, dives into more details about the Pomodoro method in this short video.
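The tomato timer's whole trick fits in a few lines. A minimal sketch (the 25/5-minute interval lengths are the conventional Pomodoro defaults, not anything prescribed in the text above):

```python
import time


def pomodoro(work_minutes=25, break_minutes=5, cycles=4, sleep=time.sleep):
    """Alternate focused-work and reward intervals, announcing each switch.

    `sleep` is injectable so the loop can be tested or sped up without
    waiting real minutes.
    """
    for cycle in range(1, cycles + 1):
        print(f"Cycle {cycle}: focus for {work_minutes} min")
        sleep(work_minutes * 60)
        print(f"Cycle {cycle}: break for {break_minutes} min")
        sleep(break_minutes * 60)


# Dry run, compressed one-million-fold so it finishes instantly:
pomodoro(cycles=2, sleep=lambda s: time.sleep(s / 1_000_000))
```

The point is not the code but the contract: a thing in the room holds the goal, so you don't have to.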
Hope, as the day breaks.
We are convinced that there are incredibly positive aspects of inviting machine learners to help human learners form “accelerating autonomous learning loops” between our own brains and the places and things that surround us. Some early signals?
A new literacy and science called choice architecture.
In Part 1 we talked about Herb Simon and his scissors: the theory that our minds are not all inside our heads. We are, in some deeply seated way, ourselves plus the world around us. We are shaped by, and shape, not just our places and things but also the people we interact with and the communities we live within.
Dr. Ting Jiang, a principal for Global Health and Development at Duke’s Center for Advanced Hindsight and advisor to various “design for good” organizations in Canada and China, has been researching how physical architecture and behavioral “choice architecture” intersect, and what it might mean to build goals for health and wellbeing into the places we live and work. Her efforts, and the entire team’s focus at Duke, seem to embody Simon’s scissors. They rearrange cafeterias in a way that significantly shifts whether occupants choose a nutritious or non-nutritious meal. They redesign scales so that they have no visible measurement, yet increase the owner’s chances of losing weight. They build board games that increase a person’s capacity to plan for their financial future and reduce their “decision-myopia.” They and various offshoots of the center (created by Dan Ariely of “Predictably Irrational” fame) have studied how applying the scientific method to the choices we make, in a given context, can build healthy habits, shift destructive social norms, tune reward structures for improved wellbeing, and help us build capacity for better decision-making for our future selves. They help architects, designers, and policy makers learn ways of evaluating the symptoms of a specific challenge around health or wellbeing. They spend time diagnosing the underlying causes. They assist in constructing hypotheses and protocols where the places or things take on some of the burden of supporting a given goal. They structure controlled testing to determine the efficacy of an approach and evolve theories that have real impact based on those findings.
Goal Directed APIs
Practitioners are beginning to recognize that their designs must take into account both digital and physical experiences, as well as the behavioral and social interactions that occur within and beyond their spaces. Yet venues are still built for simple and often linear expressions of what are increasingly becoming complex systems. The rise of generative design, where designers set the goals and constraints across multi-dimensional challenges, is changing the practice of how a product, building, campus, or community is created, as well as how it is run and evolved into a higher-performing space.
Countless popular books about behavioral science or healthy building design have raised awareness about these topics, and some organizations, like the Center for Healthcare Design, have spent decades building a knowledge repository of scientific, outcomes-based research into how buildings might foster well-being and health. However, all too often those efforts are trapped in white papers, studies, research databases, or the minds of the experts, researchers, and consultancies defining these new practices. Further, the work is at best overwhelming, rife with counter-indications based on mismatched protocols and sample types and sizes, and at worst unavailable in any form that either a human architect or a building’s algorithms can take advantage of, or build on, in the wild.
What if we had the ability to embed those lessons learned into a goal directed system? What if Dr. Jiang’s ideas about choice architecture could be embedded as dimensions of goals and constraints that are dynamically considered, adjusted, tested, and improved upon within such a system? What might it look like if we allowed for the shaping of those goals and constraints by the people who lived and worked within the robots our spaces are becoming? Ultimately all of the core principles we explored about generative learning in the earlier parts of “Infinite U” could be applied to places and things themselves as co-creators and contributors to shared collections of goals.
When computer developers want someone else to be able to use some of their code, a common approach is to build a set of “hooks” or handles that are exposed on the “outside” of the code, so we can connect to those hooks and get the benefit of the code without all the complications of trying to understand or write the whole box of code ourselves. If you want to find a location on a map, you might not be able to build, or interested in building, an entire information visualization for mapping, or even know that there are different projection methods to wrap a map around a spherical object like the Earth. You might have a latitude and longitude though, and you may have a name for that point on the map. If you can hook your information (lat, long, name) to a handy piece of code someone else has written, you can use their code without ever seeing their “secret sauce.” This approach to externalizing hooks or handles is called an Application Programming Interface, or API. You likely played with APIs when you were very young and had a set of LEGO bricks. They had an innie at the bottom of the brick and an outie at the top. That simple API allowed all sorts of creators to invent new LEGO parts that “plug in” to the original LEGO bricks and do things the original inventors of LEGO could never have dreamed about. For more about APIs without having to be a computer developer, see our video about a box that changed the world, from Trillions.
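That lat/long/name "hook" can be sketched concretely. The function name and signature below are invented for illustration (real mapping APIs differ); the inside of the box happens to use the standard Web Mercator projection, but the caller never needs to know that:

```python
import math


# The "inside of the box": projection math the caller never needs to see.
def _web_mercator(lat, lon, zoom=0):
    """Project a lat/lon onto pixel coordinates (standard Web Mercator)."""
    scale = 256 * 2 ** zoom
    x = (lon + 180.0) / 360.0 * scale
    rad = math.radians(lat)
    y = (1.0 - math.log(math.tan(rad) + 1 / math.cos(rad)) / math.pi) / 2.0 * scale
    return x, y


# The API: the one hook exposed on the outside of the box.
def place_marker(lat, lon, name, zoom=0):
    """Hypothetical hook: turn (lat, lon, name) into a drawable marker."""
    x, y = _web_mercator(lat, lon, zoom)
    return {"name": name, "x": x, "y": y}


# A caller uses the hook without knowing what "Mercator" even means.
marker = place_marker(52.3676, 4.9041, "MX3D bridge, Amsterdam")
print(marker["name"], round(marker["x"], 1), round(marker["y"], 1))
```

The leading underscore on `_web_mercator` is the Python idiom for "inside the box": callers hook into `place_marker` and leave the secret sauce alone.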
Now imagine that you have a collection of internal aspirations, norms, favorite ways of doing things, history, and identity as a person at this particular point in your life. You may not want to share some of the inner fears and hopes and dreams you have or the most private parts of what makes you, you.
What if our places and things were coordinated in their efforts the way a good team of doctors works to help you reach a health or wellness goal? Your family doctor focuses on the whole you, your endocrinologist focuses on what she’s good at, your cardiologist works on what he’s good at, and together they help you stay healthy and build positive habits. The best collaborative teams work on a common goal and gain the ability over time to predict how their teammates will act, or how to work with each other to get closer to a goal. We are beginning to see provocative examples of places gaining the ability to successfully predict the expertise and mastery of the people working within them. Could we enlist our places to help us see patterns about ourselves that we can’t see?
Detecting expertise in context.
An illustration of this ability for places to take a step towards detecting expertise can be found in the work by Autodesk Research entitled, “Instrumenting and Analyzing Fabrication Activities, Users, and Expertise.” In their work, using off-the-shelf sensors combined with machine learning algorithms, they were able to successfully predict the level of expertise of a person performing a task like drilling a hole or soldering a circuit. In a further research effort they were able to use cameras to detect and predict territoriality and approach pathways within an office environment and make assertions about how a building’s design may shape the social comfort levels of its occupants.
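The paper's actual models and features aren't reproduced here; as a minimal stand-in, a one-nearest-neighbor classifier over hand-made sensor features shows the shape of the idea: labeled recordings of novices and experts, then a prediction for a new session. The features (vibration level, tool-on time, pause count) are invented for the sketch.

```python
import math


def classify_expertise(session, labeled_sessions):
    """1-nearest-neighbor over sensor-derived feature vectors.

    Each session is a feature vector, e.g. (mean vibration, seconds of
    tool-on time, number of pauses) -- invented features standing in for
    the richer signals in the Autodesk study.
    """
    nearest = min(labeled_sessions, key=lambda s: math.dist(session, s[0]))
    return nearest[1]


# Toy training data: (features, label) pairs.
labeled = [
    ((0.9, 120.0, 14.0), "novice"),  # shaky, slow, many pauses
    ((0.8, 110.0, 11.0), "novice"),
    ((0.3, 45.0, 2.0), "expert"),    # smooth, quick, few pauses
    ((0.2, 50.0, 3.0), "expert"),
]

print(classify_expertise((0.25, 48.0, 2.0), labeled))  # steady hands
```

A real deployment would use far richer features and models, but the loop is the same: the place records, labels, and then recognizes mastery.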
The science of science is changing.
Science is hard and the social and behavioral sciences are both facing a replication crisis—as we realize that tests in the lab with powerful and seductively publishable results might not represent how challenges in the real world are actually addressed or capture the nuances of underlying norms, beliefs, mindsets, context, or history.
The cost of instrumenting and running these scientific experiments “in situ,” even after the new environments have been constructed, is dropping dramatically with the commoditization of industrial and consumer IoT and the rise of edge-device machine learning.
There are tantalizing signals that the scientific method itself is evolving, as systems emerge that can both find patterns in the scientific protocols performed, discovering human biases and statistical errors, and consume and predict non-obvious outcomes hidden in the data. We are even seeing how machine learning can help us uncover causal mechanisms in the undiscovered country of public knowledge. These evolutions of scientific knowledge may lower the barriers for us as individuals to uncover why we do what we do, how we can accelerate or deepen our mastery of a topic, and how we might increase our psychological flexibility in the face of a changing world. They may also help the networked matter that makes up our world in this next century uncover new ways it, and we, might learn together.
So we’ll end with a few questions.
It is clear our places and things are learning more and more about us all the time. What are they learning and how much of what they’re learning is compromised by the lack of a choice architecture or a model of how much the existing context and physical architecture is unwittingly shaping our behavior?
What bad habits are these places learning, and teaching us?
How might we tap into this emerging golden era of science “in the wild” and shape the disciplines and practices that will inevitably be responsible for the robots we live within, and the things we surround ourselves with, in ways that help us reach our goals, or the goals of our teams or communities?
Next? Stay tuned for how Darwin, Ostrom, and the Wilson boys have a theory of evolution and collective action that might help us build a world-spanning learning collective, in the next installment of Infinite U (What Would Darwin Do?), Part 5.