From a public policy perspective, how might behavioural insights help shape the context in which people make decisions in a way that would be of broader benefit?
A major concern people have is when to intervene and how to know whether an intervention is good or not. I love the book Nudge.1 But some of the points the book makes get lost in public discussion: nudging is not the only goal. There is no such thing as a neutral choice environment; some choice architecture is always present. What matters is the mindful design of the choice environment.
In a book I’m writing now, I’m thinking about the differences in emphasis between choice architecture and nudging — very often, people think of these as the same thing. One point is that, as choice architects, we need to think about what our goals are. The aim may not be to change everyone so that they exhibit one kind of behaviour, say to lose weight or save more. It may actually be to help each person meet their own goals, or a goal that is hopefully wisely chosen.
Choice architecture differs from how the nudge concept is often used in that it tries to make people’s choices the right choices for them. The other big difference is that choice architecture also tries to make choices easier, faster and subjectively more pleasant. People often don’t want to make choices, so if you can make a choice easier for them, you’re doing something good.
The beauty of defaults is that you can save people from having to make some painful choices. Defaults work really well when you have an option most people would choose. For example, we know that almost nobody saves enough, so making them save more money is probably a good thing. For retirement decisions, however, since some people live longer than others, there is no one-size-fits-all decision; we have to customise the options. The challenge is to help people make the choices that are right for them, individually.
One option is smart defaults. There are times that we may know more about a person than they know themselves, so we might then set an appropriate default that would be the one they’re likely to choose if they thought about it carefully. They may not, for example, enjoy thinking about how long they’d live, but could instead answer a few questions and have a calculator make estimates that will help them make a better decision.
In what ways might time, age, culture and other dynamic factors influence the way choices are made?
In the United States, when you default into retirement savings, you may be defaulted into a life cycle or life stage fund. This is a fund that manages certain aspects of your retirement money. When you are young, you should be taking more risk and be invested mostly in stocks. But as you get older, you should be putting more of your money into lower risk cash and bonds. Many retirement plans default you into a target date and automatically readjust the proportion of your money invested in stocks and bonds as time goes on.
This is a smart default: we know roughly when you were born, we roughly know what most people want in terms of their investments. So if I’m older than you and we both save through a retirement plan, I should have a different investment profile than you.
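The age-based reallocation described above can be sketched as a simple glide-path rule. This is a hypothetical illustration only: the linear schedule, the starting age of 25, the retirement age of 65 and the equity percentages are all assumptions, not the policy of any actual target date fund.

```python
def target_date_allocation(age: int, retirement_age: int = 65,
                           start_equity: float = 0.90,
                           end_equity: float = 0.30) -> float:
    """Return the fraction of a retirement portfolio held in stocks.

    A linear glide path: a young saver is defaulted into mostly stocks,
    and the equity share falls toward `end_equity` (the rest in cash and
    bonds) as retirement approaches. All numbers are illustrative.
    """
    years_left = max(retirement_age - age, 0)
    horizon = retirement_age - 25          # assume saving starts at 25
    fraction = min(years_left / horizon, 1.0)
    return end_equity + (start_equity - end_equity) * fraction
```

Under these assumptions, a 30-year-old is defaulted into a stock-heavy mix and a 60-year-old into a bond-heavy one, with no active choice required from either saver.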
People tend to make such decisions and never revisit them. They are ‘one and done’, and tend to stick throughout a lifetime. Other decisions, such as what you are going to eat, are made several times a day, perhaps even with every spoonful. Those are harder to change, because they are not set-and-forget decisions, and they can be quite influenced by the environment. For example, the layout of a menu might change what we order, and different restaurants have quite different layouts. These frequently repeated decisions are different. So far, we have been lucky: most of our successful interventions have been set-and-forget decisions.
Another source of differences occurs over the life span. There are two findings from research we have done that show how decision making changes as you get older. The first trend is that you start measurably losing what is called fluid intelligence from about 25 years of age onwards.2 This is actually a remarkably strong pattern. Fluid intelligence is measured by things like how quickly you can respond using button presses, or complete sets of visual patterns. Luckily, there’s a second kind of intelligence, called crystallised intelligence, that is related not only to biology but to experience. A person in their 60s knows a lot that a 25-year-old does not. This has an important implication. It is not the case that older people are necessarily worse decision makers than younger people; they are just different kinds of decision makers — crystallised intelligence can compensate for the decline of fluid intelligence. For instance, our research shows that older people are less likely to fall for certain standard cognitive biases, such as hyperbolic discounting or present bias. They also tend to have less loss aversion.
All this suggests that we should support the decision making of younger and older people differently. When younger people make a pension decision, they may not know as much about investing. An older person may know more, and perhaps may not benefit from being shown as many different options which are also more complex to process.
Also, because crystallised intelligence increases until about age 65, you might want to make an initial pass at important financial decisions around retirement age, rather than waiting until you’re in your 80s. You may make changes later, but you should have a plan earlier in your senior years, before it is actually needed.
One thing to note is that crystallised intelligence is domain specific. The classic example is older people being better than younger people in doing crossword puzzles. So if I have more financial knowledge, it might help me with stocks and bonds, but if you start asking me about blockchain and bitcoin, then crystallised intelligence is not as relevant.
If people are aware of being nudged, in what ways might it affect the efficacy of interventions?
There are some recent research efforts that warn people they’re being nudged, and then ask them whether they thought the nudges affected them. The fact is that warning people does not make the nudges ineffective. You might change your behaviour in reaction to knowing that someone wants you to do something, but for that to happen, you have to disagree with what they want you to do.
Organ donation laws, such as you have in Singapore, are an example of defaults that I often use. It is important to be aware of the effect of defaults, because you are trying to change people’s behaviour in a way that some segments of the community may not approve of. In these cases, instead of changing the default itself, you may want to ask people to pay more attention to the decision and its implications for them.
I think the reality is often that if we make a decision easier, people will make it actively. If I can help you avoid expending effort on a choice I know you’d make anyway, that seems like a good idea: you’re choosing the same thing with less effort. However, making each decision easier does not necessarily give people more time to think. It may be slightly more pleasant to choose, and in that sense the choice architecture might be good, but you might not be improving the decision.
How might social media and the evolving online, digital landscape change our understanding of choices?
There has been quite a bit of discussion on how people tend to choose a more homogenous set of information that confirms rather than disconfirms their opinions. Then you end up with polarisation. Scholars have talked about this for years, but I think we’re seeing it now more and more. This is also a question of largely unmediated, unverified sources of information becoming more available. So people are relying on very different sets of facts, which can work against trying to help people make informed choices. Vaccines are a good example. One of the things that needs to be done, of course, is to provide relevant information to people at a relevant time. I think part of the disconnect is that easily processed, factual information is not readily available.
Another concept we talk about in psychology is constructive preferences, which I prefer to call assembled preferences. By that I mean a preference or belief that I put together on the spot, as a result of how I am asked a question, or of the environment. For instance, we have done research showing that whether the temperature today is warmer or cooler than usual can influence your belief in climate change. It is a form of saliency bias: we think it works through what you happen to think about. So even for an information-rich topic like climate change, your opinion can be changed by the environment.
One way to address this is to give people more pros and cons, making sure they are aware of both sides of an argument. We’ve been playing with a technique we call a preference checklist. This is usually applied to retirement claiming decisions: we give people a list of reasons why they might claim earlier, or why they might claim later.
So when people make a choice, we want to help make sure they’ve thought about all the relevant considerations. We cannot address the full breadth of the issue, but at least this helps to increase awareness, since people may not have thought about a particular element before. It is like the checklist for an airplane before it takes off. Some airlines today use positive affect and emotional or humorous appeal to get people to pay attention to the safety message before takeoff. They are wise enough to know it’s probably not a good idea to warn people about how bad things can get if they hit turbulence without seat belts on. It would put people off.
How might the public sector and the policymaking process become a more mindful choice-making environment?
Too often choice architecture gets framed in a very paternalistic framework: I know what’s best for you, so here are your choices. It is quite clear that public policy, even when it’s well intentioned, may have unintended consequences.
Obviously, it is important to carry out as much empirical evaluation as possible beforehand. For instance, the US Consumer Financial Protection Bureau tests regulations with companies using randomised controlled trials (RCTs). RCTs are useful, but they can be expensive, slow and almost impossible in certain contexts; there are other kinds of research that you can do as well.
There are framed field experiments — these are experiments that match real world decisions in terms of the population, stimuli, and forms, and are framed just like the actual decisions, except without the consequences. You can do those studies very easily and relatively quickly, particularly since so many forms are now web-based. So you can assign the people who need to make these decisions to various conditions and see whether the design makes a difference in what they choose. It’s a form of empirical testing that can inform policy, without having to do full blown RCTs every time.
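The assignment step described above can be sketched in a few lines. This is a hypothetical illustration: the condition names (numbers of plans shown) and the hashing-by-participant-ID approach are assumptions for the sake of the example, not a description of the actual studies.

```python
import random

# Hypothetical design conditions for a web-based framed field
# experiment: how many insurance plans a participant sees.
CONDITIONS = ["5_plans", "10_plans", "20_plans"]

def assign_condition(participant_id: str, seed: int = 42) -> str:
    """Deterministically assign a participant to one design condition.

    Seeding a private Random instance with the participant's ID gives a
    stable assignment (the same person always sees the same design) that
    is balanced in expectation, without storing a lookup table.
    """
    rng = random.Random(f"{seed}:{participant_id}")
    return rng.choice(CONDITIONS)
```

Comparing decision quality across the resulting groups is then a straightforward between-subjects analysis, without ever exposing participants to real financial consequences.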
One example we’ve worked on is with health insurance choices and health exchanges in the US — the people implementing these systems were under incredible time pressure to produce exchanges.3 Nobody had time to do a careful RCT, but we did six different framed field trials in six months. We used people who would be making these choices and examined the impact of various aspects of the exchanges on the quality of their decisions. For example, we looked at whether providing more policy options resulted in better or worse decisions. This worked well in answering these questions. People were motivated because the studies were incentive compatible: participants earned more when they made better choices. We could never have done this if we had relied on RCTs and had to set up an actual exchange with each of these features.
Such trials can be done carefully and pragmatically, resulting in changes informed not just by qualitative but by relevant quantitative feedback. We can explore many more things than we could if we were doing RCTs. For example, in our last study, we looked at six different possible designs and identified a winner in a month. The difference between the worst and best design would mean billions of dollars of savings if implemented.
I think Singapore is an interesting laboratory for choice architecture, in part because you’ve been bolder than most in pursuing this. You have all the functions of a federal government. Philadelphia or New York may be doing small tweaks to retirement systems — you do it all. The advantage of a small but very diverse city is that you have a population you can reach much more easily, but this is an effort that would be strengthened by carrying out empirical evaluations. One of the challenges of government is making sure you’re not making the same kinds of mistakes as the people you are governing.
Professor Eric Johnson is a faculty member at the Columbia Business School at Columbia University where he is the inaugural holder of the Norman Eig Chair of Business. He is also Director of the Center for Decision Sciences. He was awarded the Distinguished Scientific Contribution Award from the Society for Consumer Psychology, and is a Fellow of the Association for Consumer Research, and the Association for Psychological Science. His recent research interests focus on choice architecture and its influences on public policy.
Professor Johnson spoke with Civil Service College Principal Researcher Sharon Tham and ETHOS Editor-in-Chief Alvin Pang when he was in Singapore to deliver a NUS-MTI-CSC lecture on “Beyond Nudges: What and How to Present Options” in October 2016.
- Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions About Health, Wealth and Happiness (London, UK: Penguin Books, 2009).
- Y. Li, et al., “Sound Credit Scores and Financial Decisions Despite Cognitive Aging,” Proceedings of the National Academy of Sciences 112 (2015): 65–69, http://doi.org/10.1073/pnas.1413570112.
- E. J. Johnson, et al., “Can Consumers Make Affordable Care Affordable? The Value of Choice Architecture,” PLoS ONE 8 (2013): e81521, http://doi.org/10.1371/journal.pone.0081521.