The Neuroscience behind financial decision making: Q&A with Carsten Murawski


Industry Moves speaks with Carsten Murawski, Principal Investigator at the University of Melbourne's Brain, Mind and Markets Lab, about the science behind financial decision-making.

Q&A with Carsten Murawski

Can you tell us about the research that you do at the Brain, Mind and Markets Lab?

Broadly, we do experimental research on decision-making, at the level of both individuals and markets. We are particularly interested in how people learn and how they solve complex problems. Some of the questions we are currently addressing include how people react to extreme events (e.g., a financial market crash) and why they tend to over-react, how people solve complex problems (e.g., innovation) and how we can make them better at it, and the effect of robo-traders on financial markets and why humans will remain important in markets with robots.

This research deviates from other research in finance in several important ways. Firstly, most of our research is based on experimentation. This means that we use controlled laboratory experiments to investigate particular phenomena. While this method is common in most other areas of science, it is new to finance, where research has been based mainly on the study of historical data (e.g., historical stock prices on the NYSE). The problem with historical analysis is that while you can learn about correlations in the data you have, you cannot get at causation. To get at the latter, you need experiments.

Secondly, a lot of our work is grounded in biology. We are biological organisms, and we therefore believe that many explanations of the kinds of phenomena we are interested in will be found in our biology, particularly the brain. We use techniques like brain imaging to study the neural processes underlying particular aspects of behaviour, such as risk-taking. We now also use pharmacology to probe the role that particular neuromodulators, such as dopamine (which is targeted by many 'smart drugs' such as Ritalin), play in our decision-making.

Thirdly, in our research, we draw quite heavily on computer science. The brain can be thought of as a computational device that processes information, and computer science provides us with a formalism to study the brain's computations.

Fourthly, a lot of the research is problem-led, that is, inspired by real-world problems, which we bring into the lab to study in the hope of developing a solution that we can then take back out into the real world. For example, quite a bit of the work we are doing in the area of algorithmic trading at the moment was inspired by problems that occurred in real financial markets.

So in a way our approach to research in decision-making is quite novel, bringing together and combining methods from at least three different disciplines (decision theory/economics, neurobiology and computer science).

It's early days, but what research results have excited you the most since setting up the Lab in 2016?

I would like to share two interesting recent findings. One is related to our ability to make complex decisions. In traditional economic theory, it is assumed that when making a choice, people will always choose the best option available to them, no matter whether they have to choose from 3 options or from one billion options. This assumption is the core of what is known as "rational choice". It underlies most economic theories, the way economists interpret people's choices, and it is closely related to the concept of market efficiency. People have suspected for a long time that this assumption might be unrealistic.

For example, we know that if you increase the number of decision options, people become less likely to actually make a choice at all, a phenomenon known as "choice overload". We haven't had good explanations for this phenomenon, though. We have been looking at this issue using concepts from computer science. We use something called the theory of computation to quantify the resources that a computer would need to make the decisions people make in their everyday lives. And we find that people's ability to find the best option from among those available decreases rapidly as the computational resources required to make the decision (number of computations, memory) increase. This means that in many decisions people face, we shouldn't expect them to be able to find the best option, because they don't have the necessary resources.

For example, if we gave someone $100 and asked them to go to the supermarket and buy the set of goods that is best for them, in the sense of maximising their utility (which is what you would have to do in order to be "rational" in the economics sense of the word), they would need more time than the age of the University and more memory than is available in the University to figure it out. In other words, we cannot possibly expect people to do that. This means that in many situations, people will deviate from the kinds of behaviour that "rational choice" postulates. What exactly people do in complex decision situations is something we are now trying to figure out.
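To give a flavour of why this gets hard so quickly, here is a minimal, purely illustrative sketch in Python (not the task used in the lab, and with made-up prices and utilities) of a brute-force solution to the supermarket problem: every possible basket has to be checked, and the number of baskets doubles with every extra good on the shelf.

```python
# Illustrative sketch only: brute-force search for the utility-maximising basket
# under a $100 budget, to show how the number of computations explodes with the
# number of goods on offer. Prices and utilities are hypothetical.
from itertools import combinations

def best_basket(goods, budget=100.0):
    """goods: list of (name, price, utility) tuples."""
    best_utility, best_choice = 0.0, ()
    # Enumerate every subset of goods: 2 ** len(goods) candidates in total.
    for r in range(len(goods) + 1):
        for subset in combinations(goods, r):
            cost = sum(price for _, price, _ in subset)
            utility = sum(u for _, _, u in subset)
            if cost <= budget and utility > best_utility:
                best_utility, best_choice = utility, subset
    return best_utility, best_choice

# With 20 goods there are about a million possible baskets; with 60 goods,
# roughly 10**18 -- far more than a shopper could ever evaluate one by one.
```

The point of the sketch is simply that exhaustive search, which is what "always choose the best option" implicitly demands, scales exponentially with the number of options.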

This finding also has implications for markets. In economics and finance, there is the notion of market efficiency. Intuitively, it is the notion that at any time, prices in a market perfectly reflect the fundamental value of a good or security. But again, this theory assumes that the people who set these prices, individually or collectively, are always able to perform the computations that are necessary to determine the fundamental value of a good or security. Here, too, we have shown that as the computational resources required to do those computations increase, the quality of prices, and thus market efficiency, decreases. We also argue that we can only expect markets to be efficient in very specific, and possibly rare, situations.

These insights will help us to design decision environments that make it easier for people to make 'good' decisions and markets that better reflect the fundamental value of goods or securities.

Another example concerns the way people react to extreme events such as a crash in a financial market. We have long known that people tend to over-react to extreme events (as, for example, a lot of superannuation account holders did during the last financial crisis). Trading data suggests that robo-traders also tend to over-react, which might be one of the causes of the flash crashes we have been observing in markets in recent years. We have studied one of the most common types of algorithms used in robo-trading, called reinforcement learning, and investigated how it reacts to extreme events. We found that it does over-react in the same way as humans do. At first sight, this was surprising, because we thought that human over-reaction was caused by psychological biases and that these algorithms would be objective, that is, free of biases. But it turns out they aren't. We then investigated where the algorithm's bias comes from. Like many algorithms used in robo-trading, and artificial intelligence more generally, these algorithms are based on principles discovered in biology, and in the case we were studying, a principle used by the brain for learning. So people used insights about humans to design powerful algorithms for electronic computers.

What has happened in the process is that these algorithms have 'inherited' human biases. And that's why they over-react to extreme events in the same way humans do. The next step for us was to see how exactly the over-reaction comes about. Because we had access to the code of these algorithms, we could trace their behaviour step by step and identify exactly which part of the code generates the bias. We then also developed a way to get rid of the bias. This is helpful not only for developing 'bias-free' algorithms for things like trading; it also pointed us towards where the bias in humans comes from and how to mitigate it. This project is a nice example of where we started with a phenomenon in humans that we wanted to understand and used electronic computers and computer science to investigate human behaviour (as opposed to using insights about humans to design better computers).
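To make the general idea concrete, here is a highly simplified, illustrative value update of the kind used in reinforcement learning (a Python sketch, not the lab's actual trading algorithm or experimental code). Because the learned estimate always moves a fixed fraction of the prediction error, a single extreme outcome can swing it dramatically, which is one simple way an 'over-reaction' to a crash-like event can arise.

```python
# Illustrative sketch only (not the lab's code): a basic reward-learning update.
def update_value(value, reward, learning_rate=0.3):
    # Prediction error: how much the outcome deviated from expectation.
    prediction_error = reward - value
    # The estimate moves a fixed fraction of the error, however large it is.
    return value + learning_rate * prediction_error

value = 1.0                              # estimate after a run of ordinary returns
for reward in [1.0, 1.1, 0.9, -10.0]:    # the last, hypothetical outcome mimics a crash
    value = update_value(value, reward)
print(value)                             # one extreme outcome now dominates the estimate
```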

In what ways do you predict technology (artificial intelligence) will play a role in the decision-making process of the financial services sector in the future? And how far away are we from seeing major changes in this area?

There is clearly a trend to 'computerise' finance. But it's not new. The process of automating the finance industry has been underway for many decades. What's different now is that new technologies, such as new methods of machine learning, allow automation of new areas in finance that used to be the exclusive domain of humans. A good example is equities trading. About 15 years ago, equities trading was done mostly by people. Today, around 80 per cent of equities are traded by computer algorithms. I predict that we will see similar developments in many other areas of finance. Importantly, I expect that we will see automation move more and more towards client-facing areas of the industry, for example in financial advice.

There is a lot of hype in fintech and artificial intelligence right now. A lot of fintech I have seen is basically just automation of existing processes. For example, a lot of the robo-advisers I have seen are just automation of what human advisers do but executed by an algorithm. Or in other words, it's traditional processes delivered automatically over the Internet. And in consequence, the advice they deliver is no different, and no better, than the advice you get through traditional channels. That's not the future of finance. The real potential lies in harnessing new technologies like AI, in combination with data and new insights about human behaviour, to completely transform the nature of things like financial advice. But we haven't really seen any of this yet.

But I don't think that all of the industry will be taken over by robots. For starters, these robots will need to be programmed by someone, typically humans. We will also need humans to oversee robots (try leaving a financial market to robots without any human intervention: the market will crash and fail). Also, there are many important tasks, such as creativity or innovation, that computers are not (yet) very good at. And a lot of theoretical work suggests that (electronic) computers may never be good at them.

This new wave of automation also brings new challenges with it. If we want humans to work more closely with technology, they will have to trust the technology much more than they do at the moment. But we know very little about how trust is built between humans and machines. Another challenge is the biases of algorithms. As our research described above showed, algorithms, like humans, sometimes have biases. In a way, they have their own 'mind'. And their biases can be much stronger than those in humans. In fact, there is a huge risk that algorithms exacerbate human biases. We don't really know how to think about that at the moment, or what to do about it. A third challenge I want to mention is regulation related to automation. The introduction of more technology in finance, such as robo-trading or robo-advice, will transform at least parts of the industry and require new regulation to mitigate severe unintended consequences. It will be critical that we make sure regulation stays on top of these developments, using evidence-based approaches (see Peter Bossaerts's and my submission to the Australian Financial System Inquiry in 2015).

Some research suggests that female executives in the financial services sector are more likely to make lower-risk decisions than their male counterparts, while other research suggests that this might not be the case. What are your thoughts on this?

I'm not an expert on gender differences in risk-taking. Work by the economist Julie Nelson has shown that gender differences in financial risk-taking tend to be very small. Other work suggests that even these small differences are population-specific. In other words, knowing whether someone is a man or a woman will tell you very little about their propensity to take financial risk. A good reference is Cordelia Fine's recent book Testosterone Rex, which reviews existing academic research in this area (Disclosure: I'm Cordelia Fine's partner).

Can you tell us about the new Doctoral program in Decision, Risk and Financial Sciences? What will it offer to students, and what career paths could it lead to?

The cutting edge of the decision sciences draws on models and methods from several disciplines, including economics, finance, psychology, neurobiology and computer science. Yet existing doctoral training is still largely focused on one particular discipline. We therefore decided to create a new program that trains students in the skills they need to do cutting-edge research on decision-making. The program is based on two years of coursework in the disciplines mentioned above, during which students also complete two lab rotations in a research group either in Melbourne or elsewhere, followed by three years of doctoral research. We envisage that our students will end up in different sectors, including academia, industry and government. This is also why we are keen to build strong relationships with partners in industry and government, which give our students exposure to those sectors outside academia and to the research being done there.

...and a little bit about you.

What attracted you to this specific area of research?

I've always been very interested in risk and risk-taking, and how people go about it, why some people take more risk than others, and where risk taking goes wrong. Our lab gives me the opportunity to ask these questions with the best tools available at the moment. The research we do requires not only an environment of highly motivated and highly trained researchers but also a lot of complicated and expensive technology such as brain scanners. You only find such a set-up at leading universities like the University of Melbourne.

What was your very first job?

Analyst in investment banking at J.P. Morgan.

Who has had the biggest influence on your life/career so far?

The people around me, most importantly, my family, friends and the people I have worked with closely.

How do you maintain a work/life balance?

A combination of supportive family and friends, sport (long-distance running) and a healthy work environment.

You can read more about Carsten and the Brain, Mind and Markets lab here.