
What computer science can learn from finance

Science should be a two-way street. In the same way that we might view the financial system as a computational model that can be simulated, there are undoubtedly things that computing can learn from finance. In particular, many financial instruments and techniques evolved to manage uncertainty and risk: issues that are very much a part of computer science in general and of sensorised systems in particular. The financial crunch hasn't invalidated the significance these techniques may have for computer science and systems architecture.

There's an increasing acceptance that economics as a whole has lessons for computing. The connection may not be obvious, but if one considers that most modern systems of any scale are built from components, and often from agents whose goals are not necessarily completely aligned and whose actions need not be trusted, one begins to see how a model that assumes only self-interest might have relevance. Typically economics is used to provide models for large-scale behaviour, rather than being treated as a way to formulate or program the agents themselves. (For a survey of recent results see Neumann, Baker, Altmann and Rana (eds). Economic models and algorithms for distributed systems. Birkhäuser. 2010.)

How about the individual instruments, though?

Let's take a step back and think what a financial instrument actually is, or rather what its goals are. To make money, clearly, but making money is easy if you know the exact circumstances in which a transaction will take place. It's generally accepted in the economics of ordinary trade, for example, that both sides of the transaction win from the trade: the seller will only sell if the goods sell for more than he spent in creating them, while the buyer will only buy if the goods have a higher utility value to him than the cost of their purchase. (It's interesting that it's necessary to point this out, since a lot of people tacitly assume that trade is only good for one side or the other, not for both.) The difference between this sort of economic activity and financial instruments such as shares and derivatives is that the traders know (most of) the circumstances that affect their valuations, whereas the value of a mortgage or share is affected by events that haven't happened yet.

If we represent instruments as computational objects, then we can look at the ways in which this uncertainty might be managed. A common way is to identify the risks that affect the value of an instrument, and hedge them. To hedge a risk, you find another instrument whose value will change in a way that balances that of the initial risk. The idea is that a loss on the first is compensated, at least in part, by a gain on the second.

The best example I can find — given my very limited understanding — is airline fuel. An airline knows it needs fuel in the future, and can even estimate how much it will need, but doesn't want to buy it all up-front because things might change. This means that the airline doesn't know how much the fuel will cost, since the price may change. A large increase would cost money, since tickets already sold would reflect the earlier, lower fuel price; passing the increase on later as a surcharge is extremely unpopular. What the airline can do is to buy a fuel future, a call option, that gives it the right to buy fuel from someone at a fixed price per litre. That is, someone guarantees that they will sell the airline the fuel it needs in (say) six months at a given price. The option itself costs money, above and beyond the cost of the fuel, but if the prices are right the airline is insulated against surges in the fuel price. If in six months the cost of fuel is higher than the price in the call option, the airline exercises the option, buys the fuel, and makes money versus what it would have paid; if the price is less than that in the call option, the airline just writes off the cost of the option and buys the fuel in the market. Either way the airline caps its exposure and so controls its future risk. (If this all sounds one-sided, the entity selling the option to the airline needs to make money too — perhaps by having a source of fuel at a known price, or having a stockpile on hand, or simply betting that the prices will move in a certain way.)
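The airline's position can be sketched as a small calculation. This is purely illustrative — the function name, strike price, and premium are invented for the example, not real market data:

```python
# Hypothetical sketch of the airline's hedged fuel purchase described above.
# All figures (strike, premium, volume) are invented for illustration.

def hedged_fuel_cost(market_price: float, strike: float, premium: float,
                     litres: float) -> float:
    """Total cost of buying `litres` of fuel while holding a call option.

    If the market price exceeds the strike, exercise the option and buy
    at the strike; otherwise let the option lapse and buy at market.
    Either way the premium has already been paid.
    """
    price_paid = min(market_price, strike)  # exercise only when it helps
    return litres * price_paid + premium

# The option caps exposure: cost never exceeds strike * litres + premium.
surged = hedged_fuel_cost(market_price=1.40, strike=1.00,
                          premium=5000, litres=1_000_000)
lapsed = hedged_fuel_cost(market_price=0.80, strike=1.00,
                          premium=5000, litres=1_000_000)
print(surged)  # 1005000.0 — price surged, option exercised
print(lapsed)  # 805000.0 — price fell, option written off, bought at market
```

Whatever the market does, the airline's cost is bounded above, which is exactly the point of the hedge.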

There is already experience in writing financial instruments, including derivatives and the like, using programming languages. (See, for example, Peyton Jones and Eber. Composing contracts: an adventure in financial engineering. Proc. ICFP. Montreal, CA. 2000.) Instruments represented as computational objects can be linked to their hedges. If we're simulating the market, we can also simulate the values of hedges and see under what circumstances their values would fall alongside the original assets. That shows up the risks of large crashes. At present this is done at a coarse statistical level, but if we link instruments to their metadata we can get a much finer simulation.
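A minimal sketch of this idea — instruments as objects carrying a valuation function and a link to their hedge, simulated across scenarios. The class, valuation functions, and numbers are all invented here; they are not the representation used in the Composing Contracts paper:

```python
# Sketch: instruments as computational objects linked to their hedges,
# valued under simulated market scenarios. All names and figures invented.
from dataclasses import dataclass
from typing import Callable, Dict, Optional

Scenario = Dict[str, float]  # e.g. {"fuel_price": 1.2}

@dataclass
class Instrument:
    name: str
    value: Callable[[Scenario], float]
    hedge: Optional["Instrument"] = None

    def net_value(self, s: Scenario) -> float:
        """Value of the instrument together with its attached hedge."""
        v = self.value(s)
        return v + (self.hedge.value(s) if self.hedge else 0.0)

# A ticket book loses value as fuel gets dearer; the call option gains.
tickets = Instrument("tickets", lambda s: 100.0 - 50.0 * s["fuel_price"])
option = Instrument("fuel call",
                    lambda s: max(0.0, 40.0 * (s["fuel_price"] - 1.0)))
tickets.hedge = option

# Sweep scenarios: the hedged position falls far less than the raw asset.
for price in (0.8, 1.0, 1.5):
    s = {"fuel_price": price}
    print(price, tickets.value(s), tickets.net_value(s))
```

Sweeping many scenarios like this, and looking for the ones where instrument and hedge fall together, is the coarse version of the finer simulation the paragraph above suggests.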

We can potentially use a similar technique for any system that's exposed to future uncertainty, though. Systems that do inference have to live with the risk that their inferences will be wrong. In a pervasive computing system such as a smart building, for example, one typically looks at a range of sensors to try to determine what's happening and respond accordingly (by opening doors, providing information, sounding alarms or whatever else is needed by the application). Most such actions occur by inference, and so can be wrong.

How might the lessons of finance help? We have a process that is dealing with future uncertainty, and whose utility is governed by how well it manages to address that uncertainty relative to what’s “really happening” in the world. If we rely on our single inference process, and it makes a mistake — as it inevitably will — we’re left completely exposed: the utility of the system is reliant on the correctness of the single inference function. Put in the terms above, it isn’t hedged.

So we might require each inference process to come with a hedge. That is to say, each process not only specifies what should happen as a result of its determination of the current situation; it also specifies what should happen if we get it wrong, if later evidence shows that the inference we made was mistaken. This might be something simple like reversing the action we took: we turned the lights on because we thought a room was occupied, and in mitigation, if we're wrong, we turn the lights off again. The "cost" of this action is some wasted power. Some processes aren't reversible, of course: if we (think we) recognise someone at a door, and open it to admit them, and then discover we made a mistake and it's not the person we thought, just locking the door again won't work. We could, however, take some other action (sound an alarm?), and accept a "cost" that is expressed in terms of the inappropriate or malicious actions the person might have taken as a result of our mistake. Moreover, because we've combined the action and its hedge, we can assess the potential costs involved and perhaps change the original inference algorithm to, for example, require more certainty.
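The pairing of action, hedge, and error cost might be sketched like this. The class, the particular threshold rule, and the confidence values are all invented for the example; a real system would derive them from its application:

```python
# Sketch: each inferred action carries its hedge and an estimated cost of
# error; costlier mistakes demand more certainty. All values invented.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class HedgedAction:
    act: Callable[[], None]        # behaviour on a positive inference
    hedge: Callable[[], None]      # mitigation if the inference proves wrong
    cost_of_error: float           # estimated cost of acting mistakenly

log: List[str] = []

lights = HedgedAction(
    act=lambda: log.append("lights on"),
    hedge=lambda: log.append("lights off"),     # reversible: just undo it
    cost_of_error=0.1,                          # some wasted power
)
door = HedgedAction(
    act=lambda: log.append("door opened"),
    hedge=lambda: log.append("alarm sounded"),  # irreversible: mitigate
    cost_of_error=100.0,
)

def run(action: HedgedAction, confidence: float) -> None:
    # The cost of error feeds back into how certain the inference must be.
    threshold = 0.5 if action.cost_of_error < 1.0 else 0.95
    if confidence >= threshold:
        action.act()

run(lights, confidence=0.6)   # acts: a mistake here is cheap
run(door, confidence=0.97)    # acts: certainty high enough for the costly case
door.hedge()                  # later evidence shows a mistake: run the hedge
print(log)                    # ['lights on', 'door opened', 'alarm sounded']
```

The interesting design point is that the error cost lives alongside the action, so the threshold rule can be tuned per behaviour rather than globally.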

Essentially each behaviour that's triggered comes with another, related behaviour that's triggered in the event that the first one shouldn't have been. There are some interesting lifecycle issues involved in turning this into a programming system, since the lifetime of the hedging behaviour might be longer than that of the original behaviour: even after the person has left the room, we might want to sound the alarm if we realise we made a mistake letting them in. The costs of mistakes might also be a function of time, so that problems detected later are more costly.
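Those lifecycle issues might look something like the following sketch, with simulated timestamps and invented cost figures: the hedge persists after the original behaviour has finished, and the assessed cost of a mistake grows with how late it is detected:

```python
# Sketch: a hedge that outlives the behaviour it guards, with a cost that
# grows over (simulated) time. All names and figures are invented.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Hedge:
    mitigate: Callable[[], None]
    base_cost: float
    cost_per_minute: float
    taken_at: float                 # simulated time the action happened
    active: bool = True

    def trigger(self, now: float) -> float:
        """Run the mitigation and return the time-dependent cost."""
        self.active = False
        self.mitigate()
        return self.base_cost + self.cost_per_minute * (now - self.taken_at)

alarms: List[str] = []
# Door opened at t=0; the person leaves at t=5, but the hedge lives on.
h = Hedge(mitigate=lambda: alarms.append("alarm"),
          base_cost=10.0, cost_per_minute=2.0, taken_at=0.0)
# The mistake is only recognised at t=30: mitigation still fires,
# and the later detection makes the assessed cost higher.
cost = h.trigger(now=30.0)
print(alarms, cost)   # ['alarm'] 70.0
```

Keeping the hedge registered until it is explicitly discharged, rather than tying it to the original behaviour's lifetime, is what lets the alarm still sound long after the door has closed.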

The point here is not that pervasive and sensor systems can get things wrong — of course they can — but that we need to design systems on the assumption that they will get things wrong, and account for the costs (financial and otherwise) that this exposes us to and the actions we can take to mitigate them. The financial system does this: it recognises that there is uncertainty in the world and tries to construct a network of agents which will gain value regardless of the uncertain events that happen. The process isn’t perfect, as we’ve seen in recent financial events, but it’s at least a recognition within the system of the impact that uncertainty will have. That’s something we could learn in, and incorporate into, computer science and systems engineering.
