
Posts about ireland (old posts, page 1)

Modern postcodes

Ireland doesn't have a postcode system -- a state of affairs that causes endless problems with badly-designed web sites that expect them, as well as with courier deliveries. But of course in the internet age there's no reason to wait for the State to act...

It always surprises people that Ireland doesn't have post codes, when pretty much everywhere else does. Dublin has postal districts -- I used to work in Dublin 4, which covers about 50 square kilometres and so doesn't really function as an aid to delivery or navigation -- but there's nothing similar in the rest of the country. Add to this the fact that many country villages don't have street names either, and you start to understand why getting a package delivered by a courier usually involves a phone call to talk them along the route.

In actual fact the problem is less severe than you might expect, because the postal system works rather well. This is because the postmen and women get very good at learning where each person lives by name, so a name, village and county will normally get through (outside a major town). The villages are small and people tend not to move too frequently, so human-based routing works well for frequent contacts. For couriers and infrequent service providers, though, it's a different story. I usually have to take phone calls from the people who deliver heating oil, for example, because a twice-a-year delivery isn't enough for them to remember where we live.

One might think that this situation would be easily remedied: choose a postcode system and implement it. But this leads to the next thing that people often find surprising: many people in the countryside, despite the obvious benefits in terms of convenience and efficiency, are implacably opposed to postcodes. The fear is that such a system would do away with the quaint townland names: each village will often have several smaller townlands surrounding it, that get mentioned in the postal addresses. These often have a lot of history attached to them, and in some parts of the country are written in Irish even when nothing else is. Every time the idea of national postcodes is raised in the national press a whole host of letters are published opposing it and predicting the death of the rural Irish lifestyle, and it seems that this has been enough to stymie the implementation of the system on a national basis. There are a disproportionate number of Irish parliamentary seats in rural areas, so political parties are loath to do anything that alienates the rural vote.

In past times, that would have been it: the State doesn't act, end of story. But we're now in the internet age, and one doesn't have to wait for the State.

I just came across Loc8, a company that has established an all-Ireland post code system. They've devised a system of short codes that can be used to locate houses to within about 6m. So -- to take an example directly from the web site -- the Burlington Hotel in Dublin has Loc8 code NN5-39-YD7. The codes are structured according to the expected hierarchy of a zone, a locality and then a specific location, with the latter (as far as I can tell) being sparse and so not predictable from neighbouring codes: you can't derive the code of one property by knowing that of one nearby. (I might be wrong about that, though.)
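
Purely as an illustration of the zone-locality-location structure described above, here is a small Python sketch that splits a Loc8-style code into its three apparent parts. It assumes nothing about Loc8's real encoding beyond the hyphenated layout of published codes, and the function name is my own invention.

    # Illustrative only: split a Loc8-style code into its apparent
    # zone / locality / location components. This says nothing about
    # how Loc8 actually derives or validates codes.
    def split_loc8(code: str) -> dict:
        zone, locality, location = code.strip().upper().split("-")
        return {"zone": zone, "locality": locality, "location": location}

    print(split_loc8("NN5-39-YD7"))
    # {'zone': 'NN5', 'locality': '39', 'location': 'YD7'}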

So far so good. If the service is useful, people will use it: and apparently they are. You can enter a Loc8 code into some GPS systems already, which is a major step forward. The courier companies -- and even, apparently, the national postal service, An Post -- will take Loc8 codes too. There's also a plug-in for Firefox that will look up a Loc8 code from the context menu: try it on the code above. It's a bit clunky -- why does it need to pop up a confirmation dialogue? -- and integration with something like Hyperwords would make it even more usable, but it's a start.

What I like about Loc8 is that it's free and (fairly) open: you can look a code up on their web site and it'll be displayed using Google Maps. The integration with commercial GPS systems is a great move: I don't know if it's integrated with the Google navigation on Android, but if it isn't it'd be easy enough for Loc8 to do -- or indeed for anyone else, and that's the great bonus over (for example) the closed pay-world of UK postcode geolocation.

The real story here is that it's possible to mash up a national-scale system using extremely simple web tools, make it available over the web -- and then, if there's value enough, cut commercial deals with other providers to exploit the added value. That sort of national reach is a real novelty, and something we're bound to see more of: I'd like something similar for phone numbers, Skype names and the like, all mashed up and intermixed.

Call for panels in integrated network management

We're looking for expert panels to be run at the IFIP/IEEE International Symposium on Integrated Network Management in Dublin in May 2011.

IFIP/IEEE International Symposium on Integrated Network Management (IM 2011)

Dublin, Ireland, 23-27 May 2011
http://www.ieee-im.org/

Call for panels

Background

IM is one of the premier venues in network management, providing a forum for discussing a broad range of issues relating to network structure, services, optimisation, adaptation and management. This year's symposium has a special emphasis on effective and energy-efficient design, deployment and management of current and future networks and services. We are seeking proposals for expert discussion panels to stimulate and inform the audience as to the current "hot topics" within network management. An ideal panel will bring the audience into a discussion prompted by position statements from the panel members, taking account of both academic and industrial viewpoints. We are interested in panels on all aspects of network management, especially those related to the theme of energy awareness and those discussing the wider aspects of networks and network management. This would include the following topics, amongst others:

  • Multi-transport and multi-media networks
  • Network adaptation and optimisation
  • Co-managing performance and energy
  • The uses and abuses of sensor networks
  • The science of service design and implementation
  • Programming network management strategies
  • Tools and techniques for network management
  • Socio-technical integration of networks
  • Energy-efficiency vs equality of access
  • Network-aware cloud computing
  • The future of autonomic management
  • Coping with data obesity
  • Managing the next-generation internet

How to propose a panel

Please send a brief (1-page) proposal to the panel chairs, Simon Dobson and Gerard Parr. Your proposal should indicate the relevance of the panel to the broad audience of IM, and include the names of proposed panel members.

Important dates

  • Submission of panel proposals: 20 October 2010
  • Notifications of acceptance: mid-November 2010
  • Conference dates: 23-27 May 2011

Are the grey philistines really in charge?

Recently there's been an exchange in the Irish media about the decline of intellectuals in universities, calling into question whether universities are still fit for purpose given their funding and management structures. The fundamental question seems to me to be far deeper, and impacts on the UK and elsewhere as much as Ireland: what is -- or should be -- the relationship between academics and their funding sources? To what extent does the person paying the academic piper call the research tune?

The opening shot in this discussion was published in the Irish Times as an op-ed piece by Tom Garvin, an emeritus (retired but still recognised) professor of politics at UCD, Ireland's largest university, although perhaps his intervention was triggered by a rather more supportive editorial in the same paper a few days before. The piece is something of a polemic, but revolves around the lack of support for curiosity-driven ("blue skies") research within the university system. "A grey philistinism [Garvin writes] has established itself in our universities, under leaders who imagine that books are obsolete, and presumably possess none themselves." He then goes on to attack the application of science-based research methods such as team-based investigation to humanities, and observes that even for sciences there is no longer scope for free inquiry: "Researchers are being required by bureaucrats to specify what they are going to discover before the money to do the research is made available." The article provoked a large number of supportive comments from readers and a rather narrow response from UCD's management team.

The questions Garvin raises deserve consideration, and he should be commended for raising them. Indeed, they cut to the heart of whether universities are fit for purpose. Do the linking of funding to student numbers (whether under- or post-graduate) and to the pursuit of State-guided research programmes, the desire to profit from commercially significant results, and the need (or at least desire) to excel in international league tables impact on academics' ability to investigate the world and pass on their results freely?

(For full disclosure, I should point out that I'm a computer science academic who, until last year, worked at UCD.)

There are, of course, major differences in topic area and research philosophy between research in sciences and in humanities, and I'm sure it's fair to say that the same research approaches won't translate seamlessly from one to the other: studying and commenting on the writings of others wouldn't constitute computer science, for example. Having said that, though, there are advantages to be had from considering the ways in which techniques translate across fields. I'm sure some projects in history (to use Garvin's example) would benefit from a team-based approach, although clearly others wouldn't -- any more than all physics research is conducted in large teams with large machines.

As a science researcher I have to point out that the crux of this argument -- specifying results before being granted funding -- is of course nonsense. When I apply for a grant I'm required to describe what I intend to investigate, the methods I'll use and the expected benefits of the investigation. This is a long way from specifying what I'll find, which I don't, in fact, know in detail: that's the whole point, after all. I don't find this to be the mental straitjacket that Garvin seems to think it is: quite the contrary, spelling these matters out is an essential planning phase in designing and managing what is likely to be a complex, elongated and collaborative endeavour. If I can't explain what I want to do in a research proposal, then I don't know myself.

But the real question isn't about management: it's about justification. Research proposals can (and probably should) be written for any research programme, but they're typically used to get funding. And it's here that we encounter the constraints on blue-skies research. A funding agency, whether public or private, will only fund research from which they'll get an acceptable impact and return on investment. Different agencies will judge that return differently. Commercial funding usually follows topics that will lead, directly or indirectly, to future profits: it's hard to see how a company's board would be meeting its fiduciary duties if it did otherwise. Private foundations and State agencies such as research councils generally have a different definition of impact, for example the abstract scientific significance of a topic or its ability to effect social change. So to that extent, an academic who takes on research funding agrees, at least tacitly, to support the funder's chosen model for assessing their research. There are some nasty examples where this causes problems, the classic case being research sponsored by tobacco companies, and taking funding doesn't excuse researchers from their duties as scientists to collect and report their findings honestly and completely, but in the main the system doesn't cause too many dilemmas.

Given that many universities derive a large fraction of their income (approaching 100% for some) from the State, it is natural that the State will have an interest in directing resources to topics that it regards as more significant. To do otherwise would be to neglect the State's responsibility to the electorate. Academics are not democratically accountable: it is unreasonable to expect that they should be responsible for the allocation of funds they have no authority to collect.

However, there's another point that needs to be made. Most academic appointments centre around teaching and conducting research, and not around getting funding. This has important implications for Garvin's argument. The UK and Irish university systems are such that there is absolutely no constraint on an academic's ability to do curiosity-driven research: I can conduct research on whatever topic I choose, and publish it (or not) as I see fit, at my absolute discretion (absent ethical considerations). I would be meeting fully my obligations to my employer, since these relate to my meeting my teaching and research commitments. But I can't expect an external agency to fund me additionally, and it would be surprising if it were otherwise. Put another way, my academic position is my opportunity to conduct my own  curiosity-driven research, but if I require resources beyond my own I have to justify the wider benefits that will derive from that additional funding. There may be other side-effects: certainly in sciences, anyone without a significant funding profile would struggle to be promoted. But a requirement to justify the value one expects from a research programme before it's funded is hardly a crimp on academic freedom.

There's also a misunderstanding in some circles about how science is conducted: a perception that scientists go rather robotically about the conduct of the scientific method, collecting data and then drawing conclusions. This is hardly a surprising misperception, given that it's almost implicit in the structure of a typical paper. But it nonetheless misstates the importance of intuition and curiosity in deciding which hypothesis -- amongst the infinite possibilities -- one chooses to explore. (Peter Medawar. Is the scientific paper a fraud? The Listener 70, pp. 377-378. 12 September 1963.) Without curiosity there can't really be any science: in fact one could go further and say that science is simply curiosity with a bit of useful structure attached.

Garvin's other point, about the lack of influence of academics on public life, was picked up by Kevin Myers in the Irish Independent as evidence of cowardice and a disinclination to risk-taking. From the perspective of the humanities this may be true, and it's not for me to comment, but from the perspective of the sciences academics have become more engaged over time in the big issues: climate science, online privacy and evolutionary biology are the three highest-profile examples of where science academics have stepped into the public realm. It's also important to note that these three examples are first and foremost social in their impact: the science is there to inform public understanding, debate and policy.

It's unfortunate that in many of these discussions "academic" has been taken tacitly to imply "academic in the humanities" -- as though scientists were somehow less "academic" (whatever that means). In the past it was perhaps more expected for humanities academics to engage as public intellectuals, but a lot of the modern world's problems simply can't be understood, let alone be debated and addressed, without the serious and deep engagement of the scientists themselves in the debate.  This, perhaps, is the biggest challenge: to capture, leverage and pass on the excitement for learning and for research into the wider community. As long as that continues to be a central goal, universities remain fit for purpose.

Sensing financial problems

Just as we can think of large-scale, detailed financial modeling as an exercise in simulation and linked data, we can perhaps also see the detection of hazards as an exercise in sensor fusion and pervasive computing, turning a multitude of sensor data into derived values for risk and/or situations requiring attention. There's a wealth of research in these areas that might be applicable.

Environmental and other sensing is intended to make real-world phenomena accessible directly to computers. Typically we simply collect data and archive it for later analysis; increasingly we also move decision-making closer to the data in order to take decisions in close to real time about the ways in which the data is sensed in the future (so-called adaptive sensing) or to allow the computers to respond directly in terms of the services they provide (autonomic or pervasive systems).

Can we treat the financial markets as the targets of sensing? Well, actually, we already do. An index like the FTSE is basically providing an abstracted window onto the behaviour of an underlying process -- in this case a basket of shares from the top 100 companies listed on the London exchange -- that can be treated as an observation of an underlying phenomenon. This further suggests that the technology developed for autonomic and pervasive computing could potentially be deployed to observe financial markets.

In some sense, pricing is already based on sensing. A put option, for example -- where the buyer gains the right to compel the seller to buy some goods at some point in the future at some defined cost -- will, if exercised, have a definite value to the buyer then (when executed). Its value now (when sold) will be less than this, however, because of the risk that the option will not be exercised (because, for example, the buyer can sell the goods to someone else for more than the seller has contracted to pay for them). Deciding what value to assign to this contract is then a function over the expected future behaviour of the market for the underlying goods. This expectation is formed in part by observing the behaviour of the market in the past, combined with the traders' knowledge of (or guesses about) external factors that might affect the price.
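
To make the "pricing as an expectation over future behaviour" point concrete, here is a minimal Monte Carlo sketch in Python that estimates the present value of a put as the discounted average payoff over simulated future prices. The geometric-Brownian-motion price model and every parameter are illustrative assumptions of mine, not a description of how any real trading desk prices options.

    import math
    import random

    def put_value_mc(spot, strike, rate, vol, years, n_paths=100_000):
        """Estimate a European put's present value by simulating future prices."""
        total_payoff = 0.0
        for _ in range(n_paths):
            z = random.gauss(0.0, 1.0)
            # One simulated price for the underlying goods at expiry
            s_t = spot * math.exp((rate - 0.5 * vol ** 2) * years
                                  + vol * math.sqrt(years) * z)
            # The option is only worth exercising if the market price ends up
            # below the price the seller has contracted to pay
            total_payoff += max(strike - s_t, 0.0)
        # Discount the expected payoff back to today
        return math.exp(-rate * years) * total_payoff / n_paths

    print(put_value_mc(spot=100.0, strike=95.0, rate=0.03, vol=0.25, years=1.0))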

These external factors are referred to in pervasive computing as context, and are used to condition the ways in which sensor streams are interpreted (see Coutaz et al. for an overview). One obtains context from a number of sources, typically combining expert knowledge and sensor data. A typical pervasive system will build and maintain a context model bringing together all the information it knows about in a single database. We can further decompose context into primary context sensed directly from a data source and secondary context derived by some reasoning process. If we maintain this database in a semantically tractable format such as RDF, we can then reason over it in order to classify what's happening in the real world (situation recognition) and respond accordingly. Crucially, this kind of context processing can treat all context as being sensed, not just real-world data: we often "sense" calendars, for example, to look for clues about intended activities and locations, integrating web mining into sensing. Equally crucially, we use context as evidence to support model hypotheses ("Simon is in a meeting with Graeme and Erica") given by the situations we're interested in.
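
As a sketch of what such a context model might look like -- assuming the rdflib Python library and a vocabulary I've made up for the purpose -- primary context goes in as RDF triples, and the meeting situation is recognised by a SPARQL query over them.

    from rdflib import Graph, Namespace

    CTX = Namespace("http://example.org/context#")
    g = Graph()

    # Primary context: sensed or mined facts
    g.add((CTX.simon, CTX.locatedIn, CTX.room214))
    g.add((CTX.graeme, CTX.locatedIn, CTX.room214))
    g.add((CTX.erica, CTX.locatedIn, CTX.room214))
    g.add((CTX.simon, CTX.calendarSays, CTX.projectMeeting))

    # Situation recognition: are Simon, Graeme and Erica co-located while
    # Simon's calendar says he has an event on?
    result = g.query("""
        PREFIX ctx: <http://example.org/context#>
        ASK {
            ctx:simon  ctx:locatedIn ?room .
            ctx:graeme ctx:locatedIn ?room .
            ctx:erica  ctx:locatedIn ?room .
            ctx:simon  ctx:calendarSays ?event .
        }
    """)
    print(result.askAnswer)   # True -> treat the situation as active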

A lot of institutions already engage in automated trading, driven by the behaviour of indices and individual stocks. Cast into sensor-driven systems terminology, the institutions develop a number of situations of interest (a time to buy, hold, sell and so forth for different portfolios) and recognise which are currently active using primary context sensed from the markets (stock prices, indices) and secondary context derived from this sensed data (stock plummeting, index in free-fall). Recognising a situation leads to a particular behaviour being triggered.
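
A toy sketch of that pipeline, with thresholds I've invented: primary context (a window of index prices) yields secondary context (an "in free fall" flag), which in turn determines the active situation and hence the behaviour to trigger.

    def derive_secondary_context(prices, window=5, drop=0.05):
        """Secondary context: has the index fallen by more than `drop` over the window?"""
        recent = prices[-window:]
        return {"free_fall": (recent[0] - recent[-1]) / recent[0] > drop}

    def active_situation(secondary):
        # Map secondary context onto a situation that triggers a behaviour
        return "sell" if secondary["free_fall"] else "hold"

    prices = [102.0, 101.5, 99.8, 97.0, 95.1]   # primary context sensed from the market
    print(active_situation(derive_secondary_context(prices)))   # -> "sell"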

Linked data opens up richer opportunities for collecting context, and so for the management of individual products such as mortgages. We could, for example, sense a borrower's repayment history (especially for missed payments) and use this both to generate secondary context (a revised risk of default) and to identify situations of interest (default, impaired, at-risk). Banks do this already, of course, but there are advantages to the sensor perspective. For one, context-aware systems show us that it's the richness of the links between items of context that is the key to their usefulness. The more links we have, the more semantics we have over which to reason. Secondly, migrating to a context-aware platform means that additional data streams, inferences and situations can be added as and when required, without needing to re-architect the system. Given the ever-increasing amount of information available on-line, this is certainly something that might become useful.
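
In the same style, a minimal sketch of the mortgage example, with thresholds that are purely illustrative: the repayment history is the primary context, the count of missed payments the secondary context, and the situations are the ones named above.

    def missed_payments(history):
        """Primary context: `history` is a list of booleans, True = payment received."""
        return sum(1 for paid in history if not paid)

    def classify_loan(history):
        # Secondary context (missed-payment count) mapped onto a situation
        missed = missed_payments(history)
        if missed >= 3:
            return "default"
        if missed == 2:
            return "impaired"
        if missed == 1:
            return "at-risk"
        return "performing"

    print(classify_loan([True, True, False, True, False]))   # -> "impaired"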

Of course there are massive privacy implications here, not least in the use of machine classifiers to evaluate -- and of course inevitably mis-evaluate -- individuals' circumstances. It's important to realise that this is going on anyway and isn't going to go away: the rational response is therefore to make sure we use the best approaches available, and that we enforce audit trails and transparency to interested parties. Credit scoring systems are notoriously opaque at present -- I've had experience of this myself recently, since credit history doesn't move easily across borders -- so there's a screaming need for systems that can explain and justify their decisions.

I suspect that the real value of a sensor perspective comes not from considering an individual institution but rather an entire marketplace. To use an example I'm familiar with from Ireland, one bank at one stage pumped its share price by having another bank make a large deposit -- but then loaned this second bank the money to fund the deposit. Contextualised analysis might have picked this up, for example by trying to classify what instruments or assets each transaction referred to. Or perhaps not: no system is going to be fully robust against the actions of ingenious insiders. The point is not to suggest that there's a foolproof solution, but rather to increase the amount and intelligence of surveillance in order to raise the bar. Given the costs involved in unwinding failures when detected late, it might be an investment worth making.

Computer science and the financial crisis

Many people expected a financial crisis; many also expected it to be caused by automated trading strategies driving share prices. As it turned out, that aspect of computing in finance didn't behave as badly as expected, and the problems arose in the relatively computing-free mortgage sector. Is there any way computing could have helped, or could help avoid future crises?

Over brunch earlier this week my wife Linda was worrying about the credit crunch. It's especially pressing since we're in Ireland at the moment, and the whole country is convulsed with the government's setting-up of the National Asset Management Agency (NAMA) to take control of a huge tranche of bad debts from the Irish banks. She asked me, "what can you do, as a computer scientist, to fix this?" Which kind of puts me on the spot thinking about a topic about which I know very little, but it got me thinking that there may be areas where programming languages and data-intensive computing might be relevant for the future, at least. So here goes....

The whole financial crisis has been horrendously complex, of course, so let's start by focusing on the area I know best: the Irish "toxic tiger" crash. This has essentially been caused by banks making loans to developers to finance housing and commercial property construction, both of which were encouraged through the tax system. Notably the crash was not caused by loans to sub-prime consumers and subsequent securitisation, and so is substantively different to the problems in the US (although it was triggered by them through tightening credit markets). The loans were collateralised on the present (or sometimes future, developed) value of the land -- and sometimes on "licenses to build" on land rather than the land itself -- often cross-collateralised with several institutions and developments, and financed by borrowings from the capital markets rather than from deposits (of which Ireland, as a country of 4M people, has rather little). As capital availability tightened across 2008-9 the property market stalled, the value of the land dropped (by 80% in some cases, and by 100% for the licenses to build), and the banks have been left with bad loans and capital shortfalls of at least EUR60B, which the government, for reasons strongly suspected to be political and ideological rather than necessarily being in the best interests of the taxpayer, is now taking onto the public balance sheet rather than allowing them to be carried by the banks' owners and bondholders. The crash has also, as might be expected, revealed enormous shortcomings in banks' risk management, their understanding of their own holdings and exposures, some unbelievably lax supervision by the authorities, and a lot of suspected corruption and corporate fraud.

(The above is a gross simplification, of course. The Irish Economy blog is the best source of details, being run by a large fraction of Ireland's leading academic economists who have been depressingly accurate as to how the crisis would unfold.)

So what can computer science say about this mess? To start with, there are several key points we can pull out of the above scenario:

  1. Modelling. It's hard to know what the effect of a given stressor would be on the system: what exactly happens if there's sector-wide unemployment, or a localised fall in purchasing? Most banks seem to have conducted modelling only at the gross statistical level.
  2. Instrument complexity. Different financial instruments respond to stimuli in different ways. A classic example is where unpaid interest payments are rolled up into the principal of a mortgage, changing its behaviour completely. Parcelling up these instruments makes their analysis next to impossible.
  3. Traceability. The same assets appear in different places without being detected up-front, which makes all the instruments involved significantly more risky and less valuable.
  4. Public trust. The "stress tests" conducted by regulators were conducted in secret without independent scrutiny, as have been the valuations applied to the bad loans. The public is therefore being asked to sign off on a debt of which it knows nothing.
Clearly this is very complicated, and statistical modelling is only going to provide us with an overview. The simplifications needed to get the mathematics to work will be heroic, despite the power of the underlying analytic techniques.

So let's treat the problem as one of simulation rather than analysis. A mortgage is generally treated as data with a particular principal, interest rate, default rate and so on. It can, however, also be viewed as a process, a computational object: at each accounting period (month) it takes in money (the mortgage payment) and consequently has a value going forward. There is a risk that the payment won't come in, which changes the risk profile and risk-weighted value of the mortgage. It will respond in a particular way, perhaps by re-scheduling payments (increasing the term of the mortgage), or by trying to turn itself into a more liquid object like cash (foreclosing on the loan and selling the house). Foreclosure involves interacting with other objects that model the potential behaviour of buyers, to indicate how much cash the foreclosure brings in: in a downturn, the likelihood of payment default increases and the cash value of mortgages foreclosed upon similarly reduces.

The point is that there's relatively little human, banking intervention involved here: it's mostly computational. One could envisage a programming language for expressing the behaviours of mortgages, which defines their responses to different stimuli and defines an expected present value for the mortgage discounted by the risks of default, the amount recoverable by foreclosure, and so on.
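
As a sketch of what "a mortgage as a computational object" might look like -- in Python rather than the dedicated language imagined above, and with rules and numbers that are deliberately crude inventions of mine -- each call to step() is one accounting period, and the object reschedules or forecloses itself in response to missed payments, reporting a risk-weighted value as it goes.

    class Mortgage:
        def __init__(self, principal, annual_rate, term_months):
            self.principal = principal
            self.monthly_rate = annual_rate / 12
            self.term_months = term_months
            self.missed = 0
            self.state = "performing"

        def monthly_payment(self):
            # Standard annuity formula over the (crudely tracked) remaining term
            r, n = self.monthly_rate, self.term_months
            return self.principal * r / (1 - (1 + r) ** -n)

        def step(self, payment_received):
            """Advance one accounting period (month)."""
            if self.state == "foreclosed":
                return
            self.principal *= 1 + self.monthly_rate      # interest accrues
            if payment_received:
                self.principal -= self.monthly_payment()
                self.missed = 0
            else:
                self.missed += 1
                if self.missed == 2:
                    self.term_months += 12               # reschedule: extend the term
                    self.state = "rescheduled"
                elif self.missed >= 4:
                    self.state = "foreclosed"            # try to turn into (less) cash

        def value(self, recovery_rate=0.6):
            """Crude risk-weighted value going forward, discounted by state."""
            weights = {"performing": 1.0, "rescheduled": 0.85, "foreclosed": recovery_rate}
            return self.principal * weights[self.state]

    m = Mortgage(principal=250_000, annual_rate=0.045, term_months=300)
    for paid in [True, True, False, False, True]:
        m.step(paid)
    print(m.state, round(m.value()))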

So the first thing computer science can tell us about the financial crash is that the underlying instruments are essentially computational, and can be represented as code. This provides a reference model for the behaviour we should expect from a particular instrument exposed to particular stimuli, expressed clearly as code.

We can go a stage further. If loans are securitised -- packaged up into another instrument whose value is derived from that of the underlying assets, like a futures contract -- then the value of the derivative can be computed from the values of the assets underlying it. Derivatives are often horrifically complicated, and there might be significant advantages to be had in expressing their behaviours as code.
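
A minimal sketch of that idea, using a toy loan model of my own: once each underlying asset can report its own risk-weighted value, the derivative's value is literally a computation over those objects.

    from dataclasses import dataclass

    @dataclass
    class Loan:
        principal: float
        default_prob: float
        recovery_rate: float = 0.6

        def value(self):
            # Expected value: full principal if the loan performs,
            # partial recovery if it defaults
            return ((1 - self.default_prob)
                    + self.default_prob * self.recovery_rate) * self.principal

    def pool_value(loans):
        # The securitised instrument's value is derived from its underlyings
        return sum(loan.value() for loan in loans)

    loans = [Loan(200_000, 0.02), Loan(150_000, 0.10), Loan(300_000, 0.05)]
    print(round(pool_value(loans)))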

How do we get the various risk factors? Typically this is done at a gross level across an entire population, but it need not be. We now live in an exabyte era. We can treat the details of the underlying asset as metadata on the code: the location of a house, its size and price history, the owners' jobs and so forth. We currently hold this data privately, and as statistical aggregates, but there's no reason why we can't associate the actual details with each loan or instrument, and therefore with each derivative constructed from them. This, after all, is what linked data is all about. It means that each financial instrument is inherently computational, and carries with it all the associated metadata. This little package is the loan, to all intents and purposes.

So the second thing computer science can tell us is that we can link instruments, assets and data together, and track between them, using the standard tools and standards of the semantic web. This means we can query them at a high semantic level, and use these queries to extract partitions of the data we're interested in examining further. There's no scientific reason why this can't be done across an entire market, not simply within a single institution.
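
As a sketch of that kind of linking and querying -- again using rdflib, with a vocabulary and data invented for the purpose -- loan metadata is held as triples and a SPARQL query pulls out the partition of interest.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/loans#")
    g = Graph()

    # Metadata linked to each loan: the property it's secured on and the borrower's sector
    g.add((EX.loan1, EX.securedOn, EX.house1))
    g.add((EX.house1, EX.inCounty, Literal("Sligo")))
    g.add((EX.loan1, EX.borrowerSector, Literal("construction")))

    g.add((EX.loan2, EX.securedOn, EX.house2))
    g.add((EX.house2, EX.inCounty, Literal("Dublin")))
    g.add((EX.loan2, EX.borrowerSector, Literal("retail")))

    # Extract a partition of interest: construction-sector loans secured in Sligo
    cohort = g.query("""
        PREFIX ex: <http://example.org/loans#>
        SELECT ?loan WHERE {
            ?loan ex:securedOn ?property .
            ?property ex:inCounty "Sligo" .
            ?loan ex:borrowerSector "construction" .
        }
    """)
    for row in cohort:
        print(row.loan)   # -> http://example.org/loans#loan1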

The net result of the above is that, given this coding and linking, the financial system can be run in simulation. We can conduct stress tests at a far finer resolution by, for example, running semantic queries to extract a population of interest, making them all redundant (in simulation), and seeing what happens not only to their loans, but to the securitised products based on them -- since everything's just a computation. Multiple simulations can be run to explore different future scenarios, based on different risk weightings and so forth.
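
Here's a toy end-to-end sketch of such a stress test, with numbers I've made up: select a cohort, make them redundant in simulation by raising their default risk, and recompute the value of the pool built on top of them.

    loans = [
        {"id": "loan1", "sector": "construction", "principal": 200_000, "p_default": 0.03},
        {"id": "loan2", "sector": "construction", "principal": 250_000, "p_default": 0.04},
        {"id": "loan3", "sector": "retail",       "principal": 150_000, "p_default": 0.02},
    ]

    RECOVERY = 0.6

    def loan_value(loan):
        p = loan["p_default"]
        return ((1 - p) + p * RECOVERY) * loan["principal"]

    def pool_value(loans):
        # The securitised product's value is just a computation over its underlyings
        return sum(loan_value(l) for l in loans)

    before = pool_value(loans)

    # Scenario: sector-wide redundancies in construction
    for loan in loans:
        if loan["sector"] == "construction":
            loan["p_default"] = 0.30

    print(f"pool value: {before:,.0f} -> {pool_value(loans):,.0f}")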

(This sounds like a lot of data, so let's treat the UK housing market as a Fermi problem and see if it's feasible. There are 60M people in the UK. Assume 2-person families on average, yielding 30M domestic houses to be mortgaged. Some fraction of these are mortgage-free, say one third, leaving 20M mortgages. The workforce is around 40M working in firms with an average of say 20 employees, each needing premises, yielding a further 2M commercial mortgages. If each mortgage needs 10Kb of data to describe it, we have 22M objects requiring about 220Gb of data: a large but not excessive data set, especially when most objects execute the same operations and share a lot of common data: certainly well within the simulation capabilities of cloud computing. So we're not in computationally infeasible territory here.)
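
The same Fermi estimate as a few lines of Python (assuming "10Kb" means ten kilobytes per mortgage record):

    population = 60e6
    domestic_houses = population / 2              # ~2-person households
    domestic_mortgages = domestic_houses * 2 / 3  # a third are mortgage-free
    workforce = 40e6
    commercial_mortgages = workforce / 20         # ~20 employees per firm, one premises each
    objects = domestic_mortgages + commercial_mortgages
    bytes_total = objects * 10e3                  # 10 KB per object
    print(f"{objects/1e6:.0f}M objects, ~{bytes_total/1e9:.0f} GB")
    # -> 22M objects, ~220 GB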

We can actually go a few stages further. Since we have a track-back from instrument to metadata, we can learn the risks over time by observing what happens to specific cohorts of borrowers exposed to different stimuli and stresses. Again, linked data lets us identify the patterns of behaviour happening in other data sets, such as regional unemployment (now directly available online in the UK and elsewhere). The more data goes online, the easier it is to spot trends, and the finer the granularity at which one can learn the behaviours the system is exhibiting. As well as running the simulation forwards to try to see what's coming, we can run it backwards to learn and refine parameters that can then be used to improve prediction.
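
A sketch of the learning step, using scikit-learn and entirely invented data: features drawn from the loan metadata and linked data sets (here a loan-to-value ratio and a regional unemployment rate) are fitted against observed defaults, and the fitted model then supplies risk parameters for the forward simulation.

    from sklearn.linear_model import LogisticRegression

    # Features: [loan_to_value, regional_unemployment]; label: did the loan default?
    X = [[0.60, 0.05], [0.95, 0.14], [0.80, 0.11], [0.50, 0.04],
         [0.90, 0.13], [0.70, 0.06], [0.85, 0.12], [0.55, 0.05]]
    y = [0, 1, 1, 0, 1, 0, 1, 0]

    model = LogisticRegression().fit(X, y)

    # The learned parameters feed back into the simulation as risk weightings
    print(model.predict_proba([[0.88, 0.12]])[0][1])   # estimated default probability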

Therefore the third thing computer science can tell us is that the financial markets as a whole are potentially a rich source of machine learning and statistical inference, which can be conducted using standard techniques from the semantic web.

Furthermore, we can conduct simulations in the open. If banks have to represent their holdings as code, and have to link to (some of) the metadata associated with a loan, then regulators can run simulations and publish their results. There's a problem of commercial confidentiality, of course, but one can lock down the fine detail of the metadata if required (identifying owners by postcode and without names, for example). If each person, asset and loan has a unique identifier, it's easier to spot cross-collateralisations and other factors that weaken the values of instruments, without needing to be able to look inside the asset entirely. This exposes a bank's holdings in metadata terms -- residential loans in particular areas -- but that's probably no bad thing, given that the lack of information about securitised products contributed to the fall.
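
Spotting cross-collateralisation from identifiers alone is then straightforward, as in this sketch with made-up identifiers: flag any asset pledged against more than one loan, without ever looking inside the asset itself.

    from collections import defaultdict

    loan_collateral = {
        "loan-001": ["asset-17", "asset-23"],
        "loan-002": ["asset-23"],             # asset-23 pledged twice
        "loan-003": ["asset-41"],
    }

    pledges = defaultdict(list)
    for loan, assets in loan_collateral.items():
        for asset in assets:
            pledges[asset].append(loan)

    for asset, loans in pledges.items():
        if len(loans) > 1:
            print(f"{asset} is collateral for multiple loans: {loans}")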

This is the last thing computer science can tell us. Open-source development suggests that having more eyes on a problem reduces the number, scope and severity of bugs, and allows for re-use and re-purposing far beyond what might be expected a priori. For a market, more eyes means a better regulatory and investor understanding of banks' positions, and so (in principle) a more efficient market.

For all I know, banks already do a lot of this internally, but making it an open process could go a long way towards restoring the confidence of both taxpayers and future investors. There's no time like a crash to try out new ideas.