Research fellowships available in Dublin

Two post-doctoral positions in smart cities are now available at Trinity College Dublin.

Applications are invited for two Postdoctoral Research Fellowships in Autonomic Service-Oriented Computing for Smart Cities Applications at Trinity College Dublin's Distributed Systems Group, to investigate the provision of a new service-oriented computing infrastructure that provides demand-based composition of software services interacting with a city-wide, dynamic network infrastructure. The project will investigate autonomic adaptation of services and infrastructure, ensuring resilient service provision within an integrated, city-wide system.

Applicants should have a Ph.D. in Computer Science, Computer Engineering, or a closely-related discipline, together with strong C++/C#/Java development skills. Experience with autonomic computing, service-oriented middleware, and/or smart city technologies is desirable, as are strong mathematical skills.

The project is supported by Science Foundation Ireland under the Principal Investigator programme (2014-2018) and will be conducted in collaboration with Cork Institute of Technology, NUI Maynooth, the IBM Smarter Cities Research Centre, the Intel Intelligent Cities Lab, EMC2 Research Europe, and Arup. The position is tenable from September 2014.

Please apply by email to Siobhan.Clarke@scss.tcd.ie, quoting "Smart Cities Fellowship" in the subject line. Applications should include a curriculum vitae in PDF format, giving full details of qualifications and experience, together with the names of two referees. The closing date for applications is 20 June 2014. Trinity College is an equal opportunities employer.

Call for papers: new journal on self-adaptive systems

Papers are welcome for the EAI Endorsed Transactions on Self-Adaptive Systems.

EAI Transactions on Self-Adaptive Systems

http://eai.eu/transaction/self-adaptive-systems

Editor-in-Chief: Dr. Emil Vassev, Lero, the Irish Software Engineering Research Centre, University of Limerick, Ireland

SCOPE

This journal seeks contributions from leading experts in the research and practice of self-adaptive systems that connect theory and practice, with the ultimate goal of bringing both science and industry closer to the so-called "autonomic culture" and the successful realisation of self-adaptive systems. Both theoretical and applied contributions on the relevance and potential of engineering methods, approaches, and tools for self-adaptive systems are particularly welcome. This applies to application areas and technologies such as:
  • adaptable user interfaces
  • adaptable security and privacy
  • autonomic computing
  • dependable computing
  • embedded systems
  • genetic algorithms
  • knowledge representation and reasoning
  • machine learning
  • mobile ad hoc networks
  • mobile and autonomous robots
  • multi-agent systems
  • peer-to-peer applications
  • sensor networks
  • service-oriented architectures
  • ubiquitous computing
The same holds for the many research fields that have already investigated some aspects of self-adaptation from their own perspectives, such as fault-tolerant computing, distributed systems, biologically inspired computing, distributed artificial intelligence, integrated management, robotics, knowledge-based systems, machine learning, and control theory.

MANUSCRIPT SUBMISSION

Manuscripts should present original work within the scope of the journal and must be submitted exclusively to this journal: they must not have been published before and must not be under consideration for publication elsewhere. Significantly extended and expanded versions of papers published in conference proceedings can be submitted, provided they are accompanied by a detailed description of the additions. Regular papers are limited to a maximum of 20 pages. Prepare and submit your manuscript by following the instructions provided here.

OPEN ACCESS

Authors are not charged any publication fees, and their papers will be published online with Open Access. Open Access is a publishing model in which the electronic copy of the article is made freely available with permission for sharing and redistribution. Currently, all articles published in the EAI Endorsed Transactions series are Open Access under the terms of the Creative Commons Attribution license and published in the European Union Digital Library.

EDITORIAL BOARD
  • Christopher Rouff, Johns Hopkins Applied Physics Laboratory, USA
  • Danny Weyns, Linnaeus University, Sweden
  • Franco Zambonelli, UNIMORE, Italy
  • Genaina Rodrigues, University of Brasilia, Brazil
  • Giacomo Cabri, UNIMORE, Italy
  • Imrich Chlamtac, CREATE-NET Research Consortium, University of Trento, Italy
  • James Windsor, ESTEC, European Space Agency, Netherlands
  • Michael O'Neill, UCD, Ireland
  • Mike Hinchey, Lero, the Irish Software Engineering Research Centre, University of Limerick, Ireland
  • Richard Antony, University of Greenwich, UK
  • Simon Dobson, University of St Andrews, UK

Solaris

Stanisław Lem

1961


An atmospheric and episodic tale of first contact. The descriptions are wonderful, and the focus on character is far more detailed than is common even in first-class science fiction. Lem leaves most of the plot elements unfinished, which is somewhat dissatisfying at one level but leaves plenty of space for the reader's imagination to play.

3/5. Finished 14 May 2014.

(Originally published on Goodreads.)

Flash Boys: A Wall Street Revolt

Michael Lewis

2014


Another of Michael Lewis' now-classic tales of Wall Street misadventure, this one focusing on the all-but-unseen - and even less understood - growth of high-frequency trading (HFT). It follows the efforts of a small group of insiders to create a new stock exchange that's immune, both by policy and by design, to the arbitrage strategies HFT uses to game the conventional exchanges. A fascinating cast of characters crosses its pages, including Russian programmers, disaffected traders, and a network guy from Dublin - all working to make a system that was paying them well disappear, in the interests of fairness (and their own long-term financial gain).

At one level this book is less satisfying than The Big Short: Inside the Doomsday Machine, perhaps because the story still hasn't finished. The reader is left wanting to know the fate of the new IEX exchange, and how the market changed as a result. For a techie, it's also unsatisfying that so much of the technology remains unexplored, although exploring it would obviously have made the book inaccessible to anyone but a computer junkie: perhaps there's a much more technical follow-up that could be written.

Although the story mainly revolves around a case of market failure - high-frequency traders capturing huge value while taking no risk and providing no real advantage - it's also, in a strange way, an example of market success, when Goldman Sachs and other banks realise that their support of HFT is simply too risky for the gains they're capturing themselves. There's also an irony in the banks worrying that, in the event of another crash, they will take the losses while the HFT firms walk away with the gains - exactly the reverse of the situation after the 2008 crash, when the public took up the banks' bad debts. Whether this is a sign of things to come is hard to decide, but it does show how even the most dysfunctional system can be changed when people recognise its dysfunction and are prepared to act to remediate it.

4/5. Finished 03 May 2014.

(Originally published on Goodreads.)

Let's teach everyone about big data

Demolishing the straw men of big data.

This post comes about from reading Tim Harford's opinion piece in the Financial Times, in which he offers a critique of "big data": the idea that we can perform all the science we want simply by collecting large datasets and then letting machine learning and other algorithms loose on them. Harford deploys a whole range of criticisms against this claim, all of which are perfectly valid: sampling bias will render a lot of datasets worthless; correlations will appear without causation; the search goes on without hypotheses to guide it, and so isn't well-founded in falsifiable predictions; and an investigator without a solid background in the science underlying the data has no way to correct these errors. The critique is, in other words, damning. The only problem is that this isn't what most scientists with an interest in data-intensive research are claiming to do.

Let's consider the biggest data-driven project to date, the Large Hadron Collider's search for the Higgs boson. This project involved building a huge experiment that generated huge data volumes, which were then trawled for the signature of Higgs interactions. The challenge was so great that the consortium had to develop new computer architectures, data storage, and triage techniques just to keep up with the avalanche of data. None of this was, however, a "hypothesis-free" search through the data for correlations. On the contrary, the theory underlying the search for the Higgs made quite definite predictions as to what its signature should look like. Nonetheless, there would have been no way of confirming or refuting those predictions without collecting the data volumes necessary to make the signal stand out from the noise. That's data-intensive research: using new data-driven techniques to confirm or refute hypotheses about the world. It gives us another suite of techniques to deploy, changing both the way we do science and the science that we do. It doesn't replace the other ways of doing science, any more than the introduction of any other technology necessarily invalidates what came before. Microscopes did not remove the need for, or value of, searching for or classifying new species: they just provided a new, complementary approach to both.

That's not to say that all the big data propositions are equally appropriate, and I'm certainly with Harford in the view that approaches like Google Flu are deeply and fundamentally flawed, over-hyped attempts to grab the limelight. Where he and I diverge is that Harford worries that all data-driven research falls into this category, and that's clearly not true. He may be right that a lot of big data research is a corporate plot to re-direct science, but he's wrong to worry that all projects working with big data are similarly driven.

I've argued before that "data scientist" is a nonsense term, and I still think so. Data-driven research is just research, and needs the same skills of understanding and critical thinking. The fact that some companies and others with agendas are hijacking the term worries me a little, but in reality it's no more significant than the New Age movement's hijacking of terms like "energy" and "quantum" -- and one doesn't stop doing physics because of that.
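Harford's point about correlations appearing without causation is easy to make concrete. The sketch below is my own illustration (not from Harford's piece, and the sizes are arbitrary, chosen only to make the effect visible): it generates purely random, independent variables using Python and numpy, then hunts for the most strongly correlated pair, exactly as a hypothesis-free search would.

```python
# Illustration only: spurious correlations from searching a large random dataset.
import numpy as np

rng = np.random.default_rng(42)
samples, variables = 100, 1000                 # hypothetical sizes for the demonstration
data = rng.normal(size=(samples, variables))   # independent noise: no real relationships exist

# Compute all pairwise correlations and pick the strongest one,
# as a hypothesis-free search through the data would do.
corr = np.corrcoef(data, rowvar=False)         # variables x variables correlation matrix
np.fill_diagonal(corr, 0.0)                    # ignore trivial self-correlations
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)

print(f"'Best' correlation: variables {i} and {j}, r = {corr[i, j]:.2f}")
# Typically reports |r| of roughly 0.5: convincing-looking, but pure chance,
# because about 500,000 pairs were tested with no hypothesis constraining the search.
```

The cure isn't to abandon the data; it's exactly what the rest of this post argues for: a hypothesis, and an understanding of the underlying science and statistics, that constrain what counts as a real finding.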
In fact, I think Harford's critique is a valuable and significant contribution to the debate precisely because it highlights the need for understanding beyond the data: it's essentially a call for scientists to use data-driven techniques only in the service of science, not as a replacement for it. An argument, in other words, for a broadly-based education in data-driven techniques for all scientists, and indeed all researchers, since the techniques are equally (if not more) applicable to the social sciences and humanities. The new techniques open up new areas, and we have to understand their strengths and limitations and use them to bring our subjects forwards -- not simply step away because we're afraid of their potential for misuse.

UPDATE 7 April 2014: An opinion piece in the New York Times agrees: "big data can work well as an adjunct to scientific inquiry but rarely succeeds as a wholesale replacement." The number of statistical land mines is enormous, but the right approach is to be aware of them and to make the general research community aware too, so that we can use the data properly and to best effect.