Call for papers: D3Science
Papers are solicited for a workshop on data-intensive, distributed and dynamic e-science problems.
Workshop on D3Science - Call for Papers
(To be held with IEEE e-Science 2011) Monday, 5 December 2011, Stockholm, Sweden
Topics of interest include:
- Case studies of the development, deployment and execution of representative D3 applications, particularly projects suitable for transcontinental collaboration
- Programming systems, abstractions, and models for D3 applications
- Discussion of the common, minimally complete, characteristics of D3 applications
- Major barriers to the development, deployment, and execution of D3 applications, and primary challenges of D3 applications at scale
- Patterns that exist within D3 applications, and commonalities in the way such patterns are used
- How programming models, abstractions and systems for data-intensive applications can be extended to support dynamic data applications
- Tools, environments and programming support that exist to enable emerging distributed infrastructure to support the requirements of dynamic applications (including but not limited to streaming data and in-transit data analysis)
- Data-intensive dynamic workflow and in-transit data manipulation
- Adaptive/pervasive computing applications and systems
- Abstractions and mechanisms for dynamic code deployment and "moving code to data"
- Application drivers for end-to-end scientific data management
- Runtime support for in-situ analysis
- System support for high end workflows
- Hybrid computing solutions for in-situ analysis
- Technologies to enable multi-platform workflows
Submission instructions
Authors are invited to submit papers containing unpublished, original work (not under review elsewhere) of up to 8 pages of double-column text, single-spaced in 10-point type on 8.5 x 11 inch pages, as per the IEEE 8.5 x 11 manuscript guidelines. Templates are available: http://www.ieee.org/web/publications/pubservices/confpub/AuthorTools/conferenceTemplates.html Authors should submit a PDF or PostScript (level 2) file that will print on a PostScript printer. Papers conforming to the above guidelines can be submitted through the workshop's paper submission system: http://www.easychair.org/conferences/?conf=d3science At least one author of each accepted paper must register for and attend the conference.
Important dates
- 17 July 2011 - submission date
- 23 August 2011 - decisions announced
- 23 September 2011 - final versions of papers due to IEEE for proceedings
Organizers
- Daniel S. Katz, University of Chicago & Argonne National Laboratory, USA
- Neil Chue Hong, University of Edinburgh, UK
- Shantenu Jha, Rutgers University & Louisiana State University, USA
- Omer Rana, Cardiff University, UK
PC members
- Gagan Agrawal, Ohio State University, USA
- Deb Agarwal, Lawrence Berkeley National Lab, USA
- Gabrielle Allen, Louisiana State University, USA
- Malcolm Atkinson, University of Edinburgh, UK
- Adam Barker, University of St Andrews, UK
- Paolo Besana, University of Edinburgh, UK
- Jon Blower, University of Reading, UK
- Yun-He Chen-Burger, University of Edinburgh, UK
- Simon Dobson, University of St Andrews, UK
- Gilles Fedak, INRIA, France
- Cécile Germain, University of Paris-Sud, France
- Keith R. Jackson, Lawrence Berkeley National Lab, USA
- Manish Parashar, Rutgers, USA
- Abani Patra, University at Buffalo, USA
- Yacine Rezgui, Cardiff University, UK
- Yogesh Simmhan, University of Southern California, USA
- Domenico Talia, University of Calabria, Italy
- Paul Watson, Newcastle University, UK
- Jon Weissman, University of Minnesota, USA
Pervasive healthcare
This week's piece of shameless self-promotion: a book chapter on how pervasive computing and social media can change long-term healthcare.
My colleague Aaron Quigley and I were asked to contribute a chapter to a book put together by Jeremy Pitt as part of PerAda, the EU network of excellence in pervasive systems. We were asked to think about how pervasive computing and social media could change healthcare. This is something quite close to both our hearts -- Aaron perhaps more so than me -- as it's one of the most dramatic examples of how pervasive computing can really make an impact on society. There are plenty of examples of projects that attempt to provide high-tech solutions to the issues of independent living -- some of which we've been closely involved with. For this work, though, we suggest that one of the most cost-effective contributions that technology can make might actually be centred around social media.
Isolation really is a killer, in a literal sense. A lot of research has indicated that social isolation is a massive risk factor in both physiological and psychological illnesses, and this is something that's likely to get worse as populations age. Social media can help address this, especially in an age when older people have circles of older friends, and when these friends and family can be far more geographically dispersed than in former times.
This isn't to suggest that Twitter and Facebook are the cures for any social ills, but rather that the services they might evolve into could be of enormous utility for older people. Not only do they provide traffic between people, they can be mined to determine whether users' activities are changing over time, identify situations that can be supported, and so provide unintrusive medical feedback -- as well as opening up massive issues of privacy and surveillance. While today's older generation are perhaps not fully engaged with social media, future generations undoubtedly will be, and it's something to be encouraged.
Other authors -- some of them leading names in the various aspects of pervasive systems -- have contributed chapters on implicit interaction, privacy, trust, brain interfaces, power management, sustainability, and a range of other topics, all in accessible form. The book has a web site (of course), and is available for pre-order on Amazon. Thanks to Jeremy for putting this together: it's been a great opportunity to think more broadly than we often get to do as research scientists, and to see how our work might help make the world more liveable-in.
Mainstreaming Smalltalk
The "no applications" idea first surfaced for me at PARC, when we realised that you really wanted to freely construct arbitrary combinations (and could do just that with (mostly media) objects). So, instead of going to a place that has specialised tools for doing just a few things, the idea was to be in an "open studio" and pull the resources you wanted to combine to you. This doesn't mean to say that e.g. a text object shouldn't be pretty capable -- so it's a mini app if you will -- but that it and all the other objects that intermingle with each other should have very similar UIs and have their graphics aspect be essentially totally similar as far as the graphics system is concerned -- and this goes also for user constructed objects. The media presentations I do in Squeak for my talks are good examples of the directions this should be taken for the future.(Anyone who has seen one of Kay's talks -- as I did at the ECOOP conference in 2000 -- can attest to how stunningly engaging they are.) To which I would add that it's equally important today that their data work together seamlessly too, and with the other tools that we'll develop along the way. The use of the browser as a desktop isn't new, of course: it's central to Google Chromium and to cloud-friendly Linux variants like Jolicloud. But it hasn't really been used so much as a development environment, or as the host for a language that lives inside the web's main document data structure. I'm not hung-up on it being Smalltalk -- a direct-manipulation front-end to jQuery UI might be even better -- but some form of highly interactive programming-in-the-web might be interesting to try.
Why we have code
You think you know when you learn, are more sure when you can write, even more when you can teach, but certain only when you can program. -- Alan Perlis
There are caveats here, of course, the most important of which is that the code be well-written and properly abstracted: it needs to separate out the details so that there's a clear process description that calls into -- but is separate from -- the details of exactly what each stage of the process does. Code that doesn't do this, for whatever reason, obfuscates rather than explains. A good programming education will aim to impart this skill of separation of concerns, and moreover will do so in a way that's independent of the language being used.
Once you adopt this perspective, certain things that are otherwise slightly confusing become clear. Why do programmers always find documentation so awful? Because the code is a clearer explanation of what's going on, because it's a fundamentally better description of process than natural language. This comes through clearly when marking student assessments and exams. When faced with a question of the form "explain this algorithm", some students try to explain it in words without reference to code, because they think explanation requires text. As indeed it does, but a better approach is to sketch the algorithm as code or pseudo-code and then explain with reference to that code -- because the code is the clearest description it's possible to have, and any explanation is just clearing up the details.
Some of the other consequences of the discipline of programming are slightly more surprising. Every few years some computer science academic will look at the messy, unstructured, ill-defined rules that govern the processes of a university -- especially those around module choices and student assessment -- and decide that they would be immensely improved by being written in Haskell/Prolog/Perl/whatever. Often they'll actually go to the trouble of writing some or all of the rules in their code of choice, discover inconsistencies and ambiguities, and proclaim that the rules need to be re-written (a sketch of this exercise follows below). It never works out, not least because the typical university administrator has not the slightest clue what's being proposed or why, but also because the process always highlights grey areas and boundary cases that can't be codified. This could be seen as a failure, but it can also be regarded as a success: coding successfully distinguishes between those parts of an organisation that are structured and those parts that require human judgement, and by doing so makes clear the limits of individual intervention and authority in the processes.
The important point is that, by thinking about a non-programming problem within a programming idiom, you clarify and simplify the problem and deepen your understanding of it. So programming has an impact not only on computers, but on everything to which one can bring a description of process; or, put another way, once you can describe processes easily and precisely you're free to spend more time on the motivations and cultural factors that surround those processes without them dominating your thinking. Programmers think differently to other people, and often in a good way that should be encouraged and explored.
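To make the university-rules example concrete, here is a hypothetical sketch of what that codification exercise looks like in practice. The regulation, the thresholds and every name in it (ModuleResult, progressionDecision, the 40% pass mark, the 20-credit condonement limit) are invented purely for illustration; the interesting part is that the act of writing the rule down forces the grey areas into the open.

```typescript
// A hypothetical sketch of the "codify the regulations" exercise described
// above. The rule, the thresholds and every name are invented for
// illustration; the point is that writing the rule down as code forces the
// grey areas and boundary cases into the open.

interface ModuleResult {
  code: string;
  credits: number;
  mark: number | null;   // null = result not yet returned: already a boundary case
}

type Decision = "progress" | "resit failed modules" | "refer to exam board";

// Type guard so we can work with modules whose marks have actually arrived.
function hasMark(r: ModuleResult): r is ModuleResult & { mark: number } {
  return r.mark !== null;
}

function progressionDecision(results: ModuleResult[]): Decision {
  const graded = results.filter(hasMark);

  // Grey area 1: the written regulations never say what happens while some
  // marks are outstanding. The code can't dodge the question.
  if (graded.length !== results.length || graded.length === 0) {
    return "refer to exam board";
  }

  const totalCredits = graded.reduce((sum, r) => sum + r.credits, 0);
  const weightedMean =
    graded.reduce((sum, r) => sum + r.mark * r.credits, 0) / totalCredits;

  const failedCredits = graded
    .filter(r => r.mark < 40)
    .reduce((sum, r) => sum + r.credits, 0);

  // Grey area 2: "a small number of failed credits may be condoned" -- but
  // how many is small, and does a weighted mean of exactly 40.0 count?
  if (failedCredits === 0 && weightedMean >= 40) return "progress";
  if (failedCredits <= 20 && weightedMean >= 40) return "resit failed modules";
  return "refer to exam board";
}

// Example: a single missing mark is enough to push the case to human judgement.
console.log(progressionDecision([
  { code: "CS1002", credits: 20, mark: 55 },
  { code: "CS1003", credits: 20, mark: null },
]));  // -> "refer to exam board"
```

Every "refer to exam board" branch marks a place where the written regulations run out and human judgement takes over -- which is exactly the distinction the exercise ends up drawing.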