Keynotes

Jeremy Pitt: Governance, Justice and Paradox in Self-Organising Rule-Oriented Systems – Thursday, July 9, 2015 from 09:00 to 10:00

Abstract

Many open computing systems — for example grid or cloud computing, sensor or vehicular networks, and virtual organisations — face a similar problem: how to collectivise resources, and distribute them fairly, in the absence of a centralised component. One approach is to define a set of conventional, mutually-agreed and mutable rules — i.e., a self-organising rule-oriented system in which rules are explicit, ‘first class’ entities, typically characterised by an institution. In this talk, we present a formal model of Ostrom’s institutional design principles, and a formal model of Rescher’s theory of distributive justice, as a basis for inclusive (self-)governance for fair and sustainable allocation of common-pool resources, using a type of self-organising rule-oriented system referred to as an electronic institution. However, while this provides a computational framework for a programme of research called computational justice (capturing certain notions of ‘correctness’ in the outcomes of algorithmic decision-making for ‘pro-social’ self-governance), there is an argument that any system which admits unrestricted self-modification of its rules will tend to paradox, incompleteness or inconsistency. We conclude the talk with a discussion of the need for some form of ‘attention’ or ‘awareness’ to avoid unexpected or undesirable consequences of unrestricted self-organisation of rules.

Biography

Jeremy Pitt is Reader in Intelligent Systems in the Department of Electrical & Electronic Engineering at Imperial College London, where he is also Deputy Head of the Intelligent Systems & Networks Group.

His research interests focus on developing formal models of social processes using computational logic, and their application to self-organising and multi-agent systems, for example in agent societies, agent communication languages, and electronic institutions. He also has a strong interest in the social impact of technology, and has edited two recent books, This Pervasive Day (IC Press, 2012) and The Computer After Me (IC Press, 2014). He has been an investigator on more than 30 national and European research projects and has published more than 150 articles in journals and conferences. He is a Senior Member of the ACM, a Fellow of the BCS, and a Fellow of the IET; he is also an Associate Editor of ACM Transactions on Autonomous and Adaptive Systems and an Associate Editor of IEEE Technology and Society Magazine.

Date

Thursday, July 9, 2015 from 09:00 to 10:00

Tarek Abdelzaher: The Social Frontier for Autonomic Systems – Wednesday, July 8, 2015 from 09:00 to 10:00

Abstract

Autonomic systems are distinguished by properties that allow them to respond and adapt successfully to environmental changes. A key property in that space is the ability to monitor and understand their environment, as monitoring is a prerequisite to adaptation. For example, significant investments have been made in data centers to enhance monitoring capabilities. Future autonomic systems, however, will grow beyond data centers and control applications into the societal application arena. Autonomic technology will support smarter social systems such as disaster response, national security, vehicular transportation, and residential energy optimization, where the main components of the system being optimized are human, not digital. Indeed, humans are the disaster survivors, the first responders, the eyewitnesses, the drivers, and the consumers in these systems. They observe key elements of system state. What would the monitoring subsystem look like in autonomic systems that serve such societal applications? Unlike machines that report their performance, humans have privacy concerns, are not easily retrofitted with sensors, and are generally not reliable as information sources. This talk explores the utility of information distillation from social networks as de facto “APIs” into human systems that voluntarily share information of use to autonomic applications in the societal space. Experiences with early prototypes of monitoring tools built on Twitter are reported in contexts that range from civil unrest to traffic monitoring. Questions such as ascertaining the reliability of incoming data are addressed. Future challenges and research questions are discussed that suggest a new autonomic computing research agenda for the social frontier.

Biography

Tarek Abdelzaher received his B.Sc. and M.Sc. degrees in Electrical and Computer Engineering from Ain Shams University, Cairo, Egypt, in 1990 and 1994, respectively. He received his Ph.D. from the University of Michigan in 1999 on Quality of Service Adaptation in Real-Time Systems. He was an Assistant Professor at the University of Virginia, where he founded the Software Predictability Group. He is currently a Professor and Willett Faculty Scholar in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He has authored/coauthored more than 170 refereed publications in real-time computing, distributed systems, sensor networks, and control. He is Editor-in-Chief of the Journal of Real-Time Systems, and has served as Associate Editor of the IEEE Transactions on Mobile Computing, IEEE Transactions on Parallel and Distributed Systems, IEEE Embedded Systems Letters, ACM Transactions on Sensor Networks, and the Ad Hoc Networks Journal. He has chaired (as Program or General Chair) several conferences in his area, including RTAS, RTSS, IPSN, SenSys, DCOSS, ICDCS, and ICAC. Abdelzaher’s research interests lie broadly in understanding and influencing the performance and temporal properties of networked embedded, social and software systems in the face of increasing complexity, distribution, and degree of interaction with an external physical environment. Tarek Abdelzaher is a recipient of the IEEE Outstanding Technical Achievement and Leadership Award in Real-Time Systems (2012), the Xerox Award for Faculty Research (2011), as well as several best paper awards. He is a member of the IEEE and the ACM.

Date

Wednesday, July 8, 2015 from 09:00 to 10:00

Alexander Keller: A Service Provider Perspective of the Past 12 Years of Autonomic Computing – Friday, July 10, 2015 from 09:00 to 10:00

Abstract

Research and IT service providers are continuously increasing the degree of Automation in Hybrid IT environments spanning multi-provider Clouds as well as traditional IT. Autonomic Computing and Cognitive Systems are widely regarded as the foundation for improving both the productivity and the quality of Service Delivery. At the same time, Hybrid IT is subject to laws and regulations that impose constraints on who can use the data and where this data is allowed to reside, thus creating challenges for deployments on a global scale.
Based on our experience of running a service practice that delivers IT Service Management in Hybrid IT environments to customers worldwide, this keynote will review the state of the art in implementing Autonomic Computing technologies and point out gaps encountered in current tools and implementations. By means of real-life examples, key considerations and critical success factors for making AC work in Hybrid IT environments are identified and suggestions are made on how research can help improve the effectiveness of such solutions.

Biography

Alexander Keller is Director, Integrated Service Management with IBM Global Technology Services in Chicago, IL, USA. In this role, he is responsible for setting the technical strategy on a global level and works with many customers on complex Cloud and IT Service Management implementation projects. In addition to being a business executive, Alexander is one of 500 IBM Distinguished Engineers. Distinguished Engineers provide executive technical leadership to their business units and across the company by consulting with management on technical and business strategies and their implementation. Alexander’s core areas of expertise are large-scale Discovery systems, Service Desks, Change & Configuration Management implementations, and Cloud Computing.
Prior to joining IBM’s Global Services organization in January 2007, he managed the Service Delivery Technologies department at the IBM T.J. Watson Research Center in Yorktown Heights, NY. He received his M.Sc. and Ph.D. degrees in Computer Science from Technische Universität München, Germany, in 1994 and 1998, respectively, and has published more than 60 refereed papers in the area of distributed systems and IT service management.

Date

Friday, July 10, 2015 from 09:00 to 10:00

Lee Brownston: From Pixels to Planets – Tuesday, July 7, 2015 from 09:00 to 10:00

Abstract

The Kepler Mission was launched in 2009 as NASA’s first mission capable of finding Earth-size planets in the habitable zone of Sun-like stars. Its telescope has a 1.5-m primary mirror and a 0.95-m aperture. The 42 charge-coupled devices in its focal plane are read out every half hour, compressed, and then downlinked monthly. After four years, the second of four reaction wheels failed, ending the original mission. Back on Earth, the Science Operations Center developed the Science Pipeline to analyze about 200,000 target stars in Kepler’s field of view, looking for evidence of periodic dimming suggesting that one or more planets had crossed the face of its host star. The Pipeline comprises several steps, from pixel-level calibration, through noise and artifact removal, to detection of transit-like signals and the construction of a suite of diagnostic tests to guard against false positives.

The Kepler Science Pipeline consists of a pipeline infrastructure written in the Java programming language, which marshals data input to and output from MATLAB applications that are executed as external processes. The pipeline modules, which underwent continuous development and refinement even after data started arriving, employ several analytic techniques, many developed for the Kepler Project. Because of the large number of targets, the large amount of data per target and the complexity of the pipeline algorithms, the processing demands are daunting. Some pipeline modules require days to weeks to process all of their targets, even when run on NASA’s 211,360-core Pleiades supercomputer. The software developers are still seeking ways to increase the throughput.
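For readers unfamiliar with this Java-to-MATLAB arrangement, the sketch below illustrates the general pattern in plain Java: a pipeline worker launches a MATLAB module as an external process, waits for it to finish, and checks the exit status before handing results to the next stage. It is only an illustrative sketch; the working directory, the run_transit_search entry point, and the task layout are assumptions, not the actual Kepler SOC code.

import java.io.File;
import java.io.IOException;

// Minimal sketch of the marshalling pattern described above: Java pipeline
// infrastructure runs a MATLAB application as an external process.
// All names (directories, the run_transit_search entry point) are hypothetical.
public class ExternalMatlabTask {

    public static void main(String[] args) throws IOException, InterruptedException {
        File workDir = new File("/tmp/kepler-task-0001"); // hypothetical per-task directory
        // The Java side would serialize the module's inputs (e.g. as files)
        // into workDir before launching the external process.

        ProcessBuilder pb = new ProcessBuilder(
                "matlab", "-nodisplay", "-nosplash",
                "-r", "run_transit_search('" + workDir.getAbsolutePath() + "'); exit");
        pb.directory(workDir);   // run MATLAB in the task directory
        pb.inheritIO();          // forward MATLAB's console output to the pipeline log

        int exitCode = pb.start().waitFor();   // block until the external process finishes
        if (exitCode != 0) {
            throw new IOException("MATLAB module failed with exit code " + exitCode);
        }
        // On success, the Java side would read the module's output files from workDir
        // and pass them to the next pipeline stage.
    }
}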

To date, the Kepler project has discovered more than 4000 planetary candidates, of which more than 1000 have been independently confirmed or validated to be exoplanets.

Funding for this mission is provided by NASA’s Science Mission Directorate.

Biography

Lee Brownston is a Senior Computer Scientist at SGT, Inc., and has worked on-site at NASA Ames Research Center since 2001. He has a B.A. in Psychology from the University of Michigan, a Ph.D. in Psychology from the University of Minnesota, and a B.S. and M.S. in Computer Science from Florida International University, his thesis dealing with ACM’s Core Graphics System standard.

He worked on rule-based expert systems at Carnegie Mellon University and the FMC Corporate Technology Center, and is a co-author of “Programming Expert Systems in OPS5: An Introduction to Rule-Based Programming.” At Stanford University’s Knowledge Systems Laboratory he helped develop an engine and applications for blackboard systems. After a stint of web development for a sequence of start-ups during the dot-com boom, he began contracting at NASA Ames Research Center, where his initial assignment was helping to develop engines and applications for model-based diagnosis of engineered systems. He later worked on graphical user interfaces, embedded programming of nanosatellite microcontrollers, and robotics. His current assignment is with the Kepler Science Operations Center.

He is a member of the ACM and the IEEE Computer Society.

Date

Tuesday, July 7, 2015 from 09:00 to 10:00

C. Müller-Schloer: The Holonic Agent – A Building Block for Systems of Systems – Tuesday, July 7, 2015 from 14:00 to 15:00

Abstract

Systems of Systems are – according to Sage et al. – characterized by (1) Operational independence of the individual system, i.e. subsystems have their own goals, (2) Managerial independence, (3) Geographic distribution, (4) Emergent behavior, and (5) Evolutionary development. In the context of this presentation we are especially interested in the goal-orientation of such systems.
Technical systems consisting of subsystems (here modeled as agents) are becoming more autonomous and cooperative. Autonomy means they have their own goals. Cooperation means that agents demand services or behaviors from other agents (downward orientation) and can be subject to such demands from the outside (upward orientation). This form of goal-oriented cooperation goes beyond a rigid command (i.e. call/return) relationship because demands can go wrong: the default case is the non-fulfillment or the partial fulfillment of a demand. Agents need the ability to handle such failures and to follow their own goals while also trying to comply with external goals. Such agents are semi-autonomous, two-faced (up and down), and support nesting. We call them Holonic Agents.
In order to construct such agents we need a generic mechanism to organize the goal-based cooperation between them. To this end, we must develop frames of reference, architectures, methods for quantitative analysis, and design processes.
This talk presents first steps in this direction. It reviews the motivation for goal-oriented systems of systems, presents related work, and introduces “action guidance” as a generalization of goals and norms. Arthur Koestler’s Holon model is used as a starting point for the development of an architectural model of Holonic Agents. It will be shown that this agent type can be viewed as an extension of the classical observer/controller agent from Organic Computing or the MAPE-based agent from Autonomic Computing.
Finally, we will discuss the resulting online design pattern: an agent community that cooperates via goals is a highly dynamic system, which must continually try to reconcile possibly contradictory goals and observed facts from the real world. In this sense it implements an iterative design pattern known as yoyo design in the classical offline design process. However, this yoyo process is now moved from design time to runtime.

Biography

Date

Tuesday, July 7, 2015 from 14:00 to 15:00
