QEST 2006 Tutorials      Monday, September 11, 2006

Morning Tutorials       9:00am - 12:30pm       (coffee break 10:30am - 11:00am)
TRACK A: Quantitative evaluation of biological systems. J. Hillston (University of Edinburgh)
Abstract: Systems biology is an emerging multidisciplinary field which seeks to understand biological systems at a process level. Significant advances in biology in recent years mean that much is now known about the components which make up many biological systems. However, this essentially reductionist work does little to explain the processes which use these components and drive the biological functions which they perform. Systems biology seeks to form such explanations through combined use of modelling and laboratory experimentation.
Whilst static models which explain the inter-relationships between components are useful, quantitative kinetic models, which allow the dynamic properties of the system to be represented, are crucial. This tutorial will outline the challenges of systems biology and explain how some of the modelling and analysis techniques which have been developed for the quantitative analysis of computer systems over the last two decades are now proving useful in this new domain. Several examples of intra-cellular signalling pathways will be used to illustrate the ideas.
Only minimal biological knowledge will be assumed.
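For readers unfamiliar with quantitative kinetic models, the following is a minimal sketch of Gillespie-style stochastic simulation for a toy enzymatic step S + E -> SE -> E + P. This is our own illustration, not material from the tutorial (which builds on stochastic process algebras such as PEPA), and all species counts and rate constants are hypothetical.

    import random

    # Minimal Gillespie-style stochastic simulation of a toy pathway step
    # S + E -> SE -> E + P. Illustrative only; rates and counts are hypothetical.

    def gillespie(state, reactions, t_end):
        """Simulate until t_end. state: species counts; reactions: list of
        (propensity_function, stoichiometry_change) pairs."""
        t = 0.0
        while t < t_end:
            props = [rate(state) for rate, _ in reactions]
            total = sum(props)
            if total == 0.0:
                break                               # no reaction can fire
            t += random.expovariate(total)          # exponential time to next event
            r = random.uniform(0.0, total)          # choose a reaction by propensity
            for p, (_, delta) in zip(props, reactions):
                if r < p:
                    for species, change in delta.items():
                        state[species] += change
                    break
                r -= p
        return t, state

    reactions = [
        (lambda s: 0.01 * s["S"] * s["E"], {"S": -1, "E": -1, "SE": +1}),  # binding
        (lambda s: 0.1 * s["SE"],          {"SE": -1, "E": +1, "P": +1}),  # catalysis
    ]
    print(gillespie({"S": 100, "E": 10, "SE": 0, "P": 0}, reactions, t_end=50.0))

Repeated runs of such a simulation yield distributions over species counts, which is the kind of dynamic, quantitative behaviour that static component diagrams cannot capture.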

Speaker Biography: Jane Hillston received the BA and MS degrees in Mathematics from the University of York (UK) and Lehigh University (USA), respectively. After a brief period working in industry, she joined the Department of Computer Science at the University of Edinburgh as a research assistant in 1989. She received the PhD degree in computer science from that university in 1994. Her thesis was selected for publication in the Distinguished Dissertations in Computer Science series in 1995. In 1995 she became a lecturer, and in 2001 a reader, in computer science at the University of Edinburgh. She is a member of the Laboratory for Foundations of Computer Science. Her principal research interests are in the use of process algebras to model computer systems and the investigation of issues of compositionality with respect to Markov processes. Her work on the stochastic process algebra PEPA was recognized by the British Computer Society, which awarded her the Roger Needham Award in 2004.

More recently she has been investigating the use of stochastic process algebra approaches to model cellular signal transduction pathways and other problems in systems biology. In 2005 she was awarded a five-year Advanced Research Fellowship by the UK Engineering and Physical Sciences Research Council, and in 2006 she was promoted to a Chair in Quantitative Modelling at the University of Edinburgh.

TRACK B: Statistical techniques for performance measurement and evaluation. D.J. Lilja (University of Minnesota)
Abstract: Computer architects and system designers have made tremendous advances in the performance of computer systems over the past several decades. Unfortunately, the performance of a computer system is impacted by many different components in extremely complex and nonlinear ways. For example, it is well understood that simply increasing the clock rate will not necessarily produce a proportionate increase in the overall performance. These complex interactions introduce uncertainty into the measurements of a system's performance, which makes it difficult to determine the impact any changes made to the system actually have on the overall performance. This measurement noise also makes it difficult to compare the performance of different systems. This tutorial provides a gentle introduction to some of the key statistical tools and techniques needed to interpret noisy performance measurements and to sort through large collections of simulation results. It also presents techniques that can be used to appropriately design experiments to obtain the maximum amount of information for a given level of experimental effort.
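As a flavour of the material, here is a minimal sketch (our own illustration, not taken from the tutorial) of one such technique: reporting a mean benchmark runtime with a t-based confidence interval rather than as a single number. The measurements below are hypothetical.

    import statistics

    # Hypothetical repeated measurements of one benchmark's runtime, in seconds.
    runtimes = [12.4, 11.9, 13.1, 12.7, 12.2, 12.9, 12.5, 13.0, 12.3, 12.6]

    n = len(runtimes)
    mean = statistics.mean(runtimes)
    sem = statistics.stdev(runtimes) / n ** 0.5  # standard error of the mean
    t_crit = 2.262                               # t-table value: 95%, n-1 = 9 d.o.f.

    low, high = mean - t_crit * sem, mean + t_crit * sem
    print(f"mean {mean:.2f}s, 95% confidence interval [{low:.2f}, {high:.2f}]")

If the 95% intervals of two systems do not overlap, the observed difference is unlikely to be an artifact of measurement noise; if they do overlap, more measurements or a formal statistical test are needed before drawing a conclusion.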

Speaker Biography: David J. Lilja received the Ph.D. and M.S. degrees, both in Electrical Engineering, from the University of Illinois at Urbana-Champaign, and a B.S. in Computer Engineering from Iowa State University in Ames. He is currently a Professor of Electrical and Computer Engineering, and a Fellow of the Minnesota Supercomputing Institute, at the University of Minnesota in Minneapolis. He also serves as a member of the graduate faculties in Computer Science and Scientific Computation. He has been a visiting senior engineer in the Hardware Performance Analysis group at IBM in Rochester, Minnesota, and a visiting professor at the University of Western Australia in Perth supported by a Fulbright award. Previously, he worked as a research assistant at the Center for Supercomputing Research and Development at the University of Illinois, and as a development engineer at Tandem Computers Incorporated (now a division of Hewlett-Packard) in Cupertino, California. He has served on and chaired the program committees of numerous conferences, was a distinguished visitor of the IEEE Computer Society, and was elected a Fellow of the IEEE "for contributions to statistical methodologies for performance assessment of computing systems". His primary research interests are in high-performance computer architecture, parallel computing, hardware-software interactions, and performance analysis.

LUNCH     12:30pm - 2:00pm
Afternoon Tutorials       2:00pm - 5:30pm       (coffee break 3:30pm - 4:00pm)
TRACK A: Analysis of simulation traces by run-time verification. O. Sokolsky, U. Sammapun, I. Lee (University of Pennsylvania)
Abstract: Run-time verification is a dynamic analysis technique that has emerged in recent years. Run-time verification targets system models that are too complex to be exhaustively analyzed by model checking, or system implementations that do not have models suitable for verification. Instead of exploring the whole state space of the system, run-time verification techniques check system executions with respect to a set of formally specified requirements. Run-time verification resorts to instrumentation or other kinds of execution monitoring to extract a trace of observations and then applies formal verification to this trace. The presentation will be based on the MaC (Monitoring and Checking) tool, developed at the University of Pennsylvania.
In this tutorial, we will concentrate on the application of run-time verification techniques to simulation runs. We will discuss specification of properties for the runs, extraction of relevant observations, abstraction of observations, and efficient run-time checking of a trace. We will illustrate the presentation with a case study that considered the analysis of an ad-hoc routing protocol executed in the ns2 simulator.
We will also consider several extensions to the basic run-time verification techniques to cover more expressive property specifications, gathering of statistics, and tuning of simulation parameters at run time.
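To make the idea concrete, here is a minimal sketch of checking one formally specified property over a recorded trace. This is a hypothetical illustration, not the MaC tool's actual specification language or API: the property is that every `send` observation must be followed by an `ack` within a fixed window of observations.

    # Check the bounded-response property "every send is followed by an ack
    # within `window` observations" over a recorded trace. Events are hypothetical.

    def check_bounded_response(trace, request="send", response="ack", window=3):
        pending = []        # indices of requests still awaiting a response
        violations = []
        for i, event in enumerate(trace):
            while pending and i - pending[0] > window:
                violations.append(pending.pop(0))   # deadline passed
            if event == request:
                pending.append(i)
            elif event == response and pending:
                pending.pop(0)                      # satisfies the oldest request
        violations.extend(pending)                  # never answered before trace end
        return violations

    trace = ["send", "noop", "ack", "send", "noop", "noop", "noop", "ack"]
    print(check_bounded_response(trace))  # [3]: the second send is acked too late

Because the checker makes a single pass and keeps only the outstanding requests, it scales to the long traces that simulators such as ns2 produce.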

Speaker Biography: Oleg Sokolsky is a Research Assistant Professor at the Department of Computer and Information Science at the University of Pennsylvania. His main research interests lie in the area of formal methods and their application to software and systems engineering. For the past several years, he has been involved in a project that aims to increase the safety assurance of deployed systems by means of run-time monitoring and checking. Other related projects involve the use of hybrid systems for the modeling of biological systems, formal schedulability analysis of real-time systems based on process-algebraic modeling, and analysis of software architectures.


TRACK B: Probabilistic validation of computer system survivability. W.H. Sanders (University of Illinois at Urbana-Champaign)
Abstract: There is a growing need for systems whose survivability in a specified use and/or attack environment can be assured with confidence. Many techniques have been proposed to validate individual components (e.g., formal methods) or a system as a whole (e.g., red teaming). However, no single technique can provide the breadth of evidence needed to validate a system with respect to high-level survivability requirements. To accomplish this, we propose an integrated validation procedure (IVP) that begins with the formulation of a specific survivability requirement and determines whether a system is valid with respect to the requirement.
The IVP employs a top-down approach that methodically decomposes validation into manageable tasks and, for each task, applies the techniques best suited to its accomplishment. Stochastic methods, logical arguments (including formal methods), and experimental methods are all employed. These efforts can be largely independent, and the results, which complement and supplement each other, are seamlessly integrated to provide a convincing assurance argument. We then illustrate the IVP by applying it to an intrusion-tolerant information system being developed by the U.S. Department of Defense. In addition to validating the system against high-level survivability requirements, we demonstrate the use of model-based validation techniques, as a part of the overall validation procedure, to guide the system's design by exploring different configurations and evaluating tradeoffs.
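As a small illustration of model-based exploration of configurations and tradeoffs (our own sketch; the actual case study uses far richer stochastic activity network models in tools such as Möbius), one can compare redundancy levels with a birth-death Markov chain: n replicas, each failing at rate lam and repaired one at a time at rate mu. All rates below are hypothetical.

    # Steady-state availability of n redundant replicas modeled as a birth-death
    # CTMC whose state counts the number of failed replicas. Illustrative only.

    def availability(n, lam, mu):
        # Unnormalized stationary weights: pi_{k+1} = pi_k * birth(k) / death(k+1)
        weights = [1.0]
        for k in range(n):
            birth = (n - k) * lam     # one more replica fails
            death = mu                # single shared repair facility
            weights.append(weights[-1] * birth / death)
        total = sum(weights)
        return 1.0 - weights[n] / total   # 1 - P(all replicas failed)

    for n in (1, 2, 3):                   # explore candidate configurations
        print(n, "replicas:", availability(n, lam=0.001, mu=0.1))

Even this toy model exposes the diminishing returns of added redundancy, which is the style of design tradeoff the tutorial's model-based validation techniques quantify at much larger scale.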

Speaker Biography: William H. Sanders is a Donald Biggar Willett Professor of Engineering and the Director of the Information Trust Institute at the University of Illinois. He is a professor in the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory. He is a Fellow of the IEEE and the ACM. In addition, he serves on the editorial boards of Performance Evaluation and IEEE Security & Privacy and is the Area Editor for Simulation and Modeling of Computer Systems for the ACM Transactions on Modeling and Computer Simulation. He is a past Chair of the IEEE Technical Committee on Fault-Tolerant Computing and past Vice-Chair of IFIP Working Group 10.4 on Dependable Computing.

Dr. Sanders's research interests include performance/dependability evaluation, dependable computing, and reliable distributed systems. He has published more than 160 technical papers in those areas. He served as the General Chair of the 2003 Illinois International Multiconference on Measurement, Modelling, and Evaluation of Computer-Communication Systems. He has served as co-Chair of the program committees of the 29th International Symposium on Fault-Tolerant Computing (FTCS-29), the Sixth IFIP Working Conference on Dependable Computing for Critical Applications, Sigmetrics 2003, PNPM 2003, and Performance Tools 2003, and has served on the program committees of numerous conferences and workshops. He is a co-developer of three tools for assessing the performability of systems represented as stochastic activity networks: METASAN, UltraSAN, and Möbius. Möbius and UltraSAN have been distributed widely to industry and academia; more than 400 licenses for the tools have been issued to universities, companies, and NASA for evaluating the performance, dependability, security, and performability of a variety of systems. He is also a co-developer of the Loki distributed system fault injector and the AQuA/ITUA middlewares for providing dependability/security to distributed and networked applications.
