About Quality of Parallel & Distributed Programs & Systems
The field of parallel computing dates back to the mid-1950s, when research labs began developing so-called supercomputers with the aim of significantly increasing performance, mainly the number of (floating-point) operations a machine can perform per unit of time. Since then, significant advances in hardware and software technology have brought the field to a point where the long-standing challenge of teraflop computing was met in 1998.

While increases in performance remain a driving factor in parallel and distributed processing, there are many other challenges to be addressed in the field. Enabled by the growth of the Internet, the majority of desktop computers can nowadays be seen as part of one huge distributed system, the World Wide Web. Advances in wireless networks extend this scope to a variety of mobile devices, including notebooks, PDAs, and mobile phones. Information is therefore distributed by nature; users require immediate access to information sources, to computing power, and to communication facilities. While performance in the sense defined above remains an important criterion in such systems, other issues, including correctness, reliability, security, ease of use, ubiquitous access, and intelligent services, must be considered during the development process itself. This extended notion of performance, covering all of these aspects, is called the "quality of parallel and distributed programs and systems". Examining and guaranteeing this quality requires special models, metrics, and tools. The six papers selected for this volume tackle various aspects of these problems.