The present volume contains the proceedings of the 5th International Workshop on Formal Aspects in Security and Trust (FAST 2008), held in Malaga, Spain, October 9-10, 2008. FAST is an event affiliated with the 13th European Symposium on Research in Computer Security (ESORICS 2008). FAST 2008 was held under the auspices of the IFIP WG 1.7 on Foundations of Security Analysis and Design. The workshop aimed at continuing the successful effort of the previous FAST editions in fostering cooperation among researchers in the areas of security and trust. As computing and network infrastructures become increasingly pervasive, and as they carry increasing economic activity, society needs well-matched security and trust mechanisms. These interactions increasingly span several enterprises and involve loosely structured communities of individuals. Participants in these activities must control interactions with their partners based on trust policies and business logic. Trust-based decisions effectively determine the security goals for shared information and for access to sensitive or valuable resources. FAST sought original papers focusing on formal aspects in: security and trust policy models; security protocol design and analysis; formal models of trust and reputation; logics for security and trust; distributed trust management systems; trust-based reasoning; digital assets protection; data protection; privacy and ID issues; information flow analysis; language-based security; security and trust aspects in ubiquitous computing; validation/analysis tools; Web service security/trust/privacy; GRID security; security risk assessment; case studies.
This book contains the papers presented at the ninth Very Large Scale Integrated Systems conference, VLSI'97, organized biannually by IFIP Working Group 10.5. It took place at Hotel Serra Azul, in Gramado, Brazil, from 26-30 August 1997. Previous conferences have taken place in Edinburgh, Trondheim, Vancouver, Munich, Grenoble and Tokyo. The papers in this book report on all aspects of importance to the design of current and future integrated systems. The current trend towards the realization of versatile Systems-on-a-Chip requires attention to embedded hardware/software systems, dedicated ASIC hardware, sensors and actuators, mixed analog/digital design, video and image processing, low-power battery operation and wireless communication. The papers in this book are organized in two tracks, one dealing with VLSI System Design and Applications and the other presenting VLSI Design Methods and CAD. The following topics are addressed. VLSI System Design and Applications Track: VLSI for video and image processing; microsystem and mixed-mode design; communication and memory system design; low-voltage and low-power analog circuits; high-speed circuit techniques; application-specific DSP architectures. VLSI Design Methods and CAD Track: specification and simulation at system level; synthesis and technology mapping; CAD techniques for low-power design; physical design issues in sub-micron technologies; architectural design and synthesis; testing in complex mixed analog and digital systems.
TRACK 1: Innovative Applications in the Public Sector. The integration of multimedia-based applications and the information superhighway fundamentally concerns the creation of a communication technology to support the activities of people. Communication is a profoundly social activity involving interactions among groups or individuals, common standards of exchange, and national infrastructures to support telecommunications activities. The contributions of the invited speakers and others in this track begin to explore the social dimension of communication within the context of integrated information systems for the public sector. Interactions among businesses and households are described by Ralf Strauss through the development within a real community of a "wired city" with information and electronic services provided by the latest telecommunications technologies. A more specific type of interaction, between teacher and student, forms the basis of education. John Tiffin demonstrates how virtual classrooms can be used to augment the educational process. Carl Loeffler presents yet another perspective on interaction through the integration of A-life and agent technologies to investigate the dynamics of complex behaviors within networked simulation environments. Common standards for communication in the form of electronic documents or CSCW (Computer Supported Cooperative Work), according to Roland Traunmüller, provide enabling technologies for a paradigm shift in the management of organizations. As pointed out by William Olle, the impact of standardization work on the future of information technology depends critically upon the interoperability of software systems.
Similarity-based learning methods have a great potential as an intuitive and flexible toolbox for mining, visualization, and inspection of large data sets. They combine simple and human-understandable principles, such as distance-based classification, prototypes, or Hebbian learning, with a large variety of different, problem-adapted design choices, such as a data-optimum topology, similarity measure, or learning mode. In medicine, biology, and medical bioinformatics, more and more data arise from clinical measurements such as EEG or fMRI studies for monitoring brain activity, mass spectrometry data for the detection of proteins, peptides and composites, or microarray profiles for the analysis of gene expressions. Typically, data are high-dimensional, noisy, and very hard to inspect using classic (e.g., symbolic or linear) methods. At the same time, new technologies ranging from the possibility of a very high resolution of spectra to high-throughput screening for microarray data are rapidly developing and carry the promise of an efficient, cheap, and automatic gathering of tons of high-quality data with large information potential. Thus, there is a need for appropriate machine learning methods which help to automatically extract and interpret the relevant parts of this information and which, eventually, help to enable understanding of biological systems, reliable diagnosis of faults, and therapy of diseases such as cancer based on this information. Moreover, these application scenarios pose fundamental and qualitatively new challenges to the learning systems because of the specifics of the data and learning tasks. Since these characteristics are particularly pronounced within the medical domain, but not limited to it and of principled interest, this research topic opens the way toward important new directions of algorithmic design and accompanying theory.
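The distance-based classification with prototypes mentioned above can be illustrated with a minimal sketch: each sample receives the label of its nearest prototype under Euclidean distance, the core principle behind LVQ-style similarity-based classifiers. The prototype positions and data here are invented for illustration only.

```python
import numpy as np

def nearest_prototype_predict(X, prototypes, labels):
    """Assign each sample the label of its closest prototype
    (Euclidean distance). Illustrative sketch of distance-based
    classification; real methods also learn the prototype positions."""
    # pairwise distances, shape (n_samples, n_prototypes)
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

# two hypothetical 2-D prototypes, one per class
prototypes = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = np.array([0, 1])
X = np.array([[1.0, -1.0], [9.0, 11.0]])
print(nearest_prototype_predict(X, prototypes, labels))  # → [0 1]
```

The problem-adapted design choices the text mentions (similarity measure, topology, learning mode) correspond to swapping the distance function, the prototype layout, or the rule used to move prototypes during training.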
Database research and development has been remarkably successful over the past three decades. Now the field is facing new challenges posed by the rapid advances of technology, especially the penetration of the Web and Internet into everyone's daily life. The economic and financial environment where database systems are used has been changing dramatically. In addition to being able to efficiently manage a large volume of operational data generated internally, the ability to manage data in cyberspace, extract relevant information, and discover knowledge to support decision making is critical to the success of any organization. In order to provide researchers and practitioners with a forum to share their experiences in tackling problems in managing and using data, information, and knowledge in the age of the Internet and Web, the First International Conference on Web-Age Information Management (WAIM 2000) was held in Shanghai, China, June 21-23, 2000. The inaugural conference in its series was well received. Researchers from 17 countries and regions, including Austria, Australia, Bahrain, Canada, China, France, Germany, Japan, Korea, Malaysia, The Netherlands, Poland, Singapore, Spain, Taiwan, UK, and USA submitted their recent work. The 27 regular and 14 short papers contained in these proceedings were presented during the two-day conference. These papers cover a large spectrum of issues, from classical data management such as object-oriented modeling and spatial and temporal databases to recent topics such as data mining, data warehousing, semi-structured data, and XML.
This book constitutes the refereed proceedings of the 22nd Annual Symposium on Combinatorial Pattern Matching, CPM 2011, held in Palermo, Italy, in June 2011. The 36 revised full papers presented together with 3 invited talks were carefully reviewed and selected from 70 submissions. The papers address issues of searching and matching strings and more complicated patterns such as trees, regular expressions, graphs, point sets, and arrays. The goal is to derive non-trivial combinatorial properties of such structures and to exploit these properties in order to either achieve superior performance for the corresponding computational problems or pinpoint conditions under which searches cannot be performed efficiently. The meeting also deals with problems in computational biology, data compression and data mining, coding, information retrieval, natural language processing and pattern recognition.
This book constitutes the refereed proceedings of the Second International Symposium on Computational Life Sciences, CompLife 2006. The 25 revised full papers presented were carefully reviewed and selected from 56 initial submissions. The papers are organized in topical sections on genomics, data mining, molecular simulation, molecular informatics, systems biology, biological networks/metabolism, and computational neuroscience.
Here are the proceedings of the 4th International Workshop on Principles and Practice of Semantic Web Reasoning, PPSWR 2006. The book presents 14 revised full papers together with 1 invited talk and 6 system demonstrations, addressing major aspects of Semantic Web research, namely forms of reasoning with a strong interest in rule-based languages and methods. Coverage includes theoretical work on reasoning methods, concrete reasoning methods and query languages, and practical applications.
This volume contains the papers selected for presentation at the First International Conference on Rough Sets and Knowledge Technology (RSKT 2006), organized in Chongqing, P.R. China, July 24-26, 2006. There were 503 submissions to RSKT 2006, in addition to 1 commemorative paper, 4 keynote papers and 10 plenary papers. Apart from the 15 commemorative and invited papers, 101 papers were accepted by RSKT 2006 and are included in this volume; the acceptance rate was only 20%. These papers were divided into 43 regular oral presentation papers (each allotted 8 pages) and 58 short oral presentation papers (each allotted 6 pages) on the basis of reviewer evaluation. Each paper was reviewed by two to four referees. Since the introduction of rough sets in 1981 by Zdzisław Pawlak, many great advances in both the theory and applications have been introduced. Rough set theory is closely related to knowledge technology in a variety of forms such as knowledge discovery, approximate reasoning, intelligent and multiagent systems design, and knowledge-intensive computations that signal the emergence of a knowledge technology age. The essence of growth in cutting-edge, state-of-the-art and promising knowledge technologies is closely related to learning, pattern recognition, machine intelligence and automation of acquisition, transformation, communication, exploration and exploitation of knowledge. A principal thrust of such technologies is the utilization of methodologies that facilitate knowledge processing.
This book presents material which is more algorithmically oriented than most alternatives. It also deals with topics that are at or beyond the state of the art. Examples include practical and applicable wavelet and other multiresolution transform analysis. New areas are broached, like the ridgelet and curvelet transforms. The reader will find in this book an engineering approach to the interpretation of scientific data. Compared to the 1st edition, various additions have been made throughout, and the topics covered have been updated. The background or environment of this book's topics includes continuing interest in e-science and the virtual observatory, which are based on web-based and increasingly web-service-based science and engineering. Additional colleagues whom we would like to acknowledge in this 2nd edition include: Bedros Afeyan, Nabila Aghanim, Emmanuel Candès, David Donoho, Jalal Fadili, and Sandrine Pires. We would like to particularly acknowledge Olivier Forni, who contributed to the discussion on compression of hyperspectral data, Yassir Moudden on multiwavelength data analysis, and Vicent Martínez on the genus function. The cover image of this 2nd edition is from the Deep Impact project. It was taken approximately 8 minutes after impact on 4 July 2005 with the CLEAR6 filter and deconvolved using the Richardson-Lucy method. We thank Don Lindler, Ivo Busko, Mike A'Hearn and the Deep Impact team for the processing of this image and for providing it to us.
This book constitutes the joint refereed proceedings of the three workshops held in conjunction with the 6th International Conference on Web Information Systems Engineering, WISE 2005, in New York, NY, USA, in November 2005. A total of 47 papers were submitted to the three workshops, and 28 revised full papers were carefully selected for presentation. The workshop on Web Information Systems Quality (WISQ 2005) - discussing and disseminating research on the quality of WIS and Web services from a holistic point of view - included 7 papers out of 12 submissions. The workshop on Web-based Learning (WBL 2005) accounted for 10 papers from 14 papers submitted, organized in topical sections on tools, models, and innovative applications. The workshop on Scalable Semantic Web Knowledge Base Systems (SSWS 2005) included 11 presentations selected from 21 submissions. Topics addressed are scalable repository and reasoning services, practical Semantic Web applications, and query handling and optimization techniques.
Researchers in data management have recently recognized the importance of a new class of data-intensive applications that requires managing data streams, i.e., data composed of continuous, real-time sequences of items. Streaming applications pose new and interesting challenges for data management systems. Such application domains require queries to be evaluated continuously, as opposed to the one-time evaluation of a query for traditional applications. Streaming data sets grow continuously, and queries must be evaluated on such unbounded data sets. These, as well as other challenges, require a major rethink of almost all aspects of traditional database management systems to support streaming applications. Stream Data Management comprises eight invited chapters by researchers active in stream data management. The collected chapters provide an exposition of algorithms, languages, and systems proposed and implemented for managing streaming data. Stream Data Management is designed to appeal to researchers or practitioners already involved in stream data management, as well as to those starting out in this area. This book is also suitable for graduate students in computer science interested in learning about stream data management.
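The contrast between one-time and continuous evaluation can be sketched in a few lines: a continuous query re-evaluates its answer on every arriving item, typically over a bounded window of the unbounded stream. This is a generic illustration under assumed names, not the API of any system covered in the book.

```python
from collections import deque

class SlidingWindowAvg:
    """Continuous query sketch: maintain the average of the last
    `size` stream items incrementally, producing a fresh answer on
    every arrival instead of once over a finite table."""
    def __init__(self, size):
        self.size = size
        self.window = deque()
        self.total = 0.0

    def push(self, x):
        # incremental maintenance: add the new item, evict the oldest
        self.window.append(x)
        self.total += x
        if len(self.window) > self.size:
            self.total -= self.window.popleft()
        return self.total / len(self.window)

q = SlidingWindowAvg(size=3)
print([q.push(x) for x in [3, 6, 9, 12]])  # → [3.0, 4.5, 6.0, 9.0]
```

The window is what makes the query answerable at all: without it, an aggregate over an unbounded stream could never be finalized.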
Modern applications are both data and computationally intensive and require the storage and manipulation of voluminous traditional (alphanumeric) and nontraditional data sets (images, text, geometric objects, time-series). Examples of such emerging application domains are: Geographical Information Systems (GIS), Multimedia Information Systems, CAD/CAM, Time-Series Analysis, Medical Information Systems, On-Line Analytical Processing (OLAP), and Data Mining. These applications pose diverse requirements with respect to the information and the operations that need to be supported. From the database perspective, new techniques and tools therefore need to be developed towards increased processing efficiency. This monograph explores the way spatial database management systems aim at supporting queries that involve the space characteristics of the underlying data, and discusses query processing techniques for nearest neighbor queries. It provides both basic concepts and state-of-the-art results in spatial databases and parallel processing research, and studies numerous applications of nearest neighbor queries.
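As a minimal illustration of the query semantics discussed above, a nearest neighbor query returns the stored point closest to a query point under some distance metric. The linear scan below is only a baseline sketch with invented data; the monograph's subject is precisely the index structures and processing techniques that avoid scanning everything.

```python
import math

def nearest_neighbor(query, points):
    """Baseline nearest neighbor query: linear scan under Euclidean
    distance. Spatial access methods answer the same query without
    touching every stored point; this sketch shows only the semantics."""
    return min(points, key=lambda p: math.dist(query, p))

# hypothetical 2-D point data set
points = [(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0)]
print(nearest_neighbor((6.0, 5.0), points))  # → (5.0, 4.0)
```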
This volume comprises papers from the following five workshops that were part of the complete program for the International Conference on Extending Database Technology (EDBT) held in Heraklion, Greece, March 2004: ICDE/EDBT Joint Ph.D. Workshop (PhD); Database Technologies for Handling XML-information on the Web (DataX); Pervasive Information Management (PIM); Peer-to-Peer Computing and Databases (P2P&DB); Clustering Information Over the Web (ClustWeb). Together, the five workshops featured 61 high-quality papers selected from approximately 180 submissions. It was, therefore, difficult to decide on the papers that were to be accepted for presentation. We believe that the accepted papers substantially contribute to their particular fields of research. The workshops were an excellent basis for intense and highly fruitful discussions. The quality and quantity of papers show that the areas of interest for the workshops are highly active. A large number of excellent researchers are working on the aforementioned fields, producing research output that is not only of interest for other researchers but also for industry. The organizers and participants of the workshops were highly satisfied with the output. The high quality of the presenters and workshop participants contributed to the success of each workshop. The amazing environment of Heraklion and the location of the EDBT conference also contributed to the overall success. Last, but not least, our sincere thanks go to the conference organizers - the organizing team was always willing to help, and if there were things that did not work, assistance was quickly available.
We have described the development of a new micro-payment system, NetPay, featuring different ways of managing electronic money, or e-coins. NetPay provides an off-line, anonymous protocol that supports high-volume, low-cost electronic transactions over the Internet. We developed three kinds of e-wallets to manage coins in a NetPay-based system: a server-side e-wallet allowing multiple-computer access to e-coins; a client-side e-wallet allowing customer PC management of the e-coins; and a cookie-based e-wallet cache to improve performance of the client-side e-wallet communication overhead. Experiences to date with NetPay prototypes have demonstrated that it provides an effective micro-payment strategy, and customers welcome the ability to manage their electronic coins in different ways.
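The common behavior shared by all three e-wallet kinds, holding prepaid e-coins and spending them per micro-purchase, can be sketched as below. This is a purely hypothetical illustration; NetPay's actual protocol involves signed payword chains, broker redemption, and the server/client/cookie storage variants described above, none of which are modeled here.

```python
class EWallet:
    """Hypothetical minimal e-coin wallet: hold prepaid coins and
    release n of them per micro-purchase. Illustrative only; not
    NetPay's actual data structures or protocol."""
    def __init__(self, coins):
        self.coins = list(coins)  # prepaid e-coins

    def spend(self, n=1):
        # hand over the n oldest coins, or fail if the wallet is short
        if n > len(self.coins):
            raise ValueError("insufficient e-coins")
        spent, self.coins = self.coins[:n], self.coins[n:]
        return spent

w = EWallet(coins=["c1", "c2", "c3"])
print(w.spend(2))    # → ['c1', 'c2']
print(len(w.coins))  # → 1
```

The three wallet kinds differ mainly in where an object like this lives: on the vendor's server, on the customer's PC, or cached in a browser cookie.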
The IFIP TC-6 9th International Conference on Personal Wireless Communications, PWC 2004, is the main conference of the IFIP Working Group 6.8, Mobile and Wireless Communications. The field of personal wireless communications is steadily growing in importance, from an academic, industrial and societal point of view. The dropping cost of WLAN and short-range technologies such as Bluetooth and Zigbee is causing the proliferation of personal devices and appliances equipped with radio interfaces. Together with the gradual deployment of powerful wireless infrastructure networks, such as 3G cellular systems and WLAN hotspots, the conditions are being created for affordable ubiquitous communication involving virtually any artifact. This enables new application areas such as ambient intelligence, where a world of devices, sensors and actuators surrounding us uses wireless technology to create systems that assist us in an unobtrusive way. It also allows the development of personal and personalized environments that accompany a person wherever he or she goes. Examples are Personal Area Networks (PAN) physically surrounding a person, and personal networks with a potentially global reach. PWC 2004 reflects these developments, which are happening on a global scale. Researchers from all over the world, and in particular a large number from Asia, made contributions to the conference. There were 100 submissions. After a thorough reviewing process, 25 full papers and 13 short papers were retained for presentation in the technical sessions. The papers cover the whole range of wireless and mobile technologies: cellular systems, WLAN, ad hoc and sensor networks, host and network mobility, transport protocols for wireless systems, and the physical layer.
The 11th Conference "Artificial Intelligence: Methodology, Systems, Applications - Semantic Web Challenges" (AIMSA 2004) continued successfully pursuing the main aim of the AIMSA series of conferences - to foster the multidisciplinary community of artificial intelligence researchers, embracing both the theoretic underpinnings of the field and the practical issues involved in development, deployment, and maintenance of systems with intelligent behavior. Since the first conference in 1984, AIMSA has provided an ideal forum for international scientific exchange between Central/Eastern Europe and the rest of the world, and it is even more important nowadays in the unifying Europe. The current AIMSA edition is focused on Semantic Web methods and technologies. The Internet is changing the everyday services landscape, and the way we do things in almost every domain of our life. Web services are rapidly becoming the enabling technology of today's e-business and e-commerce systems, and will soon transform the Web as it is now into a distributed computation and application framework. The emerging Semantic Web paradigm promises to annotate Web artefacts to enable automated reasoning about them. When applied to e-services, the paradigm hopes to provide substantial automation for activities such as discovery, invocation, assembly, and monitoring of e-services. One hundred and seventy-six interesting papers were submitted to the conference.
This year marked the coming of age of the British National Conference on Databases, with its 21st conference held at Heriot-Watt University, Edinburgh, in July 2004. To mark the occasion, the general theme of the conference was "When Data Is Key", reflecting not only the traditional key awarded on a 21st birthday, but also the ever-growing importance of electronic data management in every aspect of our modern lives. The conference was run as part of DAMMS (Data Analysis, Manipulation, Management and Storage) Week, which included a number of co-located and complementary conferences and workshops, including the 2nd Workshop on Teaching, Learning and Assessment in Databases (TLAD2), the BNCOD BioInformatics Workshop, and the 1st International Conference on the Future of Consumer Insight Developments in Retail Banking. The aim of this co-location was to develop synergies between the teaching, research and commercial communities involved in all aspects of database activities, and to use BNCOD as a focus for future synergies and developments within these communities. Although this is entitled the British National Conference on Databases, BNCOD has always had an international focus, and this year more than most, with the majority of the papers submitted and accepted coming from outwith the UK.
CAiSE 2004 was the 16th in the series of International Conferences on Advanced Information Systems Engineering. In the year 2004 the conference was hosted by the Faculty of Computer Science and Information Technology, Riga Technical University, Latvia. Since the late 1980s, the CAiSE conferences have provided a forum for the presentation and exchange of research results and practical experiences within the field of Information Systems Engineering. The conference theme of CAiSE 2004 was Knowledge and Model Driven Information Systems Engineering for Networked Organizations. Modern businesses and IT systems are facing an ever more complex environment characterized by openness, variety, and change. Organizations are becoming less self-sufficient and increasingly dependent on business partners and other actors. These trends call for openness of business as well as IT systems, i.e. the ability to connect and interoperate with other systems. Furthermore, organizations are experiencing ever more variety in their business, in all conceivable dimensions. The different competencies required by the workforce are multiplying. In the same way, the variety in technology is overwhelming, with a multitude of languages, platforms, devices, standards, and products. Moreover, organizations need to manage an environment that is constantly changing and where lead times, product life cycles, and partner relationships are shortening. The demand of having to constantly adapt IT to changing technologies and business practices has resulted in the birth of new ideas which may have a profound impact on the information systems engineering practices in future years, such as autonomic computing, component and services marketplaces, and dynamically generated software.
Grid and cooperative computing has emerged as a new frontier of information technology. It aims to share and coordinate distributed and heterogeneous network resources for better performance and functionality than can otherwise be achieved. This volume contains the papers presented at the 2nd International Workshop on Grid and Cooperative Computing, GCC 2003, which was held in Shanghai, P.R. China, during December 7-10, 2003. GCC is designed to serve as a forum to present current and future work as well as to exchange research ideas among researchers, developers, practitioners, and users in Grid computing, Web services and cooperative computing, including theory and applications. For this workshop, we received over 550 paper submissions from 22 countries and regions. All the papers were peer-reviewed in depth and qualitatively graded on their relevance, originality, significance, presentation, and the overall appropriateness of their acceptance. Any concerns raised were discussed by the program committee. The organizing committee selected 176 papers for conference presentation (full papers) and 173 submissions for poster presentation (short papers). The papers included herein represent the forefront of research from China, USA, UK, Canada, Switzerland, Japan, Australia, India, Korea, Singapore, Brazil, Norway, Greece, Iran, Turkey, Oman, Pakistan and other countries. More than 600 attendees participated in the technical section and the exhibition of the workshop.