Dagstuhl Reports : Volume 1, Issue 2, February 2011
Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Youn
Engineering framework for service-oriented automation systems
Doctoral thesis. Informatics Engineering. Universidade do Porto, Faculdade de Engenharia. 201
Analysis and Optimization of Mobile Business Processes
Mobility of workers and business processes is rapidly gaining the attention of businesses and business analysts. A wide variety of definitions exists for mobile business processes. This work considers a type of business process concerned with the maintenance of distributed technical equipment such as telecommunication networks, utility networks, or professional office gear. Executing the processes in question, workers travel to the location where the equipment is situated and perform tasks there. Depending on the type of activities to be performed, the workers need certain qualifications to fulfill their duties. Especially in network maintenance processes, activities are often not isolated but depend on the parallel or subsequent execution of other activities at other locations. Like every other economic activity, the outlined mobile processes are under permanent pressure to be executed more efficiently. Since business process reengineering (BPR) projects are the common way to achieve process improvements, business analysts need methods to model and evaluate mobile business processes.
Mobile processes challenge BPR projects in two ways: (i) the process attributes introduced by mobility (traveling, remote synchronization, etc.) complicate process modeling, and (ii) these attributes introduce process dynamics that prevent the straightforward prediction of BPR effects. This work addresses these problems by developing a modeling method for mobile processes. The method allows for simulating mobile processes considering the mobility attributes while hiding the complexity of these attributes from the business analysts modeling the processes.
Simulating business processes requires assigning activities to workers, which is called scheduling. The spatial distribution of activities relates scheduling to routing problems known from the logistics domain. To provide the simulator with scheduling capabilities, the corresponding Mobile Workforce Scheduling Problem with Multitask-Processes (MWSP-MP) is introduced and analyzed in depth. A set of neighborhood operators was developed to allow for the application of heuristics and meta-heuristics to the problem. Furthermore, methods for generating start solutions of the MWSP-MP are introduced.
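The abstract does not spell out the operators themselves. As an illustrative sketch only (all names, the route representation, and the cost model below are invented for the example, not taken from the thesis), a neighborhood operator for such a workforce scheduling problem could be a swap move between two workers' routes, driven by first-improvement local search:

```python
import random


def swap_move(schedule, rng):
    """One neighborhood move: swap two activities between two workers' routes.

    `schedule` maps worker -> ordered list of activity ids (a route).
    Returns a modified copy; a real MWSP-MP operator would additionally
    check qualification and precedence constraints before accepting the move.
    """
    workers = [w for w, route in schedule.items() if route]
    if len(workers) < 2:
        return schedule
    w1, w2 = rng.sample(workers, 2)
    new = {w: list(r) for w, r in schedule.items()}
    i = rng.randrange(len(new[w1]))
    j = rng.randrange(len(new[w2]))
    new[w1][i], new[w2][j] = new[w2][j], new[w1][i]
    return new


def local_search(schedule, cost, steps=100, seed=0):
    """First-improvement hill climbing over the swap neighborhood."""
    rng = random.Random(seed)
    best = schedule
    for _ in range(steps):
        cand = swap_move(best, rng)
        if cost(cand) < cost(best):
            best = cand
    return best
```

A meta-heuristic such as tabu search or simulated annealing would use the same move but occasionally accept worsening candidates to escape local optima.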
The methods introduced throughout this work were validated with real-world data from a German utility. The contributions of this work are a reference model of mobile work, a business-domain-independent modeling method for mobile business processes, a simulation environment for such processes, and the introduction and analysis of the Mobile Workforce Scheduling Problem with Multitask-Processes.
Advances in Grid Computing
This book approaches grid computing with a perspective on the latest achievements in the field, providing insight into current research trends and advances, and presenting a broad range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence, genetic algorithms, and quantum encryption are considered in order to address the two main aspects of grid computing: resource management and data management. The book also addresses aspects of grid computing that regard architecture and development, and includes a diverse range of applications for grid computing, including a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning, and complex water systems.
Infrastructures and Algorithms for Testable and Dependable Systems-on-a-Chip
Every new semiconductor technology node provides further miniaturization and higher performance, increasing the number of advanced functions that electronic products can offer. Silicon area is now so cheap that industries can integrate in a single chip, usually referred to as a System-on-Chip (SoC), all the components and functions that historically were placed on a hardware board. Although such advanced functionality benefits users, the manufacturing process is becoming finer and denser, making chips more susceptible to defects. Today's very deep-submicron semiconductor technologies (0.13 micron and below) have reached susceptibility levels that put conventional semiconductor manufacturing at an impasse. Being able to rapidly develop, manufacture, test, diagnose and verify such complex new chips and products is crucial for the continued success of our economy at large. This trend is expected to continue for at least the next ten years, making possible the design and production of 100-million-transistor chips.
To speed up the research, the National Technology Roadmap for Semiconductors identified in 1997 a number of major hurdles to be overcome. Some of these hurdles are related to test and dependability.
Test is one of the most critical tasks in the semiconductor production process, where Integrated Circuits (ICs) are tested several times, from wafer probing to the end-of-production test. Test is not only necessary to assure fault-free devices; it also plays a key role in analyzing defects in the manufacturing process. This last point is highly relevant, since increasing time-to-market pressure on semiconductor fabrication often forces foundries to start volume production on a given semiconductor technology node before reaching the defect densities, and hence yield levels, traditionally obtained at that stage. The feedback derived from test is the only way to analyze and isolate many of the defects in today's processes and to increase process yield.
With the increasing demand for high-quality electronic products, at each new physical assembly level, such as board and system assembly, test is used for debugging, diagnosing and repairing the sub-assemblies in their new environment. Similarly, increasing reliability, availability and serviceability requirements lead users of high-end products to perform periodic tests in the field throughout the full life cycle.
To allow advancements in each of the above scaling trends, fundamental changes are expected to emerge in the different IC realization disciplines, such as IC design, packaging and silicon process. These changes have a direct impact on test methods, tools and equipment. Conventional test equipment and methodologies will be inadequate to assure high quality levels. On-chip specialized blocks dedicated to test, usually referred to as Infrastructure IP (Intellectual Property), need to be developed and included in new complex designs to assure that new chips can be adequately tested, diagnosed, measured, debugged and, sometimes, even repaired.
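Test-oriented Infrastructure IP commonly includes built-in self-test (BIST) logic that feeds the circuit pseudo-random patterns from a linear feedback shift register (LFSR). Purely as an illustration of that general technique (the width, tap polynomial, and seed below are assumptions for the example, not details from the thesis), a software model of a Galois LFSR pattern generator:

```python
def lfsr_patterns(width, taps, seed, count):
    """Software model of a right-shifting Galois LFSR, as used in BIST
    pattern generators. Yields `count` pseudo-random `width`-bit patterns.

    `taps` is the feedback mask; with a primitive polynomial the LFSR
    cycles through all 2**width - 1 nonzero states (maximal length).
    """
    state = seed & ((1 << width) - 1)
    for _ in range(count):
        yield state
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps


# Example: 4-bit LFSR with taps 0b1001 (x^4 + x + 1), seed 0b1000.
# This configuration is maximal-length: 15 distinct nonzero patterns.
patterns = list(lfsr_patterns(4, 0b1001, 0b1000, 15))
```

In silicon, the same structure is a few flip-flops and XOR gates, which is why LFSR-based pattern generation is cheap enough to embed next to every core under test.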
In this thesis, some of the scaling trends in designing new complex SoCs are analyzed one at a time, observing their implications on test and identifying the key hurdles and challenges to be addressed. The goal of the remainder of the thesis is the presentation of possible solutions. It is not sufficient to address just one of the challenges; all must be met at the same time to fulfill the market requirements.
Proceedings Work-In-Progress Session of the 13th Real-Time and Embedded Technology and Applications Symposium
The Work-In-Progress session of the 13th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS'07) presents papers describing contributions to both the state of the art and the state of the practice in the broad field of real-time and embedded systems. The 17 accepted papers were selected from 19 submissions. These proceedings are also available as Washington University in St. Louis Technical Report WUCSE-2007-17, at http://www.cse.seas.wustl.edu/Research/FileDownload.asp?733. Special thanks go to the General Chairs, Steve Goddard and Steve Liu, and the Program Chairs, Scott Brandt and Frank Mueller, for their support and guidance.
Efficiently Conducting Quality-of-Service Analyses by Templating Architectural Knowledge
Previously, software architects were unable to effectively and efficiently apply reusable knowledge (e.g., architectural styles and patterns) in architectural analyses. This work tackles this problem with a novel method for creating and applying templates for reusable knowledge. These templates capture reusable knowledge formally and can be efficiently integrated into architectural analyses.
Facilitating Flexible Link Layer Protocols for Future Wireless Communication Systems
This dissertation addresses the problem of designing link layer protocols which are flexible enough to accommodate the demands of future wireless communication systems (FWCS). We show that entire link layer protocols with diverse requirements and responsibilities can be composed of reconfigurable and reusable components. We demonstrate this by designing and implementing a novel concept termed the Flexible Link Layer (FLL) architecture. Through extensive simulations and practical experiments, we evaluate a prototype of the suggested architecture in both fixed-spectrum and dynamic spectrum access (DSA) networks.
FWCS are expected to overcome diverse challenges, including the continual growth in traffic volume and in the number of connected devices. Furthermore, they are envisioned to support a wide range of new application requirements and operating conditions. Technology trends, including smart homes, communicating machines, and vehicular networks, will not only grow on a scale that was once unimaginable; they will also become the predominant communication paradigm, eventually surpassing today's human-produced network traffic.
In order for this to become reality, today's systems have to evolve in many ways. They have to exploit allocated resources in a more efficient and energy-conscious manner. In addition, new methods for spectrum access and resource sharing need to be deployed. Given the diversification of applications and network conditions, flexibility at all layers of a communication system is of paramount importance in order to meet the desired goals.
However, traditional communication systems are often designed with specific and distinct applications in mind. System designers can therefore tailor communication systems to fixed requirements and operating conditions, often resulting in highly optimized but inflexible systems. Among the core problems of such designs is the mixing of data transfer and management aspects. This combination of concerns clearly hinders the reuse and extension of existing protocols.
To overcome this problem, the key idea explored in this dissertation is a component-based design that facilitates the development of more flexible and versatile link layer protocols. Specifically, the FLL architecture suggested in this dissertation employs a generic, reconfigurable data transfer protocol around which one or more complementary protocols, called link layer applications, handle the management-related aspects of the layer.
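The separation described above, a generic data-transfer core with pluggable management components, can be sketched in a few lines. All class and method names here are hypothetical stand-ins invented for the example, not the dissertation's actual API:

```python
class LinkLayerApp:
    """A management-side component (link layer application): observes
    events from the data-transfer core and reconfigures it."""

    def on_event(self, core, event):
        raise NotImplementedError


class DataTransferCore:
    """Generic, reconfigurable data-transfer protocol. Management
    concerns live entirely in attached link layer applications, so the
    same core is reusable in fixed-spectrum and DSA deployments."""

    def __init__(self):
        self.apps = []      # attached link layer applications
        self.channel = 0    # current operating channel
        self.log = []       # (channel, frame) pairs actually sent

    def attach(self, app):
        self.apps.append(app)

    def notify(self, event):
        for app in self.apps:
            app.on_event(self, event)

    def send(self, frame):
        self.log.append((self.channel, frame))
        self.notify(("sent", frame))


class SpectrumManager(LinkLayerApp):
    """Example link layer application: hops to the next channel when
    the current one is reported busy (a toy stand-in for a DSA policy)."""

    def on_event(self, core, event):
        kind, _ = event
        if kind == "busy":
            core.channel = (core.channel + 1) % 4
```

Swapping `SpectrumManager` for a different application changes the layer's management behavior without touching the data-transfer core, which is the kind of reuse the component-based design aims at.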
To demonstrate the feasibility of the proposed approach, we have designed and implemented a prototype of the FLL architecture on the basis of a reconfigurable software defined radio (SDR) testbed. Employing the SDR prototype as well as computer simulations, this dissertation describes various experiments used to examine a range of link layer protocols for both fixed-spectrum and DSA networks.
This dissertation first outlines the challenges faced by FWCS and describes DSA as a possible technology component for their construction. It then specifies the requirements for future DSA systems that provide the basis for our further considerations. We then review the background on link layer protocols, survey related work on the construction of flexible protocol frameworks, and compare a range of actual link layer protocols and algorithms. Based on the results of this analysis, we design, implement, and evaluate the FLL architecture and a selection of actual link layer protocols.
We believe the findings of this dissertation add substantively to the existing literature on link layer protocol design and are valuable for theoreticians and experimentalists alike.
- …