
    Developing Real-Time Emergency Management Applications: Methodology for a Novel Programming Model Approach

    Recent years have been characterized by the rise of highly distributed computing platforms composed of heterogeneous computing and communication resources, including centralized high-performance computing architectures (e.g. clusters or large shared-memory machines) as well as multi-/many-core components integrated into mobile nodes and network facilities. The emergence of computational paradigms such as Grid and Cloud Computing provides potential solutions to integrate such platforms with data systems, natural phenomena simulations, knowledge discovery and decision support systems, responding to a dynamic demand for remote computing and communication resources and services. In this context, time-critical applications, notably emergency management systems, are composed of complex sets of application components specialized for executing specific computations, which cooperate to achieve a global goal in a distributed manner. For years the scientific community has been engaged with the programming issues of distributed systems, aiming at applications of increasing complexity in the number of distributed components, in the spatial distribution of and cooperation between interested parties, and in their degree of heterogeneity. Over the last decade the research trend in distributed computing has focused on a crucial objective. The wide-ranging composition of distributed platforms in terms of different classes of computing nodes and network technologies, and the strong diffusion of applications requiring real-time elaboration and online compute-intensive processing, as in the case of emergency management systems, lead to a pronounced tendency of systems towards properties like self-management, self-organization, self-control and, strictly speaking, adaptivity. Adaptivity implies the development, deployment, execution and management of applications that are, in general, dynamic in nature. Dynamicity concerns the number and specific identification of cooperating components, and the deployment and composition of the most suitable versions of software components on processing and networking resources and services, i.e., both the quantity and the quality of the application components needed to achieve the required Quality of Service (QoS). In time-critical applications the QoS specification can vary dynamically during the execution, according to the user intentions and the information produced by sensors and services, as well as according to the monitored state and performance of networks and nodes. The general reference point for this kind of system is the Grid paradigm which, by definition, aims to enable the access, selection and aggregation of a variety of distributed and heterogeneous resources and services. However, though notable advancements have been achieved in recent years, current Grid technology is not yet able to supply software tools with the high adaptivity, ubiquity, proactivity, self-organization, scalability and performance, interoperability, fault tolerance and security required by the emerging applications.
For this reason, in this chapter we will study a methodology for designing high-performance computations able to exploit the heterogeneity and dynamicity of distributed environments by expressing adaptivity and QoS-awareness directly at the application level. An effective approach needs to address issues like the QoS predictability of different application configurations as well as the predictability of reconfiguration costs. Moreover, adaptation strategies need to be developed that assure properties like the stability degree of a reconfiguration decision and execution optimality (i.e. selecting reconfigurations that account for proper trade-offs among different QoS objectives). In this chapter we will present the basic points of a novel approach that lays the foundations for future programming model environments for time-critical applications such as emergency management systems. The organization of this chapter is the following. In Section 2 we will compare the existing research works for developing adaptive systems in critical environments, highlighting their drawbacks and inefficiencies. In Section 3, in order to clarify the application scenarios that we are considering, we will present an emergency management system in which the run-time selection of proper application configuration parameters is of great importance for meeting the desired QoS constraints. In Section 4 we will describe the basic points of our approach in terms of how compute-intensive operations can be programmed, how they can be dynamically modified and how adaptation strategies can be expressed. In Section 5 our approach will be contextualized in the definition of an adaptive parallel module, which is a building block for composing complex and distributed adaptive computations. Finally, in Section 6 we will describe a set of experimental results that show the viability of our approach, and in Section 7 we will give the concluding remarks of this chapter.
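
    As a rough illustration of expressing adaptivity and QoS-awareness at the application level, the following is a minimal Python sketch of an adaptive parallel module that uses an analytical performance model (QoS predictability) to decide a reconfiguration of its parallelism degree. All names, the cost model and the bounds are illustrative assumptions, not the chapter's actual framework.

```python
# A minimal sketch, assuming a simple task-farm performance model; this is
# not the chapter's actual programming environment.

class AdaptiveFarm:
    def __init__(self, workers: int, qos_bound: float, max_workers: int = 64):
        self.workers = workers            # current parallelism degree
        self.qos_bound = qos_bound        # target service time (s/task)
        self.max_workers = max_workers

    @staticmethod
    def predicted_service_time(calc_time: float, workers: int) -> float:
        # Performance model of a task farm: the service time ideally
        # decreases linearly with the number of workers.
        return calc_time / workers

    def control_step(self, observed_calc_time: float) -> int:
        # Pick the smallest degree whose predicted service time meets the
        # QoS bound, keeping reconfiguration and resource costs low.
        for candidate in range(1, self.max_workers + 1):
            if self.predicted_service_time(observed_calc_time, candidate) <= self.qos_bound:
                if candidate != self.workers:
                    self.workers = candidate   # would trigger the reconfiguration
                return self.workers
        return self.workers                    # bound unreachable: keep current degree

farm = AdaptiveFarm(workers=4, qos_bound=0.5)
print(farm.control_step(observed_calc_time=3.2))   # -> 7 (3.2/7 ≈ 0.46 s/task)
```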

    Wireless Communication Protocols for Distributed Computing Environments

    Distributed computing is an approach relying on the presence of multiple devices that can interact with each other in order to perform pervasive and parallel computing. This chapter deals with communication protocols intended for use in a distributed computing scenario; in particular, the considered computing infrastructure is composed of elements (nodes) able to handle specific application requests for the implementation of a service in a distributed manner, according to the pervasive grid computing principle (Priol & Vanneschi, 2008; Vanneschi & Veraldi, 2007). In the classical grid computing paradigm, the processing nodes are high-performance computers or multicore workstations, usually organized in clusters and interconnected through broadband wired communication networks with small delay (e.g., fiber optic, DSL lines). The pervasive grid computing paradigm overcomes these limitations, allowing the development of distributed applications that can perform parallel computations using heterogeneous devices interconnected by different types of communication technologies. In this way, we can resort to a computing environment composed of fixed or mobile devices (e.g., smartphones, PDAs, laptops) interconnected through broadband wireless or wired networks, where the devices are able to take part in a grid computing process. Suitable techniques for pervasive grid computing should be able to discover and organize heterogeneous resources, to allow scaling an application according to the available computing power, and to guarantee specific QoS profiles (Darby III & Tzeng, 2010; Roy & Das, 2009). In particular, the aim of this chapter is to present the most important challenges from the communication point of view when forming a distributed network for parallel and distributed computing. The focus will mainly be on resource discovery and computation scheduling in infrastructure-less wireless networks, considering their capabilities in terms of reliability and adaptation when facing heterogeneous computing requests.
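
    To give a feel for capability-based resource discovery and scheduling on such networks, here is a toy Python sketch. The descriptor fields, thresholds and the greedy scheduling policy are illustrative assumptions; the actual wireless protocol exchanges (broadcasts, replies, timeouts) are abstracted away.

```python
# A sketch of discovery/scheduling over an ad-hoc network, under assumed
# descriptor fields; not the chapter's actual protocol.

from dataclasses import dataclass

@dataclass
class NodeDescriptor:
    node_id: str
    cpu_score: float      # relative computing power
    battery: float        # residual energy, 0.0-1.0
    link_quality: float   # estimated wireless link reliability, 0.0-1.0

def discover(nodes: list[NodeDescriptor], min_battery: float = 0.3,
             min_link: float = 0.5) -> list[NodeDescriptor]:
    """Keep only the nodes that answered the discovery broadcast and are
    reliable enough to join the pervasive-grid computation."""
    return [n for n in nodes if n.battery >= min_battery
            and n.link_quality >= min_link]

def schedule(task_weights: list[float], workers: list[NodeDescriptor]) -> dict:
    """Greedy scheduling: heaviest tasks go to the most capable nodes."""
    ranked = sorted(workers, key=lambda n: n.cpu_score, reverse=True)
    return {w.node_id: t for w, t in zip(ranked, sorted(task_weights, reverse=True))}

nodes = [NodeDescriptor("a", 2.0, 0.9, 0.8), NodeDescriptor("b", 1.0, 0.2, 0.9),
         NodeDescriptor("c", 3.0, 0.7, 0.6)]
workers = discover(nodes)             # node "b" is dropped (battery too low)
print(schedule([5.0, 1.0], workers))  # {'c': 5.0, 'a': 1.0}
```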

    Adaptive Process Management in Cyber-Physical Domains

    The increasing application of process-oriented approaches in new challenging cyber-physical domains beyond business computing (e.g., personalized healthcare, emergency management, factories of the future, home automation, etc.) has led to a reconsideration of the level of flexibility and support required to manage complex processes in such domains. A cyber-physical domain is characterized by the presence of a cyber-physical system coordinating heterogeneous ICT components (PCs, smartphones, sensors, actuators) and involving real-world entities (humans, machines, agents, robots, etc.) that perform complex tasks in the “physical” real world to achieve a common goal. The physical world, however, is not entirely predictable, and processes enacted in cyber-physical domains must be robust to unexpected conditions and adaptable to unanticipated exceptions. This demands a more flexible approach to process design and enactment, recognizing that in real-world environments it is not adequate to assume that all possible recovery activities can be predefined for dealing with the exceptions that can ensue. In this chapter, we tackle the above issue and propose a general approach, a concrete framework and a process management system implementation, called SmartPM, for automatically adapting processes enacted in cyber-physical domains in case of unanticipated exceptions and exogenous events. The adaptation mechanism provided by SmartPM is based on declarative task specifications, execution monitoring for detecting failures and context changes at run-time, and automated planning techniques to self-repair the running process, without requiring any specific adaptation policy or exception handler to be predefined at design-time.
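
    The following Python sketch illustrates the monitor/repair cycle described above: compare the expected effects of a task with the state sensed in the physical world and, on a mismatch, call a planner to synthesize a recovery sequence. States are sets of string facts and a tiny breadth-first search stands in for the automated planning component; names and facts are illustrative, not SmartPM's actual API.

```python
# A conceptual sketch under assumed names; the real system would use a
# declarative task specification language and a full automated planner.

from collections import deque

# Action library: name -> (preconditions, effects added)
ACTIONS = {
    "move_to_area":   (set(),                     {"at_area"}),
    "recharge":       ({"at_area"},               {"battery_ok"}),
    "take_photo":     ({"at_area", "battery_ok"}, {"photo_taken"}),
}

def plan_recovery(state: frozenset, goal: set) -> list[str]:
    """Breadth-first search for an action sequence reaching the goal."""
    queue, seen = deque([(state, [])]), {state}
    while queue:
        s, plan = queue.popleft()
        if goal <= s:
            return plan
        for name, (pre, eff) in ACTIONS.items():
            if pre <= s:
                ns = frozenset(s | eff)
                if ns not in seen:
                    seen.add(ns)
                    queue.append((ns, plan + [name]))
    raise RuntimeError("no recovery plan found")

# Monitoring detects a mismatch: the photo task ran but the battery died,
# so the expected effect was not achieved in the physical world.
expected = {"photo_taken"}
observed = frozenset({"at_area"})             # sensed state after the failure
if not expected <= observed:
    print(plan_recovery(observed, expected))  # ['recharge', 'take_photo']
```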

    Interim Report 1: Learning Communities Project

    This is the first comprehensive report of the research conducted in relation to the Learning Communities Project, a collaboration between Athabasca University and Canadian Natural Resources Ltd. Executive summary: This is the first formal report of the Learning Communities Project (LCP), based on results of the evaluation and research activities conducted to date. The major findings of the project, and observations about processes used, are as follows: 1. The project is focused on the learning needs of adults; therefore, andragogy, the art and science of teaching adults, forms part of the basic philosophy of the project. Similarly, distance education, focusing on anytime/anyplace learning, is assumed to be the most appropriate type of delivery for courses included in the project. Other elements of the project deemed to be suitable, even required, for adults include prior learning assessment and recognition (PLAR), a focus on essential skills, and instruction designed to recognize the self-direction and autonomy needs of adults. 2. The above having been stated, the project also recognizes that many adult learners have not experienced self-direction in learning, or do not feel confident exercising full adult autonomy as students. The LCP therefore seeks to provide support and assistance as individually required, to help students feel comfortable and be successful in any learning projects embarked on within the project. (As part of the concern for individual learning preferences, learning styles and preferences are also a focus of research, and are considered in instructional design decisions.) 3. Distance education in this project is defined in the classic sense, as learning in which the learner and the tutor are normally separated, technology is used for interaction, there is institutional support throughout the learning process, and the prospect of two-way communication always exists. 4. Based on research to date, potential LCP participants are usually transitory (only a fraction live in the project’s regions), often from outside of Alberta, frequently subject to long commutes, and fully employed (many routinely work overtime). This is especially true of potential students at the CNQ Horizon site. There are evident implications for learner interest and motivation, programming content, instructional design, course and module delivery, and student support; these are being worked out as a core part of the project. 5. Technology is available in the region, due to the availability of Alberta SuperNet, and the technical resources of CNQ (at the Horizon site) and the post-secondary institutions that are already active in the region. As well, agencies such as eCampusAlberta, Alberta North, and the Canadian Virtual University already provide resources and learning opportunities to potential students. Despite these resources, and access to the Alberta SuperNet for broadband Internet connections, it is still true that rural areas are generally less well served technologically than urban areas (especially aboriginal communities); however, it is also true that rural residents are often more open to technology-based learning than those in urban areas. 6. Programming interests among CNQ employees or contractors who have inquired about or registered in courses through the project so far are primarily career-related, including business administration, accounting, project management, engineering, Blue Seal, and health and safety courses.
In the communities, pre-employment courses, and technology and trades training (especially where it includes employment-related hands-on experience), have been identified as major areas of interest. 7. Based on surveys and interviews, potential students encounter numerous barriers to participation in education and training programs, beginning with the fatigue they experience at the end of long work days, and extending to a potential lack of familiarity, access to, or comfort with technology, lack of familiarity with distance education as a mode of learning, and lack of information about the connection between courses, credits, and career advancement. 8. Tracking registrations that result from project activity remains problematic. The project is studying various ways to identify registrations generated by LCP activities, essential to determining the project’s impact. 9. The research portion of the project has produced and circulated five occasional reports, and this interim report. The purpose of research to date has been formative, intended to be of immediate use to project planners and participants. Feedback from project participants indicates that these reports have had the desired impact on project development. 10. The research team has a paper under development for peer review, addressing the question of the programming that is currently available in the project’s regions, and the rationale for what is currently being offered (or not offered). Additional data are being gathered regarding the uptake and efficacy of programming, including registrations and completions, for a future publication. As well, the research team plans to present at relevant conferences in the first half of 2008 in Nova Scotia, Ontario, and Alberta.

    Characteristics of Programming Context-Aware Applications and a Proposal for a High-Performance Model

    In this thesis we analyze the problem of describing high-performance pervasive applications. After studying the shortcomings of existing models, we propose a new approach born from the experience gained with the ASSIST high-performance environment.

    Collaborative adaptive accessibility and human capabilities

    This thesis discusses the challenges and opportunities facing the field of accessibility, particularly as computing becomes ubiquitous. It is argued that a new approach is needed that centres around adaptations (specific, atomic changes) to user interfaces and content in order to improve their accessibility for a wider range of people than is targeted by present Assistive Technologies (ATs). Further, the approach must take into consideration the capabilities of people at the human level and facilitate collaboration, in planned and ad-hoc environments. There are two main areas of focus: (1) helping people experiencing minor-to-moderate, transient and potentially-overlapping impairments, as may be brought about by the ageing process, and (2) supporting collaboration between people by reasoning about the consequences, from different users' perspectives, of the adaptations they may require. A theoretical basis for describing these problems and a reasoning process for the semi-automatic application of adaptations are developed. Impairments caused by the environment in which a device is being used are considered. Adaptations are drawn from other research and industry artefacts. Mechanical testing is carried out on key areas of the reasoning process, demonstrating fitness for purpose. Several fundamental techniques to extend the reasoning process to take temporal factors (such as fluctuating user and device capabilities) into account are broadly described. These are proposed to be feasible, though they inherently bring compromises (which are defined) in interaction stability and the needs of different actors (user, device, target level of accessibility). This technical work forms the basis of the contribution of one work-package of the Sustaining ICT use to promote autonomy (Sus-IT) project, under the New Dynamics of Ageing (NDA) programme of research in the UK. Test designs for larger-scale assessment of the system with real-world participants are given. The wider Sus-IT project provides social motivations and informed design decisions for this work, and is carrying out longitudinal acceptance testing of the processes developed here.
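
    A minimal Python sketch of the general idea of capability-based adaptation reasoning: each interface feature demands a level of some human capability, and an adaptation is proposed when a user's (possibly fluctuating) capability falls below that demand. The scales, names and adaptation catalogue are assumptions for illustration, not the thesis' actual model.

```python
# Illustrative only: capability levels on an assumed 0.0-1.0 scale.

USER = {"visual_acuity": 0.4, "fine_motor": 0.8, "hearing": 0.9}

INTERFACE_DEMANDS = {"visual_acuity": 0.7, "fine_motor": 0.5}

ADAPTATIONS = {
    "visual_acuity": "increase text size and contrast",
    "fine_motor":    "enlarge click targets",
    "hearing":       "show captions",
}

def propose_adaptations(user: dict, demands: dict) -> list[str]:
    """Return the adaptations needed to close each capability gap."""
    return [ADAPTATIONS[cap] for cap, need in demands.items()
            if user.get(cap, 1.0) < need]

print(propose_adaptations(USER, INTERFACE_DEMANDS))
# ['increase text size and contrast']
```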

    Logistic Knowledge Tracing: A Constrained Framework for Learner Modeling

    Adaptive learning technology solutions often use a learner model to trace learning and make pedagogical decisions. The present research introduces a formalized methodology for specifying learner models, Logistic Knowledge Tracing (LKT), that consolidates many extant learner modeling methods. The strength of LKT is the specification of a symbolic notation system for alternative logistic regression models that is powerful enough to specify many extant models in the literature as well as many new models. To demonstrate the generality of LKT, we fit 12 models, some variants of well-known models and some newly devised, to 6 learning technology datasets. The results indicated that no single learner model was best in all cases, further justifying a broad approach that considers multiple learner model features and the learning context. The models presented here avoid student-level fixed parameters to increase generalizability, and we introduce features that stand in for these intercepts. We argue that to be maximally applicable, a learner model needs to adapt to student differences, rather than needing to be pre-parameterized with the level of each student's ability.
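
    As a worked example of the kind of logistic model this framework generalizes, the sketch below implements a Performance Factors Analysis (PFA)-style predictor, where the log-odds of a correct response are a linear function of per-skill success and failure counts. The coefficient values are made up for illustration.

```python
# One member of the model family, with assumed coefficients.

import math

def predict_correct(beta_skill: float, successes: int, failures: int,
                    gamma: float = 0.2, rho: float = -0.1) -> float:
    """P(correct) = sigmoid(beta + gamma*successes + rho*failures)."""
    logit = beta_skill + gamma * successes + rho * failures
    return 1.0 / (1.0 + math.exp(-logit))

# A learner with 5 prior successes and 2 failures on a skill of easiness 0.3:
print(round(predict_correct(0.3, 5, 2), 3))   # 0.75
```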

    A Control-Theoretic Methodology for Adaptive Structured Parallel Computations

    Adaptivity for distributed parallel applications is an essential feature whose importance has been assessed in many research fields (e.g. scientific computations, large-scale real-time simulation systems and emergency management applications). Especially for high-performance computing, this feature is of special interest in order to properly and promptly respond to time-varying QoS requirements, to react to uncontrollable environmental effects influencing the underlying execution platform and to efficiently deal with highly irregular parallel problems. In this scenario the Structured Parallel Programming paradigm is a cornerstone for expressing adaptive parallel programs: the high degree of composability of parallelization schemes and their QoS predictability, formally expressed by performance models, are basic tools for introducing dynamic reconfiguration processes in adaptive applications. These reconfigurations are not limited to implementation aspects (e.g. parallelism degree modifications): parallel versions with different structures can also be expressed for the same computation, featuring different levels of performance, memory utilization, energy consumption, and exploitation of the memory hierarchies. Over the last decade several programming models and research frameworks have been developed aimed at the definition of tools and strategies for expressing adaptive parallel applications. Notwithstanding this notable research effort, properties like the optimality of the application execution and the stability of control decisions are not sufficiently studied in the existing work. For this reason this thesis carries out pioneering research in the context of providing formal theoretical tools founded on Control Theory and Game Theory techniques. Based on these approaches, we introduce a formal model for controlling distributed parallel applications represented by computational graphs of structured parallelism schemes (also called skeleton-based parallelism). Starting out from the performance predictability of structured parallelism schemes, in this thesis we provide a formalization of the concept of an adaptive parallel module performing structured parallel computations. The module behavior is described in terms of a Hybrid System abstraction and reconfigurations are driven by a Predictive Control approach. Experimental results show the effectiveness of this work, in terms of execution cost reduction as well as the stability degree of a system reconfiguration, i.e. how long a reconfiguration choice remains useful for targeting the required QoS levels. This thesis also faces the issue of controlling large-scale distributed applications composed of several interacting adaptive components. After a panoramic view of the existing control-theoretic approaches (e.g. based on decentralized, distributed or hierarchical structures of controllers), we introduce a methodology for distributed predictive control. For controlling computational graphs, the overall control problem consists of a set of coupled control sub-problems, one for each application module. The decomposition issue has a twofold nature: first of all we need to model the coupling relationships between control sub-problems; furthermore we need to introduce proper notions of negotiation and convergence in the control decisions collectively taken by the parallel modules of the application graph. This thesis provides a formalization through basic concepts of Non-cooperative Games and Cooperative Optimization.
In the notable context of the distributed control of performance and resource utilization, we exploit a formal description of the control problem, providing results on the existence of equilibrium points and comparing the control optimality of different adaptation strategies and interaction protocols. Discussions and a first validation of the proposed techniques are provided through experiments performed in a simulation environment.
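
    A minimal predictive-control sketch of the reconfiguration decision discussed above: over a short forecast horizon, pick the parallelism degree minimizing a cost that combines predicted QoS violation, resource usage and a switching penalty; the switching term is what discourages unstable back-and-forth reconfigurations. The cost model and all constants are illustrative assumptions, not the thesis' actual formulation.

```python
# A sketch under an assumed linear-speedup service-time model.

def control_step(current: int, forecast_loads: list[float], target: float,
                 switch_penalty: float = 0.5) -> int:
    def total_cost(n: int) -> float:
        # QoS violation plus a per-node resource cost, summed over the horizon.
        run = sum(max(0.0, load / n - target) + 0.05 * n
                  for load in forecast_loads)
        # Switching cost, paid once per reconfiguration (stability term).
        return run + switch_penalty * abs(n - current)
    return min(range(1, 33), key=total_cost)

# Load is forecast to rise; the controller moves from 4 to 7 workers rather
# than jumping to the zero-violation degree, because switching has a cost.
print(control_step(current=4, forecast_loads=[8.0, 10.0, 12.0], target=1.0))  # 7
```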

    Evolutionary design assistants for architecture

    In its parallel pursuit of increased competitiveness for design offices and more pleasurable and easier workflows for designers, artificial design intelligence is a technical, intellectual, and political challenge. While human-machine cooperation has become commonplace through Computer Aided Design (CAD) tools, improved collaboration and better support appear possible only through an endeavor into a kind of artificial design intelligence that is more sensitive to the human perception of affairs. Considered as part of the broader field of Computational Design studies, the research program of this quest can be called Artificial / Autonomous / Automated Design (AD). The currently available level of Artificial Intelligence (AI) for design is limited, and a viable aim for current AD would be to develop design assistants that are capable of producing drafts for various design tasks. Thus, the overall aim of this thesis is the development of approaches, techniques, and tools towards artificial design assistants that offer a capability for generating drafts for sub-tasks within design processes. The main technology explored for this aim is Evolutionary Computation (EC), and the target design domain is architecture. The two connected research questions of the study concern, first, the investigation of ways to develop an architectural design assistant, and secondly, the utilization of EC for the development of such assistants. While developing approaches, techniques, and computational tools for such an assistant, the study also carries out a broad theoretical investigation into the main problems, challenges, and requirements towards such assistants at a rather general level. Therefore, the research is shaped as a parallel investigation of three main threads interwoven along several levels, moving from a more general level to specific applications. The three research threads comprise, first, theoretical discussions and speculations with regard to both the existing literature and the proposals and applications of the thesis; secondly, proposals for descriptive and prescriptive models, mappings, summary illustrations, task structures, decomposition schemes, and integrative frameworks; and finally, experimental applications of these proposals. This tripartite progression allows an evaluation of each proposal both conceptually and practically, thereby enabling a progressive improvement of the understanding of the research question while producing concrete outputs along the way. Besides theoretical and interpretative examinations, the thesis investigates its subject through a set of practical and speculative proposals, which function both as research instruments and as outputs of the study. The first main output of the study is the “design_proxy” approach (d_p), which is an integrated approach for draft-making design assistants. It is an outcome of both theoretical examinations and experimental applications, and proposes an integration of (1) flexible and relaxed task definitions and representations (instead of strict formalisms), (2) intuitive interfaces that make use of usual design media, (3) evaluation of solution proposals through their similarity to given examples, and (4) a dynamic evolutionary approach for solution generation. The design_proxy approach may be useful for AD researchers who aim at developing practical design assistants, as has been examined and demonstrated with the two applications, i.e., design_proxy.graphics and design_proxy.layout.
The second main output, the “Interleaved Evolutionary Algorithm” (IEA, or Interleaved EA), is a novel evolutionary algorithm proposed and used as the underlying generative mechanism of design_proxy-based design assistants. The Interleaved EA is a dynamic, adaptive, and multi-objective EA in which one of the objectives leads the evolution until its fitness progression stagnates, in the sense that the settings and fitness values of this objective are used for most evolutionary decisions. In this way, the Interleaved EA enables the use of different settings and operators for each of the objectives within an overall task, which would be the same for all objectives in a regular multi-objective EA. This property gives the algorithm a modular structure, which offers an improvable method for the utilization of domain-specific knowledge for each sub-task, i.e., objective. The Interleaved EA can be used by Evolutionary Computation (EC) researchers and by practitioners who employ EC for their tasks. As a third main output, the “Architectural Stem Cells Framework” is a conceptual framework for architectural design assistants. It proposes a dynamic and multi-layered method for combining a set of design assistants for larger tasks in architectural design. The first component of the framework is a layer-based, parallel task decomposition approach, which aims at obtaining a dynamic parallelization of sub-tasks within a more complicated problem. The second component of the framework is a conception of the development mechanisms for building drafts, i.e., Architectural Stem Cells (ASC). An ASC can be conceived as a semantically marked geometric structure, which contains the information that specifies the possibilities and constraints for how an abstract building may develop from an undetailed stage to a fully developed building draft. ASCs are required for re-integrating the separated task layers of an architectural problem through solution-based development. The ASC Framework brings together many of the ideas of this thesis into a practical research agenda, and it is presented to the AD researchers in architecture. Finally, the “design_proxy.layout” (d_p.layout) is an architectural layout design assistant based on the design_proxy approach and the IEA. The system uses a relaxed problem definition (producing draft layouts) and a flexible layout representation that permits the overlapping of design units and boundaries. User interaction with the system is carried out through intuitive 2D graphics, and the functional evaluations are performed by measuring the similarity of a proposal to existing layouts. Functioning in an integrated manner, these properties make the system a practicable and enjoyable design assistant, as was demonstrated through two workshop cases. The d_p.layout is a versatile and robust layout design assistant that can be used by architects in their design processes.
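
    A compact Python sketch of the interleaving idea described above: one objective “leads” the evolution (its fitness drives selection) until its best value stagnates, at which point leadership passes to the next objective. The genome representation, operators and the stagnation window are illustrative assumptions, not the actual IEA implementation.

```python
# A toy minimization setup, assuming real-valued genomes and Gaussian mutation.

import random

def interleaved_ea(objectives, pop_size=30, generations=200, window=10):
    pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(pop_size)]
    lead, best_history = 0, []
    for _ in range(generations):
        fit = objectives[lead]                 # only the leading objective
        pop.sort(key=fit)                      # drives selection this generation
        best_history.append(fit(pop[0]))
        # Detect stagnation of the leader and pass leadership on.
        if len(best_history) >= window and \
           best_history[-window] - best_history[-1] < 1e-6:
            lead = (lead + 1) % len(objectives)
            best_history.clear()
        # Elitist reproduction; per-objective operators/settings could be
        # plugged in here, which is the point of the modular structure.
        parents = pop[: pop_size // 2]
        pop = parents + [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return pop[0]

# Two toy objectives over the same genome:
f1 = lambda x: sum(g * g for g in x)           # pull values toward 0
f2 = lambda x: sum((g - 1) ** 2 for g in x)    # pull values toward 1
print(interleaved_ea([f1, f2]))
```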