1,020 research outputs found

    Adaptive network protocols to support queries in dynamic networks

    Recent technological advancements have led to the popularity of mobile devices, which can dynamically form wireless networks. In order to discover and obtain distributed information, queries are widely used by applications in opportunistically formed mobile networks. Given the popularity of this approach, application developers can choose from a number of implementations of query processing protocols to support the distributed execution of a query over the network. However, different inquiry strategies (i.e., the query processing protocol and associated parameters used to execute a query) have different tradeoffs between the quality of the query's result and the cost required for execution under different operating conditions. The application developer's choice of inquiry strategy is important to meet the application's needs while considering the limited resources of the mobile devices that form the network. We propose adaptive approaches to choose the most appropriate inquiry strategy in dynamic mobile environments. We introduce an architecture for adaptive queries that employs knowledge about the current state of the dynamic mobile network and the history of previous query results to learn the most appropriate inquiry strategy to balance quality and cost tradeoffs in a given setting, and uses this information to dynamically adapt the continuous query's execution.
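
    The abstract does not specify the learning mechanism; as one illustration of how such an adaptive selector might work, the sketch below uses a simple epsilon-greedy scheme that scores each candidate inquiry strategy by a weighted quality/cost utility per observed network state. All strategy names, state labels, and parameters are hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical sketch: an epsilon-greedy selector that learns which inquiry
# strategy (protocol + parameters) best balances result quality against cost
# for the currently observed network conditions. Names are illustrative only;
# the paper's actual architecture and learning rule are not reproduced here.

STRATEGIES = ["flooding", "gossip_p0.5", "expanding_ring_ttl3"]

class InquiryStrategySelector:
    def __init__(self, epsilon=0.1, quality_weight=0.7):
        self.epsilon = epsilon
        self.quality_weight = quality_weight                   # quality/cost tradeoff
        self.scores = defaultdict(lambda: defaultdict(list))   # state -> strategy -> utilities

    def _utility(self, quality, cost):
        # Higher quality is better, higher cost is worse (both normalized to [0, 1]).
        return self.quality_weight * quality - (1 - self.quality_weight) * cost

    def choose(self, network_state):
        # Explore occasionally; otherwise exploit the best-known strategy for this state.
        if random.random() < self.epsilon or not self.scores[network_state]:
            return random.choice(STRATEGIES)
        return max(
            STRATEGIES,
            key=lambda s: sum(self.scores[network_state][s] or [0.0])
            / max(len(self.scores[network_state][s]), 1),
        )

    def record(self, network_state, strategy, quality, cost):
        # Feed back the observed result quality and execution cost of a finished query.
        self.scores[network_state][strategy].append(self._utility(quality, cost))

selector = InquiryStrategySelector()
state = "dense_low_mobility"            # coarse bucket of observed network conditions
strategy = selector.choose(state)
selector.record(state, strategy, quality=0.8, cost=0.3)
```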

    Situation awareness approach to context-aware case-based decision support.

    Context-aware case-based decision support systems (CACBDSS) use the context of users as one of the features for similarity assessment to provide solutions to problems. The combination of context-aware case-based reasoning (CBR) with general domain knowledge has been shown to improve similarity assessment, solving domain-specific problems and problems of uncertain knowledge. Whilst these CBR approaches in context awareness address problems of incomplete data and domain-specific problems, future problems that are situation-dependent cannot be anticipated, because the CACBDSS lacks the data to make predictions. Future problems can be predicted through situation awareness (SA), a psychological concept of knowing what is happening around you in order to anticipate the future. The work conducted in this thesis explores the incorporation of SA into CACBDSS. It develops a framework to decouple the interface and underlying data model using an iterative research and design methodology. Two new approaches to using situation awareness to enhance CACBDSS are presented: (1) situation awareness as a problem identification component of CACBDSS; (2) situation awareness for both problem identification and problem solving in CACBDSS. The first approach comprises two distinct parts: an SA part and a CBR part. The SA part understands the problem by using rules to interpret cues from the environment and users. The CBR part uses the knowledge from the SA part to provide solutions. The second approach fuses the two technologies into a single case-based situation awareness (CBSA) model, in which situation awareness is based on experience rather than rules and supports problem-solving predictions. The CBSA system perceives the user's context and the environment and uses them to understand the current situation by retrieving similar past situations. The future of a new situation is predicted from the history of similar past situations. Implementation of the two approaches in the flow assurance control domain to predict the formation of hydrate shows improvements in both similarity assessment and problem-solving predictions compared to CACBDSS without SA. Specifically, the second approach provides improved decision support in scenarios where there are experienced situations. In the absence of experienced situations, the first approach offers more reliable solutions because of its rule-based capability. The adaptation of the user interface of the approaches to the current situation and the presentation of a reusable sequence of tasks in the situation reduce memory load on operators. The integrated research-design methodology used in realising these approaches links theory and practice, thinking and doing, achieving practical as well as research objectives. The action research with practitioners provided an understanding of the domain activities, the social settings, resources, and goals of users. The user-centered design process ensures an understanding of the users. The agile development model ensures iterative work and enables faster development of a functional prototype, which is more easily communicated and tested, thus giving better input for the next iteration.
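
    As a rough illustration of the case-based retrieval step such a CBSA model relies on (not the thesis's implementation), the sketch below ranks past situations by a weighted similarity over context features and reads a predicted outcome off the closest match; the flow-assurance features and cases shown are invented for the example.

```python
import math

# Illustrative sketch only: retrieve the most similar past situations by weighted
# similarity over numeric context features, then use their recorded outcomes to
# anticipate how the current situation may evolve.

def similarity(context_a, context_b, weights):
    # Weighted inverse-distance similarity in [0, 1].
    dist = math.sqrt(sum(w * (context_a[f] - context_b[f]) ** 2 for f, w in weights.items()))
    return 1.0 / (1.0 + dist)

def retrieve(current, case_base, weights, k=3):
    # Return the k past situations most similar to the current context.
    ranked = sorted(case_base, key=lambda c: similarity(current, c["context"], weights), reverse=True)
    return ranked[:k]

# Hypothetical flow-assurance context features for hydrate-formation prediction.
weights = {"pressure": 0.4, "temperature": 0.4, "water_cut": 0.2}
case_base = [
    {"context": {"pressure": 95.0, "temperature": 4.0, "water_cut": 0.30}, "outcome": "hydrate_formed"},
    {"context": {"pressure": 60.0, "temperature": 15.0, "water_cut": 0.10}, "outcome": "no_hydrate"},
]
current = {"pressure": 90.0, "temperature": 5.0, "water_cut": 0.25}
nearest = retrieve(current, case_base, weights, k=1)
print(nearest[0]["outcome"])   # predicted future based on the most similar past situation
```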

    Scalable Observation, Analysis, and Tuning for Parallel Portability in HPC

    It is desirable for general productivity that high-performance computing applications be portable to new architectures, or can be optimized for new workflows and input types, without the need for costly code interventions or algorithmic re-writes. Parallel portability programming models provide the potential for high performance and productivity; however, they come with a multitude of runtime parameters that can have a significant impact on execution performance. Selecting the optimal set of parameters, so that HPC applications perform well in different system environments and on different input data sets, is not trivial. This dissertation maps out a vision for addressing this parallel portability challenge, and then demonstrates this plan through an effective combination of observability, analysis, and in situ machine learning techniques. A platform for general-purpose observation in HPC contexts is investigated, along with support for its use in human-in-the-loop performance understanding and analysis. The dissertation culminates in a demonstration of lessons learned in order to provide automated tuning of HPC applications utilizing parallel portability frameworks.
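
    To make the tuning problem concrete, the following minimal sketch shows empirical autotuning in its simplest form: timing a stand-in kernel under each candidate runtime-parameter configuration and keeping the fastest. The parameter name and workload are placeholders, not the dissertation's framework or its machine-learning-based approach.

```python
import itertools
import time

# Minimal autotuning sketch: try candidate runtime-parameter configurations,
# time a representative workload, and keep the fastest configuration.

def kernel(n, block_size):
    # Stand-in workload whose speed depends on a tunable blocking parameter.
    total = 0
    for start in range(0, n, block_size):
        total += sum(range(start, min(start + block_size, n)))
    return total

def autotune(search_space, n=200_000):
    best_cfg, best_time = None, float("inf")
    for cfg in itertools.product(*search_space.values()):
        params = dict(zip(search_space.keys(), cfg))
        t0 = time.perf_counter()
        kernel(n, **params)
        elapsed = time.perf_counter() - t0
        if elapsed < best_time:
            best_cfg, best_time = params, elapsed
    return best_cfg, best_time

space = {"block_size": [64, 256, 1024, 4096]}   # hypothetical tunable runtime parameter
print(autotune(space))
```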

    Adaptive object-modeling : patterns, tools and applications

    Doctoral Program thesis. Informatics. Universidade do Porto. Faculdade de Engenharia. 201

    Affinity-Based Reinforcement Learning : A New Paradigm for Agent Interpretability

    The steady increase in complexity of reinforcement learning (RL) algorithms is accompanied by a corresponding increase in opacity that obfuscates insights into their devised strategies. Methods in explainable artificial intelligence seek to mitigate this opacity by either creating transparent algorithms or extracting explanations post hoc. A third category exists that allows the developer to affect what agents learn: constrained RL has been used in safety-critical applications and prohibits agents from visiting certain states; preference-based RL agents have been used in robotics applications and learn state-action preferences instead of traditional reward functions. We propose a new affinity-based RL paradigm in which agents learn strategies that are partially decoupled from reward functions. Unlike entropy regularisation, we regularise the objective function with a distinct action distribution that represents a desired behaviour; we encourage the agent to act according to a prior while learning to maximise rewards. The result is an inherently interpretable agent that solves problems with an intrinsic affinity for certain actions. We demonstrate the utility of our method in a financial application: we learn continuous time-variant compositions of prototypical policies, each interpretable by its action affinities, that are globally interpretable according to customers' financial personalities. Our method combines advantages from both constrained RL and preference-based RL: it retains the reward function but generalises the policy to match a defined behaviour, thus avoiding problems such as reward shaping and hacking. Unlike Boolean task composition, our method is a fuzzy superposition of different prototypical strategies to arrive at a more complex, yet interpretable, strategy.
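
    One plausible form of such an affinity regulariser (the paper's exact objective may differ) penalises the divergence between the policy's action distribution and a prior that encodes the desired behaviour, as sketched below with invented numbers.

```python
import numpy as np

# Hedged sketch of an affinity-regularised objective: maximise expected return
# while penalising KL divergence of the policy's action distribution from a
# prior "affinity" distribution representing the desired behaviour.

def kl(p, q, eps=1e-12):
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def affinity_regularised_objective(expected_return, policy_probs, affinity_prior, lam=0.5):
    # J(pi) = E[return] - lambda * mean_s KL(pi(.|s) || prior)
    penalty = np.mean([kl(pi_s, affinity_prior) for pi_s in policy_probs])
    return expected_return - lam * penalty

# Example: three actions; the affinity prior favours the first action.
affinity_prior = [0.6, 0.3, 0.1]
policy_probs = [[0.5, 0.3, 0.2], [0.7, 0.2, 0.1]]   # policy at two sampled states
print(affinity_regularised_objective(expected_return=1.2,
                                     policy_probs=policy_probs,
                                     affinity_prior=affinity_prior))
```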

    Guideline for Trustworthy Artificial Intelligence -- AI Assessment Catalog

    Artificial Intelligence (AI) has made impressive progress in recent years and represents a key technology that has a crucial impact on the economy and society. However, it is clear that AI and business models based on it can only reach their full potential if AI applications are developed according to high quality standards and are effectively protected against new AI risks. For instance, AI bears the risk of unfair treatment of individuals when processing personal data, e.g., to support credit lending or staff recruitment decisions. The emergence of these new risks is closely linked to the fact that the behavior of AI applications, particularly those based on Machine Learning (ML), is essentially learned from large volumes of data and is not predetermined by fixed programmed rules. Thus, the issue of the trustworthiness of AI applications is crucial and is the subject of numerous major publications by stakeholders in politics, business and society. In addition, there is mutual agreement that the requirements for trustworthy AI, which are often described in an abstract way, must now be made clear and tangible. One challenge to overcome here relates to the fact that the specific quality criteria for an AI application depend heavily on the application context, and possible measures to fulfill them in turn depend heavily on the AI technology used. Lastly, practical assessment procedures are needed to evaluate whether specific AI applications have been developed according to adequate quality standards. This AI assessment catalog addresses exactly this point and is intended for two target groups: Firstly, it provides developers with a guideline for systematically making their AI applications trustworthy. Secondly, it guides assessors and auditors on how to examine AI applications for trustworthiness in a structured way.

    The Utility of Measures of Attention and Situation Awareness for Quantifying Telepresence

    Telepresence is defined as the sensation of being present at a remote robot task site while physically present at a local control station. This concept has received substantial attention in the recent past as a result of hypothesized benefits of presence experiences on human task performance with teleoperation systems. Human factors research, however, has made little progress in establishing a relationship between the concept of telepresence and teleoperator performance. This has been attributed to the multidimensional nature of telepresence, the lack of appropriate studies to elucidate this relationship, and the lack of a valid, reliable, and objective measure of telepresence. Subjective measures (e.g., questionnaires, rating scales) are most commonly used to measure telepresence. Objective measures have been proposed, including behavioral responses to stimuli presented in virtual worlds (e.g., ducking virtual objects). Other research has suggested the use of physiological measures, such as cardiovascular responses, to indicate the extent of telepresence experiences in teleoperation tasks. The objective of the present study was to assess the utility of using measures of attention allocation and situation awareness (SA) to objectively describe telepresence. Attention and SA have been identified as cognitive constructs potentially underlying telepresence experiences. Participants in this study performed a virtual mine neutralization task involving remote control of a simulated robotic rover and integrated tools to locate, uncover, and dispose of mines. Subjects simultaneously completed two secondary tasks that required them to monitor for low battery signals associated with operation of the vehicle and controls. Subjects were divided into three groups of eight according to task difficulty, which was manipulated by varying the number and spacing of mines in the task environment. Performance was measured as the average time to neutralize four mines. Telepresence was assessed using a Presence questionnaire. Situation awareness was measured using the Situation Awareness Global Assessment Technique. Attention was measured as the ratio of the number of "low battery" signal detections to the total number of signals presented through the secondary task displays. Analysis of variance results revealed that level of difficulty significantly affected performance time and telepresence. Regression analysis revealed that level of difficulty, immersive tendencies, and attention explained significant portions of the variance in telepresence.
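
    For concreteness, the snippet below shows how the attention measure and the regression described above could be computed; the participant numbers are synthetic placeholders, not the study's data.

```python
import numpy as np

# Illustrative only (synthetic numbers): compute attention as detections divided
# by signals presented, then regress the telepresence score on difficulty,
# immersive tendencies, and attention via ordinary least squares.

detections = np.array([18, 14, 9, 16])
signals_presented = np.array([20, 20, 20, 20])
attention = detections / signals_presented            # attention allocation ratio

difficulty = np.array([1, 2, 3, 1])                    # task difficulty level
immersive = np.array([70, 55, 60, 80])                 # immersive tendencies score
telepresence = np.array([110, 95, 85, 120])            # Presence questionnaire score

# telepresence ~ intercept + difficulty + immersive + attention
X = np.column_stack([np.ones_like(attention), difficulty, immersive, attention])
coef, *_ = np.linalg.lstsq(X, telepresence, rcond=None)
print(dict(zip(["intercept", "difficulty", "immersive", "attention"], coef.round(3))))
```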