
    A Rule Set to Detect Interference of Runtime Enforcement Mechanisms

    Runtime enforcement aims at verifying the active execution trace of executing software against formally specified properties of the software, and at enforcing those properties when they are violated in the active execution trace. Enforcement mechanisms for individual properties may interfere with each other, causing the overall behavior of the executing software to be erroneous. As the number and complexity of the properties to be enforced increase, manual detection of these interferences becomes an error-prone and effort-consuming task. Hence, we aim at providing a framework for automatic detection of interferences. As the initial steps towards such a framework, in this paper we first provide formal definitions of an enforcement mechanism and of enforcement operators. Second, we define a rule set to detect interference among properties.
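
    The rule set itself is not reproduced in this abstract; as a rough, hypothetical illustration of pairwise interference detection, the sketch below models each enforcement mechanism by the trace events it suppresses and the events it requires, and flags two mechanisms as interfering when one suppresses an event the other requires. All names and the interference condition are assumptions for illustration, not the paper's formal operators.

```python
# Minimal, hypothetical model of interference between enforcement mechanisms:
# each mechanism is reduced to the events it suppresses in the trace and the
# events it requires to remain observable. Not the paper's rule set.
from dataclasses import dataclass, field


@dataclass
class Enforcer:
    name: str
    suppresses: set = field(default_factory=set)  # events removed from the trace
    requires: set = field(default_factory=set)    # events that must stay in the trace


def interferes(a: Enforcer, b: Enforcer) -> bool:
    """Two enforcers interfere if either suppresses an event the other requires."""
    return bool(a.suppresses & b.requires) or bool(b.suppresses & a.requires)


def detect_interferences(enforcers):
    """Return all interfering pairs in a collection of enforcers."""
    pairs = []
    for i, a in enumerate(enforcers):
        for b in enforcers[i + 1:]:
            if interferes(a, b):
                pairs.append((a.name, b.name))
    return pairs


no_write = Enforcer("suppress_late_writes", suppresses={"write"})
log_writes = Enforcer("log_every_write", requires={"write"})
print(detect_interferences([no_write, log_writes]))  # [('suppress_late_writes', 'log_every_write')]
```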

    Intelligent monitoring of the health and performance of distribution automation

    With a move to 'smarter' distribution networks through an increase in distribution automation and active network management, the volume of monitoring data available to engineers also increases. It can be onerous to interpret such data to produce meaningful information about the health and performance of automation and control equipment. Moreover, indicators of incipient failure may have to be tracked over several hours or days. This paper discusses some of the data analysis challenges inherent in assessing the health and performance of distribution automation based on available monitoring data. A rule-based expert system approach is proposed to provide decision support for engineers regarding the condition of these components. Implementation of such a system using a complex event processing system shell, to remove the manual task of tracking alarms over a number of days, is discussed.
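
    As a concrete, hedged sketch of the kind of long-window alarm tracking that a complex event processing engine removes from the engineer's manual workload, the example below flags a device whose alarm recurs a given number of times within a multi-day sliding window. The window length, threshold, and event fields are illustrative assumptions, not the system described in the paper.

```python
# Illustrative long-window alarm rule: flag a (device, alarm) pair whose alarm
# recurs THRESHOLD times within a sliding WINDOW. Values are example assumptions.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(days=3)   # tracking horizon of several days
THRESHOLD = 5                # recurrences suggesting incipient failure


class AlarmTracker:
    def __init__(self):
        self._history = defaultdict(deque)  # (device, alarm) -> timestamps in window

    def observe(self, device: str, alarm: str, ts: datetime) -> bool:
        """Record an alarm occurrence; return True if the rule fires."""
        hist = self._history[(device, alarm)]
        hist.append(ts)
        while hist and ts - hist[0] > WINDOW:
            hist.popleft()  # drop occurrences that fell out of the window
        return len(hist) >= THRESHOLD


tracker = AlarmTracker()
start = datetime(2024, 1, 1)
for hours in (0, 10, 30, 45, 60):  # five alarms spread over 2.5 days
    fired = tracker.observe("recloser_17", "battery_low", start + timedelta(hours=hours))
print(fired)  # True: the fifth recurrence falls inside the 3-day window
```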

    The CAMOMILE collaborative annotation platform for multi-modal, multi-lingual and multi-media documents

    In this paper, we describe the organization and implementation of the CAMOMILE collaborative annotation framework for multimodal, multimedia, multilingual (3M) data. Given the versatile nature of the analyses that can be performed on 3M data, the structure of the server was kept intentionally simple in order to preserve its genericity, relying on standard Web technologies. Layers of annotations, defined as data associated with a media fragment from the corpus, are stored in a database and can be managed through standard interfaces with authentication. Interfaces tailored to the task at hand can then be developed in an agile way, relying on simple but reliable services for the management of the centralized annotations. We then present our implementation of an active learning scenario for person annotation in video, relying on the CAMOMILE server; during a dry-run experiment, the manual annotation of 716 speech segments was thus propagated to 3504 labeled tracks. The code of the CAMOMILE framework is distributed as open source.
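
    To make the notion of an annotation layer concrete, the sketch below shows one plausible shape for an annotation record (data attached to a media fragment) and for a client call against an authenticated REST service. The endpoint path, field names, and bearer-token scheme are assumptions for illustration only and are not taken from the CAMOMILE documentation.

```python
# Hypothetical client-side sketch: posting one annotation, i.e. a piece of data
# attached to a media fragment, to an authenticated web service. The endpoint
# and field names are illustrative assumptions, not CAMOMILE's actual API.
import json
import urllib.request


def post_annotation(base_url: str, token: str, annotation: dict) -> dict:
    req = urllib.request.Request(
        url=f"{base_url}/annotations",           # assumed endpoint
        data=json.dumps(annotation).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # assumed auth scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


annotation = {
    "layer": "person_identity",
    "fragment": {"medium": "episode_042.mp4", "start": 12.3, "end": 15.8},
    "data": {"label": "speaker_A", "source": "active_learning_dry_run"},
}
# post_annotation("https://annotation-server.example.org/api", "TOKEN", annotation)
```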

    Attentional Guidance from Multiple Working Memory Representations: Evidence from Eye Movements

    Recent studies have shown that the representation of an item in visual working memory (VWM) can bias the deployment of attention to stimuli in the visual scene possessing the same features. When multiple item representations are simultaneously held in VWM, whether these representations, especially those held in a non-prioritized or accessory status, are able to bias attention is still controversial. In the present study we adopted an eye-tracking technique to shed light on this issue. In particular, we implemented a manipulation aimed at prioritizing one of the VWM representations to an active status, and tested whether attention could be guided by both the prioritized and the accessory representations when they reappeared as distractors in a visual search task. Notably, in Experiment 1, an analysis of first fixation proportion (FFP) revealed that both the prioritized and the accessory representations were able to capture attention, suggesting a significant attentional guidance effect. However, such an effect was not present in manual response times (RTs). Most critically, in Experiment 2, we used a more robust experimental design controlling for different factors that might have played a role in shaping these findings. The results showed evidence for attentional guidance from the accessory representation in both manual RTs and FFPs. Interestingly, FFPs showed a stronger attentional bias for the prioritized representation than for the accessory representation across experiments. The overall findings suggest that multiple VWM representations, even the accessory representation, can simultaneously interact with visual attention.

    Evaluation of automated decision making methodologies and development of an integrated robotic system simulation: Study results

    The implementation of a generic computer simulation for manipulator systems (ROBSIM) is described. The program is written in FORTRAN, and allows the user to: (1) Interactively define a manipulator system consisting of multiple arms, load objects, targets, and an environment; (2) Request graphic display or replay of manipulator motion; (3) Investigate and simulate various control methods including manual force/torque and active compliance control; and (4) Perform kinematic analysis, requirements analysis, and response simulation of manipulator motion. Previous reports have described the algorithms and procedures for using ROBSIM. These reports are superseded, and the additional features that were added are described. They are: (1) The ability to define motion profiles and compute loads on a common base to which manipulator arms are attached; (2) Capability to accept data describing manipulator geometry from a Computer Aided Design database using the Initial Graphics Exchange Specification (IGES) format; (3) A manipulator control algorithm derived from processing the TV image of known reference points on a target; and (4) A vocabulary of simple high-level task commands which can be used to define task scenarios.
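
    ROBSIM itself is a FORTRAN program and none of its routines appear in this abstract; purely as an illustration of the simplest kind of kinematic analysis such a simulation performs, the sketch below computes the end-effector position of a planar two-link arm from its joint angles. Link lengths and angles are arbitrary example values.

```python
# Forward kinematics of a planar two-link arm: end-effector position from the
# two joint angles. A minimal stand-in for the kinematic analysis a manipulator
# simulation performs; values are arbitrary examples.
import math


def forward_kinematics_2link(theta1: float, theta2: float,
                             l1: float = 1.0, l2: float = 0.8):
    """Return (x, y) of the end effector; angles in radians, theta2 relative to link 1."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y


print(forward_kinematics_2link(math.radians(30), math.radians(45)))
```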

    Spatial demands of concurrent tasks can compromise spatial learning of a virtual environment: implications for active input control

    While active explorers in a real-world environment typically remember more about its spatial layout than participants who passively observe that exploration, this does not reliably occur when the exploration takes place in a virtual environment (VE). We argue that this may be because an active explorer in a VE is effectively performing a secondary interfering concurrent task by virtue of having to operate a manual input device to control their virtual displacements. Six groups of participants explored a virtual room containing six distributed objects, either actively or passively, while performing concurrent tasks that were simple (such as card turning) or that made more complex cognitive and motoric demands comparable with those typically imposed by input device control. Tested for their memory for virtual object locations, passive controls (with no concurrent task) demonstrated the best spatial learning, arithmetically (but not significantly) better than the active group. Passive groups given complex concurrent tasks performed as poorly as the active group. A concurrent articulatory suppression task reduced memory for object names but not spatial location memory. It was concluded that spatial demands imposed by input device control should be minimized when training or testing spatial memory in VEs, and should be recognized as competing for cognitive capacity in spatial working memory.

    Active Task Randomization: Learning Robust Skills via Unsupervised Generation of Diverse and Feasible Tasks

    Solving real-world manipulation tasks requires robots to have a repertoire of skills applicable to a wide range of circumstances. When using learning-based methods to acquire such skills, the key challenge is to obtain training data that covers diverse and feasible variations of the task, which often requires non-trivial manual labor and domain knowledge. In this work, we introduce Active Task Randomization (ATR), an approach that learns robust skills through the unsupervised generation of training tasks. ATR selects suitable tasks, which consist of an initial environment state and a manipulation goal, for learning robust skills by balancing the diversity and feasibility of the tasks. We propose to predict task diversity and feasibility by jointly learning a compact task representation. The selected tasks are then procedurally generated in simulation using graph-based parameterization. The active selection of these training tasks enables skill policies trained with our framework to robustly handle a diverse range of objects and arrangements at test time. We demonstrate that the learned skills can be composed by a task planner to solve unseen sequential manipulation problems based on visual inputs. Compared to baseline methods, ATR achieves superior success rates in single-step and sequential manipulation tasks.
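
    The abstract does not give ATR's actual selection objective; as a generic, hedged illustration of balancing diversity against feasibility, the sketch below scores candidate task embeddings by a weighted sum of a novelty term (distance to already-selected tasks) and a predicted feasibility term. The weighting, distance measure, and placeholder data are assumptions.

```python
# Generic diversity/feasibility-balanced task selection (not ATR's objective):
# prefer candidates whose embedding is far from already-selected tasks while
# still predicted to be feasible.
import numpy as np


def select_task(candidates: np.ndarray, feasibility: np.ndarray,
                selected: list, alpha: float = 0.5) -> int:
    """candidates: (N, D) task embeddings; feasibility: (N,) scores in [0, 1];
    selected: list of previously chosen embeddings. Returns the chosen index."""
    if selected:
        chosen = np.stack(selected)                                  # (K, D)
        dists = np.linalg.norm(candidates[:, None] - chosen[None], axis=-1)
        diversity = dists.min(axis=1)                                # distance to nearest selected task
        diversity = diversity / (diversity.max() + 1e-8)
    else:
        diversity = np.ones(len(candidates))
    score = alpha * diversity + (1.0 - alpha) * feasibility
    return int(score.argmax())


rng = np.random.default_rng(0)
task_embeddings = rng.normal(size=(10, 4))      # placeholder learned task representations
predicted_feasibility = rng.uniform(size=10)    # placeholder feasibility predictions
first = select_task(task_embeddings, predicted_feasibility, [])
second = select_task(task_embeddings, predicted_feasibility, [task_embeddings[first]])
print(first, second)
```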

    No Labels? No Problem! Experiments with active learning strategies for multi-class classification in imbalanced low-resource settings

    Labeling textual corpora in their entirety is infeasible in most practical situations, yet it is a very common need today in public and private organizations. In contexts with large unlabeled datasets, active learning methods may reduce the manual labeling effort by selecting the samples deemed most informative for the learning process. The paper elaborates on a method for multi-class classification based on state-of-the-art NLP active learning techniques, performing various experiments in low-resource and imbalanced settings. In particular, we refer to a dataset of Dutch legal documents constructed with two levels of imbalance; we study the performance of task-adapting a pre-trained Dutch language model, BERTje, and of using active learning to fine-tune the model to the task, testing several selection strategies. We find that, on the constructed datasets, an entropy-based strategy slightly improves the F1, precision, and recall convergence rates, and that the improvements are most pronounced in the severely imbalanced dataset. These results show promise for active learning in low-resource imbalanced domains but also leave space for further improvement.
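
    The entropy-based strategy referred to here is standard uncertainty sampling: rank the unlabeled pool by the entropy of the model's predicted class distribution and send the most uncertain examples for labeling first. A minimal, model-agnostic sketch follows; the batch size and the random placeholder probabilities (which in the paper's setting would come from the fine-tuned BERTje classifier) are assumptions.

```python
# Entropy-based sample selection for active learning: pick the unlabeled
# examples whose predicted class distribution has the highest entropy.
# The probabilities here are random placeholders standing in for classifier output.
import numpy as np


def entropy_select(probs: np.ndarray, batch_size: int) -> np.ndarray:
    """probs: (N, C) predicted class probabilities for N unlabeled samples.
    Returns the indices of the batch_size highest-entropy samples."""
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)  # per-sample entropy
    return np.argsort(-entropy)[:batch_size]


rng = np.random.default_rng(0)
pool_probs = rng.dirichlet(alpha=np.ones(5), size=1000)   # 1000 samples, 5 classes
to_label = entropy_select(pool_probs, batch_size=32)
print(to_label[:5])
```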

    Effects of action on children’s and adults’ mental imagery

    The aim of this study was to investigate whether and which aspects of a concurrent motor activity can facilitate children’s and adults’ performance in a dynamic imagery task. Children (5-, 7-, and 9-year-olds) and adults were asked to tilt empty glasses, filled with varied amounts of imaginary water, so that the imagined water would reach the rim. Results showed that in a manual tilting task where glasses could be tilted actively with visual feedback, even 5-year-olds performed well. However, in a blind tilting task and in a static judgment task, all age groups showed markedly lower performance. This implies that visual movement information facilitates imagery. In a task where the tilting movement was visible but regulated by means of an on-and-off remote control, a clear age trend was found, indicating that active motor control and motor feedback are particularly important in imagery performance of younger children.