
    Using Conceptual Modeling for Designing Multi-View Modeling Tools

    Multi-view modeling methods (MVMMs) cope with the increasing complexity of today's enterprise and information systems by decomposing the corresponding model into several viewpoints; combining the instantiated views yields the whole model of the system. Modeling tools are vital for the efficient use of MVMMs, yet support for the conceptual design of multi-view modeling tools is lacking. This paper introduces the MuVieMoT modeling method, dedicated to the conceptualization of multi-view modeling tools. The method focuses on capturing, with conceptual modeling means, the constituents of MVMMs: the specification of viewpoints, the modeling procedure, and consistency mechanisms. It is aimed at method engineers and tool developers, bridging the gap between tool design and tool development. The applicability of the method is illustrated by a case study that defines the conceptual design of a multi-view modeling tool for an enterprise modeling method.

    The representation of cognates and interlingual homographs in the bilingual lexicon

    Cognates and interlingual homographs are words that exist in multiple languages. Cognates, like "wolf" in Dutch and English, also carry the same meaning. Interlingual homographs do not: the word "angel" in English refers to a spiritual being, but in Dutch to the sting of a bee. The six experiments included in this thesis examined how these words are represented in the bilingual mental lexicon. Experiments 1 and 2 investigated task effects on the processing of cognates. Bilinguals often process cognates more quickly than single-language control words (like "carrot", which exists in English but not Dutch). These experiments showed that the size of this cognate facilitation effect depends on the other types of stimuli included in the task. These task effects were most likely due to response competition, indicating that cognates are subject to processes of facilitation and inhibition both within the lexicon and at the level of decision making. Experiments 3 and 4 examined whether seeing a cognate or interlingual homograph in one's native language affects subsequent processing in one's second language. This method was used to determine whether non-identical cognates share a form representation. These experiments were inconclusive: they revealed no effect of cross-lingual long-term priming, most likely because a lexical decision task was used to probe an effect that is largely semantic in nature. Given these caveats to using lexical decision tasks, two final experiments used a semantic relatedness task instead. Both experiments revealed evidence for an interlingual homograph inhibition effect but no cognate facilitation effect. Furthermore, the second experiment found evidence for a small effect of cross-lingual long-term priming.
    After comparing these findings to the monolingual literature on semantic ambiguity resolution, this thesis concludes that it is necessary to explore the viability of a distributed connectionist account of the bilingual mental lexicon.

    Affordance-Driven Next-Best-View Planning for Robotic Grasping

    Grasping occluded objects in cluttered environments is an essential component of complex robotic manipulation tasks. In this paper, we introduce an AffordanCE-driven Next-Best-View planning policy (ACE-NBV) that tries to find a feasible grasp for the target object by continuously observing the scene from new viewpoints. This policy is motivated by the observation that the grasp affordances of an occluded object can be measured better when the viewing direction is the same as the grasp direction. Specifically, our method leverages the paradigm of novel view imagery to predict grasp affordances under previously unobserved views, and selects the next observation view based on the highest imagined grasp quality of the target object. Experimental results in simulation and on a real robot demonstrate the effectiveness of the proposed affordance-driven next-best-view planning policy. Project page: https://sszxc.net/ace-nbv/
    Comment: Conference on Robot Learning (CoRL) 202
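    The view-selection step the abstract describes — score each candidate viewpoint by its imagined grasp quality and pick the best — can be sketched as a simple loop. This is an illustrative stand-in, not the paper's implementation: `candidate_views` and `predict_grasp_quality` are hypothetical names, and the dot-product scorer merely mimics the idea that quality peaks when the viewing direction aligns with the grasp direction.

```python
import numpy as np

def select_next_best_view(candidate_views, predict_grasp_quality):
    """Return the candidate view with the highest imagined grasp quality.

    candidate_views: list of view-direction vectors (hypothetical encoding).
    predict_grasp_quality: callable view -> float, standing in for the
    paper's novel-view-imagery affordance prediction.
    """
    qualities = [predict_grasp_quality(v) for v in candidate_views]
    return candidate_views[int(np.argmax(qualities))]

# Toy usage: quality is highest for the view aligned with the grasp direction.
grasp_dir = np.array([0.0, 0.0, 1.0])
views = [np.array(v, dtype=float) for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
best = select_next_best_view(views, lambda v: float(v @ grasp_dir))
# best is the view aligned with grasp_dir
```

    In the actual policy, the scorer would be a learned model that imagines the scene from the unobserved view; the loop structure stays the same.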

    Simultaneous Multi-View Object Recognition and Grasping in Open-Ended Domains

    A robot working in human-centric environments needs to know which kinds of objects exist in the scene, where they are, and how to grasp and manipulate various objects in different situations to help humans in everyday tasks. Object recognition and grasping are therefore two key functionalities for such robots. Most state-of-the-art approaches tackle object recognition and grasping as two separate problems, even though both use visual input. Furthermore, the knowledge of the robot is fixed after the training phase; if the robot faces new object categories, it must be retrained from scratch to incorporate the new information without catastrophic interference. To address this problem, we propose a deep learning architecture with augmented memory capacities to handle open-ended object recognition and grasping simultaneously. In particular, our approach takes multiple views of an object as input and jointly estimates a pixel-wise grasp configuration as well as a deep scale- and rotation-invariant representation as outputs. The obtained representation is then used for open-ended object recognition through a meta-active learning technique. We demonstrate the ability of our approach to grasp never-seen-before objects and to rapidly learn new object categories using very few examples on-site, in both simulation and real-world settings.
    Comment: arXiv admin note: text overlap with arXiv:2103.1099
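    A core idea in the abstract is aggregating several views of an object into one view-order-independent descriptor. A minimal sketch of that aggregation step, assuming plain element-wise max pooling over per-view feature vectors (the paper's actual representation is a learned deep feature, so `view_pooled_representation` and the toy features below are illustrative only):

```python
import numpy as np

def view_pooled_representation(view_features):
    """Aggregate per-view feature vectors into a single object descriptor.

    Element-wise max pooling over views is invariant to the order in
    which the views were captured, a simple stand-in for the paper's
    deep multi-view representation.
    """
    return np.max(np.stack(view_features, axis=0), axis=0)

# Usage: three simulated feature vectors from different views of one object.
feats = [np.array([0.2, 0.9, 0.1]),
         np.array([0.8, 0.3, 0.4]),
         np.array([0.5, 0.6, 0.7])]
desc = view_pooled_representation(feats)
# desc == [0.8, 0.9, 0.7], regardless of the order of the views
```

    The pooled descriptor could then feed a nearest-neighbor or meta-learned classifier for open-ended recognition, while a separate head predicts per-pixel grasp configurations.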

    Potential for social involvement modulates activity within the mirror and the mentalizing systems

    Processing biological motion is fundamental for everyday activities such as social interaction, motor learning, and nonverbal communication. The ability to detect the nature of a motor pattern has been investigated by means of point-light displays (PLD), sets of moving light points reproducing human kinematics that are easily recognizable as meaningful once in motion. Although PLD are rudimentary, the human brain can decipher their content, including social intentions. Neuroimaging studies suggest that inferring the social meaning conveyed by PLD could rely on both the Mirror Neuron System (MNS) and the Mentalizing System (MS), but their specific roles in this endeavor remain uncertain. We describe a functional magnetic resonance imaging experiment in which participants had to judge whether visually presented PLD and video clips of human-like walkers (HL) were facing towards or away from them. Results show that coding for stimulus direction specifically engages the MNS when considering PLD moving away from the observer, while the nature of the stimulus reveals a dissociation between the MNS, mainly involved in coding for PLD, and the MS, recruited by HL moving away. These results suggest that the contribution of the two systems can be modulated by the nature of the observed stimulus and its potential for social involvement.