
    AmIE: An Ambient Intelligent Environment for Assisted Living

    In the modern world of technology, Internet-of-Things (IoT) systems strive to provide extensive, interconnected, and automated solutions for almost every aspect of life. This paper proposes an IoT context-aware system that presents an Ambient Intelligence (AmI) environment, such as an apartment, house, or building, to assist blind, visually impaired, and elderly people. The proposed system aims to provide an easy-to-use voice-controlled system to locate, navigate, and assist users indoors. Its main purpose is to provide indoor positioning, assisted navigation, outside weather information, room temperature, people availability, phone calls, and emergency evacuation when needed. The system enhances users' awareness of the surrounding environment by feeding them relevant information through a wearable device. In addition, the system is voice-controlled in both English and Arabic, and information is presented as audio messages in both languages. The system design, implementation, and evaluation consider the constraints of common types of premises in Kuwait, as well as challenges such as the training needed by users. This paper presents cost-effective implementation options through the adoption of a Raspberry Pi microcomputer, Bluetooth Low Energy devices, and an Android smart watch. Comment: 6 pages, 8 figures, 1 table
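    The abstract does not specify how the Bluetooth Low Energy beacons are used for indoor positioning, but a common approach such systems take is to estimate proximity from received signal strength (RSSI) with a log-distance path-loss model. The sketch below illustrates that idea only; the constants (`tx_power_dbm`, the path-loss exponent) and beacon/room names are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: BLE proximity via the log-distance path-loss model.
# tx_power_dbm is the calibrated RSSI at 1 m; both constants are assumed.

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate distance in metres from a measured RSSI value."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def nearest_room(readings, beacon_rooms):
    """Pick the room of the beacon with the strongest (largest) RSSI."""
    beacon_id = max(readings, key=readings.get)
    return beacon_rooms[beacon_id]

readings = {"beacon-kitchen": -72, "beacon-hall": -60, "beacon-bedroom": -85}
rooms = {"beacon-kitchen": "kitchen", "beacon-hall": "hallway",
         "beacon-bedroom": "bedroom"}
print(nearest_room(readings, rooms))    # strongest beacon wins -> hallway
print(round(rssi_to_distance(-60), 2))  # rough distance estimate in metres
```

    In practice RSSI is noisy, so deployed systems typically smooth readings over a window before choosing the nearest beacon.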

    Human robot interaction in a crowded environment

    Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered laborious, unsafe, or repetitive. Vision-based human robot interaction is a major component of HRI, in which visual information is used to interpret how human interaction takes place. Common HRI tasks include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body, such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who initiated the gesture. In this thesis, we propose a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether the people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them; if an individual is receptive to the robot's interaction, it may approach the person. Finally, if the user is moving in the environment, the robot can analyse further to understand whether any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine their potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7]
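    The abstract describes combining multiple visual cues in a Bayesian framework without giving its structure. As a rough illustration of the general idea (not the thesis's actual network), the sketch below fuses binary cue observations under a naive conditional-independence assumption to score whether a person intends to interact with the robot. All cue names, likelihood values, and the prior are assumptions chosen for the example.

```python
# Illustrative naive-Bayes fusion of visual cues for interaction intent.
# likelihoods[cue] = (P(cue | intent), P(cue | no intent)) -- assumed values.

def posterior_intent(cues, likelihoods, prior=0.5):
    """P(intent | observed cues), assuming cues are conditionally independent."""
    p_intent, p_none = prior, 1.0 - prior
    for cue, observed in cues.items():
        l1, l0 = likelihoods[cue]
        if observed:
            p_intent *= l1
            p_none *= l0
        else:
            p_intent *= (1.0 - l1)
            p_none *= (1.0 - l0)
    return p_intent / (p_intent + p_none)

likelihoods = {
    "facing_robot":    (0.9, 0.3),   # gaze directed at the robot
    "waving_gesture":  (0.7, 0.05),  # pre-trained dynamic gesture detected
    "in_conversation": (0.1, 0.4),   # engaged with another person instead
}
cues = {"facing_robot": True, "waving_gesture": True, "in_conversation": False}
print(round(posterior_intent(cues, likelihoods), 3))  # high intent score
```

    A full Bayesian network would also model dependencies between cues and, as the thesis describes, adapt its parameters from contextual feedback rather than keeping them fixed.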

    Kinect crowd interaction

    Most state-of-the-art commercial simulation software focuses mainly on providing realistic animations and convincing artificial intelligence for the avatars in a scenario. However, work on triggering events and avatar reactions in a natural and intuitive way has received less attention. Typical events are triggered by predefined timestamps; once the events are set, there is no easy way to generate new events interactively while the scene is running, and it is therefore difficult to affect avatar reactions dynamically. To address this, we propose a framework that uses human gestures as input to trigger events within a DI-Guy simulation scenario in real time, which can greatly help users control events and avatar reactions in the scenario. By implementing such a framework, we are able to identify the user's intentions interactively and ensure that the avatars react accordingly
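    The core of such a framework is a run-time mapping from recognised gesture labels to simulation events, replacing predefined timestamps. DI-Guy itself is proprietary and its API is not shown in the abstract, so the dispatcher, event names, and callbacks below are purely illustrative of the pattern.

```python
# Illustrative gesture-to-event dispatcher: gestures recognised at run time
# trigger avatar reactions instead of timestamp-scheduled events.

class GestureEventDispatcher:
    def __init__(self):
        self._handlers = {}
        self.log = []  # record of triggered reactions, for inspection

    def register(self, gesture, handler):
        """Bind a recognised gesture label to an event callback."""
        self._handlers[gesture] = handler

    def on_gesture(self, gesture):
        """Called by the recogniser whenever a gesture is detected."""
        handler = self._handlers.get(gesture)
        if handler:  # unrecognised gestures are simply ignored
            self.log.append(handler(gesture))

dispatcher = GestureEventDispatcher()
dispatcher.register("raise_hand", lambda g: f"avatars stop ({g})")
dispatcher.register("wave", lambda g: f"avatars disperse ({g})")
dispatcher.on_gesture("wave")     # triggers the 'disperse' reaction
dispatcher.on_gesture("unknown")  # no handler, nothing happens
print(dispatcher.log)
```

    Because handlers are registered at run time, new events can be added while the scene is running, which is exactly the interactivity the predefined-timestamp approach lacks.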

    Merleau-Ponty


    Henry James’s The Ambassadors: Anatomy of Silence

    This dissertation examines the use of silence in Henry James's novel The Ambassadors. James uses silence rich in meaning to portray the protagonist Lewis Lambert Strether's unfolding consciousness. James creates different types of silences that reflect a shift from the spoken or written word to alternate symbol systems. James's novel perches on the threshold of modernity, as his work reflects the ideas of a line of thinkers extending back from James and his brother, William, to Ralph Waldo Emerson, Sampson Reed, and Emanuel Swedenborg. At the same time, the novel draws on the contemporary ideas of Charles Darwin, prefigures modern narrative techniques, and even anticipates such current neuroscience theorists as Gerald Edelman and Antonio Damasio. Chapter one is an overview which contextualizes the novel, considering its link to Emersonian thought as well as to William James's description of consciousness, theories of silence, and Darwin's examination of the development of language in The Descent of Man and The Expression of the Emotions in Man and Animals. Chapter two considers the remnants of language and symbol systems, with the silences, language as thing, resonance, and syntax explored. A close reading of the novel demonstrates the artificiality and concreteness of language, with James ultimately moving away from those remnants. Chapter three incorporates the current field of acoustic communication with an analysis of vagueness, impression, and charged silence as Strether searches them for what Wallace Stevens would term the "unalterable vibration", or meaning. Chapter four charts the movement to physical representations of Strether's consciousness emerging in moments of what James calls "responsive arrest", and Strether's awareness after a fact, examined in relation to current work by Edelman and Damasio. Chapter five describes James's movement to silences that reflect physical expression. Gesture, meeting of eyes, and recognition reflect an awareness of Darwin's view of the development of language from its physical and gestural nature. James develops an alternative to articulated language that portrays Strether as an emerging modern figure whose consciousness is attained through silence

    Vocal-auditory feedback and the modality transition problem in language evolution

    This is a pre-print version. This article has been published in Reti Saperi Linguaggi, 1/2016 a. 5 (9), 157–178, [DOI: 10.12832/83923]. Copyright Società editrice il Mulino. The publisher should be contacted for permission to re-use or reprint the material in any form. The gestural theories, which see the origins of (proto)linguistic communication not in vocalization but rather in manual gesture, have come to take center stage in today's academic reflection on the roots of language. The gestural theories, however, suffer from a near-fatal problem, the so-called «modality switch»: how and why language could have transferred from the mostly visual to the mostly vocal form that it now has almost universally in human societies. In our paper, we offer a potential and partial solution to this problem. We take as our starting point a gestural scenario in which emerging language-like communication involves orofacial gestures, and we complement such a scenario with the inclusion of vocal-auditory feedback, which aids signal production. The benefits of greater articulatory precision that accrue to signal producers might have constituted one reason for supplementing orofacial gestures with sound, thereby increasing the role of vocalization in the emerging (proto)language