    Grounding semantic cognition using computational modelling and network analysis

    The overarching objective of this thesis is to further the field of grounded semantics using a range of computational and empirical studies. Over the past thirty years, there have been many algorithmic advances in the modelling of semantic cognition. A commonality across these cognitive models is a reliance on hand-engineering “toy-models”. Despite incorporating newer techniques (e.g. long short-term memory networks), the model inputs remain unchanged. We argue that the inputs to these traditional semantic models bear little resemblance to real human experiences. In this dissertation, we ground our neural network models by training them with real-world visual scenes using naturalistic photographs. Our approach is an alternative to both hand-coded features and embodied raw sensorimotor signals. We conceptually replicate the mutually reinforcing nature of hybrid (feature-based and grounded) representations using silhouettes of concrete concepts as model inputs. We then gradually develop a novel grounded cognitive semantic representation, which we call scene2vec, starting with object co-occurrences and then adding emotions and language-based tags. Limitations of our scene-based representation are identified for more abstract concepts (e.g. freedom). We further present a large-scale human semantics study, which reveals that small-world semantic network topologies are context-dependent and that scenes are the most dominant cognitive dimension. This finding leads us to conclude that there is no meaning without context. Lastly, scene2vec exhibits human-like context-sensitive stereotypes (e.g. gender role bias), and we explore how such stereotypes can be reduced by targeted debiasing. In conclusion, this thesis provides support for a novel computational viewpoint on investigating meaning: scene-based grounded semantics. Future research scaling scene-based semantic models to human levels through virtual grounding has the potential to unearth new insights into the human mind and, concurrently, to advance artificial general intelligence by enabling robots, embodied or otherwise, to acquire and represent meaning directly from the environment
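
As a rough illustration of the co-occurrence step described above, the following minimal Python sketch builds concept vectors from object co-occurrences across scenes. It is not the thesis's actual scene2vec pipeline: the scene annotations, the PPMI weighting and the two-dimensional SVD reduction are assumed placeholders chosen only to make the idea concrete.

```python
# Minimal sketch of a scene-based co-occurrence embedding, in the spirit of
# (but not identical to) scene2vec: concepts are represented by the objects,
# emotions and tags they co-occur with across annotated scenes.
# The scene annotations below are hypothetical placeholders.
from itertools import combinations

import numpy as np

scenes = [                      # each scene = labels observed in one photograph
    {"dog", "ball", "grass", "joy"},
    {"dog", "person", "leash", "street"},
    {"ball", "person", "grass", "joy"},
]

vocab = sorted(set().union(*scenes))
index = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts: two labels co-occur if they share a scene.
counts = np.zeros((len(vocab), len(vocab)))
for scene in scenes:
    for a, b in combinations(sorted(scene), 2):
        counts[index[a], index[b]] += 1
        counts[index[b], index[a]] += 1

# Positive pointwise mutual information (PPMI) weighting.
total = counts.sum()
row = counts.sum(axis=1, keepdims=True)
col = counts.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((counts * total) / (row * col))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# Dense concept vectors via truncated SVD; cosine similarity compares concepts.
u, s, _ = np.linalg.svd(ppmi)
vectors = u[:, :2] * s[:2]

def similarity(w1, w2):
    v1, v2 = vectors[index[w1]], vectors[index[w2]]
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9))

print(similarity("dog", "ball"))
```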

    Making Graphical Information Accessible Without Vision Using Touch-based Devices

    Accessing graphical material such as graphs, figures, maps, and images is a major challenge for blind and visually impaired people. The traditional approaches that have addressed this issue have been plagued with various shortcomings (such as unintuitive sensory translation rules, prohibitive costs and limited portability), all hindering progress in reaching blind and visually impaired users. This thesis addresses aspects of these shortcomings by designing and experimentally evaluating an intuitive approach, called a vibro-audio interface, for non-visual access to graphical material. The approach is based on commercially available touch-based devices (such as smartphones and tablets), where hand and finger movements over the display provide position and orientation cues by synchronously triggering vibration patterns, speech output and auditory cues whenever an on-screen visual element is touched. Three human behavioral studies (Exp 1, 2, and 3) assessed the usability of the vibro-audio interface by investigating whether its use leads to the development of an accurate spatial representation of the graphical information being conveyed. Results demonstrated the efficacy of the interface and, importantly, showed that performance was functionally equivalent to that found using traditional hardcopy tactile graphics, the gold standard of non-visual graphical learning. One limitation of this approach is the limited screen real estate of commercial touch-screen devices, which means that large and deep-format graphics (e.g., maps) will not fit within the screen. Panning and zooming are the traditional techniques for dealing with this challenge, but performing these operations without vision (i.e., using touch) poses several challenges relating both to the cognitive constraints of the user and to the technological constraints of the interface. To address these issues, two human behavioral experiments assessed the influence of panning (Exp 4) and zooming (Exp 5) operations on non-visual learning of graphical material and the related human factors. Results from Experiments 4 and 5 indicated that incorporating panning and zooming operations enhances the non-visual learning process and leads to the development of more accurate spatial representations. Together, this thesis demonstrates that the proposed approach, a vibro-audio interface, is a viable multimodal solution for presenting dynamic graphical information to blind and visually impaired persons and for supporting the development of accurate spatial representations of otherwise inaccessible graphical materials
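
The core interaction loop described above, touching an on-screen element and receiving synchronous vibration and speech, can be sketched in a few lines. The following Python sketch is not the study's implementation; the element geometry, feedback fields and the simple rectangle hit test are illustrative assumptions.

```python
# Illustrative sketch (not the study's implementation) of the core vibro-audio
# interaction: a touch position is hit-tested against on-screen graphic
# elements, and touching an element triggers vibration and speech cues.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Element:
    name: str            # e.g. a bar in a bar graph or a road on a map
    x: float
    y: float
    width: float
    height: float
    vibration_ms: int    # hypothetical vibration pattern length
    speech: str          # spoken label

    def contains(self, tx: float, ty: float) -> bool:
        return (self.x <= tx <= self.x + self.width
                and self.y <= ty <= self.y + self.height)

def hit_test(elements: list[Element], tx: float, ty: float) -> Optional[Element]:
    """Return the topmost element under the finger, if any."""
    for element in reversed(elements):
        if element.contains(tx, ty):
            return element
    return None

def on_touch_move(elements: list[Element], tx: float, ty: float) -> None:
    """Dispatch feedback whenever the finger is over a visual element."""
    element = hit_test(elements, tx, ty)
    if element is not None:
        print(f"vibrate {element.vibration_ms} ms; speak '{element.speech}'")
    else:
        print("no feedback: empty screen region")

elements = [Element("bar: 2010 sales", 10, 10, 40, 120, 50, "2010, 120 units")]
on_touch_move(elements, 25, 60)    # finger over the bar -> vibration + speech
on_touch_move(elements, 200, 200)  # finger over empty space -> silence
```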

    The human rights commission of Ethiopia and issues of forced evictions: a case-oriented study of its practice

    In addition to its past endeavours to meet the MDGs, the Ethiopian government launched an ambitious programme, the ‘Growth and Transformation Plan (2011-2015)’, in 2011 to transform the country’s economic growth and development. As a core part of the Plan, the economic and infrastructure sections envisage massive investment and infrastructure development. While the ultimate target is to improve socio-economic conditions and fulfil basic needs, the Plan is likely to lead to the mass eviction of people from their ancestral lands, displace thousands from their houses, and deprive many more of their traditional means of livelihood. In both past and present development endeavours, little attention has been given to a human rights-based approach to development. The discourse on economic growth and development tends to focus on mere economic improvement, implying a needs-based approach. This reinforces the notion that the fulfilment of economic, social and cultural rights is an aspiration realized through government programmes, with no corresponding obligation on the part of the government. One of the core functions of national human rights institutions (NHRIs) is to investigate complaints. The Human Rights Commission of Ethiopia (the Commission) has been receiving complaints on a wide range of issues since it started rendering its quasi-judicial functions, and many complaints relating to forced eviction have been brought to its attention. The Commission has rejected the bulk of these complaints or referred them to either the courts of law or the Ombudsman Institute on the ground that they do not involve human rights issues (i.e. they are mere administrative matters that do not qualify for its inquiry). By the same token, investigation of complaints of forced eviction is the exception rather than the norm. Disclaiming jurisdiction might shed light on the underlying issues: it might signal a reluctance on the part of the Commission to confront the economic and development policies of the government, the adoption of a preconceived approach to issues of economic, social and cultural rights, an erroneous interpretation of its mandate, or an incapacity to deal with such issues. This paper deals with the Commission’s handling of complaints pertaining to the right to housing in general and forced eviction in particular. Some of the more interesting complaints handled by the Commission are reviewed to identify the underlying reasons hampering the Commission from examining the essence of specific cases. Measures that could be adopted to enable the Commission to probe complaints of all sorts, and of forced evictions in particular, are also suggested

    Beta event-related desynchronization as an index of individual differences in processing human facial expression: further investigations of autistic traits in typically developing adults

    The human mirror neuron system (hMNS) has been associated with various forms of social cognition and affective processing including vicarious experience. It has also been proposed that a faulty hMNS may underlie some of the deficits seen in the autism spectrum disorders (ASDs). In the present study we set out to investigate whether emotional facial expressions could modulate a putative EEG index of hMNS activation (mu suppression) and if so, would this differ according to the individual level of autistic traits [high versus low Autism Spectrum Quotient (AQ) score]. Participants were presented with 3 s films of actors opening and closing their hands (classic hMNS mu-suppression protocol) while simultaneously wearing happy, angry, or neutral expressions. Mu-suppression was measured in the alpha and low beta bands. The low AQ group displayed greater low beta event-related desynchronization (ERD) to both angry and neutral expressions. The high AQ group displayed greater low beta ERD to angry than to happy expressions. There was also significantly more low beta ERD to happy faces for the low than for the high AQ group. In conclusion, an interesting interaction between AQ group and emotional expression revealed that hMNS activation can be modulated by emotional facial expressions and that this is differentiated according to individual differences in the level of autistic traits. The EEG index of hMNS activation (mu suppression) seems to be a sensitive measure of the variability in facial processing in typically developing individuals with high and low self-reported traits of autism
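
To make the dependent measure concrete, the following Python sketch computes event-related desynchronization (ERD) as the percentage change in band-limited power relative to a pre-stimulus baseline, following the classic ERD formula. It is not the study's analysis pipeline: the sampling rate, epoch lengths, filter order and the 13-20 Hz "low beta" range are assumptions, and the EEG here is simulated noise.

```python
# Minimal sketch (not the study's pipeline) of event-related desynchronization:
# band-limited power during the stimulus period is compared to a pre-stimulus
# baseline; negative values indicate desynchronization (mu suppression).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)                # 2 s baseline + 3 s stimulus epoch
eeg = np.random.randn(t.size)              # placeholder single-channel EEG

def band_power(signal, low, high, fs):
    """Mean power of the signal band-pass filtered to [low, high] Hz."""
    b, a = butter(4, [low, high], btype="band", fs=fs)
    filtered = filtfilt(b, a, signal)
    return np.mean(filtered ** 2)

def erd_percent(epoch, baseline_samples, low, high, fs):
    """Classic ERD%: (event power - baseline power) / baseline power * 100."""
    baseline = epoch[:baseline_samples]
    event = epoch[baseline_samples:]
    r = band_power(baseline, low, high, fs)
    a = band_power(event, low, high, fs)
    return (a - r) / r * 100.0

baseline_samples = 2 * fs                  # first 2 s of the epoch
print("alpha (8-13 Hz) ERD%:", erd_percent(eeg, baseline_samples, 8, 13, fs))
print("low beta (13-20 Hz) ERD%:", erd_percent(eeg, baseline_samples, 13, 20, fs))
```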

    Investigating five key predictive text entry with combined distance and keystroke modelling

    This paper investigates text entry on mobile devices using only five keys. Primarily intended to support text entry on devices smaller than mobile phones, the method can also be used to maximise screen space on mobile phones. The combined Fitts' law and keystroke modelling reported here predicts that bigram prediction on a five-key keypad gives performance similar to that currently achieved on standard mobile phones using unigram prediction. User studies reported here show user performance on five-key pads similar to that found elsewhere for novice nine-key pad users
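
The kind of combined model the abstract refers to can be sketched as follows: per-key movement time from Fitts' law plus a fixed keystroke cost, summed over a key sequence. This Python sketch is not the paper's model; the constants, the one-row five-key layout and the example sequences are illustrative assumptions only.

```python
# Hedged sketch (not the paper's model) of combining Fitts' law movement time
# with a keystroke-level model for a five-key keypad; constants a, b and the
# key layout are illustrative assumptions, not values from the paper.
import math

A, B = 0.1, 0.2            # Fitts' law intercept/slope in seconds (assumed)
KEYSTROKE = 0.28           # fixed cost per key press in seconds (assumed)
KEY_WIDTH = 1.0            # all keys one unit wide
KEY_POS = {k: i * KEY_WIDTH for i, k in enumerate("ABCDE")}  # five keys in a row

def fitts_time(from_key: str, to_key: str) -> float:
    """Movement time MT = a + b * log2(D / W + 1) between two key centres."""
    distance = abs(KEY_POS[to_key] - KEY_POS[from_key])
    if distance == 0:
        return 0.0          # repeated press of the same key: no travel
    return A + B * math.log2(distance / KEY_WIDTH + 1)

def predicted_entry_time(key_sequence: str) -> float:
    """Total predicted time for a sequence of five-key presses."""
    total, previous = KEYSTROKE, key_sequence[0]  # first press: no prior travel
    for key in key_sequence[1:]:
        total += fitts_time(previous, key) + KEYSTROKE
        previous = key
    return total

# A stronger language model (e.g. bigram rather than unigram prediction) would
# typically need fewer presses per word; comparing two hypothetical sequences
# shows how the model turns that into a predicted time saving.
print(predicted_entry_time("ABEAC"))    # shorter sequence (better prediction)
print(predicted_entry_time("ABEACDB"))  # longer sequence (weaker prediction)
```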

    A taxonomy of video lecture styles

    Many educational organizations are employing instructional video in their pedagogy, but there is limited understanding of the possible presentation styles. In practice, the presentation style of video lectures ranges from a direct recording of classroom teaching with a stationary camera, or screencasts with voice-over, up to highly elaborate video post-production. Previous work has evaluated the effectiveness of several presentation styles, but there has been no consistent taxonomy that would make comparisons and meta-analyses possible. In this article, we surveyed the research literature and examined contemporary video-based courses produced by diverse educational organizations and teachers across various academic disciplines. We organized video lectures along two dimensions: the level of human presence and the type of instructional media. In addition to organizing existing video lectures in a comprehensive way, the proposed taxonomy offers a design space that facilitates the choice of a suitable presentation style, as well as the preparation of new ones
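
A two-dimensional taxonomy of this kind maps naturally onto a simple data structure. The Python sketch below is only an illustration of the idea: the specific category values are placeholders and do not reproduce the article's actual taxonomy.

```python
# Illustrative sketch of a two-dimensional classification of video lectures;
# the category values are placeholders, not the article's taxonomy.
from dataclasses import dataclass
from enum import Enum

class HumanPresence(Enum):
    NONE = "no instructor visible"         # e.g. narrated screencast
    PARTIAL = "hands or picture-in-picture"
    FULL = "instructor on camera"          # e.g. classroom recording

class InstructionalMedia(Enum):
    SLIDES = "presentation slides"
    SCREENCAST = "software/screen capture"
    WHITEBOARD = "writing surface"
    ANIMATION = "post-produced animation"

@dataclass
class VideoLecture:
    title: str
    presence: HumanPresence
    media: InstructionalMedia

lecture = VideoLecture("Intro to sorting", HumanPresence.PARTIAL,
                       InstructionalMedia.SCREENCAST)
print(f"{lecture.title}: {lecture.presence.value} + {lecture.media.value}")
```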

    Decoding information in the human hippocampus: a user's guide

    Multi-voxel pattern analysis (MVPA), or 'decoding', of fMRI activity has gained popularity in the neuroimaging community in recent years. MVPA differs from standard fMRI analyses by focusing on whether information relating to specific stimuli is encoded in patterns of activity across multiple voxels. If a stimulus can be predicted, or decoded, solely from the pattern of fMRI activity, it must mean there is information about that stimulus represented in the brain region where the pattern across voxels was identified. This ability to examine the representation of information relating to specific stimuli (e.g., memories) in particular brain areas makes MVPA an especially suitable method for investigating memory representations in brain structures such as the hippocampus. This approach could open up new opportunities to examine hippocampal representations in terms of their content, and how they might change over time, with aging, and pathology. Here we consider published MVPA studies that specifically focused on the hippocampus, and use them to illustrate the kinds of novel questions that can be addressed using MVPA. We then discuss some of the conceptual and methodological challenges that can arise when implementing MVPA in this context. Overall, we hope to highlight the potential utility of MVPA, when appropriately deployed, and provide some initial guidance to those considering MVPA as a means to investigate the hippocampus
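
The decoding logic described above, predicting a stimulus solely from the pattern of activity across voxels, is commonly implemented as a cross-validated linear classifier. The following Python sketch shows the general recipe on simulated data; it is not taken from any of the reviewed studies, and the trial counts, effect size and classifier choice are assumptions.

```python
# Minimal MVPA-style decoding sketch (simulated data): can the stimulus
# category be predicted from the pattern of activity across voxels within a
# region of interest such as the hippocampus?
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)          # two stimulus conditions

# Simulated voxel patterns: condition 1 carries a weak multivariate signal.
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1] += 0.3 * rng.normal(size=n_voxels)

# Linear classifier with cross-validation; above-chance accuracy implies the
# region's activity patterns contain information about the stimulus.
decoder = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
scores = cross_val_score(decoder, patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```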

    Adaptive and reconfigurable robotic gripper hands with a meso-scale gripping range

    Grippers and robotic hands are essential end-effectors of robotic manipulators. Developing a gripper hand that can grasp a large variety of objects precisely and stably remains an aspiration, even though research in this area has been carried out for several decades. This thesis provides a development approach and a series of gripper hands that bridge the gap between micro-grippers and macro-grippers by extending the gripping range to the mesoscopic scale (meso-scale). The reconfigurable topology and variable mobility of the design offer versatility and adaptability for changing environments and demands. By investigating human grasping behaviours and the unique structure of the human hand, a cross four-bar (CFB) based finger joint for an anthropomorphic finger is developed to mimic a human finger with a large grasping range. The centrodes of the CFB mechanism are explored, and a contact-aided CFB mechanism is developed to increase the stiffness of the finger joints. An integrated gripper structure comprising CFB and remote-centre-of-motion (RCM) mechanisms is developed to mimic key functionalities of the human hand. Kinematic and kinetostatic analyses of the CFB mechanism for multi-mode gripping are conducted to achieve passive-adjusting motion. A novel RCM-based finger with angular, parallel and underactuated motion is invented, and kinematic and stable-gripping analyses of this multi-motion finger are also conducted. The integrated design with CFB and RCM mechanisms provides a novel multi-mode gripper concept that aims to tackle the challenge of changing over between objects of various sizes within the mesoscopic gripping range. Based on the newly designed mechanisms and this design philosophy, a class of gripper hands comprising adaptive meso-grippers, power-precision grippers and reconfigurable hands is developed. The novel features of these gripper hands are that they are one degree of freedom (DoF), self-adaptive, reconfigurable and multi-mode. Prototypes are manufactured by 3D printing and their grasping abilities are tested to verify the design approach
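To give a flavour of the kinematic analysis mentioned above, the Python sketch below solves the position problem of a generic planar four-bar linkage by closing the vector loop as a two-circle intersection. It is a simplification for illustration only, not the thesis's CFB or RCM mechanism analysis; the link lengths, the chosen assembly branch and the ground-pivot placement are assumptions.

```python
# Generic planar four-bar position analysis (illustrative simplification, not
# the thesis's CFB/RCM designs): given link lengths and the input crank angle,
# locate the moving pivots by intersecting two circles (vector-loop closure).
import math

def four_bar_position(ground, crank, coupler, rocker, theta2):
    """Return the coordinates of moving pivots A and B for input angle theta2.

    Ground pivots: O2 at (0, 0), O4 at (ground, 0).
    A = end of the input crank; B = joint between coupler and rocker.
    """
    ax = crank * math.cos(theta2)
    ay = crank * math.sin(theta2)
    # B lies on a circle of radius `coupler` about A and radius `rocker` about O4.
    dx, dy = ground - ax, -ay
    d = math.hypot(dx, dy)
    if d > coupler + rocker or d < abs(coupler - rocker):
        raise ValueError("mechanism cannot be assembled at this input angle")
    a = (d**2 + coupler**2 - rocker**2) / (2 * d)
    h = math.sqrt(max(coupler**2 - a**2, 0.0))
    mx, my = ax + a * dx / d, ay + a * dy / d
    # Two assembly branches exist; pick the 'open' configuration (+h).
    bx, by = mx - h * dy / d, my + h * dx / d
    return (ax, ay), (bx, by)

# Example: a Grashof crank-rocker with the input crank at 60 degrees.
pivot_a, pivot_b = four_bar_position(ground=4.0, crank=1.0, coupler=3.0,
                                     rocker=3.0, theta2=math.radians(60))
print("crank pin A:", pivot_a, "coupler/rocker joint B:", pivot_b)
```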