
    Accommodating persons with sensory disabilities in South African copyright law

    This dissertation investigates whether the needs of persons with sensory disabilities are accommodated in South African copyright law. Of the approximately 44.8 million people in South Africa counted in Census 2001, 2.3 million were reported as disabled. Of these, 577 000 (1.3 per cent) had a visual disability and 314 000 (0.7 per cent) a hearing disability, while others had physical, intellectual and communication disabilities, some of them multiple. Persons with sensory disabilities, such as visual, hearing and related impairments, experience barriers to accessing information on a daily basis. The dissertation explores barriers in copyright law and seeks ways to remedy the situation so as to facilitate access to information, particularly for educational, personal and other purposes. To contextualise this research, international and regional copyright trends are explored to establish whether intellectual property agreements allow copyright limitations and exceptions for persons with sensory disabilities in national laws. In addition, the copyright laws of a large number of countries that have already adopted appropriate limitations and exceptions nationally are reviewed. The dissertation highlights the lack of attention that the access needs of persons with sensory disabilities have been afforded in the Copyright Act 98 of 1978, as well as related inadequacies in the Electronic Communications and Transactions Act 25 of 2002. South Africa’s non-compliance with certain international and national obligations relating to human rights and access to information is also highlighted within the context of copyright law. International human rights conventions, the South African Constitution and domestic anti-discrimination laws all provide the framework for protecting the rights of persons with disabilities, yet their rights of access to knowledge have been neglected by government and the legislature. Some recommendations for further research and possible amendments to the copyright law are provided.

    The inclusion of sign language on the Swiss Web ecosystem

    Websites are a primary means of communication between public and private organisations and the general public. Therefore, websites must be accessible to all internet users, including those with hearing disabilities who use sign language, to maximise their reach and efficacy. Around 10,000 deaf people and an equal number of non-deaf people – such as CODAs, hard-of-hearing people, interpreters, and relatives – communicate using sign language (SL) in Switzerland (SGB-FSS, 2016). SL is, in fact, the preferred means of communication within the deaf community for two primary reasons: given its expressive nature, it conveys more detailed and accurate information to its deaf users than written communication, and many deaf people consider it part of their identity. Since its inception more than 30 years ago, the Web has established itself as a major medium for conveying and receiving information. However, little is known about the actual presence of SL on the Web, particularly in the Swiss web ecosystem. This study showcases the preliminary results of our research into the presence of sign language in the Swiss web ecosystem. Looking at 97 websites of Swiss public institutions, universities, companies, news portals, and online shops, we investigated whether videos on those websites provided SL interpretation. We found that less than a third of the websites investigated had one or more videos. We then analysed the common characteristics of a subset of the videos (French-speaking Swiss SL videos) and checked whether they provided an equivalent of the websites’ textual content. We found that those videos were mostly integrated on a web page dedicated to accessibility, had non-oral subtitles, and were typically medical or legal in theme. Based on our results, we argue that the presence of SL in the Swiss web ecosystem is anecdotal, especially when compared with the amount of written information included on those websites.
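    The abstract does not state whether the website survey was automated, but a small script along the following lines could support its first step (checking whether a site embeds any videos at all). This is a hedged sketch only: the URL list is a placeholder, the heuristics (counting <video> tags and YouTube/Vimeo iframes) are assumptions, and deciding whether a video actually contains SL interpretation would still require manual review. It relies on the third-party requests and beautifulsoup4 packages.

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical survey list; the 97 sites studied are not enumerated in the abstract.
    SITES = ["https://www.example-swiss-institution.ch"]

    def count_videos(url):
        """Count native <video> elements and common video-platform embeds on one page."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        native = len(soup.find_all("video"))
        embeds = sum(1 for frame in soup.find_all("iframe")
                     if any(host in (frame.get("src") or "") for host in ("youtube", "vimeo")))
        return native + embeds

    if __name__ == "__main__":
        for site in SITES:
            try:
                print(site, count_videos(site))
            except requests.RequestException as exc:
                print(site, "unreachable:", exc)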

    Evaluation Strategy for the Re-Development of the Displays and Visitor Facilities at the Museum and Art Gallery, Kelvingrove

    No abstract available

    An investigation as to how a computerised multimedia intervention could be of use for practitioners supporting learners with Autism Spectrum Disorder (ASD)

    This practice-based action research investigation seeks to make a valuable, original and academic contribution to knowledge in the computing, language, communication and educational fields. The aim was to establish the therapeutic (language and communication skills) and educational (literacy and numeracy skills) use of individually tailored computer games for practitioners supporting learners (end-users) with Autism Spectrum Disorder (ASD). This was achieved through continuous collaboration with cohorts of computing undergraduate students and academics (the development team) carrying out an assignment for a module designed and led by the PhD student (the researcher). The researcher also collaborated over many years with practitioners (users – teaching staff and speech and language therapists in schools) supporting learners with ASD. The researcher developed a Computerised Multimedia Therapeutic/Educational Intervention (CMT/EI) process, which used an iterative, holistic Design-For-One approach to developing individual computer games. An action research methodology was adopted, using methodological triangulation of ‘quantitative’ and ‘qualitative’ data collection methods, to ascertain how the tailor-made computerised multimedia games developed could be evaluated by the users as being of therapeutic/educational use for their learners (end-users) with ASD. The researcher originated profiles to establish the diversity of each learner’s spectrum of therapeutic/educational autistic needs, preferences, capabilities, likes, dislikes and interests. The researcher orchestrated and supervised the whole process, from the individual profiles completed by the practitioners, through their use as a baseline by the development team, to the iterative design, development and evaluation of customised, personalised computer games. Four hundred and sixty-four learners with ASD (end-users) and forty-nine practitioners (users) from nine educational establishments across the UK participated in the investigation. Two stages were carried out in an initial application procedure (with one school) and a prototype procedure (with a further six schools and two educational establishments): Stage I – planning, collection, organisation, the Design-For-One approach and development; Stage II – testing, evaluation, monitoring, reflection and maintenance. Encouraging ‘quantitative’ and ‘qualitative’ evidence emerged (using content analysis) from the implementation of the games in the classroom and the practitioners’ therapeutic and educational evaluation of the storyboards and games. The documented positive findings led to the conclusion that the personalised games, developed over a ten-year period, were of therapeutic/educational use to practitioners and their learners with ASD.

    Automatic Sign Language Recognition from Image Data

    This thesis addresses several issues of automatic sign language recognition, namely the creation of a vision-based sign language recognition framework, sign language corpora creation, feature extraction making use of novel hand tracking with face occlusion handling, data-driven creation of sub-units, and a "search by example" tool for searching sign language corpora using hand images as a search query. The proposed sign language recognition framework, based on a statistical approach incorporating hidden Markov models (HMM), consists of video analysis, sign modelling and decoding modules. The framework is able to recognise both isolated signs and continuous utterances from video data. All experiments and evaluations were performed on two corpora created for this work, UWB-06-SLR-A and UWB-07-SLR-P, the first containing 25 signs and the second 378. As baseline feature descriptors, low-level image features are used. Better performance is shown to be gained with higher-level features that employ hand tracking, which resolves occlusions of the hands and face. As a side effect, the occlusion handling method interpolates the face area in frames during occlusion and so allows the use of face feature descriptors that would otherwise fail in such cases, for instance features extracted by an active appearance model (AAM) tracker. Several state-of-the-art appearance-based feature descriptors were compared for the tracked hands, such as local binary patterns (LBP), histograms of oriented gradients (HOG), high-level linguistic features and the newly proposed hand shape radial distance function (hRDF), which enhances the description of concave hand-shape regions. The concept of sub-units, which uses HMM models based on linguistic units smaller than whole signs and covers the inner structure of signs, was investigated with a proposed iterative method that is the first step required for data-driven construction of sub-units; the results show that the concept is suitable for sign modelling and recognition tasks. Besides the recognition experiments, an additional "search by example" tool was created and evaluated. This tool is a search engine for sign language videos; such a system can be incorporated into an online sign language dictionary, where it is currently difficult or impossible to search the sign language data. The tool employs several methods examined in the recognition task and allows searching the video corpora with a user-given query consisting of one or more hand images, e.g. captured via a webcam. The result is an ordered list of videos that contain the same or similar hand configurations.
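    To make the pipeline described above concrete, the sketch below shows one way an isolated-sign recogniser of this general shape could be assembled: a per-frame hand-shape descriptor feeding one Gaussian HMM per sign, with classification by maximum log-likelihood. It is not the thesis implementation; in particular, the hRDF-style descriptor is only my reading of "hand shape radial distance function" (distances from the hand-mask centroid to its contour, binned by angle), and the segmentation, tracking and sub-unit modelling steps are omitted. It relies on the third-party numpy, opencv-python and hmmlearn packages.

    import numpy as np
    import cv2
    from hmmlearn import hmm

    def hrdf_descriptor(hand_mask, n_angles=36):
        """Radial-distance descriptor of a binary hand mask (assumed reading of hRDF)."""
        contours, _ = cv2.findContours(hand_mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not contours:
            return np.zeros(n_angles)
        contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
        centroid = contour.mean(axis=0)
        offsets = contour - centroid
        angles = np.arctan2(offsets[:, 1], offsets[:, 0])    # angle of each contour point
        dists = np.linalg.norm(offsets, axis=1)               # its distance from the centroid
        bins = np.digitize(angles, np.linspace(-np.pi, np.pi, n_angles + 1)) - 1
        feat = np.zeros(n_angles)
        for b in range(n_angles):                             # max radius per angular bin
            if np.any(bins == b):
                feat[b] = dists[bins == b].max()
        return feat / (feat.max() + 1e-8)                     # rough scale invariance

    def train_sign_models(training_data, n_states=5):
        """training_data maps each sign label to a list of (T, D) feature sequences."""
        models = {}
        for label, sequences in training_data.items():
            stacked = np.vstack(sequences)
            lengths = [len(seq) for seq in sequences]
            model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=30)
            model.fit(stacked, lengths)
            models[label] = model
        return models

    def recognise(models, sequence):
        """Return the sign whose HMM gives the (T, D) feature sequence the highest log-likelihood."""
        return max(models, key=lambda label: models[label].score(sequence))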

    Designing Sound for Social Robots: Advancing Professional Practice through Design Principles

    Sound is one of the core modalities social robots can use to communicate with the humans around them in rich, engaging, and effective ways. While a robot's auditory communication happens predominantly through speech, a growing body of work demonstrates the various ways non-verbal robot sound can affect humans, and researchers have begun to formulate design recommendations that encourage using the medium to its full potential. However, formal strategies for successful robot sound design have so far not emerged; current frameworks and principles are largely untested, and no effort has been made to survey creative robot sound design practice. In this dissertation, I combine creative practice, expert interviews, and human-robot interaction studies to advance our understanding of how designers can best ideate, create, and implement robot sound. In a first step, I map out a design space that combines established sound design frameworks with insights from interviews with robot sound design experts. I then systematically traverse this space across three robot sound design explorations, investigating (i) the effect of artificial movement sound on how robots are perceived, (ii) the benefits of applying compositional theory to robot sound design, and (iii) the role and potential of spatially distributed robot sound. Finally, I implement the designs from the prior chapters in the humanoid robot Diamandini and deploy it as a case study. Based on a synthesis of the data collection and design practice conducted across the thesis, I argue that the creation of robot sound is best guided by four design perspectives: fiction (sound as a means to convey a narrative), composition (sound as its own separate listening experience), plasticity (sound as something that can vary and adapt over time), and space (spatial distribution of sound as a separate communication channel). The conclusion of the thesis presents these four perspectives and proposes eleven design principles across them, supported by detailed examples. This work contributes an extensive body of design principles, process models, and techniques, providing researchers and designers with new tools to enrich the way robots communicate with humans.

    Getting Under Your Skin Until You Jump Out of It: The Psychological Effects of Music on The Experience of Film

    Get PDF
    Music is like magic. It can sweep you off your feet and spirit you away to places you never thought possible: it can serve as a teleportation device, achieve time travel, and let us read minds. Some pieces of music exist for their own sake, like Rachmaninoff’s Isle of the Dead, while others accompany different forms of media: ballets such as The Nutcracker and operas like La Bohème are instantly recognizable for their grandiose and immersive scores. For a moment in time, audiences can really believe that they are traveling to a magical world with Clara, and even without the stage one can see in the mind’s eye a looming and grave island of mortality… and it’s thanks to the music. This paper examines the effects of music from a psychological perspective through the lens of film. Looking at three classic horror movies, Psycho, Halloween, and Scream, I aim to illustrate how music plays with our expectations to influence our perceptions of the screen and beyond.

    The Effectiveness Of Two Types Of Visual Aid Treatments On Eye Movement Performance Of Educationally Handicapped Pupils In The Elementary School

    The study was designed to test the effectiveness of two visual aid treatments, the Controlled Reader and the Tachistoscopic-X machine, on educationally handicapped students in the elementary school. In addition, the study tested the effectiveness of such an instructional program on the efficient eye movements of the educationally handicapped student during the reading act.