
    E Is for Everyone: The Case for Inclusive Game Design

    Part of the Volume on the Ecology of Games: Connecting Youth, Games, and Learning

    In this chapter I examine the accessibility of today's games, or rather the lack thereof. Even common medical conditions such as arthritis, repetitive stress injuries, and diminished vision may prevent individuals from playing today's top software titles, to say nothing of the barriers that these titles pose to the blind, deaf, and immobile. The clearest and most disheartening manifestation can be found in the special-needs sector, where children cannot partake in their most coveted play activities due to inconsiderate (and therefore inflexible) game design. I chose this sector both to define the problem and to explore its solutions. Written from the perspective of a designer, the chapter first describes the lack of play and its residual impact as perceived in a school that caters to over 200 children with special needs. In an attempt to create the "ultimate-accessible" game, I demonstrate how games can be designed to be intrinsically accessible while retaining their original playability. Lastly, I show how normalization of play may improve the social, educational, and therapeutic aspects of the children's daily lives. Tying this fringe case to the grander ecology of games, I discuss how better accessibility may encourage more people to enjoy games -- be they gamers, students, or patients.

    SUGGESTING TITLES FOR AUDIO RECORDINGS

    Techniques of this disclosure may enable a computing device to suggest one or more titles based on the content of audio being recorded or audio that was previously recorded, and other data such as time and location. Rather than applying a general default title or audio file name, the computing device may request authorization from a user to analyze the contents of a recorded audio file and, after receiving explicit authorization from the user, analyze the audio, including speech, and automatically suggest titles that are indicative of the content of the audio and/or other data. The computing device may convert speech included in the audio into text and extract a plurality of terms from the text based on various factors, such as word classes (e.g., convert audio that includes “this meatball recipe adds parmesan cheese” into text and extract a plurality of nouns such as “meatball,” “recipe,” “parmesan,” and “cheese” from the text). Based on various factors, such as term frequency in the text and the relative uniqueness of the terms in the spoken language, the computing device may identify a plurality of words from the plurality of terms to represent the overall content of the audio (e.g., identify “meatball” and “recipe” from “meatball,” “recipe,” “parmesan,” and “cheese” based on term frequency in the text). The computing device may also classify non-speech audio (e.g., applause, dog barking, music) and use the classification, including metadata associated with the classified audio object, such as song titles, to identify a plurality of words to represent the overall content of the audio. The speech terms, non-speech audio classification, classified audio object metadata, and other data may be combined to identify a plurality of words to represent the overall content of the audio. The computing device may display the identified words as suggested words to be included in the title of the audio file. The user may select one or more of the identified words as the title or combine one or more of the identified words with one or more other words entered by the user. The computing device may use the selected and/or entered words as the title for the audio and/or for the name of the audio file.
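    The core of the speech-analysis step described above is a word-class filter followed by frequency-based ranking of the transcribed terms. Below is a minimal sketch of that step in Python, assuming the speech has already been converted to text; NLTK's tokenizer and part-of-speech tagger stand in for the disclosure's word-class analysis, and the function name suggest_title_words, the noun-only filter, and the simple frequency ranking are illustrative assumptions rather than the actual technique.

```python
from collections import Counter

import nltk  # requires the "punkt" and "averaged_perceptron_tagger" data packages


def suggest_title_words(transcript: str, max_words: int = 2) -> list[str]:
    """Return candidate title words from transcribed speech.

    Keeps only nouns (a word-class filter), then ranks them by how often
    they occur in the transcript (term frequency).
    """
    tokens = nltk.word_tokenize(transcript.lower())
    tagged = nltk.pos_tag(tokens)

    # Word-class filter: keep noun tags (NN, NNS, NNP, NNPS).
    nouns = [word for word, tag in tagged if tag.startswith("NN")]

    # Rank the remaining terms by how often they occur in the transcript.
    counts = Counter(nouns)
    return [word for word, _ in counts.most_common(max_words)]


if __name__ == "__main__":
    text = "this meatball recipe adds parmesan cheese to the meatball sauce"
    print(suggest_title_words(text))  # e.g. ['meatball', 'recipe']
```

    A fuller implementation would, as the disclosure notes, also weight terms by their relative uniqueness in the spoken language (an IDF-like factor) and merge in labels and metadata from the non-speech audio classification before presenting the suggested title words.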