    Challenges in development of the American Sign Language Lexicon Video Dataset (ASLLVD) corpus

    The American Sign Language Lexicon Video Dataset (ASLLVD) consists of videos of >3,300 ASL signs in citation form, each produced by 1-6 native ASL signers, for a total of almost 9,800 tokens. This dataset, including multiple synchronized videos showing the signing from different angles, will be shared publicly once the linguistic annotations and verifications are complete. Linguistic annotations include gloss labels, sign start and end time codes, start and end handshape labels for both hands, and morphological and articulatory classifications of sign type. For compound signs, the dataset includes annotations for each morpheme. To facilitate computer vision-based sign language recognition, the dataset also includes numeric ID labels for sign variants, video sequences in uncompressed-raw format, camera calibration sequences, and software for skin region extraction. We discuss here some of the challenges involved in the linguistic annotations and categorizations. We also report an example computer vision application that leverages the ASLLVD: the formulation employs a HandShapes Bayesian Network (HSBN), which models the transition probabilities between start and end handshapes in monomorphemic lexical signs. Further details and statistics for the ASLLVD dataset, as well as information about annotation conventions, are available from http://www.bu.edu/asllrp/lexicon
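    The HSBN mentioned above rests on transition probabilities between start and end handshapes. As a minimal sketch of how such probabilities could be estimated from annotations like the ASLLVD's (the handshape labels and the maximum-likelihood estimator here are illustrative assumptions, not the paper's model):

        from collections import Counter, defaultdict

        def transition_probabilities(tokens):
            """Estimate P(end handshape | start handshape) by maximum likelihood.

            `tokens` is an iterable of (start, end) handshape label pairs, as
            could be read off start/end handshape annotations.
            """
            counts = defaultdict(Counter)
            for start, end in tokens:
                counts[start][end] += 1
            return {start: {end: n / sum(ends.values()) for end, n in ends.items()}
                    for start, ends in counts.items()}

        # Hypothetical annotated tokens.
        tokens = [("5", "S"), ("5", "S"), ("5", "flat-O"), ("B", "B")]
        print(transition_probabilities(tokens)["5"])  # P(S|5)=2/3, P(flat-O|5)=1/3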

    Weighted Multiplex Networks

    One of the most important challenges in network science is to quantify the information encoded in complex network structures. Disentangling randomness from organizational principles is even more demanding when networks have a multiplex nature. Multiplex networks are multilayer systems of N nodes that can be linked in multiple interacting and co-evolving layers. In these networks, relevant information might not be captured if the single layers were analyzed separately. Here we demonstrate that such partial analysis of layers fails to capture significant correlations between weights and topology of complex multiplex networks. To this end, we study two weighted multiplex co-authorship and citation networks involving the authors included in the American Physical Society. We show that in these networks weights are strongly correlated with multiplex structure, and provide empirical evidence in favor of the advantage of studying weighted measures of multiplex networks, such as multistrength and the inverse multiparticipation ratio. Finally, we introduce a theoretical framework based on the entropy of multiplex ensembles to quantify the information stored in multiplex networks that would remain undetected if the single layers were analyzed in isolation.
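    For readers unfamiliar with the quantities named above: a per-layer strength and inverse participation ratio can be computed directly from the layers' weighted adjacency matrices. The sketch below is a simplification under that assumption; in the paper these measures are conditioned on the multiplex link patterns (multilinks), which is omitted here.

        import numpy as np

        def multistrength(W):
            """Per-layer strengths: s[a, i] = sum_j W[a, i, j], where W has
            shape (layers, N, N), one weighted adjacency matrix per layer."""
            return W.sum(axis=2)

        def inverse_participation(W):
            """Y[a, i] = sum_j (W[a, i, j] / s[a, i])**2: close to 1 when node
            i's weight in layer a sits on one link, ~1/k when spread over k."""
            s = multistrength(W)
            with np.errstate(divide="ignore", invalid="ignore"):
                shares = np.nan_to_num(W / s[:, :, None])  # isolated nodes -> 0
            return (shares ** 2).sum(axis=2)

        # Two-layer toy multiplex on three nodes (e.g. co-authorship/citation).
        W = np.array([[[0, 2, 1], [2, 0, 0], [1, 0, 0]],
                      [[0, 1, 0], [1, 0, 3], [0, 3, 0]]], dtype=float)
        print(multistrength(W))          # shape (2, 3)
        print(inverse_participation(W))  # node 0, layer 1: (2/3)**2 + (1/3)**2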

    Exploiting Phonological Constraints for Handshape Inference in ASL Video

    Handshape is a key articulatory parameter in sign language, and thus handshape recognition from signing video is essential for sign recognition and retrieval. Handshape transitions within monomorphemic lexical signs (the largest class of signs in signed languages) are governed by phonological rules. For example, such transitions normally involve either closing or opening of the hand (i.e., they exclusively use either folding or unfolding of the palm and one or more fingers). Furthermore, akin to allophonic variations in spoken languages, both inter- and intra-signer variations in the production of specific handshapes are observed. We propose a Bayesian network formulation to exploit handshape co-occurrence constraints, also utilizing information about allophonic variations to aid in handshape recognition. We propose a fast non-rigid image alignment method to gain improved robustness to handshape appearance variations during computation of observation likelihoods in the Bayesian network. We evaluate our handshape recognition approach on a large dataset of monomorphemic lexical signs. We demonstrate that leveraging linguistic constraints on handshapes results in improved handshape recognition accuracy. As part of the overall project, we are collecting and preparing for dissemination a large corpus (three thousand signs from three native signers) of American Sign Language (ASL) video. The videos have been annotated using SignStream® [Neidle et al.] with labels for linguistic information such as glosses, morphological properties and variations, and start/end handshapes associated with each ASL sign. This work was supported by National Science Foundation grants 0705749 and 085506.
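    To make the Bayesian formulation concrete: the decision combines a phonological co-occurrence prior over (start, end) handshape pairs with per-frame observation likelihoods. A hedged sketch of the joint maximum a posteriori choice (the labels and probabilities are invented for illustration; the paper's network and likelihood computation are more elaborate):

        import math

        def map_handshape_pair(prior, lik_start, lik_end):
            """Return the (start, end) pair maximizing
            P(start, end) * P(obs | start) * P(obs | end)."""
            return max(prior,
                       key=lambda p: math.log(prior[p])
                                     + math.log(lik_start.get(p[0], 1e-12))
                                     + math.log(lik_end.get(p[1], 1e-12)))

        # Invented numbers: the image evidence alone favors "A" as the start
        # shape, but the stronger co-occurrence prior for ("5", "S") wins.
        prior = {("A", "5"): 0.30, ("B", "B"): 0.25, ("5", "S"): 0.45}
        lik_start = {"A": 0.40, "B": 0.35, "5": 0.25}
        lik_end = {"5": 0.30, "B": 0.30, "S": 0.40}
        print(map_handshape_pair(prior, lik_start, lik_end))  # ('5', 'S')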

    A Performance Analysis of the IRIDIUM Low Earth Orbit Satellite System

    This thesis provides a performance evaluation of the IRIDIUM Low Earth Orbit satellite system. It examines the system's ability to meet real-time communications constraints with a degraded satellite constellation. The analysis is conducted via computer simulation. The simulation is run at low, medium, and high loading levels with both uniform and non-uniform traffic distributions. An algorithmic approach is used to select critical satellites to remove from the constellation. Each combination of loading level and traffic distribution is analyzed with zero, three, five, and seven non-operational satellites. The measured outputs are end-to-end packet delay and packet rejection rate. In addition to the delay analysis, a user's ability to access the network with a degraded satellite constellation is evaluated. The average number of visible satellites, cumulative outage time, and maximum continuous outage time are analyzed for both an equatorial city and a North American city. The results demonstrate that the IRIDIUM network is capable of meeting real-time communication requirements with several non-operational satellites. Both the high loading level and the non-uniform traffic distribution have a significant effect on the network's performance. The analysis of both network delay performance and network access provides a good measure of the overall network performance with a degraded satellite constellation.
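    The access metrics named above (average visible satellites, cumulative outage time, maximum continuous outage time) reduce to simple bookkeeping over a simulated visibility trace. A sketch under the assumption that the simulation emits, per time step, the number of satellites visible above the site's elevation mask:

        def access_statistics(visible_counts, step_s):
            """Summarize ground-site access from a simulated visibility trace.

            visible_counts : number of satellites above the site's elevation
                             mask at each simulation time step
            step_s         : time step in seconds
            Returns (average visible satellites, cumulative outage seconds,
            maximum continuous outage seconds); an outage step is one with
            zero visible satellites.
            """
            cumulative = longest = current = 0
            for n in visible_counts:
                if n == 0:
                    cumulative += step_s
                    current += step_s
                    longest = max(longest, current)
                else:
                    current = 0
            return sum(visible_counts) / len(visible_counts), cumulative, longest

        # Hypothetical 10-step trace at a 60 s step, two outage windows.
        print(access_statistics([2, 1, 0, 0, 1, 2, 0, 0, 0, 1], 60))  # (0.7, 300, 180)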

    Assessing the Network Neutrality Debate in the United States

    Over the last decade in the United States, network neutrality has evolved from a primarily technical concern to a national debate about the future of American communications regulation, as well as technology and innovation policy in general. In October 2009, the U.S. Federal Communications Commission (FCC) issued a notice of proposed rulemaking (NPRM) to codify six principles of network neutrality. This proceeding, which is unlikely to be completed before mid-2010, could have profound economic consequences for consumers, content and applications providers, and network operators. Network neutrality is shorthand for a series of policy prescriptions that would restrict the ability of broadband Internet service providers (ISPs) to manage network traffic. These restrictions include barring network operators from charging content and applications providers (as opposed to end users), via business-to-business transactions, for quality-of-service (QoS) enhancements to packet delivery. Although the initial objective for advocates of network neutrality regulation was to secure regulation of wireline networks, the debate has expanded since its inception to include wireless networks.

    The legality of deep packet inspection

    Deep packet inspection is a technology which enables the examination of the content of information packets being sent over the Internet. The Internet was originally set up using “end-to-end connectivity” as part of its design, allowing nodes of the network to send packets to all other nodes of the network, without requiring intermediate network elements to maintain status information about the transmission. In this way, the Internet was created as a “dumb” network, with “intelligent” devices (such as personal computers) at the end or “last mile” of the network. The dumb network does not interfere with an application's operation, nor is it sensitive to the needs of an application, and as such it treats all information sent over it as (more or less) equal. Yet deep packet inspection allows the examination of packets at places on the network which are not endpoints. In practice, this permits entities such as Internet service providers (ISPs) or governments to observe the content of the information being sent, and perhaps even manipulate it. Indeed, the existence and implementation of deep packet inspection may profoundly challenge the egalitarian and open character of the Internet. This paper will first elaborate on what deep packet inspection is and how it works from a technological perspective, before going on to examine how it is being used in practice by governments and corporations. Legal problems have already been created by the use of deep packet inspection, involving fundamental rights (especially of Internet users), such as freedom of expression and privacy, as well as more economic concerns, such as competition and copyright. These issues will be considered, and the conformity of the use of deep packet inspection with the law will be assessed. There will be a concentration on the use of deep packet inspection in European and North American jurisdictions, where it has already provoked debate, particularly in the context of discussions on net neutrality. This paper will also incorporate a more fundamental assessment of the values that are desirable for the Internet to respect and exhibit (such as openness, equality, and neutrality), before concluding with the formulation of a legal and regulatory response to the use of this technology, in accordance with these values.
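    To ground the header-versus-payload distinction: routing needs only the packet headers ("shallow" inspection), whereas deep packet inspection additionally reads the application payload. A minimal illustrative sketch in Python (the byte signature and the hand-built packet are invented for the example; production DPI engines use far richer protocol parsing and flow reassembly):

        import struct

        SIGNATURE = b"BitTorrent protocol"  # invented pattern to flag

        def inspect(packet: bytes):
            """Parse an IPv4/TCP packet. The header fields alone suffice for
            routing ("shallow" inspection); reading the payload is the
            "deep" step."""
            ihl = (packet[0] & 0x0F) * 4          # IPv4 header length, bytes
            proto = packet[9]
            src = ".".join(str(b) for b in packet[12:16])
            dst = ".".join(str(b) for b in packet[16:20])
            if proto != 6:                        # not TCP
                return src, dst, False
            tcp = packet[ihl:]
            sport, dport = struct.unpack("!HH", tcp[:4])
            payload = tcp[(tcp[12] >> 4) * 4:]    # skip the TCP header
            return f"{src}:{sport}", f"{dst}:{dport}", SIGNATURE in payload

        # Hand-built 20-byte IPv4 + 20-byte TCP packet carrying the signature.
        ip = bytes([0x45, 0, 0, 0, 0, 0, 0, 0, 64, 6, 0, 0,
                    10, 0, 0, 1, 10, 0, 0, 2])
        tcp = struct.pack("!HH", 51000, 6881) + bytes(8) + bytes([0x50]) + bytes(7)
        print(inspect(ip + tcp + b"\x13BitTorrent protocol"))
        # -> ('10.0.0.1:51000', '10.0.0.2:6881', True)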