
    Inter-CubeSat Communication with V-band "Bull's eye" antenna

    We present a study of a simple communication scenario between two CubeSats using a V-band "Bull's eye" antenna designed for this purpose. The antenna has a -10 dB return-loss bandwidth of 0.7 GHz and a gain of 15.4 dBi at 60 GHz. Moreover, its low-profile shape makes it easy to integrate into a CubeSat chassis. The communication scenario study shows that, using 0.01 W VubiQ modules and V-band "Bull's eye" antennas, CubeSats can efficiently transmit data within a 500 MHz bandwidth and with a 10⁻⁶ BER while separated by up to 98 m under ideal conditions, or 50 m under worst-case operating conditions (5° pointing misalignment in the E- and H-planes of the antenna, and 5° polarisation misalignment).
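    The quoted ranges follow from a standard Friis link budget. The sketch below reproduces the ~98 m ideal-conditions figure from the abstract's numbers (0.01 W transmit power, 15.4 dBi per antenna, 60 GHz, 500 MHz bandwidth); the receiver noise figure and the SNR required for a 10⁻⁶ BER are assumed values for illustration, not figures from the paper.

```python
import math

# Link-budget sketch for the 60 GHz inter-CubeSat scenario described above.
# Values marked "assumed" are NOT from the abstract; they are illustrative.

C = 3e8            # speed of light, m/s
F = 60e9           # carrier frequency, Hz (from the abstract)
PT_DBM = 10.0      # 0.01 W transmit power -> 10 dBm (from the abstract)
G_DBI = 15.4       # antenna gain, dBi (from the abstract)
B_HZ = 500e6       # signal bandwidth, Hz (from the abstract)
NF_DB = 8.0        # receiver noise figure, dB (assumed)
SNR_REQ_DB = 12.0  # SNR needed for 1e-6 BER, dB (assumed, modulation-dependent)

def fspl_db(d_m: float) -> float:
    """Free-space path loss (Friis), in dB."""
    return 20 * math.log10(4 * math.pi * d_m * F / C)

def snr_db(d_m: float) -> float:
    """Received SNR at distance d_m, assuming only thermal noise plus NF."""
    pr_dbm = PT_DBM + 2 * G_DBI - fspl_db(d_m)        # identical antennas at both ends
    noise_dbm = -174 + 10 * math.log10(B_HZ) + NF_DB  # kTB noise floor + noise figure
    return pr_dbm - noise_dbm

for d in (50, 98, 150):
    margin = snr_db(d) - SNR_REQ_DB
    print(f"d = {d:4d} m  SNR = {snr_db(d):5.1f} dB  margin = {margin:+5.1f} dB")
```

    With these assumed values the link margin crosses zero near 98 m; the pointing and polarisation misalignments of the worst-case scenario would add loss and shorten the usable range accordingly.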

    Yemaya, No. 28, August 2008

    Reflections/ Women in Fisheries Policies - Meeting the challenge. Africa/ South Africa - Righting gender injustices. Asia/ China - Contributing significantly. Europe/ Norway - Taking along the 'crewmembers'. Africa/ Uganda - Bringing in the catch. Reflections/ Women in Fisheries Policies - Recognizing women in fisheries: Policy considerations for developing countries. Asia/ The Philippines - 'Engendering' the fisheries industry development plan. Yemaya Recommends - Women in the Fishing: The Roots of Power between the Sexes. Profile - Meet Sherry Pictou. Q&A - Interview with Dr. Cornelia E. Nauen. Milestones - International legal instruments of relevance to women in fisheries. What's New, Webby? - Statement from Women's Workshop, South Africa. Yemaya Mama in Bangkok - Cartoon. Poem - Ancient food for future generations.

    Culture shapes how we look at faces

    Background: Face processing, amongst many basic visual skills, is thought to be invariant across all humans. From as early as 1965, studies of eye movements have consistently revealed a systematic triangular sequence of fixations over the eyes and the mouth, suggesting that faces elicit a universal, biologically determined information-extraction pattern. Methodology/Principal Findings: Here we monitored the eye movements of Western Caucasian and East Asian observers while they learned, recognized, and categorized by race Western Caucasian and East Asian faces. Western Caucasian observers reproduced a scattered triangular pattern of fixations for faces of both races and across tasks. Contrary to intuition, East Asian observers focused more on the central region of the face. Conclusions/Significance: These results demonstrate that face processing can no longer be considered to arise from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures.

    Reading Scene Text in Deep Convolutional Sequences

    We develop a Deep-Text Recurrent Network (DTRN) that regards scene text reading as a sequence labelling problem. We leverage recent advances in deep convolutional neural networks to generate an ordered high-level sequence from a whole word image, avoiding the difficult character segmentation problem. A deep recurrent model, built on long short-term memory (LSTM), is then developed to robustly recognise the generated CNN sequences, departing from most existing approaches, which recognise each character independently. Our model has a number of appealing properties in comparison to existing scene text recognition methods: (i) it can recognise highly ambiguous words by leveraging meaningful context information, allowing it to work reliably without either pre- or post-processing; (ii) the deep CNN features are robust to various image distortions; (iii) it retains the explicit order information in the word image, which is essential to discriminate word strings; (iv) the model does not depend on a pre-defined dictionary, and it can process unknown words and arbitrary strings. Code for the DTRN will be available. Comment: To appear in the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.
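    To make the pipeline concrete, here is a minimal sketch of the convolution-then-recurrence idea in PyTorch (an assumption for illustration; the paper predates PyTorch, and all layer shapes below are invented, not the authors'): a CNN maps a whole word image to an ordered feature sequence, and a bidirectional LSTM labels each step of that sequence.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal CNN-to-LSTM sketch of the pipeline the DTRN abstract
    describes: a convolutional front end turns a whole word image into an
    ordered feature sequence, and a recurrent model labels that sequence.
    Layer sizes are illustrative, not the paper's."""

    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        # CNN front end: collapse image height, keep width as the sequence axis.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # H/2, W/2
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # H/4, W/4
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),  # squeeze remaining height to 1
        )
        self.rnn = nn.LSTM(256, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, 1, height, width) grayscale word images
        feats = self.cnn(images)                  # (batch, 256, 1, width/4)
        seq = feats.squeeze(2).permute(0, 2, 1)   # (batch, width/4, 256)
        out, _ = self.rnn(seq)                    # contextual features per column
        return self.fc(out)                       # per-step class scores

# Per-step scores would typically be decoded with CTC or a similar sequence
# criterion rather than independent per-character classification; dummy batch:
model = CRNN(num_classes=37)  # e.g. 26 letters + 10 digits + blank
scores = model(torch.randn(2, 1, 32, 128))
print(scores.shape)  # torch.Size([2, 32, 37])
```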

    Spoken content retrieval: A survey of techniques and technologies

    Speech media, that is, digital audio and video containing spoken content, has blossomed in recent years. Large collections are accruing on the Internet as well as in private and enterprise settings. This growth has motivated extensive research on techniques and technologies that facilitate reliable indexing and retrieval. Spoken content retrieval (SCR) requires the combination of audio and speech processing technologies with methods from information retrieval (IR). SCR research initially investigated planned speech structured in document-like units, but has subsequently shifted focus to more informal spoken content produced spontaneously, outside of the studio and in conversational settings. This survey provides an overview of the field of SCR, encompassing component technologies, the relationship of SCR to text IR and automatic speech recognition, and user interaction issues. It is aimed at researchers with backgrounds in speech technology or IR who are seeking deeper insight into how these fields are integrated to support research and development, thus addressing the core challenges of SCR.
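    As a toy illustration of the ASR-plus-IR combination at the heart of SCR, the sketch below indexes invented one-best ASR transcripts and ranks them against a query with plain TF-IDF; the transcripts, query, and scoring choice are assumptions for the example, and real SCR systems also contend with recognition errors, lattices, and time-aligned playback of results.

```python
import math
from collections import Counter

# Toy illustration of the core SCR step described above: treat ASR transcripts
# as documents and retrieve them with standard IR scoring (TF-IDF here).
# The transcripts and query are invented for the example.

transcripts = {
    "ep01": "welcome to the show today we discuss speech recognition errors",
    "ep02": "spoken content retrieval combines speech processing and retrieval",
    "ep03": "informal conversational speech recorded outside the studio",
}

docs = {doc_id: text.split() for doc_id, text in transcripts.items()}
N = len(docs)
# Document frequency: in how many transcripts does each term occur?
df = Counter(term for terms in docs.values() for term in set(terms))

def score(query: str, terms: list) -> float:
    """Sum of TF-IDF weights of the query terms in one transcript."""
    tf = Counter(terms)
    return sum(tf[t] * math.log(N / df[t]) for t in query.split() if t in df)

query = "spoken content retrieval"
for doc_id, terms in sorted(docs.items(), key=lambda kv: -score(query, kv[1])):
    print(doc_id, round(score(query, terms), 3))
```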