25 research outputs found

    Projective-Plane Iteratively Decodable Block Codes for WDM High-Speed Long-Haul Transmission Systems


    Can an Accelerated Intervention Close the School Readiness Gap for Disadvantaged Children? An Evaluation of the Effects of the LEARN Project’s Summer Pre-Primary Program on Literacy Outcomes in Northern Lao PDR

    Developed against the backdrop of Sustainable Development Goal 4, as well as a global trend towards rigorous assessment of early childhood programs, this thesis answers questions about the effects of an accelerated school readiness intervention for non-Lao children in disadvantaged communities of the Lao People’s Democratic Republic. Through a longitudinal, cluster randomized controlled trial, the study employs multi-level regression with an analytical sample of 391 children to examine the outcomes of a summer pre-primary program piloted from 2015 to 2018 by the Lao government with support from Plan International and Save the Children International in the Dubai Cares-funded Lao Educational Access, Research, and Networking (LEARN) Project. Research questions are investigated through a design in which the same panel of children is assessed against a control group at three intervals using the Measurement of Development and Early Learning. The thesis identifies significant associations between receiving the treatment and achieving higher gain scores on several emergent literacy tasks between baseline and midline, with effects roughly in line with similar interventions in other contexts. At the same time, the thesis finds that those effects had largely faded by endline. An interaction between treatment and ethnicity was evident in only a few instances, suggesting that the intervention may have boosted school readiness more for Khmu children by the start of grade 1 and more for Hmong children during grade 1. The thesis raises important recommendations about how to improve the fit between the ultimate objectives of accelerated interventions, the evaluations they undergo, and the needs of the broader education system. 
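The gain-score analysis described above can be illustrated with a small simulation. This is a deliberately simplified sketch: it uses ordinary least squares on simulated data rather than the thesis's multi-level models, and every number below (scores, effect sizes, group indicators) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 391  # analytical sample size reported in the abstract

# Hypothetical data: treatment assignment, one ethnicity indicator,
# and baseline/midline literacy scores with a simulated 3-point effect
treat = rng.integers(0, 2, n)
hmong = rng.integers(0, 2, n)
baseline = rng.normal(50.0, 10.0, n)
midline = baseline + 5.0 + 3.0 * treat + rng.normal(0.0, 5.0, n)
gain = midline - baseline  # gain score between baseline and midline

# Design matrix: intercept, treatment, ethnicity, treatment x ethnicity
X = np.column_stack([np.ones(n), treat, hmong, treat * hmong])
beta, *_ = np.linalg.lstsq(X, gain, rcond=None)
print(beta[1])  # estimated main treatment effect (true simulated value: 3)
```

In the actual study, standard errors would additionally need to account for the cluster-randomized design, for example via multi-level models or cluster-robust variance estimates.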
New contributions to knowledge are also found by interrogating a global assessment paradigm through a comparative linguistic lens, so that forthcoming evaluations benefit from the lessons learned based on LEARN’s attempt to fit a square peg into a unique alpha-syllabic, tonal Southeast Asian language

    Improving Group Integrity of Tags in RFID Systems

    Checking the integrity of groups containing radio frequency identification (RFID) tagged objects, or recovering the tag identifiers of missing objects, is important in many activities. Several autonomous checking methods have been proposed for increasing the capability of recovering missing tag identifiers without external systems. This has been achieved by treating a group of tag identifiers (IDs) as packet symbols encoded and decoded in a way similar to that in binary erasure channels (BECs). Redundant data must be written into the limited memory space of RFID tags in order to enable the decoding process. In this thesis, the group integrity of passive tags in RFID systems is specifically targeted, with novel mechanisms proposed to improve upon the current state of the art. Because of the sparseness of low density parity check (LDPC) codes and the mitigation of short cycles by the progressive edge-growth (PEG) method, the research begins by using the PEG method to construct the parity check matrix of LDPC codes in RFID systems, increasing the recovery capability with reduced memory consumption. It is shown that the PEG-based method achieves significant recovery enhancements compared to other methods with the same or lower memory overhead. The decoding complexity of the PEG-based LDPC codes is optimised using an improved hybrid iterative/Gaussian decoding algorithm which includes an early stopping criterion. The relative complexities of the improved algorithm are extensively analysed and evaluated, both in terms of decoding time and the number of operations required. It is demonstrated that the improved algorithm considerably reduces the operational complexity, and thus the time, of the full Gaussian decoding algorithm for small to medium numbers of missing tags. The joint use of the two decoding components is also adapted to skip the iterative decoding when the number of missing tags exceeds a threshold. 
The optimum threshold value is investigated through empirical analysis. It is shown that the adaptive algorithm is very efficient in decreasing the average decoding time of the improved algorithm for large numbers of missing tags, where the iterative decoding fails to recover any missing tag. The recovery performance of various short-length irregular PEG-based LDPC codes constructed with different variable degree sequences is analysed and evaluated. It is demonstrated that the irregular codes exhibit significant recovery enhancements compared to the regular ones in the region where the iterative decoding is successful. However, their performance is degraded in the region where the iterative decoding can recover only some of the missing tags. Finally, a novel protocol called the Redundant Information Collection (RIC) protocol is designed to filter and collect redundant tag information. It is based on a Bloom filter (BF) that efficiently filters the redundant tag information at the tag’s side, thereby considerably decreasing the communication cost and, consequently, the collection time. It is shown that the novel protocol outperforms existing possible solutions by saving from 37% to 84% of the collection time, which is nearly four times the lower bound. This characteristic makes the RIC protocol a promising candidate for collecting redundant tag information for group integrity checking in RFID systems and similar applications.
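The core recovery idea in this abstract, treating a group of tag IDs as symbols of an erasure code whose XOR parity symbols are stored in tag memory, can be sketched with a toy peeling (iterative erasure) decoder. The tag IDs and the parity-check structure below are hypothetical; the thesis itself uses PEG-constructed LDPC matrices rather than this hand-picked toy code.

```python
# Toy illustration of BEC-style recovery of missing tag IDs:
# 4 hypothetical 16-bit tag IDs protected by 3 XOR parity symbols.
ids = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
checks = [(0, 1), (1, 2, 3), (0, 2)]  # ID indices XORed into each parity

parity = [0] * len(checks)
for c, members in enumerate(checks):
    for s in members:
        parity[c] ^= ids[s]

# Two tags go missing; their IDs are erased (None)
received = [ids[0], None, ids[2], None]

# Peeling decoder: a check with exactly one erased member recovers
# that member as the XOR of its parity and the known members.
changed = True
while changed:
    changed = False
    for c, members in enumerate(checks):
        erased = [s for s in members if received[s] is None]
        if len(erased) == 1:
            val = parity[c]
            for s in members:
                if received[s] is not None:
                    val ^= received[s]
            received[erased[0]] = val
            changed = True

print(received == ids)  # True: both missing IDs recovered
```

The hybrid algorithm the abstract describes falls back to Gaussian elimination when this iterative peeling stalls, i.e. when every remaining check contains two or more erasures.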

    Intertwined results on linear codes and Galois geometries


    Subject Index Volumes 1–200


    Circuit-aware system design techniques for wireless communication

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 211-218).
    When designing wireless communication systems, many hardware details are hidden from the algorithm designer, especially with analog hardware. While it is difficult for a designer to understand all aspects of a complex system, some knowledge of circuit constraints can improve system performance by relaxing design constraints. The specifications of a circuit design are generally not equally difficult to meet, allowing excess margin in one area to be used to relax more difficult design constraints. We first propose an uplink/downlink architecture for a network with a multiple antenna central server. This design takes advantage of the central server to allow the nodes to achieve multiplexing gain by forming virtual arrays without coordination, or diversity gain to decrease SNR requirements. Computation and memory are offloaded from the nodes to the server, allowing less complex, inexpensive nodes to be used. We can further use this SNR margin to reduce circuit area and power consumption, sacrificing system capacity for circuit optimization. Besides the more common transmit power reduction, large passive analog components can be removed to reduce chip area, and bias currents lowered to save power at the expense of noise figure. Given the inevitable crosstalk coupling of circuits, we determine the minimum required crosstalk isolation in terms of circuit gain and signal range. Viewing the crosstalk as a static fading channel, we derive a formula for the asymptotic SNR loss, and propose phase randomization to reduce the strong phase dependence of the crosstalk SNR loss. Because the high peak-to-average power ratio (PAPR) of multicarrier systems is difficult for analog circuits to handle, the result is low power efficiency. 
We propose two algorithms, both of which can decrease the PAPR by 4 dB or more, resulting in an overall power reduction by over a factor of three in the high and low SNR regimes, when combined with an outphasing linear amplifier.
by Everest Wang Huang. Ph.D.
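The PAPR problem that motivates the last contribution is easy to reproduce numerically. The sketch below computes the PAPR of a hypothetical 64-subcarrier QPSK multicarrier symbol and then applies naive hard clipping to 4 dB above the average power; clipping is only an illustration of the peak-power/distortion trade-off, not one of the thesis's two proposed algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 64-subcarrier OFDM symbol with unit-power QPSK subcarriers
n_sc = 64
bits = rng.integers(0, 2, (2, n_sc)) * 2 - 1
qpsk = (bits[0] + 1j * bits[1]) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(n_sc)  # time-domain samples, unit average power

papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

# Naive hard clipping of the envelope to 4 dB above average power:
# reduces PAPR at the cost of in-band distortion and spectral regrowth.
a = np.abs(x)
cap = np.sqrt(np.mean(a ** 2) * 10 ** 0.4)
clipped = np.where(a > cap, x * (cap / np.maximum(a, 1e-12)), x)
papr_clipped_db = 10 * np.log10(
    np.max(np.abs(clipped) ** 2) / np.mean(np.abs(clipped) ** 2)
)
print(papr_db, papr_clipped_db)
```

For random QPSK symbols on 64 subcarriers, the unclipped PAPR typically lands around 8-11 dB, which is the headroom an analog power amplifier would otherwise have to provide.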

    Circuit-Aware System Design Techniques for Wireless Communication

    Thesis Supervisor: Gregory W. Wornell, Professor.
    When designing wireless communication systems, many hardware details are hidden from the algorithm designer, especially with analog hardware. While it is difficult for a designer to understand all aspects of a complex system, some knowledge of circuit constraints can improve system performance by relaxing design constraints. The specifications of a circuit design are generally not equally difficult to meet, allowing excess margin in one area to be used to relax more difficult design constraints. We first propose an uplink/downlink architecture for a network with a multiple antenna central server. This design takes advantage of the central server to allow the nodes to achieve multiplexing gain by forming virtual arrays without coordination, or diversity gain to decrease SNR requirements. Computation and memory are offloaded from the nodes to the server, allowing less complex, inexpensive nodes to be used. We can further use this SNR margin to reduce circuit area and power consumption, sacrificing system capacity for circuit optimization. Besides the more common transmit power reduction, large passive analog components can be removed to reduce chip area, and bias currents lowered to save power at the expense of noise figure. Given the inevitable crosstalk coupling of circuits, we determine the minimum required crosstalk isolation in terms of circuit gain and signal range. Viewing the crosstalk as a static fading channel, we derive a formula for the asymptotic SNR loss, and propose phase randomization to reduce the strong phase dependence of the crosstalk SNR loss. Because the high peak-to-average power ratio (PAPR) of multicarrier systems is difficult for analog circuits to handle, the result is low power efficiency. 
We propose two algorithms, both of which can decrease the PAPR by 4 dB or more, resulting in an overall power reduction by over a factor of three in the high and low SNR regimes, when combined with an outphasing linear amplifier.
Supported by MIT, the Semiconductor Research Corporation and MARCO C2S2, and Lincoln Laboratory.

    The Plenoptic Sensor

    In this thesis, we will introduce the innovative concept of a plenoptic sensor that can determine the phase and amplitude distortion in a coherent beam, for example a laser beam that has propagated through the turbulent atmosphere. The plenoptic sensor can be applied to situations involving strong or deep atmospheric turbulence. This can improve free space optical communications by maintaining optical links more intelligently and efficiently. Also, in directed energy applications, the plenoptic sensor and its fast reconstruction algorithm can give instantaneous instructions to an adaptive optics (AO) system to create intelligent corrections in directing a beam through atmospheric turbulence. The hardware structure of the plenoptic sensor uses an objective lens and a microlens array (MLA) to form a mini “Keplerian” telescope array that shares the common objective lens. In principle, the objective lens helps to detect the phase gradient of the distorted laser beam, and the MLA helps to retrieve the geometry of the distorted beam in various gradient segments. The software layer of the plenoptic sensor is developed based on different applications. Intuitively, since the device maximizes the observation of the light field in front of the sensor, different algorithms can be developed, such as detecting atmospheric turbulence effects as well as retrieving undistorted images of distant objects. Efficient 3D simulations of atmospheric turbulence based on geometric optics have been established to help us optimize the system design and verify the correctness of our algorithms. A number of experimental platforms have been built to implement the plenoptic sensor in various application concepts and to show its improvements compared with traditional wavefront sensors. As a result, the plenoptic sensor brings a revolution to the study of atmospheric turbulence and generates new approaches to handling turbulence effects better.
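The phase-gradient detection principle can be sketched with elementary geometric optics: behind a lenslet of focal length f, a local wavefront tilt θ displaces the focal spot by approximately Δx = f·θ. The sketch below estimates that tilt from a spot centroid; all numbers (focal length, pixel pitch, spot position, wavelength) are hypothetical, and this centroid-based estimate is a Shack-Hartmann-style simplification, not the thesis's plenoptic reconstruction algorithm.

```python
import numpy as np

# Simulated focal spot on a 9x9 detector patch behind one microlens:
# a single bright pixel displaced 2 pixels from the patch centre (4, 4)
img = np.zeros((9, 9))
img[4, 6] = 1.0
pix = 6e-6                          # hypothetical 6 um pixel pitch

ys, xs = np.mgrid[0:9, 0:9]
cx = (img * xs).sum() / img.sum()   # intensity-weighted centroid (x)
dx = (cx - 4) * pix                 # spot displacement from the optical axis

f_mla = 5e-3                        # hypothetical microlens focal length: 5 mm
theta = dx / f_mla                  # local wavefront tilt (small-angle approx.)

# Express the tilt as a phase gradient at a hypothetical 633 nm wavelength
wavelength = 633e-9
phase_gradient = 2 * np.pi * theta / wavelength  # rad per metre
print(theta, phase_gradient)
```

In the plenoptic sensor, the shared objective lens sorts the beam into gradient segments first, which is what lets this kind of per-cell estimate remain unambiguous even under the strong turbulence that defeats a plain Shack-Hartmann sensor.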

    Fixing meaning: intertextuality, inferencing and genre in interpretation

    The intertextual theories of V. N. Voloshinov, Mikhail Bakhtin and the early Julia Kristeva provide the most convincing account of the processes of textual production, conceived as constitutively social, cultural and historical. However, the ways in which intertextual accounts of reading (or 'use') have extended such theories have foreclosed their potential. In much contemporary literary and cultural theory, it is assumed that reading, conceived intertextually, is no simple decoding process, but there is little interest in what interpretation, as a process, is, and its relations to reading. It is these questions which this thesis seeks to answer. The introduction sets the scene both for the problem and its methodological treatment: drawing certain post-structuralist and pragmatic theories of meaning into confrontation, and producing a critical synthesis. Part one (chapters one to three) elaborates these two traditions of meaning and stages the encounter. Chapter one offers detailed expositions of Voloshinov, Bakhtin and Kristeva, contrasting these with other intertextual theories of production and reception. Chapter two examines inferential accounts of communication within pragmatics, focusing on Paul Grice and on Dan Sperber and Deirdre Wilson's Relevance theory. Chapter three stages an encounter between these radically different traditions. A common ground is identified: both are rhetorical approaches to meaning, focusing on the relations between texts, contexts and their producers and interpreters. Each tradition is then subjected to the theoretical scrutiny of the other. Inferential theories expose the lack of specificity in intertextual accounts which completely ignore inferencing as a process. Intertextual theories reveal that text and context have semantically substantive intertextual dimensions, most particularly genre and register (conceived intertextually) which are ignored by inferential theories. 
Text and context are therefore far more semantically fixed than such theories suppose. Both traditions ignore the role of production practices other than 'speech' or 'writing', i.e. they ignore how publishing practices - editing, design, production and marketing - constitute genre and shape reading. In Part Two (chapters four to six), the critique is developed into an account of interpretation. Interpretation, conceived intertextually, is significantly, though not exclusively, inferential, but inferential processes do not 'work' in the ways proposed by existing inferential theories. Patterns of inference are ordered by the relations between discourses (in Foucault's sense) and genres in the text, the reader's knowledge and the conditions of reading. Chapter four elaborates the concepts required for such an account of interpretation, centring on the role of publishing processes and the text's material form in shaping interpretation. The limits of existing accounts of the edition and publishing, specifically Gérard Genette's Paratexts and work in the 'new' textual studies, call for a more expansive account of how publishing shapes genre and interpretation. Chapters five and six develop two case-studies which extend these concepts and arguments. These examine two contemporary publishing categories: 'classics' (Penguin, Everyman etc.) and literary theory textbooks (Introductions and Readers). Through the detailed analyses of particular editions, I develop and substantiate a stronger and richer account of interpretation as process and practice and its relation to reading. This is expanded in the final chapter.