
    K-Band TWTA for the NASA Lunar Reconnaissance Orbiter

    This paper presents the K-Band traveling wave tube amplifier (TWTA) developed for the Lunar Reconnaissance Orbiter and discusses the new capabilities it provides

    High-Efficiency K-Band Space Traveling-Wave Tube Amplifier for Near-Earth High Data Rate Communications

    The RF performance of a new K-Band helix conduction-cooled traveling-wave tube amplifier (TWTA) is presented in this paper. A total of three such units were manufactured, tested, and delivered. The first unit is currently flying onboard NASA's Lunar Reconnaissance Orbiter (LRO) spacecraft and has flawlessly completed over 2000 orbits around the Moon. The second unit is a proto-flight model. The third unit will fly onboard NASA's International Space Station (ISS) as a very compact and lightweight transmitter package for the Communications, Navigation and Networking Reconfigurable Testbed (CoNNeCT), which is scheduled for launch in 2011. These TWTAs were characterized over the frequency range 25.5 to 25.8 GHz. The saturated RF output power is >40 W and the saturated RF gain is >46 dB. The saturated AM-to-PM conversion is 3.5°/dB and the small-signal gain ripple is 0.46 dB peak-to-peak. The overall efficiency of the TWTA, including that of the electronic power conditioner (EPC), is as high as 45 percent.
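    To make the quoted figures easier to relate, the short Python sketch below (not from the paper) converts the abstract's saturated output power, gain, and overall efficiency into dBm, an implied drive level, and an implied DC input power. Only the 40 W / 46 dB / 45% operating point stated above is used, and the conversions are standard RF definitions rather than anything specific to this TWTA.

# Minimal arithmetic sketch relating the figures quoted in the abstract
# (illustrative only; uses standard RF definitions and the numbers stated above).
import math

def watts_to_dbm(p_watts: float) -> float:
    """Convert power in watts to dBm (dB relative to 1 mW)."""
    return 10.0 * math.log10(p_watts / 1e-3)

p_out_w = 40.0      # saturated RF output power (>40 W per the abstract)
gain_db = 46.0      # saturated RF gain (>46 dB per the abstract)
efficiency = 0.45   # overall efficiency including the EPC (up to 45 percent)

p_out_dbm = watts_to_dbm(p_out_w)   # ~46 dBm
p_in_dbm = p_out_dbm - gain_db      # drive level implied by the gain, ~0 dBm
p_dc_w = p_out_w / efficiency       # DC input implied by 45% efficiency, ~89 W

print(f"output {p_out_dbm:.1f} dBm, drive {p_in_dbm:.1f} dBm, DC input ~{p_dc_w:.0f} W")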

    Clinicopathological evaluation of chronic traumatic encephalopathy in players of American football

    IMPORTANCE: Players of American football may be at increased risk of long-term neurological conditions, particularly chronic traumatic encephalopathy (CTE). OBJECTIVE: To determine the neuropathological and clinical features of deceased football players with CTE. DESIGN, SETTING, AND PARTICIPANTS: Case series of 202 football players whose brains were donated for research. Neuropathological evaluations and retrospective telephone clinical assessments (including head trauma history) with informants were performed blinded. Online questionnaires ascertained athletic and military history. EXPOSURES: Participation in American football at any level of play. MAIN OUTCOMES AND MEASURES: Neuropathological diagnoses of neurodegenerative diseases, including CTE, based on defined diagnostic criteria; CTE neuropathological severity (stages I to IV or dichotomized into mild [stages I and II] and severe [stages III and IV]); informant-reported athletic history and, for players who died in 2014 or later, clinical presentation, including behavior, mood, and cognitive symptoms and dementia. RESULTS: Among 202 deceased former football players (median age at death, 66 years [interquartile range, 47-76 years]), CTE was neuropathologically diagnosed in 177 players (87%; median age at death, 67 years [interquartile range, 52-77 years]; mean years of football participation, 15.1 [SD, 5.2]), including 0 of 2 pre–high school, 3 of 14 high school (21%), 48 of 53 college (91%), 9 of 14 semiprofessional (64%), 7 of 8 Canadian Football League (88%), and 110 of 111 National Football League (99%) players. Neuropathological severity of CTE was distributed across the highest level of play, with all 3 former high school players having mild pathology and the majority of former college (27 [56%]), semiprofessional (5 [56%]), and professional (101 [86%]) players having severe pathology. Among 27 participants with mild CTE pathology, 26 (96%) had behavioral or mood symptoms or both, 23 (85%) had cognitive symptoms, and 9 (33%) had signs of dementia. Among 84 participants with severe CTE pathology, 75 (89%) had behavioral or mood symptoms or both, 80 (95%) had cognitive symptoms, and 71 (85%) had signs of dementia. CONCLUSIONS AND RELEVANCE: In a convenience sample of deceased football players who donated their brains for research, a high proportion had neuropathological evidence of CTE, suggesting that CTE may be related to prior participation in football. This study received support from NINDS (grants U01 NS086659, R01 NS078337, R56 NS078337, U01 NS093334, and F32 NS096803), the National Institute on Aging (grants K23 AG046377, P30AG13846 and supplement 0572063345-5, R01 AG1649), the US Department of Defense (grant W81XWH-13-2-0064), the US Department of Veterans Affairs (I01 CX001038), the Veterans Affairs Biorepository (CSP 501), the Veterans Affairs Rehabilitation Research and Development Traumatic Brain Injury Center of Excellence (grant B6796-C), the Department of Defense Peer Reviewed Alzheimer’s Research Program (grant 13267017), the National Operating Committee on Standards for Athletic Equipment, the Alzheimer’s Association (grants NIRG-15-362697 and NIRG-305779), the Concussion Legacy Foundation, the Andlinger Family Foundation, the WWE, and the NFL.
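    As a quick check on the reported proportions, the short Python sketch below (not part of the study) reproduces the level-of-play percentages from the counts stated in the abstract; nothing beyond those counts is assumed.

# Reproduces the level-of-play proportions stated in the abstract;
# only the counts given in the text are used.
cte_by_level = {
    "pre-high school":          (0, 2),
    "high school":              (3, 14),
    "college":                  (48, 53),
    "semiprofessional":         (9, 14),
    "Canadian Football League": (7, 8),
    "National Football League": (110, 111),
}

for level, (cases, donors) in cte_by_level.items():
    print(f"{level:>25}: {cases:>3}/{donors:<3} diagnosed with CTE ({cases / donors:.0%})")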

    ParaLog: enabling and accelerating online parallel monitoring of multithreaded applications

    Instruction-grain lifeguards monitor the events of a running application at the level of individual instructions in order to identify and help mitigate application bugs and security exploits. Because such lifeguards impose a 10-100X slowdown on existing platforms, previous studies have proposed hardware designs to accelerate lifeguard processing. However, these accelerators are either tailored to a specific class of lifeguards or suitable only for monitoring single-threaded programs. We present ParaLog, the first design of a system enabling fast online parallel monitoring of multithreaded parallel applications. ParaLog supports a broad class of software-defined lifeguards. We show how three existing accelerators can be enhanced to support online multithreaded monitoring, dramatically reducing lifeguard overheads. We identify and solve several challenges in monitoring parallel applications and/or parallelizing these accelerators, including (i) enforcing inter-thread data dependences, (ii) dealing with inter-thread effects that are not reflected in coherence traffic, (iii) dealing with unmonitored operating system activity, and (iv) ensuring lifeguards can access shared metadata with negligible synchronization overheads. We present our system design for both Sequentially Consistent and Total Store Ordering processors. We implement and evaluate our design on a 16-core simulated CMP, using benchmarks from SPLASH-2 and PARSEC and two lifeguards: a data-flow tracking lifeguard and a memory-access checker lifeguard. Our results show that (i) our parallel accelerators improve performance by 2-9X and 1.13-3.4X for our two lifeguards, respectively, (ii) we are 5-126X faster than the time-slicing approach required by existing techniques, and (iii) our average overheads for applications with eight threads are 51% and 28% for the two lifeguards, respectively.
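    For readers unfamiliar with instruction-grain lifeguards, the Python sketch below illustrates the kind of per-instruction metadata a data-flow-tracking ("taint") lifeguard maintains. It is a conceptual, single-threaded software analogue only; ParaLog's contribution is hardware support for running such lifeguards in parallel with multithreaded applications, and the event names and structures below are illustrative assumptions rather than the paper's design.

# Conceptual software analogue of a data-flow-tracking (taint) lifeguard.
# Event handlers and structures are illustrative, not ParaLog's hardware design.
from dataclasses import dataclass, field

@dataclass
class TaintLifeguard:
    # One metadata bit ("tainted?") per memory location and per register.
    mem_taint: dict = field(default_factory=dict)
    reg_taint: dict = field(default_factory=dict)

    def on_input(self, addr: int) -> None:
        # Data arriving from an untrusted source is marked tainted.
        self.mem_taint[addr] = True

    def on_load(self, reg: str, addr: int) -> None:
        # Loads propagate taint from memory into a register.
        self.reg_taint[reg] = self.mem_taint.get(addr, False)

    def on_store(self, addr: int, reg: str) -> None:
        # Stores propagate taint from a register into memory.
        self.mem_taint[addr] = self.reg_taint.get(reg, False)

    def on_binop(self, dst: str, src1: str, src2: str) -> None:
        # A result is tainted if either operand is tainted.
        self.reg_taint[dst] = self.reg_taint.get(src1, False) or self.reg_taint.get(src2, False)

    def on_indirect_jump(self, reg: str) -> None:
        # Using tainted data as a jump target is a classic exploit symptom.
        if self.reg_taint.get(reg, False):
            raise RuntimeError("lifeguard alert: tainted jump target")

# Example per-instruction event stream for a single monitored thread:
lg = TaintLifeguard()
lg.on_input(0x1000)            # untrusted input written to address 0x1000
lg.on_load("r1", 0x1000)       # r1 becomes tainted
lg.on_binop("r2", "r1", "r3")  # taint propagates to r2
try:
    lg.on_indirect_jump("r2")
except RuntimeError as e:
    print(e)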

    Large herbivores transform plant-pollinator networks in an African savanna

    Pollination by animals is a key ecosystem service [1, 2], and interactions between plants and their pollinators are a model system for studying ecological networks [3, 4], yet plant-pollinator networks are typically studied in isolation from the broader ecosystems in which they are embedded. The plants visited by pollinators also interact with other consumer guilds that eat stems, leaves, fruits, or seeds. One such guild, the large mammalian herbivores, comprises well-known ecosystem engineers [5, 6, 7] and may have substantial impacts on plant-pollinator networks. Although moderate herbivory can sometimes promote plant diversity [8], potentially benefiting pollinators, large herbivores might alternatively reduce resource availability for pollinators by consuming flowers [9], reducing plant density [10], and promoting somatic regrowth over reproduction [11]. The direction and magnitude of such effects may hinge on abiotic context, in particular rainfall, which modulates the effects of ungulates on vegetation [12]. Using a long-term, large-scale experiment replicated across a rainfall gradient in central Kenya, we show that a diverse assemblage of native large herbivores, ranging from 5-kg antelopes to 4,000-kg African elephants, limited resource availability for pollinators by reducing flower abundance and diversity; this in turn resulted in fewer pollinator visits and lower pollinator diversity. Exclusion of large herbivores increased floral-resource abundance and pollinator-assemblage diversity, rendering plant-pollinator networks larger, more functionally redundant, and less vulnerable to pollinator extinction. Our results show that species extrinsic to plant-pollinator interactions can indirectly and strongly alter network structure. Forecasting the effects of environmental change on pollination services and interaction webs more broadly will require accounting for the effects of extrinsic keystone species.
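    To illustrate what "vulnerability to pollinator extinction" means in network terms, the Python sketch below simulates random pollinator removal from a toy bipartite visitation network and counts how many plant species retain at least one visitor. The toy network, the random removal order, and the survival criterion are illustrative assumptions of one common analysis style, not the paper's actual method.

# Toy robustness-to-extinction simulation on a bipartite plant-pollinator network.
# The species and links below are invented for illustration.
import random

# Visitation network: plant -> set of pollinator species observed visiting it.
network = {
    "Acacia":     {"bee_A", "fly_B"},
    "Hibiscus":   {"bee_A"},
    "Indigofera": {"moth_C", "fly_B"},
    "Solanum":    {"bee_D"},
}

def plants_surviving(net: dict, removed: set) -> int:
    """Plant species retaining at least one pollinator after removals."""
    return sum(1 for visitors in net.values() if visitors - removed)

pollinators = sorted({p for visitors in network.values() for p in visitors})
random.shuffle(pollinators)  # random extinction order

removed: set = set()
for p in pollinators:
    removed.add(p)
    print(f"removed {len(removed)} pollinators -> "
          f"{plants_surviving(network, removed)} plant species still visited")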

    Parallel depth first vs. work stealing schedulers on CMP architectures

    In chip multiprocessors (CMPs), limiting the number of off-chip cache misses is crucial for good performance. Many multithreaded programs provide opportunities for constructive cache sharing, in which concurrently scheduled threads share a largely overlapping working set. In this brief announcement, we highlight our ongoing study [4] comparing the performance of two schedulers designed for fine-grained multithreaded programs: Parallel Depth First (PDF) [2], which is designed for constructive sharing, and Work Stealing (WS) [3], which takes a more traditional approach.
    Overview of schedulers. In PDF, processing cores are allocated ready-to-execute program tasks such that higher scheduling priority is given to those tasks the sequential program would have executed earlier. As a result, PDF tends to co-schedule threads in a way that tracks the sequential execution. Hence, the aggregate working set is (provably) not much larger than the single-thread working set [1]. In WS, each processing core maintains a local work queue of ready-to-execute threads. Whenever its local queue is empty, the core steals a thread from the bottom of the first non-empty queue it finds. WS is an attractive scheduling policy because when there is plenty of parallelism, stealing is quite rare. However, WS is not designed for constructive cache sharing, because the cores tend to have disjoint working sets.
    CMP configurations studied. We evaluated the performance of PDF and WS across a range of simulated CMP configurations. We focused on designs that have fixed-size private L1 caches and a shared L2 cache on chip. For a fixed die size (240 mm²), we varied the number of cores from 1 to 32. For a given number of cores, we used a (default) configuration based on current CMPs and realistic projections of future CMPs, as process technologies decrease from 90nm to 32nm.
    Summary of findings. We studied a variety of benchmark programs and found the following. For several application classes, PDF enables significant constructive sharing between threads, leading to better utilization of the on-chip caches and reduced off-chip traffic compared to WS. In particular, bandwidth-limited irregular programs and parallel divide-and-conquer programs achieve a relative speedup of 1.3-1.6X over WS, with a 13-41% reduction in off-chip traffic. An example is shown in Figure 1 for parallel merge sort: for each scheduler, the number of L2 misses (i.e., the off-chip traffic) is shown on the left and the speedup over running on one core is shown on the right, for 1 to 32 cores. Note that reducing the off-chip traffic has the additional benefit of reducing power consumption. Moreover, PDF's smaller working sets provide opportunities to power down segments of the cache without increasing the running time. Furthermore, when multiple programs are active concurrently, the PDF version is also less of a cache hog, and its smaller working set is more likely to remain in the cache across context switches.
    For several other application classes, PDF and WS have roughly the same execution times, either because there is only limited data reuse that can be exploited or because the programs are not limited by off-chip bandwidth. In the latter case, the constructive sharing that PDF enables still provides the power and multiprogramming benefits discussed above. Finally, most parallel benchmarks to date, written for SMPs, use such coarse-grained threading that they cannot exploit the constructive cache behavior inherent in PDF. We find that mechanisms for fine-grained threading of applications are crucial to achieving good performance on CMPs.
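    The single-threaded Python sketch below contrasts the two policies described above: PDF hands each core the earliest remaining task in sequential order, so co-scheduled tasks are adjacent and likely to share a working set, while WS cores draw from their own deques and an idle core steals from the other end of a non-empty deque, so co-scheduled tasks sit far apart in sequential order. The toy task set, the block distribution seeding the WS deques, the lockstep step loop, and the steal details are illustrative assumptions, not the paper's experimental setup.

# Minimal lockstep simulation contrasting PDF and WS task selection.
import heapq
from collections import deque

NUM_CORES = 4
TASKS = list(range(18))   # tasks labelled by sequential execution order

def pdf_steps(tasks):
    """Parallel Depth First: at each step every core is handed the earliest
    remaining task in sequential order, so co-scheduled tasks are adjacent."""
    ready = list(tasks)
    heapq.heapify(ready)                      # priority = sequential order
    steps = []
    while ready:
        steps.append([heapq.heappop(ready) for _ in range(min(NUM_CORES, len(ready)))])
    return steps

def ws_steps(tasks):
    """Work Stealing: each core owns a deque (seeded with a contiguous block);
    it takes work from the front of its own deque, and an idle core steals from
    the back of the first non-empty deque, so co-scheduled tasks are far apart."""
    block = len(tasks) // NUM_CORES
    bounds = [c * block for c in range(NUM_CORES)] + [len(tasks)]
    deques = [deque(tasks[bounds[c]:bounds[c + 1]]) for c in range(NUM_CORES)]
    steps = []
    while any(deques):
        step = []
        for c in range(NUM_CORES):
            if deques[c]:
                step.append(deques[c].popleft())          # own work
            else:
                victim = next((d for d in deques if d), None)
                if victim is not None:
                    step.append(victim.pop())             # steal from the back
        steps.append(step)
    return steps

for name, steps in (("PDF", pdf_steps(TASKS)), ("WS", ws_steps(TASKS))):
    print(name)
    for t, tasks_run in enumerate(steps):
        print(f"  step {t}: cores run tasks {tasks_run}")

    In the printout, each PDF step runs sequentially adjacent tasks, while each WS step runs tasks drawn from widely separated parts of the sequential order, which is the disjoint-working-set effect described above.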