
    A comparison of airborne and ground-based radar observations with rain gages during the CaPE experiment

    The vicinity of KSC, where the primary ground truth site of the Tropical Rainfall Measuring Mission (TRMM) program is located, was the focal point of the Convection and Precipitation/Electrification (CaPE) experiment in July and August 1991. In addition to several specialized radars, local coverage was provided by the C-band (5 cm) radar at Patrick AFB. Point measurements of rain rate were provided by tipping-bucket rain gage networks. Besides these ground-based activities, airborne measurements were also recorded with X- and Ka-band nadir-looking radars on board an aircraft. The result is a unique data set combining airborne radar observations with ground-based observations in the summer convective rain regime of central Florida. We present a comparison of these data as a preliminary validation. A convective rain event was observed simultaneously by all three instrument types on the evening of 27 July 1991. The high-resolution aircraft radar was flown over convective cells with tops exceeding 10 km and observed reflectivities of 40 to 50 dBZ at 4 to 5 km altitude, while the low-resolution surface radar observed 35 to 55 dBZ echoes and a rain gage indicated maximum surface rain rates exceeding 100 mm/hr. The height profile of reflectivity measured with the airborne radar shows a two-way attenuation of 6.5 dB/km at X-band, corresponding to a rainfall rate of 95 mm/hr.
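    The attenuation-to-rain-rate conversion quoted above follows the standard specific-attenuation power law k = aR^b. As a rough illustration only, the sketch below inverts such a law; the coefficients are textbook-style values for X-band near 10 GHz, not the ones used in the paper.

```python
# Sketch: inverting a k = a * R**b specific-attenuation power law to estimate
# rain rate from a measured X-band attenuation. The coefficients a and b are
# assumed illustrative values for ~10 GHz; the paper's exact k-R relation is
# not given in the abstract.

def rain_rate_from_attenuation(k_two_way_db_per_km, a=0.0101, b=1.276):
    """Estimate rain rate R (mm/hr) from two-way specific attenuation (dB/km)."""
    k_one_way = k_two_way_db_per_km / 2.0   # the radar path is traversed twice
    return (k_one_way / a) ** (1.0 / b)

print(rain_rate_from_attenuation(6.5))  # ~92 mm/hr, consistent with the ~95 mm/hr quoted
```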

    Building Programmable Wireless Networks: An Architectural Survey

    In recent years there have been many efforts to improve the ossified Internet architecture in a bid to sustain continued growth and innovation. A major reason for the perceived architectural ossification is the inability to program the network as a system. This situation results partly from historical decisions in the original Internet design, which emphasized decentralized network operations through co-located data and control planes on each network device. The situation for wireless networks is no different, and has resulted in considerable complexity and a plethora of largely incompatible wireless technologies. The emergence of "programmable wireless networks", which allow greater flexibility, ease of management, and configurability, is a step toward overcoming these shortcomings. In this paper, we provide a broad overview of the architectures proposed in the literature for building programmable wireless networks, focusing primarily on three popular techniques: software-defined networks, cognitive radio networks, and virtualized networks. This survey is a self-contained tutorial on these techniques and their applications. We also discuss the opportunities and challenges in building next-generation programmable wireless networks and identify open research issues and future research directions.
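    To make the control/data-plane split at the heart of the software-defined approach concrete, here is a minimal sketch: the switch only matches packets against a flow table, and misses are punted to a logically centralized controller that installs new rules. All names are illustrative, not any specific SDN API.

```python
# Toy model of SDN's control/data-plane separation. A table miss triggers a
# call to the controller, which decides a policy and installs a flow rule.

class Controller:
    def decide(self, packet):
        # Toy policy: forward by destination address.
        return ("forward", packet["dst"])

class Switch:
    def __init__(self, controller):
        self.flow_table = {}          # match (dst) -> action
        self.controller = controller

    def handle(self, packet):
        match = packet["dst"]
        if match not in self.flow_table:      # table miss: ask the controller
            self.flow_table[match] = self.controller.decide(packet)
        return self.flow_table[match]

sw = Switch(Controller())
print(sw.handle({"src": "a", "dst": "b"}))  # miss -> controller -> ('forward', 'b')
print(sw.handle({"src": "c", "dst": "b"}))  # hit: served from the flow table
```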

    Investigating Single Precision Floating General Matrix Multiply in Heterogeneous Hardware

    The fundamental operation of matrix multiplication is ubiquitous across a myriad of disciplines. Yet the identification of new optimizations for matrix multiplication remains relevant for emerging hardware architectures and heterogeneous systems. Frameworks such as OpenCL enable computation orchestration on existing systems, and OpenCL's availability through the Intel High Level Synthesis compiler allows users to architect new designs for reconfigurable hardware using C/C++. Using the HARPv2 as a vehicle for exploration, we investigate the utility of several of the most notable matrix multiplication optimizations to better understand the performance portability of OpenCL and the implications of such optimizations on this and future heterogeneous architectures. Our results give targeted insights into the applicability of best practices that were developed for existing architectures when they are used on emerging heterogeneous systems.
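    One of the classic GEMM optimizations this kind of study evaluates is blocking (tiling), which sizes working sets to fit in fast local memory (on-chip BRAM on an FPGA, local memory in OpenCL). The Python sketch below shows the idea; the tile size is an arbitrary illustrative choice, and the paper's actual kernels are written in OpenCL C.

```python
import numpy as np

# Blocked (tiled) single-precision matrix multiplication: each small update
# reuses a tile of A and a tile of B many times, which is what makes the
# blocking profitable on memory-hierarchy-bound hardware.

def sgemm_blocked(A, B, tile=64):
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m), dtype=np.float32)
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile]
                )
    return C

A = np.random.rand(256, 256).astype(np.float32)
B = np.random.rand(256, 256).astype(np.float32)
assert np.allclose(sgemm_blocked(A, B), A @ B, atol=1e-2)  # float32 rounding slack
```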

    Simplified vector-thread architectures for flexible and efficient data-parallel accelerators

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 165-170).

    This thesis explores a new approach to building data-parallel accelerators that is based on simplifying the instruction set, microarchitecture, and programming methodology for a vector-thread architecture. The thesis begins by categorizing regular and irregular data-level parallelism (DLP), before presenting several architectural design patterns for data-parallel accelerators: the multiple-instruction multiple-data (MIMD) pattern, the vector single-instruction multiple-data (vector-SIMD) pattern, the single-instruction multiple-thread (SIMT) pattern, and the vector-thread (VT) pattern. Our recently proposed VT pattern includes many control threads that each manage their own array of microthreads. The control thread uses vector memory instructions to efficiently move data and vector fetch instructions to broadcast scalar instructions to all microthreads. These vector mechanisms are complemented by the ability of each microthread to direct its own control flow. In this thesis, I introduce various techniques for building simplified instances of the VT pattern. I propose unifying the VT control-thread and microthread scalar instruction sets to simplify the microarchitecture and programming methodology. I propose a new single-lane VT microarchitecture based on minimal changes to the vector-SIMD pattern. Single-lane cores are simpler to implement than multi-lane cores and can achieve similar energy efficiency. This new microarchitecture uses control processor embedding to mitigate the area overhead of single-lane cores, and uses vector fragments to handle both regular and irregular DLP more efficiently than previous VT architectures. I also propose an explicitly data-parallel VT programming methodology based on a slightly modified scalar compiler. This methodology is easier to use than assembly programming, yet simpler to implement than an automatically vectorizing compiler. To evaluate these ideas, we have begun implementing the Maven data-parallel accelerator. This thesis compares a simplified Maven VT core to MIMD, vector-SIMD, and SIMT cores. We have implemented these cores with an ASIC methodology, and I use the resulting gate-level models to evaluate the area, performance, and energy of several compiled microbenchmarks. This work is the first detailed quantitative comparison of the VT pattern to other patterns. My results suggest that future data-parallel accelerators based on simplified VT architectures should be able to combine the energy efficiency of vector-SIMD accelerators with the flexibility of MIMD accelerators. By Christopher Francis Batten, Ph.D.
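    The vector-fetch mechanism described above is easiest to see in a toy form: a control thread broadcasts one scalar instruction sequence to an array of microthreads, each of which then follows its own data-dependent control flow. The sketch below is an illustrative simulation of that execution model, not Maven's ISA.

```python
# Toy model of the vector-thread (VT) pattern: vector_fetch broadcasts one
# function (the scalar instruction stream) to every microthread, and each
# microthread branches independently on its own element -- irregular DLP.

def vector_fetch(microthread_func, elements):
    return [microthread_func(x) for x in elements]

def microthread(x):
    if x % 2 == 0:        # data-dependent branch taken per microthread
        return x // 2
    return 3 * x + 1

data = [1, 2, 3, 4, 5, 6, 7, 8]   # stands in for a vector memory load
print(vector_fetch(microthread, data))
```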

    Securing Embedded Systems for Unmanned Aerial Vehicles

    This project focuses on securing embedded systems for unmanned aerial vehicles (UAVs). Over the past two decades, UAVs have evolved from a primarily military tool into one used in many commercial and civil applications. As the market for these products grows, the need to protect transmitted data becomes more important: UAVs fly missions that involve crucial data, and without the right protection they are vulnerable to malicious attacks. The project built a UAV platform and worked to protect the data transmitted on it. The platform was able to detect red-colored objects and wirelessly transmit their coordinates to a remote laptop. The security work focused on the image processing and wireless communications modules.
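    A minimal sketch of those two modules follows: detecting a red object's centroid in a frame, and encrypting the coordinates before transmission. The HSV thresholds and the choice of Fernet (symmetric authenticated encryption from the `cryptography` package) are assumptions for illustration; the report's actual pipeline and cipher are not specified in the abstract.

```python
import cv2
import numpy as np
from cryptography.fernet import Fernet

def find_red_centroid(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                       # no red pixels found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

key = Fernet.generate_key()               # would be shared with the laptop ahead of time
cipher = Fernet(key)

frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[100:120, 150:170] = (0, 0, 255)     # synthetic red patch (BGR)

xy = find_red_centroid(frame)
if xy is not None:
    packet = cipher.encrypt(f"{xy[0]:.1f},{xy[1]:.1f}".encode())
    # `packet` is what would be sent over the wireless link
    print(cipher.decrypt(packet).decode())
```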

    Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5


    Predictive Analysis for Network Data Storm

    The project ‘Predictive Analysis for Network Data Storm’ involves analyzing big data in Splunk, which indexes machine-generated data and allows efficient querying and visualization, to develop a set of thresholds that predict a network meltdown, commonly known as a data storm. The WPI team analyzed multiple datasets to spot patterns and determine the major differences between the normal state and the storm state of the network. A set of rules and thresholds was developed for the Fixed Income Transversal Tools team at BNP Paribas, who implemented the model in their internal real-time monitoring tool ‘SCADA’ to predict and prevent network data storms.
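    The rules the abstract describes amount to threshold tests over rolling traffic statistics. The plain-Python sketch below shows one such rule; the window size and multiplier are illustrative placeholders, since the actual thresholds were tuned against the team's Splunk data.

```python
from collections import deque

# Flag a potential data storm when traffic volume in the current sample
# exceeds a multiple of its recent rolling baseline.

def storm_detector(counts, window=60, multiplier=5.0, min_baseline=1.0):
    """Yield (t, count) for each sample that breaches the threshold."""
    history = deque(maxlen=window)
    for t, c in enumerate(counts):
        if len(history) == window:
            baseline = sum(history) / window
            if c > multiplier * max(baseline, min_baseline):
                yield t, c
        history.append(c)

normal = [10] * 120
storm = [10] * 100 + [400, 800, 900]      # sudden burst of traffic
print(list(storm_detector(normal)))        # [] -- no alerts
print(list(storm_detector(storm)))         # alerts at the burst
```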

    Keeping checkpoint/restart viable for exascale systems

    Next-generation exascale systems, those capable of performing a quintillion operations per second, are expected to be delivered in the next 8-10 years. These systems, which will be 1,000 times faster than current systems, will be of unprecedented scale. As systems continue to grow, faults will become increasingly common, even over the course of small calculations, so issues such as fault tolerance and reliability will limit application scalability. Current techniques to ensure progress across faults, such as checkpoint/restart, the dominant fault tolerance mechanism of the last 25 years, are increasingly problematic at the scales of future systems due to their excessive overheads. In this work, we evaluate a number of techniques to decrease the overhead of checkpoint/restart and keep this method viable for future exascale systems. More specifically, we evaluate state-machine replication to dramatically increase the checkpoint interval (the time between successive checkpoints) and hash-based, probabilistic incremental checkpointing using graphics processing units to decrease the checkpoint commit time (the time to save one checkpoint). Using a combination of empirical analysis, modeling, and simulation, we study the costs and benefits of these approaches across a wide range of parameters. These results, which cover a number of high-performance computing capability workloads, failure distributions, hardware mean times to failure, and I/O bandwidths, show the potential of these techniques for meeting the reliability demands of future exascale platforms.
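    Hash-based incremental checkpointing is simple to sketch: split application memory into blocks, hash each block, and write only blocks whose hash changed since the last checkpoint. "Probabilistic" refers to the small chance that a hash collision silently skips a changed block. The block size below is illustrative, and the paper offloads the hashing to GPUs rather than running it on the host as this sketch does.

```python
import hashlib

BLOCK = 4096  # illustrative block size

def block_hashes(mem):
    return [hashlib.sha256(mem[i:i+BLOCK]).digest() for i in range(0, len(mem), BLOCK)]

def incremental_checkpoint(mem, prev_hashes):
    """Return ({block_index: bytes} of changed blocks, new hash list)."""
    new_hashes = block_hashes(mem)
    dirty = {
        i: bytes(mem[i*BLOCK:(i+1)*BLOCK])
        for i, h in enumerate(new_hashes)
        if i >= len(prev_hashes) or h != prev_hashes[i]
    }
    return dirty, new_hashes

mem = bytearray(16 * BLOCK)
dirty, hashes = incremental_checkpoint(mem, [])   # first checkpoint: all 16 blocks dirty
mem[5 * BLOCK] = 0xFF                             # dirty exactly one block
dirty, hashes = incremental_checkpoint(mem, hashes)
print(sorted(dirty))                              # [5] -- only block 5 is rewritten
```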