
    Gates to Gregg High Voltage Transmission Line Study

    Get PDF
    The usefulness of LANDSAT data in the planning of transmission line routes was assessed. LANDSAT digital data and image processing techniques, specifically a multi-date supervised classification approach, were used to develop a land cover map for an agricultural area near Fresno, California. Twenty-six land cover classes were identified, of which twenty were agricultural crops. High classification accuracies (greater than 80%) were attained for several classes, including cotton, grain, and vineyards. The primary products generated were 1:24,000, 1:100,000, and 1:250,000 scale maps of the classification, along with acreage summaries for all land cover classes within four alternate transmission line routes.
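    As a rough illustration of the multi-date supervised classification idea (not the study's actual processing chain, and with random stand-in data), a Gaussian maximum-likelihood classifier over bands stacked from two acquisition dates could look like this in Python:

    import numpy as np

    # Features: spectral bands from two acquisition dates stacked per pixel,
    # so crop phenology (change between dates) helps separate the classes.
    def stack_dates(bands_date1, bands_date2):
        return np.concatenate([bands_date1, bands_date2], axis=1)

    def train_gaussian_ml(features, labels):
        """Per-class mean and covariance for a maximum-likelihood classifier."""
        return {c: (features[labels == c].mean(axis=0),
                    np.cov(features[labels == c], rowvar=False))
                for c in np.unique(labels)}

    def classify(features, stats):
        """Label each pixel with the class of highest Gaussian log-likelihood."""
        classes = sorted(stats)
        scores = []
        for c in classes:
            mu, cov = stats[c]
            d = features - mu
            maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
            scores.append(-0.5 * (maha + np.linalg.slogdet(cov)[1]))
        return np.asarray(classes)[np.argmax(scores, axis=0)]

    # Hypothetical run on two 4-band dates (random stand-ins; 0 = cotton, 1 = grain).
    rng = np.random.default_rng(0)
    june, august = rng.normal(0, 1, (200, 4)), rng.normal(0, 1, (200, 4))
    august[100:] += 3                          # the second crop greens up later
    feats = stack_dates(june, august)
    labels = np.repeat([0, 1], 100)
    print((classify(feats, train_gaussian_ml(feats, labels)) == labels).mean())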

    EC-CENTRIC: An Energy- and Context-Centric Perspective on IoT Systems and Protocol Design

    Get PDF
    The radio transceiver of an IoT device is often where most of the energy is consumed. For this reason, most research so far has focused on low power circuit and energy efficient physical layer designs, with the goal of reducing the average energy per information bit required for communication. While these efforts are valuable per se, their actual effectiveness can be partially neutralized by ill-designed network, processing and resource management solutions, which can become a primary factor of performance degradation, in terms of throughput, responsiveness and energy efficiency. The objective of this paper is to describe an energy-centric and context-aware optimization framework that accounts for the energy impact of the fundamental functionalities of an IoT system and that proceeds along three main technical thrusts: 1) balancing signal-dependent processing techniques (compression and feature extraction) and communication tasks; 2) jointly designing channel access and routing protocols to maximize the network lifetime; 3) providing self-adaptability to different operating conditions through the adoption of suitable learning architectures and of flexible/reconfigurable algorithms and protocols. After discussing this framework, we present some preliminary results that validate the effectiveness of our proposed line of action, and show how the use of adaptive signal processing and channel access techniques allows an IoT network to dynamically trade lifetime for signal distortion, according to the requirements dictated by the application.
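    To make thrust 1 concrete, here is a toy Python sketch with invented energy numbers (not the paper's): compressing harder costs CPU energy but saves radio energy, and the application's distortion budget decides how far the trade can be pushed.

    E_TX_PER_BIT = 50e-9        # J/bit over the radio (assumed)
    E_CPU_PER_BIT = 5e-9        # J/bit of compression effort (assumed)
    RAW_BITS = 8000             # bits in one uncompressed sensor frame

    def frame_energy(ratio):
        """Energy to compress one frame by `ratio` and transmit the result."""
        processing = E_CPU_PER_BIT * RAW_BITS * ratio   # effort grows with ratio
        radio = E_TX_PER_BIT * RAW_BITS / ratio         # fewer bits on the air
        return processing + radio

    def distortion(ratio):
        """Assumed distortion penalty that grows with compression."""
        return 0.01 * (ratio - 1) ** 2

    # A strict application requirement forces a lighter compression ratio
    # (and a shorter network lifetime) than a loose one.
    for budget in (0.02, 0.5):
        feasible = [r for r in range(1, 21) if distortion(r) <= budget]
        best = min(feasible, key=frame_energy)
        print(budget, best, frame_energy(best))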

    Google Earth Visualizations: Preview and Delivery of Hydrographic and Other Marine Datasets

    Get PDF
    Existing hydrographic data analysis and visualization tools are very powerful, but lack easy access to web data management tools. Virtual globe software provides a gateway to a host of important data products in formats usable by specialized tools such as CARIS, Fledermaus, and Arc/Info. With virtual globe interfaces, users see complementary and consistent geographic representations of available data in an easy-to-navigate format. We present a preview of visualizations that build upon virtual globe software. These examples are viewed in Google Earth, but could also be implemented in a number of alternative programs (e.g. NASA World Wind, Dapple, OSSIM Planet). We have assembled Google Earth visualizations from three datasets to illustrate each of the four primary types of data (point, line, area, and time data). The USCG Marine Information for Safety and Law Enforcement (MISLE) database of ship incidents illustrates point data. A short sample of the USCG National Automatic Identification System logs (N-AIS) demonstrates rendering of line data. Area data is exemplified in the United Nations Convention on the Law of the Sea (UNCLOS) multibeam bathymetry. Point, line and area data are combined to present a preview of S57 chart information. Finally, the MISLE database uses time to show maritime incidents that occurred in US waterways. The visualizations for our initial work were created with hand coding and small scripts. However, tools such as Fledermaus and RockWare have added Google Earth export functionality that makes Google Earth resources easy to construct. For large datasets that require additional processing and analyses, Google Earth visualizations can offer users a range of download formats and suggest what software to use. We believe that this virtual globe-based approach can make geospatial data sets more widely accessible via the World Wide Web.
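    In the spirit of the "hand coding and small scripts" mentioned above, a minimal Python writer for one time-stamped point placemark (covering the point and time data types at once) might look as follows; the coordinates and names are placeholders, not MISLE records:

    import textwrap

    KML_TEMPLATE = textwrap.dedent("""\
        <?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <Document>
            <Placemark>
              <name>{name}</name>
              <TimeStamp><when>{when}</when></TimeStamp>
              <Point><coordinates>{lon},{lat},0</coordinates></Point>
            </Placemark>
          </Document>
        </kml>
        """)

    def write_incident_kml(path, name, lon, lat, when):
        """Write one time-stamped point placemark (point + time in one file)."""
        with open(path, "w") as f:
            f.write(KML_TEMPLATE.format(name=name, lon=lon, lat=lat, when=when))

    write_incident_kml("incident.kml", "Example incident",
                       -70.95, 42.36, "2007-05-01T12:00:00Z")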

    Erbium-doped fiber amplifier elements for structural analysis sensors

    Get PDF
    The use of erbium-doped fiber amplifiers (EDFAs) in optical fiber sensor systems for structural analysis is described. EDFAs were developed for primary applications as periodic regenerator amplifiers in long-distance fiber-based communication systems. Their in-line amplification performance also makes them attractive for optical fiber sensor systems which require long effective lengths or the synthesis of special length-dependent signal processing functions. Sensor geometries incorporating EDFAs in recirculating and multiple loop sensors are discussed. Noise and polarization birefringence are also considered, and the experimental development of system components is discussed.
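    A back-of-envelope check, with assumed loss and gain figures, of why the in-line EDFA matters in a recirculating loop sensor: unless the amplifier gain offsets the round-trip loss, the signal decays on every circulation and the achievable effective length shrinks.

    loop_loss_db = 3.0          # assumed fiber + coupler loss per circulation
    edfa_gain_db = 2.5          # assumed in-line amplifier gain

    net_db = edfa_gain_db - loop_loss_db
    for passes in (1, 10, 100):
        # Even a 0.5 dB shortfall per pass leaves almost nothing after 100 passes.
        print(passes, "passes:", round(10 ** (net_db * passes / 10), 6), "x input")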

    A model for the space shuttle main engine high pressure oxidizer turbopump shaft seal system

    Get PDF
    A simple static model is presented which solves for the flow properties of pressure, temperature, and mass flow in the Space Shuttle Main Engine High Pressure Oxidizer Turbopump shaft seal system. This system includes the primary and secondary turbine seals, the primary and secondary turbine drains, the helium purge seals and feed line, the primary oxygen drain, and the slinger/labyrinth oxygen seal pair. The model predicts the changes in flow variables that occur during and after failures of the various seals. Such information would be particularly useful in a post-flight situation where processing of sensor information using this model could identify a particular seal that had experienced excessive wear. Most of the seals in the system are modeled using simple one-dimensional equations which can be applied to almost any seal provided that the fluid is gaseous. A failure is modeled as an increase in the clearance between the shaft and the seal. Thus, the model does not attempt to predict how the failure process actually occurs (e.g., wear, seal crack initiation). The results presented were obtained using a FORTRAN implementation of the model running on a VAX computer. Solution for the seal system properties is obtained iteratively; however, a further simplified implementation (which does not include the slinger/labyrinth combination) was also developed which provides fast and reasonable results for most engine operating conditions. Results from the model compare favorably with the limited redline data available.
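    A sketch of the kind of one-dimensional gas-seal relation such a model rests on: isentropic flow of helium through an annular clearance treated as an orifice, with a seal failure modeled, as in the paper, by enlarging the clearance. The property values below are placeholders, not SSME data.

    import math

    def seal_mass_flow(p_up, p_down, T_up, clearance, shaft_d,
                       gamma=1.66, R=2077.0):       # helium gas properties
        """Mass flow (kg/s) through an annular clearance around the shaft.

        The flow chokes once the pressure ratio drops below the critical
        value; a failed seal is simply one with a larger `clearance`.
        """
        area = math.pi * shaft_d * clearance
        pr_crit = (2 / (gamma + 1)) ** (gamma / (gamma - 1))
        pr = max(p_down / p_up, pr_crit)             # clamp to choked flow
        term = (2 * gamma / ((gamma - 1) * R * T_up)
                * (pr ** (2 / gamma) - pr ** ((gamma + 1) / gamma)))
        return area * p_up * math.sqrt(term)

    # Doubling the clearance roughly doubles the leakage: the signature a
    # post-flight analysis would look for in the sensor data.
    print(seal_mass_flow(2.0e6, 1.0e5, 300.0, 50e-6, 0.05))
    print(seal_mass_flow(2.0e6, 1.0e5, 300.0, 100e-6, 0.05))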

    The role of edge-based and surface-based information in natural scene categorization: evidence from behavior and event-related potentials

    Get PDF
    A fundamental question in vision research is whether visual recognition is determined by edge-based information (e.g., edge, line, and conjunction) or surface-based information (e.g., color, brightness, and texture). To investigate this question, we manipulated the stimulus onset asynchrony (SOA) between the scene and the mask in a backward masking task of natural scene categorization. The behavioral results showed that correct classification was higher for line-drawings than for color photographs when the SOA was 13 ms, but lower when the SOA was longer. The ERP results revealed that most latencies of early components were shorter for the line-drawings than for the color photographs, and the latencies gradually increased with the SOA for the color photographs but not for the line-drawings. The results provide new evidence that edge-based information is the primary determinant of natural scene categorization, receiving priority processing; by contrast, surface information takes longer to facilitate natural scene categorization.

    Massively Parallel Algorithms for Distance Approximation and Spanners

    Full text link
    Over the past decade, there has been increasing interest in distributed/parallel algorithms for processing large-scale graphs. By now, we have quite fast algorithms -- usually sublogarithmic-time and often $\mathrm{poly}(\log\log n)$-time, or even faster -- for a number of fundamental graph problems in the massively parallel computation (MPC) model. This model is a widely adopted theoretical abstraction of MapReduce-style settings, where a number of machines communicate in an all-to-all manner to process large-scale data. Contributing to this line of work on MPC graph algorithms, we present $\mathrm{poly}(\log k) \in \mathrm{poly}(\log\log n)$-round MPC algorithms for computing $O(k^{1+o(1)})$-spanners in the strongly sublinear regime of local memory. To the best of our knowledge, these are the first sublogarithmic-time MPC algorithms for spanner construction. As primary applications of our spanners, we get two important implications, as follows:
    - For the MPC setting, we get an $O(\log^2\log n)$-round algorithm for $O(\log^{1+o(1)} n)$-approximation of all pairs shortest paths (APSP) in the near-linear regime of local memory. To the best of our knowledge, this is the first sublogarithmic-time MPC algorithm for distance approximations.
    - Our result above also extends to the Congested Clique model of distributed computing, with the same round complexity and approximation guarantee. This gives the first sublogarithmic algorithm for approximating APSP in weighted graphs in the Congested Clique model.
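    For readers unfamiliar with spanners, the following sequential greedy construction (a standard textbook method, emphatically not the paper's MPC algorithm) shows the object being computed: scanning edges by weight and keeping only those not already approximated within the stretch factor yields a spanner with that stretch.

    import heapq

    def bounded_dijkstra(adj, src, dst, bound):
        """Shortest src-dst distance in the partial spanner, abandoning
        any path longer than `bound`."""
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > dist.get(u, float("inf")):
                continue                            # stale heap entry
            for v, w in adj.get(u, []):
                nd = d + w
                if nd <= bound and nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return float("inf")

    def greedy_spanner(edges, stretch):
        """Scan edges by weight; keep (u, v, w) only if the kept edges do
        not already connect u and v within stretch * w."""
        adj, spanner = {}, []
        for u, v, w in sorted(edges, key=lambda e: e[2]):
            if bounded_dijkstra(adj, u, v, stretch * w) > stretch * w:
                spanner.append((u, v, w))
                adj.setdefault(u, []).append((v, w))
                adj.setdefault(v, []).append((u, w))
        return spanner

    # The 0-2 edge is dropped: the detour 0-1-2 has length 2.0 <= 3 * 1.9.
    print(greedy_spanner([(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.9), (2, 3, 1.0)], 3.0))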

    Looking through the eyes of the painter: from visual perception to non-photorealistic rendering

    Get PDF
    In this paper we present a brief overview of the processing in the primary visual cortex, the multi-scale line/edge and keypoint representations, and a model of brightness perception. This model, which is being extended from 1D to 2D, is based on a symbolic line and edge interpretation: lines are represented by scaled Gaussians and edges by scaled, Gaussian-windowed error functions. We show that this model, in combination with standard techniques from graphics, provides a very fertile basis for non-photorealistic image rendering.
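    A minimal numpy sketch of the symbolic primitives described above, with illustrative parameters: a line event rendered as a scaled Gaussian and an edge event as a scaled, Gaussian-windowed error function, superposed into a 1D brightness profile.

    import numpy as np
    from scipy.special import erf

    def line_profile(x, position, amplitude, s):
        """A line event at `position`, detection scale `s`: a scaled Gaussian."""
        return amplitude * np.exp(-((x - position) ** 2) / (2 * s ** 2))

    def edge_profile(x, position, amplitude, s):
        """An edge event: a scaled error function under a Gaussian window
        (window width relative to s chosen here for illustration)."""
        window = np.exp(-((x - position) ** 2) / (2 * (4 * s) ** 2))
        return amplitude * erf((x - position) / (s * np.sqrt(2))) * window

    # 1D brightness reconstruction: sum the profiles of all detected events.
    x = np.linspace(0.0, 100.0, 500)
    brightness = line_profile(x, 30.0, 1.0, 2.0) + edge_profile(x, 70.0, 0.5, 2.0)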

    Data reconstruction with the LHCb VELO: Hit processing, tracking, vertexing and luminosity monitoring

    Get PDF
    The LHCb experiment is dedicated to performing a detailed study of CP symmetry violation and rare decays of B and D mesons. In order to reach these physics goals the LHCb spectrometer must provide excellent vertexing and tracking performance both off-line and on-line in the trigger. The LHCb VELO (VErtex LOcator) is the silicon microstrip detector which surrounds the collision point and hence is critical to these aims. Its hit processing and zero suppression is performed in a series of algorithms implemented on FPGAs. The tuning of the parameters of these algorithms is performed using a bit-perfect emulation of these algorithms integrated into the full off-line software of the experiment. Tracking and vertexing are then performed using the clusters produced. These algorithms are described and the results for primary and secondary vertex resolutions are given. Finally, a novel technique for measuring the absolute luminosity using gas injection in the VELO is described.
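    A toy Python version of the hit-processing chain sketched above (pedestal subtraction, zero suppression, neighbour clustering with charge-weighted centroids); the thresholds and ADC values are placeholders, not the FPGA implementation.

    def make_clusters(adc, pedestal, threshold=10):
        """Cluster positions (in strip units) from raw strip ADC counts."""
        # Zero suppression: keep strips whose pedestal-subtracted charge
        # exceeds the threshold.
        hits = [(i, a - p) for i, (a, p) in enumerate(zip(adc, pedestal))
                if a - p > threshold]
        clusters, current = [], []
        for strip, charge in hits:
            if current and strip != current[-1][0] + 1:   # gap closes a cluster
                clusters.append(current)
                current = []
            current.append((strip, charge))
        if current:
            clusters.append(current)
        # Charge-weighted centroid gives each cluster's position.
        return [sum(s * q for s, q in c) / sum(q for _, q in c) for c in clusters]

    print(make_clusters([12, 30, 45, 13, 11, 12, 40, 12],
                        [10, 10, 10, 10, 10, 10, 10, 10]))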

    rPICARD: A CASA-based Calibration Pipeline for VLBI Data

    Full text link
    Currently, HOPS and AIPS are the primary choices for the time-consuming process of (millimeter) Very Long Baseline Interferometry (VLBI) data calibration. However, for a full end-to-end pipeline, they either lack the ability to perform easily scriptable incremental calibration or do not provide full control over the workflow with the ability to manipulate and edit calibration solutions directly. The Common Astronomy Software Application (CASA) offers all these abilities, together with a secure development future and an intuitive Python interface, which is very attractive for young radio astronomers. Inspired by the recent addition of a global fringe-fitter, the capability to convert FITS-IDI files to measurement sets, and amplitude calibration routines based on ANTAB metadata, we have developed the CASA-based Radboud PIpeline for the Calibration of high Angular Resolution Data (rPICARD). The pipeline will be able to handle data from multiple arrays: EHT, GMVA, VLBA, and the EVN in the first release. Polarization and phase-referencing calibration are supported, and a spectral line mode will be added in the future. The large bandwidths of future radio observatories call for scalable reduction software. Within CASA, a message passing interface (MPI) implementation is used for parallelization, reducing the total time needed for processing. The most significant gain is obtained for the time-consuming fringe-fitting task, where each scan can be processed in parallel.
    Comment: 6 pages, 1 figure, EVN 2018 symposium proceedings
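    A hedged sketch of the calibration steps such a pipeline automates, using CASA task names that exist in recent casatasks releases; the arguments shown are a minimal subset, the file names are placeholders, and this is not rPICARD's actual call sequence.

    from casatasks import applycal, fringefit, importfitsidi

    importfitsidi(fitsidifile=['obs.idifits'], vis='obs.ms')  # FITS-IDI -> MS
    fringefit(vis='obs.ms', caltable='obs.fringe',            # global fringe fit
              refant='EF', solint='inf')
    applycal(vis='obs.ms', gaintable=['obs.fringe'])          # apply solutions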