
    SEAPAK user's guide, version 2.0. Volume 1: System description

    SEAPAK is a user-interactive satellite data analysis package that was developed for the processing and interpretation of Nimbus-7/Coastal Zone Color Scanner (CZCS) and NOAA Advanced Very High Resolution Radiometer (AVHRR) data. Significant revisions were made to version 1.0 of the guide, and the ancillary environmental data analysis module was expanded. The package continues to emphasize user friendliness and interactive data analysis. Additionally, because the scientific goals of the ocean color research being conducted have shifted to large space and time scales, batch processing capabilities for both satellite and ancillary environmental data analyses were enhanced, allowing large quantities of data to be ingested and analyzed in the background.

    SEAPAK user's guide, version 2.0. Volume 2: Descriptions of programs

    SEAPAK is a user-interactive satellite data analysis package that was developed for the processing and interpretation of Nimbus-7/Coastal Zone Color Scanner (CZCS) and NOAA Advanced Very High Resolution Radiometer (AVHRR) data. Significant revisions have been made since version 1.0, and the ancillary environmental data analysis module was greatly expanded. The package continues to be user friendly and interactive. Also, because the scientific goals of the ocean color research being conducted have shifted to large space and time scales, batch processing capabilities for both satellite and ancillary environmental data analyses were enhanced, allowing large quantities of data to be ingested and analyzed.

    SeaWiFS calibration and validation plan, volume 3

    The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) will be the first ocean-color satellite since the Nimbus-7 Coastal Zone Color Scanner (CZCS), which ceased operation in 1986. Unlike the CZCS, which was designed as a proof-of-concept experiment, SeaWiFS will provide routine global coverage every 2 days and is designed to provide estimates of photosynthetic pigment concentrations of sufficient accuracy for use in quantitative studies of the ocean's primary productivity and biogeochemistry. A review of the CZCS mission is included that describes that data set's limitations and provides justification for a comprehensive SeaWiFS calibration and validation program. To accomplish the SeaWiFS scientific objectives, the sensor's calibration must be constantly monitored, and robust atmospheric corrections and bio-optical algorithms must be developed. The plan incorporates a multi-faceted approach to sensor calibration using a combination of vicarious (based on in situ observations) and onboard calibration techniques. Because of budget constraints and the limited availability of ship resources, the development of the operational algorithms (atmospheric and bio-optical) will rely heavily on collaborations with the Earth Observing System (EOS), the Moderate Resolution Imaging Spectrometer (MODIS) oceans team, and projects sponsored by other agencies, e.g., the U.S. Navy and the National Science Foundation (NSF). Other elements of the plan include the routine quality control of the input ancillary data (e.g., surface wind, surface pressure, and ozone concentration) used in processing, and the verification of the processing from level-0 (raw) data to the level-1 (calibrated radiances), level-2 (derived products), and level-3 (gridded and averaged derived data) products.
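    The vicarious calibration described above amounts to comparing the radiance the sensor should have reported, as predicted from in situ observations plus a modeled atmospheric contribution, with the radiance it actually reported. The short Python fragment below is a hypothetical sketch of that per-band gain computation; the names and sample numbers are illustrative only and are not taken from the SeaWiFS project's processing code.

        # Hypothetical per-band vicarious gain: ratio of the radiance predicted from
        # in situ match-up data to the radiance reported by the sensor.
        def vicarious_gain(predicted_toa_radiance, measured_toa_radiance):
            """Return gains g such that g * measured ~= predicted, band by band."""
            return [p / m for p, m in zip(predicted_toa_radiance, measured_toa_radiance)]

        # Six visible bands from a single hypothetical match-up station.
        predicted = [9.82, 8.11, 6.05, 4.47, 2.86, 1.54]
        measured  = [9.60, 8.25, 6.00, 4.60, 2.80, 1.50]
        print([round(g, 3) for g in vicarious_gain(predicted, measured)])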

    A Computational Approach to Packet Classification

    Multi-field packet classification is a crucial component in modern software-defined data center networks. To achieve high throughput and low latency, state-of-the-art algorithms strive to fit the rule lookup data structures into on-die caches; however, they do not scale well with the number of rules. We present a novel approach, NuevoMatch, which improves the memory scaling of existing methods. A new data structure, the Range Query Recursive Model Index (RQ-RMI), is the key component that enables NuevoMatch to replace most of the accesses to main memory with model inference computations. We describe an efficient training algorithm that guarantees the correctness of the RQ-RMI-based classification. The use of RQ-RMI allows the rules to be compressed into model weights that fit into the hardware cache. Further, it takes advantage of the growing support for fast neural network processing in modern CPUs, such as wide vector instructions, achieving a rate of tens of nanoseconds per lookup. Our evaluation using 500K multi-field rules from the standard ClassBench benchmark shows geometric mean compression factors of 4.9x, 8x, and 82x, and average throughput improvements of 2.4x, 2.6x, and 1.6x over CutSplit, NeuroCuts, and TupleMerge, respectively, all state-of-the-art algorithms.
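    The key mechanism is easiest to see in miniature: a small model predicts where a key falls among sorted range boundaries, and the worst prediction error recorded at training time bounds a short local search that guarantees correctness. The Python sketch below illustrates that learned-range-index idea under simplifying assumptions (a single linear stage over one field); it is not the paper's RQ-RMI implementation.

        import bisect

        class LearnedRangeIndex:
            def __init__(self, boundaries):
                self.boundaries = sorted(boundaries)            # sorted range start points
                lo, hi = self.boundaries[0], self.boundaries[-1]
                self.lo = lo
                self.scale = (len(self.boundaries) - 1) / max(hi - lo, 1)
                # "Training": record the worst prediction error so lookups can bound the search.
                self.err = max(abs(self._predict(b) - i) for i, b in enumerate(self.boundaries))

            def _predict(self, key):                            # stand-in for model inference
                return int((key - self.lo) * self.scale)

            def lookup(self, key):
                guess = self._predict(key)
                left = max(0, guess - self.err)
                right = min(len(self.boundaries), guess + self.err + 2)
                # Bounded local search inside the known error window.
                return max(bisect.bisect_right(self.boundaries, key, left, right) - 1, 0)

        idx = LearnedRangeIndex([0, 100, 250, 300, 8000, 65535])
        print(idx.lookup(260))    # -> 2, the range starting at 250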

    Evaluation of NAVA-PAP in premature neonates with apnea of prematurity: minimal backup ventilation and clinically significant events

    Background: Neonates with apnea of prematurity (AOP) clinically deteriorate because continuous positive airway pressure (CPAP) provides inadequate support during apnea. Neurally adjusted ventilatory assist (NAVA) provides ventilator support proportional to the electrical activity of the diaphragm. When the NAVA level is 0 cmH2O/mcV (NAVA-PAP), patients receive CPAP when breathing and backup ventilation when apneic. This study evaluates NAVA-PAP and the time spent in backup ventilation. Methods: This was a prospective, two-center, observational study of preterm neonates on NAVA-PAP for AOP. Ventilator data were downloaded after 24 h, and the number of clinically significant events (CSEs) was collected. A paired t-test was used for statistical analysis. Results: The study included 28 patients with a gestational age of 25 ± 1.8 weeks and a study age of 28 ± 23 days. The number of CSEs was 4 ± 4.39 per 24 h. The patients were on NAVA-PAP approximately 90% of the time, switched to backup mode 2.5 ± 1.1 times/min, and spent 10.6 ± 7.2% of the time in backup ventilation. Conclusion: Preterm neonates on NAVA-PAP had few CSEs and spent minimal time in backup ventilation.

    The P4->NetFPGA Workflow for Line-Rate Packet Processing

    P4 has emerged as the de facto standard language for describing how network packets should be processed, and is becoming widely used by network owners, systems developers, researchers, and in the classroom. The goal of the work presented here is to make it easier for engineers, researchers, and students to learn how to program in P4 and to build prototypes that run on real hardware. Our target is the NetFPGA SUME platform, a 4x10 Gb/s PCIe card designed for use in universities for teaching and research. Until now, NetFPGA users have needed to learn an HDL such as Verilog or VHDL, making the platform off limits to many software developers and students. Therefore, we developed the P4->NetFPGA workflow, allowing developers to describe how packets are to be processed in the high-level P4 language and then compile their P4 programs to run at line rate on the NetFPGA SUME board. The P4->NetFPGA workflow is built upon the Xilinx P4-SDNet compiler and the NetFPGA SUME open-source code base. In this tutorial paper, we provide an overview of the P4 programming language and describe the P4->NetFPGA workflow. We also describe how the workflow is being used by the P4 community to build research prototypes and to teach how network systems are built by giving students hands-on experience with real hardware.
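    At its core, a P4 program describes tables that match on parsed header fields and invoke an action bound to the matching entry. The toy Python sketch below mirrors that match-action abstraction so readers unfamiliar with P4 can see the control flow; it is plain Python, not P4 syntax, and not part of the P4->NetFPGA toolchain.

        # Toy match-action pipeline: exact-match table on destination IP -> (action, parameter).
        def set_egress_port(pkt, port):
            pkt["egress_port"] = port
            return pkt

        def drop(pkt, _):
            pkt["dropped"] = True
            return pkt

        ipv4_forward = {
            "10.0.0.1": (set_egress_port, 1),
            "10.0.0.2": (set_egress_port, 2),
        }

        def ingress(pkt):
            action, arg = ipv4_forward.get(pkt["dst_ip"], (drop, None))
            return action(pkt, arg)

        print(ingress({"dst_ip": "10.0.0.2"}))     # forwarded out port 2
        print(ingress({"dst_ip": "192.0.2.9"}))    # no match -> default action (drop)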

    SeaWiFS technical report series. Volume 26: Results of the SeaWiFS Data Analysis Round-Robin, July 1994 (DARR-1994)

    The accurate determination of upper ocean apparent optical properties (AOPs) is essential for the vicarious calibration of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) instrument and the validation of the derived data products. To evaluate the effect that data analysis methods have on the values of derived AOPs, the first Data Analysis Round-Robin (DARR-94) workshop was sponsored by the SeaWiFS Project during 21-23 July 1994. The focus of this intercomparison study was the estimation of the downwelling irradiance spectrum just beneath the sea surface, E_d(0^-, lambda); the upwelling nadir radiance just beneath the sea surface, L_u(0^-, lambda); and the vertical profile of the diffuse attenuation coefficient spectrum, K_d(z, lambda). In the results reported here, different methodologies from four research groups were applied to an identical set of 10 spectroradiometry casts in order to evaluate the degree to which data analysis methods influence AOP estimation, and whether any general improvements can be made. The overall results of DARR-94 are presented in Chapter 1, and the individual methods of the four groups are presented in Chapters 2-5. The DARR-94 results do not show a clear winner among the data analysis methods evaluated. It is apparent, however, that some degree of outlier rejection is required in order to accurately estimate L_u(0^-, lambda) or E_d(0^-, lambda). Furthermore, the calculation, evaluation, and exploitation of confidence intervals for the AOP determinations need to be explored. That is, the SeaWiFS calibration and validation problem should be recast in statistical terms, where the in situ AOP values are statistical estimates with known confidence intervals.
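    For readers unfamiliar with the quantities involved, one common way to derive K_d from a profile is a log-linear least-squares fit of E_d(z) against depth, which also yields E_d(0^-) from the intercept and a crude confidence interval from the residual variance. The Python sketch below illustrates that generic approach with hypothetical numbers; it does not reproduce any of the four DARR-94 groups' methods, and it omits the outlier rejection the report recommends.

        import math

        def fit_kd(depths, ed):
            """Fit ln E_d(z) = ln E_d(0^-) - K_d * z; return (K_d, E_d(0^-), ~95% CI on K_d)."""
            y = [math.log(e) for e in ed]
            n = len(depths)
            zbar, ybar = sum(depths) / n, sum(y) / n
            sxx = sum((z - zbar) ** 2 for z in depths)
            slope = sum((z - zbar) * (v - ybar) for z, v in zip(depths, y)) / sxx
            intercept = ybar - slope * zbar
            resid = [v - (intercept + slope * z) for z, v in zip(depths, y)]
            se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
            return -slope, math.exp(intercept), 1.96 * se

        depths = [1, 2, 5, 10, 15, 20]                    # m (hypothetical cast)
        ed     = [95.0, 88.0, 70.5, 48.9, 34.0, 23.5]     # hypothetical E_d(z) values
        kd, ed0, ci = fit_kd(depths, ed)
        print(f"Kd ~= {kd:.3f} 1/m (+/- {ci:.3f}), Ed(0-) ~= {ed0:.1f}")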

    Metronome: adaptive and precise intermittent packet retrieval in DPDK

    DPDK (Data Plane Development Kit) is arguably today's most widely employed framework for software packet processing. Its impressive performance, however, comes at the cost of precious CPU resources, which are dedicated to continuously polling the NICs. To address this issue, this paper presents Metronome, an approach devised to replace continuous DPDK polling with a sleep&wake intermittent mode. Metronome revolves around two main innovations. First, we design a microsecond time-scale sleep function, named hr_sleep(), which outperforms Linux's nanosleep() by more than one order of magnitude in precision when running threads with common time-sharing priorities. Then, we design, model, and assess an efficient multi-thread operation that guarantees service continuity and improved robustness against preemptive thread executions, as in common CPU-sharing scenarios, while providing controlled latency and high polling efficiency by dynamically adapting to the measured traffic load.
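    The sleep&wake idea can be summarized in a short control loop: poll the NIC in bursts and, whenever a burst comes back empty, sleep for an interval that adapts to the recently observed load. The Python sketch below illustrates only that loop under simplifying assumptions; Metronome's hr_sleep() and the real DPDK rx-burst APIs are not used here, and the fake_nic stand-in is hypothetical.

        import random
        import time

        MIN_SLEEP, MAX_SLEEP = 5e-6, 200e-6   # bounds on the adaptive sleep, in seconds

        def intermittent_poll(rx_burst, handle, iterations, sleep_s=50e-6):
            """Poll in bursts; when a burst is empty, sleep for an adaptively chosen interval."""
            for _ in range(iterations):
                pkts = rx_burst()
                if pkts:
                    for p in pkts:
                        handle(p)
                    sleep_s = max(MIN_SLEEP, sleep_s / 2)   # traffic present: poll more eagerly
                else:
                    sleep_s = min(MAX_SLEEP, sleep_s * 2)   # idle: back off and yield the CPU
                    time.sleep(sleep_s)

        # Toy demo: a "NIC" that sometimes returns a small burst and sometimes nothing.
        fake_nic = lambda: [object()] * random.randint(1, 4) if random.random() < 0.3 else []
        intermittent_poll(fake_nic, handle=lambda pkt: None, iterations=1000)
        print("done")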