
    The AU Microscopii Debris Disk: Multiwavelength Imaging and Modeling

    (abridged) Debris disks around main sequence stars are produced by the erosion and evaporation of unseen parent bodies. AU Microscopii (GJ 803) is a compelling object to study in the context of disk evolution across different spectral types: it is an M dwarf whose near edge-on disk may be directly compared to that of its A5V sibling beta Pic. We resolve the disk from 8-60 AU in the near-IR JHK' bands at high resolution with the Keck II telescope and adaptive optics, and develop a novel data reduction technique for removal of the stellar point spread function. At some radii, point-source detection in the disk midplane is more than a magnitude less sensitive than in regions away from the disk. We measure a blue color across the near-IR bands and confirm the presence of substructure in the inner disk; some of the structural features exhibit wavelength-dependent positions. The disk architecture and grain composition are inferred through modeling, in a manner that complements previous work. Using a Monte Carlo radiative transfer code, we compare a relatively simple model of the distribution of porous grains to a broad data set, simultaneously fitting the midplane surface brightness profiles and the spectral energy distribution. Our model confirms that the large-scale architecture of the disk is consistent with detailed models of steady-state grain dynamics: a belt of parent bodies from 35-40 AU produces dust that is then swept outward by stellar wind and radiation pressure. We infer the presence of very small grains in the outer region, down to sizes of ~0.05 micron. These sizes are consistent with stellar mass-loss rates Mdot_* << 10^2 Mdot_sun.
    Comment: ApJ accepted, 56 pages, preprint style. Version in emulateapj with high-resolution figures available at http://tinyurl.com/y6ent
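    The sign convention behind the quoted blue color can be illustrated with a short sketch: the disk is "blue" when its color index, measured relative to the star's, is negative. All numbers below are illustrative placeholders, not values from the paper.

```python
# Hedged sketch: judging whether a debris disk's scattered light is
# "blue" or "red" relative to its star in two near-IR bands.
# Surface-brightness and stellar-magnitude values are placeholders.

def relative_color(sb_band1, sb_band2, star_mag1, star_mag2):
    """Disk color relative to the star (mag/arcsec^2 minus stellar mags).

    Negative result -> disk is blue (scatters short wavelengths more
    efficiently than the star's spectrum alone would imply).
    """
    return (sb_band1 - sb_band2) - (star_mag1 - star_mag2)

# Placeholder numbers: disk J and K' surface brightness at one radius,
# and the star's J and K' magnitudes.
color = relative_color(sb_band1=16.0, sb_band2=16.2,
                       star_mag1=5.44, star_mag2=4.53)
print("blue" if color < 0 else "red")  # blue
```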

    Magnesium isotopes of the bulk solar wind from Genesis diamond‐like carbon films

    NASA's Genesis Mission returned solar wind (SW) to the Earth for analysis, to derive the composition of the solar photosphere from solar material. SW analyses control the precision of the derived solar compositions, but their ultimate accuracy is limited by the theoretical or empirical models of fractionation due to SW formation. Mg isotopes are "ground truth" for these models since, except for CAIs, planetary materials have a uniform Mg isotopic composition (within ≤1‰), so any significant isotopic fractionation of SW Mg is primarily that of SW formation and subsequent acceleration through the corona. This study analyzed Mg isotopes in a bulk SW diamond-like carbon (DLC) film on a silicon collector returned by the Genesis Mission. A novel data reduction technique was required to account for variable ion yield and instrumental mass fractionation (IMF) in the DLC. The resulting SW Mg fractionation relative to the DSM-3 laboratory standard was (−14.4‰, −30.2‰) ± (4.1‰, 5.5‰), where the uncertainty is the 2σ SE of the data combined with a 2.5‰ (total) error in the IMF determination. Two of the SW fractionation models considered agreed generally with our data. Possible ramifications for O isotopes are discussed based on the CAI nebular composition of McKeegan et al. (2011).
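    The per-mil delta notation in which the quoted fractionation of (−14.4‰, −30.2‰) is expressed can be sketched as follows; the isotope ratio used below is illustrative (the delta value is independent of it), not a Genesis measurement.

```python
# Hedged sketch: per-mil delta notation used to report isotopic
# fractionation relative to a laboratory standard such as DSM-3.
# Ratio values are illustrative, not the Genesis data.

def delta_permil(ratio_sample, ratio_standard):
    """delta = (R_sample / R_standard - 1) * 1000, in per mil."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# Example: a sample whose 25Mg/24Mg ratio is 1.44% lower than the
# standard yields delta = -14.4 permil, matching the magnitude above.
d25 = delta_permil(0.12663 * (1 - 0.0144), 0.12663)
print(round(d25, 1))  # -14.4
```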

    Supersparse Linear Integer Models for Optimized Medical Scoring Systems

    Scoring systems are linear classification models that only require users to add, subtract and multiply a few small numbers in order to make a prediction. These models are in widespread use in the medical community, but are difficult to learn from data because they need to be accurate and sparse, have coprime integer coefficients, and satisfy multiple operational constraints. We present a new method for creating data-driven scoring systems called a Supersparse Linear Integer Model (SLIM). SLIM scoring systems are built by solving an integer program that directly encodes measures of accuracy (the 0-1 loss) and sparsity (the ℓ0-seminorm) while restricting coefficients to coprime integers. SLIM can seamlessly incorporate a wide range of operational constraints related to accuracy and sparsity, and can produce highly tailored models without parameter tuning. We provide bounds on the testing and training accuracy of SLIM scoring systems, and present a new data reduction technique that can improve scalability by eliminating a portion of the training data beforehand. Our paper includes results from a collaboration with the Massachusetts General Hospital Sleep Laboratory, where SLIM was used to create a highly tailored scoring system for sleep apnea screening.
    Comment: This version reflects our findings on SLIM as of January 2016 (arXiv:1306.5860 and arXiv:1405.4047 are out-of-date). The final published version of this article is available at http://www.springerlink.co
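    The core optimization idea, minimizing the 0-1 loss plus an ℓ0 sparsity penalty over small coprime integer coefficients, can be sketched with a toy brute-force search. The paper solves this as an integer program; the data, coefficient range and penalty weight below are invented for illustration.

```python
# Hedged sketch of the SLIM objective: choose small coprime integer
# coefficients minimizing (0-1 loss) + lambda * (number of nonzeros).
# Brute force stands in for the integer program of the paper.
from itertools import product
from math import gcd
from functools import reduce

def slim_brute_force(X, y, coef_range=range(-3, 4), lam=0.1):
    n_features = len(X[0])
    best = None
    for coefs in product(coef_range, repeat=n_features):
        nonzero = [abs(c) for c in coefs if c != 0]
        # enforce coprime nonzero coefficients (all-zero model allowed)
        if nonzero and reduce(gcd, nonzero) != 1:
            continue
        # 0-1 loss: sign of the integer score must match the label
        errors = sum(1 for xi, yi in zip(X, y)
                     if (sum(c * f for c, f in zip(coefs, xi)) > 0) != (yi > 0))
        objective = errors + lam * len(nonzero)
        if best is None or objective < best[0]:
            best = (objective, coefs)
    return best[1]

# Toy data: label is +1 iff the first feature exceeds the second.
X = [(2, 1), (3, 0), (0, 2), (1, 3), (4, 1), (1, 1)]
y = [1, 1, -1, -1, 1, -1]
print(slim_brute_force(X, y))  # (1, -1)
```

The returned model "score = x1 - x2, predict +1 if score > 0" is the sparsest perfect classifier on the toy data, mirroring how the sparsity penalty steers SLIM toward few, small, coprime coefficients.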

    Heat transfer delay method for the fluid velocity evaluation in a multi-turn pulsating heat pipe

    A multi-turn closed-loop pulsating heat pipe made of aluminium is tested in vertical bottom-heated mode at different condenser temperatures, with the aim of providing quantitative information on its flow dynamics through a novel post-processing technique applied to the local wall-to-fluid heat flux evaluated within the adiabatic section. The studied device is made of an annealed aluminium tube (inner/outer diameter: 3/5 mm), folded in 14 turns and partially filled with methanol (volumetric filling ratio: 50%). The aluminium channels are coated with a high-emissivity opaque paint, allowing thermographic measurements on the outer wall by means of a high-resolution medium-wave infrared camera. The proposed method, named the Heat Transfer Delay Method, is validated by means of a dedicated experimental approach. The acquired time-space temperature maps are then used as input data for an inverse heat conduction problem resolution approach that estimates the convective heat flux locally exchanged at the inner wall-fluid interface. The resulting wall-to-fluid heat fluxes are then post-processed by applying the Heat Transfer Delay Method to the oscillatory and circulatory flow modes. The average fluid velocity is assessed at varying working conditions during circulatory flow, reaching values up to 0.77 m/s and 0.3 m/s for condenser temperatures of 20 °C and 10 °C, respectively.
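    The underlying principle, recovering the mean fluid velocity from the transit time of a thermal disturbance between two measurement stations, can be sketched as below. The sensor spacing, sampling rate and signal shape are invented for illustration and are not the paper's experimental values.

```python
# Hedged sketch of a transit-time velocity estimate: the delay dt of a
# heat-flux disturbance between two axial stations a distance dx apart
# gives v = dx / dt. The delay is found by cross-correlating two
# synthetic wall-to-fluid heat-flux signals.
import math

def delay_by_cross_correlation(sig_a, sig_b, dt):
    """Lag (s) of sig_b relative to sig_a maximizing cross-correlation."""
    n = len(sig_a)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(n):
        corr = sum(sig_a[i] * sig_b[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag * dt

dt = 0.001          # 1 kHz sampling (assumed)
dx = 0.05           # 50 mm between stations (assumed)
# Downstream signal is the upstream pulse delayed by 65 samples.
pulse = [math.exp(-((i - 100) / 20.0) ** 2) for i in range(400)]
upstream = pulse
downstream = [0.0] * 65 + pulse[:-65]
delay = delay_by_cross_correlation(upstream, downstream, dt)
print(round(dx / delay, 2))  # 0.77
```

With these made-up numbers the estimate happens to match the 0.77 m/s quoted above; the point is the method, not the value.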

    Toward Understanding Tip Leakage Flows in Small Compressor Cores Including Stator Leakage Flow

    The focus of this work was to provide additional data to supplement the work reported in NASA/CR-2015-218868 (Berdanier and Key, 2015b). The aim of that project was to characterize the fundamental flow physics and the overall performance effects due to increased rotor tip clearance heights in axial compressors. Data have been collected in the three-stage axial research compressor at Purdue University with a specific focus on analyzing the multistage effects resulting from the tip leakage flow. Three separate rotor tip clearances were studied with nominal tip clearance gaps of 1.5 percent, 3.0 percent, and 4.0 percent based on a constant annulus height. Overall compressor performance was previously investigated at four corrected speedlines (100 percent, 90 percent, 80 percent, and 68 percent) for each of the three tip clearance configurations. This study extends the previously published results to include detailed steady and time-resolved pressure data at two loading conditions, nominal loading (NL) and high loading (HL), on the 100 percent corrected speedline for the intermediate clearance level (3.0 percent). Steady detailed radial traverses of total pressure at the exit of each stator row are supported by flow visualization techniques to identify regions of flow recirculation and separation. Furthermore, detailed radial traverses of time-resolved total pressures at the exit of each rotor row have been measured with a fast-response pressure probe. These data were combined with existing three-component velocity measurements to identify a novel technique for calculating blockage in a multistage compressor. Time-resolved static pressure measurements have been collected over the rotor tips for all rotors with each of the three tip clearance configurations for up to five loading conditions along the 100 percent corrected speedline using fast-response piezoresistive pressure sensors. 
These time-resolved static pressure measurements reveal new knowledge about the trajectory of the tip leakage flow through the rotor passage. Further, these data extend previous measurements identifying a modulation of the tip leakage flow due to upstream stator wake propagation. Finally, a novel instrumentation technique has been implemented to measure pressures in the shrouded stator cavities. These data provide boundary conditions relating to the flow across the shrouded stator knife seal teeth. Moreover, the use of fast-response pressure sensors provides a new look at the time-resolved pressure field, yielding instantaneous differential pressures across the seal teeth. Ultimately, the data collected for this project represent a unique data set that contributes to building a better understanding of the tip leakage flow field and its associated loss mechanisms. These data will facilitate future engine design goals leading to small blade heights in the rear stages of high-pressure compressors and aid in the development of new blade designs that are desensitized to the performance penalties attributed to rotor tip leakage flows.
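    For context, a conventional velocity-deficit blockage estimate (not the novel technique developed from the three-component velocity measurements described above) can be sketched as follows; the traverse profile is synthetic.

```python
# Hedged sketch of a standard blockage calculation: blockage is the
# fraction of the passage area effectively lost to low-momentum fluid,
# estimated from a discretized velocity profile as
#   B = 1 - sum(u/u_edge * dA) / sum(dA).
def blockage(velocities, areas, u_edge):
    """Velocity-deficit blockage from discretized traverse data."""
    effective_area = sum(u / u_edge * dA for u, dA in zip(velocities, areas))
    total_area = sum(areas)
    return 1.0 - effective_area / total_area

# Synthetic radial traverse: deficit near the casing from tip leakage.
u = [100, 100, 100, 95, 80, 60]   # m/s, hub to tip
dA = [1.0] * 6                     # equal-area bins (assumed)
print(round(blockage(u, dA, u_edge=100), 3))  # 0.108
```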

    Mixed-Mode I + II fracture characterization of bonded joints using a novel Multi-Mode Apparatus

    This work presents experimental test results to assess the toughness of an adhesive joint, using a previously defined crack-equivalent data reduction scheme applied to a new multi-mode apparatus, inspired by a load jig previously developed by Fernlund and Spelt. The patented jig allows easy alteration of the mode mixity and permits covering the full range of mixed-mode I+II combinations. A data reduction scheme based on specimen compliance, beam theory and the crack equivalent concept is used to overcome several difficulties inherent to the test analysis. The method assumes that the performed test can be viewed as a combination of the double cantilever beam and asymmetrically loaded end-notched flexure tests, which provide mode I and mode II fracture characterization, respectively. A numerical analysis including a cohesive mixed-mode I+II damage model was performed considering different mixed-mode loading conditions to validate the proposed data reduction scheme. Issues regarding self-similar crack growth and fracture process zone development are discussed. It was verified that the considered in-plane mixed-mode fracture criterion is well captured using the proposed data reduction scheme.
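    A power-law mixed-mode I+II criterion of the kind such schemes are validated against can be sketched as follows; the toughness values and exponents are illustrative, not results of the tests described above.

```python
# Hedged sketch of a power-law mixed-mode I+II fracture criterion:
#   (G_I / G_Ic)^alpha + (G_II / G_IIc)^beta >= 1  -> fracture.
# With alpha = beta = 1 this reduces to the linear criterion.
def mixed_mode_failure(g1, g2, g1c, g2c, alpha=1.0, beta=1.0):
    """True when the criterion predicts fracture (illustrative values)."""
    return (g1 / g1c) ** alpha + (g2 / g2c) ** beta >= 1.0

# Example: 60% of mode-I toughness plus 50% of mode-II toughness
# exceeds the linear envelope, so fracture is predicted.
print(mixed_mode_failure(g1=0.6, g2=1.0, g1c=1.0, g2c=2.0))  # True
```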

    Phase-averaged flow statistics in compressors using a rotated hot-wire technique

    A technique based on a rotated hot wire has been developed to characterise the unsteady, three-dimensional flow field between compressor blade rows. Data are acquired from a slanted hot wire rotated through a number of orientations at each measurement point. Phase-averaged velocity statistics are obtained by solving a set of sensor response equations using a weighted, non-linear regression algorithm. The accuracy and robustness of the method were verified a priori by conducting a series of tests using synthetic data. The method is demonstrated by acquiring a full set of phase-averaged flow statistics in the wake of a compressor stator blade row. The technique allows three components of phase-averaged velocity, six components of phase-averaged deterministic stress, and six components of phase-averaged Reynolds stress to be recovered using a single rotated hot-wire probe.
    The authors gratefully acknowledge Rolls-Royce plc and the UK TSB TUFT programme for funding this work and granting permission for its publication.
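    The regression principle can be illustrated with a toy version: given a known directional response, effective cooling velocities measured at several probe orientations form an overdetermined system for the velocity components. This sketch assumes a simple cosine response and recovers two in-plane components by an integer grid search, whereas the actual method uses a weighted non-linear regression for the full 3D statistics.

```python
# Hedged sketch: recover velocity components from effective cooling
# velocities measured at several wire orientations. The cosine response
# model and all numbers are assumptions for illustration only.
import math

def recover_velocity(angles, u_eff, search=range(-20, 21)):
    """Grid-search least squares for (u, v) given u_eff at each angle."""
    def residual(u, v):
        return sum((u * math.cos(a) + v * math.sin(a) - m) ** 2
                   for a, m in zip(angles, u_eff))
    return min(((u, v) for u in search for v in search),
               key=lambda p: residual(*p))

# Six probe orientations and synthetic noise-free measurements.
angles = [math.radians(d) for d in (0, 30, 60, 90, 120, 150)]
true_u, true_v = 12, -5   # assumed flow components, m/s
u_eff = [true_u * math.cos(a) + true_v * math.sin(a) for a in angles]
print(recover_velocity(angles, u_eff))  # (12, -5)
```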

    Enhanced non-parametric sequence learning scheme for internet of things sensory data in cloud infrastructure

    The Internet of Things (IoT) Cloud is an emerging technology that enables machine-to-machine, human-to-machine and human-to-human interaction through the Internet. IoT sensor devices tend to generate sensory data that are dynamic and heterogeneous, which makes them difficult to manage on the sensor devices themselves given their limited computation power and storage space. Cloud Infrastructure as a Service (IaaS) compensates for these limitations by making its computation power and storage resources available to process IoT sensory data. In IoT-Cloud IaaS, resource allocation is the process of distributing optimal resources to execute data request tasks that comprise data filtering operations. Recently, machine learning, non-heuristic, multi-objective and hybrid algorithms have been applied for efficient resource allocation to execute IoT sensory data filtering request tasks in IoT-enabled Cloud IaaS. However, the filtering task is still prone to several challenges: global search entrapment in event and error outlier detection as the dimension of the dataset increases, the inability to recover missing data for effective redundant data elimination, and local search entrapment that leads to unbalanced workloads on the resources required for task execution. In this thesis, enhancements of the Non-Parametric Sequence Learning (NPSL), Perceptually Important Point (PIP) and Efficient Energy Resource Ranking-Virtual Machine Selection (ERVS) algorithms are proposed. The Non-Parametric Sequence-based Agglomerative Gaussian Mixture Model (NPSAGMM) technique was first used to improve the detection of event and error outliers in the global space as the dimension of the dataset increases. Then, the Perceptually Important Points K-means-enabled Cosine and Manhattan (PIP-KCM) technique was employed to recover missing data and improve the elimination of duplicate sensed records.
Finally, an Efficient Resource Balance Ranking-based Glowworm Swarm Optimization (ERBV-GSO) technique was used to resolve local search entrapment, obtain near-optimal solutions and reduce workload imbalance on the resources available for task execution in the IoT-Cloud IaaS platform. Experiments were carried out using the NetworkX simulator, and the results of the NPSAGMM, PIP-KCM and ERBV-GSO techniques were compared with the NPSL, PIP, ERVS and Resource Fragmentation Aware (RF-Aware) algorithms. The experimental results showed that the proposed techniques improved performance by 3.602%/6.74% in Precision, 9.724%/8.77% in Recall and 5.350%/4.42% in Area Under Curve for the detection of event and error outliers. Furthermore, the results indicated an improvement of 94.273% in F1-score, a 0.143 reduction ratio and a minimum 0.149% root mean squared error for redundant data elimination, as well as a minimum of 608 virtual machine migrations, 47.62% resource utilization and a 41.13% load balancing degree for the allocation of resources deployed to execute sensory data filtering tasks. The proposed techniques thus prove effective for improving load balancing when allocating resources to execute outlier (event and error) detection and to eliminate redundant data records in the IoT-based Cloud IaaS infrastructure.
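    The redundant-data-elimination step can be illustrated with a minimal sketch: drop a sensed record when it lies within a distance threshold of an already-kept record. The records and threshold are invented, and this stand-in omits the missing-data recovery and k-means clustering of the actual PIP-KCM technique.

```python
# Hedged sketch of duplicate-record elimination under a Manhattan
# distance threshold; values and threshold are invented.
def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def deduplicate(records, threshold=1.0):
    kept = []
    for r in records:
        # keep a record only if it is far from every record kept so far
        if all(manhattan(r, k) > threshold for k in kept):
            kept.append(r)
    return kept

# Synthetic (temperature, humidity) readings with near-duplicates.
readings = [(21.0, 40.0), (21.1, 40.2), (25.0, 38.0), (21.05, 40.1)]
print(len(deduplicate(readings)))  # 2
```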