
    Block copolymers of poly(ethylene oxide) and poly(methyl methacrylate)

    A series of five AB block copolymers of poly(ethylene oxide) (PEO) and poly(methyl methacrylate) (PMMA) has been synthesised by the coupling of mono-functional homopolymers via an esterification reaction. All polymers in this series contain a PMMA component of number average molecular weight 908 g mol⁻¹, as measured by end group analysis, and the PEO components have number average molecular weights of 596, 689, 979, 2023 and 2884 g mol⁻¹, as measured by proton nuclear magnetic resonance spectroscopy. Aqueous "solutions" of these copolymers have been prepared both by direct mixing and via methanol, a solvent for both blocks of the copolymer. Cloud points for these copolymers have been determined and range from 275 K to 368 K for the lowest and highest PEO blocks respectively. Small angle X-ray scattering (SAXS) of aqueous solutions has been interpreted in terms of a core-shell model and dimensions determined for both core and shell. Radii of gyration for the micellar cores have been shown to vary very little with copolymer composition, concentration and temperature up to the cloud point. Fringe thicknesses show a dependence on PEO block length, and relating measured fringe thicknesses to calculated chain conformations indicates that the chain conformation is best described as an unperturbed chain. The measured fringe thickness is not altered by concentration or temperature up to the cloud point. Above the cloud point it is not possible to interpret the SAXS data in terms of a core-shell micellar model. Significant differences in the SAXS data have been observed depending upon the mode of addition of copolymer to water; this can be interpreted as differences in micellar structure. With the exception of the lowest molecular weight copolymer, all of the copolymers could be used as steric stabilisers for the aqueous emulsion polymerisation of methyl methacrylate. Polymerisations were only successful if the copolymer was introduced to the aqueous phase either via methanol or via the monomer. Direct addition of copolymer to water resulted in low polymerisation rates and unstable/flocculated products. The emulsions produced have been shown to be stable at pH levels where the electrophoretic mobility was zero, i.e. the emulsions were sterically stabilised with no contribution from ionic/dipole interactions.
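    As context for the unperturbed-chain interpretation above, the textbook Gaussian-chain relations used to estimate unperturbed dimensions are recalled below; the identification of n and l with the PEO backbone (three backbone bonds per 44 g mol⁻¹ repeat unit) is an illustrative assumption, not a value taken from this work.
```latex
% Unperturbed (theta-condition) dimensions of a flexible chain:
% mean-square end-to-end distance and the corresponding radius of gyration.
\langle r^{2} \rangle_{0} = C_{\infty}\, n\, l^{2},
\qquad
R_{g,0} = \sqrt{\tfrac{1}{6}\,\langle r^{2} \rangle_{0}}
% n: number of backbone bonds (approximately 3 M_n / 44 for PEO),
% l: backbone bond length, C_\infty: characteristic ratio.
```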

    Black swans, cognition and the power of learning from failure

    Failure carries undeniable stigma and is difficult to confront for individuals, teams, and organizations. Disciplines such as commercial and military aviation, medicine, and business have long histories of grappling with it, beginning with the recognition that failure is inevitable in every human endeavor. While conservation may arguably be more complex, conservation professionals can draw upon the research and experience of these other disciplines to institutionalize activities and attitudes that foster learning from failures, whether they are minor setbacks or major disasters. Understanding the role of individual cognitive biases, team psychological safety, and organizational willingness to support critical self-examination contributes to creating a cultural shift in conservation to one that is open to the learning opportunity that failure provides. This new approach to managing failure is a necessary next step in the evolution of conservation effectiveness.

    Query-based Hard-Image Retrieval for Object Detection at Test Time

    There is a longstanding interest in capturing the error behaviour of object detectors by finding images where their performance is likely to be unsatisfactory. In real-world applications such as autonomous driving, it is also crucial to characterise potential failures beyond simple requirements of detection performance. For example, a missed detection of a pedestrian close to an ego vehicle will generally require closer inspection than a missed detection of a car in the distance. The problem of predicting such potential failures at test time has largely been overlooked in the literature and conventional approaches based on detection uncertainty fall short in that they are agnostic to such fine-grained characterisation of errors. In this work, we propose to reformulate the problem of finding "hard" images as a query-based hard image retrieval task, where queries are specific definitions of "hardness", and offer a simple and intuitive method that can solve this task for a large family of queries. Our method is entirely post-hoc, does not require ground-truth annotations, is independent of the choice of a detector, and relies on an efficient Monte Carlo estimation that uses a simple stochastic model in place of the ground-truth. We show experimentally that it can be applied successfully to a wide variety of queries for which it can reliably identify hard images for a given detector without any labelled data. We provide results on ranking and classification tasks using the widely used RetinaNet, Faster-RCNN, Mask-RCNN, and Cascade Mask-RCNN object detectors. The code for this project is available at https://github.com/fiveai/hardest
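    The abstract describes the Monte Carlo estimation only at a high level; the sketch below is a guess at the general idea under stated assumptions (sampling pseudo-ground-truth boxes from the detector's own scored outputs and averaging a user-defined hardness query over the samples). Function and variable names are hypothetical and are not taken from the fiveai/hardest code.
```python
import numpy as np

def estimate_hardness(detections, query_fn, n_samples=100, rng=None):
    """Monte Carlo estimate of a query-specific 'hardness' score for one image.

    detections: list of (box, score) pairs produced by the detector.
    query_fn:   user-defined function mapping (pseudo_gt_boxes, detections) -> float,
                e.g. a penalty for missed pedestrians close to the ego vehicle.
    The pseudo ground truth is sampled by treating each detection score as the
    probability that the corresponding object really exists -- a simple stochastic
    model standing in for labels, used here purely as an illustrative assumption.
    """
    rng = rng or np.random.default_rng(0)
    scores = np.array([s for _, s in detections])
    hardness = 0.0
    for _ in range(n_samples):
        keep = rng.random(len(scores)) < scores            # sample which boxes are "real"
        pseudo_gt = [box for (box, _), k in zip(detections, keep) if k]
        hardness += query_fn(pseudo_gt, detections)
    return hardness / n_samples

# Images can then be ranked by this estimate and the top-k retrieved as "hard"
# for the given query, without any ground-truth annotations.
```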

    Use of MMG signals for the control of powered orthotic devices: Development of a rectus femoris measurement protocol

    Copyright © 2009 Rehabilitation Engineering and Assistive Technology Society (RESNA). This is an Author's Accepted Manuscript of an article published in Assistive Technology, 21(1), 1-12, 2009, copyright Taylor & Francis, available online at: http://www.tandfonline.com/10.1080/10400430902945678. A test protocol is defined for the purpose of measuring rectus femoris mechanomyographic (MMG) signals. The protocol is specified in terms of the following: measurement equipment, signal processing requirements, human postural requirements, test rig, sensor placement, sensor dermal fixation, and test procedure. Preliminary tests of the statistical nature of rectus femoris MMG signals were performed, and Gaussianity was evaluated by means of a two-sided Kolmogorov-Smirnov test. For all 100 MMG data sets obtained from the testing of two volunteers, the null hypothesis of Gaussianity was rejected at the 1%, 5%, and 10% significance levels. Most skewness values were found to be greater than 0.0, while all kurtosis values were found to be greater than 3.0. A statistical convergence analysis also performed on the same 100 MMG data sets suggested that 25 MMG acquisitions should prove sufficient to statistically characterize rectus femoris MMG. This conclusion is supported by the qualitative characteristics of the mean rectus femoris MMG power spectral densities obtained using 25 averages.
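    A minimal sketch of the kind of Gaussianity check described above, using SciPy; the standardisation step, thresholds in the returned summary, and variable names are illustrative assumptions rather than details of the published protocol.
```python
import numpy as np
from scipy import stats

def gaussianity_summary(mmg, alpha=0.05):
    """Two-sided Kolmogorov-Smirnov test of an MMG record against a standard
    normal, plus sample skewness and (Pearson) kurtosis for comparison with
    the reported reference values of 0.0 and 3.0."""
    x = (mmg - np.mean(mmg)) / np.std(mmg)        # standardise before testing vs N(0, 1)
    ks_stat, p_value = stats.kstest(x, "norm")    # two-sided by default
    return {
        "reject_gaussian": p_value < alpha,
        "ks_statistic": ks_stat,
        "p_value": p_value,
        "skewness": stats.skew(mmg),
        "kurtosis": stats.kurtosis(mmg, fisher=False),  # Pearson definition: 3.0 for a Gaussian
    }

# Example with synthetic data standing in for a recorded MMG burst:
# print(gaussianity_summary(np.random.default_rng(0).laplace(size=2048)))
```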

    Optimisation of variables for studying dilepton transverse momentum distributions at hadron colliders

    In future measurements of the dilepton (Z/γ*) transverse momentum, Q_T, at both the Tevatron and LHC, the achievable bin widths and the ultimate precision of the measurements will be limited by experimental resolution rather than by the available event statistics. In a recent paper the variable a_T, which corresponds to the component of Q_T that is transverse to the dilepton thrust axis, has been studied in this regard. In the region Q_T ≪ 30 GeV, a_T has been shown to be less susceptible to experimental resolution and efficiency effects than Q_T. Extending over all Q_T, we now demonstrate that dividing a_T (or Q_T) by the measured dilepton invariant mass further improves the resolution. In addition, we propose a new variable, φ*_η, that is determined exclusively from the measured lepton directions; this is even more precisely determined experimentally than the above variables and is similarly sensitive to Q_T. The greater precision achievable using such variables will enable more stringent tests of QCD and tighter constraints on Monte Carlo event generator tunes.
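    Since the abstract only names the variables, the sketch below recalls what is, to the best of our reading, the standard construction of φ*_η from the two lepton directions alone; treat the exact conventions (sign of Δη, azimuthal wrapping) as assumptions rather than a restatement of the paper.
```python
import numpy as np

def phi_star_eta(eta_minus, phi_minus, eta_plus, phi_plus):
    """phi*_eta built only from the lepton pseudorapidities and azimuthal angles:
    phi*_eta = tan(phi_acop / 2) * sin(theta*_eta), with
    phi_acop = pi - |dphi| and cos(theta*_eta) = tanh((eta^- - eta^+) / 2)."""
    dphi = np.abs(phi_minus - phi_plus)
    dphi = np.where(dphi > np.pi, 2.0 * np.pi - dphi, dphi)   # wrap into [0, pi]
    phi_acop = np.pi - dphi
    cos_theta = np.tanh(0.5 * (eta_minus - eta_plus))
    return np.tan(0.5 * phi_acop) * np.sqrt(1.0 - cos_theta**2)

# Because only angles enter, the experimental resolution on phi*_eta is set by
# the lepton direction measurements rather than by momentum or energy resolution.
```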

    The design and characterization of a 300 channel, optimized full-band millimeter filterbank for science with SuperSpec

    SuperSpec is an integrated, on-chip spectrometer for millimeter and sub-millimeter astronomy. We report the approach, design optimization, and partial characterization of a 300-channel filterbank covering the 185 to 315 GHz frequency band that targets a resolving power R ~ 310 and fits on a 3.5 × 5.5 cm chip. SuperSpec uses a lens and broadband antenna to couple radiation into a niobium microstrip that feeds a bank of niobium microstrip half-wave resonators for frequency selectivity. Each half-wave resonator is coupled to the inductor of a titanium nitride lumped-element kinetic inductance detector (LEKID) that detects the incident radiation. The device was designed for use in a demonstration instrument at the Large Millimeter Telescope (LMT).
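    A minimal sketch of how one might lay out channel centre frequencies for a filterbank of this kind, assuming logarithmic spacing set by the target resolving power and an oversampling factor; the oversampling value and the helper name are illustrative assumptions, not SuperSpec design parameters.
```python
import numpy as np

def filterbank_channels(f_lo_ghz=185.0, f_hi_ghz=315.0, resolving_power=310.0,
                        oversampling=1.8):
    """Centre frequencies for log-spaced channels: adjacent channels are separated
    by df = f / (R * oversampling), i.e. a constant ratio between neighbours."""
    ratio = 1.0 + 1.0 / (resolving_power * oversampling)
    n_channels = int(np.floor(np.log(f_hi_ghz / f_lo_ghz) / np.log(ratio))) + 1
    return f_lo_ghz * ratio ** np.arange(n_channels)

channels = filterbank_channels()
print(len(channels))   # roughly 300 channels across 185-315 GHz for these assumed numbers
```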

    Radiation damage in the LHCb vertex locator

    The LHCb Vertex Locator (VELO) is a silicon strip detector designed to reconstruct charged particle trajectories and vertices produced at the LHCb interaction region. During the first two years of data collection, the 84 VELO sensors have been exposed to a range of fluences up to a maximum value of approximately 45 × 10¹² 1 MeV neutron equivalent (1 MeV neq). At the operational sensor temperature of approximately −7 °C, the average rate of sensor current increase is 18 μA per fb⁻¹, in excellent agreement with predictions. The silicon effective bandgap has been determined using current versus temperature scan data after irradiation, with an average value of E_g = 1.16 ± 0.03 ± 0.04 eV obtained. The first observation of n⁺-on-n sensor type inversion at the LHC has been made, occurring at a fluence of around 15 × 10¹² 1 MeV neq. The only n⁺-on-p sensors in use at the LHC have also been studied. With an initial fluence of approximately 3 × 10¹² 1 MeV neq, a decrease in the Effective Depletion Voltage (EDV) of around 25 V is observed. Following this initial decrease, the EDV increases at a comparable rate to the type-inverted n⁺-on-n sensors, with rates of (1.43 ± 0.16) × 10⁻¹² V / 1 MeV neq and (1.35 ± 0.25) × 10⁻¹² V / 1 MeV neq measured for n⁺-on-p and n⁺-on-n type sensors, respectively. A reduction in the charge collection efficiency due to an unexpected effect involving the second metal layer readout lines is observed.
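    The effective bandgap quoted above comes from current-versus-temperature scans; a minimal sketch of the kind of fit commonly used for silicon sensor leakage current is shown below, assuming the standard I(T) ∝ T² exp(−E_eff / 2k_B T) parameterisation rather than the exact VELO procedure.
```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def fit_effective_bandgap(T_kelvin, I_amps):
    """Extract E_eff from a current-vs-temperature scan, assuming the standard
    silicon parameterisation I(T) = A * T^2 * exp(-E_eff / (2 k_B T)).
    Taking logs gives ln(I / T^2) = ln A - E_eff / (2 k_B T), a straight line
    in 1/T whose slope yields E_eff."""
    T = np.asarray(T_kelvin, dtype=float)
    I = np.asarray(I_amps, dtype=float)
    slope, _intercept = np.polyfit(1.0 / T, np.log(I / T**2), 1)
    return -2.0 * K_B * slope   # effective bandgap in eV

# Example with synthetic scan data generated from E_eff = 1.16 eV:
T = np.linspace(250.0, 290.0, 9)
I = 1e-2 * T**2 * np.exp(-1.16 / (2 * K_B * T))
print(fit_effective_bandgap(T, I))   # recovers ~1.16
```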

    Mouse transcriptome reveals potential signatures of protection and pathogenesis in human tuberculosis

    Although mouse infection models have been extensively used to study the host response to Mycobacterium tuberculosis, their validity in revealing determinants of human tuberculosis (TB) resistance and disease progression has been heavily debated. Here, we show that the modular transcriptional signature in the blood of susceptible mice infected with a clinical isolate of M. tuberculosis resembles that of active human TB disease, with dominance of a type I interferon response and neutrophil activation and recruitment, together with a loss in B lymphocyte, natural killer and T cell effector responses. In addition, resistant but not susceptible strains of mice show increased B cell, natural killer and T cell effector responses in the lung upon infection. Notably, the blood signature of active disease shared by mice and humans is also evident in latent TB progressors before diagnosis, suggesting that these responses both predict and contribute to the pathogenesis of progressive M. tuberculosis infection.