
    Dominant g_{9/2}^2 neutron configuration in the 4_1^+ state of 68Zn based on new g factor measurements

    The g factor of the 4_1^+ state in ^{68}Zn has been remeasured with improved energy resolution of the detectors used. The value obtained is consistent with the previous result of a negative g factor, thus confirming the dominant 0g_{9/2} neutron nature of the 4_1^+ state. In addition, the accuracy of the g factors of the 2_1^+, 2_2^+ and 3_1^- states has been improved and their lifetimes were well reproduced. New large-scale shell model calculations based on a ^{56}Ni core and a 0f_{5/2}1pg_{9/2} model space yield a theoretical value, g(4_1^+) = +0.008. Although the calculated value is small, it cannot fully explain the experimental value, g(4_1^+) = -0.37(17). The magnitude of the deduced B(E2) between the 4_1^+ and 2_1^+ states is, however, rather well described. These results demonstrate again the importance of g factor measurements for nuclear structure determinations due to their specific sensitivity to detailed proton and neutron components in the nuclear wave functions. Comment: 7 pages, 3 figs, submitted to PL

    Controversy in statistical analysis of functional magnetic resonance imaging data

    To test the validity of statistical methods for fMRI data analysis, Eklund et al. (1) used, for the first time, large-scale experimental data rather than simulated data. Using resting-state fMRI measurements to represent a null hypothesis of no task-induced activation, the authors compared familywise error rates for voxel-based and cluster-based inferences, for both parametric and nonparametric methods, across three fMRI statistical analysis packages. They found that, for a target familywise error rate of 5%, the parametric methods gave invalid cluster-based inferences and conservative voxel-based inferences.
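
    A rough sketch of the kind of null-data check described above: assemble sham "task" groups from resting-state scans, run the group analysis, and record how often anything survives the nominal 5% threshold. The analysis and thresholding calls below are hypothetical placeholders, not functions from the packages evaluated by Eklund et al.; the sketch only illustrates how an empirical familywise error rate would be estimated.

        import random

        def run_group_analysis(group_a, group_b):
            """Hypothetical placeholder for a two-sample group analysis call."""
            raise NotImplementedError  # stands in for the real package's group statistics

        def count_significant_clusters(stat_map, alpha):
            """Hypothetical placeholder for cluster-level thresholding at level alpha."""
            raise NotImplementedError

        def empirical_fwe(resting_scans, n_analyses=1000, group_size=20, alpha=0.05):
            """Estimate the familywise error rate from null (resting-state) data.

            Returns the fraction of analyses in which at least one cluster was
            declared significant; for a valid procedure this should be close to alpha.
            """
            false_positive_runs = 0
            for _ in range(n_analyses):
                # Randomly assemble two sham "task" groups from resting-state scans.
                sample = random.sample(resting_scans, 2 * group_size)
                group_a, group_b = sample[:group_size], sample[group_size:]
                stat_map = run_group_analysis(group_a, group_b)
                if count_significant_clusters(stat_map, alpha) > 0:
                    false_positive_runs += 1
            return false_positive_runs / n_analyses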

    Automatic, global registration in laparoscopic liver surgery

    PURPOSE: The initial registration of a 3D pre-operative CT model to a 2D laparoscopic video image in augmented reality systems for liver surgery needs to be fast, intuitive to perform and with minimal interruptions to the surgical intervention. Several recent methods have focussed on using easily recognisable landmarks across modalities. However, these methods still need manual annotation or manual alignment. We propose a novel, fully automatic pipeline for 3D-2D global registration in laparoscopic liver interventions. METHODS: Firstly, we train a fully convolutional network for the semantic detection of liver contours in laparoscopic images. Secondly, we propose a novel contour-based global registration algorithm to estimate the camera pose without any manual input during surgery. The contours used are the anterior ridge and the silhouette of the liver. RESULTS: We show excellent generalisation of the semantic contour detection on test data from 8 clinical cases. In quantitative experiments, the proposed contour-based registration can successfully estimate a global alignment with as little as 30% of the liver surface, a visibility ratio which is characteristic of laparoscopic interventions. Moreover, the proposed pipeline showed very promising results in clinical data from 5 laparoscopic interventions. CONCLUSIONS: Our proposed automatic global registration could make augmented reality systems more intuitive and usable for surgeons and easier to translate to operating rooms. Yet, as the liver is deformed significantly during surgery, it will be very beneficial to incorporate deformation into our method for more accurate registration
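
    As an illustration only, and not the authors' registration algorithm, a contour-based 3D-2D alignment can be posed as minimising the distance between projected 3D model contour points and the liver contours detected in the laparoscopic image. The sketch below assumes a known pinhole intrinsic matrix K, a rigid liver and a simple nearest-neighbour data association.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def project(points_3d, rvec, tvec, K):
            """Project 3D model points into the image with a pinhole camera model."""
            R = Rotation.from_rotvec(rvec).as_matrix()
            cam = points_3d @ R.T + tvec        # world -> camera coordinates
            uv = cam @ K.T                      # apply intrinsics
            return uv[:, :2] / uv[:, 2:3]       # perspective division

        def register_contours(model_contour_3d, image_contour_2d, K, pose0=None):
            """Estimate a rigid camera pose by aligning projected model contours
            (anterior ridge, silhouette) to contours detected in the image."""
            if pose0 is None:
                pose0 = np.zeros(6)             # rotation vector (3) + translation (3)

            def residuals(pose):
                proj = project(model_contour_3d, pose[:3], pose[3:], K)
                # Distance from each projected point to its nearest image contour point.
                d = np.linalg.norm(proj[:, None, :] - image_contour_2d[None, :, :], axis=2)
                return d.min(axis=1)

            return least_squares(residuals, pose0).x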

    Gesture Recognition in Robotic Surgery with Multimodal Attention

    Automatically recognising surgical gestures from surgical data is an important building block of automated activity recognition and analytics, technical skill assessment, intra-operative assistance and eventually robotic automation. The complexity of articulated instrument trajectories and the inherent variability due to surgical style and patient anatomy make analysis and fine-grained segmentation of surgical motion patterns from robot kinematics alone very difficult. Surgical video provides crucial information from the surgical site, giving context for the kinematic data and the interaction between the instruments and tissue. Yet sensor fusion between the robot data and the surgical video stream is non-trivial because the two streams differ in frequency, dimensionality and discriminative capability. In this paper, we integrate multimodal attention mechanisms in a two-stream temporal convolutional network to compute relevance scores and weight kinematic and visual feature representations dynamically in time, aiming to aid multimodal network training and achieve effective sensor fusion. We report the results of our system on the JIGSAWS benchmark dataset and on a new in vivo dataset of suturing segments from robotic prostatectomy procedures. Our results are promising: the multimodal network obtains prediction sequences with higher accuracy and better temporal structure than the corresponding unimodal solutions. Visualization of the attention scores also gives physically interpretable insight into how the network weighs the strengths and weaknesses of each sensor.
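
    The fusion idea can be sketched as follows; this is not the paper's exact architecture, and the layer sizes, kinematic dimension and class count are illustrative assumptions. Per-time-step attention weights are computed from the concatenated kinematic and visual features and used to reweight each stream before a temporal convolution produces per-frame gesture predictions.

        import torch
        import torch.nn as nn

        class MultimodalAttentionFusion(nn.Module):
            """Toy two-stream fusion: attention weights each modality per time step."""

            def __init__(self, kin_dim=76, vis_dim=128, hidden=64, n_classes=10):
                super().__init__()
                # One relevance score per modality at every time step.
                self.attn = nn.Sequential(
                    nn.Linear(kin_dim + vis_dim, hidden),
                    nn.ReLU(),
                    nn.Linear(hidden, 2),
                    nn.Softmax(dim=-1),
                )
                self.kin_proj = nn.Linear(kin_dim, hidden)
                self.vis_proj = nn.Linear(vis_dim, hidden)
                # Temporal convolution over the fused sequence.
                self.tcn = nn.Conv1d(hidden, hidden, kernel_size=5, padding=2)
                self.classifier = nn.Linear(hidden, n_classes)

            def forward(self, kin, vis):
                # kin: (batch, time, kin_dim), vis: (batch, time, vis_dim)
                w = self.attn(torch.cat([kin, vis], dim=-1))            # (batch, time, 2)
                fused = (w[..., :1] * self.kin_proj(kin)
                         + w[..., 1:] * self.vis_proj(vis))             # weighted sum of streams
                h = self.tcn(fused.transpose(1, 2)).transpose(1, 2)     # temporal context
                return self.classifier(h)                               # per-frame gesture logits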

    Genome-wide association study of Stayability and Heifer Pregnancy in Red Angus cattle

    Reproductive performance is the most important component of cattle production from the standpoint of economic sustainability of commercial beef enterprises. Heifer Pregnancy (HPG) and Stayability (STAY) genetic predictions are 2 selection tools published by the Red Angus Association of America (RAAA) to assist with improvements in reproductive performance. Given the importance of HPG and STAY to the profitability of commercial beef enterprises, the objective of this study was to identify QTL associated with both HPG and STAY in Red Angus cattle. A genome-wide association study (GWAS) was performed using deregressed HPG and STAY EBV, calculated using a single-trait animal model and a 3-generation pedigree with data from the Spring 2015 RAAA National Cattle Evaluation. Each individual animal possessed 74,659 SNP genotypes. Individual animals with a deregressed EBV reliability > 0.05 were merged with the genotype file and marker quality control was performed. Criteria for sifting genotypes consisted of removing those markers where any of the following were found: average call rate less than 0.85, minor allele frequency < 0.01, lack of Hardy–Weinberg equilibrium (P < 0.0001), or extreme linkage disequilibrium (r² > 0.99). These criteria resulted in 2,664 animals with 62,807 SNP available for GWAS. Association studies were performed using a Bayes Cπ model in the BOLT software package. Marker significance was calculated as the posterior probability of inclusion (PPI), or the number of instances a specific marker was sampled divided by the total number of samples retained from the Markov chain Monte Carlo chains. Nine markers with a PPI ≥ 3% were identified as QTL associated with HPG on BTA 1, 11, 13, 23, and 29. Twelve markers with a PPI ≥ 75% were identified as QTL associated with STAY on BTA 6, 8, 9, 12, 15, 18, 22, and 23.
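
    The posterior probability of inclusion defined above has a simple empirical form: for each SNP, count the retained MCMC samples in which the marker carried a nonzero effect and divide by the total number of retained samples. The sketch below is independent of the BOLT software and assumes the sampled marker effects are available as an array.

        import numpy as np

        def posterior_probability_of_inclusion(effect_samples):
            """Compute PPI per marker from retained MCMC samples.

            effect_samples: array of shape (n_samples, n_markers) holding sampled
            marker effects after burn-in and thinning; a marker counts as "included"
            in a sample when its effect is nonzero.
            """
            included = effect_samples != 0.0
            return included.mean(axis=0)

        # Example: report markers exceeding a chosen PPI threshold as candidate QTL.
        # ppi = posterior_probability_of_inclusion(samples)
        # candidate_qtl = np.where(ppi >= 0.75)[0]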

    A learning robot for cognitive camera control in minimally invasive surgery

    Background: We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. The majority of surgical robots nowadays are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance; however, they follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons. Methods: The herein presented methodology allows different robot kinematics to perceive their environment, interpret it according to a knowledge base and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, we experimentally evaluated the cognitive robotic camera control. A VIKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance after completion of the surgeon's learning curve. Second, only data from VIKY EP were used to train the LWR and, finally, data from training with the LWR were used to re-train the LWR. Results: The duration of each operation decreased with the robot's increasing experience from 1704 s ± 244 s to 1406 s ± 112 s, and 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%. Conclusions: The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon's needs.

    Integrative GWAS and co-localisation analysis suggests novel genes associated with age-related multimorbidity

    Advancing age is the greatest risk factor for developing multiple age-related diseases. Therapeutic approaches targeting the underlying pathways of ageing, rather than individual diseases, may be an effective way to treat and prevent age-related morbidity while reducing the burden of polypharmacy. We harness the Open Targets Genetics Portal to perform a systematic analysis of nearly 1,400 genome-wide association studies (GWAS) mapped to 34 age-related diseases and traits, identifying genetic signals that are shared between two or more of these traits. Using locus-to-gene (L2G) mapping, we identify 995 targets with shared genetic links to age-related diseases and traits, which are enriched in mechanisms of ageing and include known ageing and longevity-related genes. Of these 995 genes, 128 are the target of an approved or investigational drug, 526 have experimental evidence of binding pockets or are predicted to be tractable, and 341 have no existing tractability evidence, representing underexplored genes which may reveal novel biological insights and therapeutic opportunities. We present these candidate targets for exploration and prioritisation in a web application
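
    Purely to illustrate the sharing criterion, and not the authors' pipeline, genes linked by L2G mapping to two or more of the age-related traits can be found by grouping trait-gene assignments and keeping genes associated with at least two distinct traits; the (trait, gene) input format below is an assumption.

        from collections import defaultdict

        def genes_shared_across_traits(l2g_assignments, min_traits=2):
            """Return genes linked to at least `min_traits` distinct traits.

            l2g_assignments: iterable of (trait, gene) pairs, e.g. one pair per
            GWAS locus with its top locus-to-gene candidate.
            """
            traits_per_gene = defaultdict(set)
            for trait, gene in l2g_assignments:
                traits_per_gene[gene].add(trait)
            return {gene: traits for gene, traits in traits_per_gene.items()
                    if len(traits) >= min_traits}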

    Transient field g factor and mean-life measurements with a rare isotope beam of 126Sn

    Background: The g factors and lifetimes of the 2_1^+ states in the stable, proton-rich Sn isotopes have been measured, but there is scant information on neutron-rich Sn isotopes. Purpose: Measurement of the g factor and the lifetime of the 2_1^+ state at 1.141 MeV in neutron-rich 126Sn (T_{1/2} = 2.3 × 10^5 y). Method: Coulomb excitation in inverse kinematics together with the transient field and the Doppler shift attenuation techniques were applied to a radioactive beam of 126Sn at the Holifield Radioactive Ion Beam Facility. Results: g(2_1^+) = -0.25(21) and τ(2_1^+) = 1.5(2) ps were obtained. Conclusions: The data are compared to large-scale shell-model and quasiparticle random-phase calculations. Neutrons in the h_{11/2} and d_{3/2} orbitals play an important role in the structure of the 2_1^+ state of 126Sn. Challenges, limitations, and implications for such experiments at future rare isotope beam facilities are discussed.