
    Substrate stabilisation and small structures in coral restoration: State of knowledge, and considerations for management and implementation.

    Coral reef ecosystems are under increasing pressure from local and regional stressors and a changing climate. Current management focuses on reducing stressors to allow for natural recovery, but in many areas where coral reefs are damaged, natural recovery can be restricted, delayed or interrupted by unstable, unconsolidated coral fragments, or rubble. Rubble fields are a natural component of coral reefs, but repeated or high-magnitude disturbances can prevent natural cementation and consolidation processes, so that coral recruits fail to survive. A suite of interventions has been used to target this issue globally, such as using mesh to stabilise rubble, removing the rubble to reveal hard substrate and deploying rocks or other hard substrates over the rubble to facilitate recruit survival. Small, modular structures can be used at multiple scales, with or without attached coral fragments, to create structural complexity and settlement surfaces. However, these can introduce foreign materials to the reef, and our limited understanding of natural recovery processes makes it difficult to judge the potential of this type of active intervention to successfully restore local coral reef structure. This review synthesises available knowledge about the ecological role of coral rubble, natural coral recolonisation and recovery rates, and the potential benefits and risks associated with active interventions in this rapidly evolving field. Fundamental knowledge gaps include baseline levels of rubble, the structural complexity of reef habitats in space and time, natural rubble consolidation processes and the risks associated with each intervention method. Any restoration intervention needs to be underpinned by risk assessment, and the decision to repair rubble fields must arise from an understanding of when and where unconsolidated substrate and lack of structure impair natural reef recovery and ecological function. Monitoring is necessary to ascertain the success or failure of the intervention and the impacts of potential risks, but there is a strong need to specify desired outcomes, the spatial and temporal context and the indicators to be measured. With a focus on the Great Barrier Reef, we synthesise the techniques, successes and failures associated with rubble stabilisation and the use of small structures, review monitoring methods and indicators, and provide recommendations to ensure that we learn from past projects.

    Trainability of cold induced vasodilatation in fingers and toes

    Subjects who repeatedly have to expose their extremities to cold may benefit from a high peripheral temperature to maintain dexterity and tissue integrity. Therefore, we investigated whether repeated immersions of a hand and a foot in cold water resulted in increased skin temperatures. Nine male and seven female subjects (mean age 20.4; SD 2.2 years) immersed their right (trained) hand and foot simultaneously in 8°C water, 30 min daily for 15 days. During the pre- and post-test (days 1 and 15, respectively) the left (untrained) hand and foot were immersed as well. Pain, tactile sensitivity and skin temperatures were measured every day. Mean (SD) toe temperature of the trained foot increased from 9.49°C (0.89) to 10.03°C (1.38) (p < 0.05). The trained hand, however, showed a drop in mean finger temperature from 9.28°C (0.54) to 8.91°C (0.44) (p < 0.001), and the number of cold-induced vasodilation (CIVD) reactions decreased from 52% during the first test to 24% during the last test. No significant differences occurred in the untrained extremities. Pain diminished over time and tactile sensitivity decreased with skin temperature. The combination of fewer CIVD responses in the fingers after training, reduced finger skin temperatures in subjects that did show CIVD, and the reduced pain and tactile sensitivity over time may lead to an increased risk of finger cold injuries. It is concluded that repeated cold exposure of the fingers does not lead to favorable adaptations, but may instead increase the injury risk.
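    To make the reported pre/post comparison concrete, here is a minimal sketch of a paired analysis of day-1 versus day-15 toe temperatures. The per-subject values and the use of scipy's paired t-test are illustrative assumptions for the sketch, not the study's actual data or analysis code.

```python
import numpy as np
from scipy import stats

# Illustrative paired comparison of mean toe skin temperature before and
# after a repeated cold-immersion protocol.  The per-subject values below
# are invented; only the form of the analysis (a paired test on day-1 vs
# day-15 temperatures) mirrors the study design described above.

day1_toe_temp = np.array([9.1, 10.2, 8.8, 9.6, 10.4, 9.0, 9.3, 9.9])    # °C, trained foot, day 1
day15_toe_temp = np.array([9.8, 10.9, 9.1, 10.3, 11.6, 9.4, 9.8, 10.5])  # °C, trained foot, day 15

diff = day15_toe_temp - day1_toe_temp
t_stat, p_value = stats.ttest_rel(day15_toe_temp, day1_toe_temp)

print(f"mean change: {diff.mean():+.2f} °C (SD {diff.std(ddof=1):.2f})")
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```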

    High-speed linear optics quantum computing using active feed-forward

    As information carriers in quantum computing, photonic qubits have the advantage of undergoing negligible decoherence. However, the absence of any significant photon-photon interaction is problematic for the realization of non-trivial two-qubit gates. One solution is to introduce an effective nonlinearity by measurements, resulting in probabilistic gate operations. In one-way quantum computation, the randomness of quantum measurement outcomes can be overcome by applying a feed-forward technique, such that the future measurement basis depends on earlier measurement results. This technique is crucial for achieving deterministic quantum computation once a cluster state (the highly entangled multiparticle state on which one-way quantum computation is based) is prepared. Here we realize a concatenated scheme of measurement and active feed-forward in a one-way quantum computing experiment. We demonstrate that, for a perfect cluster state and no photon loss, our quantum computation scheme would operate with good fidelity and that our feed-forward components function with very high speed and low error for detected photons. With present technology, an individual computational step (in our case an individual feed-forward cycle) can be operated in less than 150 ns using electro-optical modulators. This is an important result for the future development of one-way quantum computers, whose large-scale implementation will depend on advances in the production and detection of the required highly entangled cluster states. Comment: 19 pages, 4 figures.
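    To illustrate the feed-forward principle described above, here is a minimal numerical sketch (not the experimental scheme) of the simplest one-way computation: a two-qubit cluster state, a measurement of qubit 1 in a rotated basis, and a Pauli-X correction on qubit 2 conditioned on the random outcome. The two-qubit reduction and all names are assumptions made for the sketch.

```python
import numpy as np

# Minimal two-qubit illustration of measurement plus feed-forward in
# one-way quantum computing: qubit 1 is measured in a rotated basis and
# the outcome decides a Pauli-X correction on qubit 2, making the output
# deterministic despite the random measurement result.

X = np.array([[0, 1], [1, 0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def cz(state):
    """Apply a controlled-Z to a two-qubit state vector (|q1 q2> ordering)."""
    return np.diag([1, 1, 1, -1]) @ state

def measure_qubit1(state, theta, rng):
    """Measure qubit 1 in the basis (|0> +/- e^{-i*theta}|1>)/sqrt(2).
    Returns the outcome s (0 or 1) and the collapsed state of qubit 2."""
    b0 = np.array([1, np.exp(-1j * theta)]) / np.sqrt(2)   # outcome s = 0
    b1 = np.array([1, -np.exp(-1j * theta)]) / np.sqrt(2)  # outcome s = 1
    psi = state.reshape(2, 2)                # first index: qubit 1, second: qubit 2
    amp0 = b0.conj() @ psi                   # unnormalised qubit-2 state for s = 0
    amp1 = b1.conj() @ psi
    p0 = np.vdot(amp0, amp0).real
    s = 0 if rng.random() < p0 else 1
    out = amp0 if s == 0 else amp1
    return s, out / np.linalg.norm(out)

rng = np.random.default_rng(7)
theta = 0.0  # theta = 0 implements a Hadamard on the encoded input |+>

# Prepare the two-qubit cluster state: |+>|+> followed by CZ.
cluster = cz(np.kron(plus, plus))

s, qubit2 = measure_qubit1(cluster, theta, rng)

# Feed-forward: apply X only if the outcome was 1; the output is then
# always |0> = H|+>, regardless of the random value of s.
if s == 1:
    qubit2 = X @ qubit2

print("outcome s =", s, " output state =", np.round(qubit2, 3))
```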

    Whole genome association mapping by incompatibilities and local perfect phylogenies

    BACKGROUND: With current technology, vast amounts of data can be cheaply and efficiently produced in association studies, and to prevent data analysis from becoming the bottleneck of such studies, fast and efficient analysis methods that scale to these data set sizes must be developed. RESULTS: We present a fast method for accurate localisation of disease-causing variants in high-density case-control association mapping experiments with large numbers of cases and controls. The method searches for significant clustering of case chromosomes in the "perfect" phylogenetic tree defined by the largest region around each marker that is compatible with a single phylogenetic tree. This perfect phylogenetic tree is treated as a decision tree for determining disease status, and scored by its accuracy as a decision tree. The rationale for this is that the perfect phylogeny near a disease-affecting mutation should provide more information about the affected/unaffected classification than random trees. If regions of compatibility contain few markers, due to e.g. large marker spacing, the algorithm can allow the inclusion of incompatible markers in order to enlarge the regions prior to estimating their phylogeny. Haplotype data and phased genotype data can be analysed. The power and efficiency of the method are investigated on 1) simulated genotype data under different models of disease determination, 2) artificial data sets created from the HapMap resource, and 3) data sets used for testing of other methods, in order to compare with these. Our method has the same accuracy as single marker association (SMA) in the simplest case of a single disease-causing mutation and a constant recombination rate. However, in more complex scenarios of mutation heterogeneity and more complex haplotype structure, such as found in the HapMap data, our method outperforms SMA as well as other fast, data-mining approaches such as HapMiner and Haplotype Pattern Mining (HPM), despite being significantly faster. For unphased genotype data, an initial step of estimating the phase only slightly decreases the power of the method. The method was also found to accurately localise the known susceptibility variant in an empirical data set (the ΔF508 mutation for cystic fibrosis) and to find significant signals for association between the CYP2D6 gene and poor drug metabolism, although for this data set the highest association score is about 60 kb from the CYP2D6 gene. CONCLUSION: Our method has been implemented in the Blossoc (BLOck aSSOCiation) software. Using Blossoc, genome-wide chip-based surveys of 3 million SNPs in 1000 cases and 1000 controls can be analysed in less than two CPU hours.
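    As an illustration of the compatibility criterion the method builds on, the sketch below (plain Python, not the Blossoc implementation) applies the four-gamete test for pairwise compatibility of biallelic markers and greedily grows the window around a focal marker within which every pair of markers is compatible with a single phylogeny. The toy haplotype matrix and the simple left-then-right expansion are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch: the four-gamete test for pairwise compatibility of
# biallelic markers, and a greedy expansion of the window of markers around
# a focal site in which every pair passes the test (i.e. is consistent with
# a single "perfect" phylogeny).

def compatible(col_a, col_b):
    """Four-gamete test: two biallelic sites are compatible with one tree
    iff at most three of the gametes 00, 01, 10, 11 are observed."""
    gametes = {(a, b) for a, b in zip(col_a, col_b)}
    return len(gametes) <= 3

def compatible_region(haplotypes, focus):
    """Greedily grow the window [left, right] around marker `focus` while
    every marker pair inside the window passes the four-gamete test."""
    n_markers = haplotypes.shape[1]

    def window_ok(lo, hi):
        return all(compatible(haplotypes[:, i], haplotypes[:, j])
                   for i in range(lo, hi + 1) for j in range(i + 1, hi + 1))

    left = right = focus
    while left > 0 and window_ok(left - 1, right):
        left -= 1
    while right < n_markers - 1 and window_ok(left, right + 1):
        right += 1
    return left, right

# Toy haplotype matrix: rows are chromosomes, columns are SNP markers (0/1).
haps = np.array([
    [0, 0, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 1],
])
print(compatible_region(haps, focus=2))   # -> (1, 3): markers 1..3 form the compatible region
```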

    RNAi-Mediated Knock-Down of Arylamine N-acetyltransferase-1 Expression Induces E-cadherin Up-Regulation and Cell-Cell Contact Growth Inhibition

    Arylamine N-acetyltransferase-1 (NAT1) is an enzyme that catalyzes the biotransformation of arylamine and hydrazine substrates. It also has a role in the catabolism of the folate metabolite p-aminobenzoyl glutamate. Recent bioinformatics studies have correlated NAT1 expression with various cancer subtypes. However, a direct role for NAT1 in cell biology has not been established. In this study, we knocked down NAT1 in the colon adenocarcinoma cell line HT-29 and found a marked change in cell morphology that was accompanied by an increase in cell-cell contact growth inhibition and a loss of cell viability at confluence. NAT1 knock-down also led to attenuation of anchorage-independent growth in soft agar. Loss of NAT1 led to the up-regulation of E-cadherin mRNA and protein levels. This change in E-cadherin was not attributable to RNAi off-target effects and was also observed in the prostate cancer cell line 22Rv1. In vivo, NAT1 knock-down cells grew with a longer doubling time compared to cells stably transfected with a scrambled RNAi or to parental HT-29 cells. This study has shown that NAT1 affects cell growth and morphology. In addition, it suggests that NAT1 may be a novel drug target for cancer therapeutics.

    Validity and Reliability of the Strengths and Difficulties Questionnaire in 5–6 Year Olds: Differences by Gender or by Parental Education?

    Introduction: The Strengths and Difficulties Questionnaire (SDQ) is a relatively short instrument developed to detect psychosocial problems in children aged 3-16 years. It addresses four dimensions (emotional problems, conduct problems, hyperactivity/inattention problems and peer problems) that add up to the total difficulties score, plus a fifth dimension, prosocial behaviour. The validity and reliability of the SDQ have not been fully investigated in younger age groups. Therefore, this study assesses the validity and reliability of the parent and teacher versions of the SDQ in children aged 5-6 years in the total sample, and in subgroups according to child gender and parental education level. Methods: The SDQ was administered as part of the regular Dutch preventive health check for children aged 5-6 years. Parents provided information on 4750 children and teachers on 4516 children. Results: Factor analyses of the parent and teacher SDQ confirmed that the original five scales were present (parent RMSEA = 0.05; teacher RMSEA = 0.07). Interrater correlations between parents and teachers were small (ICCs of 0.21-0.44) but comparable to what is generally found for psychosocial problem assessments in children. These correlations were larger for males than for females. Cronbach's alphas for the total difficulties score were 0.77 for the parent SDQ and 0.81 for the teacher SDQ. Four of the subscales on the parent SDQ and two of the subscales on the teacher SDQ had an alpha < 0.70. Alphas were generally higher for male children and for low parental education level. Discussion: The validity and reliability of the total difficulties score of the parent and teacher SDQ are satisfactory in all groups by informant, child gender and parental education level. Our results support the use of the SDQ in younger age groups. However, some subscales are less reliable, and we recommend using only the total difficulties score for screening purposes.
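    For readers unfamiliar with the internal-consistency statistic quoted above, the following sketch computes Cronbach's alpha from an item-by-respondent score matrix. The item scores are invented for the example and the snippet is not part of the study; it only shows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).

```python
import numpy as np

# Illustrative computation of Cronbach's alpha, the internal-consistency
# statistic reported for the SDQ total difficulties score.  The responses
# below are made up; real SDQ items are scored 0, 1 or 2.

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of 6 children on 4 items (0 = not true ... 2 = certainly true).
scores = [
    [0, 1, 0, 1],
    [2, 2, 1, 2],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [2, 1, 2, 2],
    [1, 2, 1, 1],
]
print(round(cronbach_alpha(scores), 2))
```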

    Experimental One-Way Quantum Computing

    Standard quantum computation is based on sequences of unitary quantum logic gates that process qubits. The one-way quantum computer proposed by Raussendorf and Briegel is entirely different. It has changed our understanding of the requirements for quantum computation and, more generally, how we think about quantum physics. This new model requires qubits to be initialized in a highly entangled cluster state. From this point, the quantum computation proceeds by a sequence of single-qubit measurements with classical feedforward of their outcomes. Because of the essential role of measurement, a one-way quantum computer is irreversible. In the one-way quantum computer, the order and choices of measurements determine the algorithm computed. We have experimentally realized four-qubit cluster states encoded into the polarization state of four photons. We fully characterize the quantum state by implementing the first experimental four-qubit quantum state tomography. Using this cluster state, we demonstrate the feasibility of one-way quantum computing through a universal set of one- and two-qubit operations. Finally, our implementation of Grover's search algorithm demonstrates that one-way quantum computation is ideally suited for such tasks. Comment: 36 pages, 6 figures, 2 tables.
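    As a complement to the abstract, the sketch below constructs the ideal four-qubit linear cluster state in numpy: every qubit starts in |+> and controlled-Z gates entangle nearest neighbours. The photonic state actually prepared in the experiment may differ from this idealisation by local operations, so this is only the textbook construction, not the experimental state.

```python
import numpy as np

# Build the ideal four-qubit linear cluster state: |+>^{x4} followed by
# controlled-Z gates between nearest-neighbour qubits 0-1, 1-2 and 2-3.

def cz_on(state, control, target, n_qubits):
    """Apply a controlled-Z between two qubits of an n-qubit state vector."""
    new = state.copy()
    for idx in range(2 ** n_qubits):
        bits = [(idx >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        if bits[control] == 1 and bits[target] == 1:
            new[idx] = -new[idx]   # CZ flips the sign of the |...1...1...> amplitudes
    return new

n = 4
plus = np.ones(2) / np.sqrt(2)
state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)          # product state |+>|+>|+>|+>

for q in range(n - 1):
    state = cz_on(state, q, q + 1, n)     # entangle nearest neighbours

# The cluster state has 16 equal-magnitude amplitudes with a characteristic
# sign pattern; print them scaled to +/-1 for readability.
print(np.round(state * 4))
```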

    Clinical performance and radiation dosimetry of no-carrier-added vs carrier-added 123I-metaiodobenzylguanidine (MIBG) for the assessment of cardiac sympathetic nerve activity

    Purpose: We hypothesized that assessment of myocardial sympathetic activity with no-carrier-added (nca) I-123-metaiodobenzylguanidine (MIBG), compared to carrier-added (ca) I-123-MIBG, would lead to an improvement in clinical performance without major differences in radiation dosimetry. Methods: In nine healthy volunteers, 15-min and 4-h planar thoracic scintigrams and conjugate whole-body scans were performed up to 48 h following intravenous injection of 185 MBq I-123-MIBG. The subjects were given both nca and ca I-123-MIBG. Early heart/mediastinal ratios (H/M), late H/M ratios and myocardial washout were calculated. The fraction of administered activity in ten source organs was quantified from the attenuation-corrected geometric mean counts in conjugate views. Radiation-absorbed doses were estimated with OLINDA/EXM software. Results: Both early and late H/M were higher for nca I-123-MIBG (ca I-123-MIBG early H/M 2.46 ± 0.15 vs nca I-123-MIBG 2.84 ± 0.15, p = 0.001, and ca I-123-MIBG late H/M 2.69 ± 0.14 vs nca I-123-MIBG 3.34 ± 0.18, p = 0.002). Myocardial washout showed a longer retention time for nca I-123-MIBG (p < 0.001). The effective dose equivalent (adult male model) for nca I-123-MIBG was similar to that for ca I-123-MIBG (0.025 ± 0.002 mSv/MBq vs 0.026 ± 0.002 mSv/MBq, respectively; p = 0.055). Conclusion: No-carrier-added I-123-MIBG yields a higher relative myocardial uptake and is associated with a higher myocardial retention. This difference between nca I-123-MIBG and ca I-123-MIBG in myocardial uptake did not result in major differences in estimated absorbed doses. Therefore, nca I-123-MIBG is to be preferred over ca I-123-MIBG for the assessment of cardiac sympathetic activity.
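    To illustrate how H/M ratios and washout of the kind quoted above are typically derived from planar images, here is a small sketch using invented count densities. The washout formula shown (no background or decay correction) is only one common convention and may not match the exact ROI and correction choices used in this study.

```python
# Illustrative calculation of heart/mediastinum (H/M) ratios and a simple
# washout rate from planar MIBG scans.  All numbers are hypothetical.

def h_over_m(heart_counts_per_pixel, mediastinum_counts_per_pixel):
    """Heart-to-mediastinum ratio from mean ROI count densities."""
    return heart_counts_per_pixel / mediastinum_counts_per_pixel

def washout_percent(early_heart, late_heart):
    """Percentage loss of myocardial counts between the early and late scan
    (uncorrected for background or isotope decay)."""
    return 100.0 * (early_heart - late_heart) / early_heart

# Hypothetical mean counts per pixel in the heart and mediastinal ROIs.
early_heart, early_med = 142.0, 50.0   # 15-min scan
late_heart, late_med = 118.0, 44.0     # 4-h scan

print("early H/M:", round(h_over_m(early_heart, early_med), 2))
print("late  H/M:", round(h_over_m(late_heart, late_med), 2))
print("washout  :", round(washout_percent(early_heart, late_heart), 1), "%")
```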

    A Rapid and Sensitive Method for Measuring N-Acetylglucosaminidase Activity in Cultured Cells

    A rapid and sensitive method to quantitatively assess N-acetylglucosaminidase (NAG) activity in cultured cells is highly desirable for both basic research and clinical studies. NAG activity is deficient in cells from patients with Mucopolysaccharidosis type IIIB (MPS IIIB) due to mutations in NAGLU, the gene that encodes NAG. Currently available techniques for measuring NAG activity in patient-derived cell lines include chromogenic and fluorogenic assays and provide a biochemical method for the diagnosis of MPS IIIB. However, standard protocols require large amounts of cells, cell disruption by sonication or freeze-thawing, and normalization to the cellular protein content, resulting in an error-prone procedure that is material- and time-consuming and that produces highly variable results. Here we report a new procedure for measuring NAG activity in cultured cells. This procedure is based on the use of the fluorogenic NAG substrate 4-methylumbelliferyl-2-acetamido-2-deoxy-alpha-D-glucopyranoside (MUG) in a one-step cell assay that does not require cell disruption or post-assay normalization and that employs a low number of cells in a 96-well plate format. We show that the NAG one-step cell assay clearly discriminates between wild-type and MPS IIIB patient-derived fibroblasts, thus providing a rapid method for the detection of deficiencies in NAG activity. We also show that the assay is sensitive to changes in NAG activity due to increases in NAGLU expression, achieved either by overexpressing the transcription factor EB (TFEB), a master regulator of lysosomal function, or by inducing TFEB activation chemically. Because of its small format, rapidity, sensitivity and reproducibility, the NAG one-step cell assay is suitable for multiple procedures, including the high-throughput screening of chemical libraries to identify modulators of NAG expression, folding and activity, and the investigation of candidate molecules and constructs for applications in enzyme replacement therapy, gene therapy and combination therapies.
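    As an illustration of how a fluorogenic plate assay of this kind is typically quantified, the sketch below converts plate-reader fluorescence into 4-methylumbelliferone (4-MU) released per hour via a linear standard curve. All readings, the blank-subtraction scheme and the per-well (rather than per-mg-protein) normalisation are assumptions for the sketch, not the published protocol.

```python
import numpy as np

# Illustrative conversion of plate-reader fluorescence into NAG activity:
# fit a linear 4-MU standard curve, subtract a substrate-only blank from
# each sample well, and express the product released per well per hour.

# 4-MU standard curve: known amounts (pmol/well) vs measured fluorescence (RFU).
standard_pmol = np.array([0, 25, 50, 100, 200, 400])
standard_rfu  = np.array([12, 310, 605, 1190, 2400, 4820])
slope, intercept = np.polyfit(standard_pmol, standard_rfu, 1)  # RFU per pmol

def activity_pmol_per_hour(sample_rfu, blank_rfu, incubation_hours):
    """4-MU released per well per hour; the curve's small intercept is
    treated as covered by the substrate-only blank subtraction."""
    released_pmol = (sample_rfu - blank_rfu) / slope
    return released_pmol / incubation_hours

# Hypothetical readings after a 2 h incubation with the MUG substrate.
wild_type_rfu, mps_iiib_rfu, blank_rfu = 1850.0, 95.0, 40.0
print("wild-type:", round(activity_pmol_per_hour(wild_type_rfu, blank_rfu, 2.0), 1), "pmol 4-MU/h")
print("MPS IIIB :", round(activity_pmol_per_hour(mps_iiib_rfu, blank_rfu, 2.0), 1), "pmol 4-MU/h")
```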