
    MASSIV: Mass Assembly Survey with SINFONI in VVDS. V. The major merger rate of star-forming galaxies at 0.9 < z < 1.8 from IFS-based close pairs

    We aim to measure the major merger rate of star-forming galaxies at 0.9 < z < 1.8, using close pairs identified from integral field spectroscopy (IFS). We use the velocity field maps obtained with SINFONI/VLT on the MASSIV sample, selected from the star-forming population in the VVDS. We identify physical pairs of galaxies from the measurement of the relative velocity and the projected separation (r_p) of the galaxies in the pair. Using the well-constrained selection function of the MASSIV sample, we derive the gas-rich major merger fraction (luminosity ratio mu = L_2/L_1 >= 1/4) and, using merger time scales from cosmological simulations, the gas-rich major merger rate at mean redshifts up to z = 1.54. We find a high gas-rich major merger fraction of 20.8 (+15.2/-6.8)%, 20.1 (+8.0/-5.1)% and 22.0 (+13.7/-7.3)% for close pairs with r_p <= 20 h^-1 kpc in the redshift ranges z = [0.94, 1.06], [1.2, 1.5) and [1.5, 1.8), respectively. This translates into a gas-rich major merger rate of 0.116 (+0.084/-0.038) Gyr^-1, 0.147 (+0.058/-0.037) Gyr^-1 and 0.127 (+0.079/-0.042) Gyr^-1 at z = 1.03, 1.32 and 1.54, respectively. Combining our results with previous studies at z < 1, the gas-rich major merger rate evolves as (1+z)^n, with n = 3.95 +- 0.12, up to z = 1.5. From these results we infer that ~35% of the star-forming galaxies with stellar masses M = 10^10 - 10^10.5 M_Sun have undergone a major merger since z ~ 1.5. We develop a simple model which shows that, assuming all gas-rich major mergers lead to early-type galaxies, the combined effect of gas-rich and dry mergers can explain most of the evolution in the number density of massive early-type galaxies since z ~ 1.5, with our measured gas-rich merger rate accounting for about two-thirds of this evolution.
    Comment: Published in Astronomy and Astrophysics, 24 pages, 30 figures, 2 tables. Appendix with the residual images from GALFIT added. Minor changes with respect to the initial version.
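As a quick sanity check, the reported (1+z)^n evolution can be evaluated against the three measured rates. This is a minimal sketch, not the authors' code; the normalisation r0 below is a hypothetical value anchored to the z = 1.32 data point.

```python
# Sketch of the power-law merger-rate evolution R(z) = R0 * (1 + z)^n,
# with n = 3.95 taken from the abstract. R0 is a hypothetical normalisation
# chosen to pass through the z = 1.32 measurement, not a fitted value.
def merger_rate(z, r0, n=3.95):
    """Gas-rich major merger rate in Gyr^-1 under the (1+z)^n parametrisation."""
    return r0 * (1.0 + z) ** n

# Measured rates from the abstract: (redshift, rate in Gyr^-1)
measured = [(1.03, 0.116), (1.32, 0.147), (1.54, 0.127)]

# Anchor the normalisation to the middle data point and compare
r0 = 0.147 / (1.0 + 1.32) ** 3.95
for z, r in measured:
    print(f"z={z}: measured {r:.3f}, power law {merger_rate(z, r0):.3f} Gyr^-1")
```

The scatter of the three points around the single power law gives a feel for the quoted uncertainty on n.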

    Using Anisotropic Micro-Scale Topography to Manipulate the Wettability of Aluminum and Reduce the Retention of Water

    A method is described for fabricating controlled micro-scale topographical features on aluminum surfaces, with the goal of exploiting those features to affect the surface wettability. Using a photolithographic approach, a photoresist-masked surface is subjected to a plasma etch in a mixture of gaseous BCl3 and Cl2. Parallel grooves, microns to tens of microns in width, depth and spacing, are studied because this geometry is scalable for mass production by roll-to-roll micro-embossing, and because the anisotropic nature of these features provides a directional change in wettability that can reduce the retention of water on the surface. Aluminum was studied because it is naturally hydrophilic and, owing to its low cost and excellent mechanical and thermal properties, widely used in wet-surface heat-exchanger applications. Water droplets placed on a micro-grooved aluminum surface using a micro-syringe exhibit significantly increased apparent contact angles, and for water condensed onto an inclined, micro-grooved surface, the droplet volume at incipient sliding is reduced by more than 50% compared to droplets on a surface without micro-grooves. No chemical surface treatment is necessary to achieve this water repellency; it is accomplished solely through the anisotropic surface topography. The droplet geometry shows an elongated base contour relative to a surface without micro-grooves, and discontinuities in the three-phase contact line are also introduced by the grooves. A mechanistic model is presented for predicting the critical droplet size on micro-grooved surfaces. This model extends earlier work by accounting for the droplet geometry and contact-line changes caused by the micro-grooves. The model is validated through comparisons of predicted to measured critical droplet sizes, and it is then used to provide guidance for the development of surfaces with enhanced water-drainage behavior.
In a broad range of air-cooling applications, water retention on the air-side surface of metallic heat exchangers is problematic, because it can reduce the air-side heat transfer coefficient, increase core pressure drop, and provide a site for biological activity. In refrigeration systems, the accumulation of frost on metallic fins requires periodic defrosting and reduces energy efficiency. When water is retained on these surfaces following the defrost cycle, ice forms more readily in the subsequent cooling period, and such ice can shorten operation times before the next defrost is required. Thus the management and control of water droplets on heat-transfer and air-handling surfaces is vital to the energy efficiency, functionality, and maintenance of air-cooling systems. The microstructured surfaces introduced in this work are proposed for use in air-cooling and dehumidifying applications, but they may have other applications where the management of liquids on a surface is important.
Air Conditioning and Refrigeration Project 166
Air Conditioning and Refrigeration Project 20
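The force balance underlying critical-droplet-size models of the kind described above can be sketched as follows. This is the generic sliding-drop balance (gravity along the incline vs. contact-angle-hysteresis retention), not the extended model developed in this work; all angles and dimensions below are illustrative, not measured values.

```python
import math

# Incipient sliding occurs when the gravitational component along the incline
# equals the retention force from contact-angle hysteresis:
#   rho * V * g * sin(alpha) = gamma * w * (cos(theta_rec) - cos(theta_adv))
def critical_volume(theta_adv_deg, theta_rec_deg, base_width_m, incline_deg,
                    gamma=0.072, rho=998.0, g=9.81):
    """Critical droplet volume (m^3) at incipient sliding on an incline."""
    retention = gamma * base_width_m * (
        math.cos(math.radians(theta_rec_deg))
        - math.cos(math.radians(theta_adv_deg)))
    return retention / (rho * g * math.sin(math.radians(incline_deg)))

# Hypothetical comparison on a 30-degree incline: a grooved surface with a
# narrower base contour and shifted contact angles retains a smaller drop.
v_smooth = critical_volume(95.0, 60.0, 2.0e-3, 30.0)
v_grooved = critical_volume(120.0, 95.0, 1.2e-3, 30.0)
print(v_smooth, v_grooved)  # volumes in m^3
```

The anisotropic grooves change both the base width w and the advancing/receding angles, which is exactly the effect the mechanistic model in this work accounts for.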

    The detection of wetlands using remote sensing in Qoqodala, Eastern Cape

    Bibliography: leaves 66-68.
    This dissertation aims to establish the possibilities of mapping wetlands in Qoqodala, Eastern Cape Province, South Africa, using Landsat and/or Aster imagery. The methodology for mapping wetlands using Landsat imagery proposed by Thompson, Marneweck, Bell, Kotze, Muller, Cox and Smith (2002) is adapted and applied to the study area. The same methodology is modified for use with Aster imagery and applied to the study area. In addition, the possibility of treating Aster as a hyperspectral image is investigated, and a methodology using hyperspectral processing techniques is implemented.
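A band-ratio step of the kind used in such Landsat wetland-mapping workflows can be sketched as follows; the index choice (McFeeters NDWI on the green and near-infrared bands) and the threshold are assumptions for illustration, not the dissertation's actual rule set.

```python
# Hypothetical sketch of a water-index classification step. NDWI tends
# towards +1 over open water and wet surfaces, negative over dry vegetation.
def ndwi(green, nir):
    """Normalised Difference Water Index from green and NIR reflectance."""
    return (green - nir) / (green + nir + 1e-9)  # epsilon avoids /0

# Toy pixels: (green, nir) reflectance pairs for wet and dry land cover
pixels = [(0.30, 0.05), (0.10, 0.40), (0.25, 0.06)]
wet = [ndwi(g, n) > 0.2 for g, n in pixels]  # hypothetical threshold
print(wet)
```

Real workflows combine several such indices with terrain and seasonal information rather than a single threshold.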

    Incorporating automated rail fatigue damage detection algorithms with crack growth modelling

    This thesis examines the feasibility of incorporating Non-Destructive Testing (NDT) of rail surface damage by combining image processing with damage-prediction models. As rail traffic grows and safety requirements on the network become increasingly strict, the associated maintenance cost of rail infrastructure must be kept to a minimum, and proactive maintenance is crucial to maintaining the competitive advantage of rail transport. A considerable amount of research has been done on reducing the practical tedium associated with popular condition-monitoring techniques in the rail industry, e.g. the ultrasonic and eddy-current methods. This thesis aims to fill a gap by exploring the as-yet-unexamined benefit of combining the detection and prediction of RCF damage, and will contribute to the rail industry by simplifying maintenance operations and supporting decision-making. A summary of existing image-based NDT and crack-growth models is presented as the foundation on which the novel application is built. Similar research has mainly focused on quantifying the severity of damage without predicting crack behaviour. The simulated results of the proposed image-processing algorithm confirm the superiority of local illumination-invariant enhancement, multi-window segmentation, and cascaded feature extraction. The influential parameters of these methods are consistent within each image data set but differ across sets, a result of differences in the environmental and reflection properties of the acquired images. A sensitivity analysis of the proposed algorithm on data set 2 suggests a non-linear relationship between the severity of damage and the pixel mean intensity and variance. Turning to the fracture-mechanics aspect of this thesis, the influence of crack geometry on growth rate and path has been established through case studies of newly initiated and critically grown cracks.
Larger cracks were further observed to grow faster than smaller ones. In addition, the influence of track curve radius and supporting structures on wheel-rail contact dynamics is well understood from the structural-mechanics tests of contact forces and bending moments; these translate into increases or decreases in contact stresses and strains and in the propagation rate of defects. Unlike other predictive models, the method developed in this thesis focuses on replicating the actual surface condition of the rail before estimating the fracture parameters (using a detailed 3D Finite Element model) that dictate the residual life of the rail asset. The model makes it possible to combine two separate maintenance activities, i.e. detection and prediction, without inducing downtime of the service. A direct impact of this novel application is the use of the actual crack boundary in predicting fracture behaviour; it is suggested that the stress distribution along the actual crack boundary differs from that of the equivalent elliptical assumption. Further work would include improving the detection aspect of the application to avoid intersecting boundary coordinates, which are not readily imported into the Linear Elastic Fracture Mechanics (LEFM) prediction model, and expanding the prediction aspect to include the influence of neighbouring cracks and fluid entrapment, allowing more flexible analysis of other environmental and contact conditions. It would also be useful to conduct laboratory investigations of the influence of the Image Acquisition System (IAS) light source on illumination inequality within the captured image, and fracture-mechanics experiments could be used to validate the accuracy of the method.
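The observation that larger cracks grow faster follows directly from the standard LEFM growth law. The sketch below assumes a simple Paris-law form, not the detailed 3D finite-element model used in the thesis; the material constants and stress range are illustrative placeholders.

```python
import math

# Paris law: da/dN = C * (delta_K)^m, with the stress-intensity range
# delta_K = Y * delta_sigma * sqrt(pi * a). Because delta_K grows with crack
# depth a, the growth rate per cycle increases as the crack extends.
def growth_rate(a_m, delta_sigma_mpa, C=1e-11, m=3.0, Y=1.12):
    """Crack growth rate da/dN (m/cycle) for a surface crack of depth a (m)."""
    delta_k = Y * delta_sigma_mpa * math.sqrt(math.pi * a_m)  # MPa*sqrt(m)
    return C * delta_k ** m

small = growth_rate(0.5e-3, 300.0)  # newly initiated crack, 0.5 mm deep
large = growth_rate(5.0e-3, 300.0)  # critically grown crack, 5 mm deep
print(small, large)  # the deeper crack grows faster, as observed in the thesis
```

For m = 3, a tenfold increase in depth raises the rate by a factor of 10^1.5, about 32.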

    Exploring the impact of health and lifestyle factors on brain function in humans : insights from large-scale neuroimaging data

    This thesis investigates data-analysis methods and modelling approaches for large-scale neuroimaging data, applying them in brain-wide association studies of the brain systems underlying human behaviours such as memory decline, food preferences, and lifestyle habits. It draws on by far the largest neuroimaging dataset to date, the UK Biobank, the most detailed and extensive multi-modal human dataset, with over 500,000 participants. This large sample makes it possible to mine the data to investigate brain function in health and disease. In addition to the large sample size, which improves the reproducibility of results, replication on independent large datasets is used to evaluate the robustness of the big-data analysis methods and models. The scale of the analysis and replication posed a significant computational challenge, spanning data cleaning, preprocessing, feature extraction, data analysis, and modelling. As the basis for all studies in this thesis, normalised whole-brain functional connectivity is measured for each individual from voxel-level time-series signals, which are grouped into brain regions using a selected brain atlas; the connectivity reflects the strength of the interaction between each pair of brain regions. The large number of functional-connectivity links, and hence of statistical comparisons, creates a high risk of false discoveries, so methods that correct for multiple comparisons were implemented. In a large-scale investigation of human memory decline, a standard structural equation model and a mediation model were developed. The mediation model was used to identify the brain regions that mediate between memory impairment and hypertension, and is applied to this problem for the first time in this study.
Lower functional connectivity of the hippocampus was found to mediate the association between poor prospective-memory performance and hypertension, with a significant (11.5%) mediation effect. Clinically, this is the first time the mediating role of the hippocampus in hypertension-related memory loss has been identified, confirming it as the primary region associated with human memory in a large dataset of approximately 20,000 participants. In addition, an extensive neuroimaging investigation of decision-making was conducted using a novel methodology that employs association patterns between behaviour and the functional connectivity of a network of brain regions, instead of the individual links used by conventional methods. In this study, liking for sweet foods was significantly correlated with individuals' body mass index (BMI) (False Discovery Rate (FDR) corrected, P < 0.05). Further, functional connectivity in the orbitofrontal cortex (OFC) was positively correlated with higher BMI (FDR corrected, P < 0.05), whereas whole-brain functional connectivity showed a significantly negative association with higher BMI (FDR corrected, P < 0.05). This research demonstrated an association between OFC functional connectivity and obesity that was related to food preferences. Finally, the relationships between seven lifestyle factors, nine mental-health measures, brain structure, and cortical functional connectivity were investigated using large datasets containing numerous behavioural measures. The findings highlight the extensive association between lifestyle risk and a broad spectrum of mental-health problems in later life, and provide insight into the neural mechanisms associated with daily habits, including brain regions involved in motor, auditory, decision-making, emotion, face-processing, and memory functions (Bonferroni corrected, P < 0.05).
Overall, by exploring computational methods and modelling approaches for large-scale neuroimaging analyses of brain function and behavioural measures, efficient and reliable models have been developed, and significant progress has been made in applying big-data analysis methods to understanding the brain regions involved in memory decline, dietary preference and obesity, and lifestyle behaviour.
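One standard way to correct for the many functional-connectivity comparisons described above is Benjamini-Hochberg FDR control. The thesis does not specify its exact routine, so this is an illustrative sketch with made-up p-values, not the study's pipeline.

```python
# Benjamini-Hochberg procedure: sort p-values ascending, find the largest
# rank k with p_(k) <= k * alpha / m, and reject the k smallest hypotheses.
def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k_max = rank
    return sorted(order[:k_max])

# Toy p-values standing in for per-link connectivity tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.09, 0.20]
print(benjamini_hochberg(pvals))  # indices of links surviving correction
```

With millions of connectivity links, FDR control is far less conservative than Bonferroni while still bounding the expected fraction of false discoveries.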

    A Geometric Perspective on Functional Outlier Detection


    Machine learning techniques for personalised medicine approaches in immune-mediated chronic inflammatory diseases: Applications and challenges

    In the past decade, the emergence of machine learning (ML) applications has led to significant advances towards the implementation of personalised-medicine approaches for improved health care, owing to the exceptional performance of ML models on complex big data. The immune-mediated chronic inflammatory diseases are a group of complex disorders in which dysregulated immune responses result in inflammation affecting various organs and systems. The heterogeneous nature of these diseases poses great challenges for tailored disease management and for addressing unmet patient needs. Applying novel ML techniques to the clinical study of chronic inflammatory diseases shows promising results and great potential for precision-medicine applications in clinical research and practice. In this review, we highlight the clinical applications of various ML techniques for the prediction, diagnosis and prognosis of autoimmune rheumatic diseases, inflammatory bowel disease, autoimmune chronic kidney disease, and multiple sclerosis, as well as ML applications for patient stratification and treatment selection. We also highlight the use of ML in drug development, including target identification, validation and drug repurposing, together with challenges related to data interpretation and validation, and ethical concerns related to the use of artificial intelligence in clinical research.

    Intelligent Pattern Analysis of the Foetal Electrocardiogram

    The aim of the project on which this thesis is based is to develop reliable techniques for foetal electrocardiogram (ECG) based monitoring, to reduce incidents of unnecessary medical intervention and foetal injury during labour. Worldwide, electronic foetal monitoring is based almost entirely on the cardiotocogram (CTG), a continuous display of the foetal heart rate (FHR) pattern together with the contractions of the womb. Despite the widespread use of the CTG, there has been no significant improvement in foetal outcome. In the UK alone it is estimated that birth-related negligence claims cost the health authorities over £400M per annum. An expert system, known as INFANT, has recently been developed to assist CTG interpretation. However, the CTG alone does not always provide all the information required to improve the outcome of labour. The widespread use of ECG analysis has been hindered by poor signal quality and by the difficulty of applying, in an objective way, the specialised knowledge required to interpret ECG patterns in association with other events in labour. A fundamental investigation and development of optimal signal-enhancement techniques that maximise the available information in the ECG signal, along with different techniques for detecting individual waveforms from poor-quality signals, has been carried out. To automate the visual interpretation of the ECG waveform, novel techniques have been developed that allow reliable extraction of key features and hence a detailed ECG waveform analysis. Fuzzy logic is used to classify the ECG waveform shape automatically from these features, using knowledge elicited from expert sources and derived from example data. This allows subtle changes in the ECG waveform to be detected automatically in relation to other events in labour, improving the clinician's position for making an accurate diagnosis.
To ensure the interpretation is based on reliable information and takes place in the proper context, a new and sensitive index for assessing the quality of the ECG has been developed. New techniques to capture, for the first time in machine form, the clinical expertise and guidelines for electronic foetal monitoring have been developed based on fuzzy logic and finite state machines. The software model provides a flexible framework in which to further develop and optimise rules for ECG pattern analysis. The signal enhancement, QRS detection and pattern recognition of important ECG waveform shapes have been extensively tested and the results are presented. The results show that no significant loss of information is incurred by the signal-enhancement and feature-extraction techniques.
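The fuzzy-logic classification step can be illustrated with a toy rule. The feature (a T/QRS-style amplitude ratio) and the triangular membership functions below are assumptions for illustration, not the elicited expert rules described in the thesis.

```python
# Toy fuzzy classifier: map a scalar ECG feature to linguistic classes via
# triangular membership functions, then pick the class of highest membership.
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_t_qrs(ratio):
    """Classify a hypothetical T/QRS ratio feature into a waveform class."""
    memberships = {
        "normal": tri(ratio, 0.0, 0.15, 0.30),
        "raised": tri(ratio, 0.20, 0.35, 0.50),
        "high": tri(ratio, 0.40, 0.60, 1.00),
    }
    return max(memberships, key=memberships.get)

print(classify_t_qrs(0.12), classify_t_qrs(0.55))
```

In a full system, several such features feed a rule base, and the fuzzy memberships allow the gradual, subtle waveform changes mentioned above to be tracked rather than forced through a hard threshold.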

    A Compton Camera for In-vivo Dosimetry in Ion-beam Radiotherapy

    This work presents the first efforts at the Dresden University of Technology to study the feasibility of Compton imaging as a modality for monitoring ion-beam radiation therapy. The inherent limitations of the method were studied by means of calculation and Monte Carlo simulation. In the course of these investigations, the room-temperature semiconductor cadmium zinc telluride emerged as a promising detector material for a clinical device. For more detailed investigation, a simple Compton camera was constructed, comprising a cadmium zinc telluride detector and a position-sensitive scintillation detector. This system demonstrated that accurate imaging of radioactive point sources under laboratory conditions is feasible. Further practical restrictions of Compton imaging under beam conditions were derived through experiments at a proton facility. Through the experimental work with the Compton camera developed in this thesis, valuable information was gathered that allows the image reconstruction to be evaluated and helps direct further research towards a clinically applicable Compton camera system.
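The image-formation step in such a camera rests on the Compton cone-angle relation, cos(theta) = 1 - m_e c^2 (1/E' - 1/E), which can be sketched as follows. The 4.44 MeV line used in the example is a typical prompt-gamma energy chosen for illustration, not a measurement from this work.

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy in keV

# From the energy deposited in the scatter detector (here, the CdZnTe plane),
# the Compton kinematics fix the opening angle of the cone on which the
# photon's origin must lie; intersecting many cones reconstructs the source.
def compton_angle_deg(e_initial_kev, e_deposited_kev):
    """Cone opening angle (degrees) for a photon of known initial energy."""
    e_scattered = e_initial_kev - e_deposited_kev  # energy after scattering
    cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e_scattered - 1.0 / e_initial_kev)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy split")
    return math.degrees(math.acos(cos_theta))

# A 4.44 MeV prompt gamma depositing 500 keV in the scatterer
print(compton_angle_deg(4440.0, 500.0))
```

The angular resolution of the camera is limited by how precisely both energy deposits can be measured, which is why a high-resolution scatterer such as CdZnTe is attractive.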