
    Fast Monte Carlo Algorithms for Computing a Low-Rank Approximation to a Matrix

    Many of today's applications deal with large quantities of data, from DNA analysis to image processing and movie recommendation algorithms. Most of these systems store the data in very large matrices. To perform analysis on the collected data, these large matrices have to be stored in the RAM (random-access memory) of the computing system, but this is expensive since RAM is a scarce computational resource. Ideally, one would like to store most of the data matrices on disk (hard disk drive) while loading only the necessary parts into RAM. To do so, the data matrix has to be decomposed into smaller matrices. Singular value decomposition (SVD) is an algorithm that can be used to find a low-rank approximation of the input matrix, thus creating an approximation of smaller size. However, methods like SVD require memory and time that are super-linear (growing faster than linearly) in the size of the input matrix. This constraint is a burden for many applications that analyze large quantities of data. In this thesis we present a more efficient algorithm based on Monte Carlo methods, LinearTimeSVD, that achieves a low-rank approximation of the input matrix while maintaining memory and time requirements that are only linear in the size of the original matrix. Moreover, we prove that the errors associated with this construction method are bounded in terms of properties of the input matrix. The main idea behind the algorithm is a sampling step that constructs a smaller matrix from a subset of the columns of the input matrix. Applying SVD to this new matrix (which has a constant number of columns with respect to the size of the input matrix), the method generates approximations of the top k singular values and corresponding singular vectors of A, where k denotes the rank of the approximating matrix. By sampling enough columns, it can be shown that the approximation error can be decreased.
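The column-sampling idea described above can be sketched in a few lines of numpy. This is a minimal illustration under our own assumptions (function name, parameters, and the squared-norm sampling probabilities are ours, not quoted from the thesis): draw c columns with probability proportional to their squared norms, rescale them, and take the SVD of the resulting small matrix to approximate the top-k left singular vectors of A.

```python
import numpy as np

def linear_time_svd(A, k, c, rng=None):
    """Sketch of a Monte Carlo low-rank approximation via column sampling.

    Samples c columns of A with probabilities proportional to their squared
    norms, rescales them, and returns approximate top-k left singular
    vectors and singular values of A from the SVD of the small matrix C.
    """
    rng = np.random.default_rng(rng)
    col_norms_sq = np.sum(A**2, axis=0)
    probs = col_norms_sq / col_norms_sq.sum()
    idx = rng.choice(A.shape[1], size=c, p=probs)
    # Rescale each sampled column so C @ C.T is an unbiased estimate of A @ A.T.
    C = A[:, idx] / np.sqrt(c * probs[idx])
    U, s, _ = np.linalg.svd(C, full_matrices=False)
    H_k = U[:, :k]          # approximate top-k left singular vectors of A
    return H_k, s[:k]       # approximate top-k singular values
```

A rank-k approximation is then recovered as `H_k @ H_k.T @ A`; the more columns c one samples, the smaller the expected approximation error.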

    Construction Algorithms for Expander Graphs

    Graphs are mathematical objects composed of nodes and the edges that connect them. In computer science they are used to model concepts that exhibit network behaviour, such as social networks, communication paths or computer networks. In practice, it is desirable that these graphs retain two main properties: sparseness and high connectivity. This is equivalent to having relatively short distances between any two nodes but an overall small number of edges. Such graphs are called expander graphs, and the main motivation behind studying them is the efficient network structure they can produce due to these properties. We are specifically interested in the study of k-regular expander graphs, which are expander graphs whose nodes are each connected to exactly k other nodes. The goal of this project is to compare explicit and random methods of generating expander graphs based on the quality of the graphs they produce. This is done by analyzing the graphs' spectral property, which is an algebraic method of comparing expander graphs. The explicit methods we consider are due to G. A. Margulis (for 5-regular graphs) and D. Angluin (for 3-regular graphs), and they are algebraic ways of generating expander graphs through a series of rules that connect initially disjoint nodes. The authors proved that these explicit methods construct expander graphs. The random methods generate random graphs that, experimentally, prove to be just as good expanders as those constructed by the explicit methods. Our approach to the random methods was influenced by a paper of K. Chang, in which the author evaluated the quality of 3- and 7-regular expander graphs resulting from random methods by using their spectral property. Our project therefore implements these methods and provides a unified, experimental comparison between 3- and 5-regular expander graphs generated through explicit and random methods, by evaluating their spectral property. We conclude that even though the explicit methods produce better expanders for graphs with a small number of nodes, they stop doing so as the number of nodes increases, while the random methods still generate reasonably good expander graphs.
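The spectral comparison described above can be illustrated with a small numpy sketch. This is our own toy construction, not the Margulis or Angluin method from the abstract: for even n, an n-cycle (degree 2) plus a random perfect matching (degree 1) gives a simple random 3-regular multigraph, and its second-largest adjacency eigenvalue λ₂ measures expansion quality (the largest eigenvalue of a d-regular graph is d; smaller λ₂ means a better expander).

```python
import numpy as np

def random_3_regular(n, rng=None):
    """Toy random method (assumption, not the text's construction):
    an n-cycle plus a random perfect matching yields a 3-regular
    multigraph on an even number of nodes n."""
    assert n % 2 == 0
    rng = np.random.default_rng(rng)
    A = np.zeros((n, n))
    for i in range(n):                      # cycle edges: degree 2
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    perm = rng.permutation(n)
    for a, b in zip(perm[::2], perm[1::2]):  # matching edges: degree +1
        A[a, b] += 1
        A[b, a] += 1
    return A

def second_eigenvalue(A):
    """Spectral property: second-largest eigenvalue of the (symmetric)
    adjacency matrix; for a d-regular graph the largest is d."""
    eig = np.sort(np.linalg.eigvalsh(A))[::-1]
    return eig[1]
```

Running `second_eigenvalue(random_3_regular(n))` for growing n is one way to check experimentally how close random 3-regular graphs come to the optimal (Ramanujan) bound of 2√2 ≈ 2.83.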

    Design and characterisation of metallic glassy alloys of high neutron shielding capability

    This paper reports the design, fabrication and characterisation of a series of Fe-based bulk metallic glass alloys, with the aim of achieving the combined properties of high neutron absorption capability and sufficient glass forming ability. Synchrotron X-ray diffraction and pair distribution function methods were used to characterise the crystalline or amorphous states of the samples. Neutron transmission and macroscopic attenuation coefficients of the designed alloys were measured using the energy-resolved neutron imaging method and the recently developed microchannel plate detector. The study found that the newly designed alloy (Fe48Cr15Mo14C15B6Gd2, with a glass forming ability of Ø5.8 mm) has the highest neutron absorption capability among all Fe-based bulk metallic glasses reported so far. It is a promising material for neutron shielding applications.

    A portable triaxial cell for beamline imaging of rocks under triaxial state of stress

    Acknowledgements The development of the cell was supported by the Research and Teaching Excellence Fund of the School of Engineering, University of Aberdeen. Experiments at BT2, NCNR were supported by UK Engineering and Physical Sciences Research Council grant number EP/N021665/1, NIST and the Physical Measurement Laboratory. Experiments at IMAT were supported by the UK STFC, Experiment number: 1910331 (https://doi.org/10.5286/ISIS.E.RB1910331).

    Recovering the second moment of the strain distribution from neutron Bragg edge data

    Point-by-point strain scanning is often used to map the residual stress (strain) in engineering materials and components. However, the gauge volume, and hence the spatial resolution, is limited by the beam-defining apertures and can be anisotropic for very low and high diffraction (scattering) angles. Alternatively, wavelength-resolved neutron transmission imaging has the potential to retrieve information tomographically about residual strain induced within materials through measurement in transmission of Bragg edges - crystallographic fingerprints whose locations and shapes depend on the microstructure and strain distribution. In this case the spatial resolution is determined by the geometrical blurring of the measurement setup and the detector point spread function. Mathematically, reconstruction of the strain tensor field is described by the longitudinal ray transform; this transform has a non-trivial null-space, making direct inversion impossible. A combination of the longitudinal ray transform with physical constraints has been used to reconstruct strain tensor fields in convex objects. To relax the physical constraints and generalise the reconstruction, the recently introduced concept of histogram tomography can be employed. Histogram tomography relies on our ability to resolve the distribution of strain in the beam direction, as we discuss in the paper. More specifically, Bragg edge strain tomography requires extraction of the second moment (variance about zero) of the strain distribution, which has not yet been demonstrated in practice. In this paper we verify experimentally that the second moment can be reliably measured for a previously well characterised aluminium ring-and-plug sample. We compare experimental measurements against numerical calculation and further support our conclusions by rigorous uncertainty quantification of the estimated mean and variance of the strain distribution.
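The quantity at the heart of the abstract above, the second moment (variance about zero) of the strain distribution along the beam, can be illustrated with a short numerical sketch. This is purely illustrative (the function name and weighting scheme are our assumptions, not the paper's reconstruction code): given strain values sampled along the beam path, it returns the mean and the second moment about zero, from which the variance about the mean follows as m₂ − mean².

```python
import numpy as np

def strain_moments(strain, weights=None):
    """Illustrative sketch: mean and second moment (about zero) of a
    strain distribution sampled along the beam direction, optionally
    weighted by path length through each sampled region."""
    strain = np.asarray(strain, dtype=float)
    w = np.ones_like(strain) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise to a probability distribution
    mean = np.sum(w * strain)            # first moment
    m2 = np.sum(w * strain**2)           # second moment about zero
    return mean, m2
```

Note that a ray passing through balanced tensile and compressive regions can have zero mean strain yet a large second moment, which is why the second moment carries information that the mean alone does not.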

    Investigating root architectural differences in lines of Arabidopsis thaliana L. with altered stomatal density using high resolution X-ray synchrotron imaging

    Purpose Freshwater is an increasingly scarce natural resource, essential for agricultural production. As plants consume 70% of the world's freshwater, a reduction in their water use would greatly reduce global water scarcity. Plants with improved Water Use Efficiency (WUE), such as those with altered expression of the Epidermal Patterning Factor (EPF) family of genes regulating stomatal density, could help reduce the plant water footprint. Little, however, is known about how this modification in Arabidopsis thaliana L. affects root architectural development in soil; we therefore aim to improve our understanding of root growth when stomatal density is altered. Methods We used X-ray synchrotron and neutron imaging to measure, in three dimensions, the root system architecture (RSA) of Arabidopsis thaliana L. plants of three different genotypes, namely the wild type Columbia (Col 0) and two different EPF mutants, EPF2OE and epf2-1 (which show reduced and increased stomatal density, respectively). We also used the total biomass and carbon isotope discrimination (Δ) methods to determine how WUE varies in these genotypes when grown in a sandy loam soil under controlled conditions. Results Our results confirm that the EPF2OE line had superior WUE compared to the wild type using both the Δ and the total biomass method. The epf2-1 mutant, on the other hand, had significantly reduced WUE using the Δ method but not the biomass method. In terms of root growth, the RSAs of the different genotypes showed no significant differences from one another. There was also no significant difference in rhizosphere porosity around their roots compared to bulk soil for all genotypes. Conclusion Our results indicate that the EPF mutation altering stomatal density in Arabidopsis thaliana L. plants did not have an adverse effect on root characteristics; thus their wide adoption to reduce the global freshwater footprint is unlikely to compromise their soil foraging ability.

    Delineation of dominant and recessive forms of LZTR1-associated Noonan syndrome.

    Noonan syndrome (NS) is characterised by distinctive facial features, heart defects, variable degrees of intellectual disability and other phenotypic manifestations. Although the mode of inheritance is typically dominant, recent studies indicate LZTR1 may be associated with both dominant and recessive forms. Seeking to describe the phenotypic characteristics of LZTR1-associated NS, we searched for likely pathogenic variants using two approaches. First, scrutiny of exomes from 9624 patients recruited by the Deciphering Developmental Disorders (DDD) study uncovered six dominantly acting mutations (p.R97L; p.Y136C; p.Y136H, p.N145I, p.S244C; p.G248R), of which five arose de novo, and three patients with compound-heterozygous variants (p.R210*/p.V579M; p.R210*/p.D531N; c.1149+1G>T/p.R688C). One patient also had biallelic loss-of-function mutations in NEB, consistent with a composite phenotype. After removing this complex case, analysis of human phenotype ontology terms indicated significant phenotypic similarities (P = 0.0005), supporting a causal role for LZTR1. Second, targeted sequencing of eight unsolved NS-like cases identified biallelic LZTR1 variants in three further subjects (p.W469*/p.Y749C, p.W437*/c.-38T>A and p.A461D/p.I462T). Our study strengthens the association of LZTR1 with NS, with de novo mutations clustering around the KT1-4 domains. Although LZTR1 variants explain ~0.1% of cases across the DDD cohort, the gene is a relatively common cause of unsolved NS cases where recessive inheritance is suspected.