
    Senior Recital: Erik Jonsson, clarinet


    Construction and benchmarking of adaptive parameterized linear multistep methods

    A recent publication introduced a new way to define all k-step linear multistep methods of order k and k+1, in a parametric form that has variable step-size built in. In this framework it is possible to change method and step-size continuously, which makes it possible to create better-behaved adaptive numerical solvers. In this thesis, general numerical solvers based on this framework have been implemented, using variable step-size and variable order regulated by control theory and digital filters. To test and analyze the solvers, libraries of test problems, methods and filters have been implemented. In the analysis, the solvers were also compared to commercial (Matlab) solvers. The conclusion of this investigation is that the solvers show potential to become competitive in the field.

    Many important problems in engineering and science need to be solved using computers. Unlike people, computers do not make mistakes, and they are much faster at certain tasks. However, computers have a big weakness: they do not know what to do without being instructed. These instructions are created by people designing algorithms and constructing software. Our thesis project involves implementing and testing an algorithm designed to solve a particular type of mathematical equation. To explain simply how the solver works, we use the following example: you drop a stone from your roof. According to physics, gravity accelerates the stone towards the ground at a constant 9.82 m/s². If you know at what height the stone was dropped, you can calculate where the stone will be a second later. This new approximated position can then be used to calculate where the stone will be two seconds later, and so forth. This is exactly what our software, which we call a numerical solver, does.

    The time that passes between each calculation of the stone's position is called a time-step, and generally, the longer the time-step, the less precise the next position approximation will be. Herein lies a problem: we would like as few calculations as possible, but at the same time as accurate a solution as possible. By taking longer steps we do not need as many calculations, but longer steps also mean less accuracy. A numerical solver can be compared to a car. Assume a self-driving car that can only use one speed. This speed is chosen at the beginning of your car ride and cannot be changed during that ride, only at the beginning of the next. The speed might be adequate in some situations, on some roads, but in others it might be too fast or too slow, which means that you have to choose the speed carefully at the beginning of your ride. This is analogous to a numerical solver that can only use one step-size throughout a calculation. Instead, we let the car use different speeds during the ride. However, it is only allowed to halve or double its speed at every change. This would lead to a pretty "bumpy" ride. Most numerical solvers today use this kind of regulation to change the step-size. Since the roads we drive on might change character very quickly, we would instead like the car to be able to change its speed continuously, so that we always manage to stay on the road and at the same time do not need to spend all day in the car. In the same way, a mathematical problem can change character very quickly. In our solver we let the step-size change continuously, removing this bad "bumpy" behavior.
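    To make the time-stepping idea concrete, the sketch below integrates the falling-stone example with a fixed time-step (explicit Euler integration). It is only an illustration of the general principle, not the adaptive multistep solver built in the thesis; the function name and parameters are invented for the example, and g = 9.82 m/s² is taken from the text above.

```python
# Minimal sketch (not the thesis's solver): fixed-step explicit Euler
# integration of the falling-stone example, assuming constant gravitational
# acceleration g = 9.82 m/s^2 and an initial height h0 above the ground.

def drop_stone(h0, dt, t_end, g=9.82):
    """Approximate the height of a dropped stone at successive time-steps."""
    h, v, t = h0, 0.0, 0.0
    trajectory = [(t, h)]
    while t < t_end and h > 0.0:
        v -= g * dt          # the velocity changes by -g per unit time
        h += v * dt          # the position changes by v per unit time
        t += dt
        trajectory.append((t, h))
    return trajectory

# A shorter time-step gives a more precise position at the cost of more steps.
print(drop_stone(h0=20.0, dt=0.5, t_end=1.5)[-1])
print(drop_stone(h0=20.0, dt=0.01, t_end=1.5)[-1])
```
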
    The solver can not only vary the length of the time-step, but also change the underlying numerical method during the calculation of a particular problem. The difference between these methods is what we call order. Higher-order methods often allow us to take longer time-steps while still getting the same precision as a lower-order method would get using a shorter time-step. This means getting the same precision for less work. The order can be compared to the gears in a car. Depending on the surface of the road, the slope, and other factors, we want to use the correct gear to drive as efficiently as possible. The problem with using the highest-order method all the time, which may seem like the best way to go, is that not all problems are alike, just as not all roads are alike. A method that works well on one problem may not work as well on another. Also, the character of a problem may change during the calculation in such a way that a method that worked well two seconds ago is no longer a good choice. Our thesis has focused on these two control systems: a system controlling the step-size during the calculation, and a system controlling the method/order during the calculation. The software we have built uses a certain type of numerical method called linear multistep methods, and the control systems mentioned are based on control theory, a branch of science and mathematics used to build systems that control everything from thermostats to jets, and of course, cars. The software package was tested after the implementation, and it turns out that it has the potential to become better than the corresponding packages used today, which do not use control theory.
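    The continuous step-size regulation described above can be sketched as a simple controller acting on a local error estimate. The thesis builds its regulation on control theory and digital filters; the PI-style controller and gains below are common choices used purely as an illustrative assumption, not the thesis's exact filter.

```python
# Illustrative PI-style step-size controller (an assumption for this sketch;
# the thesis's digital filters may differ). Given the current step-size h,
# the current and previous local error estimates, a tolerance and the method
# order, it returns a smoothly scaled next step-size.

def next_step_size(h, err, err_prev, tol, order, k_i=0.3, k_p=0.4):
    k = order + 1                                   # local error behaves like h**k
    ratio = (tol / err) ** (k_i / k) * (err_prev / err) ** (k_p / k)
    # Cap extreme changes; within these bounds the factor varies continuously,
    # avoiding the abrupt halving/doubling used by many classical solvers.
    ratio = min(2.0, max(0.5, ratio))
    return h * ratio
```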

    Visual Servoing for Floppy Robots Using LWPR

    We have combined inverse kinematics learned by LWPR with visual servoing to correct for inaccuracies in a low-cost robotic arm. By low cost we mean weak, inaccurate servos and no available joint feedback. We show that the Jacobian can be estimated from the trained LWPR model. The Jacobian maps wanted changes in position to corresponding changes in control signals. Estimating the Jacobian for the first iteration of visual servoing is straightforward, and we propose an approximate updating scheme for the following iterations, when the Jacobian cannot be estimated exactly. This results in sufficient accuracy for use in a shape-sorting puzzle.
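    The Jacobian-based correction loop can be sketched as below. The exact approximative updating scheme proposed in the thesis is not reproduced here; a Broyden-style rank-one correction is used as one common stand-in, and `send_controls` / `observe_position` are placeholder interfaces to the robot arm and the camera system.

```python
import numpy as np

# Illustrative visual-servoing loop. Following the abstract, J maps a wanted
# change in position to a change in control signals. The Broyden-style
# rank-one update is an assumption for this sketch, not necessarily the
# scheme proposed in the thesis.

def visual_servo(J, u, target, send_controls, observe_position,
                 gain=0.5, tol=1e-3, max_iter=50):
    x = observe_position()
    for _ in range(max_iter):
        error = target - x
        if np.linalg.norm(error) < tol:
            break
        du = gain * (J @ error)       # wanted position change -> control change
        u = u + du
        send_controls(u)
        x_new = observe_position()
        dx = x_new - x
        # Rank-one correction so that J stays consistent with the observed
        # relation between the applied control change and the position change.
        J = J + np.outer(du - J @ dx, dx) / (dx @ dx + 1e-12)
        x = x_new
    return u, J
```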

    Cost Affecting Factors Related to Fillet Joints

    Fillet welds are by far the most frequent arc-welding joint type in the fabrication industry, accounting for about 80% of all arc-welded joints worldwide. Although the joint is well established, there are many aspects to consider when producing an ideal weld. This paper reveals and connects several problematic issues related to the joint type and the difficulties of fabricating a weld with correct strength, cost, and quality. Excessive welding of fillet welds is common, resulting in increased fabrication cost. There could be several causes for this: the designers do not adapt the weld requirements to the different stress levels, and the production adds even more to handle the variation in the process. Previous studies show that the combination of these factors can result in 100% extra weld metal compared to what would be needed to fulfil the strength demands. Inspections are another contributor to excess welding. The capability of the weld-size measurement method used by welders and inspectors is unsatisfactory. Measurement system analyses show that the scatter from the measurement system itself is in the same range as the scatter from the process. A critical summary of the current state of the art is that fillet welds are hard to specify and fabricate with the right size, that the measuring method is not capable, and that the connection between size and strength is weak.
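    As a rough illustration of the measurement-capability claim above, the sketch below compares the scatter contributed by the measurement system (repeated measurements of the same weld) with the scatter of the process itself (different welds). The numbers are invented for the example and are not data from the paper.

```python
import statistics

# Invented throat-size measurements in mm, used only to illustrate comparing
# measurement-system scatter with process scatter; not data from the paper.
repeat_measurements = [4.1, 4.6, 3.9, 4.4, 4.2, 4.7]   # same weld, remeasured
weld_sizes          = [4.0, 4.5, 4.9, 4.3, 4.8, 4.1]   # different welds

sigma_measurement = statistics.stdev(repeat_measurements)
sigma_process     = statistics.stdev(weld_sizes)
print(f"measurement scatter: {sigma_measurement:.2f} mm")
print(f"process scatter:     {sigma_process:.2f} mm")
print(f"ratio:               {sigma_measurement / sigma_process:.2f}")
```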

    Statistical evaluation of methods for identification of differentially abundant genes in comparative metagenomics

    Background: Metagenomics is the study of microbial communities by sequencing of genetic material directly from environmental or clinical samples. The genes present in the metagenomes are quantified by annotating and counting the generated DNA fragments. Identification of differentially abundant genes between metagenomes can provide important information about differences in community structure, diversity and biological function. Metagenomic data are, however, high-dimensional, contain high levels of biological and technical noise, and typically have few biological replicates. The statistical analysis is therefore challenging and many approaches have been suggested to date. Results: In this article we perform a comprehensive evaluation of 14 methods for identification of differentially abundant genes between metagenomes. The methods are compared based on their power to detect differentially abundant genes and their ability to correctly estimate the type I error rate and the false discovery rate. We show that sample size, effect size, and gene abundance greatly affect the performance of all methods. Several of the methods also show non-optimal model assumptions and biased false discovery rate estimates, which can result in too many false positives. We also demonstrate that the performance of several of the methods differs substantially between metagenomic data sequenced by different technologies. Conclusions: Two methods primarily designed for the analysis of RNA sequencing data (edgeR and DESeq2), together with a generalized linear model based on an overdispersed Poisson distribution, were found to have the best overall performance. The results presented in this study may serve as a guide for selecting suitable statistical methods for identification of differentially abundant genes in metagenomes.
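    As a hedged illustration of the overdispersed Poisson GLM mentioned in the conclusions, the sketch below tests a single gene for differential abundance between two groups using a quasi-Poisson fit with sequencing depth as an offset. The counts are invented and this is not the exact implementation benchmarked in the paper (which also evaluates edgeR, DESeq2 and other methods).

```python
import numpy as np
import statsmodels.api as sm

# Invented counts for one gene in eight samples (four per condition) and the
# corresponding sequencing depths; illustration only, not data from the study.
counts = np.array([12, 18, 9, 15, 40, 55, 38, 47])
depth = np.array([1.0e6, 1.2e6, 0.9e6, 1.1e6, 1.0e6, 1.3e6, 1.0e6, 1.2e6])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

X = sm.add_constant(group)
model = sm.GLM(counts, X, family=sm.families.Poisson(), offset=np.log(depth))
# Estimating the dispersion from the Pearson chi-square gives a quasi-Poisson
# (overdispersed Poisson) fit rather than a plain Poisson model.
result = model.fit(scale='X2')
print("log fold change:", result.params[1], "p-value:", result.pvalues[1])
```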

    On the origin of trisomy 21 Down syndrome

    Background: Down syndrome, characterized by an extra chromosome 21, is the most common genetic cause of congenital malformations and learning disability. It is well known that the extra chromosome 21 most often originates from the mother, that the incidence increases with maternal age, that there may be aberrant maternal chromosome 21 recombination, and that there is a higher recurrence in young women. In spite of intensive efforts to understand the underlying reason(s) for these characteristics, the origin still remains unknown. We hypothesize that maternal trisomy 21 ovarian mosaicism might provide the major causative factor. Results: We used fluorescence in situ hybridization (FISH) with two chromosome 21-specific probes to determine the copy number of chromosome 21 in ovarian cells from eight female foetuses at gestational age 14–22 weeks. All eight phenotypically normal female foetuses were found to be mosaics, containing ovarian cells with an extra chromosome 21. Trisomy 21 occurred with about the same frequency in cells that had entered meiosis as in pre-meiotic and ovarian mesenchymal stroma cells. Conclusion: We suggest that most normal female foetuses are trisomy 21 ovarian mosaics and that the maternal age effect is caused by differential selection of these cells during foetal and postnatal development until ovulation. The exceptional occurrence of high-grade ovarian mosaicism may explain why some women have a child with Down syndrome already at a young age, as well as the associated increased incidence at subsequent conceptions. We also propose that our findings may explain the aberrant maternal recombination patterns previously found by family linkage analysis.

    Estimating non-marginal willingness to pay for railway noise abatement: Application of the two-step hedonic regression technique

    In this study we estimate the demand for peace and quiet, and thus also the willingness to pay for railway noise abatement, based on both steps of the hedonic regression model on property prices. The estimated demand relationship suggests welfare gains for a 1 dB reduction of railway noise of USD 162 per individual per year at a baseline noise level of 71 dB, and USD 86 at a baseline noise level of 61 dB. Below a noise level of 49.1 dB, individuals have no willingness to pay for railway noise abatement. Our results also show the risk of using benefit transfer, i.e. we show empirically that the estimated implicit price for peace and quiet differs substantially across housing markets. From a policy perspective our results are useful, not only for benefit-cost analysis, but also as the monetary component of infrastructure use charges that internalize the noise externality.
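    A hedged sketch of the two-step hedonic technique referred to above: the first stage estimates a hedonic price function per housing market and recovers the implicit price of a quieter environment, and the second stage regresses those implicit prices on noise levels (and demand shifters) to trace out the non-marginal willingness-to-pay relationship. The column names, controls and functional form are placeholders, not the study's data or exact specification.

```python
import numpy as np
import statsmodels.api as sm

# `df` is assumed to be a pandas DataFrame of property transactions with
# columns price, noise_db, living_area, dist_station and income; names and
# functional form are placeholders, not the study's specification.

def first_stage(df):
    # Stage 1: hedonic price function (log price on noise and controls),
    # estimated separately for each housing market.
    X = sm.add_constant(df[["noise_db", "living_area", "dist_station"]])
    fit = sm.OLS(np.log(df["price"]), X).fit()
    out = df.copy()
    # With a log-linear hedonic, the implicit price of one dB less noise for
    # each household is -beta_noise times its property price.
    out["implicit_price"] = -fit.params["noise_db"] * out["price"]
    return out

def second_stage(df):
    # Stage 2: regress the implicit prices on noise levels and demand
    # shifters across markets to recover the demand (WTP) relationship.
    X = sm.add_constant(df[["noise_db", "income"]])
    return sm.OLS(df["implicit_price"], X).fit()
```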