48 research outputs found

    A Hybrid Least Squares and Principal Component Analysis Algorithm for Raman Spectroscopy

    Raman spectroscopy is a powerful technique for detecting and quantifying analytes in chemical mixtures. A critical part of Raman spectroscopy is the use of a computer algorithm to analyze the measured Raman spectra. The most commonly used algorithm is the classical least squares method, which is popular due to its speed and ease of implementation. However, it is sensitive to inaccuracies or variations in the reference spectra of the analytes (compounds of interest) and the background. Many algorithms, primarily multivariate calibration methods, have been proposed that increase robustness to such variations. In this study, we propose a novel method that improves robustness even further by explicitly modeling variations in both the background and analyte signals. More specifically, it extends the classical least squares model by allowing the declared reference spectra to vary in accordance with the principal components obtained from training sets of spectra measured in prior characterization experiments. The amount of variation allowed is constrained by the eigenvalues of this principal component analysis. We compare the novel algorithm to the least squares method with a low-order polynomial residual model, as well as a state-of-the-art hybrid linear analysis method. The latter is a multivariate calibration method designed specifically to improve robustness to background variability in cases where training spectra of the background, as well as the mean spectrum of the analyte, are available. We demonstrate the novel algorithm’s superior performance by comparing quantitative error metrics generated by each method. The experiments consider both simulated data and experimental data acquired from in vitro solutions of Raman-enhanced gold-silica nanoparticles.
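
    As a rough illustration of the approach described above (a sketch, not the authors' algorithm), the following augments a classical least squares fit with principal components obtained from training spectra and penalises their coefficients by the inverse PCA eigenvalues. All spectra are synthetic placeholders, and for brevity only background variation is modelled.
```python
# Minimal sketch: classical least squares with PCA-constrained background
# variation.  Synthetic data throughout; a ridge-style penalty of 1/eigenvalue
# stands in for the eigenvalue constraint described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n_wavenumbers, n_analytes, n_train = 500, 2, 40

# Hypothetical mean reference spectra of the analytes (one per column).
R = np.abs(rng.normal(size=(n_wavenumbers, n_analytes)))

# Training spectra of the background -> principal components via SVD.
B_train = np.abs(rng.normal(size=(n_wavenumbers, n_train)))
bg_mean = B_train.mean(axis=1)
U, s, _ = np.linalg.svd(B_train - bg_mean[:, None], full_matrices=False)
k = 5                                    # number of principal components kept
P = U[:, :k]                             # principal component spectra
eigvals = (s[:k] ** 2) / (n_train - 1)   # PCA eigenvalues

# Simulated measurement: analytes plus a perturbed background plus noise.
c_true = np.array([1.0, 0.3])
y = (R @ c_true + bg_mean + P @ (0.5 * np.sqrt(eigvals))
     + 0.01 * rng.normal(size=n_wavenumbers))

# Regularised least squares: [R | P] [c; a] ~ y - bg_mean, with the PC
# coefficients a penalised in proportion to 1/eigenvalue.
A = np.hstack([R, P])
penalty = np.concatenate([np.zeros(n_analytes), 1.0 / eigvals])
coeffs = np.linalg.solve(A.T @ A + np.diag(penalty), A.T @ (y - bg_mean))
print("estimated analyte concentrations:", coeffs[:n_analytes])
```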

    Scholarly Context Not Found: One in Five Articles Suffers from Reference Rot

    The emergence of the web has fundamentally affected most aspects of information communication, including scholarly communication. The immediacy that characterizes publishing information to the web, as well as accessing it, allows for a dramatic increase in the speed of dissemination of scholarly knowledge. But, the transition from a paper-based to a web-based scholarly communication system also poses challenges. In this paper, we focus on reference rot, the combination of link rot and content drift to which references to web resources included in Science, Technology, and Medicine (STM) articles are subject. We investigate the extent to which reference rot impacts the ability to revisit the web context that surrounds STM articles some time after their publication. We do so on the basis of a vast collection of articles from three corpora that span publication years 1997 to 2012. For over one million references to web resources extracted from over 3.5 million articles, we determine whether the HTTP URI is still responsive on the live web and whether web archives contain an archived snapshot representative of the state the referenced resource had at the time it was referenced. We observe that the fraction of articles containing references to web resources is growing steadily over time. We find one out of five STM articles suffering from reference rot, meaning it is impossible to revisit the web context that surrounds them some time after their publication. When only considering STM articles that contain references to web resources, this fraction increases to seven out of ten. We suggest that, in order to safeguard the long-term integrity of the web-based scholarly record, robust solutions to combat the reference rot problem are required. In conclusion, we provide a brief insight into the directions that are explored in this regard in the context of the Hiberlink project.
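
    A minimal per-reference check in the spirit of the study above can be sketched as follows: is a cited HTTP URI still responsive on the live web, and does a public web archive hold a snapshot near the citation date? This is not the Hiberlink pipeline; the Internet Archive's Wayback availability endpoint is used purely as one illustrative archive source.
```python
# Sketch of a single reference-rot check (assumed workflow, not the paper's).
import requests

def check_reference(uri: str, cited_date: str = "20120101") -> dict:
    result = {"uri": uri, "live": False, "archived": False, "snapshot": None}

    # Link rot: does the URI still resolve to a non-error response?
    try:
        resp = requests.head(uri, allow_redirects=True, timeout=10)
        result["live"] = resp.status_code < 400
    except requests.RequestException:
        result["live"] = False

    # Archival coverage: look for a snapshot close to the cited date (YYYYMMDD).
    avail = requests.get(
        "https://archive.org/wayback/available",
        params={"url": uri, "timestamp": cited_date},
        timeout=10,
    ).json()
    closest = avail.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        result["archived"] = True
        result["snapshot"] = closest.get("url")
    return result

print(check_reference("http://example.com/", "20100615"))
```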

    Impact Factor: outdated artefact or stepping-stone to journal certification?

    A review of Garfield's journal impact factor and its specific implementation as the Thomson Reuters Impact Factor reveals several weaknesses in this commonly used indicator of journal standing. Key limitations include the mismatch between citing and cited documents, the deceptive display of three decimals that belies the real precision, and the absence of confidence intervals. These are minor issues that are easily amended and should be corrected, but more substantive improvements are needed. There are indications that the scientific community seeks and needs better certification of journal procedures to improve the quality of published science. Comprehensive certification of editorial and review procedures could help ensure adequate procedures to detect duplicate and fraudulent submissions. Comment: 25 pages, 12 figures, 6 tables.
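
    The point about missing confidence intervals lends itself to a small worked example. The sketch below computes a two-year impact factor from hypothetical per-article citation counts and attaches a bootstrap 95% confidence interval instead of a bare three-decimal figure.
```python
# Illustrative only: hypothetical citation counts, not real journal data.
import numpy as np

rng = np.random.default_rng(1)
# Citations received in year Y by each citable item published in Y-1 and Y-2.
citations = rng.poisson(lam=2.3, size=180)

impact_factor = citations.mean()          # total citations / citable items

# Bootstrap 95% confidence interval over articles.
boot = [rng.choice(citations, size=citations.size, replace=True).mean()
        for _ in range(10_000)]
low, high = np.percentile(boot, [2.5, 97.5])
print(f"IF = {impact_factor:.1f}  (95% CI {low:.1f} to {high:.1f})")
```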

    The importance of parameter choice in modelling dynamics of the eye lens

    The lens provides refractive power to the eye and is capable of altering ocular focus in response to visual demand. This capacity diminishes with age. Current biomedical technologies, which seek to design an implant lens capable of replicating the function of the biological lens, are as yet unable to provide such an implant with the requisite optical quality or ability to change the focussing power of the eye. This is because the mechanism of altering focus, termed accommodation, is not fully understood and seemingly conflicting theories require experimental support which is difficult to obtain from the living eye. This investigation presents finite element models of the eye lens based on data from human lenses aged 16 and 35 years that consider the influence of various modelling parameters, including material properties, a wide range of angles of force application and capsular thickness. Results from axisymmetric models show that the anterior and posterior zonules may have a greater impact on shape change than the equatorial zonule and that choice of capsular thickness values can influence the results from modelled simulations.
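
    The emphasis on parameter choice can be made concrete with a simple sweep skeleton; the solver below is a hypothetical stub standing in for the axisymmetric finite element model, and the parameter values are assumptions chosen only for illustration.
```python
# Sketch of a modelling-parameter sweep; run_lens_model is a placeholder.
from itertools import product

youngs_moduli_kpa = [0.5, 1.0, 3.0]          # assumed lens material stiffness
zonular_angles_deg = range(-45, 50, 15)      # angle of applied zonular force
capsular_thicknesses_um = [4.0, 11.0, 17.0]  # assumed capsular thickness values

def run_lens_model(E_kpa, angle_deg, thickness_um):
    """Hypothetical stand-in for an axisymmetric FE simulation; a real study
    would return, e.g., the resulting change in central optical power."""
    return None

results = {}
for E, angle, t in product(youngs_moduli_kpa, zonular_angles_deg,
                           capsular_thicknesses_um):
    results[(E, angle, t)] = run_lens_model(E, angle, t)
print(f"{len(results)} parameter combinations evaluated")
```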

    Research Communication Costs in Australia: Emerging Opportunities and Benefits


    Simultaneous reconstruction and registration algorithm for limited view transmission tomography using a multiple cluster approximation to the joint histogram with an anatomical prior.

    We develop a novel simultaneous reconstruction and registration algorithm for limited view transmission tomography. We derive a cost function using Bayesian probability theory, and propose a similarity metric based on the explicit modeling of the joint histogram as a sum of bivariate clusters. The resulting algorithm shows a robust mitigation of the data insufficiency problem in limited view tomography. To our knowledge, our work represents the first attempt to incorporate non-registered, multimodal anatomical priors into limited view transmission tomography by using joint histogram based similarity measures.
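
    One simple way to realise the cluster-based similarity idea described above (a sketch, not the authors' Bayesian derivation) is to fit a small bivariate Gaussian mixture to the joint intensities of the reconstruction and the anatomical prior and score the intensity pairs under that model, as below.
```python
# Sketch of a joint-histogram similarity built from bivariate Gaussian
# clusters; toy images, and scikit-learn's GaussianMixture does the fitting.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_similarity(recon, prior, n_clusters=4):
    """Mean log-likelihood of (recon, prior) intensity pairs under a
    bivariate Gaussian mixture fitted to those pairs (higher is better)."""
    pairs = np.column_stack([recon.ravel(), prior.ravel()])
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="full",
                          random_state=0).fit(pairs)
    return gmm.score(pairs)

# Toy 64x64 example: a square "anatomy", one aligned and one shifted copy.
rng = np.random.default_rng(0)
base = np.zeros((64, 64)); base[16:48, 16:48] = 1.0
prior = base + 0.05 * rng.normal(size=base.shape)
recon_aligned = 2.0 * base + 0.1 * rng.normal(size=base.shape)
recon_shifted = np.roll(recon_aligned, 8, axis=1)    # mis-registered copy

print("aligned similarity:", cluster_similarity(recon_aligned, prior))
print("shifted similarity:", cluster_similarity(recon_shifted, prior))
```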

    Simultaneous reconstruction and segmentation algorithm for positron emission tomography and transmission tomography

    We present a new reconstruction algorithm for emission and transmission tomography. The algorithm performs maximum likelihood reconstruction and doubly stochastic segmentation simultaneously. The resulting reconstructions show promising edge-preservation as well as suppression of measurement noise. ©2008 IEEE
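
    A minimal sketch of the alternation described above: the standard MLEM update for the emission reconstruction, with a simple two-class k-means step standing in for the doubly stochastic segmentation. The system matrix and data are toy placeholders.
```python
# Toy alternation of MLEM reconstruction and a crude segmentation step.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bins = 64, 96
A = rng.uniform(size=(n_bins, n_pixels))          # toy system matrix
x_true = np.where(np.arange(n_pixels) < 32, 1.0, 4.0)
y = rng.poisson(A @ x_true)                       # simulated emission data

x = np.ones(n_pixels)                             # initial estimate
sens = A.T @ np.ones(n_bins)                      # sensitivity image
for it in range(50):
    # MLEM update.
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

    # Simple two-class segmentation of the current estimate (1-D k-means).
    c = np.array([x.min(), x.max()])              # class means
    for _ in range(10):
        labels = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        c = np.array([x[labels == k].mean() if np.any(labels == k) else c[k]
                      for k in range(2)])

    # Gently pull the reconstruction towards its class means.
    x = 0.9 * x + 0.1 * c[labels]

print("estimated class means:", np.round(c, 2))
```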

    Regularising limited view tomography using anatomical reference images and information theoretic similarity metrics.

    This paper is concerned with limited view tomography. Inspired by the application of digital breast tomosynthesis (DBT), which is but one of an increasing number of applications of limited view tomography, we concentrate primarily on cases where the angular range is restricted to a narrow wedge of approximately ±30°, and the number of views is restricted to 10-30. The main challenge posed by these conditions is undersampling, also known as the null space problem. As a consequence of the Fourier Slice Theorem, a limited angular range leaves large swathes of the object's Fourier space unsampled, leaving a large space of possible solutions (reconstructed volumes) for a given set of inputs. We explore the feasibility of using same- or different-modality images as anatomical priors to constrain the null space, hence the solution. To allow for different-modality priors, we choose information theoretic measures to quantify the similarity between reconstructions and their priors. We demonstrate the limitations of two popular choices, namely mutual information and joint entropy, and propose robust alternatives that overcome their limitations. One of these alternatives is essentially a joint mixture model of the image and its prior. Promising mitigation of the data insufficiency problem is demonstrated using 2D synthetic as well as clinical phantoms. This work initially assumes priors that are registered a priori, and is then extended to allow the registration to be performed simultaneously with the reconstruction.
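
    For reference, the two baseline similarity measures discussed above, joint entropy and mutual information, can be computed directly from a 2D joint histogram of the reconstruction and the anatomical reference image, as in the sketch below (toy data; the paper's robust alternatives are not reproduced here).
```python
# Joint entropy and mutual information from a 2D joint histogram (toy images).
import numpy as np

def joint_entropy_and_mi(img_a, img_b, bins=32):
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a = p_ab.sum(axis=1)
    p_b = p_ab.sum(axis=0)

    nz = p_ab > 0
    h_ab = -np.sum(p_ab[nz] * np.log(p_ab[nz]))             # joint entropy
    h_a = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
    h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
    return h_ab, h_a + h_b - h_ab                            # H(A,B) and MI

rng = np.random.default_rng(0)
prior = rng.normal(size=(64, 64))                            # reference image
recon = 0.8 * prior + 0.2 * rng.normal(size=prior.shape)     # related image
h_ab, mi = joint_entropy_and_mi(recon, prior)
print(f"joint entropy = {h_ab:.3f} nats, mutual information = {mi:.3f} nats")
```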

    Robust Incorporation of Anatomical Priors into Limited View Tomography Using Multiple Cluster Modelling of the Joint Histogram

    We apply the joint entropy prior to limited view transmission tomography and demonstrate its sensitivity to local optima. We propose to increase robustness by modelling the joint histogram as the sum of a limited number of bivariate clusters. The method is illustrated for the case of Gaussian distributions. This approximation increases robustness by reducing the possible number of local optima in the cost function. The resulting reconstruction prior mimics the behaviour of the joint entropy prior in that it narrows clusters in the joint histogram, and yields promisingly accurate reconstruction results despite the null space problem. © 2009 IEEE