    Near-infrared hyperspectral imaging to map collagen content in prehistoric bones for radiocarbon dating

    Many of the rarest prehistoric bones found by archaeologists are enormously precious and are considered part of our cultural and historical patrimony. Radiocarbon dating is a well-established technique that estimates the age of a bone by analysing the collagen still present. However, this method is destructive, and its use must be limited. In this study, we used imaging technology to quantify the collagen in bone samples non-destructively, in order to select the most suitable samples (or sample regions) to submit to radiocarbon dating. Near-infrared (NIR) hyperspectral imaging (HSI) was used together with a chemometric model to create chemical images of the distribution of collagen in ancient bones. The model quantifies the collagen at every pixel and thus provides a chemical map of collagen content. Our results offer significant advances for the study of human evolution, as they allow us to minimise the destruction of valuable bone material, which falls under the protection and enhancement of European cultural heritage, and to contextualise each valuable object by providing an accurate calendar age.

    The collagen present in rare prehistoric bones allows their age to be estimated by radiocarbon dating, but this method is destructive towards these precious archaeological remains. Here, the authors report a non-destructive method based on near-infrared hyperspectral imaging to precisely localise the collagen preserved in the parts of ancient specimens suitable for radiocarbon dating.
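    The abstract does not name the regression method behind the chemometric model, but per-pixel quantification from a hypercube is commonly done with partial least squares (PLS) regression. The following is a minimal sketch under that assumption, with entirely hypothetical calibration data and cube dimensions:

```python
# Minimal sketch of per-pixel collagen mapping from a NIR hypercube,
# assuming a hypothetical calibration set of reference spectra with
# known collagen content. All names, shapes and data are placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Hypothetical calibration data: 50 reference spectra x 200 NIR bands,
# each with a measured collagen content.
X_cal = rng.random((50, 200))
y_cal = rng.random(50) * 20.0

# Number of latent variables would be chosen by cross-validation in practice.
pls = PLSRegression(n_components=5)
pls.fit(X_cal, y_cal)

# Hypothetical hypercube: 64 x 64 pixels x 200 bands.
cube = rng.random((64, 64, 200))

# Unfold the cube so each pixel spectrum is one row, predict collagen,
# then fold the predictions back into a 2-D chemical map.
pixels = cube.reshape(-1, cube.shape[-1])
collagen_map = pls.predict(pixels).reshape(cube.shape[0], cube.shape[1])

print(collagen_map.shape)  # (64, 64): one collagen estimate per pixel
```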

    Combining excitation-emission matrix fluorescence spectroscopy, parallel factor analysis, cyclodextrin-modified micellar electrokinetic chromatography and partial least squares class-modelling for green tea characterization

    In this study, an alternative analytical approach for analyzing and characterizing green tea (GT) samples is proposed, based on the combination of excitation–emission matrix (EEM) fluorescence spectroscopy and multivariate chemometric techniques. The three-dimensional spectra of 63 GT samples were recorded using a Perkin–Elmer LS55 luminescence spectrometer; emission spectra were recorded between 295 and 800 nm at excitation wavelengths ranging from 200 to 290 nm, with both excitation and emission slits set at 10 nm. The excitation and emission profiles of two factors were obtained using Parallel Factor Analysis (PARAFAC) as a three-way decomposition method; in this way, the spectra of the two main fluorophores in green teas have been identified for the first time. Moreover, a cyclodextrin-modified micellar electrokinetic chromatography method was employed to quantify the most abundant catechins and methylxanthines in a subset of 24 GT samples, in order to obtain complementary information on the geographical origin of the tea. The ability to discriminate between the two types of tea was demonstrated by Partial Least Squares Class-Modelling performed on the electrokinetic chromatography data, the sensitivity and specificity of the class model built for the Japanese GT samples being 98.70% and 98.68%, respectively. This comprehensive work demonstrates the capability of the combination of EEM fluorescence spectroscopy and the PARAFAC model for characterizing, differentiating and analyzing GT samples.
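    As a rough illustration of the PARAFAC step, the sketch below decomposes a hypothetical stack of EEM landscapes into two trilinear components using the tensorly library (an assumption; the abstract does not state which software was used). Non-negativity is a common constraint for fluorescence data, so it is applied here:

```python
# Hypothetical two-component PARAFAC of an EEM tensor, sketched with
# tensorly (assumed library); data dimensions are placeholders.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(1)

# Hypothetical EEM stack: 63 samples x 100 emission x 10 excitation points.
eem = tl.tensor(rng.random((63, 100, 10)))

# Rank 2 matches the two fluorophores reported in the study.
weights, factors = non_negative_parafac(eem, rank=2, n_iter_max=200)

scores, emission_profiles, excitation_profiles = factors
print(emission_profiles.shape)   # (100, 2): one emission spectrum per factor
print(excitation_profiles.shape) # (10, 2): one excitation profile per factor
```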

    Tutorial: Time series hyperspectral image analysis

    A hyperspectral image is a large dataset in which each pixel corresponds to a spectrum, providing high-quality detail of a sample surface. Hyperspectral images are thus characterised by dual information, spectral and spatial, which allows both qualitative and quantitative information to be acquired from a sample. A hyperspectral image, commonly known as a 'hypercube', comprises two spatial dimensions and one spectral dimension, and contains both chemical and physical information. Such files need to be analysed with a computational 'chemometric' approach in order to reduce the dimensionality of the data while retaining the most useful spectral information. Time series hyperspectral imaging data comprise multiple hypercubes, each presenting the sample at a different time point, and require additional considerations in the data analysis. This paper provides a step-by-step tutorial for time series hyperspectral data analysis, with detailed command line scripts in the Matlab and R computing languages presented in the supplementary data. The example time series data, available for download, are a set of time series hyperspectral images following the setting of a cement-based biomaterial. Starting from spectral pre-processing (image acquisition, background removal, dead pixels and spikes, masking) and pre-treatments, the typical steps encountered in time series hyperspectral image processing are then presented, including unsupervised and supervised chemometric methods. At the end of the tutorial paper, some general guidelines on hyperspectral image processing are proposed. Funded by the European Commission Seventh Framework Programme (FP7) and the European Research Council.
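    The tutorial's own scripts are in Matlab and R (see its supplementary data); purely as an illustration of two of the pre-processing steps it lists, the following hypothetical Python sketch applies background masking and standard normal variate (SNV) pre-treatment to one synthetic hypercube of a time series:

```python
# Sketch of background masking and SNV pre-treatment for one hypercube;
# the cube, threshold and dimensions are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(2)
cube = rng.random((50, 50, 120))  # rows x cols x wavelengths

# Background removal: mask out pixels whose mean intensity falls below
# an (arbitrary) threshold, so only sample pixels enter the analysis.
mean_intensity = cube.mean(axis=2)
mask = mean_intensity > 0.45

# SNV pre-treatment: centre and scale each pixel spectrum individually,
# reducing scatter effects before chemometric modelling.
spectra = cube[mask]  # (n_sample_pixels, 120)
snv = (spectra - spectra.mean(axis=1, keepdims=True)) \
      / spectra.std(axis=1, keepdims=True)

# For a time series, the same steps are repeated per hypercube, keeping
# a common mask so the same pixels are compared across time points.
print(mask.sum(), snv.shape)
```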

    Chemometrics: multivariate analysis of chemical data

    Data mining is usually the last, but by no means the least important, step of any food analysis process. It in fact represents a critical phase: proper data processing allows useful information about the system under study to be extracted from large amounts of collected data, and obtaining information is usually the main objective in analytical chemistry. The classical univariate approach, which considers one variable at a time, underutilizes the global data structure and offers only a partial picture of it. Multivariate strategies, instead, allow a more complete interpretation of the data and a fuller exploitation of the information contained therein. Multivariate techniques can be used both for exploratory purposes and for qualitative or quantitative modeling. Modeling is generally performed for predictive applications; in such cases, thorough model validation is always required.
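    As a minimal illustration of the exploratory/predictive distinction drawn above, the sketch below (hypothetical data; scikit-learn assumed, as the chapter names no software) runs PCA for exploration and a cross-validated PLS regression for prediction:

```python
# Exploratory vs. predictive multivariate analysis on a hypothetical
# food-analysis table; validation is included because predictive models
# always require it.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.random((40, 15))                          # 40 samples x 15 variables
y = X @ rng.random(15) + rng.normal(0, 0.1, 40)   # synthetic response

# Exploration: project the samples onto the first two principal components.
scores = PCA(n_components=2).fit_transform(X)

# Prediction: PLS regression validated by 5-fold cross-validation.
r2 = cross_val_score(PLSRegression(n_components=3), X, y, cv=5, scoring='r2')
print(scores.shape, r2.mean())
```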

    The impact of signal pre-processing on the final interpretation of analytical outcomes – A tutorial

    This tutorial paper aims to analyse and critically discuss the consequences of row pre-processing (conversion of measurement units, derivatives, and the standard normal variate transform) on the evaluation of the final outcomes of chemometric data analysis. An in-depth focus on the effects of pre-processing, both on signal shape and on the misinterpretation of results – a crucial and often disregarded issue in the analytical field – is presented. It is shown how this preliminary data-processing step may in many cases lead to incongruous conclusions, based not on the real information embodied in the data but on artefacts arising from the mathematical transforms. The tutorial is not limited to a description of the problem: it also introduces strategies and tools for overcoming such unwanted effects, allowing the importance of the original variables to be interpreted directly and explaining the chemical information that actually characterises the samples. The dangerous implications of row pre-processing of instrumental signals are demonstrated on real datasets from different analytical techniques: transmission and attenuated total reflection infrared spectroscopy, cyclic voltammetry, X-ray fluorescence spectroscopy, Raman spectroscopy, and ultraviolet–visible spectroscopy. The impact of this widespread problem across most branches of analytical chemistry is thus illustrated.
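    To make the effect concrete, the following sketch (synthetic spectrum; SciPy assumed, as the tutorial names no software) applies two of the row pre-processing transforms discussed, a Savitzky-Golay first derivative and SNV, and shows how each reshapes the original signal:

```python
# Two row pre-processing transforms applied to a synthetic spectrum,
# illustrating how each changes the signal shape that is later interpreted.
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0, 10, 500)
# Two Gaussian peaks on a sloping additive baseline (hypothetical spectrum).
spectrum = np.exp(-(x - 4) ** 2) + 0.5 * np.exp(-(x - 7) ** 2 / 0.5) + 0.1 * x

# First derivative (Savitzky-Golay, window 15, polynomial order 2):
# removes the additive baseline but turns peaks into sigmoidal features,
# so variable importance can no longer be read off the peak maxima.
d1 = savgol_filter(spectrum, window_length=15, polyorder=2, deriv=1)

# Standard normal variate: centres and scales the whole row, changing the
# apparent weight of each variable without adding chemical information.
snv = (spectrum - spectrum.mean()) / spectrum.std()

print(d1[:3], snv[:3])
```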