
    Computing with functions in the ball

    A collection of algorithms in object-oriented MATLAB is described for numerically computing with smooth functions defined on the unit ball in the Chebfun software. Functions are numerically and adaptively resolved to essentially machine precision by using a three-dimensional analogue of the double Fourier sphere method to form "ballfun" objects. Operations such as function evaluation, differentiation, integration, and fast rotation by an Euler angle are designed, along with a Helmholtz solver. Our algorithms are particularly efficient for vector calculus operations, and we describe how to compute the poloidal-toroidal and Helmholtz-Hodge decompositions of a vector field defined on the ball.
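    The key idea, a coordinate doubling that makes the sampled function smooth and periodic in the angular variables, can be illustrated outside Chebfun. Below is a minimal numpy sketch with a hypothetical test function; it is not the ballfun implementation itself, which additionally treats the radial variable and builds adaptive approximants.

        import numpy as np

        # Hypothetical smooth test function on the unit ball, in Cartesians.
        f = lambda x, y, z: np.exp(np.sin(3 * x) + y * z)

        # Doubled-domain sampling: pushing (lam, th) in [-pi, pi)^2 through
        # the spherical-coordinate map makes F smooth and 2*pi-periodic in
        # both angles, so its Fourier coefficients decay geometrically.
        m = 32
        lam = np.linspace(-np.pi, np.pi, m, endpoint=False)  # azimuth
        th = np.linspace(-np.pi, np.pi, m, endpoint=False)   # doubled polar angle
        L, T = np.meshgrid(lam, th, indexing="ij")
        r = 0.9  # fixed radius for this two-variable demo
        F = f(r * np.sin(T) * np.cos(L), r * np.sin(T) * np.sin(L), r * np.cos(T))

        # The tail of the coefficient array sits near machine precision,
        # mirroring how ballfun objects resolve functions adaptively.
        coeffs = np.abs(np.fft.fft2(F)) / F.size
        print(coeffs.min())  # smallest coefficients are at rounding level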

    Certification of many-body bosonic interference in 3D photonic chips

    Quantum information and quantum optics have reached several milestones during the last two decades. Starting from the 1980s, when Feynman and others laid the foundations of quantum computation and information, recent years have seen significant progress in both theoretical and experimental aspects. A series of quantum algorithms has been proposed that promise computational speed-ups with respect to their classical counterparts. If fully exploited, quantum computers are expected to markedly outperform classical ones in several specific tasks. More generally, quantum computers would change the paradigm of what we currently consider efficiently computable, being based on a completely different way of encoding and processing data that relies on uniquely quantum-mechanical properties such as linear superposition and entanglement. The building block of quantum computation is the qubit, which incorporates in its definition the revolutionary aspects that would enable overcoming classical computation in terms of efficiency and security. However, recent technological developments have claimed realizations of devices with hundreds of controllable qubits, provoking an important debate about what exactly constitutes a quantum computing process and how to unambiguously recognize the presence of a quantum speed-up. Nevertheless, the question of what exactly makes a quantum computer faster than a classical one currently has no clear answer. Its applications could range from cryptography, with a significant enhancement in terms of security, to communication and the simulation of quantum systems. In the latter case, Feynman showed that some problems in quantum mechanics are intractable by purely classical approaches, due to the exponential growth of the dimension of the Hilbert space. The question of where quantum capabilities in computation are significant is still open, and the difficulty of answering it has led the scientific community to focus its efforts on developing such systems. As a consequence, significant progress has been made in trapped ions, superconducting circuits, neutral atoms and linear optics, permitting the first implementations of such devices.

    Among the schemes introduced, the linear-optical approach uses photons to encode information and is believed to be promising for most tasks. For instance, photons are important for quantum communication and cryptography protocols because of their natural tendency to behave as "flying" qubits. Moreover, when they share identical properties (energy, polarization, spatial and temporal profiles), indistinguishable photons can interfere with each other due to their bosonic nature. These features have a direct application in performing quantum protocols; indeed, they are suitable for several recent schemes such as graph- and cluster-state photonic quantum computation. In particular, it has been proved that universal quantum computation is possible using only simple optical elements, single-photon sources, number-resolving photo-detectors and adaptive measurements, thus confirming the pivotal importance of these particles. Although the importance of linear optics has been confirmed over the last decades, its potential was anticipated years before, when (1) Burnham et al. discovered Spontaneous Parametric Down-Conversion (SPDC), (2) Hong, Ou and Mandel discovered the namesake effect (HOM), and (3) Reck et al. showed how a particular combination of simple optical elements can reproduce any unitary transformation. (1) SPDC consists of the generation of entangled photon pairs in a nonlinear crystal pumped by a strong laser; despite recent advances in other approaches, it has been the keystone of single-photon generation for many years, owing to the possibility of creating entangled photon pairs with high spectral correlation. (2) The HOM effect demonstrated the tendency of indistinguishable photon pairs to "bunch" into the same output port of a balanced beam splitter, de facto showing a signature of quantum interference. Finally, (3) the capability to realize any unitary operation on the space of occupation modes identified interferometers as pivotal objects for quantum information protocols with linear optics.

    Once the importance of all these ingredients was recognized, linear optics aimed at large implementations to perform protocols with a concrete quantum advantage. Unfortunately, bulk-optics methods suffer from strong mechanical instabilities, which prevent a transition to large-size experiments. The need for both stability and scalability has led to the miniaturization of such bulk optical devices. Several techniques have been employed to reach this goal, such as lithographic processes and implementations on silica materials. All these approaches are significant in terms of stability and ease of manipulation, but they remain expensive in cost and fabrication time and, moreover, they do not permit exploiting the third dimension to realize more complex platforms. Femtosecond laser micromachining (FLM) has been recognized as a powerful approach for transferring linear optical elements onto an integrated photonic platform that overcomes these limitations. FLM, developed over the last two decades, exploits nonlinear absorption of focused femtosecond pulses in a medium to write arbitrary 3D structures inside an optical substrate. Miniaturized beam splitters and phase shifters are then realized by inducing a localized change in the refractive index of the medium. This technique allows writing complex 3D circuits by moving the sample along the desired path at constant velocity, perpendicular to the laser beam. 3D structures can also be made either polarization sensitive or insensitive, thanks to the low birefringence of the material used (borosilicate glass), enabling polarization-encoded qubits and polarization-entangled photons for quantum computation protocols. As a consequence, integrated photonics gives us a starting point for implementing quantum simulation processes in a very stable configuration. This feature could pave the way to larger-scale experiments, involving higher numbers of photons and optical elements. Recently, it has been suggested that many-particle bosonic interference can be used as a testing tool for the computational power of quantum computers and quantum simulators. Despite the demanding constraints that must be satisfied to build a universal quantum computer and perform quantum computation in linear optics, bosonic statistics finds a new, simpler and promising application in pinpointing the ingredients for a quantum advantage. In this context, an interesting model was recently introduced: the Boson Sampling problem.
    This model exploits the evolution of indistinguishable bosons through an optical interferometer described by a unitary transformation, and it consists of sampling from the output distribution. The core of this model is many-body boson interference: although measuring the outcomes is easy, simulating the output of the device is believed to be intrinsically hard for classical machines in terms of physical resources and time, even approximately. For this reason Boson Sampling captured the interest of the optical community, which concentrated its efforts on realizing such platforms experimentally. The phenomenon can be interpreted as a generalization of the Hong-Ou-Mandel effect to an n-photon state interfering in an m-mode interferometer. In principle, if we are able to reach large dimensions (in n and m), this method can provide the first evidence of quantum-over-classical advantage and, moreover, could open the way to implementations of quantum computation based on quantum interference. Although the path seems promising, the approach has non-trivial drawbacks. First, (a) we need large-scale implementations in order to observe a quantum advantage, so how can we scale them up? There are two roads to follow: (a1) scale up the number of modes with the techniques developed in integrated photonics, identifying the interferometer implementation that is most robust against losses, or (a2) scale up the number of photons, identifying appropriate sources for the task. Second, (b) in order to perform quantum protocols we must be able to trust that genuine photon interference, the protagonist of the phenomenon, actually occurs; for large-scale implementations, simulating the physical behaviour by classical means quickly becomes intractable. Here the road we chose is (1) to identify the transformations that are optimal for discriminating true photon interference and (2) to use classification protocols, such as machine learning techniques and statistical tools, to extract information and correlations from the output data. Following these premises, the main goal of this thesis is to address these problems along the suggested paths. Firstly, we give an overview of the theoretical and experimental tools used and, secondly, we present the analyses that we carried out. Regarding point (a1), we performed several analyses under broad and realistic conditions. We studied quantitatively the differences between the three known architectures to identify which scheme is most appropriate for realizing unitary transformations in our interferometers, in terms of scalability and robustness to losses and noise, and we compared our results with recent developments in integrated photonics. Regarding point (a2), we studied different experimental realizations that seem promising for scaling up both the number of photons and the performance of the quantum device. First, we used multiple SPDC sources to improve the generation rate of single photons. Second, we analysed the performance of on-demand single-photon sources using a 3-mode integrated photonic circuit and quantum dots as deterministic single-photon sources. This investigation was carried out in collaboration with the Optic of Semiconductor nanoStructures Group (GOSS) led by Prof. Pascale Senellart at the Laboratoire de Photonique et de Nanostructures (C2N). Finally, we focused on problem (b), trying to answer the question of how to validate genuine multi-photon interference efficiently. Using optical chips built with FLM, we performed several experiments based on protocols suitable for the problem, analysing which transformations are optimal for identifying genuine quantum interference. For this purpose we employed figures of merit such as the Total Variation Distance (TVD) and Bayesian tests to exclude alternative hypotheses on the experimental data. The result of these analyses is the identification of two unitaries belonging to the class of Hadamard matrices, namely the Fourier and Sylvester transformations. Thanks to the unique properties associated with the symmetries of these unitaries, we are able to formulate rules, the so-called zero-transmission laws, that identify genuine photon interference by looking at specific, efficiently predictable outputs of the interferometers. Subsequently, we investigate the validation problem from a different perspective, exploiting two roads: retrieving signatures of quantum interference through machine-learning classification techniques, and extracting information from the experimental data by means of statistical tools. These approaches are based on choosing training samples from the data, which are used as references to classify the whole set of output data according to its physical behaviour. In this way we are able to rule out alternative hypotheses not based on true quantum interference.
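    To make the model concrete (an illustrative sketch, not material from the thesis): for collision-free input and output patterns, Boson Sampling probabilities are squared moduli of permanents of submatrices of the interferometer unitary, the HOM effect is the two-photon, two-mode special case, and the TVD mentioned above compares two output distributions.

        import numpy as np
        from itertools import permutations

        def permanent(a):
            # Brute-force permanent; viable only in the few-photon regime.
            n = a.shape[0]
            return sum(np.prod([a[i, p[i]] for i in range(n)])
                       for p in permutations(range(n)))

        def output_probability(u, ins, outs):
            # Collision-free Boson Sampling probability: |Perm(U_sub)|^2.
            return abs(permanent(u[np.ix_(outs, ins)])) ** 2

        def tvd(p, q):
            # Total Variation Distance between two output distributions.
            return 0.5 * np.sum(np.abs(np.asarray(p) - np.asarray(q)))

        # Balanced beam splitter: two indistinguishable photons never leave
        # through different ports (the Hong-Ou-Mandel dip).
        u_bs = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
        print(output_probability(u_bs, [0, 1], [0, 1]))  # -> 0.0 (bunching)

    Distinguishable photons would give 0.5 for the same coincidence event, and gaps of exactly this kind are what the validation protocols exploit.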

    Sub-10nm Transistors for Low Power Computing: Tunnel FETs and Negative Capacitance FETs

    One of the major roadblocks in the continued scaling of standard CMOS technology is its alarmingly high leakage power consumption. Although circuit- and system-level methods can be employed to reduce power, the fundamental limit on the overall energy efficiency of a system is still rooted in the MOSFET operating principle: the injection of thermally distributed carriers, which does not allow a subthreshold swing (SS) lower than 60 mV/dec at room temperature. Recently, a new class of steep-slope devices like Tunnel FETs (TFETs) and Negative-Capacitance FETs (NCFETs) has garnered intense interest due to their ability to surpass the 60 mV/dec limit on SS at room temperature. The focus of this research is on the simulation and design of TFETs and NCFETs for ultra-low-power logic and memory applications. Using a full-band quantum mechanical model within the Non-Equilibrium Green's Function (NEGF) formalism, source underlapping is proposed as an effective technique to lower the SS in GaSb-InAs TFETs. Band-tail states, associated with heavy source doping, are shown to significantly degrade the SS in TFETs from its ideal value. To solve this problem, an undoped-source GaSb-InAs TFET in an i-i-n configuration is proposed. A detailed circuit-to-system-level evaluation is performed to investigate the circuit-level metrics of the proposed devices. To demonstrate their potential in a memory application, a 4T gain cell (GC) is proposed, which utilizes the low leakage and enhanced drain capacitance of TFETs to realize robust, long-retention-time GC embedded DRAMs. The device/circuit/system-level evaluation of the proposed TFETs demonstrates their potential for low-power digital applications. The second part of the thesis focuses on the design-space exploration of hysteresis-free NCFETs. A cross-architecture analysis using HfZrOx ferroelectric (FE-HZO) integrated on bulk MOSFETs, fully-depleted SOI FETs, and sub-10nm FinFETs shows that the FDSOI and FinFET configurations greatly benefit NCFET performance due to their undoped body and improved gate control, which enables better capacitance matching with the ferroelectric. A low-voltage NC-FinFET operating down to 0.25 V is predicted using ultra-thin 3 nm FE-HZO. Next, we propose a one-transistor ferroelectric NOR-type (Fe-NOR) non-volatile memory based on HfZrOx ferroelectric FETs (FeFETs). The enhanced drain-channel coupling in ultrashort-channel FeFETs is utilized to dynamically modulate the memory window of the storage cells, resulting in simple erase, program and read operations. The simulation analysis predicts sub-1V program/erase voltages in the proposed Fe-NOR memory array, which therefore presents a significantly lower-power alternative to conventional FeRAM and NOR flash memories.
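    The 60 mV/dec figure quoted above follows directly from the Boltzmann statistics of thermionic injection; a minimal sketch of the arithmetic (an illustration, not code from the thesis):

        import numpy as np

        K_B = 1.380649e-23      # Boltzmann constant, J/K
        Q = 1.602176634e-19     # elementary charge, C

        def ss_thermionic_limit(temperature_k=300.0):
            # Minimum subthreshold swing of a conventional MOSFET, in
            # mV/dec: one decade of drain current requires at least
            # ln(10) * kT/q of gate-voltage change when the injected
            # carriers are thermally distributed.
            return np.log(10) * K_B * temperature_k / Q * 1e3

        print(ss_thermionic_limit())  # ~59.5 mV/dec at room temperature

    Steep-slope devices beat this bound by changing the injection mechanism (band-to-band tunneling in TFETs) or by internally amplifying the gate voltage (negative capacitance in NCFETs).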

    High-power few-cycle pulse generation towards the gigawatt frontier

    The advent of precision spectroscopic techniques has brought about diverse opportunities for extending our understanding of fundamental physics and the biomedical sciences. This is especially true when harnessing radiation in the exotic extreme ultraviolet (XUV) and mid-infrared (mid-IR) regions of the electromagnetic spectrum. While the former covers a multitude of atomic and molecular electronic transitions, the latter contains fundamental vibrational and rotational modes of numerous biologically relevant molecules. Regardless of spectral range, many of the novel spectroscopic methodologies rely on the availability of broadband, waveform-controlled radiation with high brightness. The lack of suitable laser gain media in the aforementioned wavelength ranges means such radiation is conventionally generated by nonlinearly converting high-power femtosecond laser pulses in the near-IR spectral range, such as those generated by thin-disk oscillators. However, those pulses generally have durations of hundreds of femtoseconds, too long for the high peak power and broad spectral coverage desired for effective nonlinear frequency conversion. Their electric waveform also varies randomly from pulse to pulse, hindering their application to, among others, frequency-comb spectroscopy. This thesis describes the experimental development of various techniques to further compress the pulse duration, and to actively stabilize the output waveform, of high-power thin-disk oscillators. It is shown that dispersion-controlled Herriott-type multipass cells constitute an efficient means of broadening the spectral bandwidth of laser pulses with, in contrast to many other techniques, practically no degradation of the spatial beam quality. This is the first time Herriott cells operating in the net-negative dispersion regime have been used for spectral broadening with thin-disk oscillators. The demonstration yielded the highest broadening factor obtained from any multipass-cell broadening scheme using a single nonlinear bulk medium. Spectral broadening in the positive dispersion regime is also described. Two Herriott cells in tandem facilitated the generation of 15.6 fs pulses with an unprecedented peak power of 463 MW, a record for a system driven directly by a laser oscillator with no amplification stages. Further compression of this dual-stage output was achieved by introducing the distributed quasi-waveguide approach. This technique enables the independent tailoring of nonlinearity and dispersion, which is essential for pulse compression towards few-optical-cycle durations. With a pulse duration of 10.8 fs and peak and average powers of 0.64 GW and 101 W, respectively, this marks the dawn of a new class of gigawatt-scale, amplifier-free thin-disk laser systems. The few-cycle laser pulses are shown to drive, via intra-pulse difference-frequency generation, the formation of broadband, waveform-stable mid-IR radiation with an exceptionally short cut-off wavelength. The achieved spectral extension down to 3.6 µm (at the -30 dB level), at an average output power of 7.6 mW, opens up new perspectives for extending field-resolved spectroscopy to the biologically important amide functional groups. To actively stabilize the near-IR driver laser waveform, which is crucial for deriving from it a frequency comb in the XUV region, a novel, power-scalable concept for controlling the carrier-envelope-offset (CEO) frequency of Kerr-lens mode-locked oscillators was developed.
    It yielded CEO-frequency-stable pulses with sub-90 mrad in-loop phase noise at an unprecedented average output power of 105 W. The envisioned combination of waveform control with the presented nonlinear pulse compression techniques will pave the way for a new generation of compact, low-noise frequency combs with high photon flux in the XUV spectral range. The various advancements presented in this thesis not only mark a substantial development of the respective techniques themselves, but also represent a significant contribution to the coming-of-age of high-precision laser-based spectrometers for scientific and medical applications.
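    The headline figures can be cross-checked with the standard relations between pulse energy, duration and peak power. A back-of-the-envelope sketch, assuming a sech² pulse shape (which the abstract does not state), so the inferred repetition rate is only an estimate:

        # Quoted figures: 10.8 fs pulses, 0.64 GW peak, 101 W average power.
        TAU = 10.8e-15      # s, FWHM pulse duration
        P_PEAK = 0.64e9     # W, peak power
        P_AVG = 101.0       # W, average power

        # For a sech^2 pulse, P_peak ~= 0.88 * E_pulse / tau_FWHM.
        e_pulse = P_PEAK * TAU / 0.88   # ~7.9e-6 J per pulse
        f_rep = P_AVG / e_pulse         # implied repetition rate, ~1.3e7 Hz
        print(f"{e_pulse * 1e6:.1f} uJ, {f_rep / 1e6:.1f} MHz")

    The implied repetition rate of roughly 13 MHz is typical of thin-disk oscillators, so the quoted numbers are mutually consistent.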

    Optical properties of noble metal nanoclusters within hybrid systems and their application in biosensing

    Linear and nonlinear optical properties of functionalized forms of noble metal nanoclusters are addressed in this thesis. Noble metal nanoclusters belong to the non-scalable regime, in which each atom counts. Owing to quantum confinement effects, they exhibit molecule-like electronic transitions that are strongly influenced by structure. Once stabilized, either by forming hybrid systems of metal nanoclusters and biomolecules or by protecting the nanoclusters with ligands, these systems have potential applications in biosensing and bioimaging based on their unique optical, electronic and structural properties. Using density functional theory and its time-dependent version, the interplay between electronic and structural properties in hybrid nanocluster-biomolecule systems and in ligated clusters has been investigated in order to gain insight into the origins of their unique optical properties. To understand the optical properties of hybrid systems, silver cluster-histidine complexes, as well as the nonlinear properties of ligated silver nanoclusters, have been investigated theoretically and compared with experimental results obtained by expert international collaborators. Within this thesis a concept has been worked out that allows designing systems with large two-photon cross sections, which can serve for efficient imaging of tissues and cells.

    Reports of planetary geology and geophysics program, 1989

    Abstracts of reports from Principal Investigators of NASA's Planetary Geology and Geophysics Program are compiled. The research conducted under this program during 1989 is summarized. Each report includes significant accomplishments in the area of the author's funded grant or contract.

    X-ray fluorescence applied to yellow pigments based on lead, tin and antimony: comparison of laboratory and portable instrumentation

    X-ray fluorescence is a diagnostic approach particularly well suited to the cultural heritage sector, since it belongs to the non-destructive and non-invasive analytical tools. However, there are substantial differences between portable and laboratory instrumentation that make it difficult to compare them in terms of quality and reliability of the results. The present study specifically investigates these differences on the same analytical sample set. To reach this goal, a comparison was carried out between portable and benchtop X-ray fluorescence devices on different types of yellow pigments based on lead, tin and antimony, produced in the laboratory by reproducing the instructions described in “old” recipes, namely: i) a mortar of lead and tin produced on the basis of recipe 13 /c V of the “Manuscript of Danzica” and “Li tre libri dell’arte del Vasaio” by Cipriano Piccolpasso; ii) two types of lead-tin yellow (Pb2SnO4 and PbSnO3) produced following recipes 272 and 273 of the “Bolognese Manuscript”; iii) lead antimonate (Pb2Sb2O7) obtained by following the instructions of Piccolpasso’s treatise and those contained in the “Istoria delle pitture in maiolica fatte in Pesaro e ne’ luoghi circonvicini” by Giambattista Passeri; and finally iv) lead-tin-antimony yellow (Pb2SnSbO6.5) obtained from the information contained in paper 30 R of the “Manuscript of Danzica” [1]. The XRF analyses were performed using a laboratory instrument (Bruker M4 Tornado) and a handheld analytical device (Assing Surface Monitor). In order to perform a significant statistical comparison of the acquired and processed data, all analyses were carried out using the same samples, the same acquisition set-up and the same operating conditions. A chemometric approach, based on Principal Component Analysis (PCA) and related multivariate analytical tools [2], was used to verify the spectral differences, and the related informative content, among the different yellow pigments produced. The multivariate approach revealed instrumental differences between the two systems and allowed the common characteristics of the analyzed set of pigments to be compared.
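    The chemometric step described above can be sketched in a few lines; the arrays and their shapes below are hypothetical placeholders standing in for the measured spectra, not the study's actual data:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # One row per measurement, one column per energy channel
        # (placeholder random counts instead of the real XRF spectra).
        rng = np.random.default_rng(0)
        spectra_lab = rng.poisson(100.0, size=(12, 1024)).astype(float)
        spectra_portable = rng.poisson(80.0, size=(12, 1024)).astype(float)

        X = np.vstack([spectra_lab, spectra_portable])
        X = StandardScaler().fit_transform(X)   # channel-wise autoscaling
        scores = PCA(n_components=2).fit_transform(X)
        print(scores.shape)  # (24, 2)

        # Plotting the scores, coloured by instrument and by pigment type,
        # exposes both the instrumental differences and the shared
        # characteristics of the pigment set discussed in the text.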