
    Deciphering Okmok Volcano's restless years (2002-2005)

    Thesis (Ph.D.), University of Alaska Fairbanks, 2015.
    Okmok Volcano is an active island-arc shield volcano located in the central Aleutian Islands of Alaska. It is defined by a 10-km-diameter caldera that formed in two cataclysmic eruptions, the most recent ~2050 years ago. Subsequent eruptions created several cinder cones within the caldera. The youngest of these, Cone A, was the active vent from 1815 through its 1997 eruption. On July 12, 2008, Okmok erupted from new vents located northwest of Cone D. Between 2001 and 2004, geodetic measurements showed caldera inflation, suggesting that new magma might be entering the system. In 2002, a newly installed seismic network recorded quasi-periodic ("banded") seismic tremor signals occurring at a rate of two or more episodes per hour. This tremor was a near-continuous signal from the day the seismic network was installed. Although the volcano was not erupting, it was clearly in a state of unrest. This unrest garnered considerable attention because the volcano had erupted just six years prior. The seismic tremor potentially held insight as to whether the unrest was a remnant of the 1997 eruption or signaled a possible rejuvenation of activity and the potential for eruption. To determine the root cause and implications of this remarkable seismic tremor sequence, I created a catalog of ~17,000 tremor events recorded between 2003 and mid-2005. Tremor patterns evolved on the scale of days but remained the dominant seismic signal. To facilitate the analysis of several years of data, I created a MATLAB toolbox known as "The Waveform Suite". This toolbox made it feasible to work with several years of digital data and forgo my introductory analyses, which were based on paper "helicorder" records. I first attempted to locate the tremor using the relative amplitudes of the seismograms to determine where the tremor was being generated. Candidate tremor locations were constrained to a few locations along a corridor between Cone A and the caldera center. I then determined theoretical amplitude ratios between a reference station and stations near the candidate sources. Results suggested that the signal originated in the shallow portion of the corridor connecting the surface of Cone A to the top of the central magma chamber, and that the source migrated along this corridor. I integrated the tremor patterns with other studies and proposed that heat and pressure from continued injections of magma were responsible for maintaining an open venting system at Cone A. The tremor resulted from the boiling of a shallow hydrothermal system in the vicinity of Cone A, with volatiles potentially coming from the magma itself. The tremor catalog demonstrates that the seismic signal waned during the study period, suggesting that fewer fresh volatiles entered the system, which may have allowed the pathways connecting the magma and volatiles to the surface to close up. By the time new magma entered the system in 2006, this network of pathways was closed, forcing the volatiles to seek a new exit. In hindsight, the 2003-2005 period of varied and waning seismic tremor, and the inferred end of massive open venting, may have been a pivotal era at Okmok that eventually led to the 2008 eruption.
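
    The amplitude-ratio location approach lends itself to a compact illustration. Below is a minimal sketch assuming a simple body-wave amplitude decay model; the station geometry, attenuation coefficient, and observed ratios are entirely made up, and this is only the general grid-search idea, not the thesis's actual implementation (which was built on the MATLAB Waveform Suite).

```python
import numpy as np

# Station coordinates in km; station 0 is the reference (all values made up).
stations = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0], [6.0, 4.0]])
alpha = 0.3                                 # assumed attenuation coefficient (1/km)
obs_ratios = np.array([0.55, 0.30, 0.21])   # illustrative observed A_i / A_ref

def predicted_ratios(src):
    """Amplitude ratios to the reference station for a candidate source.

    Assumes A ~ exp(-alpha * r) / r, so the unknown source strength
    cancels when the ratio is taken.
    """
    r = np.maximum(np.linalg.norm(stations - src, axis=1), 1e-3)
    amp = np.exp(-alpha * r) / r
    return amp[1:] / amp[0]

# Grid search: the best-fit source minimizes the misfit to observed ratios.
xs, ys = np.meshgrid(np.linspace(-2, 8, 101), np.linspace(-2, 8, 101))
misfit = np.array([np.sum((predicted_ratios(np.array([x, y])) - obs_ratios) ** 2)
                   for x, y in zip(xs.ravel(), ys.ravel())]).reshape(xs.shape)

iy, ix = np.unravel_index(np.argmin(misfit), misfit.shape)
print(f"best-fit source near x={xs[iy, ix]:.2f} km, y={ys[iy, ix]:.2f} km")
```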

    Auto-Denoising for EEG Signals Using Generative Adversarial Network.

    The brain-computer interface (BCI) has many applications in various fields. In EEG-based research, an essential step is signal denoising. In this paper, a generative adversarial network (GAN)-based denoising method is proposed to denoise multichannel EEG signals automatically. A new loss function is defined to ensure that the filtered signal retains as much of the original effective information and energy as possible. The model can imitate and integrate artificial denoising methods, which reduces processing time and makes it suitable for processing large amounts of data. Compared to other neural network denoising models, the proposed model has an additional discriminator, which continually judges whether the noise has been filtered out, while the generator keeps adjusting its denoising strategy. To ensure that the GAN model generates EEG signals stably, a new normalization method, sample entropy threshold and energy threshold-based (SETET) normalization, is proposed to detect abnormal signals and limit the range of the EEG signals. Once the denoising system is established, it can be applied to denoising new subjects' data, even though the model was trained on data from different subjects. The experiments discussed in this paper employ the HaLT public dataset. Correlation and root mean square error (RMSE) are used as evaluation criteria. Results reveal that the proposed automatic GAN denoising network achieves the same performance as the manual hybrid artificial denoising method. Moreover, the GAN network makes the denoising process automatic, representing a significant reduction in time.
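
    As a rough illustration of the architecture described above, here is a minimal PyTorch sketch of a denoising GAN for multichannel EEG. The layer sizes, loss weight, and training setup are illustrative assumptions, not the paper's configuration; the MSE term stands in for the paper's loss that preserves original information and energy, and SETET normalization is omitted.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps noisy multichannel EEG to a denoised estimate (sizes assumed)."""
    def __init__(self, channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, 7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, 7, padding=3), nn.ReLU(),
            nn.Conv1d(32, channels, 7, padding=3),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Judges whether a signal still looks contaminated by noise."""
    def __init__(self, channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, 7, stride=2, padding=3), nn.LeakyReLU(0.2),
            nn.Conv1d(32, 64, 7, stride=2, padding=3), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(noisy, clean, lam=10.0):
    # Discriminator: reference (manually denoised) signals are "real",
    # generator outputs are "fake".
    fake = G(noisy).detach()
    loss_d = bce(D(clean), torch.ones(clean.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the reference,
    # so original information and energy are retained.
    denoised = G(noisy)
    loss_g = bce(D(denoised), torch.ones(noisy.size(0), 1)) + \
             lam * nn.functional.mse_loss(denoised, clean)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Usage with random stand-in data (batch, channels, samples):
noisy, clean = torch.randn(4, 8, 256), torch.randn(4, 8, 256)
print(train_step(noisy, clean))
```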

    Automated mass spectrometry-based metabolomics data processing by blind source separation methods

    One of the major bottlenecks in metabolomics is converting raw data into biologically interpretable information. Moreover, mass spectrometry-based metabolomics generates large and complex datasets characterized by co-eluting compounds and experimental artifacts. The main objective of this thesis is to develop automated strategies based on blind source separation to improve on current methods that address the limitations of the different steps of the metabolomics data processing workflow. A further objective is to develop tools capable of performing the entire metabolomics workflow for GC-MS, including pre-processing, spectral deconvolution, alignment, and identification. As a result, three new automated methods for spectral deconvolution based on blind source separation were developed. These methods were embedded into two computational tools able to automatically convert raw data into biologically interpretable information, and thus allow answering biological questions and discovering new biological insights.
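
    As one illustration of blind source separation applied to spectral deconvolution, the sketch below factors a synthetic GC-MS data matrix into elution profiles and component spectra using non-negative matrix factorization. The data, component count, and choice of NMF are assumptions for illustration, not the thesis's actual methods.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Simulate two co-eluting compounds: X (scans x m/z) = C @ S.
t = np.linspace(0, 1, 200)
C = np.stack([np.exp(-((t - 0.45) / 0.05) ** 2),    # elution profile 1
              np.exp(-((t - 0.55) / 0.05) ** 2)]).T  # overlapping profile 2
S = rng.random((2, 120))                             # two "pure" mass spectra
X = C @ S + 0.01 * rng.random((200, 120))            # raw signal + noise

# Factor the raw matrix back into elution profiles and spectra.
model = NMF(n_components=2, init="nndsvda", max_iter=500)
C_hat = model.fit_transform(X)   # recovered elution profiles
S_hat = model.components_        # recovered component spectra

print(C_hat.shape, S_hat.shape)  # (200, 2) (2, 120)
```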

    Replication, development and evaluation of a GDP indicator for the Swedish business cycle. Can the framework of ECB’s indicator ALI be used to create an indicator for the Swedish business cycle?

    This study shows that the framework used to construct the successful Euro Area Leading Indicator (ALI), developed by de Bondt and Hahn on behalf of the European Central Bank, can successfully be applied to the Swedish business cycle. The ALI-type indicator constructed in this study is a weighted index of nine macroeconomic series and accurately predicts the Swedish business cycle with a lead of three months (or longer). A data-driven indicator using Principal Component Analysis is constructed for comparison and leads the Swedish business cycle by seven months. The data-driven indicator has zero bias, whereas the ALI Sweden indicator slightly underestimates the business cycle but is slightly more accurate. Both indicators are efficient. An out-of-sample real-time evaluation confirms the indicators' good performance. A comparison of the series filtered with the HP filter and the one-sided RW filter shows that the RW filter is superior, as it removes high-frequency noise without introducing spurious cycles. The Service Production Index is found not to be a better proxy for the business cycle than the Industrial Production Index. This study suggests the use of both indicators: the ALI Sweden indicator for its accurate and transparent forecast, and the data-driven indicator for its long lead time and zero bias.
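
    A minimal sketch of the data-driven, PCA-based indicator idea follows; the panel of series, the target, and the lead-time evaluation are placeholders, not the study's actual data or procedure.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
dates = pd.date_range("2000-01-31", periods=240, freq="M")
panel = pd.DataFrame(rng.normal(size=(240, 9)), index=dates,
                     columns=[f"series_{i}" for i in range(9)])  # 9 macro series

# Standardize each series, then take the first principal component
# as the composite indicator.
z = (panel - panel.mean()) / panel.std()
pc1 = PCA(n_components=1).fit_transform(z.values).ravel()
indicator = pd.Series(pc1, index=dates, name="pca_indicator")

# Lead time can be gauged by cross-correlating the indicator with a
# business cycle reference series at various leads (placeholder target).
target = pd.Series(rng.normal(size=240), index=dates)
leads = {k: indicator.shift(k).corr(target) for k in range(13)}
print(max(leads, key=lambda k: abs(leads[k])), "month lead maximizes |corr|")
```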

    Non Linear Modelling of Financial Data Using Topologically Evolved Neural Network Committees

    Most artificial neural network modelling methods are difficult to use, as maximising or minimising an objective function in a non-linear context involves complex optimisation algorithms. Problems related to the efficiency of these algorithms are often mixed with the difficulty of estimating a priori a network's fixed topology for a specific problem, making it even harder to appreciate the real power of neural networks. In this thesis, we propose a method that overcomes these issues by using genetic algorithms to optimise a network's weights and topology simultaneously. The proposed method searches for virtually any kind of network, whether it is a simple feed-forward, recurrent, or even adaptive network. When the data is high-dimensional, modelling its often sophisticated behaviour is a very complex task that requires the optimisation of thousands of parameters. To help optimisation techniques overcome their limitations or failure, practitioners use methods to reduce the dimensionality of the data space. However, some of these methods are forced to make unrealistic assumptions when applied to non-linear data, while others are very complex and require a priori knowledge of the intrinsic dimension of the system, which is usually unknown and very difficult to estimate. The proposed method is non-linear and reduces the dimensionality of the input space without any information on the system's intrinsic dimension. This is achieved by first searching in a low-dimensional space of simple networks, and gradually making them more complex as the search progresses by elaborating on existing solutions. The high-dimensional space of the final solution is only encountered at the very end of the search. This increases the system's efficiency by guaranteeing that the network becomes no more complex than necessary. The modelling performance of the system is further improved by searching not for a single network as the ideal solution to a specific problem, but for a combination of networks. These committees of networks are formed by combining a diverse selection of network species from a population of networks derived by the proposed method. This approach automatically exploits the strengths and weaknesses of each member of the committee while avoiding having all members give the same bad judgements at the same time. In this thesis, the proposed method is used in the context of non-linear modelling of high-dimensional financial data. Experimental results are encouraging as far as both robustness and complexity are concerned.
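
    The idea of evolving weights and topology together, starting from simple networks and growing complexity only as needed, can be illustrated with a toy genetic algorithm. Everything below (the fitness task, encoding, and mutation rates) is a simplified assumption, not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]          # toy nonlinear target

def new_net(hidden):
    """One-hidden-layer network encoded as its weight matrices."""
    return {"W1": rng.normal(0, 1, (3, hidden)),
            "W2": rng.normal(0, 1, (hidden, 1))}

def predict(net, X):
    return np.tanh(X @ net["W1"]) @ net["W2"]

def fitness(net):
    return -np.mean((predict(net, X).ravel() - y) ** 2)  # higher is better

def mutate(net):
    # Perturb all weights; occasionally grow the topology by one hidden unit.
    child = {k: v + rng.normal(0, 0.1, v.shape) for k, v in net.items()}
    if rng.random() < 0.1:
        child["W1"] = np.hstack([child["W1"], rng.normal(0, 1, (3, 1))])
        child["W2"] = np.vstack([child["W2"], rng.normal(0, 1, (1, 1))])
    return child

# Start from the simplest networks and let complexity emerge gradually.
pop = [new_net(1) for _ in range(50)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(pop[i % 10]) for i in range(40)]

best = max(pop, key=fitness)
print("hidden units:", best["W1"].shape[1], "mse:", -fitness(best))
```

    A committee in the spirit of the thesis would combine the predictions of a diverse subset of the final population rather than relying on the single best network.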

    Essays on the nonlinear and nonstochastic nature of stock market data

    The nature and structure of stock-market price dynamics is an area of ongoing and rigorous scientific debate. For almost three decades, most emphasis has been given to upholding the concepts of Market Efficiency and rational investment behaviour. Such an approach has favoured the development of numerous linear and nonlinear models, mainly of stochastic foundations. Advances in mathematics have shown that nonlinear deterministic processes, i.e. "chaos", can produce sequences that appear random to linear statistical techniques. Until recently, investment finance was a science based on linearity and stochasticity. Hence it is important that studies of Market Efficiency include investigations of chaotic determinism and power laws. As far as chaos is concerned, research results are rather mixed or inconclusive, and fraught with controversy. This inconclusiveness is attributed to two things: the nature of stock market time series, which are highly volatile and contaminated with a substantial amount of noise of largely unknown structure, and the lack of appropriate robust statistical testing procedures. To overcome such difficulties, this thesis shows empirically, and for the first time, how one can combine novel techniques from the recent chaotic and signal analysis literature within a univariate time series analysis framework. Three basic methodologies are investigated: recurrence analysis, surrogate data, and wavelet transforms. Recurrence analysis is used to reveal qualitative and quantitative evidence of nonlinearity and nonstochasticity for a number of stock markets. It is then demonstrated how surrogate data can be simulated, under a statistical hypothesis testing framework, to provide similar evidence. Finally, it is shown how wavelet transforms can be applied to reveal various salient features of the market data and provide a platform for nonparametric regression and denoising. The results indicate that, without invoking any parametric model-based assumptions, one can easily deduce that there is more to the data than linearity and stochastic randomness. Moreover, substantial evidence of recurrent patterns and aperiodicities is discovered, which can be attributed to chaotic dynamics. These results are therefore very consistent with existing research indicating some types of nonlinear dependence in financial data. In conclusion, the value of this thesis lies in its contribution to the overall evidence on Market Efficiency and chaotic determinism in financial markets. The main implication is that the theory of equilibrium pricing in financial markets may need reconsideration in order to accommodate the structures revealed.
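
    The surrogate data methodology admits a compact illustration: phase-randomized surrogates preserve a series' linear (spectral) structure, so a discriminating statistic that falls outside the surrogate distribution is evidence of nonlinearity. The sketch below uses a stand-in return series and a simple time-reversal-asymmetry statistic; the thesis's actual data and statistics differ.

```python
import numpy as np

rng = np.random.default_rng(7)
returns = rng.standard_t(df=3, size=2048)        # stand-in for real returns

def phase_randomized(x, rng):
    """Surrogate with the same power spectrum but randomized phases."""
    f = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, f.size)
    phases[0] = 0.0                              # keep the mean intact
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=x.size)

def stat(x):
    """Simple nonlinearity statistic: time-reversal asymmetry."""
    return np.mean((x[1:] - x[:-1]) ** 3)

observed = stat(returns)
surrogate_stats = np.array([stat(phase_randomized(returns, rng))
                            for _ in range(999)])
# Two-sided test: how often do surrogates match or exceed the observed value?
p_value = np.mean(np.abs(surrogate_stats) >= np.abs(observed))
print(f"observed={observed:.4f}, p={p_value:.3f}")
```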

    Evaluation of commercially available post-consumer recycled PET to produce bottles for mineral water

    Polyethylene terephthalate (PET) is the most important polymer for the production of bottles for water and soft drinks, and reducing waste by recycling this material is increasingly important globally. In this work, six PET resins from different mechanical recycling processes, with positive opinions from EFSA, were evaluated for colour, intrinsic viscosity, melting temperature, and the concentrations of benzene, limonene, oligomers, and non-intentionally added substances (NIAS). Regarding benzene, limonene, and oligomers, the samples in this study show higher concentrations than those reported in the literature: 30-410 μg kg⁻¹ PET for benzene, 20-66 μg kg⁻¹ PET for limonene, 52-78 mg kg⁻¹ PET for the PET dimer, and 999-1394 mg kg⁻¹ PET for the trimer. The concentrations of unknowns and NIAS detected in the resins yield estimated exposure levels (considering a bottle of 8.5 g and 0.3 L) lower than that corresponding to Cramer Class 3 of the TTC approach for toxicological risk. Univariate parametric statistical analysis grouped the samples into three homogeneous subsets: the first comprising samples IN, NO, and F; the second samples F, FBL, and BA; and the third sample MO alone. Independent component analysis (ICA) confirmed some results of the univariate model: samples MO and BA correlate through their nonanal contents, F and FBL through ethylhexyl acetate, dodecane, and diphenyl ether, and FBL and IN through farnesene. NO was the only sample showing no correlation with the others.
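
    For a sense of scale, the exposure estimate mentioned above can be illustrated with simple arithmetic, under the common worst-case screening assumption that 100% of a substance in the bottle wall migrates into the beverage; the thesis's exact convention may differ.

```python
# Worst-case migration screening: all of the migrant in the bottle wall
# is assumed to transfer into the drink (a conservative assumption).
BOTTLE_MASS_KG = 0.0085   # 8.5 g bottle
VOLUME_L = 0.3            # 0.3 L of water

def worst_case_exposure(conc_mg_per_kg_pet):
    """Concentration in the drink (mg/L) if 100% of the migrant transfers."""
    return conc_mg_per_kg_pet * BOTTLE_MASS_KG / VOLUME_L

# Upper ends of the concentration ranges reported above:
print(f"benzene: {worst_case_exposure(0.410):.5f} mg/L")  # 410 ug/kg PET
print(f"trimer : {worst_case_exposure(1394):.3f} mg/L")   # 1394 mg/kg PET
```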

    Rapid sensing of volcanic SO₂ fluxes using a dual ultraviolet camera system: new techniques and measurements at Southern Italian volcanoes

    Volatiles carry crucial information on pre- to syn-eruptive processes at active volcanoes. Measurements of gas emission rates (crater plumes, fumaroles, diffuse soil degassing) therefore improve our understanding of degassing processes and subsurface magmatic and hydrothermal conditions, and contribute to eruption forecasting. Over the last 30 years, technological developments in spectroscopy have allowed the remote sensing of magmatic volatile emissions from quiescently degassing and erupting volcanoes. These datasets have contributed to discovering cyclic gas flux components due to periodic magma supply and replenishment in magma storage zones, and/or to constraining timescales of magma migration (and degassing) within the feeding conduit systems of volcanoes (chapter 2). In spite of these achievements, a number of magmatic degassing processes have remained elusive to measure, as they occur at a faster rate than the time resolution of most available spectroscopic techniques. In this study, I take advantage of a novel technique, the UV camera (chapter 3), to image SO2 emissions from Italian volcanoes at improved temporal resolution. The UV camera heralds the much-awaited prospect of capturing transient (≤ tens of seconds) volcanic gas-driven phenomena, such as Strombolian explosions and puffing. Here, this technique has been updated to a new configuration (dual-camera system), which combines higher temporal resolution (0.5-1.2 Hz) and improved accuracy relative to the single-camera setup. During the first year of this PhD, the methodology was extensively tested and improved, while developing user-friendly control software (Vulcamera) and a calibration technique (in-tandem DOAS-SO2 quartz cell calibration), which simplify instrument deployment, acquisition, and data analysis (chapter 4). The results of the volcano applications of the UV camera are described in chapter 5. A first application (chapter 5.2) focused on SO2 flux measurements at individual fumaroles of the La Fossa crater (Vulcano island, Italy) fumarolic field. There, the dual-UV-camera technique allowed the simultaneous imaging of multiple-source emissions, discriminating between the SO2 contributions of the four main fumarolic areas. The UV camera-derived SO2 fluxes of individual fumaroles were used in tandem with MultiGAS-derived gas/SO2 molar ratios to accurately assess CO2, H2O, and H2S fluxes. Results highlight a factor ~2 increase in CO2 and H2O degassing during the La Fossa crater degassing/heating unrest event of November-December 2009. Bubble nucleation (birth), coalescence (growth), outgassing and fragmentation (death) are stages of a volatile's life within the magma. Our understanding of these processes mainly comes from modelling and textural studies. In this work, I have attempted to retrace part of the gas bubbles' life by measuring, at high rate, SO2 outgassing rates from two open-vent volcanoes: Stromboli and Etna. At Stromboli (chapter 5.3), the UV camera-derived data allowed the first simultaneous estimate of the SO2 flux contributions of the three main forms of degassing at Stromboli (passive degassing, 84-92%; explosive degassing, 5-8%; puffing, 3-8%). The high-frequency SO2 flux time series also revealed a periodic SO2 degassing pattern over timescales of minutes, modulated by rhythmic Strombolian explosions. I also report on systematic in-tandem UV camera and geophysical observations. Among the key results, I provide experimental evidence for a positive correlation between seismic (very-long-period; VLP), thermal, and gas (eruptive SO2 mass) signals radiated by individual Strombolian explosions. During each Strombolian event, the onset of the SO2 flux emission systematically coincides with deflation of the conduit upon gas slug bursting during the explosion. At Mount Etna (chapter 5.4), degassing mechanisms and rates were studied during two field campaigns on Pizzi Dineri (northern rim of the Valle del Bove), from which a clear view of the pulsating gas emissions (gas puffing) from the North-east crater was available. The >10 hours of acquired SO2 flux time series highlighted a periodic degassing behaviour for this vent, with characteristic periods in the 60-250 s range. This allows new constraints to be derived on models of gas bubble distribution in a magmatic conduit. The data obtained here support a process of gas packaging into trains of discrete bubble-rich layers. This, coupled with time variations in the ascent rate of individual gas bubble layers, may well account for the time-dependent periodicity of the observed volcanic SO2 flux emissions.
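
    Extracting characteristic degassing periods from an SO2 flux time series is a standard spectral estimation task. Below is a minimal sketch using a Welch periodogram on a synthetic ~1 Hz series; the sampling rate, record length, and the 150 s cycle are placeholders, not the thesis's data.

```python
import numpy as np
from scipy.signal import welch

fs = 1.0                                   # assumed 1 Hz camera frame rate
t = np.arange(0, 4 * 3600, 1 / fs)         # 4 hours of synthetic data
rng = np.random.default_rng(3)
flux = (10 + 3 * np.sin(2 * np.pi * t / 150)   # 150 s degassing cycle
        + rng.normal(0, 1, t.size))            # measurement noise

# Welch periodogram; the dominant non-DC peak gives the characteristic period.
freqs, power = welch(flux, fs=fs, nperseg=4096)
peak = freqs[np.argmax(power[1:]) + 1]     # skip the zero-frequency bin
print(f"dominant period ~ {1 / peak:.0f} s")
```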