15 research outputs found

    Recent Advances in Single-Particle Tracking: Experiment and Analysis

    This Special Issue of Entropy, titled “Recent Advances in Single-Particle Tracking: Experiment and Analysis”, contains a collection of 13 papers concerning different aspects of single-particle tracking, a popular experimental technique that has deeply penetrated molecular biology and statistical and chemical physics. Presenting original research, yet written in an accessible style, this collection will be useful both for newcomers to the field and for more experienced researchers looking for a reference. Several papers are written by authorities in the field, and the topics cover aspects of experimental setups, analytical methods of tracking data analysis, machine learning approaches to data and, finally, some more general issues related to diffusion.
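A canonical first step in analysing tracking data is the time-averaged mean squared displacement (MSD). The sketch below is a minimal illustration of the idea only, not code from any paper in the issue:

```python
import numpy as np

def time_averaged_msd(traj, max_lag):
    """Time-averaged MSD of one trajectory of shape (T, d),
    for time lags 1..max_lag."""
    traj = np.asarray(traj, dtype=float)
    return np.array([
        np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
        for lag in range(1, max_lag + 1)
    ])

# For a 1D trajectory moving one unit per step, the MSD grows as lag^2
# (ballistic motion): 1, 4, 9, ...
msd = time_averaged_msd([[0.0], [1.0], [2.0], [3.0], [4.0]], max_lag=3)
```

How the MSD scales with lag time is what separates Brownian from anomalous diffusion, which is why it is the usual entry point for the analysis methods the issue surveys.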

    Algorizmi: A Configurable Virtual Testbed to Generate Datasets for Offline Evaluation of Intrusion Detection Systems

    Intrusion detection systems (IDSes) are an important security measure that network administrators adopt to defend computer networks against malicious attacks and intrusions. The field of IDS research includes many challenges. However, one open problem remains orthogonal to the others: IDS evaluation. In other words, researchers have not yet agreed on a general systematic methodology and/or a set of metrics to fairly evaluate different IDS algorithms. This leads to another problem: the lack of an appropriate IDS evaluation dataset that satisfies common research needs. One major contribution in this area is the DARPA dataset offered by the Massachusetts Institute of Technology Lincoln Lab (MIT/LL), which has been extensively used to evaluate a number of IDS algorithms proposed in the literature. Despite this, the DARPA dataset has received much criticism concerning the way it was designed, especially its obsolescence and inability to incorporate new sorts of network attacks. In this thesis, we survey previous research projects that attempted to provide a system for offline IDS evaluation. From the survey, we identify a set of design requirements for such a system based on the research community's needs. We then propose Algorizmi as an open-source configurable virtual testbed for generating datasets for offline IDS evaluation. We provide an architectural overview of Algorizmi and its software and hardware components. Algorizmi provides its users with tools that allow them to create their own experimental testbeds using the concepts of virtualization and cloud computing. Algorizmi users can configure the virtual machine instances running in their experiments, select what background traffic those instances will generate and what attacks will be launched against them.
At any point in time, an Algorizmi user can generate a dataset (network traffic trace) for any of her experiments, so that she can use this dataset afterwards to evaluate an IDS the same way the DARPA dataset is used. Our analysis shows that Algorizmi satisfies more requirements than previous research projects that target the same problem of generating datasets for offline IDS evaluation. Finally, we demonstrate the utility of Algorizmi by building a sample network of machines and generating both background and attack traffic within that network. We then download a snapshot of the dataset for that experiment and run it against the Snort IDS. Snort successfully detected the attacks we launched against the sample network. Additionally, we evaluate the performance of Algorizmi while processing some of the common usages of a typical user based on five metrics: CPU time, CPU usage, memory usage, network traffic sent/received and execution time.
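Metric collection of the kind used in the performance evaluation can be sketched in a few lines of Python. This is an illustrative stand-in, not Algorizmi's actual tooling, and it measures only per-process CPU time, wall time and peak memory:

```python
import time
import tracemalloc

def profile(workload):
    """Run `workload` and report CPU time, wall time and peak Python
    memory allocation -- analogues of three of the five metric types
    used in the offline evaluation."""
    tracemalloc.start()
    cpu0 = time.process_time()
    wall0 = time.perf_counter()
    workload()
    cpu = time.process_time() - cpu0
    wall = time.perf_counter() - wall0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"cpu_s": cpu, "wall_s": wall, "peak_mem_bytes": peak}

stats = profile(lambda: sum(i * i for i in range(100000)))
```

Network traffic sent/received, the remaining metric, would require OS-level counters and is omitted from this sketch.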

    Deformation-Induced Electric Currents in Marble Under Simulated Crustal Conditions: Non-Extensivity, Superstatistical Dynamics and Implications for Earthquake Hazard

    This thesis investigates electric current signals generated spontaneously in specimens of Carrara marble during deformation under crustal conditions. It extends previous work where similar currents were observed during uniaxial deformation of marble. Since marble is a non-piezoelectric material, one of the main questions is how these currents are related to the mechanical processes of deformation. Another question is whether it is possible to extract from these electric currents information about the deformation dynamics. This is particularly important in light of recent claims that geoelectric anomalies observed in the field are related to crustal deformation and can inform us about changes in the organisation of the fault network in a focal region prior to an earthquake. Using an approach that combines rock deformation experiments and statistical modelling, I examine how these electric currents evolve with deformation at the laboratory scale and make several original discoveries regarding their behaviour. To establish how the current signals varied with experimental conditions and deformation mechanism across the brittle-ductile transition, I conducted constant strain rate triaxial compression experiments recording differential electric current flow through the rock samples at various confining pressures, strain rates and pore fluid conditions. I acquired mechanical data, ultrasonic velocities and acoustic emissions simultaneously, along with electric current, to constrain the relationship between electric current and deformation. For the statistical modelling, I used a novel entropy-based model, derived from non-extensive statistical mechanics (Tsallis, 1988), which has the advantage of including a term to account for interactions in the system. Interactions are effectively modelled by the non-extensive q-parameter. Small (nanoampere) electric currents are generated and sustained during deformation under all the conditions tested.
Spontaneous electric current flow in the dry samples is seen only in the region of permanent deformation and is due to the presence of localised electric dipoles. This current flow is correlated to the damage induced by microcracking, with a contribution from other intermittent ductile mechanisms. Current and charge densities are consistent with proposed models of crack separation charging and migrating charged edge dislocations. The onset of current flow occurs only after a 10% reduction in P-wave velocity, implying that some degree of crack damage and/or crack connectivity is required before current will flow through the samples. Electric current evolution exhibits three separate time-scales of behaviour, the absolute and fluctuating components of which can be related to the evolution of stress, deformation mechanism, damage and localisation of deformation leading up to sample failure. In the brittle regime, electric current exhibits a precursory change as the stress drop accelerates towards failure, which is particularly distinct at dynamic strain rates. Current and charge production depend strongly on the experimental conditions. Power-law relationships are seen with confining pressure and strain rate, with the first corresponding to increased microcrack suppression and the second to time-dependent differences in deformation mechanism across the brittle-ductile transition. In the presence of an ionic pore fluid, electrokinetic effects dominate over solid-state mechanisms but development of the crack network and charge contribution from solid-state deformation processes drive the variation in electrokinetic parameters. Current flow in the dry samples is approximately proportional to stress within 90% of peak stress. In the fluid-saturated samples, proportionality holds from 40% peak stress, with a significant increase in the rate of current production from 90% peak stress, and is associated with fluid flow during dilatancy. 
This proportionality, together with the power-law relationship between current and strain rate, is reminiscent of power-law creep, where deformation rate varies as a power-law function of stress, and suggests that the electric signals could be used as a proxy for stress. High-frequency fluctuations in the electric current signal can be described by 'fat-tailed' q-Gaussian statistics, consistent with an origin in non-extensive statistical mechanics. These distributions can be explained as arising from superstatistical dynamics (Beck, 2001; Beck and Cohen, 2003), i.e., the superposition of local mechanical relaxations in the presence of a slowly varying driving force. The macroscopic distribution parameters provide an excellent prediction of the experimentally observed mean energy dissipation rate of the system (as modelled by the superstatistical β-parameter), particularly at slow strain rates. Furthermore, characteristic q-values are obtained for different deformation regimes across the brittle-ductile transition, and the evolution of q during deformation reveals a two-stage precursory anomaly prior to sample failure, consistent with the stress intensity evolution as modelled from fracture mechanics. These findings indicate that the dynamics of rock deformation are reflected in the statistical properties of the recorded electric current. My findings support the notion that electric currents in the crust can be generated purely from deformation processes themselves. Scaling up the laboratory results to large stressed rock volumes at shallow crustal pressures and constant crustal strain rates, deformation-induced transient telluric current systems may be as large as 1 MA, even accounting for >99% dissipation, which corresponds to a huge accumulated net charge of 10 ZC.
This implies that a significant amount of charge from deforming tectonic regions contributes to the Earth's telluric currents and electric field, although due to conduction away from the stressed rock volume, it is unlikely that accumulated charge of this quantity would ever be measured in the field. Electric current evolution and its precursory characteristics can be related to models for electric earthquake precursors and fault-zone damage organisation, developed from field observations, providing experimental support for them. However, given the oscillatory nature of the current evolution observed during cataclastic flow processes in the laboratory, there is a high probability of false alarms. Furthermore, the potential for electric anomalies to be useful as earthquake precursors remains contentious due to the difficulties of separating deformation-induced signals from other telluric noise and the wider issue of establishing a statistically significant link with earthquakes.
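The q-Gaussian distributions referred to above have a simple closed form built on the Tsallis q-exponential. The following sketch is illustrative only (not the thesis's analysis code) and shows how the q > 1 family develops the 'fat' power-law tails mentioned in the abstract:

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential: reduces to exp(x) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * np.asarray(x, dtype=float)
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

def q_gaussian(x, q, beta):
    """Unnormalised q-Gaussian density: e_q(-beta * x^2)."""
    return q_exponential(-beta * np.asarray(x, dtype=float) ** 2, q)

# For q = 1.5, beta = 1 the density at x = 5 is [1 + 0.5 * 25]^(-2),
# a power-law tail vastly heavier than the ordinary Gaussian e^(-25).
fat_tail = q_gaussian(5.0, q=1.5, beta=1.0)
gaussian_tail = np.exp(-25.0)
```

Fitting q and beta to the high-frequency current fluctuations is what links the recorded signal to the superstatistical interpretation described above.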

    Weather persistence on sub-seasonal to seasonal timescales: a methodological review

    Persistence is an important concept in meteorology. It refers to surface weather or the atmospheric circulation either remaining in approximately the same state (quasi-stationarity) or repeatedly occupying the same state (recurrence) over some prolonged period of time. Persistence can be found at many different timescales; however, the sub-seasonal to seasonal (S2S) timescale is especially relevant in terms of impacts and atmospheric predictability. For these reasons, S2S persistence has been attracting increasing attention from the scientific community. The dynamics responsible for persistence and their potential evolution under climate change are a notable focus of active research. However, one important challenge facing the community is how to define persistence, from both a qualitative and a quantitative perspective. Despite a general agreement on the concept, many different definitions and perspectives have been proposed over the years, among which it is not always easy to find one's way. The purpose of this review is to present and discuss existing concepts of weather persistence, associated methodologies and physical interpretations. In particular, we call attention to the fact that persistence can be defined as a global or as a local property of a system, with important implications in terms of methods but also impacts. We also highlight the importance of timescale and similarity metric selection, and illustrate some of the concepts using the example of summertime atmospheric circulation over western Europe.
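The quasi-stationarity versus recurrence distinction can be made concrete with two toy measures for a scalar time series. These are deliberate simplifications for illustration, not metrics prescribed by the review:

```python
import numpy as np

def autocorrelation(x, lag):
    """Lagged self-similarity of a (non-constant) series: a simple
    quasi-stationarity-type persistence measure."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

def recurrence_fraction(x, reference, tol):
    """Fraction of time steps whose state returns to within `tol` of a
    reference state: a recurrence-type persistence measure."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(np.abs(x - reference) < tol))
```

A slowly varying series scores high on the first measure; an oscillating series that keeps revisiting the same state scores high on the second while its lagged similarity may be low, which is exactly why the choice of metric matters.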

    The Chern-Simons current in systems of DNA-RNA transcriptions

    A Chern-Simons current, coming from ghost and anti-ghost fields of supersymmetry theory, can be used to define a spectrum of gene expression in new time-series data, where a spinor field is adopted as an alternative representation of a gene instead of the standard alphabet sequence of bases A, T, C, G, U. After a general discussion of the use of supersymmetry in biological systems, we give examples of the use of supersymmetry for living organisms, discuss the codon and anti-codon ghost fields, and develop an algebraic construction for trash DNA, the DNA region that does not seem to be active in biological systems. As a general result, all hidden states of codons can be computed by Chern-Simons 3-forms. Finally, we plot a time series of genetic variations of a viral glycoprotein gene and a host T-cell receptor gene by using a gene tensor correlation network related to the Chern-Simons current. An empirical analysis of genetic shift in host cell receptor genes, with separated clusters of genes, and of genetic drift in the viral gene is obtained by using a tensor correlation plot over time-series data derived as the empirical mode decomposition of the Chern-Simons current.
    Comment: 45 pages, 30 figures, 2 tables

    Anomalous diffusion : from life to machines

    Diffusion, the phenomenon by which particles and bodies of all kinds move through all kinds of materials, has emerged as one of the most prominent subjects in the study of complex systems. Motivated by recent developments in experimental techniques, the field has seen an important burst of theoretical research, particularly in the study of the motion of particles in biological environments. With just the information retrieved from particle trajectories, we are now able to characterize many properties of a system with astonishing accuracy. For instance, when Einstein introduced the theory of diffusion back in 1905, he used the motion of microscopic particles to calculate the size of the atoms of the liquid in which the particles were suspended. Initially, most of the experimental evidence showed that such systems follow Brownian-like dynamics, i.e. the homogeneous interaction between the particles and the environment leads to stochastic but uncorrelated motion. However, we now know that such a simple explanation misses crucial phenomena that have been shown to arise in a plethora of physical systems. The divergence from Brownian dynamics led to the theory of anomalous diffusion, in which the particles are affected in one way or another by their interactions with the environment such that their diffusion changes drastically. Features such as ergodicity, Gaussianity, or ageing are now crucial in the understanding of diffusion processes well beyond Brownian motion. In theoretical terms, anomalous diffusion has a well-developed framework, able to explain most of the current experimental observations. However, it has usually focused on describing systems in terms of their macroscopic behaviour. This means that the processes are described by means of general models, able to predict the average or collective features.
Even though such an approach leads to a correct description of the system and hints at the actual underlying phenomena, it misses the particular microscopic interactions leading to anomalous diffusion. The work presented in this thesis has two main goals. First, we explore how one may use microscopic (or phenomenological) models to understand anomalous diffusion. By a microscopic model we mean a model in which we set exactly how the various components of a system interact. We then explore how these interactions may be tuned in order to recover and control anomalous diffusion, and how its features depend on the properties of the system. We explore crucial topics arising in recent experimental observations, such as weak ergodicity breaking and liquid-liquid phase separation. Second, we survey the topic of trajectory characterization. Even if our theories are extremely well developed, without an accurate tool for studying the trajectories observed in experiments we would be unable to make any faithful prediction. In particular, we introduce one of the first machine learning techniques that can be used for this purpose, even in systems where previous techniques largely failed.
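A standard way to characterize a trajectory's deviation from Brownian motion is the anomalous exponent α in MSD ∼ K·t^α (α = 1 for Brownian, α < 1 subdiffusive, α > 1 superdiffusive). The following log-log fit is a minimal illustrative sketch, not the machine learning characterization the thesis introduces:

```python
import numpy as np

def anomalous_exponent(lags, msd):
    """Estimate alpha in MSD ~ K * t^alpha by a degree-1 least-squares
    fit in log-log space; the slope is the anomalous exponent."""
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return float(slope)

lags = np.arange(1, 11)
alpha = anomalous_exponent(lags, 2.0 * lags ** 0.7)  # subdiffusive sample
```

Simple fits like this degrade badly on short, noisy experimental trajectories, which is precisely the regime where learned estimators of the kind developed in the thesis become necessary.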

    Dynamical Systems

    Complex systems are pervasive in many areas of science and are integrated into our daily lives. Examples include financial markets, highway transportation networks, telecommunication networks, world and country economies, social networks, immunological systems, living organisms, computational systems and electrical and mechanical structures. Complex systems are often composed of a large number of interconnected and interacting entities, exhibiting much richer global-scale dynamics than the properties and behavior of their individual entities. They are studied in many areas of the natural sciences, social sciences, engineering and mathematical sciences. This special issue therefore intends to contribute towards the dissemination of the multifaceted concepts in accepted use by the scientific community. We hope readers enjoy this pertinent selection of papers, which represents relevant examples of the state of the art in present-day research.

    Numerical Modeling Of Collision And Agglomeration Of Adhesive Particles In Turbulent Flows

    Particle motion, clustering and agglomeration play an important role in natural phenomena and industrial processes. In classical computational fluid dynamics (CFD), there are three major methods which can be used to predict the flow field and consequently the behavior of particles in flow fields: 1) direct numerical simulation (DNS), which is very expensive and time consuming; 2) large eddy simulation (LES), which resolves the large-scale but not the small-scale fluctuations; and 3) Reynolds-averaged Navier-Stokes (RANS), which can only predict the mean flow. In order to make LES and RANS usable for studying the behavior of small suspended particles, we need to introduce small-scale fluctuations to these models, since these small scales have a huge impact on particle behavior. The first part of this dissertation both extends and critically examines a new method for the generation of small-scale fluctuations for use with RANS simulations. This method, called the stochastic vortex structure (SVS) method, uses a series of randomly positioned and oriented vortex tubes to induce the small-scale fluctuating flow. We first use SVS in isotropic homogeneous turbulence and validate the predicted flow characteristics and the collision and agglomeration of particles from the SVS model against full DNS computations. The calculation speed for the induced velocity from the vortex structures is improved by about two orders of magnitude using a combination of the fast multipole method and a local Taylor series expansion. Next we turn to the problem of extending the SVS method to more general turbulent flows. We propose an inverse method by which the initial vortex orientation can be specified to generate a specific anisotropic Reynolds stress field. The proposed method is validated for turbulence measures and colliding particle transport in comparison to DNS for turbulent jet flow.
The second part of the dissertation uses DNS to examine in more detail two issues raised during development of the SVS model. The first issue concerns the effect of two-way coupling on the agglomeration of adhesive particles. The SVS model as developed to date does not account for the effect of particles on the flow field (one-way coupling). We focus on examination of the local flow around agglomerates and the effect of agglomeration on modulation of the turbulence. The second issue concerns the microphysics of turbulent agglomeration, examined through the breakup and collision of agglomerates in a shear flow. DNS results are reported both for one agglomerate in shear and for the collision of two agglomerates, with a focus on the physics and role of the particle-induced flow field in the particle dynamics.
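The velocity induced at a point by a set of vortex elements can be sketched with a regularised Biot-Savart sum. This is a schematic illustration of the idea behind vortex-based fluctuation generation, not the dissertation's SVS implementation; the element representation and the `core` smoothing parameter are assumptions of this sketch:

```python
import numpy as np

def induced_velocity(point, centers, directions, strengths, core=0.1):
    """Velocity induced at `point` by short straight vortex elements via
    a regularised Biot-Savart kernel: u ~ sum Gamma (dl x r) / |r|^3,
    with a core radius added to avoid the singularity at r = 0."""
    r = point - centers                       # (N, 3) separation vectors
    dist2 = np.sum(r * r, axis=1) + core**2   # regularised squared distance
    cross = np.cross(directions, r)           # dl x r for each element
    coeff = strengths / (4.0 * np.pi * dist2 ** 1.5)
    return (coeff[:, None] * cross).sum(axis=0)

# Randomly positioned and oriented elements, as in an SVS-like setup.
rng = np.random.default_rng(0)
n = 100
centers = rng.uniform(-1.0, 1.0, (n, 3))
directions = rng.normal(size=(n, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
strengths = rng.normal(size=n)
v = induced_velocity(np.zeros(3), centers, directions, strengths)
```

Evaluating this sum directly at many points costs O(N) per point, which is what fast-multipole-style accelerations of the kind cited above are designed to avoid.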

    Impacts of climate variability and climate change on renewable power generation

    Anthropogenic climate change represents a major risk for human civilization and its mitigation requires reductions of greenhouse gas emissions. To stay consistent with the long-term temperature targets of international climate policy, global greenhouse gas emissions have to reach zero within a few decades. Such a dramatic transition towards sustainability in all sectors of human activity requires the decarbonization of power generation at an early stage. In the absence of other viable technology choices and given the significant cost declines, renewable power generation forms the backbone of the decarbonization. In contrast to thermal power plants, most renewables are not dispatchable; their generation dynamics are governed by the weather. This dissertation adds to the quantification of impacts of climate variability on wind power generation across different time scales. In particular, it shows that inter-annual wind power generation variability already has a strong influence on congestion management costs in Germany today. Understanding this variability as a normal system feature helps to prevent short-sighted reactions in legislation and power system design. Moreover, it is shown that relevant multi-decadal wind power generation variability exists. Owing to timescales of up to 50 years, these modes are not sufficiently sampled in any modern reanalysis (e.g., MERRA2 or ERA-Interim), which currently cover around 40 years. Consequently, power system assessments based on modern reanalyses may be flawed and should be complemented by multi-decadal assessments. In this context, I also show that 20th century reanalyses (ERA-20C, CERA20C, 20CRv2c) disagree strongly and systematically with respect to long-term wind speed trends. The discrepancy can be traced back to marine wind speed observations, which also feature strong upward wind trends that are likely due to an evolving measurement technique.
As a consequence, 20th century reanalyses should be employed with care, and cross-validation of results is recommended. Due to their weather dependency, renewables are potentially vulnerable to climate change. Indeed, I show that the benefits of large-scale transmission infrastructure in Europe shrink under strong climate change (RCP8.5). The effect is robust across a five-member EURO-CORDEX ensemble and can be solidified in a larger CMIP5 ensemble. It is rooted in more homogeneous wind conditions over Europe that lead to weaker smoothing effects via large-scale spatial integration. Lastly, the debate around negative emission technologies to enlarge the carbon budget currently focuses on land-based approaches such as Bioenergy with Carbon Capture and Storage. Based on a schematic integration of Direct Air Capture (DAC), we show that its flexibility complements renewable generation variability and can help to integrate large shares of renewables.