
    Unitary and nonunitary approaches in quantum field theory

    We use a simplified essential-state model to compare two quantum field theoretical approaches to the creation of electron-positron pairs from the vacuum. In the unitary approach the system is characterized by a state with varying numbers of particles, described by occupation numbers, that evolves with conserved norm. The nonunitary approach can predict the evolution of wave functions and density operators with a fixed number of particles but time-dependent norms. As an example illustrating the differences between the two approaches, we examine the degree of entanglement for the Klein paradox, which describes the creation of an electron-positron pair from the vacuum in the presence of an initial electron. We demonstrate how Pauli blocking by the initial electron comes at the expense of a gain in entanglement of this electron with the created electron as well as with the created positron.
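    A minimal sketch of the distinction, in a toy two-state essential model (the state ansatz, the constant decay rate Γ, and the entropy formula below are illustrative assumptions, not quantities taken from the paper):

```latex
% Unitary picture: a single Fock-space state spanning particle numbers,
% evolving with conserved norm:
\[
  |\Psi(t)\rangle = c_0(t)\,|0\rangle + c_1(t)\,|e^-e^+\rangle ,
  \qquad |c_0(t)|^2 + |c_1(t)|^2 = 1 .
\]
% Nonunitary picture: a fixed-particle-number wave function evolving under
% an effective non-Hermitian Hamiltonian, so the norm is time dependent:
\[
  i\,\partial_t \psi = \Bigl(H - \tfrac{i}{2}\,\Gamma\Bigr)\psi
  \quad\Longrightarrow\quad
  \lVert\psi(t)\rVert^2 = e^{-\Gamma t}\,\lVert\psi(0)\rVert^2 .
\]
% Entanglement between the electron and positron sectors can then be
% quantified via a reduced density operator and its von Neumann entropy:
\[
  \rho_{e^-} = \operatorname{Tr}_{e^+}\,|\Psi\rangle\langle\Psi| ,
  \qquad
  S = -\operatorname{Tr}\bigl(\rho_{e^-}\ln\rho_{e^-}\bigr) .
\]
```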

    Dynamic Analysis of Executables to Detect and Characterize Malware

    Ensuring the integrity of systems that process sensitive information and control many aspects of everyday life is essential. We examine the use of machine learning algorithms to detect malware from the system calls generated by executables, which mitigates attempts at obfuscation because the behavior is monitored rather than the bytes of the executable. We examine several machine learning techniques for detecting malware, including random forests, deep learning techniques, and liquid state machines. The experiments examine the effects of concept drift on each algorithm, to understand how well the algorithms generalize to novel malware samples, by testing them on data collected after the training data. The results suggest that each of the examined machine learning algorithms is a viable solution for detecting malware, achieving between 90% and 95% class-averaged accuracy (CAA). In real-world scenarios, performance on an operational network may not match the performance achieved in training: the CAA may be about the same, but precision and recall over the malware can change significantly. We structure experiments to highlight these caveats and offer insights into expected performance in operational environments. In addition, we use the induced models to better understand what differentiates the malware samples from the goodware; this can serve as a forensics tool to understand what the malware (or goodware) was doing and to provide directions for investigation and remediation.
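    A hedged sketch of the evaluation pipeline this abstract describes, on synthetic data: syscall traces as token strings, n-gram count features, a random forest, and class-averaged accuracy (the unweighted mean of per-class recalls, i.e. sklearn's balanced accuracy). The trace generator and all its parameters are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
CALLS = ["open", "read", "write", "close", "fork", "exec", "connect", "send"]

def trace(malicious: bool, n: int = 80) -> str:
    # Toy behaviour model: "malware" leans on exec/network calls.
    w = np.array([1, 1, 1, 1, 3, 3, 4, 4.] if malicious else [3, 4, 4, 3, 1, 1, 1, 1.])
    return " ".join(rng.choice(CALLS, size=n, p=w / w.sum()))

X_text = [trace(True) for _ in range(200)] + [trace(False) for _ in range(200)]
y = np.array([1] * 200 + [0] * 200)

# Behavioural features: syscall unigram/bigram counts.
vec = CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+")
X = vec.fit_transform(X_text)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# balanced_accuracy_score is exactly the class-averaged accuracy (mean recall).
print("CAA:", round(balanced_accuracy_score(y_te, clf.predict(X_te)), 3))
```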

    Managed Control of Composite Cloud Systems

    Cloud providers have just begun to provide primitive functionality enabling users to configure and easily provision resources, primarily in the infrastructure-as-a-service domain. In order to manage cloud resources effectively in an automated fashion, systems must automate quality-of-service (QoS) metric measurement as part of a larger usage management strategy. Collected metrics can then be used within control loops to manage and provision cloud resources. This basic approach can be scaled to monitor the use of system artifacts as well as simple QoS parameters, and can also address the needs of large systems spanning the boundaries of single service providers, though the problem then seems to be moving toward intractability.
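    A minimal sketch of the metric-driven control loop the abstract describes, assuming a proportional controller and hypothetical monitoring/provisioning calls (measure_latency, set_instances) in place of any real provider API; the "cloud" here is a toy model whose latency falls as capacity grows:

```python
TARGET_MS = 100.0      # QoS target for mean request latency
GAIN = 0.02            # proportional gain, instances per ms of error

def measure_latency(n_instances: int) -> float:
    # Toy plant standing in for a real metrics query.
    return 400.0 / max(n_instances, 1)

def set_instances(n: int) -> None:
    print(f"provisioning {n} instance(s)")   # stand-in for an API call

instances = 2
for step in range(6):                        # one pass per control period
    error = measure_latency(instances) - TARGET_MS
    instances = max(1, round(instances + GAIN * error))
    set_instances(instances)
    # a real loop would sleep out the control period here
```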

    Neurogenesis Deep Learning

    Neural machine learning methods, such as deep neural networks (DNNs), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing, data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability to incorporate new information into an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower- and upper-case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
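    One way to picture the core operation, as a hedged numpy sketch: widen a trained hidden layer with new units whose outgoing weights start at zero, so the network's existing function is preserved exactly until the new units are trained. This illustrates the general idea, not the paper's exact neurogenesis scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def widen(W1, b1, W2, n_new, scale=0.01):
    """Add n_new hidden units between layers with weights W1 (in->hid)
    and W2 (hid->out). New outgoing weights start at zero, so the
    output is unchanged until the new units learn."""
    d_in, _ = W1.shape
    _, d_out = W2.shape
    W1_new = np.hstack([W1, scale * rng.standard_normal((d_in, n_new))])
    b1_new = np.concatenate([b1, np.zeros(n_new)])
    W2_new = np.vstack([W2, np.zeros((n_new, d_out))])
    return W1_new, b1_new, W2_new

# Tiny check: the widened network computes the same output.
x = rng.standard_normal(4)
W1, b1, W2 = rng.standard_normal((4, 8)), rng.standard_normal(8), rng.standard_normal((8, 3))
h = np.maximum(x @ W1 + b1, 0.0)                 # ReLU hidden layer
W1w, b1w, W2w = widen(W1, b1, W2, n_new=2)
hw = np.maximum(x @ W1w + b1w, 0.0)
assert np.allclose(h @ W2, hw @ W2w)             # function preserved
```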

    Exploring discordant low amyloid beta and high neocortical tau positron emission tomography cases

    Introduction: Neocortical 3R4R (3-repeat/4-repeat) tau aggregates are rarely observed in the absence of amyloid beta (Aβ). 18F-MK6240 binds specifically to the 3R4R form of tau that is characteristic of Alzheimer's disease (AD). We report four cases with negative Aβ, but positive tau positron emission tomography (PET) findings. Methods: All Australian Imaging, Biomarkers and Lifestyle study of aging (AIBL) study participants with Aβ (18F-NAV4694) and tau (18F-MK6240) PET scans were included. Centiloid < 25 defined negative Aβ PET (Aβ–). The presence of neocortical tau was defined quantitatively and visually. Results: Aβ– PET was observed in 276 participants. Four of these participants (one cognitively unimpaired [CU], two mild cognitive impairment [MCI], one AD) had tau tracer retention in a pattern consistent with Braak tau stages V to VI. Fluid biomarkers supported a diagnosis of AD. In silico analysis of APP, PSEN1, PSEN2, and MAPT genes did not identify relevant functional mutations. Discussion: Discordant cases were infrequent (1.4% of all Aβ– participants). In these cases, the Aβ PET ligand may not be detecting the Aβ that is present.
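    A small sketch of the discordance screen implied above, using the abstract's Centiloid < 25 amyloid-negativity cut; the tau-positivity threshold and the records are placeholders, since the study defined tau positivity both quantitatively and visually:

```python
from dataclasses import dataclass

CENTILOID_NEG = 25.0   # Abeta-negative threshold from the abstract
TAU_SUVR_POS = 1.25    # assumed tau-positivity cut-off (placeholder)

@dataclass
class Participant:
    pid: str
    centiloid: float   # 18F-NAV4694 amyloid PET, Centiloid units
    tau_suvr: float    # 18F-MK6240 tau PET, meta-ROI SUVR (hypothetical)

cohort = [
    Participant("AIBL-001", 12.0, 0.95),
    Participant("AIBL-002", 18.5, 1.60),   # discordant: Abeta-, tau+
    Participant("AIBL-003", 60.0, 1.80),
]

abeta_neg = [p for p in cohort if p.centiloid < CENTILOID_NEG]
discordant = [p for p in abeta_neg if p.tau_suvr > TAU_SUVR_POS]
print(f"{len(discordant)}/{len(abeta_neg)} Abeta-negative scans are tau-positive")
```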

    Galaxy Star Formation as a Function of Environment in the Early Data Release of the Sloan Digital Sky Survey

    We present in this paper a detailed analysis of the effect of environment on the star formation activity of galaxies within the Early Data Release (EDR) of the Sloan Digital Sky Survey (SDSS). We have used the Hα emission line to derive the star formation rate (SFR) for each galaxy within a volume-limited sample of 8598 galaxies with 0.05 ≤ z ≤ 0.095 and M_r* ≤ -20.45. We find that the SFR of galaxies is strongly correlated with the local (projected) galaxy density, and thus we present here a density-SFR relation that is analogous to the density-morphology relation. The effect of density on the SFR of galaxies is seen in three ways. First, the overall distribution of SFRs is shifted to lower values in dense environments compared with the field population. Second, the effect is most noticeable for the strongly star-forming galaxies (Hα EW > 5 Å) in the 75th percentile of the SFR distribution. Third, there is a break (or characteristic density) in the density-SFR relation at a local galaxy density of ~1 h_75^-2 Mpc^-2. To understand this break further, we have studied the SFR of galaxies as a function of clustercentric radius for 17 clusters and groups objectively selected from the SDSS EDR data. The distribution of SFRs of cluster galaxies begins to change, compared with the field population, at a clustercentric radius of 3-4 virial radii (at >1σ statistical significance), which is consistent with the characteristic break we observe in the density-SFR relation. This effect with clustercentric radius is again most noticeable for the most strongly star-forming galaxies. Our tests suggest that the density-morphology relation alone is unlikely to explain the density-SFR relation we observe. For example, we have used the (inverse) concentration index of SDSS galaxies to classify late-type galaxies and show that the distribution of the star-forming (Hα EW > 5 Å) late-type galaxies is different in dense regions (within 2 virial radii) compared with similar galaxies in the field. However, at present we are unable to make definitive statements about the independence of the density-morphology and density-SFR relations. We have tested our work against potential systematic uncertainties, including stellar absorption, reddening, SDSS survey strategy, SDSS analysis pipelines, and aperture bias. Our observations are in qualitative agreement with recent simulations of hierarchical galaxy formation that predict a decrease in the SFR of galaxies within the virial radius. Our results agree with recent 2dF Galaxy Redshift Survey results and are consistent with previous observations of a decrease in the SFR of galaxies in the cores of distant clusters. Taken together, these works demonstrate that the decrease in the SFR of galaxies in dense environments is a universal phenomenon over a wide range in density (from 0.08 to 10 h_75^-2 Mpc^-2) and redshift (out to z ≃ 0.5).
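    As a hedged illustration of the measurement, the sketch below computes a projected local density from the distance to the 10th-nearest neighbour and tracks the 75th percentile of the SFR distribution across density bins, the statistic in which the relation shows most clearly. The estimator choice and the synthetic inputs are assumptions, not the paper's exact definitions:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(8598, 2))   # projected positions, Mpc-like units
sfr = rng.lognormal(mean=0.0, sigma=1.0, size=len(xy))   # fake SFRs

# Projected density from the 10th-nearest neighbour:
# Sigma_10 = 10 / (pi * d_10^2), galaxies per Mpc^2.
tree = cKDTree(xy)
d10 = tree.query(xy, k=11)[0][:, -1]       # k=11: first "neighbour" is the point itself
sigma10 = 10.0 / (np.pi * d10**2)

# 75th-percentile SFR in quartiles of local density.
edges = np.quantile(sigma10, [0, 0.25, 0.5, 0.75, 1])
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (sigma10 >= lo) & (sigma10 <= hi)
    print(f"density {lo:6.3f}-{hi:6.3f}: SFR p75 = {np.percentile(sfr[sel], 75):.2f}")
```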

    The C4 Clustering Algorithm: Clusters of Galaxies in the Sloan Digital Sky Survey

    We present the C4 Cluster Catalog, a new sample of 748 clusters of galaxies identified in the spectroscopic sample of the Second Data Release (DR2) of the Sloan Digital Sky Survey (SDSS). The C4 cluster-finding algorithm identifies clusters as overdensities in a seven-dimensional position and color space, thus minimizing the projection effects that plagued previous optical cluster selection. The present C4 catalog covers ~2600 square degrees of sky, ranging from groups containing 10 members to massive clusters with over 200 members with redshifts. We provide cluster properties such as sky location, mean redshift, galaxy membership, summed r-band optical luminosity (L_r), velocity dispersion, and measures of substructure. We use new mock galaxy catalogs to investigate the sensitivity of the catalog to the various algorithm parameters, as well as to quantify its purity and completeness. These mock catalogs indicate that the C4 catalog is ~90% complete and 95% pure above M_200 = 1x10^14 solar masses and within 0.03 <= z <= 0.12. The C4 algorithm finds 98% of X-ray identified clusters and 90% of Abell clusters within 0.03 <= z <= 0.12. We show that the L_r of a cluster is a more robust estimator of the halo mass (M_200) than either the line-of-sight velocity dispersion or the richness of the cluster. The final SDSS data will provide ~2500 C4 clusters and will represent one of the largest and most homogeneous samples of local clusters.
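    A rough sketch of the purity/completeness bookkeeping against mock catalogs: a detection counts toward purity if it matches a mock halo, and a halo counts toward completeness if some detection matches it. The matching tolerances and the synthetic catalogs below are invented stand-ins for the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
# Mock haloes: (x, y, z_redshift); detections: most haloes recovered with
# small offsets, plus three spurious entries.
halos = np.column_stack([rng.uniform(0, 50, (40, 2)), rng.uniform(0.03, 0.12, (40, 1))])
dets = np.vstack([
    halos[:36] + rng.normal(0, 0.2, (36, 3)) * [1, 1, 0.001],
    np.column_stack([rng.uniform(0, 50, (3, 2)), rng.uniform(0.03, 0.12, (3, 1))]),
])

R_MATCH, DZ = 1.0, 0.005   # assumed sky-separation and redshift tolerances

def matches(a, b):
    return np.hypot(a[0] - b[0], a[1] - b[1]) < R_MATCH and abs(a[2] - b[2]) < DZ

matched_det = sum(any(matches(d, h) for h in halos) for d in dets)
matched_halo = sum(any(matches(d, h) for d in dets) for h in halos)
print(f"purity       = {matched_det / len(dets):.2f}")
print(f"completeness = {matched_halo / len(halos):.2f}")
```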

    The Effects of Global Change Upon United States Air Quality

    To understand more fully the effects of global change on ambient concentrations of ozone and particulate matter with aerodynamic diameter smaller than 2.5 μm (PM2.5) in the United States (US), we conducted a comprehensive modeling effort to evaluate explicitly the effects of changes in climate, biogenic emissions, land use, and global/regional anthropogenic emissions on ozone and PM2.5 concentrations and composition. Results from the ECHAM5 global climate model driven with the A1B emission scenario from the Intergovernmental Panel on Climate Change (IPCC) were downscaled using the Weather Research and Forecasting (WRF) model to provide regional meteorological fields. We developed air quality simulations using the Community Multiscale Air Quality (CMAQ) chemical transport model for two nested domains with 220 and 36 km horizontal grid cell resolution, for a semi-hemispheric domain and a continental US domain, respectively. The semi-hemispheric domain was used to evaluate the impact of projected global emissions changes on US air quality. WRF meteorological fields were used to calculate current (2000s) and future (2050s) biogenic emissions using the Model of Emissions of Gases and Aerosols from Nature (MEGAN). For the semi-hemispheric CMAQ simulations, present-day global emissions inventories were used and projected to the 2050s based on the IPCC A1B scenario. Regional anthropogenic emissions were obtained from the US Environmental Protection Agency National Emission Inventory 2002 (EPA NEI2002) and projected to the future using the MARKet ALlocation (MARKAL) energy system model, assuming a business-as-usual scenario that extends current-decade emission regulations through 2050. Our results suggest that daily maximum 8 h average ozone (DM8O) concentrations will increase by 2 to 12 parts per billion (ppb) across most of the continental US. The highest increase occurs in the South, Central, and Midwest regions of the US due to increases in temperature, enhanced biogenic emissions, and changes in land use. The model predicts an average increase of 1-6 ppb in DM8O due to the projected increase in global emissions of ozone precursors. The effects of these factors are only partially offset by reductions in DM8O associated with decreasing US anthropogenic emissions. Increases in PM2.5 levels of 4 to 10 μg m−3 in the Northeast, Southeast, Midwest, and South regions are mostly a result of increases in primary anthropogenic particulate matter (PM), enhanced biogenic emissions, and land use changes. Changes in boundary conditions shift the composition but do not alter the overall simulated PM2.5 mass concentrations.
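    For concreteness, a sketch of how the DM8O metric can be computed from an hourly ozone series: take 8-hour running means and report the largest window beginning in each day. The hourly data is synthetic, and CMAQ post-processing conventions (e.g. how windows crossing midnight are assigned) are glossed over:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 5
# Toy diurnal ozone cycle peaking in the afternoon, in ppb.
o3 = 45 + 15 * np.sin(2 * np.pi * ((np.arange(hours) % 24) - 9) / 24) + rng.normal(0, 3, hours)

win8 = np.convolve(o3, np.ones(8) / 8, mode="valid")   # 8-h running means
start_day = np.arange(len(win8)) // 24                 # day each window starts in
dm8o = [win8[start_day == d].max() for d in np.unique(start_day)]
print("DM8O (ppb):", np.round(dm8o, 1))
```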