
    The QED coupling at the Z pole and jet studies of small x dynamics

    In the first half of this thesis, motivated by significant progress in both theoretical and empirical studies of e⁺e⁻ annihilation into hadrons, we perform a re-evaluation of the running of the QED coupling to the Z pole, paying particular attention to the hadronic contribution to the vacuum polarization. We use a comprehensive collection of the presently available data and perturbative QCD expressions. This new determination of the running of the coupling is then used as input into a global fit to electroweak data to estimate a preferred value of the Standard Model Higgs boson mass. An estimate of M_H = 110 GeV is obtained, marginally above the zone excluded by direct searches at LEP2.

    We then investigate the potential for further constraining the hadronic contribution to the vacuum polarization function through mechanisms incorporating analytic continuation from the timelike domain of s > 0. The intrinsic sensitivity of the QCD description to the pole masses forces us to conclude that there is no advantage to be gained in comparison with the direct timelike estimation, although by demanding consistency between the complementary approaches we can both generate an estimate of the charm mass and elucidate low-energy data ambiguities, finding a preferred value of m_c = 1.4 GeV.

    In the latter half of the thesis, we examine forward jet and pion production in electron-proton deep inelastic scattering in the small x region of the HERA collider at DESY. We demonstrate that imposing physically motivated dominant subleading corrections to all orders on the leading logarithmic BFKL equation leads to stable phenomenological predictions. We compare the calculations of differential cross-section distributions incorporating the higher-order effects with the experimental profiles for a single jet, an identified pion, and dijets in the very forward region, and investigate the sensitivity of the calculation to the residual parametric freedom.
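    For orientation, the hadronic contribution referred to above is conventionally tied to e⁺e⁻ data through a dispersion integral. A standard form, quoted here as background rather than as the thesis's exact expressions, is

    \[
    \alpha(M_Z^2) = \frac{\alpha(0)}{1 - \Delta\alpha_{\mathrm{lep}}(M_Z^2) - \Delta\alpha_{\mathrm{had}}(M_Z^2)},
    \qquad
    \Delta\alpha_{\mathrm{had}}(M_Z^2) = -\frac{\alpha(0)\,M_Z^2}{3\pi}\,\mathrm{P}\!\int_{4m_\pi^2}^{\infty}\frac{R(s)}{s\,(s - M_Z^2)}\,\mathrm{d}s,
    \]

    where R(s) is the measured ratio σ(e⁺e⁻ → hadrons)/σ(e⁺e⁻ → μ⁺μ⁻) and P denotes the principal value; perturbative QCD expressions of the kind mentioned above replace R(s) where data are sparse.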

    Population based spatio-temporal probabilistic modelling of fMRI data

    High-dimensional functional magnetic resonance imaging (fMRI) data are characterized by complex spatial and temporal patterns related to neural activation. Mixture-based Bayesian spatio-temporal modelling is able to extract spatio-temporal components representing distinct haemodynamic response and activation patterns. A recent development of this approach to fMRI data analysis is the so-called spatially regularized mixture model of hidden process models (SMM-HPM). SMM-HPM can be used to reduce the four-dimensional fMRI data of a pre-determined region of interest (ROI) to a small number of spatio-temporal prototypes that sufficiently represent the spatio-temporal features of the underlying neural activation. Summary statistics derived from these features can be interpreted as quantifications of (1) the spatial extent of sub-ROI activation patterns, (2) how fast the brain responds to external stimuli, and (3) the heterogeneity within single ROIs. This thesis aims to extend the single-subject SMM-HPM to a multi-subject SMM-HPM so that such features can be extracted at group level, enabling more robust conclusions to be drawn.
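    As a rough illustration of the mixture idea (not the SMM-HPM itself, which adds spatial regularization and hidden-process structure), the following sketch clusters the voxel time courses of one ROI into a few prototypes; the array shapes and library choice are assumptions made for the example.

    ```python
    # Minimal sketch: cluster voxel time courses of an ROI into a few
    # spatio-temporal prototypes with a Gaussian mixture.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    n_voxels, n_timepoints, n_prototypes = 500, 120, 3   # hypothetical sizes

    # Placeholder for a (voxels x time) matrix extracted from 4-D fMRI data.
    roi_data = rng.standard_normal((n_voxels, n_timepoints))

    gmm = GaussianMixture(n_components=n_prototypes, covariance_type="diag",
                          random_state=0).fit(roi_data)
    labels = gmm.predict(roi_data)

    prototypes = gmm.means_                          # prototype time courses
    spatial_extent = np.bincount(labels) / n_voxels  # share of ROI per prototype
    time_to_peak = prototypes.argmax(axis=1)         # crude "how fast" summary
    ```

    The three summary quantities computed at the end loosely mirror items (1) and (2) in the abstract's list of interpretable statistics.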

    Consumer demand and labor supply : goods, monetary assets, and time

    This book, although widely available in libraries, is out of print. The author now makes it openly available in PDF format. Since scanners do not reproduce mathematical formulas perfectly, the printed book may be preferred.

    Validation of structural heterogeneity in cryo-EM data by cluster ensembles

    Advisors: Fernando José Von Zuben, Rodrigo Villares Portugal. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.

    Single Particle Analysis is a technique that allows the study of the three-dimensional structure of proteins and other macromolecular assemblies of biological interest. Its primary data consist of transmission electron microscopy images of multiple copies of the molecule in random orientations. Such images are very noisy due to the low electron dose employed. Reconstructions of the macromolecule can be obtained by averaging many images of particles in similar orientations and estimating their relative angles. However, heterogeneous conformational states often co-exist in the sample, because the molecular complexes can be flexible and may also interact with other particles. Heterogeneity poses a challenge to the reconstruction of reliable 3D models and degrades their resolution. Among the most popular algorithms used for structural classification are k-means clustering, hierarchical clustering, self-organizing maps, and maximum-likelihood estimators. Such approaches are usually interlaced with the reconstruction of the 3D models. Nevertheless, recent works indicate that it is possible to infer information about the structure of the molecules directly from the dataset of 2D projections. Among these findings is the relationship between structural variability and manifolds in a multidimensional feature space. This dissertation investigates whether an ensemble of unsupervised classification algorithms is able to separate these "conformational manifolds". Ensemble or "consensus" methods tend to provide more accurate classification and may achieve satisfactory performance across a wide range of datasets when compared with individual algorithms. We investigate the behavior of six clustering algorithms, both individually and combined in ensembles, for the task of structural heterogeneity classification. The approach was tested on synthetic and real datasets containing a mixture of images from the Mm-cpn chaperonin in the "open" and "closed" states. It is shown that cluster ensembles can provide useful information in validating the structural partitionings independently of 3D reconstruction methods.
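    The ensemble ("consensus") step can be illustrated with a co-association matrix, a common construction in the cluster-ensemble literature. The sketch below is a generic example with assumed feature shapes, not the dissertation's pipeline, and uses k-means runs as stand-ins for the six algorithms studied.

    ```python
    # Minimal consensus-clustering sketch: combine several base clusterings
    # through a co-association matrix, then cut it hierarchically.
    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering

    rng = np.random.default_rng(1)
    features = rng.standard_normal((200, 16))  # placeholder 2-D projection features

    # Base partitions from k-means runs with varied seeds/k (illustrative).
    partitions = [KMeans(n_clusters=k, n_init=10, random_state=s)
                      .fit_predict(features)
                  for s, k in [(0, 2), (1, 2), (2, 3), (3, 3)]]

    # Co-association: fraction of partitions placing each pair together.
    n = features.shape[0]
    coassoc = np.zeros((n, n))
    for labels in partitions:
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= len(partitions)

    # Consensus partition: cluster the dissimilarity 1 - coassoc
    # (metric="precomputed" requires scikit-learn >= 1.2).
    consensus = AgglomerativeClustering(n_clusters=2, metric="precomputed",
                                        linkage="average"
                                        ).fit_predict(1.0 - coassoc)
    ```

    Pairs of images grouped together by most base algorithms accumulate high co-association values, so the final cut tends to recover partitions (e.g. "open" versus "closed") that are stable across algorithms rather than artifacts of any single one.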

    Auditory group theory with applications to statistical basis methods for structured audio

    Thesis (Ph.D.), Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1998. Includes bibliographical references (p. 161-172). Michael Anthony Casey, Ph.D.

    Advanced and novel modeling techniques for simulation, optimization and monitoring of chemical engineering tasks with refinery and petrochemical unit applications

    Engineers predict, optimize, and monitor processes to improve safety and profitability. Models automate these tasks and determine precise solutions. This research studies and applies advanced and novel modeling techniques to automate and aid engineering decision-making. Advancements in computational ability have improved modeling software's ability to mimic industrial problems. Simulations are increasingly used to explore new operating regimes and design new processes. In this work, we present a methodology for creating structured mathematical models, useful tips to simplify models, and a novel repair method that improves convergence by populating quality initial conditions for the simulation's solver. A crude oil refinery application is presented, including simulation, simplification tips, and the implementation of the repair strategy. A crude oil scheduling problem is also presented, which can be integrated with production unit models.

    Recently, stochastic global optimization (SGO) has shown success in finding global optima for complex nonlinear processes. When performing SGO on simulations, model convergence can become an issue. The computational load can be decreased by (1) simplifying the model and (2) finding a synergy between the model solver's repair strategy and the optimization routine by using the formulated initial conditions as points to perturb the neighborhood being searched. Here, a simplifying technique for merging the crude oil scheduling problem with the vertically integrated online refinery production optimization is demonstrated. To optimize refinery production, a stochastic global optimization technique is employed.

    Process monitoring has been vastly enhanced through the data-driven modeling technique Principal Component Analysis (PCA). As opposed to first-principles models, which make assumptions about the structure of the model describing the process, data-driven techniques make no assumptions about the underlying relationships; they search for a projection that casts the data into a space that is easier to analyze. Feature extraction techniques, commonly dimensionality reduction techniques, have been explored intensively to better capture nonlinear relationships, extending data-driven process monitoring to nonlinear processes. Here, we employ a novel nonlinear process-monitoring scheme that utilizes Self-Organizing Maps. The novel techniques and implementation methodology are applied to the publicly studied Tennessee Eastman Process and an industrial polymerization unit.
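    As background for the monitoring discussion, a PCA-based scheme typically tracks Hotelling's T² in the retained subspace and the squared prediction error (SPE, also called Q) in the residual subspace. The sketch below is a generic illustration with made-up data, not the dissertation's implementation (which additionally explores Self-Organizing Maps for the nonlinear case).

    ```python
    # Hedged sketch of PCA-based process monitoring with T^2 and SPE statistics.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    X_train = rng.standard_normal((1000, 20))  # normal-operation data (placeholder)
    X_new = rng.standard_normal((50, 20))      # new observations to monitor

    scaler = StandardScaler().fit(X_train)
    pca = PCA(n_components=5).fit(scaler.transform(X_train))

    def monitoring_stats(X):
        Z = scaler.transform(X)
        scores = pca.transform(Z)
        t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)  # Hotelling T^2
        residual = Z - pca.inverse_transform(scores)
        spe = np.sum(residual**2, axis=1)                         # SPE / Q
        return t2, spe

    t2, spe = monitoring_stats(X_new)
    # In practice, alarms fire when t2 or spe exceed control limits estimated
    # from the training data (e.g., empirical percentiles or F-based limits).
    ```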

    Receptive fields optimization in deep learning for enhanced interpretability, diversity, and resource efficiency.

    In both supervised and unsupervised learning settings, deep neural networks (DNNs) are known to perform hierarchical and discriminative representation of data. They are capable of automatically extracting an excellent hierarchy of features from raw data without the need for manual feature engineering. Over the past few years, the general trend has been that DNNs have grown deeper and larger, amounting to a huge number of parameters and a highly nonlinear cascade of features, thus improving the flexibility and accuracy of the resulting models. To account for the scale, diversity, and difficulty of the data DNNs learn from, architectural complexity and an excessive number of weights are often deliberately built into their design. This flexibility and performance usually come with high computational and memory demands, both during training and inference. In addition, insight into the mappings DNN models perform, and the human ability to understand them, remain very limited. This dissertation addresses some of these limitations by balancing three conflicting objectives: computational/memory demands, interpretability, and accuracy.

    The dissertation first introduces some unsupervised feature learning methods in the broader context of dictionary learning. It also sets the tone for deep autoencoder learning and constraints on data representations aimed at removing some of the aforementioned bottlenecks, such as improving the feature interpretability of deep learning models through nonnegativity constraints on receptive fields. In addition, the two main classes of solution to the drawbacks associated with overparameterization/over-complete representation in deep learning models are presented. Subsequently, two novel methods, one for each solution class, are presented to address the problems resulting from the over-complete representation exhibited by most deep learning models. The first method achieves inference-cost-efficient models via the elimination of redundant features with negligible deterioration of prediction accuracy, which is important for deploying deep learning models on resource-limited portable devices. The second method aims at diversifying the features of DNNs in the learning phase to improve their performance without undermining their size and capacity. Lastly, feature diversification is considered to stabilize adversarial learning, and extensive experimental outcomes show that these methods have the potential to advance the current state of the art on different learning tasks and benchmark datasets.
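    One ingredient named above, nonnegativity constraints on receptive fields, can be sketched as a simple projected-gradient step: after each optimizer update, the encoder weights are clamped to be nonnegative. The sizes, data, and training loop below are illustrative assumptions, not the dissertation's setup.

    ```python
    # Minimal sketch of an autoencoder with nonnegativity-constrained
    # receptive fields (encoder weights projected onto >= 0 after each step).
    import torch
    import torch.nn as nn

    autoencoder = nn.Sequential(
        nn.Linear(784, 64), nn.ReLU(),   # encoder: receptive fields in .weight
        nn.Linear(64, 784),              # decoder
    )
    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    x = torch.rand(32, 784)              # placeholder batch of inputs in [0, 1]
    for _ in range(100):
        optimizer.zero_grad()
        loss = loss_fn(autoencoder(x), x)
        loss.backward()
        optimizer.step()
        with torch.no_grad():            # projection: keep receptive fields >= 0
            autoencoder[0].weight.clamp_(min=0)
    ```

    Nonnegative receptive fields tend to be parts-based and hence easier to interpret, since each feature can only add evidence rather than cancel contributions from other features.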

    Galaxies and clusters as probes of the large-scale structure of the Universe

    The large-scale structure of the Universe is delineated by the spatial distributions of galaxies and clusters of galaxies. This thesis describes three projects concerned with the use of galaxies and clusters as cosmological probes, following the presentation of necessary background material in Chapter 1.

    Chapter 2 is concerned with spatial correlations of clusters of galaxies. After comprehensively reviewing previous work addressing this topic from both observational and theoretical points of view, we present, test and apply an important new method for computing theoretical cluster correlations. Our method combines the theory of peaks in Gaussian random fields with the evolution of the cosmological density field by the Zeldovich Approximation: this is the first analytic calculation of the cluster correlation function to take account of the nonlinear evolution of the cosmological density field on cluster scales. We find good agreement between our results and those from recent numerical simulations, except for the richest cluster samples, for which our method yields stronger clustering. Comparison of our predicted correlations with those observed in recent optical cluster samples reveals that the once-popular Einstein-de Sitter Cold Dark Matter (CDM) model lacks the large-scale power required to match the observed clustering. We also apply our method in the first theoretical study of the spatial correlations of ROSAT clusters. Our results here favour cosmogonies with more large-scale power than CDM, in accordance with those we obtained from optical cluster samples.

    The projects in Chapters 3 and 4 are concerned with galaxy clustering. In Chapter 3 we consider the redshift-space clustering of samples of IRAS galaxies selected on the basis of their dust emission temperature, having argued that there might be a relation between the temperature of a galaxy and the density of the environment in which it is located. We find, however, no conclusive evidence for a difference in the clustering strength of the “warm” and “cool” samples in redshift space. This validates the use of redshift samples of IRAS galaxies as tracers of large-scale structure, as well as constraining models of merger-induced star formation.

    In Chapter 4 we show, through the novel analysis of high-resolution numerical simulation data, how the observed power spectra of optical and IRAS galaxy clustering constrain the initial power spectrum of density fluctuations and the relation between the galaxy distribution and the underlying density field. Motivated by recent N-body/hydrodynamic simulations, we employ a biasing prescription in which the local galaxy number density at redshift zero is determined by the present local mass density. We determine which combinations of initial power spectrum and biasing prescription are consistent with the observed clustering of optical galaxies and use the observed relation between the distributions of optical and IRAS galaxies to predict corresponding redshift-space IRAS power spectra. These are compared with observations, as are the pairwise velocity dispersions predicted by the models. In this way, building in part on our results from Chapter 3, we are able to construct a coherent picture of galaxy clustering which is in accord with our results on cluster correlations from Chapter 2, showing that galaxies and clusters are consistent probes of the large-scale structure of the Universe.
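    For context, the two standard ingredients combined in Chapter 2 can be written schematically (generic forms, not the thesis's exact expressions) as

    \[
    \mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\boldsymbol{\Psi}(\mathbf{q}),
    \qquad
    \xi_{\mathrm{cc}}(r) \simeq b^2\,\xi(r),
    \]

    where the Zeldovich Approximation maps initial (Lagrangian) positions q to evolved positions x through the linear growth factor D(t) and displacement field Ψ, and the peaks formalism relates the cluster correlation function ξ_cc to the underlying matter correlation function ξ through a bias factor b set by the peak height threshold.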

    Modelling a labour market: the case of engineering craftsmen

    The purpose of this thesis is to contribute to the development of economic models of labour markets. The case of engineering craftsmen is explored with special reference to the recruitment of apprentices. The research methodology involves a synthesis of the evidence available from studies of local labour markets, management decision-making in relation to manpower, and occupational choice, as well as from econometric investigations of different aspects of labour markets reflected in aggregate statistics. A population or manpower accounting system is proposed as a useful framework for statistical analysis and modelling of manpower stocks and flows. The information available about the engineering craft group is viewed through this device and, because of its more promising situation as regards data and its importance in terms of active manpower policy in the United Kingdom, the apprentice recruitment flow is selected for econometric analysis in the central part of the thesis. The relationship between modelling a labour market and evaluating national training policies is then considered. The thesis concludes by recording the main empirical findings established for engineering craftsmen, summarising the model which has begun to evolve for this labour market, discussing areas for further research and, finally, making some general points on modelling labour markets, drawn from the study of engineering craftsmen.
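    For readers unfamiliar with manpower accounting, the core of such a system is a stock-flow identity; in an illustrative notation (not the thesis's own),

    \[
    S_{t+1} = S_t + R_t + T_t - W_t,
    \]

    where S_t is the stock of craftsmen in period t, R_t the inflow of newly qualified apprentices, T_t net transfers in from other occupations or regions, and W_t wastage through retirement, death, and occupational leavers.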