
    Single particle 2D Electron crystallography for membrane protein structure determination

    Proteins embedded in or attached to the cellular membrane perform crucial biological functions. Despite this importance, they remain among the most challenging targets of structural biology. Dedicated methods for membrane protein structure determination have been developed for decades, but with only partial success compared to soluble proteins. One of these methods is 2D electron crystallography, in which the proteins are periodically arranged within a lipid bilayer. Using transmission electron microscopy to acquire projection images of samples containing such 2D crystals, which are embedded in a thin layer of vitreous ice for radiation protection (cryo-EM), computer algorithms can generate a 3D reconstruction of the protein. Unfortunately, in nearly every case, the 2D crystals are not flat and ordered enough to yield high-resolution reconstructions. Single particle analysis, on the other hand, is a technique that aligns projections of proteins isolated in solution to obtain a 3D reconstruction, with a high success rate in terms of high-resolution structures. In this thesis, we couple 2D crystal data processing with single particle analysis algorithms to perform a local correction of crystal distortions. We show that this approach not only allows reconstructions of much higher resolution than expected from the diffraction patterns obtained, but also reveals the existence of conformational heterogeneity within the 2D crystals. This structural variability can be linked to protein function, providing novel mechanistic insights and an explanation for why 2D crystals generally do not diffract to high resolution. We present the computational methods that enable this hybrid approach, as well as other tools that aid several steps of cryo-EM data processing, from storage to postprocessing.
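The local distortion correction described above amounts to measuring how far each unit cell has drifted from its ideal lattice position and compensating before averaging. A minimal, hypothetical sketch of the core operation — estimating a patch's translational offset against a reference by FFT cross-correlation — is shown below; this is a toy stand-in, not the thesis's actual algorithm, and all names and data here are invented for illustration.

```python
import numpy as np

def local_shift(patch, reference):
    """Estimate the translational offset of one unit-cell patch relative to
    a reference by circular cross-correlation; the peak gives the shift."""
    cc = np.fft.ifft2(np.fft.fft2(patch) * np.conj(np.fft.fft2(reference))).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # Wrap peak coordinates into the signed range [-N/2, N/2).
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, cc.shape))

# Toy demonstration: a Gaussian blob (standing in for a unit-cell density)
# shifted by (3, -2) pixels, which the estimator should recover.
n = 32
y, x = np.mgrid[:n, :n]
ref = np.exp(-((y - n // 2) ** 2 + (x - n // 2) ** 2) / 20.0)
patch = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
print(local_shift(patch, ref))  # → (3, -2)
```

In a real workflow the estimated per-cell shifts would be applied (or the cells extracted at corrected positions) before the single-particle-style alignment and averaging.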

    Validation of structural heterogeneity in cryo-EM data by clustering ensembles

    Advisors: Fernando José Von Zuben, Rodrigo Villares Portugal. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Single Particle Analysis is a technique that allows the study of the three-dimensional structure of proteins and other macromolecular assemblies of biological interest. Its primary data consist of transmission electron microscopy images of multiple copies of the molecule in random orientations. Such images are very noisy due to the low electron dose employed. A reconstruction of the macromolecule can be obtained by averaging many images of particles in similar orientations and estimating their relative angles. However, heterogeneous conformational states often co-exist in the sample, because the molecular complexes can be flexible and may also interact with other particles. Heterogeneity poses a challenge to the reconstruction of reliable 3D models and degrades their resolution. Among the most popular algorithms used for structural classification are k-means clustering, hierarchical clustering, self-organizing maps and maximum-likelihood estimators. Such approaches are usually interlaced with the reconstruction of the 3D models. Nevertheless, recent works indicate that it is possible to infer information about the structure of the molecules directly from the dataset of 2D projections. Among these findings is the relationship between structural variability and manifolds in a multidimensional feature space. This dissertation investigates whether an ensemble of unsupervised classification algorithms is able to separate these "conformational manifolds". Ensemble or "consensus" methods tend to provide more accurate classification and may achieve satisfactory performance across a wide range of datasets when compared with individual algorithms. We investigate the behavior of six clustering algorithms, both individually and combined in ensembles, for the task of structural heterogeneity classification. The approach was tested on synthetic and real datasets containing a mixture of images from the Mm-cpn chaperonin in the "open" and "closed" states. It is shown that cluster ensembles can provide useful information in validating structural partitionings independently of 3D reconstruction methods.

    Predictive performance of value-at-risk models: Covid-19 “Pandemonium”

    Master's in Mathematical Finance. Nowadays, Value-at-Risk (VaR) models play a crucial role in financial markets, being among the risk management tools most widely used by financial analysts to estimate market risk. In this thesis, three approaches to VaR estimation, namely Historical Simulation, GARCH(1,1) and Dynamic EVT-POT, were applied. We estimate VaR models for the USA, UK, France, Germany, Italy, Japan, China, Spain and Portugal over the period from 1 January 2007 to 31 August 2020, at a 99% confidence level. These estimates are then backtested, making it possible to identify when the majority of the exceedances occurred and whether they fell in "normal" or crisis periods (e.g. the COVID-19 pandemic). Furthermore, we study whether there is any relation between the number of deaths in each country and the movement of returns or the volatility of stock indices. The model that proved most accurate in estimating crisis periods is the Dynamic EVT-POT model, while HS is the least accurate. Most of the exceedances, caused by outlier observations, occurred during the years 2008, 2011, 2013, 2018 and 2020, which are known crisis periods. We also conclude that the movement of stock indices is influenced by the increase in COVID-19-related deaths, showing that there is some sort of relation between the two phenomena (as the number of deaths increases, the markets become more volatile).
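Of the three approaches mentioned, Historical Simulation is the simplest: the VaR at the 99% level is the empirical 1% quantile of a rolling window of past returns, and the backtest counts days on which the realised loss exceeds it. The sketch below illustrates this on synthetic returns with a calm regime followed by a volatile "crisis"; the data, window length, and regimes are invented for illustration and are not the thesis's actual series.

```python
import numpy as np

def hs_var(returns, window=250, level=0.99):
    """Rolling historical-simulation VaR: the empirical (1 - level)
    quantile of the previous `window` returns, reported as a positive loss."""
    return np.array([-np.quantile(returns[t - window:t], 1.0 - level)
                     for t in range(window, len(returns))])

rng = np.random.default_rng(1)
# Hypothetical daily returns: 1000 calm days, then 250 high-volatility days.
r = np.concatenate([rng.normal(0, 0.01, 1000), rng.normal(0, 0.03, 250)])

var = hs_var(r)                 # VaR forecasts for days 250..1249
losses = -r[250:]               # realised losses on the same days
exceedances = losses > var      # backtest: days the loss exceeds VaR

# Unconditional coverage should be near 1% in the calm regime and much
# higher once the crisis begins, since the window still reflects calm data.
print(exceedances[:750].mean(), exceedances[750:].mean())
```

This clustering of exceedances in the volatile regime is exactly the weakness of HS that the abstract reports, and the motivation for conditional models such as GARCH(1,1) and Dynamic EVT-POT.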

    Membrane and soluble protein structure determination by cryo-TEM

    Proteins are biological polymers ubiquitous throughout all forms of life. Essential processes such as ion conduction, enzymatic catalysis, and signal detection and transduction rely on proteins. Structural aspects of the cell, such as the cell shape or the compact packing of DNA in a chromosome, also require proteins. While DNA carries the genetic information, which ultimately defines the response of a cell to any given event, virtually all processes in the cell depend on proteins to occur. In this thesis, cryogenic electron microscopy and single particle analysis workflows are used to determine electron density maps of a set of both soluble and membrane proteins. The obtained structural information is used to elucidate biological processes of the analysed proteins, thereby linking structure to function.

    Financialisation and the slowdown of labour productivity in Portugal: A post-Keynesian approach

    The aim of this paper is to conduct a time series econometric analysis in order to empirically evaluate the role of financialisation in the slowdown of labour productivity in Portugal over the period from 1980 to 2017. During that time, the Portuguese economy faced a financialisation phenomenon due to the European integration process and the corresponding imposition of a strong wave of privatisation, liberalisation and deregulation of the Portuguese financial system. At the same time, Portuguese labour productivity exhibited a sustained downward trend, which seems to contradict the well-entrenched mainstream hypothesis on the finance–productivity nexus. Based on the post-Keynesian literature, we identify four channels through which the phenomenon of financialisation has impaired labour productivity: weak economic performance, the fall in labour's share of income, the rise of inequality in personal income and an intensification of the degree of financialisation. The paper finds that lagged labour productivity, economic performance and the labour income share positively impact labour productivity in Portugal, while personal income inequality and the degree of financialisation impact it negatively. The paper also finds that the main triggers for the slowdown of labour productivity in Portugal over the last decades are the degree of financialisation and personal income inequality.

    Preventing Atomicity Violations with Contracts

    Software developers are expected to protect concurrent accesses to shared regions of memory with some mutual exclusion primitive that ensures atomicity for a sequence of program statements. This approach prevents data races but may fail to provide all necessary correctness properties. The composition of correlated atomic operations without further synchronization may cause atomicity violations. Atomicity violations may be avoided by grouping the correlated atomic regions in a single larger atomic scope. Concurrent programs are particularly prone to atomicity violations when they use services provided by third-party packages or modules, since the programmer may fail to identify which services are correlated. In this paper we propose to use contracts for concurrency, where the developer of a module writes a set of contract terms that specify which methods are correlated and must be executed in the same atomic scope. These contracts are then used to verify the correctness of the main program with respect to the usage of the module(s). If a contract is well defined and complete, and the main program respects it, then the program is safe from atomicity violations with respect to that module. We also propose a static analysis based methodology to verify contracts for concurrency, which we applied to some real-world software packages. The bug we found in Tomcat 6.0 was immediately acknowledged and corrected by its development team.
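The classic instance of the problem described above is a check-then-act sequence on a thread-safe module: each call is atomic on its own, yet the pair is not. The Python sketch below is a minimal, hypothetical illustration of such a contract term and of the fix of enclosing the correlated calls in one larger atomic scope; it is not the paper's static analysis, and all names (SafeMap, atomic) are invented.

```python
import threading

class SafeMap:
    """A map whose individual methods are atomic (each takes the lock), but
    whose clients can still suffer atomicity violations by composing them."""
    def __init__(self):
        self._lock = threading.RLock()  # reentrant, so a client scope can nest calls
        self._data = {}

    def atomic(self):
        """Expose the module's lock so clients can build a larger atomic scope."""
        return self._lock

    def contains(self, k):
        with self._lock:
            return k in self._data

    def get(self, k):
        with self._lock:
            return self._data[k]

    def put(self, k, v):
        with self._lock:
            self._data[k] = v

    def remove(self, k):
        with self._lock:
            self._data.pop(k, None)

# A contract term in the spirit of the paper might state that
# contains(k); get(k) are correlated and must run in one atomic scope,
# because another thread may remove(k) between the two calls.

m = SafeMap()
m.put("x", 1)

def fragile(k):
    # Violates the contract: two separate atomic regions.
    if m.contains(k):
        return m.get(k)   # may raise KeyError under concurrency

def safe(k):
    # Respects the contract: one larger atomic scope around both calls.
    with m.atomic():
        if m.contains(k):
            return m.get(k)

print(safe("x"))  # → 1
```

The proposed static analysis would flag call sites shaped like `fragile` wherever the module's contract declares the two methods correlated.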

    Inductive sensor for vehicle guidance

    This article presents an inductive sensor designed for vehicle guidance. The work was developed within the Electronics Project course of the degree in Telecommunications and Electronics Systems Engineering. This paper describes the context, design and implementation of the sensor.