63 research outputs found

    3D Protein structure prediction with genetic tabu search algorithm

    Get PDF
    Abstract
    Background: Protein structure prediction (PSP) has important applications in fields such as drug design and disease prediction. Two issues are central to PSP: the design of the structure model and the design of the optimization technique. Because realistic protein structures are complex, the structure model adopted in this paper is a simplified one, the off-lattice AB model. Once the structure model is fixed, an optimization technique is needed to search for the best conformation of a protein sequence under that model. However, PSP is an NP-hard problem even for the simplest models, and many algorithms have been developed for this global optimization problem. In this paper, a hybrid algorithm combining a genetic algorithm (GA) and tabu search (TS), called GATS, is developed for this task.
    Results: Several improved strategies are developed for the proposed genetic tabu search algorithm, and their combined use improves its efficiency: tabu search introduced into the crossover and mutation operators improves the local search capability, a variable population size maintains the diversity of the population, and ranking selection increases the probability that an individual with a low energy value enters the next generation. Experiments are performed with Fibonacci sequences and real protein sequences, and the lowest energies obtained by the proposed GATS algorithm are lower than those obtained by previous methods.
    Conclusions: The hybrid algorithm combines the advantages of the genetic algorithm and tabu search. It exploits the multiple search points of the genetic algorithm and overcomes the poor hill-climbing capability of the conventional genetic algorithm by using the flexible memory functions of TS. Compared with previous algorithms, GATS performs better in global optimization and predicts 3D protein structure more effectively.
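
    The abstract describes the hybrid only at a high level. The following is a minimal sketch, not the paper's implementation, of a GA loop with a tabu-search refinement step inside variation, rank-based selection, and a variable population size; the energy function, neighbour move, crossover and all parameters are illustrative placeholders standing in for the off-lattice AB model, and solutions are assumed to be plain Python lists or tuples.

```python
import random

def tabu_search(solution, energy, neighbour, iters=50, tabu_len=10):
    """Refine one individual; forbid recently visited solutions."""
    best = current = solution
    tabu = []
    for _ in range(iters):
        candidates = [neighbour(current) for _ in range(20)]
        candidates = [c for c in candidates if c not in tabu] or candidates
        current = min(candidates, key=energy)
        tabu.append(current)
        tabu = tabu[-tabu_len:]
        if energy(current) < energy(best):
            best = current
    return best

def gats(init, energy, neighbour, crossover, generations=100,
         pop_min=20, pop_max=60):
    pop = [init() for _ in range(pop_max)]
    for g in range(generations):
        pop.sort(key=energy)
        # variable population size: shrink from pop_max towards pop_min
        size = pop_min + (pop_max - pop_min) * (generations - g) // generations
        # rank-based selection: lower energy (better rank) -> higher weight
        weights = [max(size - r, 1) for r in range(len(pop))]
        parents = random.choices(pop, weights=weights, k=size)
        children = []
        for i in range(0, size - 1, 2):
            child = crossover(parents[i], parents[i + 1])
            children.append(tabu_search(child, energy, neighbour))  # TS inside variation
        pop = sorted(pop + children, key=energy)[:size]             # elitist survival
    return min(pop, key=energy)
```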

    Estimation of aquifer properties using electrical resistivity data in parts of Nsukka L.G.A., Enugu State

    Get PDF
    Communications in Physical Sciences Vol. 2, No. 1, 1-13 (2017). The study investigated the variation of hydrodynamic parameters in parts of Nsukka Local Government Area, Enugu State, using the vertical electrical sounding (VES) technique with the Schlumberger electrode configuration. The measured parameters were used to estimate other parameters such as hydraulic conductivity, transmissivity, porosity, formation factor and tortuosity. The third layer was delineated as the aquiferous layer, with a relatively large thickness compared to the overlying layers. The results show a wide variation of these parameters: hydraulic conductivity ranges from 0.0989 to 0.5079 m/day with an average of 0.3025 m/day; transmissivity ranges between 6.5779 and 57.9546 m²/day, with an average value of 18.7491 m²/day; porosity ranges from 27.6863 to 29.3226%, with an average of 28.6524%. Formation factor and tortuosity range from 0.00043 to 0.00049 and from 0.1129 to 0.1167, respectively. Their variation is clearly displayed on the contour maps and is attributed to changes in subsurface properties such as grain sizes and pore shapes and sizes. The results of this study will be a useful guide in the exploration and abstraction of groundwater in the study area.
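
    The abstract does not give the exact relations used. As an illustration of this kind of post-processing of VES-interpreted layers, the sketch below derives formation factor, porosity, tortuosity and transmissivity from aquifer resistivity and thickness using widely used relations (Archie's law, τ² = F·φ, T = K·b), which may differ from those of the paper; the pore-water resistivity, Archie constants and example values are all assumptions.

```python
import math

def aquifer_properties(rho_aq, thickness_m, hydraulic_conductivity_m_per_day,
                       rho_water=20.0, a=1.0, m=2.0):
    """rho_aq: aquifer resistivity (ohm-m); thickness_m: aquifer thickness (m);
    rho_water: assumed pore-water resistivity (ohm-m); a, m: Archie constants."""
    F = a * rho_aq / rho_water                   # Archie formation factor F = a*rho_aq/rho_w
    porosity = (a / F) ** (1.0 / m)              # invert Archie's law F = a * phi**(-m)
    tortuosity = math.sqrt(F * porosity)         # one common definition: tau**2 = F * phi
    transmissivity = hydraulic_conductivity_m_per_day * thickness_m   # T = K * b
    return {"formation_factor": F, "porosity": porosity,
            "tortuosity": tortuosity, "transmissivity_m2_per_day": transmissivity}

# Example with made-up values: a 400 ohm-m, 35 m thick aquifer with K = 0.3 m/day
print(aquifer_properties(400.0, 35.0, 0.3))
```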

    Optimization of bio-inspired algorithms on heterogeneous CPU-GPU systems

    Get PDF
    The scientific challenges of the 21st century require the processing and analysis of an enormous amount of information in what is known as the Big Data era. Future advances in different sectors of society, such as medicine, engineering or efficient energy production, to mention just a few examples, depend on the continued growth of the computational power of modern computers. However, this computational growth, traditionally guided by the well-known "Moore's Law", has been compromised in recent decades mainly because of the physical limitations of silicon. Computer architects have developed numerous contributions (multicore, manycore, heterogeneity, dark silicon, etc.) to try to mitigate this computational slowdown, leaving in the background other factors that are fundamental to problem solving, such as programmability, reliability and precision. Software development, however, has followed the opposite path, where ease of programming through abstraction models, automatic code debugging to avoid undesired effects, and rapid deployment to production are key to the economic viability and efficiency of the digital business sector. This path often compromises the performance of the applications themselves, a consequence that is unacceptable in a scientific context. The starting hypothesis of this doctoral thesis is to reduce the distance between the hardware and software fields in order to contribute to solving the scientific challenges of the 21st century. Hardware development is marked by the consolidation of processors oriented towards massive data parallelism, mainly GPUs (Graphics Processing Units) and vector processors, which are combined to build heterogeneous processors or computers (HSA). In particular, we focus on the use of GPUs to accelerate scientific applications. GPUs have become one of the most promising platforms for implementing algorithms that simulate complex scientific problems. Since their inception, the trajectory and history of graphics cards have been driven by the world of video games, reaching very high levels of popularity as ever greater realism was achieved in that area. An important milestone occurred in 2006, when NVIDIA (the leading graphics card manufacturer) carved out a place in high-performance computing and research with the development of CUDA (Compute Unified Device Architecture). This architecture makes it possible to use the GPU for scientific applications in a versatile way. Despite the importance of the GPU on its own, a notable improvement can be obtained by using it together with the CPU, which leads us to the heterogeneous systems named in the title of this work. It is in heterogeneous CPU-GPU environments where performance reaches its maximum: not only do GPUs support the researchers' scientific computing, but it is in a heterogeneous system combining different types of processors that the highest performance can be achieved. In this environment the processors do not compete with each other; on the contrary, each architecture specializes in the part of the work where it can best exploit its capabilities.
    The highest performance is achieved in heterogeneous clusters, where multiple interconnected nodes may differ not only in their CPU-GPU architectures but also in the computational capabilities within those architectures. With this type of scenario in mind, new challenges arise in making the chosen software run as efficiently as possible and obtain the best possible results. These new platforms require a redesign of the software to take full advantage of the available computational resources. Existing algorithms must therefore be redesigned and optimized so that the contributions in this field remain relevant, and we must find algorithms that, by their very nature, are good candidates for optimal execution on such high-performance platforms. At this point we find a family of algorithms called bio-inspired algorithms, which use collective intelligence as the core of their problem solving. Precisely this collective intelligence makes them perfect candidates for implementation on these platforms under the parallel computing paradigm, since solutions can be built from individuals that, through some form of communication, jointly construct a common solution. This thesis focuses in particular on one of these bio-inspired algorithms, a metaheuristic within the Soft Computing paradigm: Ant Colony Optimization (ACO). The algorithm is contextualized, studied and analyzed; its most critical parts are identified and redesigned for optimization and parallelization while maintaining or improving the quality of its solutions. The resulting alternatives are then implemented and tested on several high-performance platforms. The knowledge acquired in this theoretical and practical study is applied to real cases, in particular to protein folding. In this work we bring together new high-performance hardware platforms and the software redesign and implementation of a bio-inspired algorithm applied to a scientific problem of great complexity, protein folding. When implementing a solution to a real problem, a prior study is needed to understand the problem in depth, since new terminology and issues arise for anyone new to the field; in this case, amino acids, molecules and simulation models that are unfamiliar to readers without a biomedical background. A sketch of the ACO kernels involved is given below.
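
    As a minimal CPU sketch (not the thesis code) of the two ACO kernels typically targeted for GPU acceleration, per-ant probabilistic tour construction and pheromone update, the example below assumes a TSP-style distance matrix with zero diagonal; all parameters are illustrative, and in a GPU version each ant would map to a thread or block.

```python
import numpy as np

def aco_tsp(dist, n_ants=32, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    n = len(dist)
    pheromone = np.ones((n, n))
    heuristic = 1.0 / (dist + np.eye(n))         # assumes zero diagonal; avoids division by zero
    best_tour, best_len = None, np.inf
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):                  # independent ants: the data-parallel part
            tour = [rng.integers(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                w = pheromone[i, cand] ** alpha * heuristic[i, cand] ** beta
                tour.append(rng.choice(cand, p=w / w.sum()))
                unvisited.discard(tour[-1])
            tours.append(tour)
        pheromone *= (1.0 - rho)                 # evaporation
        for tour in tours:
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):                   # deposit proportional to tour quality
                a, b = tour[k], tour[(k + 1) % n]
                pheromone[a, b] += q / length
                pheromone[b, a] += q / length
    return best_tour, best_len
```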

    Normal mode computations and applications

    Get PDF
    Proteins are fundamental functional units in cells. They form stable yet somewhat flexible 3D structures and often function by interacting with other molecules; their functional behavior is determined by their 3D structures as well as their flexibility. In this thesis, I focus on protein dynamics and its role in protein function. One of the most powerful computational methods for studying protein dynamics is normal mode analysis (NMA). Its low-frequency modes, which capture the intrinsic dynamics of proteins, are of particular interest for most protein dynamics studies. Although NMA provides analytical solutions to a protein's collective motions, it is inconvenient to use because it requires energy minimization, and it becomes prohibitive due to the large memory consumption and long computation time when the system is very large. Additionally, it is unclear what meaning the frequencies of normal modes have, and whether those meanings can be validated by comparison with experimental results. The majority of this thesis resolves these issues. I have addressed the following sequence of questions and developed several simplified NMAs as answers: (1) what is the role of inter-residue forces; (2) how to remove the energy minimization requirement in NMA while keeping most of its accuracy; (3) how to efficiently build a coarse-grained model from the all-atom model while keeping atomic accuracy. Additionally, using the newly developed models and traditional NMA, I have examined the meaning of normal modes across the full frequency range and connected them with experimental results. The last part of this thesis addresses, as an application of normal modes, how they can depict the breathing motion of myoglobin and locate the transition pathway that dynamically opens ligand migration channels. The results show excellent agreement with molecular dynamics simulation results and experimentally determined reaction rate constants.
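
    The simplified NMAs developed in the thesis are not specified in the abstract. As a generic illustration of the kind of computation involved, here is a minimal elastic-network (anisotropic network model) normal mode sketch in Python/NumPy; such models avoid explicit energy minimization by treating the input structure as the energy minimum. The cutoff, spring constant and input format are illustrative assumptions.

```python
import numpy as np

def anm_normal_modes(coords, cutoff=15.0, gamma=1.0):
    """Elastic-network (ANM) normal modes from C-alpha coordinates.

    coords : (N, 3) array of atom positions in angstroms
    cutoff : interaction cutoff distance
    gamma  : uniform spring constant
    Returns eigenvalues and eigenvectors of the 3N x 3N Hessian; the six
    near-zero modes are rigid-body translations/rotations, and the modes
    just above them are the low-frequency collective motions.
    """
    n = len(coords)
    hessian = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            # off-diagonal 3x3 super-element: -gamma * outer(d, d) / |d|^2
            block = -gamma * np.outer(d, d) / r2
            hessian[3*i:3*i+3, 3*j:3*j+3] = block
            hessian[3*j:3*j+3, 3*i:3*i+3] = block
            hessian[3*i:3*i+3, 3*i:3*i+3] -= block   # diagonal blocks balance the row sums
            hessian[3*j:3*j+3, 3*j:3*j+3] -= block
    return np.linalg.eigh(hessian)

# Usage sketch: evals, evecs = anm_normal_modes(ca_coords); slow_modes = evecs[:, 6:16]
```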

    Quantum annealing and advanced optimization strategies of closed and open quantum systems

    Get PDF
    Adiabatic quantum computation and quantum annealing are powerful methods designed to solve optimization problems more efficiently than classical computers. The idea is to encode the solution to the optimization problem into the ground state of an Ising Hamiltonian, which can be hard to diagonalize exactly and can involve long-range and multiple-body interactions. The adiabatic theorem of quantum mechanics is exploited to drive a quantum system towards the target ground state. More precisely, the evolution starts from the ground state of a transverse field Hamiltonian, providing the quantum fluctuations needed for quantum tunneling between trial solution states. The Hamiltonian is slowly changed towards the Ising Hamiltonian of interest. If this evolution is infinitely slow, the system is guaranteed to stay in its ground state. Hence, at the end of the dynamics, the state can be measured, yielding the solution to the problem. In real devices, such as the D-Wave quantum annealers, the evolution lasts a finite amount of time, which gives rise to diabatic Landau-Zener transitions, and occurs in the presence of an environment, inducing thermal excitations out of the ground state. Both these limitations have to be carefully addressed in order to understand the true potential of these devices. The present thesis aims to find strategies to overcome these limitations. In the first part of this work, we address the effects of dissipation. We show that a low-temperature Markovian environment can improve quantum annealing compared with the closed-system case, supporting previous results known in the literature as thermally-assisted quantum annealing. In the second part, we combine dissipation with advanced annealing schedules, featuring pauses and iterated or adiabatic reverse annealing, which, in combination with low-temperature environments, can favor relaxation to the ground state and improve quantum annealing compared to the standard algorithm. In general, however, dissipation is detrimental to quantum annealing, especially when the annealing time is longer than the typical thermal relaxation and decoherence time scales. For this reason, it is essential to devise shortcuts to adiabaticity so as to reach the adiabatic limit at relatively short times and decrease the impact of thermal noise on the performance of quantum annealing. To this end, in the last part of this thesis we study the counterdiabatic driving approach to quantum annealing. In counterdiabatic driving, a new term is added to the Hamiltonian to suppress Landau-Zener transitions and achieve adiabaticity for any finite sweep rate. Although the counterdiabatic potential is nonlocal and hardly implementable on quantum devices, we can obtain approximate potentials that dramatically enhance the success probability of short-time quantum annealing following a variational formulation.
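
    To make the standard annealing protocol concrete, here is a minimal closed-system sketch (not the thesis code): a 3-qubit Ising problem is annealed under H(s) = (1 - s) H_x + s H_z with a linear schedule, and the final overlap with the Ising ground state is reported. The couplings, local fields, total time and time step are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
idm = np.eye(2, dtype=complex)

def op_on(site_op, site, n):
    """Embed a single-qubit operator at position `site` in an n-qubit register."""
    ops = [idm] * n
    ops[site] = site_op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

n = 3
h_x = -sum(op_on(sx, i, n) for i in range(n))                 # transverse-field driver
J = {(0, 1): 1.0, (1, 2): -1.0, (0, 2): 0.5}                  # illustrative couplings
h_loc = {0: 0.3, 1: -0.2, 2: 0.1}                             # small fields break degeneracy
h_z = sum(Jij * op_on(sz, i, n) @ op_on(sz, j, n) for (i, j), Jij in J.items()) \
    + sum(hi * op_on(sz, i, n) for i, hi in h_loc.items())

t_anneal, dt = 50.0, 0.05
psi = np.linalg.eigh(h_x)[1][:, 0]                            # start in the driver ground state
for step in range(int(t_anneal / dt)):
    s = (step + 0.5) * dt / t_anneal                          # linear schedule s(t) = t / t_anneal
    H = (1 - s) * h_x + s * h_z
    psi = expm(-1j * H * dt) @ psi                            # piecewise-constant evolution

target = np.linalg.eigh(h_z)[1][:, 0]                         # Ising ground state
print("ground-state fidelity:", abs(target.conj() @ psi) ** 2)
```

    Shortening t_anneal in this sketch lowers the final fidelity through Landau-Zener transitions, which is the closed-system limitation that the pausing, reverse-annealing and counterdiabatic strategies discussed above aim to mitigate.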

    A complex systems approach to education in Switzerland

    Get PDF
    The insights gained from the study of complex systems in biological, social, and engineered settings enable us not only to observe and understand, but also to actively design systems capable of successfully coping with complex and dynamically changing situations. The methods and mindset required for this approach have been applied to educational systems with their diverse levels of scale and complexity. Based on the general case made by Yaneer Bar-Yam, this paper applies the complex systems approach to the educational system in Switzerland. It confirms that the complex systems approach is valid; indeed, many recommendations made for the general case have already been implemented in the Swiss education system. To address existing problems and difficulties, further steps are recommended. This paper contributes to the further establishment of the complex systems approach by shedding light on an area which concerns us all, which is a frequent topic of discussion and dispute among politicians and the public, where billions of dollars have been spent without achieving the desired results, and where it is difficult to directly derive consequences from actions taken. The analysis of the education system's different levels, their complexity and scale clarifies how such a dynamic system should be approached and how it can be guided towards the desired performance.

    Understanding Quantum Technologies 2022

    Full text link
    Understanding Quantum Technologies 2022 is a creative-commons ebook that provides a unique 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction and quantum computing energetics), quantum computing hardware (all qubit types, including quantum annealing and quantum simulation paradigms, history, science, research, implementation and vendors), quantum enabling technologies (cryogenics, control electronics, photonics, component fabs, raw materials), quantum computing algorithms, software development tools and use cases, unconventional computing (potential alternatives to quantum and classical computing), quantum telecommunications and cryptography, quantum sensing, quantum technologies around the world, the societal impact of quantum technologies and even quantum fake sciences. The main audience is computer science engineers, developers and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, and particularly quantum computing. This version is an extensive update to the 2021 edition published in October 2021. Comment: 1132 pages, 920 figures, Letter format.