70 research outputs found

    New Strategies for Global Optimization of Chemical Engineering Applications by Differential Evolution

    Get PDF
    Ph.D. (Doctor of Philosophy)
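The entry above names differential evolution as its global-optimization strategy. As a hedged illustration only (not code from the thesis; function names and parameters are invented), the classic DE/rand/1/bin scheme can be sketched as:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    """Minimize f over a box with the classic DE/rand/1/bin scheme."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct members other than the target i
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if j == jrand or random.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:  # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

On a smooth test function such as the 2-D sphere, this converges rapidly; real chemical-engineering objectives would of course need problem-specific bounds and budgets.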

    Computational structure‐based drug design: Predicting target flexibility

    Get PDF
    The role of molecular modeling in drug design has experienced a significant revamp in the last decade. The increase in computational resources and molecular models, along with software developments, is finally introducing a competitive advantage in early phases of drug discovery. Medium and small companies with a strong focus on computational chemistry are being created, some of them having introduced important leads into drug design pipelines. An important source of this success is the extraordinary development of faster and more efficient techniques for describing flexibility in three-dimensional structural molecular modeling. At different levels, from docking techniques to atomistic molecular dynamics, conformational sampling between receptor and drug results in improved predictions, such as screening enrichment, discovery of transient cavities, etc. In this review article we perform an extensive analysis of these modeling techniques, dividing them into high and low throughput, and emphasizing their application to drug design studies. We conclude the review with a section describing our Monte Carlo method, PELE, recently highlighted as an outstanding advance in an international blind competition and industrial benchmarks. We acknowledge the BSC-CRG-IRB Joint Research Program in Computational Biology. This work was supported by a grant from the Spanish Government, CTQ2016-79138-R. J.I. acknowledges support from SVP-2014-068797, awarded by the Spanish Government.
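Monte Carlo conformational sampling of the kind the review describes rests on an acceptance test for perturbed conformations. A minimal sketch of the standard Metropolis criterion (illustrative only, not PELE's actual move engine; the kT value assumes room temperature in kcal/mol):

```python
import math
import random

def metropolis_accept(delta_e, kT=0.593):
    """Metropolis criterion: accept a perturbed conformation with
    probability min(1, exp(-dE/kT)); dE in kcal/mol, kT ~ 0.593 at 298 K.
    Downhill moves (dE <= 0) are always accepted."""
    return delta_e <= 0 or random.random() < math.exp(-delta_e / kT)
```

In a sampling loop, each receptor or ligand perturbation would be scored and kept or discarded by this test, which is what lets the chain escape local minima.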

    Traffic engineering in dynamic optical networks

    Get PDF
    Traffic Engineering (TE) refers to all the techniques a Service Provider employs to improve the efficiency and reliability of network operations. In IP over Optical (IPO) networks, traffic coming from upper layers is carried over the logical topology defined by the set of established lightpaths. Within this framework, TE techniques allow optimizing the configuration of optical resources with respect to a highly dynamic traffic demand. TE can be performed with two main methods: if the demand is known only in terms of an aggregated traffic matrix, the problem of automatically updating the configuration of an optical network to accommodate traffic changes is called Virtual Topology Reconfiguration (VTR). If instead the traffic demand is known in terms of data-level connection requests with sub-wavelength granularity, arriving dynamically from some source node to any destination node, the problem is called Dynamic Traffic Grooming (DTG). In this dissertation, new VTR algorithms for load balancing in optical networks based on Local Search (LS) techniques are presented. The main advantage of using LS is the minimization of network disruption, since the reconfiguration involves only a small part of the network. A comparison between the proposed schemes and the optimal solutions found via an ILP solver shows calculation time savings for comparable results of network congestion. A similar load balancing technique has been applied to alleviate congestion in an MPLS network, based on the efficient rerouting of Label-Switched Paths (LSPs) from the most congested links to allow better usage of network resources. Many algorithms have been developed to deal with DTG in IPO networks, where most of the attention is focused on optimizing physical resource utilization by considering specific constraints on the optical node architecture, while very little attention has so far been paid to Quality of Service (QoS) guarantees for the carried traffic.
In this thesis a novel Traffic Engineering scheme is proposed to guarantee QoS from the viewpoints of both service differentiation and transmission quality. Another contribution of this thesis is a formal framework for the definition of dynamic grooming policies in IPO networks. The framework is then specialized for an overlay architecture, where the control planes of the IP and optical levels are separated and no information is shared between the two. A family of grooming policies based on constraints on the number of hops and on the bandwidth sharing degree at the IP level is defined, and its performance analyzed in both regular and irregular topologies. While most of the literature on the DTG problem implicitly considers the grooming of low-speed connections onto optical channels using a TDM approach, the proposed grooming policies are evaluated here by considering a realistic traffic model that follows a Dynamic Statistical Multiplexing (DSM) approach, i.e. a single wavelength channel is shared between multiple IP elastic traffic flows.
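The load-balancing idea behind LS-based VTR can be sketched as a single improving move: take the most congested link of the logical topology, try to reroute one flow that crosses it, and accept the move only if the maximum congestion drops, so only a small part of the network is disrupted. This is an illustrative sketch, not the dissertation's algorithm; the graph representation and function names are invented:

```python
from collections import defaultdict, deque

def link_loads(flows):
    """Aggregate bandwidth per undirected link over all flow paths."""
    loads = defaultdict(float)
    for _, _, bw, path in flows:
        for u, v in zip(path, path[1:]):
            loads[frozenset((u, v))] += bw
    return loads

def bfs_path(adj, src, dst, banned):
    """Shortest-hop path from src to dst avoiding one banned link."""
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        u = queue.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in adj[u]:
            if v not in seen and frozenset((u, v)) != banned:
                seen.add(v)
                prev[v] = u
                queue.append(v)
    return None

def local_search_step(adj, flows):
    """Reroute one flow off the most loaded link; accept only
    moves that strictly reduce the maximum link congestion."""
    loads = link_loads(flows)
    hot = max(loads, key=loads.get)
    best_max = max(loads.values())
    for i, (s, d, bw, path) in enumerate(flows):
        if any(frozenset((u, v)) == hot for u, v in zip(path, path[1:])):
            alt = bfs_path(adj, s, d, hot)
            if alt is None:
                continue
            candidate = flows[:i] + [(s, d, bw, alt)] + flows[i + 1:]
            if max(link_loads(candidate).values()) < best_max:
                return candidate  # improving move found
    return flows  # local optimum for this neighborhood
```

Iterating this step until no improving move exists yields a local optimum of the congestion objective while touching only one flow per step.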

    Energy flow polynomials: A complete linear basis for jet substructure

    Get PDF
    We introduce the energy flow polynomials: a complete set of jet substructure observables which form a discrete linear basis for all infrared- and collinear-safe observables. Energy flow polynomials are multiparticle energy correlators with specific angular structures that are a direct consequence of infrared and collinear safety. We establish a powerful graph-theoretic representation of the energy flow polynomials which allows us to design efficient algorithms for their computation. Many common jet observables are exact linear combinations of energy flow polynomials, and we demonstrate the linear spanning nature of the energy flow basis by performing regression for several common jet observables. Using linear classification with energy flow polynomials, we achieve excellent performance on three representative jet tagging problems: quark/gluon discrimination, boosted W tagging, and boosted top tagging. The energy flow basis provides a systematic framework for complete investigations of jet substructure using linear methods. Comment: 41+15 pages, 13 figures, 5 tables; v2: updated to match JHEP version.
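As a concrete (if naive) illustration, an energy flow polynomial can be computed directly from its definition: for a multigraph on N vertices, sum over all assignments of particles to vertices the product of the energy fractions z_i and one angular factor theta_ij per graph edge. The brute-force sketch below is exponential in N, unlike the efficient graph-based algorithms the paper develops:

```python
import itertools

def efp(graph, zs, thetas):
    """Energy flow polynomial for a multigraph.

    graph  : list of edges (k, l) over vertex labels 0..N-1
    zs     : per-particle energy fractions z_i
    thetas : pairwise angular distances thetas[i][j]
    Returns sum over all particle assignments (i_1..i_N) of
    z_{i_1}...z_{i_N} * prod over edges (k,l) of theta_{i_k i_l}.
    """
    n_particles = len(zs)
    n_vertices = 1 + max(v for edge in graph for v in edge)
    total = 0.0
    for assign in itertools.product(range(n_particles), repeat=n_vertices):
        term = 1.0
        for v in assign:
            term *= zs[v]          # one energy factor per vertex
        for k, l in graph:
            term *= thetas[assign[k]][assign[l]]  # one angle per edge
        total += term
    return total
```

For the single-edge "dumbbell" graph this reduces to the familiar two-point energy correlator, the sum of z_i z_j theta_ij over all particle pairs.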

    Multi-objective optimal power resources planning of microgrids with high penetration of intermittent nature generation and modern storage systems

    Get PDF
    Microgrids are self-controlled entities at the distribution voltage level that interconnect distributed energy resources (DERs) with loads and can be operated in either grid-connected or islanded mode. This type of active distribution network has evolved into a powerful concept to guarantee reliable, efficient and sustainable electricity delivery as part of the power systems of the future. However, benefits of microgrids, such as the provision of ancillary services (AS), cannot be properly exploited until traditional planning methodologies are updated. Therefore, in this doctoral thesis, a Probabilistic Multi-objective Microgrid Planning methodology with two versions, POMMP and POMMP2, is proposed for effective decision-making on the optimal allocation of DERs and topology definition under the paradigm of microgrids with the capacity to provide AS to the main power grid. The methodologies are defined to consider a mixed generation matrix with dispatchable and non-dispatchable technologies, as well as distributed energy storage systems and both conventional and power-electronic-based operation configurations. The planning methodologies are formulated based on a so-called true multi-objective optimization problem with a configurable set of three objective functions. Accordingly, the capacity to supply AS is optimally enhanced with the maximization of the available active residual power in grid-connected operation mode; the capital, maintenance, and operation costs of the microgrid are minimized, while the revenues from service provision and participation in liberalized markets are maximized in a cost function; and the active power losses in the microgrid's operation are minimized.
Furthermore, a probabilistic technique based on the simulation of parameters from their probability density functions and Monte Carlo Simulation is adopted to model the stochastic behavior of the non-dispatchable renewable generation resources and load demand as the main sources of uncertainty in the planning of microgrids. Additionally, the POMMP2 methodology enhances the proposal in POMMP by modifying the methodology and optimization model to consider the optimal planning of the microgrid's topology simultaneously with the allocation of DERs. In this case, the concept of networked microgrids is contemplated, and a novel holistic approach is proposed that includes a multilevel graph-partitioning technique and subsequent iterative heuristic optimization for the optimal formation of clusters in the topology planning and DER allocation process. This microgrid planning problem leads to a complex non-convex mixed-integer nonlinear optimization problem with multiple contradictory objective functions, decision variables, and diverse constraint conditions. Accordingly, the optimization problem in the proposed POMMP/POMMP2 methodologies is conceived to be solved using multi-objective population-based metaheuristics, which gives rise to the adaptation and performance assessment of two existing optimization algorithms, the well-known Non-dominated Sorting Genetic Algorithm II (NSGA-II) and the Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D). Furthermore, the analytic hierarchy process (AHP) is tested and proposed for the multi-criteria decision-making in the last step of the planning methodologies. The POMMP and POMMP2 methodologies are tested in a 69-bus and a 37-bus medium voltage distribution network, respectively. Results show the benefits of a posteriori decision-making with the true multi-objective approach, as well as of a time-dependent planning methodology.
Furthermore, the results from the more comprehensive planning strategy in POMMP2 revealed the benefits of a holistic planning methodology, where different planning tasks are optimally and simultaneously addressed to offer better planning results. Research line: Smart grid planning. We thank the Administrative Department of Science, Technology and Innovation (Colciencias), Colombia, for the National Doctoral funding program grant (647).
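At the core of the a-posteriori, true multi-objective approach is Pareto dominance, on which both NSGA-II and MOEA/D build their selection machinery. A minimal sketch (all objectives minimized; illustrative only, not the thesis code):

```python
def dominates(a, b):
    """a dominates b (minimization): a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

The planner's three objectives (residual power, cost/revenue, losses) would each be one component of these vectors; the decision-maker then picks from the resulting front, e.g. via AHP.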

    Physically-based, real-time visualization and constraint analysis in multidisciplinary design optimization

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2003. Includes bibliographical references (p. 147-150). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. As computational tools become a valuable part of the engineering process, multidisciplinary design optimization (MDO) has become a popular approach for the design of complex engineering systems. MDO has had considerable impact by improving performance, lowering lifecycle cost and shortening product design time for complex systems; however, the engineering community often reports a lack of insight into the design process. This thesis addresses this issue by proposing a novel approach that brings visualization into the MDO framework and delivers physically-based, real-time constraint analysis and visualization. A framework and methodology are presented for effective, intuitive visualization of design optimization data. The visualization is effected on a Computer-Aided Design (CAD)-based physical representation of the system being designed. The use of a parametric CAD model allows real-time regeneration by using the Computational Analysis PRogramming Interface (CAPRI). CAPRI is used to link a general optimization framework to the CAD model. An example is presented for multidisciplinary design optimization of an aircraft. The new methodology is used to visualize the path of the optimizer through the design space. Visualizing the optimization process is also of interest for optimization health monitoring: by detecting flaws in the optimization definition, useless computations and time can be saved. Visualization of the optimization process enables the designer to rapidly gain physical understanding of the design tradeoffs made by the optimizer. The visualization framework is also used to investigate constraint behavior.
Active constraints are displayed on the CAD model, and the participation of design variables in a given constraint is represented in a physically intuitive manner. This novel visualization approach serves to dramatically increase the amount of learning that can be gained from design optimization tools and also proves useful as a diagnostic tool for identifying formulation errors. By Yann Deremaux. S.M.

    On The Use of Over-Approximate Analysis in Support of Software Development and Testing

    Get PDF
    The effectiveness of dynamic program analyses, such as profiling and memory-leak detection, crucially depends on the quality of the test inputs. However, adequate sets of inputs are rarely available. Existing automated input generation techniques can help but tend to be either too expensive or ineffective. For example, traditional symbolic execution scales poorly to real-world programs, and random input generation may never reach deep states within the program. For scalable, effective, automated input generation that can better support dynamic analysis, I propose an approach that extends traditional symbolic execution by targeting increasingly small fragments of a program. The approach starts by generating inputs for the whole program and progressively introduces additional unconstrained state until it reaches a given program coverage objective. This approach is applicable to any client dynamic analysis requiring high coverage that is also tolerant of over-approximated program behavior--behavior that cannot occur on a complete execution. To assess the effectiveness of my approach, I applied it to two client techniques. The first technique infers the actual path taken by a program execution by observing the CPU's electromagnetic emanations and requires inputs to generate a model that can recognize executed path segments. The client inference works by piecewise matching the observed emanation waveform to those recorded in a model. It requires the model to be complete (i.e., contain every piece), and the waveforms are sufficiently distinct that the inclusion of extra samples is unlikely to cause a misinference. After applying my approach to generate inputs covering all subsegments of the program's execution paths, I designed a source generator to automatically construct a harness and scaffolding to replay these inputs against fragments of the original program. The inference client constructs the model by recording the harness execution.
The second technique performs automated regression testing by identifying behavioral differences between two program versions and requires inputs to perform differential testing. It explores local behavior in a neighborhood of the program changes by generating inputs to functions near (in call-graph distance) the modified code. The inputs are then concretely executed on both versions, periodically checking internal state for behavioral differences. The technique requires high-coverage inputs for a full examination, and tolerates infeasible local state since both versions likely execute it equivalently. I will then present a separate technique to improve the coverage obtained by symbolic execution of floating-point programs. This technique is equally applicable to both traditional symbolic execution and my progressively under-constrained symbolic execution. Its key idea is to approximate floating-point expressions with fixed-point analogs. In concluding, I will also discuss future research directions, including additional empirical evaluations and the investigation of additional client analyses that could benefit from my approach. Ph.D.
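The fixed-point approximation idea can be sketched concretely: scale each value by 2^f, replace floating-point operations with integer ones, and a floating-point branch condition becomes an integer constraint that is easier for a solver to reason about exactly. This is an illustrative Q16.16 sketch under invented names, not the dissertation's implementation:

```python
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS  # Q16.16: 16 integer bits, 16 fractional bits

def to_fixed(x):
    """Encode a float as a Q16.16 integer."""
    return int(round(x * SCALE))

def from_fixed(q):
    """Decode a Q16.16 integer back to a float."""
    return q / SCALE

def fx_mul(a, b):
    # the raw product carries 32 fractional bits; shift to renormalize
    return (a * b) >> FRAC_BITS

# A floating-point branch condition such as `x * y + 0.5 > 1.0`
# becomes a purely integer comparison over fixed-point encodings:
def branch_taken(x, y):
    return fx_mul(to_fixed(x), to_fixed(y)) + to_fixed(0.5) > to_fixed(1.0)
```

The approximation error is bounded by the fractional resolution (here 2^-16), which is what makes the analog tractable while staying close to the floating-point behavior.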

    Hidden Markov models and neural networks for speech recognition

    Get PDF
    The Hidden Markov Model (HMM) is one of the most successful modeling approaches for acoustic events in speech recognition, and more recently it has proven useful for several problems in biological sequence analysis. Although the HMM is good at capturing the temporal nature of processes such as speech, it has a very limited capacity for recognizing complex patterns involving more than first-order dependencies in the observed data sequences. This is due to the first-order state process and the assumption of state-conditional independence between observations. Artificial Neural Networks (NNs) are almost the opposite: they cannot model dynamic, temporally extended phenomena very well, but are good at static classification and regression tasks. Combining the two frameworks in a sensible way can therefore lead to a more powerful model with better classification abilities. The overall aim of this work has been to develop a probabilistic hybrid of hidden Markov models and neural networks and ...
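The HMM side of such a hybrid is typically evaluated with the forward algorithm; in a hybrid system the emission terms B[i][o] would come from a neural network's scaled posteriors rather than a generative model. A minimal sketch of the forward recursion (illustrative, not the thesis code):

```python
def forward(obs, pi, A, B):
    """Forward algorithm: P(observation sequence | HMM).

    obs : sequence of discrete observation symbols
    pi  : initial state probabilities pi[i]
    A   : transition probabilities A[i][j]
    B   : emission probabilities B[i][o] (in a hybrid, NN outputs)
    """
    n = len(pi)
    # initialization: alpha_1(i) = pi_i * b_i(o_1)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    # recursion: alpha_t(j) = b_j(o_t) * sum_i alpha_{t-1}(i) * a_ij
    for o in obs[1:]:
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return sum(alpha)  # termination: sum over final states
```

The first-order limitation the abstract mentions is visible here: each step depends only on the previous alpha vector and on per-state emissions, never on longer observation history.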

    Cooperation in open, decentralized, and heterogeneous computer networks

    Get PDF
    Community Networks (CNs) are naturally open and decentralized structures that grow organically with the addition of heterogeneous network devices, contributed and configured as needed by their participants. The continuous growth in popularity and dissemination of CNs in recent years has raised the perception of a mature and sustainable model for the provisioning of networking services. However, because such infrastructures include uncontrolled entities with non-delimited responsibilities, every single network entity represents a potential single point of failure that can stop the entire network from working, and that no other entity can prevent or even circumvent. Given the open and decentralized nature of CNs, which bring together individuals and organizations with different and even conflicting economic, political, and technical interests, achieving more than basic consensus on the correctness of all network nodes is challenging. In such an environment, the lack of self-determination for CN participants in terms of control and security of routing can be regarded as an obstacle to growth or even as a risk of collapse. To address this problem we first consider deployments of existing Wireless CNs and analyze their technology, characteristics, and performance. We perform an experimental evaluation of a production 802.11an Wireless CN and compare it to studies of other Wireless CN deployments in the literature. We compare experimentally obtained throughput traces with path-capacity calculations based on well-known conflict graph models. We observe that in the majority of cases the path chosen by the employed BMX6 routing protocol corresponds to the best identified path in our model.
We analyze monitoring and interaction shortcomings of CNs and address these with the Network Characterization Tool (NCT), a novel tool that allows users to assess network state and performance, and to improve their quality of experience by individually modifying the routing parameters of their devices. We also evaluate performance outcomes when different routing policies are in use. Routing protocols provide self-management mechanisms that allow the continuous operation of a Community Mesh Network (CMN). We focus on three widely used proactive mesh routing protocols and their implementations: BMX6, OLSR, and Babel. We describe the core idea behind these protocols and study their implications in terms of scalability, performance, and stability by exposing them to typical but challenging network topologies and scenarios. Our results show the relative merits, costs, and limitations of the three protocols. Building upon the studied characteristics of typical CN deployments, their requirements on open and decentralized cooperation, and the potential controversy over the trustworthiness of particular components of a network infrastructure, we propose and evaluate SEMTOR, a novel routing protocol that can satisfy these demands. SEMTOR allows the verifiable and undeniable definition, and distributed application, of individually trusted topologies for routing traffic towards each node. One unique advantage of SEMTOR is that it does not require a global consensus on the trustworthiness of any node and thus preserves cooperation among nodes even with oppositionally defined trust specifications. This gives each node admin the freedom to individually define the subset, and the resulting sub-topology, from the whole set of participating nodes that they consider sufficiently trustworthy to meet their security and data-delivery objectives and concerns. The proposed mechanisms have been realized as a usable, open-source implementation called BMX7, the successor of BMX6.
We have evaluated its scalability, contributed robustness, and security. These results show that the use of SEMTOR for securing trusted routing topologies is feasible, even when executed on real and very cheap (10 Euro, Linux SoC) routers as commonly used in Community Mesh Networks.
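The per-node trust idea behind SEMTOR can be illustrated by restricting route computation to the subgraph of nodes a given admin trusts: untrusted nodes are simply never traversed, with no global agreement needed. The BFS sketch below is illustrative only, with invented names, and is not the SEMTOR protocol itself:

```python
from collections import deque

def trusted_next_hop(adj, trusted, src, dst):
    """Next hop from src toward dst, traversing only nodes in the
    per-node `trusted` set (src's own policy; dst must be trusted)."""
    if dst not in trusted:
        return None
    if src == dst:
        return src
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v in trusted and v not in seen:
                seen.add(v)
                prev[v] = u
                if v == dst:
                    # walk back to the hop adjacent to src
                    hop = dst
                    while prev[hop] != src:
                        hop = prev[hop]
                    return hop
                queue.append(v)
    return None  # dst unreachable through trusted nodes only
```

Because each node applies its own `trusted` set, two admins with conflicting trust policies still cooperate: each simply routes through its own sub-topology.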