
    The distance-based critical node detection problem: models and algorithms

    In the wake of terrorism and natural disasters, assessing networked systems for vulnerability to failures arising from these events is essential to maintaining their operations. This is crucial given the heavy dependence of daily social and economic activities on networked systems such as transport, telecommunication and energy networks, as well as the interdependence of these networks. In this thesis, we explore methods to assess the vulnerability of networked systems to element failures, employing connectivity as the performance measure for vulnerability. The associated optimisation problem, termed the critical node (edge) detection problem, seeks to identify a subset of nodes (edges) of a network whose deletion (failure) optimises a network connectivity objective. Traditional connectivity measures employed in most studies of the critical node detection problem overlook the internal cohesiveness of networks and the extent of connectivity in the network, which limits the effectiveness of the resulting methods in uncovering vulnerability with regard to network connectivity. Our work therefore focuses on distance-based connectivity, a fairly new class of connectivity measures introduced for studying the critical node detection problem to overcome these limitations. In Chapter 1, we provide an introduction outlining the motivations and methods related to our study. In Chapter 2, we review the literature on the critical node detection problem as well as its application areas and related problems. In Chapter 3, we formally introduce the distance-based critical node detection problem, propose new integer programming models for the case of hop-based distances together with an efficient algorithm for the separation problems associated with the models, and derive two families of valid inequalities. In Chapter 4, we approach the distance-based critical node detection problem heuristically, proposing a centrality-based heuristic that employs a backbone crossover and a centrality-based neighbourhood search. In Chapter 5, we generalise the methods of Chapter 3 to edge-weighted graphs and introduce the edge-deletion version of the problem, which we term the distance-based critical edge detection problem. Computational experiments are provided throughout Chapters 3, 4 and 5. Finally, Chapter 6 presents conclusions as well as future research directions. Keywords: Network Vulnerability, Critical Node Detection Problem, Distance-based Connectivity, Integer Programming, Lazy Constraints, Branch-and-cut, Heuristics.
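    To make the hop-based objective concrete, here is a minimal Python sketch under the assumption that the objective counts node pairs still connected within k hops after a deletion set is removed. The greedy wrapper, the karate-club test graph and all parameters are illustrative assumptions, not the thesis' integer programming models or heuristic.

```python
# Minimal sketch: k-hop connectivity objective for the distance-based
# critical node detection problem, plus a naive greedy node selector.
import itertools
import networkx as nx

def pairs_within_k_hops(G, deleted, k):
    """Count node pairs still connected within k hops after deleting `deleted`."""
    H = G.copy()
    H.remove_nodes_from(deleted)
    count = 0
    for u, v in itertools.combinations(H.nodes, 2):
        try:
            if nx.shortest_path_length(H, u, v) <= k:
                count += 1
        except nx.NetworkXNoPath:
            continue  # disconnected pair contributes nothing
    return count

def greedy_critical_nodes(G, budget, k):
    """Greedily pick nodes whose removal most reduces k-hop connectivity."""
    deleted = set()
    for _ in range(budget):
        best = min(
            (n for n in G.nodes if n not in deleted),
            key=lambda n: pairs_within_k_hops(G, deleted | {n}, k),
        )
        deleted.add(best)
    return deleted

G = nx.karate_club_graph()  # toy instance for illustration
print(greedy_critical_nodes(G, budget=3, k=2))
```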

    Evolutionary Computation and QSAR Research

    The successful high-throughput screening of molecule libraries for a specific biological property is one of the main improvements in drug discovery. Virtual molecular filtering and screening relies greatly on quantitative structure-activity relationship (QSAR) analysis, a mathematical model that correlates the activity of a molecule with molecular descriptors. QSAR models have the potential to reduce the costly failure of drug candidates in advanced (clinical) stages by filtering combinatorial libraries, eliminating candidates with predicted toxic effects or poor pharmacokinetic profiles, and reducing the number of experiments. To obtain a predictive and reliable QSAR model, scientists use methods from various fields such as molecular modeling, pattern recognition, machine learning and artificial intelligence. QSAR modeling relies on three main steps: codification of the molecular structure into molecular descriptors, selection of relevant variables in the context of the analyzed activity, and search for the optimal mathematical model that correlates the molecular descriptors with a specific activity. Since a variety of techniques from statistics and artificial intelligence can aid the variable selection and model building steps, this review focuses on the evolutionary computation methods supporting these tasks. Thus, this review explains the basics of genetic algorithms and genetic programming as evolutionary computation approaches, selection methods for high-dimensional data in QSAR, methods to build QSAR models, current evolutionary feature selection methods and applications in QSAR, and future trends in joint or multi-task feature selection methods.
    Funding: Instituto de Salud Carlos III, PIO52048; Instituto de Salud Carlos III, RD07/0067/0005; Ministerio de Industria, Comercio y Turismo, TSI-020110-2009-53; Galicia, Consellería de Economía e Industria, 10SIN105004P
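    As a concrete illustration of the variable-selection step the review describes, the sketch below evolves descriptor subsets with a simple genetic algorithm: individuals are bitmasks over descriptors and fitness is cross-validated regression accuracy. The toy descriptor matrix, the ridge-regression fitness and all GA parameters are assumptions for illustration, not a specific method from the review.

```python
# GA feature selection sketch for QSAR: evolve descriptor bitmasks,
# score each subset by cross-validated ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))                              # toy molecular descriptors
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=120)  # toy activity signal

def fitness(mask):
    """Cross-validated R^2 of a ridge model on the selected descriptor columns."""
    if mask.sum() == 0:
        return -np.inf
    return cross_val_score(Ridge(), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(30, X.shape[1]))  # random initial population of bitmasks
for generation in range(25):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]       # truncation selection: keep the best 10
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])         # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.02      # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected descriptors:", np.flatnonzero(best))
```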

    NOVEL ALGORITHMS AND TOOLS FOR LIGAND-BASED DRUG DESIGN

    Computer-aided drug design (CADD) has become an indispensable component in modern drug discovery projects. The prediction of physicochemical and pharmacological properties of candidate compounds effectively increases the probability that drug candidates pass later phases of clinical trials. Ligand-based virtual screening exhibits advantages over structure-based drug design in terms of its wide applicability and high computational efficiency. The established chemical repositories and reported bioassays form a gigantic knowledge base from which to derive quantitative structure-activity relationships (QSAR) and structure-property relationships (QSPR). In addition, the rapid advance of machine learning techniques suggests new solutions for data-mining huge compound databases. In this thesis, a novel ligand classification algorithm, Ligand Classifier of Adaptively Boosting Ensemble Decision Stumps (LiCABEDS), was reported for the prediction of diverse categorical pharmacological properties. LiCABEDS was successfully applied to model 5-HT1A ligand functionality, ligand selectivity of cannabinoid receptor subtypes, and blood-brain barrier (BBB) passage. LiCABEDS was implemented and integrated with a graphical user interface, data import/export, automated model training/prediction, and project management. In addition, a non-linear ligand classifier was proposed, using a novel Topomer kernel function in a support vector machine. With the emphasis on green high-performance computing, graphics processing units are alternative platforms for computationally expensive tasks. A novel GPU algorithm was designed and implemented to accelerate the calculation of chemical similarities with dense-format molecular fingerprints. Finally, a compound acquisition algorithm was reported for constructing a structurally diverse screening library in order to enhance hit rates in high-throughput screening.
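    The core learning idea behind LiCABEDS, boosting an ensemble of one-level decision stumps over fingerprint bits, can be sketched as follows. The random fingerprints, the toy labels and the scikit-learn boosting routine are stand-ins for illustration, not the authors' implementation or data.

```python
# Sketch of adaptive boosting over decision stumps for categorical
# ligand properties, applied to binary molecular fingerprints.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
fingerprints = rng.integers(0, 2, size=(200, 1024))  # toy 1024-bit fingerprints
labels = fingerprints[:, 7] | fingerprints[:, 42]    # toy binary property label

stump = DecisionTreeClassifier(max_depth=1)          # each weak learner tests one fingerprint bit
model = AdaBoostClassifier(estimator=stump, n_estimators=100).fit(fingerprints, labels)
print("training accuracy:", model.score(fingerprints, labels))
```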

    Study of Acid Suppressed Thickener Technology Using Density Functional Theory and Machine Learning Techniques

    Hydrophobically modified ethylene oxide urethane (HEUR) rheology modifiers, which are water-based polyurethane formulations manufactured by Dow Coating Materials, a division of the Dow Chemical Company, are often added to interior and exterior water-based latex paint formulations to control their viscosity. The thickening efficiency of the HEUR rheology modifier is controlled by the pH of the solvent, as this affects the protonation-deprotonation equilibrium of the amine hydrophobe group at the end of the rheology modifier polymer chain. The principal quantity characterizing this equilibrium is the acid dissociation constant (pKa) of the hydrophobe group, which identifies the transition between high and low viscosity of the suspension. To gain a better understanding of the functioning of the hydrophobe molecular groups, and to develop novel hydrophobes that meet specific performance characteristics, it is important to accurately predict the pKa from first-principles calculations and use it as a first evaluation criterion for rapid screening of candidate hydrophobe molecules. A main source of error in the pKa calculation is the value of the solvation free energy of the molecule in its charged state. We therefore develop new methods to increase the accuracy of the solvation free energy calculation for charged species without excessively increasing the computational expense. These include a hybrid cluster-continuum model approach, where explicit solvent molecules are added to the traditionally employed continuum solvation model, and a molecular dynamics (MD) sampling procedure that eliminates the costly energy minimization step. Using test molecules for pKa calculations, we systematically examine the convergence behavior in terms of the number of explicit water molecules that need to be included in the cluster-continuum model, the influence of the dielectric constant attributed to the continuum, and the placement of a counter ion for charge neutrality for the accurate calculation of the solvation free energy. We establish that the MD sampling method yields results comparable to those of the energy minimization procedure during density functional theory (DFT) calculations, but at 100 times the speed. When calculating the solvation free energy and the pKa of a known hydrophobe, ethoxylated bis(2-ethylhexyl)amine, we find that including explicit water molecules and a fragment of the latex polymer in its local environment both significantly improve the results. Finally, we develop an informatics-based approach that employs a transferable machine learning (ML) model, trained and validated on a limited amount of experimental data, to predict the solvation free energies of new ionic species at a reasonable computational cost. We compare three different ML methods (linear ridge regression, support vector regression and random forest regression) and find that the model trained by the random forest regression method yields the predictions with the lowest mean absolute error. A feature selection analysis shows that the atomic fraction feature, which reflects the chemical constitution of the hydrophobe, plays the most important role in the solvation free energy prediction. Adding the Wiener index, a measure of the molecular topology, and the solvent-accessible surface area of the molecules further improves the performance of the model. Accordingly, our ML model predicts the solvation energies of ionic species, including our test hydrophobe molecule, with accuracy similar to that of atomistic modeling using first-principles calculations.
    PHD; Materials Science and Engineering; University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/145967/1/wuwenkun_1.pd
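    A minimal sketch of the informatics step described above: a random forest regressor mapping the kinds of features the abstract names (atomic fractions, Wiener index, solvent-accessible surface area) to solvation free energies. The feature values and target below are random placeholders rather than experimental data, and the standard thermodynamic link from the deprotonation free energy back to pKa is noted in a comment.

```python
# Random forest regression sketch for solvation free energy prediction.
# Once dG_solv feeds into dG_deprot, pKa follows from
# pKa = dG_deprot / (R * T * ln 10).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 150
features = np.column_stack([
    rng.dirichlet(np.ones(4), size=n),    # atomic fractions of C, H, N, O (placeholder)
    rng.integers(50, 500, size=n),        # Wiener index, molecular topology (placeholder)
    rng.uniform(100.0, 600.0, size=n),    # solvent-accessible surface area, A^2 (placeholder)
])
dG_solv = -0.05 * features[:, -1] + rng.normal(scale=2.0, size=n)  # toy target, kcal/mol

X_train, X_test, y_train, y_test = train_test_split(features, dG_solv, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
err = np.abs(model.predict(X_test) - y_test).mean()
print(f"mean absolute error: {err:.2f} kcal/mol")
print("feature importances:", model.feature_importances_.round(3))
```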

    Subject index volumes 1–92


    Symmetry in Graph Theory

    This book contains the successful invited submissions to a Special Issue of Symmetry on the subject of "Graph Theory". Although symmetry has always played an important role in Graph Theory, in recent years this role has increased significantly in several branches of this field, including but not limited to Gromov hyperbolic graphs, the metric dimension of graphs, domination theory, and topological indices. This Special Issue includes contributions addressing new results on these topics, both from a theoretical and an applied point of view.

    Risks

    This book is a collection of feature articles published in Risks in 2020, all written by experts in their respective fields. In these articles, they develop and present new aspects and insights that can help us understand and cope with the different and ever-changing aspects of risks. In some of the feature articles, probabilistic risk modeling is the central focus, whereas impact and innovation, in the context of financial economics and actuarial science, are somewhat retained and left for future research; in other articles it is the other way around. Ideas and perceptions in financial markets are the driving force of the research, but they do not necessarily rely on innovation in the underlying risk models. Together, they are state-of-the-art, expert-led, up-to-date contributions demonstrating what Risks is and what Risks has to offer: articles that focus on the central aspects of insurance and financial risk management, and that detail progress and paths of further development in understanding and dealing with...risks. Asking the same type of questions (which risk allocation and mitigation should be provided, and why?) creates value from three different perspectives: the normative perspective of the market regulator, the existential perspective of the financial institution, and the phenomenological perspective of the individual consumer or policy holder.

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments, which means that images may be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks, and it has hence attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. Such an operation transforms an input image into a representation that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, while consistently achieving significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
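    A heavily simplified sketch of push-pull inhibition as a preprocessing transform is given below. A single Gabor filter stands in for the excitatory channel, with its opposite-polarity response acting as the inhibitory "pull"; this is an assumed illustration of the principle only, not the paper's CORF operator, which models LGN/V1 receptive fields in far more detail.

```python
# Toy push-pull preprocessing: excitatory filter response minus a fraction
# of the opposite-polarity response, which suppresses polarity-inconsistent noise.
import numpy as np
from skimage import data, filters

def push_pull(image, frequency=0.2, alpha=0.8):
    """Gabor-based stand-in for a push-pull delineation map."""
    response, _ = filters.gabor(image, frequency=frequency)
    push = np.maximum(response, 0.0)   # responds to the preferred contrast polarity
    pull = np.maximum(-response, 0.0)  # responds to the opposite polarity (and to noise)
    return push - alpha * pull         # inhibition cancels noise present in both channels

image = data.camera().astype(float) / 255.0
noisy = image + np.random.default_rng(3).normal(scale=0.1, size=image.shape)
delineation = push_pull(noisy)         # this map, not raw pixels, would feed the CNN
print(delineation.shape, float(delineation.min()), float(delineation.max()))
```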

    The 8th International Conference on Time Series and Forecasting

    The aim of ITISE 2022 is to create a friendly environment that can lead to the establishment or strengthening of scientific collaborations and exchanges among attendees. ITISE 2022 therefore solicits high-quality original research papers (including significant works-in-progress) on any aspect of time series analysis and forecasting, in order to motivate the generation and use of new knowledge, computational techniques and methods for forecasting in a wide range of fields.