
    Continuous extremal optimization for Lennard-Jones Clusters

    In this paper, we explore a general-purpose heuristic algorithm for finding high-quality solutions to continuous optimization problems. The method, called continuous extremal optimization (CEO), can be considered an extension of extremal optimization (EO) and consists of two components: one responsible for global search and the other for local search. With only one adjustable parameter, the CEO's performance proves competitive with more elaborate stochastic optimization procedures. We demonstrate it on a well-known continuous optimization problem: the Lennard-Jones cluster optimization problem. Comment: 5 pages and 3 figures
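
    A minimal sketch, in reduced Lennard-Jones units (epsilon = sigma = 1), of the two-component idea described above: an EO-style global move that preferentially perturbs atoms with poor local energies, followed by a simple local relaxation. The power-law rank selection, the finite-difference gradient descent, and all parameter values are illustrative assumptions, not the authors' exact CEO procedure.

```python
import numpy as np

def lj_per_atom(pos):
    """Each atom's share of the total Lennard-Jones energy (reduced units)."""
    n, e = len(pos), np.zeros(len(pos))
    for i in range(n):
        for j in range(i + 1, n):
            r2 = np.sum((pos[i] - pos[j]) ** 2)
            inv6 = 1.0 / r2 ** 3
            pair = 4.0 * (inv6 ** 2 - inv6)
            e[i] += 0.5 * pair            # split the pair energy between both atoms
            e[j] += 0.5 * pair
    return e

def total_energy(pos):
    return lj_per_atom(pos).sum()

def ceo_step(pos, tau=1.5, kick=0.5, rng=None):
    """One EO-style move: kick a poorly ranked atom, then relax locally."""
    rng = rng or np.random.default_rng()
    e = lj_per_atom(pos)
    order = np.argsort(-e)                        # worst (highest-energy) atoms first
    rank = min(int(len(pos) * rng.random() ** tau), len(pos) - 1)
    new = pos.copy()
    new[order[rank]] += kick * rng.standard_normal(3)   # global move on the chosen atom
    for _ in range(20):                           # crude local search: gradient descent
        grad, h, e0 = np.zeros_like(new), 1e-4, total_energy(new)
        for k in np.ndindex(*new.shape):          # finite-difference gradient
            new[k] += h
            grad[k] = (total_energy(new) - e0) / h
            new[k] -= h
        new -= 0.01 * np.clip(grad, -1.0, 1.0)
    return new if total_energy(new) < total_energy(pos) else pos

# toy usage: a few steps on a random 7-atom cluster
rng = np.random.default_rng(1)
coords = rng.uniform(-1.5, 1.5, (7, 3))
for _ in range(5):
    coords = ceo_step(coords, rng=rng)
print("cluster energy:", total_energy(coords))
```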

    A methodology for the structural and functional analysis of signaling and regulatory networks

    BACKGROUND: Structural analysis of cellular interaction networks contributes to a deeper understanding of network-wide interdependencies, causal relationships, and basic functional capabilities. While the structural analysis of metabolic networks is a well-established field, similar methodologies have scarcely been developed and applied to signaling and regulatory networks.
    RESULTS: We propose formalisms and methods, relying on adapted and partially newly introduced approaches, which facilitate a structural analysis of signaling and regulatory networks with a focus on functional aspects. We use two different formalisms to represent and analyze interaction networks: interaction graphs and (logical) interaction hypergraphs. We show that, in interaction graphs, the determination of feedback cycles and of all signaling paths between any pair of species is equivalent to the computation of elementary modes known from metabolic networks. Knowledge of the set of signaling paths and feedback loops facilitates the computation of intervention strategies and the classification of compounds into activators, inhibitors, ambivalent factors, and non-affecting factors with respect to a certain species. In some cases, qualitative effects induced by perturbations can be unambiguously predicted from the network scheme. Interaction graphs, however, cannot capture AND relationships, which occur frequently in interaction networks. The consequent logical concatenation of all the arcs pointing into a species leads to Boolean networks. For a Boolean representation of cellular interaction networks we propose a formalism based on logical (or signed) interaction hypergraphs, which facilitates in particular a logical steady state analysis (LSSA). LSSA enables studies on the logical processing of signals and the identification of optimal intervention points (targets) in cellular networks. LSSA also reveals network regions whose parametrization and initial states are crucial for the dynamic behavior. We have implemented these methods in our software tool CellNetAnalyzer (successor of FluxAnalyzer) and illustrate their applicability using a logical model of T-cell receptor signaling, which provides non-intuitive results regarding feedback loops, essential elements, and (logical) signal processing upon different stimuli.
    CONCLUSION: The methods and formalisms we propose herein are another step towards the comprehensive functional analysis of cellular interaction networks. Their potential, shown on a realistic T-cell signaling model, makes them a promising tool.
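
    To make the logical steady state idea concrete, here is a minimal sketch on a hypothetical four-species Boolean network (a ligand, an inhibitor, a receptor, and an AND-gated output). The toy network and the synchronous fixed-point iteration are illustrative assumptions only; this is not the T-cell receptor model or the CellNetAnalyzer implementation described in the paper.

```python
# A minimal logical steady state computation on a toy Boolean network.

def logical_steady_state(rules, state, max_iter=100):
    """Iterate Boolean update rules synchronously until a fixed point (or give up)."""
    for _ in range(max_iter):
        new = {n: rule(state) for n, rule in rules.items()}
        new.update({n: v for n, v in state.items() if n not in rules})  # inputs stay fixed
        if new == state:
            return new          # logical steady state reached
        state = new
    return None                 # no fixed point found (e.g. a feedback oscillation)

# toy signed interaction hypergraph: output requires receptor AND NOT inhibitor
rules = {
    "receptor": lambda s: s["ligand"],
    "output":   lambda s: s["receptor"] and not s["inhibitor"],
}
stimulus = {"ligand": True, "inhibitor": False, "receptor": False, "output": False}
print(logical_steady_state(rules, stimulus))
# -> {'receptor': True, 'output': True, 'ligand': True, 'inhibitor': False}
```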

    Causality, Information and Biological Computation: An algorithmic software approach to life, disease and the immune system

    Biology has taken strong steps towards becoming a computer science, aiming at reprogramming nature after the realisation that nature herself has reprogrammed organisms by harnessing the power of natural selection and the digital, prescriptive nature of replicating DNA. Here we further unpack ideas related to computability, algorithmic information theory and software engineering, in the context of the extent to which biology can be (re)programmed, and of how we may go about doing so more systematically, with the tools and concepts offered by theoretical computer science, in a translation exercise from computing to molecular biology and back. These concepts provide a means to a hierarchical organization, thereby blurring previously clear-cut lines between concepts like matter and life, or between tumour types that are otherwise taken as different yet may not have a different cause. This does not diminish the properties of life or make its components and functions less interesting. On the contrary, this approach makes for a more encompassing and integrated view of nature, one that subsumes observer and observed within the same system, and can generate new perspectives and tools with which to view complex diseases like cancer, approaching them afresh from a software-engineering viewpoint that casts evolution in the role of programmer, cells as computing machines, DNA and genes as instructions and computer programs, viruses as hacking devices, the immune system as a software debugging tool, and diseases as an information-theoretic battlefield where all these forces deploy. We show how information theory and algorithmic programming may explain fundamental mechanisms of life and death. Comment: 30 pages, 8 figures. Invited chapter contribution to Information and Causality: From Matter to Life. Sara I. Walker, Paul C.W. Davies and George Ellis (eds.), Cambridge University Press

    On Easiest Functions for Mutation Operators in Bio-Inspired Optimisation

    Understanding which function classes are easy and which are hard for a given algorithm is a fundamental question for the analysis and design of bio-inspired search heuristics. A natural starting point is to consider the easiest and hardest functions for an algorithm. For the (1+1) EA using standard bit mutation (SBM) it is well known that OneMax is an easiest function with a unique optimum while Trap is a hardest. In this paper we extend the analysis of easiest function classes to the contiguous somatic hypermutation (CHM) operator used in artificial immune systems. We define a function MinBlocks and prove that it is an easiest function for the (1+1) EA using CHM, presenting both a runtime and a fixed-budget analysis. Since MinBlocks is, up to a factor of 2, a hardest function for standard bit mutations, we consider the effects of combining both operators into a hybrid algorithm. We rigorously prove that by combining the advantages of k operators, several hybrid algorithmic schemes have optimal asymptotic performance on the easiest functions for each individual operator. In particular, the hybrid algorithms using CHM and SBM have optimal asymptotic performance on both OneMax and MinBlocks. We then investigate easiest functions for hybrid schemes and show that an easiest function for a hybrid algorithm is not just a trivial weighted combination of the respective easiest functions for each operator.
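
    As a concrete reference point, here is a minimal sketch of the (1+1) EA on OneMax with both operators. The standard bit mutation follows the usual textbook definition; the contiguous somatic hypermutation shown here (flip a uniformly random contiguous block) is one common formulation and may differ in detail from the operator analysed in the paper, and MinBlocks is omitted because its precise definition is given there.

```python
import random

def onemax(x):
    return sum(x)                                      # number of one-bits

def sbm(x, rng):
    """Standard bit mutation: flip each bit independently with probability 1/n."""
    n = len(x)
    return [b ^ (rng.random() < 1.0 / n) for b in x]

def chm(x, rng):
    """Contiguous somatic hypermutation (one formulation): flip a random block."""
    i, j = sorted(rng.sample(range(len(x) + 1), 2))    # random contiguous block [i, j)
    return x[:i] + [1 - b for b in x[i:j]] + x[j:]

def one_plus_one_ea(mutate, f, n=50, budget=20000, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(budget):
        y = mutate(x, rng)
        if f(y) >= f(x):                               # accept if not worse
            x = y
        if f(x) == n:
            break
    return x, f(x)

print("SBM on OneMax:", one_plus_one_ea(sbm, onemax)[1])
print("CHM on OneMax:", one_plus_one_ea(chm, onemax)[1])
```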

    Network intrusion detection by artificial immune system

    With computer networks' rapid penetration into everyday life, various types of malicious attacks and service abuses have increased dramatically. Network security has become one of the major challenges in modern networks, and Intrusion Detection (ID) is one of the active branches of network security research. Many technologies, such as neural networks, fuzzy logic and genetic algorithms, have been applied to intrusion detection, with varied results. In this thesis, an Artificial Immune System (AIS) based approach to intrusion detection is explored. AIS is a bio-inspired computing paradigm that has been applied in many different areas, including intrusion detection. The main objective of our research is to improve the detection performance of an AIS-based Intrusion Detection System (IDS) while keeping its computational complexity low. An IDS requires a specified set of monitoring parameters. In a computer network, many parameters can be collected or monitored, and their number can be very large. These parameters can be used for intrusion detection, but their significance for detection varies widely; if all parameters were used, the computational complexity of the IDS would be high. The selection of a group of significant parameters is therefore necessary. This process is called feature selection. Two feature selection algorithms, the rough set algorithm (RSA) and linear genetic programming (LGP), are selected and compared in this thesis, and an improved AIS-based IDS using these two feature selection algorithms is studied. A basic feature selection algorithm only picks the features to be used, assuming they contribute equally to system performance, which is not the case in reality. Weighting the selected features' contributions in the IDS is therefore expected to further improve performance; however, assigning weights to the selected features is not easy. In this thesis, a weight distribution scheme among selected features is proposed, and with a simplified exhaustive approach an optimal weight allocation is obtained. The results show that the improved AIS-based IDS with weighted feature selection can achieve a true positive rate of 99.98% while keeping the true negative rate at 99.94%. These results are obtained from experiments on the popular KDD Cup 99 test dataset and indicate that the proposed scheme outperforms most existing IDSs on the same dataset.
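
    The weighting idea can be illustrated with a small sketch: a distance-based anomaly detector over a few selected features, with per-feature weights chosen by a coarse exhaustive grid search over the sum of true positive and true negative rates. The toy data, the nearest-self decision rule, the threshold, and the weight grid are all assumptions for illustration; they are not the thesis's AIS detector, its KDD Cup 99 feature set, or its weighting scheme.

```python
import itertools
import numpy as np

def weighted_anomaly_score(x, self_set, w):
    """Distance from x to the nearest normal ('self') sample, per-feature weighted."""
    d = np.sqrt(((self_set - x) ** 2 * w).sum(axis=1))
    return d.min()

def grid_search_weights(self_set, attacks, normals, threshold=1.0, steps=(0.0, 0.5, 1.0)):
    """Exhaustively try weight combinations; keep the one with the best TP + TN rate."""
    best_w, best_acc = None, -1.0
    for w in itertools.product(steps, repeat=self_set.shape[1]):
        w = np.array(w)
        if w.sum() == 0:
            continue                               # all-zero weights are meaningless
        tp = np.mean([weighted_anomaly_score(a, self_set, w) > threshold for a in attacks])
        tn = np.mean([weighted_anomaly_score(n, self_set, w) <= threshold for n in normals])
        if tp + tn > best_acc:
            best_w, best_acc = w, tp + tn
    return best_w, best_acc

# toy data: 3 selected features, normal traffic near the origin, attacks offset in feature 0
rng = np.random.default_rng(0)
self_set = rng.normal(0, 0.3, (50, 3))
normals  = rng.normal(0, 0.3, (20, 3))
attacks  = rng.normal(0, 0.3, (20, 3)) + np.array([3.0, 0.0, 0.0])
w, score = grid_search_weights(self_set, attacks, normals)
print("best weights:", w, "TP + TN:", score)
```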

    A Novel Algorithm for Solving Structural Optimization Problems

    In the past few decades, metaheuristic optimization methods have emerged as an effective approach for addressing structural design problems. These methods are population-based search techniques built on mathematical algorithms, and they exploit modern computing power to search complex solution spaces for the minimum. In this paper, a simple algorithm inspired by hurricane chaos is proposed for solving structural optimization problems. Many optimization algorithms use update equations that rely on the global best solution, which can cause the search to become trapped in a local minimum; this methodology is therefore avoided in this work. The algorithm was tested on several common truss examples from the literature and proved efficient in finding lower weights for the test problems.
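
    Since the abstract gives few algorithmic details, the sketch below only illustrates the stated design choice: a population search whose update uses each candidate's own history and a chaotic (logistic-map) step size, with no attraction toward a global best solution. It is not the proposed hurricane-inspired algorithm, and the sphere function is a placeholder for a real constrained truss-weight objective.

```python
import numpy as np

def sphere(x):                             # placeholder objective (minimise)
    return float(np.sum(x ** 2))

def chaotic_search(f, dim=5, pop=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (pop, dim))     # current positions
    best = x.copy()                        # each agent's personal best
    c = rng.uniform(0.1, 0.9, pop)         # one chaotic variable per agent
    for _ in range(iters):
        c = 4.0 * c * (1.0 - c)            # logistic map in its chaotic regime
        step = (c[:, None] - 0.5) * 2.0    # chaotic step scale in [-1, 1]
        x = best + step * rng.standard_normal((pop, dim))
        improved = np.array([f(xi) < f(bi) for xi, bi in zip(x, best)])
        best[improved] = x[improved]       # note: no global-best term anywhere
    scores = np.array([f(b) for b in best])
    return best[scores.argmin()], scores.min()

sol, val = chaotic_search(sphere)
print("best value:", val)
```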

    An Adaptive Modular Redundancy Technique to Self-regulate Availability, Area, and Energy Consumption in Mission-critical Applications

    As reconfigurable devices' capacities and the complexity of applications that use them increase, the need for self-reliance of deployed systems becomes increasingly prominent. A Sustainable Modular Adaptive Redundancy Technique (SMART) composed of a dual-layered organic system is proposed, analyzed, implemented, and experimentally evaluated. SMART relies upon a variety of self-regulating properties to control availability, energy consumption, and area used, in dynamically changing environments that require a high degree of adaptation. The hardware layer is implemented on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) to provide self-repair using a novel approach called a Reconfigurable Adaptive Redundancy System (RARS). The software layer supervises the organic activities within the FPGA and extends the self-healing capabilities through application-independent, intrinsic, evolutionary repair techniques to leverage the benefits of dynamic Partial Reconfiguration (PR). A SMART prototype is evaluated using a Sobel edge detection application. This prototype is shown to provide sustainability under stressful transient and permanent fault injection procedures while still reducing energy consumption and area requirements. An Organic Genetic Algorithm (OGA) technique is shown capable of consistently repairing hard faults while maintaining correct edge detector outputs, by exploiting spatial redundancy in the reconfigurable hardware. A Monte Carlo driven Continuous-Time Markov Chain (CTMC) simulation is conducted to compare SMART's availability to industry-standard Triple Modular Redundancy (TMR) techniques. Based on nine use cases, parameterized with realistic fault and repair rates acquired from publicly available sources, the results indicate that availability is significantly enhanced by the adoption of fast repair techniques targeting aging-related hard faults. Under harsh environments, SMART is shown to improve system availability from 36.02% with lengthy repair techniques to 98.84% with fast ones. This value increases to five nines (99.9998%) under relatively more favorable conditions. Lastly, SMART is compared to twenty-eight standard TMR benchmarks generated by the widely accepted BL-TMR tools. Results show that in seven out of nine use cases, SMART is the recommended technique, with power savings ranging from 22% to 29% and area savings ranging from 17% to 24%, while maintaining the same level of availability.
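
    To illustrate the availability comparison, here is a minimal Monte Carlo sketch of a single repairable component modelled as a two-state continuous-time Markov chain with exponential failure and repair times. The failure and repair rates are placeholders, not the published parameters, and the real study simulates full SMART and TMR configurations rather than one component.

```python
import random

def simulate_availability(fail_rate, repair_rate, horizon=1e6, seed=0):
    """Fraction of simulated time the component spends in the 'up' state."""
    rng = random.Random(seed)
    t, up_time, up = 0.0, 0.0, True
    while t < horizon:
        dwell = rng.expovariate(fail_rate if up else repair_rate)
        dwell = min(dwell, horizon - t)        # do not run past the simulation horizon
        if up:
            up_time += dwell
        t += dwell
        up = not up                            # alternate between failure and repair
    return up_time / horizon

lam = 1e-3                    # failures per hour (placeholder rate)
for mu in (1e-2, 1.0):        # slow vs. fast repair, per hour (placeholder rates)
    est = simulate_availability(lam, mu)
    exact = mu / (lam + mu)   # analytic steady-state availability of this 2-state CTMC
    print(f"repair rate {mu}: simulated {est:.4f}, analytic {exact:.4f}")
```

    The analytic value mu / (lam + mu) shows the same qualitative effect the abstract reports: faster repair dominates the availability figure.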

    The Maximum Clique Problem: Algorithms, Applications, and Implementations

    Computationally hard problems are routinely encountered during the course of solving practical problems. This is commonly dealt with by settling for less than optimal solutions, through the use of heuristics or approximation algorithms. This dissertation examines the alternate possibility of solving such problems exactly, through a detailed study of one particular problem, the maximum clique problem. It discusses algorithms, implementations, and the application of maximum clique results to real-world problems. First, the theoretical roots of the algorithmic method employed are discussed. Then a practical approach is described, which separates out important algorithmic decisions so that the algorithm can be easily tuned for different types of input data. This general and modifiable approach is also meant as a tool for research so that different strategies can easily be tried for different situations. Next, a specific implementation is described. The program is tuned, by use of experiments, to work best for two different graph types, real-world biological data and a suite of synthetic graphs. A parallel implementation is then briefly discussed and tested. After considering implementation, an example of applying these clique-finding tools to a specific case of real-world biological data is presented. Results are analyzed using both statistical and biological metrics. Then the development of practical algorithms based on clique-finding tools is explored in greater detail. New algorithms are introduced and preliminary experiments are performed. Next, some relaxations of clique are discussed along with the possibility of developing new practical algorithms from these variations. Finally, conclusions and future research directions are given
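
    For reference, a minimal exact branch-and-bound maximum clique search on an adjacency-set representation is sketched below. The simple size bound is a basic textbook pruning rule; the dissertation's tuned, parameterised implementation and its parallel variant go well beyond this.

```python
def max_clique(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    best = []

    def expand(clique, candidates):
        nonlocal best
        if len(clique) + len(candidates) <= len(best):
            return                                # bound: cannot beat the current best
        if not candidates:
            best = clique[:]                      # new largest clique found
            return
        for v in list(candidates):
            candidates.remove(v)                  # later branches exclude v
            expand(clique + [v], candidates & adj[v])

    expand([], set(adj))
    return best

# toy usage: a 5-vertex graph whose maximum clique is {1, 2, 3}
graph = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2, 5}, 5: {4}}
print(max_clique(graph))
```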