
    Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems

    A generic mechanism - networked buffering - is proposed for the generation of robust traits in complex systems. It requires two basic conditions to be satisfied: 1) agents are versatile enough to perform more than a single functional role within a system and 2) agents are degenerate, i.e. there exists partial overlap in the functional capabilities of agents. Given these prerequisites, degenerate systems can readily produce a distributed systemic response to local perturbations. Reciprocally, excess resources related to a single function can indirectly support multiple unrelated functions within a degenerate system. In models of genome:proteome mappings for which localized decision-making and modularity of genetic functions are assumed, we verify that such distributed compensatory effects cause enhanced robustness of system traits. The conditions needed for networked buffering to occur are neither demanding nor rare, supporting the conjecture that degeneracy may fundamentally underpin distributed robustness within several biotic and abiotic systems. For instance, networked buffering offers new insights into systems engineering and planning activities that occur under high uncertainty. It may also help explain recent developments in understanding the origins of resilience within complex ecosystems.
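    As a rough illustration of the two prerequisites above (versatility and degeneracy), the toy Python model below compares a small system of agents with partially overlapping capabilities against an equally provisioned but fully specialised one, and measures how often each can still meet demand after a single agent is removed. The agent sets, demands and feasibility test are illustrative assumptions, not the genome:proteome model used in the paper.

```python
# Toy model (assumed, not the paper's genome:proteome mapping): each agent
# contributes one unit of capacity to any single function it can perform.
from itertools import combinations

DEMAND = {"A": 2, "B": 2, "C": 2, "D": 1}   # units each function/trait requires

# Degenerate system: partial overlap between agents' functional capabilities.
degenerate = [{"A", "B"}, {"B", "C"}, {"C", "D"}, {"D", "A"},
              {"A", "B"}, {"B", "C"}, {"C", "D"}, {"D", "A"}]
# Specialised system: same total capacity, but no functional overlap.
specialised = [{"A"}, {"A"}, {"B"}, {"B"}, {"C"}, {"C"}, {"D"}, {"D"}]

def demand_met(agents, demand):
    """Can every function's demand be covered by agents able to perform it?
    Uses the max-flow cut condition: for every subset U of functions, the
    agents capable of serving something in U must number at least the total
    demand of U (each agent supplies one unit)."""
    funcs = list(demand)
    for r in range(1, len(funcs) + 1):
        for subset in combinations(funcs, r):
            need = sum(demand[f] for f in subset)
            supply = sum(1 for caps in agents if caps & set(subset))
            if supply < need:
                return False
    return True

def knockout_robustness(agents, demand):
    """Fraction of single-agent removals after which all demands are still met."""
    survived = sum(demand_met(agents[:i] + agents[i + 1:], demand)
                   for i in range(len(agents)))
    return survived / len(agents)

print("degenerate :", knockout_robustness(degenerate, DEMAND))   # 1.0
print("specialised:", knockout_robustness(specialised, DEMAND))  # 0.25
```

    In this sketch the degenerate configuration tolerates every single-agent knockout, because spare capacity anywhere in the overlapping network can be redirected to the perturbed function, whereas the specialised system only survives the loss of its one under-used role.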

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of inferring misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is examined, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together, updating each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
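    A minimal sketch of the hybrid arrangement described above is given below: a small rule base screens call events for known misuse signatures, while a learned profile of normal behaviour flags unfamiliar patterns as possible unknown misuse. The event fields, rules and thresholds are invented for illustration and are not taken from the report.

```python
# Hybrid misuse detection sketch: programmed signatures plus a learned profile.
from collections import Counter
import math

# --- Signature component: known misuse encoded as rules over a call record ---
RULES = [
    ("premium_rate_burst", lambda e: e["dest"].startswith("900") and e["duration"] < 5),
    ("night_international", lambda e: e["international"] and 0 <= e["hour"] < 5),
]

def match_rules(event):
    """Return the names of any known-misuse rules the event triggers."""
    return [name for name, pred in RULES if pred(event)]

# --- Anomaly component: per-subscriber profile of normal destination usage ---
class UsageProfile:
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def train(self, events):
        """Learn how often each destination prefix appears in normal traffic."""
        for e in events:
            self.counts[e["dest_prefix"]] += 1
            self.total += 1

    def surprise(self, event):
        """Negative log-likelihood of the destination prefix under the profile
        (Laplace-smoothed); high values mean 'unlike anything seen in training'."""
        p = (self.counts[event["dest_prefix"]] + 1) / (self.total + len(self.counts) + 1)
        return -math.log(p)

def classify(event, profile, threshold=4.0):
    """Rules catch known misuse; the profile flags anomalous, possibly new misuse."""
    hits = match_rules(event)
    if hits:
        return "known misuse: " + ", ".join(hits)
    if profile.surprise(event) > threshold:
        return "possible unknown misuse (anomalous)"
    return "normal"
```

    A real system would learn far richer profiles (event sequences, volumes, time of day) and manage the rule base over time; the point here is only the division of labour between the programmed and the learned components that the report identifies.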

    "Going back to our roots": second generation biocomputing

    Researchers in the field of biocomputing have, for many years, successfully "harvested and exploited" the natural world for inspiration in developing systems that are robust, adaptable and capable of generating novel and even "creative" solutions to human-defined problems. However, in this position paper we argue that the time has now come for a reassessment of how we exploit biology to generate new computational systems. Previous solutions (the "first generation" of biocomputing techniques), whilst reasonably effective, are crude analogues of actual biological systems. We believe that a new, inherently inter-disciplinary approach is needed for the development of the emerging "second generation" of bio-inspired methods. This new modus operandi will require much closer interaction between the engineering and life sciences communities, as well as a bidirectional flow of concepts, applications and expertise. We support our argument by examining, in this new light, three existing areas of biocomputing (genetic programming, artificial immune systems and evolvable hardware), as well as an emerging area (natural genetic engineering) which may provide useful pointers as to the way forward. Comment: Submitted to the International Journal of Unconventional Computing

    Robust, reproducible, industrialized, standard membrane feeding assay for assessing the transmission blocking activity of vaccines and drugs against Plasmodium falciparum.

    Background: A vaccine that interrupts malaria transmission (VIMT) would be a valuable tool for malaria control and elimination. One VIMT approach is to identify sexual erythrocytic and mosquito stage antigens of the malaria parasite that induce immune responses targeted at disrupting parasite development in the mosquito. The standard Plasmodium falciparum membrane-feeding assay (SMFA) is used to assess the transmission-blocking activity (TBA) of antibodies against candidate immunogens and of drugs targeting the mosquito stages. To develop its P. falciparum sporozoite (SPZ) products, Sanaria has industrialized the production of P. falciparum-infected Anopheles stephensi mosquitoes, incorporating quantitative analyses of oocyst and P. falciparum SPZ infections as part of the manufacturing process. Methods: These capabilities were exploited to develop a robust, reliable, consistent SMFA that was used to assess 188 serum samples from animals immunized with the candidate vaccine immunogen, Pfs25, targeting P. falciparum mosquito stages. Seventy-four independent SMFAs were performed. Infection intensity (number of oocysts/mosquito) and infection prevalence (percentage of mosquitoes infected with oocysts) were compared between mosquitoes fed cultured gametocytes plus normal human O(+) serum (negative control), anti-Pfs25 polyclonal antisera (MRA39 or MRA38, at a final dilution in the blood meal of 1:54, as positive controls), and test sera from animals immunized with Pfs25 (at a final dilution in the blood meal of 1:9). Results: SMFA negative controls consistently yielded high infection intensity (mean = 46.1 oocysts/midgut, range of positives 3.7-135.6) and infection prevalence (mean = 94.2%, range 71.4-100.0%). In positive controls, infection intensity was reduced by 81.6% (anti-Pfs25 MRA39) and 97.0% (anti-Pfs25 MRA38), and infection prevalence was reduced by 12.9 and 63.5%, respectively. A range of TBAs was detected among the 188 test samples assayed in duplicate. Consistent administration of infectious gametocytes to mosquitoes within and between assays was achieved, and the TBA of anti-Pfs25 control antibodies was highly reproducible. Conclusions: These results demonstrate a robust capacity to perform the SMFA in a medium-to-high-throughput format, suitable for assessing large numbers of experimental samples of candidate antibodies or drugs.
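    The reported reductions follow from simple percent-reduction arithmetic relative to the negative control, as sketched below. The test-feed values are back-calculated from the anti-Pfs25 MRA38 figures quoted above and are shown purely as a worked example; the study's own statistical treatment may be more involved.

```python
def percent_reduction(control, test):
    """Percent reduction of a test feed relative to the negative control."""
    return 100.0 * (control - test) / control

# Negative-control means reported above.
control_intensity = 46.1    # mean oocysts per midgut
control_prevalence = 94.2   # % of mosquitoes with at least one oocyst

# Test-feed values back-calculated from the anti-Pfs25 MRA38 reductions
# quoted above (97.0% and 63.5%); illustrative, not raw assay data.
test_intensity = 1.4
test_prevalence = 34.4

print(f"oocyst intensity reduction:     {percent_reduction(control_intensity, test_intensity):.1f}%")
print(f"infection prevalence reduction: {percent_reduction(control_prevalence, test_prevalence):.1f}%")
```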

    Applications of Soft Computing in Mobile and Wireless Communications

    Soft computing is a synergistic combination of artificial intelligence methodologies used to model and solve real-world problems that are either impossible or too difficult to model mathematically. Furthermore, the use of conventional modeling techniques demands rigor, precision and certainty, which carry a computational cost. Soft computing, on the other hand, uses computation, reasoning and inference to reduce computational cost by exploiting tolerance for imprecision, uncertainty, partial truth and approximation. In addition to computational cost savings, soft computing is an excellent platform for autonomic computing, owing to its roots in artificial intelligence. Wireless communication networks are associated with much uncertainty and imprecision due to a number of stochastic processes, such as an escalating number of access points, constantly changing propagation channels, sudden variations in network load and the random mobility of users. This reality has fuelled numerous applications of soft computing techniques in mobile and wireless communications. This paper reviews various applications of the core soft computing methodologies in mobile and wireless communications.
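    As a concrete flavour of the methodologies reviewed, the sketch below implements a tiny fuzzy-logic handover decision driven by signal strength and cell load. The scenario, membership functions, rules and thresholds are invented for illustration and do not come from the paper.

```python
# Minimal fuzzy-logic sketch: decide how urgently to hand a call over,
# tolerating imprecise inputs instead of hard thresholds.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def handover_urgency(rssi_dbm, load_pct):
    # Fuzzify the crisp inputs.
    signal_weak   = tri(rssi_dbm, -110, -100, -85)
    signal_strong = tri(rssi_dbm, -90, -70, -50)
    load_high     = tri(load_pct, 50, 90, 130)
    load_low      = tri(load_pct, -30, 10, 60)

    # Rule base (min for AND); each rule votes for an urgency level in [0, 1].
    rules = [
        (min(signal_weak, load_high), 1.0),    # weak signal AND high load -> hand over
        (signal_weak,                 0.7),    # weak signal alone -> probably hand over
        (min(signal_strong, load_low), 0.0),   # strong signal AND low load -> stay
    ]
    # Defuzzify with a weighted average of the rule consequents.
    total = sum(w for w, _ in rules)
    return sum(w * u for w, u in rules) / total if total else 0.0

if __name__ == "__main__":
    print(handover_urgency(-102, 85))   # degraded link on a busy cell -> high urgency
    print(handover_urgency(-65, 20))    # healthy link on a quiet cell -> low urgency
```

    The appeal of this style in wireless settings is exactly the tolerance for imprecision noted above: noisy signal measurements and fluctuating load shift the output gradually rather than toggling a brittle threshold.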