2,893 research outputs found

    Testing an Intentional Naming Scheme Using Genetic Algorithms

    Various attempts have been made to use genetic algorithms (GAs) for software testing, a problem that consumes a large amount of time and effort in software development. We demonstrate the use of GAs in automating the testing of complex data structures and the methods that manipulate them, which to our knowledge has not previously been demonstrated on non-trivial software structures. We evaluate the effectiveness of our GA-based test-suite generation technique by applying it to test the design and implementation of the Intentional Naming System (INS), a new scheme for resource discovery and service location in a dynamic networked environment. Our analysis using GAs reveals serious problems with both the design of INS and its inventors' implementation.
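    The abstract does not spell out the paper's test encoding or fitness function, so the following is only a minimal sketch of GA-based test-input generation: a byte-sequence genome with a hypothetical diversity-based fitness standing in for coverage of the structure under test. The `fitness` function, genome encoding, and all parameters are illustrative assumptions, not the authors' method.

```python
# Sketch of GA-based test-input generation. The fitness function is a
# hypothetical stand-in (rewarding diverse operation codes as a proxy for
# exercising many branches); a real harness would measure coverage or
# failures of the data structure under test.
import random

POP_SIZE, GENOME_LEN, GENERATIONS = 50, 20, 100
MUTATION_RATE = 0.05

def fitness(genome):
    return len(set(genome))  # placeholder: diversity as a coverage proxy

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [random.randrange(256) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [[random.randrange(256) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]       # truncation selection
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)  # the fittest test input found
```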

    Artificial Intelligence and Discrimination in Health Care

    Artificial intelligence (AI) holds great promise for improved health care outcomes. It has been used to analyze tumor images, to help doctors choose among different treatment options, and to combat the COVID-19 pandemic. But AI also poses substantial new hazards. This Article focuses on a particular type of health care harm that has thus far evaded significant legal scrutiny: algorithmic discrimination. Algorithmic discrimination in health care occurs with surprising frequency. A well-known example is an algorithm used to identify candidates for “high-risk care management” programs that routinely failed to refer racial minorities for these beneficial services. Furthermore, some algorithms deliberately adjust for race in ways that hurt minority patients. For example, according to a 2020 New England Journal of Medicine article, algorithms have regularly underestimated African Americans' risks of kidney stones, death from heart failure, and other medical problems.
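    As a hedged illustration of the race-adjustment mechanism the Article describes (with invented numbers, not any published model's coefficients), a multiplicative "race correction" can push a patient with identical clinical inputs below a referral cutoff:

```python
# Illustrative only: hypothetical coefficients, not any real clinical model.
# A downward race adjustment makes two patients with identical clinical
# inputs land on opposite sides of a referral threshold.
def heart_failure_risk(base_score, black_patient):
    RACE_FACTOR = 0.80          # hypothetical downward adjustment
    return base_score * (RACE_FACTOR if black_patient else 1.0)

REFERRAL_CUTOFF = 0.50
print(heart_failure_risk(0.55, black_patient=False) >= REFERRAL_CUTOFF)  # True
print(heart_failure_risk(0.55, black_patient=True) >= REFERRAL_CUTOFF)   # False
```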

    Are developmental disorders like cases of adult brain damage? Implications from connectionist modelling

    It is often assumed that similar domain-specific behavioural impairments found in cases of adult brain damage and developmental disorders correspond to similar underlying causes, and can serve as convergent evidence for the modular structure of the normal adult cognitive system. We argue that this correspondence is contingent on an unsupported assumption that atypical development can produce selective deficits while the rest of the system develops normally (Residual Normality), and that this assumption tends to bias data collection in the field. Based on a review of connectionist models of acquired and developmental disorders in the domains of reading and past tense, as well as on new simulations, we explore the computational viability of Residual Normality and the potential role of development in producing behavioural deficits. Simulations demonstrate that damage to a developmental model can produce very different effects depending on whether it occurs prior to or following the training process. Because developmental disorders typically involve damage prior to learning, we conclude that the developmental process is a key component of the explanation of end-state impairments in such disorders. Further simulations demonstrate that in simple connectionist learning systems, the assumption of Residual Normality is undermined by processes of compensation or alteration elsewhere in the system. We outline the precise computational conditions required for Residual Normality to hold in development, and suggest that in many cases it is an unlikely hypothesis. We conclude that in developmental disorders, inferences from behavioural deficits to underlying structure crucially depend on developmental conditions, and that the process of ontogenetic development cannot be ignored in constructing models of developmental disorders.
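    A minimal sketch of the before/after-training lesion contrast the simulations describe, using a toy two-layer network on a synthetic mapping; the architecture, task, and lesion size are placeholder assumptions, not the paper's reading or past-tense models. Lesioning hidden units before training lets the remaining units compensate during learning, while the same lesion applied after training removes structure the intact network had come to rely on.

```python
# Toy demonstration: the same hidden-unit lesion hurts far less when it is
# present throughout learning ("developmental") than when it is applied to
# an already-trained network ("acquired").
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
Y = np.tanh(X @ rng.standard_normal((8, 3)))     # synthetic target mapping
HIDDEN = 32

def predict(W1, W2, mask):
    return (np.tanh(X @ W1) * mask) @ W2         # mask silences lesioned units

def train(mask, epochs=3000, lr=0.1):
    W1 = rng.standard_normal((8, HIDDEN)) * 0.3
    W2 = rng.standard_normal((HIDDEN, 3)) * 0.3
    for _ in range(epochs):                       # plain gradient descent
        H = np.tanh(X @ W1) * mask
        d_out = (H @ W2 - Y) / len(X)
        dW1 = X.T @ ((d_out @ W2.T) * (1 - H**2) * mask)
        W1, W2 = W1 - lr * dW1, W2 - lr * (H.T @ d_out)
    return W1, W2

def mse(W1, W2, mask):
    return float(np.mean((predict(W1, W2, mask) - Y) ** 2))

lesion = (rng.random(HIDDEN) > 0.5).astype(float)   # kill ~half the hidden units
intact = np.ones(HIDDEN)

W1d, W2d = train(lesion)   # developmental: damage present during learning
W1a, W2a = train(intact)   # acquired: train intact, then lesion at test time
print(f"lesion before training: {mse(W1d, W2d, lesion):.4f}")
print(f"lesion after training:  {mse(W1a, W2a, lesion):.4f}")
```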

    Unified architecture of mobile ad hoc network security (MANS) system

    In this dissertation, a unified architecture for a Mobile Ad-hoc Network Security (MANS) system is proposed, under which IDS agents, authentication, recovery policy, and other policies can be defined formally and explicitly, and are enforced by a uniform architecture. A new authentication model for high-value transactions in cluster-based MANETs is also designed within the MANS system. This model is motivated by previous work but aims to retain its strengths while avoiding its shortcomings: it uses threshold sharing of the certificate-signing key within each cluster to distribute the certificate services, and uses certificate chains and a certificate repository to achieve better scalability, lower overhead, and better security performance. An intrusion detection system is installed in every node; it is responsible for collecting local data from its host node and from neighbor nodes within its communication range, pre-processing the raw data and periodically broadcasting it to its neighborhood, and classifying behavior as normal or abnormal based on the pre-processed data from its host node and neighbor nodes. The security recovery policy in ad hoc networks is the procedure of making a global decision based on messages received from the distributed IDS and restoring the whole system to operational health whenever a user or host conducts inappropriate, incorrect, or anomalous activities that threaten the connectivity or reliability of the network or the authenticity of its data traffic. Finally, a quantitative risk assessment model is proposed to numerically evaluate MANS security.
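    The abstract does not detail how the certificate-signing key is threshold-shared, so the following is only a minimal sketch of one standard way to do (k, n) threshold sharing, Shamir's scheme over a prime field; the field size, secret, and parameters are chosen purely for illustration and are not MANS's actual values.

```python
# Minimal Shamir-style (k, n) threshold sharing: any k of n shares recover
# the secret (e.g., a certificate-signing key), fewer reveal nothing.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def make_shares(secret, k, n):
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```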

    PIRANHA: an engine for a methodology of detecting covert communication via image-based steganography

    In current cutting-edge steganalysis research, model building and machine learning have been used to detect steganography. However, these models are computationally and cognitively cumbersome, and each is targeted at one and only one type of steganography. The model built and used in this thesis has shown capability in detecting a class or family of steganography, while also demonstrating that it is viable to construct a minimalist model for steganalysis. The notion of detecting steganographic primitives or families has not been discussed in the literature, and it would serve well as a first-pass steganographic detection methodology. The model built here serves this end well; it should be kept in mind that the model presented is posited to work as a front-end, broad-pass filter for some of the more computationally advanced and narrowly targeted steganalytic algorithms currently in use. This thesis attempts to convey a view of steganography and steganalysis that is more utilitarian and immediately useful in everyday scenarios. This differs from a good many publications that treat the topic as relegated only to cloak-and-dagger information passing. The resulting view of steganography as primarily a communications tool usable by petty information brokers and the like directs the text and helps ensure that the notion of steganography as a digital dead-drop box is abandoned in favor of a more grounded approach. As such, the model presented underperforms the specialized models in the current literature, but it also makes use of a large image sample space (747 images) and of images that are contextually diverse and representative of those in wide use. In future applications by law enforcement or corporate officials, it is hoped that the model presented in this thesis can aid rapid and targeted responses without placing undue strain on an eventual human operator. To that end, a design constraint adopted for this research favored a false negative over a false positive; this helps ensure that, when an alert does occur, it is worthwhile to apply a more directed attack against the flagged image.
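    A minimal sketch of the stated design bias, favoring false negatives over false positives: pick the alert threshold from the score distribution of clean images so that the false-positive rate stays under a target, accepting that more stego images slip through. The score distributions below are synthetic stand-ins, not the thesis's classifier output.

```python
# Threshold selection biased toward false negatives: cap the false-positive
# rate so that every alert is worth a human operator's attention.
import numpy as np

rng = np.random.default_rng(1)
clean_scores = rng.normal(0.3, 0.10, 500)   # cover images (negative class)
stego_scores = rng.normal(0.6, 0.15, 500)   # stego images (positive class)

MAX_FP_RATE = 0.01                           # target: at most 1% false alarms
threshold = np.quantile(clean_scores, 1 - MAX_FP_RATE)

fp_rate = np.mean(clean_scores >= threshold)  # rare, by construction
fn_rate = np.mean(stego_scores < threshold)   # the accepted cost
print(f"threshold={threshold:.3f}  FP rate={fp_rate:.3f}  FN rate={fn_rate:.3f}")
```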

    Learning Human Behavior From Observation For Gaming Applications

    The gaming industry has reached a point where improving graphics has only a small effect on how much a player will enjoy a game. One focus has turned to adding more humanlike characteristics to computer game agents. Machine learning techniques are rarely used in games, although they offer powerful means for creating humanlike behaviors in agents. The first-person shooter (FPS) Quake 2 is an open-source game that offers a multi-agent environment in which to create game agents (bots). This work combines neural networks with a modeling paradigm known as context-based reasoning (CxBR) to create a contextual game observation (CONGO) system that produces Quake 2 agents that behave as a human player trains them to act. A default level of intelligence is instilled into the bots through contextual scripts, to prevent a bot from being trained to be completely useless. The results show that humanness and entertainment value improved compared to a traditional scripted bot, although CONGO bots usually ranked only slightly above a novice skill level. Overall, CONGO offers the gaming community a mode of game play with promising entertainment value.
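    A minimal sketch of the CxBR-style control loop described above: a context is selected from the game state and that context's learned policy acts, with a scripted default as the fallback. The context names, state fields, and policy stand-ins are assumptions for illustration, not Quake 2's or CONGO's actual interfaces.

```python
# Context-based reasoning dispatcher: pick a context from the game state,
# run its (nominally learned) policy, fall back to a scripted default.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class GameState:
    health: int
    enemy_visible: bool

def attack_policy(s): return "fire_weapon"      # stand-ins for trained
def retreat_policy(s): return "run_to_cover"    # neural-network policies
def scripted_default(s): return "patrol"        # default contextual script

CONTEXTS: Dict[str, Callable[[GameState], str]] = {
    "attack": attack_policy,
    "retreat": retreat_policy,
}

def select_context(s: GameState) -> str:
    if s.health < 30:
        return "retreat"        # survival overrides engagement
    if s.enemy_visible:
        return "attack"
    return "default"

def act(s: GameState) -> str:
    policy = CONTEXTS.get(select_context(s), scripted_default)
    return policy(s)

print(act(GameState(health=80, enemy_visible=True)))   # fire_weapon
print(act(GameState(health=20, enemy_visible=True)))   # run_to_cover
print(act(GameState(health=80, enemy_visible=False)))  # patrol
```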

    Rate Me: Risk Assessment Drones and the Resurrection of Discriminatory Insurance Practices
