    Automatic Detection of Freshwater Phytoplankton Specimens in Conventional Microscopy Images

    Get PDF
    [Abstract] Water safety and quality can be compromised by the proliferation of toxin-producing phytoplankton species, requiring continuous monitoring of water sources. This analysis involves identifying and counting these species, which requires broad experience and knowledge. Automating these tasks is highly desirable, as it would release experts from tedious work, eliminate subjective factors, and improve repeatability. Thus, in this preliminary work, we propose to advance towards an automatic methodology for phytoplankton analysis in digital images of water samples acquired using regular microscopes. In particular, we propose a novel and fully automatic method to detect and segment the phytoplankton specimens present in these images using classical computer vision algorithms. The proposed method correctly detects sparse colonies as single phytoplankton candidates, thanks to a novel fusion algorithm, and differentiates phytoplankton specimens from other objects in the microscope samples (such as minerals, bubbles, or detritus) using a machine learning based approach that exploits texture and colour features. Our preliminary experiments demonstrate that the proposed method provides satisfactory and accurate results. This work is supported by the European Regional Development Fund (ERDF) of the European Union and Xunta de Galicia through Centro de Investigación del Sistema Universitario de Galicia, ref. ED431G 2019/01.
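
    The detect-then-classify pipeline described above can be sketched briefly. The following is a minimal Python illustration using OpenCV and scikit-learn, assuming Otsu thresholding, morphological fusion of nearby fragments, and simple colour/texture statistics; the actual feature set, fusion algorithm, and classifier in the paper may differ.

    import cv2
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def detect_candidates(image_bgr, min_area=50):
        """Segment candidate specimens with classical CV: threshold + contours."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        # Otsu thresholding separates dark specimens from the bright background.
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        # Morphological closing fuses nearby fragments so that a sparse colony
        # is detected as a single candidate (a stand-in for the fusion step).
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [c for c in contours if cv2.contourArea(c) >= min_area]

    def colour_texture_features(image_bgr, contour):
        """Simple colour + texture descriptor for one candidate region."""
        x, y, w, h = cv2.boundingRect(contour)
        patch = image_bgr[y:y + h, x:x + w]
        colour = patch.reshape(-1, 3).mean(axis=0)  # mean B, G, R
        gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
        texture = [gray.std(), cv2.Laplacian(gray, cv2.CV_64F).var()]
        return np.concatenate([colour, texture])

    # A classifier trained on labelled candidates then separates phytoplankton
    # from minerals, bubbles, and detritus:
    # clf = RandomForestClassifier().fit(train_features, train_labels)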

    Secure Computing, Economy, and Trust: A Generic Solution for Secure Auctions with Real-World Applications

    Get PDF
    In this paper we consider the problem of constructing secure auctions based on techniques from modern cryptography. We combine knowledge from economics, cryptography, and security engineering to develop and implement secure auctions for practical real-world problems. In essence, this paper is an overview of the research project SCET (Secure Computing, Economy, and Trust), which attempts to build auctions for real applications using secure multiparty computation. The main contributions of this project are: a generic setup for secure evaluation of integer arithmetic including comparisons; general double auctions expressed by such operations; a real-world double auction tailored to the complexity and performance of the basic primitives '+' and
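
    To make the "generic setup for secure evaluation of integer arithmetic" concrete, here is a toy Python sketch of additive secret sharing, one standard building block for secure multiparty computation; the modulus, party count, and function names are assumptions for the example, not the SCET protocol itself.

    import random

    P = 2**61 - 1  # a public prime modulus (chosen here only for illustration)

    def share(secret, n_parties=3):
        """Split a secret into n additive shares that sum to it modulo P."""
        shares = [random.randrange(P) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        return sum(shares) % P

    def add_shares(shares_a, shares_b):
        # '+' is cheap: each party adds its own shares locally, with no
        # communication, which is why it serves as a basic primitive.
        return [(a + b) % P for a, b in zip(shares_a, shares_b)]

    bid_a, bid_b = share(42), share(17)
    assert reconstruct(add_shares(bid_a, bid_b)) == 59

    # Secure comparison, by contrast, requires an interactive sub-protocol,
    # which is why the paper treats it as a separate basic primitive.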

    Radio Frequency Fingerprinting Techniques through Preamble Modification in IEEE 802.11b

    Get PDF
    Wireless local area networks are particularly vulnerable to cyber attacks due to their contested transmission medium. Access point spoofing, route poisoning, and cryptographic attacks are some of the many mature threats faced by wireless networks. Recent work investigates physical-layer features such as received signal strength or radio frequency fingerprinting to identify and localize malicious devices. This thesis demonstrates a novel and complementary approach to exploiting physical-layer differences among wireless devices that is more energy efficient and invariant with respect to the environment than traditional fingerprinting techniques. Specifically, this methodology exploits subtle design differences among different transceiver hardware types. A software defined radio captures packets with standard-length IEEE 802.11b preambles, manipulates the recorded preambles by shortening their length, then replays the altered packets toward the transceivers under test. Wireless transceivers vary in their ability to receive packets with preambles shorter than the standard. By analyzing differences in packet reception with respect to preamble length, this methodology distinguishes among eight transceiver types from three manufacturers. All tests to successfully enumerate the transceivers achieve accuracy rates greater than 99%, while transmitting fewer than 60 test packets. This research extends previous work illustrating RF fingerprinting techniques through IEEE 802.15.4 wireless protocols. The results demonstrate that preamble manipulation is effective for multi-factor device authentication, network intrusion detection, and remote transceiver type fingerprinting in IEEE 802.11b.
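
    The classification step reduces to matching an observed reception-rate profile against known per-type profiles. A toy Python sketch follows, with invented profile values; the actual work measures reception over many preamble lengths and achieves the >99% accuracy reported above.

    import numpy as np

    # Rows: transceiver types; columns: packet reception rate at each tested
    # (shortened) preamble length. These numbers are invented for illustration.
    known_profiles = {
        "type_a": np.array([1.00, 0.98, 0.40, 0.02]),
        "type_b": np.array([1.00, 0.95, 0.90, 0.60]),
        "type_c": np.array([1.00, 0.50, 0.05, 0.00]),
    }

    def fingerprint(observed_rates):
        """Return the known type whose profile is nearest in Euclidean distance."""
        return min(known_profiles,
                   key=lambda t: np.linalg.norm(known_profiles[t] - observed_rates))

    print(fingerprint(np.array([1.00, 0.97, 0.85, 0.55])))  # -> type_b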

    A Framework for File Format Fuzzing with Genetic Algorithms

    Get PDF
    Secure software, meaning software free from vulnerabilities, is desirable in today's marketplace. Consumers are beginning to value a product's security posture as well as its functionality. Software development companies are recognizing this trend, and they are factoring security into their entire software development lifecycle. Secure development practices like threat modeling, static analysis, safe programming libraries, run-time protections, and software verification are being mandated during product development. Mandating these practices improves a product's security posture before customer delivery, and these practices increase the difficulty of discovering and exploiting vulnerabilities. Since the 1980s, security researchers have uncovered software defects by fuzz testing applications. In fuzz testing's infancy, randomly generated data could discover multiple defects quickly. However, as software matures and software development companies integrate secure development practices into their development life cycles, fuzzers must apply more sophisticated techniques to retain their ability to uncover defects. Fuzz testing must evolve, and fuzz testing practitioners must devise new algorithms to exercise an application in unexpected ways. This dissertation's objective is to create a proof-of-concept genetic algorithm fuzz testing framework to exercise an application's file format parsing routines. The framework includes multiple genetic algorithm variations, provides a configuration scheme, and correlates data gathered from static and dynamic analysis to guide negative test case evolution. Experiments conducted for this dissertation illustrate the effectiveness of a genetic algorithm fuzzer in comparison to standard fuzz testing tools. The experiments showcase a genetic algorithm fuzzer's ability to discover multiple unique defects within a limited number of negative test cases. These experiments also highlight an application's increased execution time when fuzzing with a genetic algorithm. To combat increased execution time, a distributed architecture is implemented, and additional experiments demonstrate a decrease in execution time comparable to standard fuzz testing tools. A final set of experiments provides guidance on fitness function selection for a CHC genetic algorithm fuzzer with different population size configurations.
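
    The core loop of a genetic algorithm fuzzer can be sketched briefly. The following Python is a minimal illustration, assuming byte-level mutation and one-point crossover; the framework's actual operators, selection scheme, and coverage-based fitness function are more sophisticated.

    import random

    def mutate(data, rate=0.01):
        """Randomly perturb bytes of a test case at the given per-byte rate."""
        return bytes(b ^ random.randrange(256) if random.random() < rate else b
                     for b in data)

    def crossover(a, b):
        """One-point crossover (assumes test cases of length >= 2)."""
        cut = random.randrange(1, min(len(a), len(b)))
        return a[:cut] + b[cut:]

    def evolve(seed, fitness, pop_size=32, generations=50):
        population = [mutate(seed) for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population, key=fitness, reverse=True)
            parents = scored[:pop_size // 2]  # truncation selection
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    # Demo with a toy fitness (count of 0xFF bytes); a real fitness would use
    # coverage or crash data from an instrumented run of the target's parser.
    best = evolve(b"\x00" * 64, fitness=lambda d: d.count(0xFF))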

    Automated Fixing of Programs with Contracts

    Full text link
    This paper describes AutoFix, an automatic debugging technique that can fix faults in general-purpose software. To provide high-quality fix suggestions and to enable automation of the whole debugging process, AutoFix relies on the presence of simple specification elements in the form of contracts (such as pre- and postconditions). Using contracts enhances the precision of dynamic analysis techniques for fault detection and localization, and for validating fixes. The only required user input to the AutoFix supporting tool is then a faulty program annotated with contracts; the tool produces a collection of validated fixes for the fault, ranked according to an estimate of their suitability. In an extensive experimental evaluation, we applied AutoFix to over 200 faults in four code bases of different maturity and quality (of implementation and of contracts). AutoFix successfully fixed 42% of the faults, producing, in the majority of cases, corrections of quality comparable to those competent programmers would write; the computational resources used were modest, with an average time per fix below 20 minutes on commodity hardware. These figures compare favorably to the state of the art in automated program fixing and demonstrate that the AutoFix approach is applicable to reducing the debugging burden in real-world scenarios.
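
    The role contracts play in validating candidate fixes can be illustrated with a small Python sketch. The withdraw example, helper names, and validation policy here are invented for illustration; AutoFix itself works on contract-annotated programs and ranks its validated fixes.

    def pre(balance, amount):
        """Precondition (contract): the requested amount must be available."""
        return 0 <= amount <= balance

    def withdraw(balance, amount):  # a candidate-fixed routine
        new_balance = balance - amount
        assert new_balance >= 0, "postcondition violated"  # contract check
        return new_balance

    def validate_fix(fixed_fn, test_inputs):
        """Accept a fix only if every precondition-satisfying input runs
        without violating a postcondition."""
        for balance, amount in test_inputs:
            if not pre(balance, amount):
                continue  # inputs outside the precondition do not count
            try:
                fixed_fn(balance, amount)
            except AssertionError:
                return False
        return True

    print(validate_fix(withdraw, [(100, 30), (10, 50), (5, 5)]))  # True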

    Self-stabilizing sorting algorithms

    Full text link
    A distributed system consists of a set of machines which do not share a global memory. Depending on the connectivity of the network, each machine gets a partial view of the global state. Transient failures in one area of the network may go unnoticed in other areas and may cause the system to go to an illegal global state. However, if the system were self-stabilizing, it would be guaranteed that, regardless of the current state, the system would recover to a legal configuration in a finite number of moves. The traditional way of creating reliable systems is to use redundant components; self-stabilization allows systems to be fault tolerant through software as well. This is an evolving paradigm in the design of robust distributed systems. The ability to recover spontaneously from an arbitrary state makes self-stabilizing systems immune to transient failures or perturbations of the system state, such as changes in network topology. This thesis presents an O(nh) fault-tolerant distributed sorting algorithm for a tree network, where n is the number of nodes in the system and h is the height of the tree. Fault-tolerance is achieved using Dijkstra's paradigm of self-stabilization, a method of non-masking fault-tolerance that embeds the fault-tolerance within the algorithm. Varghese's counter flushing method is used to achieve synchronization among processes in the system. In the distributed sorting problem, each node is given a value and an id which are non-corruptible. The idea is to have each node take a specific value based on its id. The algorithm handles transient faults by weeding out false information in the system. Nodes can start with completely false information concerning the values and ids of the system, yet the intended behavior is still achieved. Nodes are also allowed to crash and re-enter the system later, and new nodes may enter the system.
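
    The convergence guarantee (a legal configuration is reached from any initial state in a finite number of moves) can be simulated in a few lines of Python. This toy simplifies the thesis's tree algorithm to a line of nodes with a swap-if-out-of-order local rule; the real algorithm, its counter flushing, and its O(nh) bound are not captured here.

    import random

    def stabilize(values):
        """Apply local moves until the legal (sorted) configuration holds."""
        moves = 0
        while True:
            # Any node whose value exceeds its right neighbour's is "enabled";
            # a non-deterministic scheduler picks one, as in the formal model.
            enabled = [i for i in range(len(values) - 1)
                       if values[i] > values[i + 1]]
            if not enabled:  # legal configuration reached
                return values, moves
            i = random.choice(enabled)
            values[i], values[i + 1] = values[i + 1], values[i]  # local move
            moves += 1

    state = [random.randrange(100) for _ in range(8)]  # arbitrary (faulty) start
    print(stabilize(state))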

    Liberal Studies and Servant Leadership: Inspiring Ignatian values at the margins

    Get PDF
    This paper aims to unpack how the concept of servant leadership is perceived by Liberal Studies graduates living at the margins of society. Anchoring this research in the work of Jesuit Worldwide Learning (JWL), a faith-based organization providing higher education to displaced and poor communities, the paper seeks to deconstruct the assumption, present in the literature, of an ‘already-in-power’ servant leader by looking for servant leaders in vulnerable and marginalized places, where they would be most needed. By synthesizing the voices of more than 100 graduates of the Diploma in Liberal Studies program, this research examines how graduates define servant leadership from an unprivileged perspective and how they could develop and apply this approach in their daily lives. Through this analysis, certain values and skills emerge, contributing to a mapping of a servant leader’s attributes. The paper concludes on the effect of democratizing the definition of servant leadership, with the aim of serving a wider population and having a stronger impact on communities.