8,587 research outputs found

    Towards Sybil Resilience in Decentralized Learning

    Federated learning is a privacy-enforcing machine learning technology but suffers from limited scalability. This limitation mostly originates from the internet connection and memory capacity of the central parameter server, and from the complexity of the model aggregation function. Decentralized learning has recently been emerging as a promising alternative to federated learning. This novel technology eliminates the need for a central parameter server by decentralizing the model aggregation across all participating nodes. Numerous studies have been conducted on improving the resilience of federated learning against poisoning and Sybil attacks, whereas the resilience of decentralized learning remains largely unstudied. This research gap serves as the main motivator for this study, in which our objective is to improve the Sybil poisoning resilience of decentralized learning. We present SybilWall, an innovative algorithm focused on increasing the resilience of decentralized learning against targeted Sybil poisoning attacks. By combining a Sybil-resistant aggregation function based on similarity between Sybils with a novel probabilistic gossiping mechanism, we establish a new benchmark for scalable, Sybil-resilient decentralized learning. A comprehensive empirical evaluation demonstrated that SybilWall outperforms existing state-of-the-art solutions designed for federated learning scenarios and is the only algorithm to obtain consistent accuracy over a range of adversarial attack scenarios. We also found SybilWall to diminish the utility of creating many Sybils, as our evaluations demonstrate a higher success rate among adversaries employing fewer Sybils. Finally, we suggest a number of possible improvements to SybilWall and highlight promising future research directions.
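
    The abstract above does not spell out SybilWall's aggregation rule, so the following is only an illustrative sketch of the general idea behind similarity-based Sybil-resistant aggregation: peer updates that are suspiciously similar to one another, as Sybil updates pushing the same poisoned objective tend to be, are down-weighted before averaging. All names and thresholds are hypothetical, and the probabilistic gossiping component is omitted.

        import numpy as np

        def cosine_similarity(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def sybil_resistant_aggregate(updates):
            """Weighted average of peer updates that down-weights near-duplicates."""
            n = len(updates)
            weights = np.ones(n)
            for i in range(n):
                # Sybil updates tend to be highly similar to each other, so scale each
                # update's weight by (1 - max similarity to any other update), clipped to [0, 1].
                others = [cosine_similarity(updates[i], updates[j]) for j in range(n) if j != i]
                max_sim = max(others) if others else 0.0
                weights[i] = min(1.0, max(0.0, 1.0 - max_sim))
            if weights.sum() == 0:  # every update looks suspicious: fall back to a plain mean
                weights[:] = 1.0
            weights /= weights.sum()
            return sum(w * u for w, u in zip(weights, updates))

        # Toy example: three honest updates plus two near-identical Sybil updates.
        rng = np.random.default_rng(0)
        honest = [rng.normal(size=10) for _ in range(3)]
        sybil_base = rng.normal(size=10)
        sybils = [sybil_base + 0.01 * rng.normal(size=10) for _ in range(2)]
        print(sybil_resistant_aggregate(honest + sybils))

    In a decentralized setting, each node would apply a rule of this kind to the updates received from its neighbours before merging them into its local model.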

    An investigation of entorhinal spatial representations in self-localisation behaviours

    Spatially modulated cells of the medial entorhinal cortex (MEC) and neighbouring cortices are thought to provide the neural substrate for self-localisation behaviours. These cells include the grid cells of the MEC, which are thought to compute path integration operations to update self-location estimates. In order to read this grid code, downstream cells are thought to reconstruct a positional estimate as a simple rate-coded representation of space. Here, I show the coding schemes of grid cells and putative readout cells recorded from mice performing a virtual reality (VR) linear location task which engaged mice in both beaconing and path integration behaviours. I found that grid cells can encode two distinct coding schemes on the linear track, namely a position code, which reflects periodic grid fields anchored to salient features of the track, and a distance code, which reflects periodic grid fields without this anchoring. Grid cells were found to switch between these coding schemes within sessions. When grid cells were encoding position, mice performed better on trials that required path integration but not on trials that required beaconing. This result provides the first mechanistic evidence linking grid cell activity to path integration-dependent behaviour. Putative readout cells were found in the form of ramp cells, which fire proportionally as a function of location in defined regions of the linear track. This ramping activity was found to be primarily explained by track position rather than by other kinematic variables such as speed and acceleration. These representations were maintained across both trial types and outcomes, indicating that they likely result from recall of the track structure. Together, these results support the functional importance of grid and ramp cells for self-localisation behaviours. Future investigations will look into the coherence between these two neural populations, which may together form a complete neural system for coding and decoding self-location in the brain.
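
    As a purely illustrative aid (not the thesis's analysis code), the sketch below simulates the two coding schemes described above: a position code whose periodic fields stay anchored to the same track locations on every trial, and a distance code whose fields drift relative to the track because they are anchored to cumulative distance run. The track length, grid scale, and trial-stability measure are invented for the example.

        import numpy as np

        TRACK_LEN = 200.0   # cm, length of the virtual linear track
        GRID_SCALE = 30.0   # cm, spacing between periodic grid fields
        N_TRIALS = 5

        def firing_profile(track_pos, phase):
            """Periodic firing-rate profile with one field every GRID_SCALE cm."""
            return 0.5 * (1 + np.cos(2 * np.pi * (track_pos - phase) / GRID_SCALE))

        track_pos = np.linspace(0, TRACK_LEN, 401)

        # Position code: fields anchored to the track, so the phase is identical on every trial.
        position_code = [firing_profile(track_pos, phase=20.0) for _ in range(N_TRIALS)]

        # Distance code: fields anchored to cumulative distance run, so the phase relative
        # to the track drifts from trial to trial (here by one track length per trial).
        distance_code = [firing_profile(track_pos, phase=(20.0 + t * TRACK_LEN) % GRID_SCALE)
                         for t in range(N_TRIALS)]

        def mean_trial_correlation(profiles):
            """Trial-by-trial stability of the spatial firing profile."""
            corrs = [np.corrcoef(profiles[i], profiles[j])[0, 1]
                     for i in range(N_TRIALS) for j in range(i + 1, N_TRIALS)]
            return float(np.mean(corrs))

        print("position code stability:", round(mean_trial_correlation(position_code), 3))
        print("distance code stability:", round(mean_trial_correlation(distance_code), 3))

    A position-coding cell produces near-identical spatial profiles on every trial, whereas a distance-coding cell does not, which is one simple way to separate the two schemes.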

    Technology for Low Resolution Space Based RSO Detection and Characterisation

    Space Situational Awareness (SSA) refers to all activities to detect, identify and track objects in Earth orbit. SSA is critical to all current and future space activities and protects space assets by providing access control, conjunction warnings, and monitoring of the status of active satellites. Current SSA methods and infrastructure are not sufficient to account for the proliferation of space debris. In response to the need for better SSA, many different areas of research have sought to improve SSA, most of them requiring dedicated ground- or space-based infrastructure. In this thesis, a novel approach for the characterisation of RSOs (Resident Space Objects) from passive low-resolution space-based sensors is presented, together with all the background work performed to enable this novel method. Low-resolution space-based sensors are common on current satellites; with many of these sensors already in space, using them passively to detect RSOs can greatly augment SSA without expensive infrastructure or long lead times. One of the largest hurdles to overcome in this area of research is the lack of publicly available labelled data with which to test and confirm results. To overcome this hurdle, a simulation software package, ORBITALS, was created. To verify and validate the ORBITALS simulator, it was compared with images from the Fast Auroral Imager, one of the only publicly available sources of low-resolution space-based images with auxiliary data. During the development of the ORBITALS simulator it was found that the generation of these simulated images is computationally intensive when propagating the entire space catalog. To overcome this, an upgrade of the currently used propagation method, the Specialised General Perturbation Method 4th order (SGP4), was performed to allow the algorithm to run in parallel, reducing the computational time required to propagate entire catalogs of RSOs. From the results it was found that the standard facet model with particle swarm optimisation performed best, estimating an RSO's attitude with 0.66 degree RMSE accuracy across a sequence and ~1% MAPE accuracy for the optical properties. This accomplished the thesis goal of demonstrating the feasibility of low-resolution passive RSO characterisation from space-based platforms in a simulated environment.
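
    The thesis parallelises the SGP4 propagator itself; the sketch below illustrates only the simpler, related idea of propagating a whole catalog in parallel at the object level. It assumes the open-source Python sgp4 package is installed, and catalog.tle is a placeholder path to a bare two-line-element file (no name lines).

        from multiprocessing import Pool
        from sgp4.api import Satrec, jday

        def load_tles(path):
            """Read a bare two-line-element file into (line1, line2) pairs."""
            with open(path) as f:
                lines = [ln.rstrip() for ln in f if ln.strip()]
            return [(lines[i], lines[i + 1]) for i in range(0, len(lines) - 1, 2)]

        def propagate_one(args):
            """Propagate a single catalog object to one epoch with SGP4."""
            line1, line2, jd, fr = args
            sat = Satrec.twoline2rv(line1, line2)
            err, position_km, velocity_km_s = sat.sgp4(jd, fr)
            return err, position_km, velocity_km_s

        if __name__ == "__main__":
            jd, fr = jday(2024, 1, 1, 0, 0, 0.0)    # epoch to propagate to
            catalog = load_tles("catalog.tle")      # placeholder path to a TLE catalog
            tasks = [(l1, l2, jd, fr) for l1, l2 in catalog]
            # Catalog objects are independent, so propagation is embarrassingly parallel
            # and a process pool can split the work across CPU cores.
            with Pool() as pool:
                states = pool.map(propagate_one, tasks, chunksize=256)
            print(f"propagated {len(states)} objects")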

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.

    Scavenger: A Cloud Service for Optimizing Cost and Performance of ML Training

    While the pay-as-you-go nature of cloud virtual machines (VMs) makes it easy to spin up large clusters for training ML models, it can also lead to ballooning costs. The hundreds of virtual machine sizes provided by cloud platforms also make it extremely challenging to select the "right" cloud cluster configuration for training. Furthermore, the training time and cost of distributed model training are highly sensitive to the cluster configuration and present a large and complex tradeoff space. In this paper, we develop principled and practical techniques for optimizing the training time and cost of distributed ML model training on the cloud. Our key insight is that both parallel and statistical efficiency must be considered when selecting the optimum job configuration parameters, such as the number of workers and the batch size. By combining conventional parallel scaling concepts and new insights into SGD noise, our models accurately estimate the time and cost on different cluster configurations with < 5% error. Using the repetitive nature of training and our models, we can search for optimum cloud configurations in a black-box, online manner. Our approach reduces training times by 2x and costs by more than 50%. Compared to an oracle-based approach, our performance models are accurate to within 2%, such that the search imposes an overhead of just 10%.
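
    Scavenger's actual performance models are not given in the abstract, so the sketch below is a toy version of the stated idea: a parallel-efficiency model of per-step time is combined with a statistical-efficiency model of the number of steps needed (as a function of global batch size), and a small grid of worker counts and batch sizes is searched for the cheapest configuration. All constants are made up.

        import itertools

        # Illustrative model constants (hypothetical, not Scavenger's fitted values).
        PRICE_PER_WORKER_HOUR = 0.50   # $/hour for one VM
        STEP_TIME_1GPU = 0.40          # seconds per step for batch 32 on one worker
        COMM_OVERHEAD = 0.05           # seconds of gradient-sync cost per extra worker
        NOISE_SCALE = 4096             # SGD gradient-noise scale (statistical efficiency)
        BASE_STEPS = 100_000           # steps to converge at a very large batch

        def steps_to_converge(global_batch):
            # Statistical efficiency: smaller global batches need proportionally more steps.
            return BASE_STEPS * (1 + NOISE_SCALE / global_batch)

        def step_time(workers, per_worker_batch):
            # Parallel efficiency: per-step compute scales with the per-worker batch,
            # plus a synchronization cost that grows with the number of workers.
            compute = STEP_TIME_1GPU * per_worker_batch / 32
            return compute + COMM_OVERHEAD * (workers - 1)

        def time_and_cost(workers, per_worker_batch):
            steps = steps_to_converge(workers * per_worker_batch)
            hours = steps * step_time(workers, per_worker_batch) / 3600
            return hours, hours * workers * PRICE_PER_WORKER_HOUR

        # Exhaustive search over a small configuration grid.
        configs = itertools.product([1, 2, 4, 8, 16, 32], [32, 64, 128, 256])
        best = min(configs, key=lambda c: time_and_cost(*c)[1])
        hours, dollars = time_and_cost(*best)
        print(f"cheapest config: {best[0]} workers x batch {best[1]} -> {hours:.1f} h, ${dollars:.2f}")

    According to the abstract, the real models are calibrated online from the repetitive structure of training rather than from hand-picked constants, but the shape of the time/cost tradeoff search is the same.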

    Resilience and food security in a food systems context

    This open access book compiles a series of chapters written by internationally recognized experts known for their in-depth but critical views on questions of resilience and food security. The book rigorously and critically assesses the contribution of the concept of resilience to advancing our understanding of, and ability to design and implement, development interventions in relation to food security and humanitarian crises. To do this, the book departs from the narrow beaten tracks of agriculture and trade, which have influenced the mainstream debate on food security for nearly 60 years, and adopts instead a wider, more holistic perspective framed around food systems. The foundation for this new approach is the recognition that in the current post-globalization era, the food and nutritional security of the world's population no longer depends just on the performance of agriculture and policies on trade, but rather on the capacity of the entire (food) system to produce, process, transport and distribute safe, affordable and nutritious food for all, in ways that remain environmentally sustainable. In that context, adopting a food system perspective provides a more appropriate frame, as it encourages broadening conventional thinking and acknowledging the systemic nature of the different processes and actors involved. This book is written for a broad audience, from academics to policymakers and from students to practitioners.

    Leveraging a machine learning based predictive framework to study brain-phenotype relationships

    An immense collective effort has been put towards the development of methods for quantifying brain activity and structure. In parallel, a similar effort has focused on collecting experimental data, resulting in ever-growing data banks of complex human in vivo neuroimaging data. Machine learning, a broad set of powerful and effective tools for identifying multivariate relationships in high-dimensional problem spaces, has proven to be a promising approach toward better understanding the relationships between the brain and different phenotypes of interest. However, applying machine learning within a predictive framework to the study of neuroimaging data introduces several domain-specific problems and considerations, leaving the overarching question of how best to structure and run experiments ambiguous. In this work, I cover two explicit pieces of this larger question: the relationship between data representation and predictive performance, and a case study on issues related to data collected from disparate sites and cohorts. I then present the Brain Predictability toolbox, a software package that explicitly codifies, and makes more broadly accessible to researchers, the recommended steps in performing a predictive experiment, from framing a question to reporting results. This unique perspective ultimately offers recommendations, explicit analytical strategies, and example applications for using machine learning to study the brain.
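
    The Brain Predictability toolbox's own API is not shown in the abstract, so the sketch below uses plain scikit-learn and synthetic data to illustrate the kind of predictive experiment it codifies: a preprocessing-plus-model pipeline evaluated with cross-validation so that performance estimates are not inflated by train/test leakage.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.linear_model import RidgeCV
        from sklearn.model_selection import cross_val_score, KFold

        # Synthetic stand-in for a neuroimaging dataset: 200 subjects x 500 brain features
        # (e.g., regional morphometry values) and one continuous phenotype.
        rng = np.random.default_rng(42)
        X = rng.normal(size=(200, 500))
        coef = np.zeros(500)
        coef[:25] = rng.normal(size=25)   # only a few features carry signal
        y = X @ coef + rng.normal(scale=5.0, size=200)

        # A minimal predictive experiment: preprocessing and model wrapped in one pipeline,
        # scored with cross-validation.
        model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
        cv = KFold(n_splits=5, shuffle=True, random_state=0)
        scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
        print("cross-validated R^2 per fold:", scores.round(3), "mean:", round(scores.mean(), 3))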

    Knowledge-based Modelling of Additive Manufacturing for Sustainability Performance Analysis and Decision Making

    Additive manufacturing (AM) has been considered viable for complex geometries, topology optimized parts, and parts that are otherwise difficult to produce using conventional manufacturing processes. Despite the advantages, one of the prevalent challenges in AM has been the poor capability of producing functional parts at production volumes that are competitive with traditional manufacturing. Modelling and simulation are powerful tools that can help shorten the design-build-test cycle by enabling rapid analysis of various product designs and process scenarios. Nevertheless, the capabilities and limitations of traditional and advanced manufacturing technologies do define the bounds for new product development. Thus, it is important that designers have access to methods and tools that enable them to model and simulate product performance and the associated manufacturing process performance in order to realize functional, high-value products. The motivation for this dissertation research stems from the ongoing development of a novel high temperature superconducting (HTS) magnet assembly, which operates in a cryogenic environment. Its complexity requires the convergence of multidisciplinary expertise during design and prototyping. The research applies knowledge-based modelling to aid manufacturing process analysis and decision making in the design of mechanical components of the HTS magnet. Further, it explores the feasibility of using AM in the production of the HTS magnet assembly. The developed approach uses product-process integrated modelling based on physical experiments to generate quantitative and qualitative information that defines process-structure-property-performance interactions for given material-process combinations. The resulting interactions are then integrated into a graph-based model that can aid in design space exploration and assist early design and manufacturing decision-making. To do so, test components are fabricated using two metal AM processes: wire and arc additive manufacturing and selective laser melting. Metal alloys (stainless steel, mild steel, high-strength low-alloy steel, aluminium, and copper alloys) commonly used in structural applications are tested for their mechanical, thermal, and electrical properties. In addition, microstructural characterization of the alloys is performed to further understand the impact of manufacturing process parameters on material properties. The integrated modelling approach combines the collected experimental data, existing analytical and empirical relationships, and other data-driven models (e.g., finite element models, machine learning models) in the form of a decision support system that enables optimal selection of material, manufacturing technology, process parameters, and other control variables for attaining the desired structure, properties, and performance of the final printed component. The manufacturing decision making is performed through the implementation of a probabilistic model, i.e., a Bayesian network model, which is robust, modular, and adaptable to other manufacturing systems and product designs. The ability of the model to improve the throughput and quality of additive manufacturing processes will advance sustainable manufacturing goals.
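
    As a toy illustration of the kind of probabilistic process-structure-property reasoning described above (not the dissertation's actual Bayesian network, whose variables and conditional probabilities would be fitted to the experimental data), the sketch below hand-rolls inference over a two-edge chain from process choice to porosity to tensile strength and recommends the process most likely to meet a strength specification.

        import numpy as np

        # Toy process-structure-property chain: AM process choice -> porosity -> tensile strength.
        # All probabilities are made-up placeholders, not measured values from the thesis.
        processes = ["WAAM", "SLM"]

        # P(porosity | process): rows = process, columns = [low, high] porosity.
        p_porosity_given_process = np.array([[0.60, 0.40],
                                             [0.85, 0.15]])

        # P(strength | porosity): rows = [low, high] porosity, columns = [meets spec, below spec].
        p_strength_given_porosity = np.array([[0.90, 0.10],
                                              [0.30, 0.70]])

        # Chain inference: P(strength | process) = sum over porosity of P(s | por) * P(por | proc).
        p_strength_given_process = p_porosity_given_process @ p_strength_given_porosity

        for name, dist in zip(processes, p_strength_given_process):
            print(f"{name}: P(meets spec) = {dist[0]:.2f}")

        # Decision support: pick the process maximizing the chance of meeting the strength spec.
        best = processes[int(np.argmax(p_strength_given_process[:, 0]))]
        print("recommended process:", best)

    A full Bayesian network implementation would add more variables (process parameters, microstructure, thermal and electrical properties) and learn the conditional probability tables from the collected experimental data, but the underlying marginalization is the same.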

    Meta-ontology fault detection

    Ontology engineering is the field within knowledge representation concerned with using logic-based formalisms to represent knowledge, typically in moderately sized knowledge bases called ontologies. How best to develop, use and maintain these ontologies has produced relatively large bodies of formal, theoretical and methodological research. One subfield of ontology engineering is ontology debugging, which is concerned with preventing, detecting and repairing errors (or, more generally, pitfalls, bad practices or faults) in ontologies. Due to the logical nature of ontologies and, in particular, entailment, these faults are often hard to prevent and detect, and have far-reaching consequences. This makes ontology debugging one of the principal challenges to more widespread adoption of ontologies in applications. Another important subfield of ontology engineering is ontology alignment: combining multiple ontologies to produce more powerful results than the simple sum of the parts. Ontology alignment further compounds the issues, difficulties and challenges of ontology debugging by introducing, propagating and exacerbating faults in ontologies. A relevant aspect of the field of ontology debugging is that, due to these challenges and difficulties, research within it is usually notably constrained in scope, focusing on particular aspects of the problem, on certain subdomains, or on specific methodologies. Similarly, the approaches are often ad hoc and related to other approaches only at a conceptual level. There are no well-established and widely used formalisms, definitions or benchmarks that form a foundation for the field of ontology debugging. In this thesis, I tackle the problem of ontology debugging from a more abstract point of view than usual, surveying the existing literature, extracting common ideas, and especially focusing on formulating them in a common language and under a common approach. Meta-ontology fault detection is a framework for detecting faults in ontologies that uses semantic fault patterns to express, in a systematic way, schematic entailments that typically indicate faults. The formalism that I developed to represent these patterns is called existential second-order query logic (abbreviated as ESQ logic). I further reformulated a large proportion of the ideas present in existing research into this framework as patterns in ESQ logic, providing a pattern catalogue. Most of the work during my PhD has been spent designing and implementing an algorithm to automatically and effectively detect arbitrary ESQ patterns in arbitrary ontologies. The result is what we call minimal commitment resolution for ESQ logic, an extension of first-order resolution that draws on important ideas from higher-order unification and implements a novel approach to unification problems using dependency graphs. I have proven important theoretical properties of this algorithm, such as its soundness, its termination (in a certain sense and under certain conditions) and its fairness or completeness in the enumeration of infinite spaces of solutions. Moreover, I have produced an implementation of minimal commitment resolution for ESQ logic in Haskell that passes all unit tests and produces non-trivial results on small examples. However, attempts to apply this algorithm to examples of a more realistic size have proven unsuccessful, with computation times that exceed our tolerance levels. In this thesis, I provide details of the challenges faced in this regard, together with other, successful forms of qualitative evaluation of the meta-ontology fault detection approach; a discussion of what I believe are the main causes of the computational feasibility problems and ideas on how to overcome them; and ideas on other directions of future work that could use the results in the thesis to contribute foundational formalisms, ideas and approaches to ontology debugging that properly combine existing constrained research. It is unclear to me whether minimal commitment resolution for ESQ logic can, in its current shape, be implemented efficiently, but I believe that, at the very least, the theoretical and conceptual underpinnings presented in this thesis will be useful for producing more foundational results in the field.
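
    ESQ logic and minimal commitment resolution are not reproduced here, but the following self-contained sketch shows what a single classic fault pattern looks like when checked directly: a class entailed, via the transitive closure of subclass axioms, to lie beneath two classes asserted to be disjoint is unsatisfiable. The toy axioms and names are invented for the example.

        # Toy ontology: named classes with asserted subclass and disjointness axioms.
        subclass_of = {
            ("Penguin", "Bird"),
            ("Bird", "FlyingAnimal"),
            ("Penguin", "FlightlessAnimal"),
        }
        disjoint = {("FlyingAnimal", "FlightlessAnimal")}

        def entailed_superclasses(cls, axioms):
            """All classes entailed to subsume cls (transitive closure of subclass axioms)."""
            supers, frontier = {cls}, {cls}
            while frontier:
                frontier = {sup for (sub, sup) in axioms if sub in frontier} - supers
                supers |= frontier
            return supers

        def unsatisfiable_classes(axioms, disjoint_pairs):
            """Fault pattern: a class entailed to lie under two classes asserted disjoint."""
            classes = {c for pair in axioms for c in pair}
            faulty = set()
            for cls in classes:
                supers = entailed_superclasses(cls, axioms)
                for a, b in disjoint_pairs:
                    if a in supers and b in supers:
                        faulty.add(cls)
            return faulty

        print(unsatisfiable_classes(subclass_of, disjoint))   # {'Penguin'}

    The thesis's contribution is a general language (ESQ logic) in which schematic patterns of this kind can be expressed uniformly, together with a resolution-based procedure for detecting arbitrary such patterns, rather than a hard-coded check per pattern.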