
    Modular lifelong machine learning

    Deep learning has drastically improved the state of the art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem. The overall training cost further increases when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems, which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021). As a result, they neglect some knowledge-transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to long sequences of problems remains a challenge. Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to reuse only the subset of modules which are useful for the task at hand. This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning, and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems.
First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired from an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures. Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations. Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this improves anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods. Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
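    The abstract does not spell out PICLE's probabilistic models, but the underlying search problem — choosing, for each layer of a fixed architecture, either a pre-trained module from the library or a fresh one, so as to maximise validation performance — can be sketched as a simple beam search. All names here (`search_module_paths`, `evaluate`, `new_module`) are hypothetical illustrations, not the thesis's actual API.

    ```python
    def search_module_paths(library, n_layers, new_module, evaluate, beam=3):
        """Beam search over per-layer module choices.

        library[i]  -- list of pre-trained candidate modules for layer i
        new_module  -- factory returning a freshly initialised module for a layer
        evaluate    -- validation score of a complete path (higher is better)
        All three are hypothetical stand-ins for the thesis's components.
        """
        def score(partial):
            # pad an incomplete path with fresh modules so it can be evaluated
            padded = partial + [new_module(l) for l in range(len(partial), n_layers)]
            return evaluate(padded)

        paths = [[]]
        for layer in range(n_layers):
            # extend every surviving partial path by each candidate for this layer
            candidates = [p + [m] for p in paths
                          for m in library[layer] + [new_module(layer)]]
            paths = sorted(candidates, key=score, reverse=True)[:beam]
        return max(paths, key=evaluate)

    # toy check: modules are strings and the best combination is known in advance
    target = ["A0", "B1", "C2"]
    library = [["A0", "x0"], ["B1", "x1"], ["C2", "x2"]]
    best = search_module_paths(library, 3, lambda l: f"new{l}",
                               lambda p: sum(a == b for a, b in zip(p, target)))
    ```

    A real modular LML system would replace the string modules with trained sub-networks and `evaluate` with (expensive) validation accuracy, which is exactly why PICLE's probabilistic surrogate models over modules matter: they cut down how many combinations must be evaluated.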

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulations employing the actual properties of a commercial LC medium.
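    The steering mechanism can be illustrated with a back-of-the-envelope calculation: biasing the LC changes its relative permittivity, which shifts the guided propagation constant β, and a leaky-wave antenna radiates its main beam near θ ≈ asin(β/k₀) from broadside. The sketch below uses the idealised TE10 dispersion relation of a dielectric-filled rectangular waveguide with assumed dimensions and a typical LC tuning range; the paper's modified SIW/Groove Gap Waveguide structure is more involved, so treat this only as a qualitative illustration.

    ```python
    import math

    def beam_angle_deg(freq_hz, eps_r, a_m):
        """Main-beam angle from broadside for an idealised leaky-wave antenna
        built on a dielectric-filled rectangular waveguide's TE10 mode.
        (A sketch with assumed geometry, not the paper's SIW model.)"""
        k0 = 2 * math.pi * freq_hz / 3e8      # free-space wavenumber
        kc = math.pi / a_m                    # TE10 cutoff wavenumber
        beta_sq = eps_r * k0**2 - kc**2       # guided propagation constant squared
        if beta_sq <= 0:
            raise ValueError("mode below cutoff for this permittivity")
        beta = math.sqrt(beta_sq)
        return math.degrees(math.asin(min(beta / k0, 1.0)))

    # sweeping the LC permittivity (an assumed tuning range, roughly 2.5 to 3.3)
    # at a fixed frequency steers the beam -- the fixed-frequency scanning idea
    angles = [beam_angle_deg(28e9, er, a_m=3.5e-3) for er in (2.5, 2.9, 3.3)]
    ```

    The point of the exercise: the frequency never changes, yet the beam angle grows monotonically with the biased permittivity, which is what makes LC biasing attractive for fixed-frequency scanning.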

    Measurement of the Environmental Impact of Materials

    Throughout their life cycles—from production and usage through to disposal—materials and products interact with the environment (water, soil, and air). At the same time, they are exposed to environmental influences and, through their emissions, have an impact on the environment, people, and health. Accelerated experimental testing processes can be used to predict the long-term environmental consequences of innovative products before these actually enter the environment. We are living in a material world. Building materials, geosynthetics, wooden toys, soil, nanomaterials, composites, wastes and more are research subjects examined by the authors of this book. The interactions of materials with the environment are manifold. Therefore, it is important to assess the environmental impact of these interactions. Some answers to how this task can be achieved are given in this Special Issue.

    Optimizing Weights And Biases in MLP Using Whale Optimization Algorithm

    Artificial Neural Networks are intelligent, non-parametric mathematical models inspired by the human nervous system. They have been widely studied and applied to classification, pattern recognition and forecasting problems. The main challenge of training an Artificial Neural Network is its learning process: the nonlinear nature of the problem and the unknown best set of main controlling parameters (weights and biases). When Artificial Neural Networks are trained using conventional training algorithms, they suffer from local-optima stagnation and slow convergence; this makes stochastic optimization algorithms a compelling alternative for alleviating these drawbacks. This thesis proposes an algorithm based on the recently proposed Whale Optimization Algorithm (WOA). WOA has been shown to solve a wide range of optimization problems and to outperform existing algorithms. The successful implementation of this algorithm motivated our attempts to benchmark its performance in training feed-forward neural networks. We took a set of 20 datasets with different difficulty levels and tested the proposed WOA-MLP-based trainer. Further, the results were verified by comparing WOA-MLP with backpropagation algorithms and six evolutionary techniques. The results show that the proposed trainer outperforms the current algorithms on the majority of datasets in terms of local-optima avoidance and convergence speed.
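    For concreteness, the core WOA update rules (shrinking encirclement of the best solution, exploration around a random whale, and the logarithmic-spiral "bubble-net" move) can be sketched on a toy objective. This is a minimal, generic WOA, not the thesis's WOA-MLP trainer; to train an MLP one would set `f` to the network's training loss over a flattened weight-and-bias vector.

    ```python
    import math
    import random

    def woa_minimize(f, dim, n_whales=20, iters=200, lo=-5.0, hi=5.0, seed=0):
        """Minimise f over [lo, hi]^dim with a basic Whale Optimization Algorithm."""
        rng = random.Random(seed)
        whales = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_whales)]
        best = min(whales, key=f)[:]
        for t in range(iters):
            a = 2.0 - 2.0 * t / iters            # 'a' decreases linearly from 2 to 0
            for w in whales:
                r1, r2 = rng.random(), rng.random()
                A, C = 2 * a * r1 - a, 2 * r2
                if rng.random() < 0.5:
                    # |A| < 1: encircle the best whale; otherwise explore a random one
                    target = best if abs(A) < 1 else rng.choice(whales)
                    for j in range(dim):
                        D = abs(C * target[j] - w[j])
                        w[j] = target[j] - A * D
                else:
                    # spiral update mimicking the bubble-net attack
                    l = rng.uniform(-1, 1)
                    for j in range(dim):
                        D = abs(best[j] - w[j])
                        w[j] = D * math.exp(l) * math.cos(2 * math.pi * l) + best[j]
                for j in range(dim):             # clamp whales to the search bounds
                    w[j] = min(hi, max(lo, w[j]))
            cand = min(whales, key=f)
            if f(cand) < f(best):
                best = cand[:]
        return best, f(best)

    sphere = lambda x: sum(v * v for v in x)     # toy objective with minimum 0
    x, fx = woa_minimize(sphere, dim=5)
    ```

    Swapping `sphere` for an MLP's loss is what turns this generic optimiser into a WOA-MLP trainer in the spirit of the thesis, with each whale encoding one candidate set of weights and biases.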

    The Anthropocene Hypothesis


    A Precariat Charter

    This book is available as open access through the Bloomsbury Open Access programme and is available on www.bloomsburycollections.com. Guy Standing's immensely influential 2011 book introduced the Precariat as an emerging mass class, characterized by inequality and insecurity. Standing outlined the increasingly global nature of the Precariat as a social phenomenon, especially in the light of the social unrest exemplified by the Occupy movements. He outlined the political risks the Precariat might pose and what might be done to diminish inequality and allow such workers to find a more stable labour identity. His concept and his conclusions have been widely taken up by thinkers from Noam Chomsky to Zygmunt Bauman, by political activists and by policy-makers. This new book takes the debate a stage further, looking in more detail at the kind of progressive politics that might form the vision of a Good Society in which such inequality, and the instability it produces, is reduced. A Precariat Charter discusses how rights - political, civil, social and economic - have been denied to the Precariat, and argues for the importance of redefining our social contract around notions of associational freedom, agency and the commons.

    Z-Numbers-Based Approach to Hotel Service Quality Assessment

    In this study, we analyze the possibility of using Z-numbers for measuring service quality and for decision-making about quality improvement in the hotel industry. Techniques used for these purposes are based on consumer evaluations - expectations and perceptions. As a rule, these evaluations are expressed as crisp numbers (Likert scale) or fuzzy estimates. However, describing respondents' opinions with crisp or fuzzy numbers alone is not always adequate: the existing methods do not take into account the degree of confidence of respondents in their assessments. A fuzzy approach better describes the uncertainties associated with human perceptions and expectations, and linguistic values are more acceptable than crisp numbers. To capture the subjective nature of both the service-quality estimates and the degree of confidence in them, two-component Z-numbers Z = (A, B) were used. Z-numbers express the opinions of consumers more adequately. The proposed, computationally efficient approach (Z-SERVQUAL, Z-IPA) allows one to determine the quality of services and to identify the factors that require improvement and the areas for further development. The suggested method was applied to evaluate service quality in small and medium-sized hotels in Turkey and Azerbaijan, and is illustrated by an example.
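    A Z-number pairs a fuzzy restriction A on the value with a second fuzzy number B expressing the respondent's confidence in that restriction. One widely used simplification (due to Kang et al.) converts a Z-number into an ordinary fuzzy number by weighting A with the centroid of B; the sketch below applies it with triangular membership functions. The class names and the 5-point scale are illustrative assumptions, not the paper's Z-SERVQUAL formulation.

    ```python
    from dataclasses import dataclass

    @dataclass
    class TriFuzzy:
        """Triangular fuzzy number with support [l, r] and peak at m."""
        l: float
        m: float
        r: float

        def centroid(self) -> float:
            return (self.l + self.m + self.r) / 3

    @dataclass
    class ZNumber:
        """Z = (A, B): A restricts the value, B the confidence placed in A."""
        A: TriFuzzy
        B: TriFuzzy

        def to_crisp(self) -> float:
            # Kang et al.'s conversion: scale A by sqrt of B's centroid,
            # then defuzzify by centroid
            alpha = self.B.centroid()
            return alpha ** 0.5 * self.A.centroid()

    # "service is roughly 'good' (around 4 on a 5-point scale),
    #  and the respondent is fairly sure of that"
    z = ZNumber(A=TriFuzzy(3, 4, 5), B=TriFuzzy(0.7, 0.8, 0.9))
    score = z.to_crisp()
    ```

    Low confidence shrinks the crisp score toward zero, which is exactly the information a plain Likert or fuzzy rating discards: two respondents giving the same "good" rating can still be distinguished by how sure they are of it.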

    Investigating and mitigating the role of neutralisation techniques on information security policies violation in healthcare organisations

    Healthcare organisations today rely heavily on Electronic Medical Records systems (EMRs), which have become highly crucial IT assets that require significant security efforts to safeguard patients’ information. Individuals who have legitimate access to an organisation’s assets to perform their day-to-day duties but intentionally or unintentionally violate information security policies can jeopardise their organisation’s information security efforts and cause significant legal and financial losses. In the information security (InfoSec) literature, several studies emphasised the necessity to understand why employees behave in ways that contradict information security requirements but have offered widely different solutions. In an effort to respond to this situation, this thesis addressed the gap in the information security academic research by providing a deep understanding of the problem of medical practitioners’ behavioural justifications to violate information security policies and then determining proper solutions to reduce this undesirable behaviour. Neutralisation theory was used as the theoretical basis for the research. This thesis adopted a mixed-method research approach that comprises four consecutive phases, and each phase represents a research study that was conducted in light of the results from the preceding phase. The first phase of the thesis started by investigating the relationship between medical practitioners’ neutralisation techniques and their intention to violate information security policies that protect a patient’s privacy. A quantitative study was conducted to extend the work of Siponen and Vance [1] through a study of the Saudi Arabia healthcare industry. The data was collected via an online questionnaire from 66 Medical Interns (MIs) working in four academic hospitals. 
The study found that six neutralisation techniques—(1) appeal to higher loyalties, (2) defence of necessity, (3) the metaphor of the ledger, (4) denial of responsibility, (5) denial of injury, and (6) condemnation of the condemners—significantly contribute to the justifications of the MIs in hypothetically violating information security policies. The second phase of this research used a series of semi-structured interviews with IT security professionals in one of the largest academic hospitals in Saudi Arabia to explore the environmental factors that motivated the medical practitioners to evoke various neutralisation techniques. The results revealed that social, organisational, and emotional factors all stimulated the behavioural justifications to breach information security policies. During these interviews, it became clear that the IT department needed to ensure that security policies fit the daily tasks of the medical practitioners by providing alternative solutions to ensure the effectiveness of those policies. Based on these interviews, the objective of the following two phases was to improve the effectiveness of InfoSec policies against the use of behavioural justifications by engaging the end users in the modification of existing policies via a collaborative writing process. These two phases were conducted in the UK and Saudi Arabia to determine whether the collaborative writing process could produce a more effective security policy that balances security requirements with daily business needs, thus leading to a reduction in the use of neutralisation techniques to violate security policies. The overall result confirmed that the involvement of the end users via a collaborative writing process positively improved the effectiveness of the security policy in mitigating individual behavioural justifications, showing that the process is a promising way to enhance security compliance.

    Fielding Design, Design Fielding: Learning, Leading & Organising in New Territories

    A framing question, "What does (meaningful) collaboration look like in action?", led to the search for and identification of a polycontext, a site where advanced collaborative activity is intelligible. This research aims to explore how the epistemic foundations of learning and design theory can adapt to collaborative approaches to organizing, learning and leadership as the macro-economic transition of digital transformation proceeds. Through embedded ethnographic engagement within a learning organization facilitating group-oriented, design-led collaborative learning experiences, a case study investigates multiple sites within a global organizational network whose distinctive methodology and culture provide a setting emblematic of frontier digital economic activity. The organization's activity generates environments which notionally act as boundary sites where the negotiation of epistemic difference is necessitated; consequently, distinctive forms of expertise in brokerage and perspective-taking arise to support dynamic coordination, presenting a distinct take on group-oriented learning. The case comprises an interacting investigation of communities of facilitators and learning designers tasked to equip learners with distinctive forms of integrative expertise, with the objective of forming individuals adept at rapid orientation to contingent circumstances achieved by collaborative organizing. In parallel, investigating narratives of the organization's formation led to grounded theory about how collaborative activity is enabled by shared reframing practices. Consequently, the organization anticipates and reshapes the field it operates within; the research discusses the scalar effects of learning communities on industry work practices. The inquiry interrogates design-led learning and expertise formation apt for transformative activity within and beyond the digital economy.
Exploring how methodological innovations within collaborative learning organizations are enacted and scaled, primary perspectives on design-led, group-oriented learning are evaluated alongside relevant secondary theoretical perspectives on collaborative organizing, learning and leading. The study synthesizes contributions that point to expansions of existing learning paradigms and anticipates how collaborative learning by design intervenes in the schematic assumptions at work in individuals, communities and fields. Observational insight, systematic analysis and theoretical evaluation are applied to problematize assumptions underlying social theory and to anticipate generational expansions to the design-methods field which respond to inadequacies in the planning and organizing approaches applied by design. The research attempts to habituate understanding from outside design methods to better equip an explanatory understanding of contemporary design-led learning and expertise formation occurring in modern professional structures, especially in the creative industries. Together, the research investigates how learners navigate the challenges of organizing, learning and leading into unseen territories.