
    A Computationally Light Pruning Strategy for Single Layer Neural Networks based on Threshold Function

    Embedded machine learning relies on inference functions that can fit resource-constrained, low-power computing devices. The literature shows that single-layer neural networks using threshold functions can provide a suitable trade-off between classification accuracy and computational cost. In this regard, the number of neurons directly impacts both computational complexity and resource allocation. Thus, the present research aims at designing an efficient pruning technique that takes into account the peculiarities of the threshold function. The paper shows that feature-selection criteria based on filter models can effectively be applied to neuron selection. In particular, valuable outcomes can be obtained by designing ad hoc objective functions for the selection process. An extensive experimental campaign confirms that the proposed objective function compares favourably with state-of-the-art pruning techniques.
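    As a rough illustration of the filter-model idea (not the paper's actual objective function; the correlation-based score and all sizes below are illustrative assumptions), hidden neurons with threshold activations can be ranked by a relevance score computed directly from data, without retraining:

```python
import numpy as np

def prune_neurons(H, y, keep):
    # Filter-style neuron selection: score each neuron's threshold
    # activation by its absolute Pearson correlation with the target,
    # then keep the `keep` highest-scoring neuron indices.
    scores = np.abs(np.array([np.corrcoef(H[:, j], y)[0, 1]
                              for j in range(H.shape[1])]))
    return np.argsort(scores)[::-1][:keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # toy input data
W = rng.normal(size=(5, 16))             # random input-to-hidden weights
H = (X @ W > 0).astype(float)            # threshold (Heaviside) activations
y = (X[:, 0] > 0).astype(float)          # toy binary target
kept = prune_neurons(H, y, keep=4)       # indices of the 4 retained neurons
```

    Any filter criterion (mutual information, Fisher score, etc.) could replace the correlation score; the point is that selection requires no retraining until the pruned network's output layer is refit.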

    Ant-colony and nature-inspired heuristic models for NOMA systems: a review

    The increasing computational complexity of scheduling the large number of users in non-orthogonal multiple access (NOMA) systems and future cellular networks leads to the need for scheduling models with relatively lower computational complexity, such as heuristic models. The main objective of this paper is to conduct a concise study of ant-colony optimization (ACO) methods and potential nature-inspired heuristic models for NOMA implementation in future high-speed networks. The issues, challenges and future work of ACO and other related heuristic models in NOMA are concisely reviewed. The throughput of the proposed ACO method is observed to be close to the maximum theoretical value and stands 44% higher than that of the existing method. This result demonstrates the effectiveness of ACO implementation for NOMA user scheduling and grouping.
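    A minimal sketch of how ant-colony optimization might drive NOMA user grouping (the pheromone update, the gain-spread fitness proxy and all parameters are illustrative assumptions, not the reviewed scheme):

```python
import numpy as np

def aco_schedule(gains, n_groups, n_ants=20, n_iter=50, rho=0.1, seed=0):
    # Toy ACO: assign users to NOMA groups, rewarding groups whose
    # members have widely spread channel gains (a common proxy for
    # SIC-friendly pairing). Pheromone tau[u, g] biases future ants.
    rng = np.random.default_rng(seed)
    n_users = len(gains)
    tau = np.ones((n_users, n_groups))
    best_assign, best_fit = None, -np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            p = tau / tau.sum(axis=1, keepdims=True)
            assign = np.array([rng.choice(n_groups, p=p[u])
                               for u in range(n_users)])
            fit = sum(np.ptp(gains[assign == g]) if (assign == g).sum() > 1
                      else 0.0 for g in range(n_groups))
            if fit > best_fit:
                best_assign, best_fit = assign, fit
            tau[np.arange(n_users), assign] += fit    # pheromone deposit
        tau *= 1.0 - rho                              # evaporation
    return best_assign, best_fit

gains = np.array([0.1, 0.2, 1.5, 1.7, 3.0, 3.3])      # toy channel gains
groups, fit = aco_schedule(gains, n_groups=3)
```

    A real scheduler would replace the gain-spread proxy with an achievable sum-rate expression, but the pheromone-deposit/evaporation loop is the part that distinguishes ACO from plain random search.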

    A Particle Swarm Optimization-based Flexible Convolutional Auto-Encoder for Image Classification

    Convolutional auto-encoders have shown remarkable performance when stacked into deep convolutional neural networks for classifying image data over the past several years. However, they are unable to construct state-of-the-art convolutional neural networks due to their intrinsic architectures. In this regard, we propose a flexible convolutional auto-encoder by eliminating the constraints on the numbers of convolutional layers and pooling layers imposed by the traditional convolutional auto-encoder. We also design an architecture discovery method using particle swarm optimization, which is capable of automatically searching for the optimal architecture of the proposed flexible convolutional auto-encoder with much less computational resource and without any manual intervention. We use the designed architecture optimization algorithm to test the proposed flexible convolutional auto-encoder on four extensively used image classification datasets, utilizing one graphics processing unit card. Experimental results show that our work significantly outperforms the peer competitors, including the state-of-the-art algorithm.
    Comment: Accepted by IEEE Transactions on Neural Networks and Learning Systems, 201
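    The search can be sketched as a plain particle swarm optimizer over a continuous encoding of architecture hyperparameters (here: conv layers, pooling layers, filters per layer). The surrogate objective below is a stand-in assumption, since the real fitness would be the validation error of the trained auto-encoder:

```python
import numpy as np

def pso(objective, lo, hi, n_particles=15, n_iter=40,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    # Minimal PSO over a continuous encoding; the best position is
    # rounded to integers only when the final architecture is reported.
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, len(lo)))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return np.rint(g).astype(int), float(pbest_f.min())

# surrogate fitness: pretend ~4 conv layers, ~2 pooling layers and
# ~32 filters is ideal (a stand-in for real validation error)
surrogate = lambda a: (a[0] - 4)**2 + (a[1] - 2)**2 + ((a[2] - 32) / 8)**2
arch, loss = pso(surrogate, np.array([1.0, 0.0, 8.0]),
                 np.array([8.0, 4.0, 64.0]))
```

    Because each fitness evaluation in the real setting means training a network, keeping the swarm and iteration counts small is what makes the single-GPU budget mentioned above plausible.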

    Optimized Deep Learning Schemes for Secured Resource Allocation and Task Scheduling in Cloud Computing - A Survey

    Scheduling involves allocating shared resources over time so that tasks can be completed within a predetermined time frame. In Task Scheduling (TS) and Resource Allocation (RA), the term is applied independently to tasks and resources. Scheduling is widely used in Cloud Computing (CC), computer science, and operational management. Effective scheduling ensures that systems operate efficiently, decisions are made effectively, resources are used efficiently, costs are kept to a minimum, and productivity is increased. High energy consumption, low CPU utilization, time consumption, and low robustness are the most frequent problems in TS and RA in CC. In this survey, RA and TS approaches based on deep learning (DL) and machine learning (ML) are discussed, and the methods employed by DL-based RA and TS in CC are examined. Additionally, the advantages and disadvantages of each approach are explored. The work's primary contribution is an analysis and assessment of DL-based RA and TS methodologies that pinpoints problems with cloud computing.

    A Survey on Underwater Acoustic Sensor Network Routing Protocols

    Underwater acoustic sensor networks (UASNs) have become increasingly important in ocean exploration applications such as ocean monitoring, pollution detection, ocean resource management, and underwater device maintenance. In underwater acoustic sensor networks, since the routing protocol guarantees reliable and effective data transmission from the source node to the destination node, routing protocol design is an attractive topic for researchers. Many routing algorithms have been proposed in recent years. To present the current state of development of UASN routing protocols, we review herein the UASN routing protocol designs reported in recent years. All the routing protocols are classified into groups according to their characteristics and routing algorithms: non-cross-layer design routing protocols, traditional cross-layer design routing protocols, and intelligent-algorithm-based routing protocols. This is also the first paper to introduce intelligent-algorithm-based UASN routing protocols. In addition, we investigate the development trends of UASN routing protocols, providing researchers with clear and direct insights for further research.

    Risk and regulatory calibration : WTO compliance review of the U.S. dolphin-safe tuna labeling regime

    In a series of recent disputes arising under the TBT Agreement, the Appellate Body has interpreted Article 2.1 to provide that discriminatory and trade-distortive regulation could be permissible if based upon a “legitimate regulatory distinction.” In its recent compliance decision in the US-Tuna II dispute, the AB reaffirmed its view that regulatory distinctions embedded in the U.S. dolphin-safe tuna labeling regime were not legitimate because they were not sufficiently calibrated to the risks to dolphins associated with different tuna fishing conditions. This paper analyzes the AB’s application of the notion of risk-based regulation in the US-Tuna II dispute and finds the AB’s reasoning lacking in coherence. Although risk analysis and calibration can in principle play useful roles in TBT cases, the AB needs to provide more explicit and careful guidance to WTO members and panels to avoid the kind of ad hoc decision-making exhibited throughout the US-Tuna II dispute.

    How universal can an intelligence test be?

    The notion of a universal intelligence test has recently been advocated as a means to assess humans, non-human animals and machines in an integrated, uniform way. While the main motivation has been the development of machine intelligence tests, the mere concept of a universal test has many implications for the way human intelligence tests are understood, and for their relation to other tests in comparative psychology and animal cognition. Given this diversity of subjects in the natural and artificial kingdoms, the very possibility of constructing a universal test is still controversial. In this paper we rephrase the question of whether universal intelligence tests are possible into the question of how universal intelligence tests can be, in terms of subjects, interfaces and resolutions. We discuss the feasibility and difficulty of universal tests at several levels, depending on what is taken for granted: the communication milieu, the resolution, the reward system or the agent itself. We argue that such tests must be highly adaptive, i.e., that tasks, resolution, rewards and communication have to be adapted according to how the evaluated agent is reacting and performing. Even so, the most general expression of a universal test may not be feasible (and, at best, might only be theoretically semi-computable). Nonetheless, in general, we can analyse universality in terms of some traits that lead to several levels of universality, and set the quest for universal tests as a progressive rather than absolute goal.
    This work was supported by the MEC/MINECO (projects CONSOLIDER-INGENIO CSD2007-00022 and TIN 2010-21062-C02-02), the GVA (project PROMETEO/2008/051) and the COST-European Cooperation in the field of Scientific and Technical Research (project IC0801 AT).
    Dowe, D. L.; Hernández-Orallo, J. (2014). How universal can an intelligence test be? Adaptive Behavior, 22(1):51-69. https://doi.org/10.1177/1059712313500502

    Design-space assessment and dimensionality reduction: An off-line method for shape reparameterization in simulation-based optimization

    A method based on the Karhunen–Loève expansion (KLE) is formulated for the assessment of arbitrary design spaces in shape optimization, assessing the shape-modification variability and providing the definition of a reduced-dimensionality global model of the shape modification vector. The method is based on the concept of geometric variance and does not require design-performance analyses. Specifically, the KLE is applied to the continuous shape modification vector, requiring the solution of a Fredholm integral equation of the second kind. Once the equation is discretized, the problem reduces to the principal component analysis (PCA) of discrete geometrical data. The objective of the present work is to demonstrate how this method can be used to (a) assess different design spaces and shape parameterization methods before optimization is performed, without the need to run simulations for performance prediction, and (b) reduce the dimensionality of the design space, providing a shape reparameterization using the KLE/PCA eigenvalues and eigenmodes. A demonstration for the hull-form optimization of the DTMB 5415 model in calm water is shown, where three design spaces are investigated, provided by free-form deformation, radial basis functions, and global modification functions.
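    Once discretized, the KLE reduces to a PCA of sampled shape modifications; a minimal sketch of that step (matrix sizes and the 95% variance target are illustrative assumptions, not the paper's setup) is:

```python
import numpy as np

def reduced_basis(D, var_target=0.95):
    # D: (n_designs, n_points) matrix of sampled shape-modification
    # vectors. Return the eigenmodes (columns) and eigenvalues needed
    # to retain `var_target` of the geometric variance.
    Dc = D - D.mean(axis=0)                    # centre the samples
    C = Dc.T @ Dc / D.shape[0]                 # discrete covariance
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1]             # descending eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    k = int(np.argmax(np.cumsum(vals) / vals.sum() >= var_target)) + 1
    return vecs[:, :k], vals[:k]

rng = np.random.default_rng(1)
D = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30))  # rank-2 toy designs
modes, lam = reduced_basis(D)    # the reduced design variables are the
                                 # coefficients of these eigenmodes
```

    The retained eigenmodes then serve directly as the reparameterized design variables, which is what allows different parameterizations (free-form deformation, radial basis functions, etc.) to be compared on equal footing before any simulation is run.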