
    Learning node labels with multi-category Hopfield networks

    In several real-world node label prediction problems on graphs, in fields ranging from computational biology to World Wide Web analysis, nodes can be partitioned into categories distinct from the classes to be predicted, on the basis of their characteristics or common properties. Such partitions may provide further information about node classification that classical machine learning algorithms do not take into account. We introduce a novel family of parametric Hopfield networks (m-category Hopfield networks) and a novel algorithm (Hopfield multi-category, HoMCat) designed to exploit the presence of property-based partitions of nodes into multiple categories. Moreover, the proposed model adopts a cost-sensitive learning strategy to prevent the marked decay in performance usually observed when instance labels are unbalanced, that is, when one class of labels is highly underrepresented relative to the other. We validate the proposed model on both synthetic and real-world data, in the context of multi-species function prediction, where the classes to be predicted are Gene Ontology terms and the categories are the different species in the multi-species protein network. We carried out an intensive experimental validation, which on the one hand compares HoMCat with several state-of-the-art graph-based algorithms, and on the other hand reveals that exploiting meaningful prior partitions of the input data can substantially improve classification performance.
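    The abstract suggests a Hopfield-style dynamic with per-category parameters. Below is a minimal, hypothetical Python sketch of what an m-category bipolar update could look like; the names homcat_sketch, thetas, and gammas are illustrative assumptions, not the paper's actual parametrisation.

        import numpy as np

        def homcat_sketch(W, categories, labels, known, thetas, gammas, n_iter=50):
            # W          : (n, n) symmetric weight matrix of the graph
            # categories : (n,) int array, category index of each node (e.g. species)
            # labels     : (n,) array in {-1, +1}, initial bipolar labels
            # known      : (n,) bool mask of labelled nodes (kept fixed)
            # thetas     : per-category activation thresholds (assumed learned)
            # gammas     : per-category neuron weights (assumed learned)
            x = labels.astype(float)
            for _ in range(n_iter):
                for i in np.where(~known)[0]:          # update only unlabelled nodes
                    # weighted input from neighbours, scaled by their category weights
                    h = W[i] @ (gammas[categories] * x) - thetas[categories[i]]
                    x[i] = 1.0 if h > 0 else -1.0      # bipolar threshold activation
            return x

    The cost-sensitive side of the model would enter through how thetas and gammas are fitted on the labelled nodes, which this sketch leaves abstract.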

    Technology Directions for the 21st Century

    The Office of Space Communications (OSC) is tasked by NASA to conduct a planning process to meet NASA's science mission and other communications and data processing requirements. A set of technology trend studies was undertaken by Science Applications International Corporation (SAIC) for OSC to identify quantitative data that can be used to predict performance of electronic equipment in the future to assist in the planning process. Only commercially available, off-the-shelf technology was included. For each technology area considered, the current state of the technology is discussed, future applications that could benefit from use of the technology are identified, and likely future developments of the technology are described. The impact of each technology area on NASA operations is presented together with a discussion of the feasibility and risk associated with its development. An approximate timeline is given for the next 15 to 25 years to indicate the anticipated evolution of capabilities within each of the technology areas considered. This volume contains four chapters: one each on technology trends for database systems, computer software, neural and fuzzy systems, and artificial intelligence. The principal study results are summarized at the beginning of each chapter.

    A Decade of Neural Networks: Practical Applications and Prospects

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

    Low-power neuromorphic sensor fusion for elderly care

    Smart wearable systems have become part of our daily life, with applications ranging from entertainment to healthcare. In the wearable healthcare domain, wearable fall-recognition bracelets based on embedded systems are receiving considerable market attention. However, in low-power embedded scenarios, processing the sensors' signals poses significant challenges for machine learning algorithms. Traditional machine learning methods require a large number of calculations for data classification, making real-time signal processing difficult to implement on low-power embedded systems. Ensuring low-power, real-time data classification while fusing a variety of sensor signals on an embedded system is a major challenge, and it calls for neuromorphic computing with a hardware/software co-design of the system. This thesis reviews various neuromorphic computing algorithms, investigates the feasibility of hardware circuits, and integrates captured sensor data to realise data classification applications. In addition, it explores a human-activity benchmark dataset in which activities are defined at different levels to design the classification task. In this study, the data classification algorithm is first applied to human movement sensors to validate neuromorphic computing on human activity recognition tasks. Second, a data fusion framework is presented that combines multiple sensing signals so that neuromorphic computing achieves sensor fusion and improved classification accuracy. Third, an analog circuit module that implements a neural network algorithm for low-power, real-time processing in hardware is proposed, and a hardware/software co-design system combines the above work. By adopting multi-sensing signals on the embedded system, the software-based feature extraction method fuses the data of various sensors into an input for the neuromorphic computing hardware. Finally, the results show that the classification accuracy of the neuromorphic computing data fusion framework is higher than that of traditional machine learning and deep neural networks, reaching 98.9%. Moreover, the framework can flexibly combine acquired hardware signals: it is not limited to single-sensor data and can use multi-sensing information to help the algorithm achieve better stability.
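    As a concrete illustration of the software-based feature extraction the thesis describes, here is a minimal Python sketch of windowed statistics fused across sensor channels; the window size, step, and chosen statistics are assumptions for illustration, not the thesis's actual pipeline.

        import numpy as np

        def window_features(signal, win=128, step=64):
            # Simple per-window statistics from one sensor channel.
            feats = []
            for start in range(0, len(signal) - win + 1, step):
                w = signal[start:start + win]
                feats.append([w.mean(), w.std(), w.min(), w.max()])
            return np.array(feats)

        def fuse_sensors(channels):
            # Concatenate per-channel features into one fused vector per window,
            # the kind of input handed to the neuromorphic classifier.
            return np.hstack([window_features(ch) for ch in channels])

        # Hypothetical usage with synthetic accelerometer and gyroscope traces:
        rng = np.random.default_rng(0)
        acc = rng.normal(size=1024)       # stand-in accelerometer channel
        gyr = rng.normal(size=1024)       # stand-in gyroscope channel
        X = fuse_sensors([acc, gyr])      # (n_windows, 8) fused feature matrix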

    Fundamentals

    Volume 1 establishes the foundations of this new field. It proceeds through all the steps from data collection, through summarization and clustering, to the different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to resource requirements and how to enhance scalability on diverse computing architectures ranging from embedded systems to large computing clusters.

    Data mining industry : emerging trends and new opportunities

    Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, June 2000. "May 2000." Includes bibliographical references (leaves 170-179). By Walter Alberto Aldana, M.Eng.

    Efficient Mapping of Neural Network Models on a Class of Parallel Architectures.

    This dissertation develops a formal and systematic methodology for efficiently mapping several contemporary artificial neural network (ANN) models onto k-ary n-cube parallel architectures (KNCs). We apply the general mapping to several important ANN models, including feedforward ANNs trained with the backpropagation algorithm, radial basis function networks, cascade correlation learning, and adaptive resonance theory networks. Our approach utilizes a parallel task graph representing the concurrent operations of the ANN model during training. The mapping of the ANN is performed in two steps. First, the parallel task graph of the ANN is mapped to a virtual KNC of compatible dimensionality; this involves decomposing each operation into its atomic tasks. Second, the dimensionality of the virtual KNC architecture is recursively reduced through a sequence of transformations until a desired metric is optimized; we refer to this process as folding the virtual architecture. The optimization criteria we consider in this dissertation are defined in terms of the iteration time of the algorithm on the folded architecture. If necessary, the mapping scheme may utilize a subset of the processors of a given KNC architecture if that yields the most efficient simulation. A unique feature of our mapping is that it systematically selects an appropriate degree of parallelism, leading to a highly efficient realization of the ANN model on KNC architectures. A novel feature of our work is its ability to efficiently map unit-allocating ANNs, networks that possess a dynamic structure which grows during training. We present a highly efficient scheme for simulating such networks on existing KNC parallel architectures. We assume an upper bound on the size of the neural network and perform the folding such that the iteration time of the largest network is minimized. We show that our mapping leads to near-optimal simulation of smaller instances of the neural network. In addition, with our mapping no data migration or task rescheduling is needed as the size of the network grows.
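    To make the folding step concrete, here is a small, hypothetical Python sketch of how one dimension of a virtual k-ary n-cube can be folded away, grouping virtual processors onto physical ones; the function names and the simple drop-a-digit transformation are illustrative assumptions, not the dissertation's actual sequence of transformations.

        from itertools import product

        def fold_dimension(addr, dim):
            # Drop digit `dim` of an n-digit base-k virtual address,
            # yielding the (n-1)-digit physical address it maps to.
            return addr[:dim] + addr[dim + 1:]

        def fold_mapping(virtual_nodes, dim):
            # Group virtual processors that share a physical processor after
            # folding; each group's tasks are serialised on that processor.
            groups = {}
            for v in virtual_nodes:
                groups.setdefault(fold_dimension(v, dim), []).append(v)
            return groups

        # Example: fold a 3-ary 3-cube (27 virtual nodes) down to a 3-ary 2-cube.
        virt = list(product(range(3), repeat=3))
        phys = fold_mapping(virt, dim=2)   # 9 physical processors, 3 virtual nodes each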