
    Symbol Emergence in Robotics: A Survey

    Humans learn the use of language through physical interaction with their environment and semiotic communication with other people. Obtaining a computational understanding of how humans form a symbol system and acquire semiotic skills through autonomous mental development is therefore very important. Recently, many studies have constructed robotic systems and machine-learning methods that learn the use of language through embodied multimodal interaction with their environment and with other systems. Understanding human social interactions, and developing a robot that can communicate smoothly with human users over the long term, requires an understanding of the dynamics of symbol systems. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system, which is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe several state-of-the-art research topics in SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual, haptic, and auditory information and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions for research in SER.
    Comment: submitted to Advanced Robotics

    NetEvo: A computational framework for the evolution of dynamical complex networks

    NetEvo is a computational framework designed to help understand the evolution of dynamical complex networks. It provides flexible tools for the simulation of dynamical processes on networks and methods for the evolution of the underlying topological structures. The concept of a supervisor is used to bring both of these aspects together in a coherent way. It is the job of the supervisor to rewire the network topology and alter model parameters such that a user-specified performance measure is minimised. This performance measure can make use of current topological information and simulated dynamical output from the system. Such an abstraction provides a suitable framework in which to study many outstanding questions related to complex system design and evolution.
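The supervisor concept described above can be illustrated with a minimal sketch: a loop that proposes random rewirings and keeps only those that reduce a user-specified performance measure. The function and variable names, the hill-climbing acceptance rule, and the edge-count objective are illustrative assumptions, not NetEvo's actual API.

```python
import random

def supervisor(adjacency, performance, steps=1000, seed=0):
    """Toy supervisor loop: propose a random rewiring and keep it only if
    the user-specified performance measure decreases (hill climbing).
    `adjacency` is a set of undirected edges (u, v) with u < v;
    `performance` maps an edge set to a scalar to be minimised."""
    rng = random.Random(seed)
    nodes = sorted({n for edge in adjacency for n in edge})
    best = set(adjacency)
    best_score = performance(best)
    for _ in range(steps):
        trial = set(best)
        # Remove one random existing edge, then add one random edge.
        trial.remove(rng.choice(sorted(trial)))
        u, v = rng.sample(nodes, 2)
        trial.add((min(u, v), max(u, v)))
        score = performance(trial)
        if score < best_score:
            best, best_score = trial, score
    return best, best_score

# Illustrative objective: evolve a 5-node graph toward exactly 4 edges.
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)}
target = lambda e: abs(len(e) - 4)
evolved, score = supervisor(edges, target)
```

A real supervisor would also integrate the network's dynamics (e.g. by simulation) inside the performance measure, rather than using purely topological information as here.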

    The Automation of the Extraction of Evidence masked by Steganographic Techniques in WAV and MP3 Audio Files

    Anti-forensics techniques, particularly steganography and cryptography, have become increasingly pressing issues affecting current digital forensics practice. Both techniques are widely researched and developed, considered to be at the heart of the modern digital era, but remain double-edged swords standing between the privacy-conscious and the criminally malicious, depending on the severity of the methods deployed. This paper advances the automation of hidden-evidence extraction for audio files, enabling the correlation between unprocessed evidence artefacts and extreme steganographic and cryptographic techniques using the least-significant-bit (LSB) extraction method. The research provides an in-depth review of current digital forensic toolkits and systems and formally addresses their capabilities in handling steganography-related cases. We opted for an experimental research methodology in the form of quantitative analysis of the efficiency of detecting and extracting hidden artefacts in WAV and MP3 audio files by comparing standard industry software. This work establishes an environment for the practical implementation and testing of the proposed approach and the new toolkit for extracting evidence hidden by cryptographic and steganographic techniques during forensic investigations. The proposed multi-approach automation demonstrated a large positive impact in terms of efficiency and accuracy, notably on large audio files (MP3 and WAV) for which forensic analysis is time-consuming and requires significant computational resources and memory. However, the proposed automation may occasionally produce false positives (detecting steganography where none exists) or false negatives (failing to detect steganography that is present), but overall it achieves a balance between detecting hidden data accurately and minimising false alarms.
    Comment: Wires Forensics Sciences Under Review
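The LSB extraction step named in the abstract can be sketched for 16-bit PCM WAV files: collect the least significant bit of each sample and pack the bits back into bytes. This is a minimal sketch using only Python's standard library; the function name, the mono/16-bit assumption, and the MSB-first bit ordering are illustrative assumptions, not the paper's toolkit.

```python
import struct
import wave

def extract_lsb_bits(path, n_bytes):
    """Collect the least significant bit of each 16-bit PCM sample and pack
    the bits MSB-first into bytes -- a common first step in LSB steganalysis.
    Assumes a mono, 16-bit WAV file; payload framing is illustrative."""
    with wave.open(path, "rb") as wav:
        assert wav.getnchannels() == 1, "sketch assumes mono audio"
        assert wav.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        frames = wav.readframes(n_bytes * 8)  # 8 samples per payload byte
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    bits = [s & 1 for s in samples]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits) - 7, 8)
    )
```

A detector would additionally test whether the recovered bit stream is statistically distinguishable from noise before flagging a file, which is where the false-positive/false-negative trade-off discussed above arises.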

    Temporal - spatial recognizer for multi-label data

    Pattern recognition is an important artificial intelligence task with practical applications in many fields, such as medicine and species distribution. Such applications involve overlapping data points, as demonstrated in multi-label datasets. Hence, there is a need for a recognition algorithm that can separate overlapping data points in order to recognize the correct pattern. Existing recognition methods suffer from sensitivity to noise and overlapping points, as they cannot recognize a pattern when there is a shift in the position of the data points. Furthermore, these methods do not incorporate temporal information in the recognition process, which leads to low-quality data clustering. In this study, an improved pattern recognition method based on Hierarchical Temporal Memory (HTM) is proposed to solve the overlapping of data points in multi-label datasets. The imHTM (Improved HTM) method improves two of HTM's components: feature extraction and data clustering. The first improvement is realized as the TS-Layer Neocognitron algorithm, which solves the positional-shift problem in the feature extraction phase. The data clustering step has two improvements, TFCM and cFCM (TFCM with a limit-Chebyshev distance metric), which allow the overlapping data points that occur in patterns to be separated correctly into the relevant clusters by temporal clustering. Experiments on five datasets were conducted to compare the proposed method (imHTM) against statistical, template, and structural pattern recognition methods. The results showed a recognition accuracy of 99% compared with template matching methods (feature-based and area-based approaches), statistical methods (Principal Component Analysis, Linear Discriminant Analysis, Support Vector Machines, and Neural Networks), and a structural method (the original HTM). The findings indicate that the improved HTM can give optimum pattern recognition accuracy, especially on multi-label datasets.
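The Chebyshev distance used in the cFCM variant above can be sketched alongside a standard fuzzy c-means membership update. This is a minimal sketch of plain fuzzy memberships under the Chebyshev (L-infinity) metric, not the paper's limit-Chebyshev cFCM; the function names and the fuzzifier default are assumptions.

```python
def chebyshev(a, b):
    """Chebyshev (L-infinity) distance: maximum coordinate-wise difference."""
    return max(abs(x - y) for x, y in zip(a, b))

def fuzzy_memberships(point, centers, m=2.0):
    """Standard fuzzy c-means membership of `point` in each cluster center
    under the Chebyshev metric; `m > 1` is the usual fuzzifier. A sketch
    only -- the paper's cFCM adds a limit variant of this metric."""
    d = [max(chebyshev(point, c), 1e-12) for c in centers]  # avoid /0
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** exp for j in range(len(d)))
            for i in range(len(d))]
```

Because memberships are graded rather than hard assignments, a point lying in the overlap of two patterns can carry non-trivial membership in both clusters, which is the behaviour the clustering improvements above exploit.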

    Estimating sample-specific regulatory networks

    Biological systems are driven by intricate interactions among the complex array of molecules that comprise the cell. Many methods have been developed to reconstruct network models of those interactions. These methods often draw on large numbers of samples with measured gene expression profiles to infer connections between genes (or gene products). The result is an aggregate network model representing a single estimate for the likelihood of each interaction, or "edge," in the network. While informative, aggregate models fail to capture the heterogeneity that is present in any population. Here we propose a method to reverse engineer sample-specific networks from aggregate network models. We demonstrate the accuracy and applicability of our approach on several data sets, including simulated data, microarray expression data from synchronized yeast cells, and RNA-seq data collected from human lymphoblastoid cell lines. We show that these sample-specific networks can be used to study changes in network topology across time and to characterize shifts in gene regulation that may not be apparent in expression data. We believe the ability to generate sample-specific networks will greatly facilitate the application of network methods to the increasingly large, complex, and heterogeneous multi-omic data sets that are currently being generated, and ultimately support the emerging field of precision network medicine.
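One published way to reverse engineer sample-specific networks from an aggregate model is leave-one-out linear interpolation: compare the aggregate network built from all N samples with the aggregate built with one sample withheld. The sketch below implements that interpolation; whether it matches the paper's exact estimator is an assumption, and the toy aggregate function in the test is purely illustrative.

```python
def sample_specific_edges(samples, aggregate_network):
    """Estimate one network per sample by linear interpolation between the
    aggregate model over all N samples and the aggregate model with that
    sample withheld:  e_s = N * (e_all - e_minus_s) + e_minus_s.
    `aggregate_network` maps a list of samples to a dict of edge weights."""
    n = len(samples)
    e_all = aggregate_network(samples)
    networks = []
    for s in range(n):
        e_minus = aggregate_network(samples[:s] + samples[s + 1:])
        networks.append({edge: n * (e_all[edge] - e_minus[edge]) + e_minus[edge]
                         for edge in e_all})
    return networks
```

When the aggregate edge weight is a mean over per-sample contributions, this interpolation recovers each sample's own contribution exactly; for nonlinear inference methods it yields an approximation of the sample's influence on each edge.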

    Cooperative Wide Area Search Algorithm Analysis Using Sub-Region Techniques

    Recent advances in small Unmanned Aerial Vehicle (UAV) technology reinvigorate the need for additional research into Wide Area Search (WAS) algorithms for civilian and military applications. Because of the extremely large variability in UAV environments and designs, Digital Engineering (DE) is utilized to reduce the time, cost, and energy required to advance this technology. DE also allows rapid design and evaluation of autonomous systems which utilize and support WAS algorithms. Modern WAS algorithms can be broadly classified into decision-based algorithms, statistical algorithms, and Artificial Intelligence (AI)/Machine Learning (ML) algorithms. This research continues the work of Hatzinger and Gertsman by creating a decision-based algorithm which subdivides the search region into sub-regions known as cells, decides on an optimal next cell to search, and distributes the results of the search to other cooperative search assets. Each cooperative search asset stores four crucial arrays in order to decide which cell to search: the current estimated target density of each cell; the current number of assets in each cell; each cooperative asset's next cell to search; and the total time any asset has spent in each cell. A software-based simulation environment, the Advanced Framework for Simulation, Integration, and Modeling (AFSIM), was utilized to complete the verification process and to create the test environment and the System under Test (SUT). Additionally, the algorithm was tested against threats of various distributions to simulate clustering of targets. Finally, new Measures of Effectiveness (MOEs) from AI and ML are introduced, including precision, recall, and F-score. The new and the original MOEs from Hatzinger and Gertsman are analyzed using Analysis of Variance (ANOVA) and a covariance matrix.
    The results of this research show the algorithm does not have a significant effect on either the original or the new MOEs, which is likely due to a spreading of the Networked Collaborative Autonomous Munitions (NCAMs) similar to that of Hatzinger and Gertsman. The results are negatively correlated with a decrease in the target distribution's standard deviation, i.e., target clustering. This second result is more surprising, as tighter target distributions could result in less area to search, but the NCAMs continue to distribute their locations regardless of identified clusters.
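The AI/ML-derived MOEs named above have standard definitions that can be computed over the search results. The sketch below treats targets and detections as sets of cell identifiers; the function name and the cell-ID representation are illustrative assumptions, not the thesis's implementation.

```python
def moe_scores(true_targets, detections):
    """Precision, recall, and F-score as search MOEs.
    `true_targets` is the set of cells actually containing targets;
    `detections` is the set of cells the search algorithm reported."""
    tp = len(true_targets & detections)  # true positives
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(true_targets) if true_targets else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score
```

Precision penalizes false alarms, recall penalizes missed targets, and the F-score balances the two, which makes the trio a natural complement to coverage-style MOEs when targets are clustered.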

    A comprehensive structural, biochemical and biological profiling of the human NUDIX hydrolase family

    The NUDIX enzymes are involved in cellular metabolism and homeostasis, as well as mRNA processing. Although highly conserved throughout all organisms, their biological roles and biochemical redundancies remain largely unclear. To address this, we globally resolve their individual properties and inter-relationships. We purify 18 of the human NUDIX proteins and screen 52 substrates, providing a substrate redundancy map. Using crystal structures, we generate sequence alignment analyses revealing four major structural classes. To a certain extent, their substrate preference redundancies correlate with these structural classes, thus linking structure and activity relationships. To elucidate interdependence among the NUDIX hydrolases, we deplete them pairwise to generate an epistatic interaction map, evaluate cell cycle perturbations upon knockdown in normal and cancer cells, and analyse their protein and mRNA expression in normal and cancer tissues. Using a novel FUSION algorithm, we integrate all data to create a comprehensive NUDIX enzyme profile map, which will prove fundamental to understanding their biological functionality.