5,929 research outputs found

    Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse

    This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective to mitigate such weaknesses. This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features that emerge through socialisation are resistant to change and manipulation, and thus might serve as endogenous measures of sociopolitical contexts, and thus of groups. In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service whereby patterns in users’ speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018—6 November having been the date of midterm (i.e. non-Presidential) elections in the United States. The corpus is assembled from the original posts of 5,889 users, who are nominally geolocalised to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters according to the municipalities they represent finds that there are regular sociodemographic differentials across clusters. 
This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
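As a rough illustration of the clustering step described above, the pipeline could be sketched as follows. The data, the function-word feature set, and the clustering choices here are my own assumptions for illustration, not the study's actual implementation.

```python
# Hypothetical sketch: cluster users by frequencies of closed-class
# (function) words, the kind of pervasive, hard-to-manipulate feature
# the method relies on. All names and data are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# One concatenated document of posts per user (toy data).
user_posts = {
    "user_a": "we should all vote because the roads here need fixing",
    "user_b": "gonna watch the game tonight who else is in",
    "user_c": "they never fix the roads because nobody votes here",
}

# Restrict the vocabulary to function words rather than content words.
function_words = ["the", "a", "of", "to", "in", "we", "they", "because", "here"]
vec = TfidfVectorizer(vocabulary=function_words)
X = vec.fit_transform(user_posts.values())

# Cluster users on these endogenous features.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for user, label in zip(user_posts, labels):
    print(user, label)
```

In the study itself the clusters are then compared across the municipalities the users represent; the sketch stops at the clustering stage.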

    Scaling up integrated photonic reservoirs towards low-power high-bandwidth computing


    Intelligent computing : the latest advances, challenges and future

    Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting the digital revolution in the era of big data, artificial intelligence and the internet of things with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse computing paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human-computer fusion intelligence. Intelligence and computing have long followed different paths of evolution and development but have become increasingly intertwined in recent years: intelligent computing is not only intelligence-oriented but also intelligence-driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing.

    Robust and reliable hand gesture recognition for myoelectric control

    Surface Electromyography (sEMG) is a physiological signal that records the electrical activity of muscles via electrodes applied to the skin. In the context of Muscle Computer Interaction (MCI), systems are controlled by transforming myoelectric signals into interaction commands that convey the intent of user movement, mostly for rehabilitation purposes. Taking myoelectric hand prosthetic control as an example, using sEMG recorded from the remaining muscles of the stump can be considered the most natural way for amputees who have lost their limbs to perform activities of daily living with the aid of prostheses. Although the earliest myoelectric control research dates back to the 1950s, considerable challenges remain in closing the significant gap between academic research and industrial applications. Most recently, pattern recognition-based control has been developing rapidly to improve the dexterity of myoelectric prosthetic devices, owing to recent advances in machine learning and deep learning techniques. It is clear that the performance of Hand Gesture Recognition (HGR) plays an essential role in pattern recognition-based control systems. However, in reality, the tremendous success in achieving very high sEMG-based HGR accuracy (≥ 90%) reported in scientific articles has produced only limited clinical or commercial impact. As many have reported, real-time performance tends to degrade significantly as a result of many confounding factors, such as electrode shift, sweating, fatigue, and day-to-day variation. The main interest of the present thesis is, therefore, to improve the robustness of sEMG-based HGR by taking advantage of the most recent advanced deep learning techniques to address several practical concerns. Furthermore, the challenge of this research problem is reinforced by considering only raw sparse multichannel sEMG signals as input.
Firstly, a framework for designing an uncertainty-aware sEMG-based hand gesture classifier is proposed. Applying it allows us to quickly build a model that makes its inference along with explainable, quantified, multidimensional uncertainties. This directly addresses the black-box concern of the HGR process. Secondly, to fill the gap left by the lack of consensus on the definition of model reliability in this field, a proper definition of model reliability is proposed. Based on it, reliability analysis can be performed as a new dimension of evaluation to help select the best model without relying only on classification accuracy. Our extensive experimental results have shown the efficiency of the proposed reliability analysis, which encourages researchers to use it as a supplementary tool for model evaluation. Next, an uncertainty-aware model is designed based on the proposed framework to address the low robustness of hand grasp recognition. This offers an opportunity to investigate whether reliable models can achieve robust performance. The results show that the proposed model can improve the long-term robustness of hand grasp recognition by rejecting highly uncertain predictions. Finally, a simple but effective normalisation approach is proposed to improve the robustness of inter-subject HGR, thus addressing the clinical challenge of having only a limited amount of data from any individual. The comparison results show that it outperforms a state-of-the-art (SoA) transfer learning method when only one training cycle is available. In summary, this study presents promising methods for pursuing an accurate, robust, and reliable classifier, which is the overarching goal for sEMG-based HGR. A direction for future work is the inclusion of these methods in real-time myoelectric control applications.
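The idea of rejecting highly uncertain predictions can be sketched in a minimal form. The entropy-based score and the threshold below are illustrative assumptions, far simpler than the multidimensional uncertainties the thesis proposes.

```python
# Hedged sketch: improve reliability by refusing to classify inputs whose
# predictive entropy (over softmax class probabilities) is too high.
# The model, threshold, and data are illustrative, not the thesis's.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Entropy of each row of class probabilities (higher = less certain)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def predict_with_rejection(probs: np.ndarray, threshold: float) -> np.ndarray:
    """Class predictions, with -1 marking rejected (too uncertain) inputs."""
    preds = probs.argmax(axis=1)
    preds[predictive_entropy(probs) > threshold] = -1  # defer, don't guess
    return preds

# Toy probabilities for a 3-gesture classifier.
probs = np.array([
    [0.97, 0.02, 0.01],   # confident -> accept
    [0.40, 0.35, 0.25],   # ambiguous -> reject
])
print(predict_with_rejection(probs, threshold=0.5))  # [0, -1]
```

A rejected input would typically fall back to a safe default (e.g. holding the current prosthesis state) rather than risking a wrong command.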

    Novel approaches for hierarchical classification with case studies in protein function prediction

    A very large amount of research in the data mining, machine learning, statistical pattern recognition and related research communities has focused on flat classification problems. However, many problems in the real world, such as hierarchical protein function prediction, have their classes naturally organised into hierarchies. The task of hierarchical classification, however, needs to be better defined, as researchers in one application domain are often unaware of similar efforts developed in other research areas. The first contribution of this thesis is to survey the task of hierarchical classification across different application domains and present a unifying framework for the task. After clearly defining the problem, we explore novel approaches to the task. Based on the understanding gained by surveying the task of hierarchical classification, there are three major approaches to dealing with hierarchical classification problems. The first approach is to use one of the many existing flat classification algorithms to predict only the leaf classes in the hierarchy. Note that, in the training phase, this approach completely ignores the hierarchical class relationships, i.e. the parent-child and sibling class relationships, but in the testing phase the ancestral classes of an instance can be inferred from its predicted leaf classes. The second approach is to build a set of local models, by training one flat classification algorithm for each local view of the hierarchy. The two main variations of this approach are: (a) training a local flat multi-class classifier at each non-leaf class node, where each classifier discriminates among the child classes of its associated class; or (b) training a local flat binary classifier at each node of the class hierarchy, where each classifier predicts whether or not a new instance has the classifier’s associated class.
In both these variations, in the testing phase a procedure is used to combine the predictions of the set of local classifiers in a coherent way, avoiding inconsistent predictions. The third approach is to use a global-model hierarchical classification algorithm, which builds one single classification model by taking into account all the hierarchical class relationships in the training phase. In the context of this categorisation of hierarchical classification approaches, the other contributions of this thesis are as follows. The second contribution of this thesis is a novel algorithm based on the local-classifier-per-parent-node approach: the selective representation approach, which automatically selects the best protein representation to use at each non-leaf class node. The third contribution is a global-model hierarchical classification extension of the well-known naive Bayes algorithm. Given the good predictive performance of the global-model hierarchical-classification naive Bayes algorithm, we relax the naive Bayes assumption that attributes are independent of each other given the class by using the concept of k dependencies. Hence, we extend the flat-classification k-Dependence Bayesian network classifier to the task of hierarchical classification, which is the fourth contribution of this thesis. Both the proposed global-model hierarchical classification naive Bayes classifier and the proposed global-model hierarchical k-Dependence Bayesian network classifier have achieved predictive accuracies that were, overall, significantly higher than those obtained by their corresponding local hierarchical classification versions, across a number of datasets for the task of hierarchical protein function prediction.
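The local-classifier-per-parent-node variation described above can be sketched as follows. The tiny hierarchy, the synthetic data, and the choice of base classifier are all invented for illustration; they are not the thesis's protein representations or models.

```python
# Hedged sketch of the local-classifier-per-parent-node approach: one flat
# classifier per non-leaf node discriminates among its children, and
# prediction descends the tree top-down, so the predicted class path is
# always internally consistent (no child without its ancestor).
import numpy as np
from sklearn.linear_model import LogisticRegression

children = {"root": ["enzyme", "binding"], "enzyme": ["kinase", "hydrolase"]}
parent_of = {"enzyme": "root", "binding": "root",
             "kinase": "enzyme", "hydrolase": "enzyme"}

def path_to_root(node):
    """Nodes from `node` up to, and including, the root."""
    path = [node]
    while path[-1] != "root":
        path.append(parent_of[path[-1]])
    return path

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
# Toy leaf labels tied to the features so the local models are learnable.
leaves = np.where(X[:, 0] > 0,
                  np.where(X[:, 1] > 0, "kinase", "hydrolase"),
                  "binding")

# Train one local flat classifier at each non-leaf node, using only the
# examples whose class path passes through that node; the target is which
# child of the node lies on each example's path.
local = {}
for parent in children:
    idx = [i for i, leaf in enumerate(leaves) if parent in path_to_root(leaf)]
    y = [path_to_root(leaves[i])[path_to_root(leaves[i]).index(parent) - 1]
         for i in idx]
    local[parent] = LogisticRegression().fit(X[idx], y)

def predict(x):
    """Top-down prediction: descend from the root until reaching a leaf."""
    node = "root"
    while node in children:
        node = local[node].predict(x.reshape(1, -1))[0]
    return node

print(predict(X[0]))
```

A global-model algorithm would instead fit one model over the whole hierarchy; the point of this sketch is only the coherent combination of local decisions.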

    Representation learning for generalisation in medical image analysis

    To help diagnose, treat, manage, prevent and predict diseases, medical image analysis plays an increasingly crucial role in modern health care. In particular, using machine learning (ML) and deep learning (DL) techniques to process medical imaging data such as MRI, CT and X-ray scans has been a hot research topic. Accurate and generalisable medical image segmentation using ML and DL is one of the most challenging medical image analysis tasks. The challenges are mainly caused by two key factors: a) the variation of data statistics across different clinical centres or hospitals, and b) the lack of extensive annotations of medical data. One of the best ways to tackle these challenges is to learn disentangled representations, which aims to separate out, or disentangle, the underlying explanatory generative factors into disjoint subsets. Importantly, disentangled representations can be efficiently learnt from raw training data with limited annotations. Although learning disentangled representations is evidently well suited to these challenges, several problems in this area remain open. First, no work has systematically studied how much disentanglement is achieved with different learning and design biases, or how different biases affect task performance, for medical data. Second, the benefit of leveraging disentanglement to design models that generalise well on new data has not been well studied, especially in the medical domain. Finally, the independence prior for disentanglement is too strong an assumption and does not approximate the true generative factors well. Motivated by these problems, this thesis focuses on understanding the role of disentanglement in medical image analysis, measuring how different biases affect disentanglement and task performance, and then using disentangled representations to improve generalisation performance and exploring better representations beyond disentanglement.
In the medical domain, content-style disentanglement is one of the most effective frameworks for learning disentangled representations. It disentangles and encodes image “content” into a spatial tensor, and image appearance or “style” into a vector that contains information on imaging characteristics. Based on an extensive review of disentanglement, I conclude that it is unclear how different design and learning biases affect the performance of content-style disentanglement methods. Hence, two metrics are proposed to measure the degree of content-style disentanglement by evaluating the informativeness and correlation of representations. By modifying the design and learning biases in three popular content-style disentanglement models, the degree of disentanglement and the task performance of different model variants have been evaluated. A key conclusion is that there exists a sweet spot between task performance and the degree of disentanglement; achieving this sweet spot is the key to designing disentanglement models. Generalising deep models to new data from new centres (termed here domains) remains a challenge, largely attributed to shifts in data statistics (domain shifts) between the source and unseen domains. Building on the findings of the aforementioned disentanglement metrics study, I design two content-style disentanglement approaches for generalisation. First, I propose two data augmentation methods that improve generalisation. The Resolution Augmentation method generates more diverse data by rescaling images to different resolutions. Subsequently, the Factor-based Augmentation method generates more diverse data by projecting the original samples onto disentangled latent spaces and combining the learned content and style factors from different domains. To learn more generalisable representations, I integrate gradient-based meta-learning with disentanglement.
Gradient-based meta-learning splits the training data into meta-train and meta-test sets to simulate and handle domain shifts during training, which has shown superior generalisation performance. Considering the limited annotations of data, I propose a novel semi-supervised meta-learning framework with disentanglement. I explicitly model the representations related to domain shifts. Disentangling the representations and combining them to reconstruct the input image allows unlabelled data to be used to better approximate the true domain shifts within a meta-learning setting. Humans can quickly learn to accurately recognise anatomy of interest from medical images with limited guidance. Such recognition ability easily generalises to new images from different clinical centres and to new tasks in other contexts. This rapid and generalisable learning ability is mostly due to the compositional structure of image patterns in the human brain, which is little exploited in the medical domain. In this thesis, I explore how compositionality can be applied to learning more interpretable and generalisable representations. Overall, I propose that the ground-truth generative factors that generate medical images satisfy the compositional equivariance property; hence, a good representation that approximates the ground-truth factors well has to be compositionally equivariant. By modelling the compositional representations with learnable von Mises-Fisher kernels, I explore how different design and learning biases can be used to enforce the representations to be more compositionally equivariant under different learning settings. Overall, this thesis creates new avenues for further research in the area of generalisable representation learning in medical image analysis, which we believe is key to more generalised machine learning and deep learning solutions in healthcare.
In particular, the proposed metrics can be used to guide future work on designing better content-style frameworks. The disentanglement-based meta-learning approach sheds light on leveraging meta-learning for better model generalisation in a low-data regime. Finally, we believe compositional representation learning will play an increasingly important role in designing more generalisable and interpretable models in the future.
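To make the correlation side of the proposed metrics concrete, a simplified stand-in can be sketched: scoring how strongly a style code correlates with a content code (lower is better disentangled). The exact metrics in the thesis differ; the mean absolute Pearson correlation below is my own illustrative proxy.

```python
# Hedged sketch: a simple correlation score between "content" and "style"
# codes as a stand-in for a content-style disentanglement metric.
# All data and the metric itself are illustrative assumptions.
import numpy as np

def mean_abs_correlation(content: np.ndarray, style: np.ndarray) -> float:
    """Mean |Pearson r| over all (content-dim, style-dim) pairs."""
    c = (content - content.mean(0)) / (content.std(0) + 1e-8)
    s = (style - style.mean(0)) / (style.std(0) + 1e-8)
    corr = c.T @ s / len(c)          # (c_dims, s_dims) correlation matrix
    return float(np.abs(corr).mean())

rng = np.random.default_rng(0)
content = rng.normal(size=(1000, 8))
entangled_style = content[:, :4] + 0.1 * rng.normal(size=(1000, 4))
independent_style = rng.normal(size=(1000, 4))

# A style code leaking content information scores markedly higher than one
# that is statistically independent of the content.
print(mean_abs_correlation(content, entangled_style))
print(mean_abs_correlation(content, independent_style))
```

The informativeness half of the metric pair would instead probe how well each code predicts a downstream quantity; that part is omitted here.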

    BRAMAC: Compute-in-BRAM Architectures for Multiply-Accumulate on FPGAs

    Deep neural network (DNN) inference using reduced integer precision has been shown to achieve significant improvements in memory utilization and compute throughput with little or no accuracy loss compared to full-precision floating-point. Modern FPGA-based DNN inference relies heavily on the on-chip block RAM (BRAM) for model storage and the digital signal processing (DSP) unit for implementing the multiply-accumulate (MAC) operation, a fundamental DNN primitive. In this paper, we enhance the existing BRAM to also compute MAC by proposing BRAMAC (Compute-in-BRAM Architectures for Multiply-Accumulate). BRAMAC supports 2's complement 2- to 8-bit MAC in a small dummy BRAM array using a hybrid bit-serial & bit-parallel data flow. Unlike previous compute-in-BRAM architectures, BRAMAC allows read/write access to the main BRAM array while computing in the dummy BRAM array, enabling both persistent and tiling-based DNN inference. We explore two BRAMAC variants: BRAMAC-2SA (with 2 synchronous dummy arrays) and BRAMAC-1DA (with 1 double-pumped dummy array). BRAMAC-2SA/BRAMAC-1DA can boost the peak MAC throughput of a large Arria-10 FPGA by 2.6×/2.1×, 2.3×/2.0×, and 1.9×/1.7× for 2-bit, 4-bit, and 8-bit precisions, respectively, at the cost of a 6.8%/3.4% increase in the FPGA core area. By adding BRAMAC-2SA/BRAMAC-1DA to a state-of-the-art tiling-based DNN accelerator, an average speedup of 2.05×/1.7× and 1.33×/1.52× can be achieved for AlexNet and ResNet-34, respectively, across different model precisions.
    Comment: 11 pages, 13 figures, 3 tables, FCCM conference 202
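A software model of a bit-serial 2's complement MAC, the style of dataflow the abstract describes, can be sketched as follows. This mirrors only the arithmetic, not BRAMAC's actual BRAM circuitry or its bit-parallel half; all names are illustrative.

```python
# Hedged sketch of a bit-serial two's complement MAC: each "cycle" consumes
# one bit of a weight, adding the shifted activation; the MSB of a two's
# complement number carries negative weight, so it is subtracted.

def bit_serial_mac(weights, activations, nbits=8):
    """Accumulate sum(w*a), processing one weight bit per cycle."""
    acc = 0
    for w, a in zip(weights, activations):
        for i in range(nbits):
            if (w >> i) & 1:
                term = a << i                      # shifted partial product
                acc += -term if i == nbits - 1 else term  # MSB is negative
    return acc

def to_twos(v, nbits):
    """Encode a signed integer in nbits-wide two's complement."""
    return v & ((1 << nbits) - 1)

# 4-bit signed weights (range -8..7), as in a low-precision DNN layer.
signed_weights = (3, -2, 7, -8)
weights = [to_twos(w, 4) for w in signed_weights]
activations = [5, 6, -4, 2]
print(bit_serial_mac(weights, activations, nbits=4))  # 3*5 - 2*6 - 7*4 - 8*2 = -41
```

An nbits-bit weight thus costs nbits serial cycles per MAC; BRAMAC's hybrid scheme recovers throughput by processing activations bit-parallel in hardware.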

    On the Mechanism of Building Core Competencies: a Study of Chinese Multinational Port Enterprises

    This study aims to explore how Chinese multinational port enterprises (MNPEs) build their core competencies. Core competencies are a firm’s special capabilities and resources for gaining sustainable competitive advantage (SCA) in the marketplace, and the concept has led to extensive research and debate. However, few studies have inquired into the mechanisms of building core competencies in the context of Chinese MNPEs. Accordingly, answers were sought to three research questions: 1. What are the core competencies of the Chinese MNPEs? 2. What are the mechanisms that the Chinese MNPEs use to build their core competencies? 3. What are the paths that the Chinese MNPEs pursue to build their resource bases? The study adopted a multiple-case study design, focusing on the mechanism of building core competencies from the resource-based view (RBV). It purposively selected five leading Chinese MNPEs and three industry associations as Case Companies. The study revealed three main findings. First, it identified three generic core competencies possessed by the Case Companies, i.e., innovation in business models and operations, utilisation of technologies, and acquisition of strategic resources. Second, it developed the conceptual framework of the Mechanism of Building Core Competencies (MBCC), a process of change in collective learning concerning the effective and efficient utilisation of a firm’s resources in response to critical events. Third, it proposed three paths to build core competencies, i.e., enhancing collective learning, selecting sustainable processes, and building the resource base. The study contributes to the knowledge of core competencies and the RBV in three ways: (1) presenting three generic core competencies of the Chinese MNPEs, (2) proposing a new conceptual framework to explain how Chinese MNPEs build their core competencies, and (3) suggesting a solid anchor point (the MBCC) to explain the links among resources, core competencies, and SCA.
The findings set benchmarks for the Chinese logistics industry and provide guidelines for building core competencies.