485 research outputs found

    Image similarity in medical images

    Recent experiments have indicated a strong influence of the substrate grain orientation on self-ordering in anodic porous alumina. Anodic porous alumina with straight pore channels grown in a stable, self-ordered manner is formed on (001)-oriented Al grains, while a disordered porous pattern with tilted pore channels growing in an unstable manner is formed on (101)-oriented Al grains. In this work, a numerical simulation of the pore growth process is carried out to understand this phenomenon. The rate-determining step of the oxide growth is assumed to be the Cabrera-Mott barrier at the oxide/electrolyte (o/e) interface, while the substrate is assumed to determine the ratio β between the ionization and oxidation reactions at the metal/oxide (m/o) interface. By numerically solving the electric field inside the growing porous alumina during anodization, the migration rates of the ions, and hence the evolution of the o/e and m/o interfaces, are computed. The simulated results show that pore growth is more stable when β is higher. A higher β corresponds to more Al being ionized and migrating away from the m/o interface rather than being oxidized, and hence a higher retained O:Al ratio in the oxide. The experimentally measured oxygen content in the self-ordered porous alumina on (001) Al is indeed found to be about 3% higher than that in the disordered alumina on (101) Al, in agreement with the theoretical prediction. The results therefore suggest that ionization is relatively easier on the (001) Al substrate than on (101) Al, and that this leads to the more stable growth of the pore channels on (001) Al.
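
    The abstract describes the model only at a high level; its key mechanism (a Cabrera-Mott, field-driven ionic current whose Al flux is split at the m/o interface into an oxidation fraction 1 - β and an ionization fraction β) can be caricatured with a zero-dimensional sketch of the barrier-oxide thickness. Everything below, including the exponential current law and every parameter value, is an illustrative assumption and not the actual simulation, which solves the full electric field inside the growing porous layer.

```python
import math

# Toy, zero-dimensional caricature of field-regulated barrier-oxide growth.
# All rate laws and numbers are illustrative assumptions, not the model of the work.
U     = 40.0    # voltage dropped across the barrier oxide (V), assumed
E0    = 0.5     # characteristic Cabrera-Mott field (V/nm), assumed
j0    = 1e-3    # pre-exponential ionic current (arbitrary units), assumed
k_ox  = 100.0   # oxide thickness gained per unit oxidation current (nm/unit), assumed
v_dis = 0.1     # oxide loss rate at the o/e interface (nm/s), assumed
dt    = 0.1     # explicit Euler time step (s)

for beta in (0.5, 0.7, 0.9):               # fraction of Al ionized rather than oxidized
    d = 40.0                               # initial barrier thickness (nm)
    for _ in range(50_000):
        E = U / d                          # field across the barrier (V/nm)
        j = j0 * math.exp(E / E0)          # high-field (Cabrera-Mott type) current
        growth = k_ox * (1.0 - beta) * j   # only non-ionized Al forms new oxide at m/o
        d += (growth - v_dis) * dt         # thickness self-regulates toward growth == loss
    # Analytic steady state of this toy model: growth(d*) == v_dis
    d_star = (U / E0) / math.log(v_dis / (k_ox * (1.0 - beta) * j0))
    print(f"beta={beta:.1f}: d after {50_000 * dt:.0f} s = {d:.1f} nm, "
          f"toy steady state = {d_star:.1f} nm, field = {U / d_star:.2f} V/nm")
```

    In this toy picture a larger β simply leaves less Al available for oxide formation and settles at a thinner, higher-field barrier; linking the β split to the stability of the pore channels requires the full two-dimensional field solution reported in the work.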

    Innovative intelligent sensors to objectively understand exercise interventions for older adults

    The population of most western countries is ageing and, therefore, the ageing issue now matters more than ever. According to the reports of the United Nations in 2017, there were a total of 15.8 million (26.9%) people over 60 years of age in the United Kingdom, and the numbers are projected to reach 23.5 million (31.5%) by 2050. Spending on medical treatment and healthcare for older adults accounts for two-fifths of the UK National Health Service (NHS) budget. Keeping older people healthy is a challenge. In general, exercise is believed to benefit both mental and physical health. Specifically, many studies have shown that resistance band exercises have potentially positive effects on both mental and physical health. However, treatment using resistance band exercise is usually done in unmonitored environments, such as at home or in a rehabilitation centre; therefore, the exercise cannot be measured and/or quantified accurately. Despite many years of research, the true effectiveness of resistance band exercises remains unclear. [Continues.]

    Neural-network-aided automatic modulation classification

    Automatic modulation classification (AMC) is a pattern matching problem which impacts diverse telecommunication systems, with significant applications in military and civilian contexts alike. Although its appearance in the literature is far from novel, recent developments in machine learning technologies have triggered an increased interest in this area of research. In the first part of this thesis, an AMC system is studied where, in addition to the typical point-to-point setup of one receiver and one transmitter, a second transmitter is also present, which is considered an interfering device. A convolutional neural network (CNN) is used for classification. In addition to studying the effect of interference strength, we propose a modification attempting to leverage some of the otherwise debilitating effects of interference, and also study the effect of signal quantisation upon classification performance. Subsequently, we assess a cooperative setting of AMC, namely one where the receiver features multiple antennas and receives different versions of the same signal from the single-antenna transmitter. Through the combination of data from different antennas, it is shown that this cooperative approach leads to notable performance improvements over the established baseline. Finally, the cooperative scenario is expanded to a more complicated setting, where a realistic geographic distribution of four receiving nodes is modelled and, furthermore, the decision-making mechanism with regard to the identity of a signal resides in a fusion centre independent of the receivers, connected to them over finite-bandwidth backhaul links. In addition to the common concerns over classification accuracy and inference time, data reduction methods of various types (including "trained" lossy compression) are implemented with the objective of minimising the data load placed upon the backhaul links.
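
    The abstract does not specify the network architecture; the sketch below is only a generic illustration of the kind of one-dimensional CNN commonly applied to raw I/Q samples for modulation classification. The layer sizes, input length, number of classes, and the simple logit-averaging rule for the multi-antenna case are all assumptions, not the models studied in the thesis.

```python
import torch
import torch.nn as nn

class AMCNet(nn.Module):
    """Illustrative 1-D CNN over raw I/Q samples (2 channels x 1024 samples).
    The architecture is an assumption for illustration only."""
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),               # collapse the time axis
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, iq: torch.Tensor) -> torch.Tensor:
        x = self.features(iq)                      # (batch, 64, 1)
        return self.classifier(x.squeeze(-1))      # (batch, num_classes) logits

model = AMCNet()
iq = torch.randn(4, 2, 1024)                       # a toy batch of 4 baseband captures

# One plausible cooperative (multi-antenna) combination: average the per-antenna
# logits (late fusion). This is an assumed rule, not necessarily the thesis's.
logits_per_antenna = [model(iq + 0.1 * torch.randn_like(iq)) for _ in range(2)]
fused = torch.stack(logits_per_antenna).mean(dim=0)
print(fused.shape)                                 # torch.Size([4, 8])
```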

    Artificial Intelligence Tools for Facial Expression Analysis.

    Inner emotions show visibly upon the human face and are understood as a basic guide to an individual's inner world. It is, therefore, possible to determine a person's attitudes, and the effects of others' behaviour on their deeper feelings, through examining facial expressions. In real-world applications, machines that interact with people need strong facial expression recognition. This recognition is seen to hold advantages for varied applications in affective computing, advanced human-computer interaction, security, stress and depression analysis, robotic systems, and machine learning. This thesis starts by proposing a benchmark of dynamic versus static methods for facial Action Unit (AU) detection. An AU activation is a set of local, individual facial muscle movements that occur in unison, constituting a natural facial expression event. Detecting AUs automatically can provide explicit benefits since it considers both static and dynamic facial features. For this research, AU occurrence detection was conducted by extracting static and dynamic features, from both nominal hand-crafted and deep learning representations, from each static image of a video. This confirmed the superior ability of the pretrained model, which gives a clear leap in performance. Next, temporal modelling was investigated to detect the underlying temporal variation phases in dynamic sequences, using supervised and unsupervised methods. During these processes, the importance of stacking dynamic features on top of static ones was discovered when encoding deep features to learn temporal information, combining the spatial and temporal schemes simultaneously. This study also found that fusing both spatial and temporal features gives more long-term temporal pattern information. Moreover, we hypothesised that using an unsupervised method would enable the learning of invariant information from dynamic textures. Recently, cutting-edge advances have been achieved by approaches based on Generative Adversarial Networks (GANs). In the second section of this thesis, we propose a model based on the adoption of an unsupervised DCGAN for facial feature extraction and classification to achieve the following: the creation of facial expression images under different arbitrary poses (frontal, multi-view, and in the wild), and the recognition of emotion categories and AUs, in an attempt to resolve the problem of recognising the static seven classes of emotion in the wild. Thorough cross-database experimentation demonstrates that the proposed approach can improve generalization results. Additionally, we showed that the features learnt by the DCGAN process are poorly suited to encoding facial expressions when observed under multiple views, or when trained from a limited number of positive examples. Finally, this research focuses on disentangling identity from expression for facial expression recognition. A novel technique was implemented for emotion recognition from a single monocular image. A large-scale dataset (Face vid) was created from facial image videos rich in the variation and distribution of facial dynamics, appearance, identities, expressions, and 3D poses. This dataset was used to train a DCNN (ResNet) to regress the expression parameters from a 3D Morphable Model jointly with a back-end classifier.
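
    As a rough illustration of one ingredient mentioned above (re-using the representation learnt by an unsupervised DCGAN as features for expression classification), a minimal sketch follows. The discriminator layout, image size, pooling, and the linear classifier on top are all assumptions made for illustration; they are not the thesis's actual model.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Tiny DCGAN-style discriminator for 64x64 grayscale face crops (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 16 -> 8
        )
        self.real_fake = nn.Conv2d(128, 1, 8)  # adversarial head used only during GAN training

    def features(self, x: torch.Tensor) -> torch.Tensor:
        # After unsupervised adversarial training, intermediate activations are
        # globally pooled and reused as a fixed feature vector.
        return self.conv(x).mean(dim=(2, 3))   # (batch, 128)

disc = Discriminator()                 # would be trained adversarially on unlabelled faces
clf = nn.Linear(128, 7)                # seven basic emotion classes, assumed

faces = torch.randn(16, 1, 64, 64)     # toy batch of face crops
with torch.no_grad():                  # the DCGAN features stay frozen
    feats = disc.features(faces)
logits = clf(feats)
print(logits.shape)                    # torch.Size([16, 7])
```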

    Enhancing person annotation for personal photo management using content and context based technologies

    Rapid technological growth and the decreasing cost of photo capture mean that we are all taking more digital photographs than ever before. However, the lack of technology for automatically organising personal photo archives has left many users with poorly annotated photos, causing them great frustration when such photo collections are to be browsed or searched at a later time. As a result, there has recently been significant research interest in technologies for supporting effective annotation. This thesis addresses an important sub-problem of the broad annotation problem, namely "person annotation" associated with personal digital photo management. Solutions to this problem are provided using content analysis tools in combination with context data within the experimental photo management framework called "MediAssist". Readily available image metadata, such as location and date/time, are captured from digital cameras with in-built GPS functionality, and thus provide knowledge about when and where the photos were taken. Such information is then used to identify the "real-world" events corresponding to certain activities in the photo capture process. The problem of enabling effective person annotation is formulated in such a way that both "within-event" and "cross-event" relationships of persons' appearances are captured. The research reported in the thesis is built upon a firm foundation of content-based analysis technologies, namely face detection, face recognition, and body-patch matching, together with data fusion. Two annotation models are investigated in this thesis, namely progressive and non-progressive. The effectiveness of each model is evaluated against varying proportions of initial annotation, and against the type of initial annotation based on individual and combined face, body-patch, and person-context information sources. The results reported in the thesis strongly validate the use of multiple information sources for person annotation, whilst emphasising the advantage of event-based photo analysis in real-life photo management systems.
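
    The abstract identifies the cues that are fused (face, body-patch, and time/location context organised into events) but not the fusion rule itself; the sketch below shows one simple way such signals could be combined, using an assumed time-gap heuristic for event segmentation and an assumed weighted sum for late fusion. None of the thresholds, weights, or field names come from MediAssist.

```python
from datetime import datetime, timedelta

# Illustrative photo records: capture time plus content similarity scores for one
# candidate person (all values and field names are assumptions).
photos = [
    {"time": datetime(2009, 6, 1, 10, 0), "face_sim": 0.82, "body_sim": 0.40},
    {"time": datetime(2009, 6, 1, 10, 5), "face_sim": 0.10, "body_sim": 0.75},
    {"time": datetime(2009, 6, 3, 18, 30), "face_sim": 0.55, "body_sim": 0.20},
]

def segment_events(photos, gap=timedelta(hours=6)):
    """Group photos into events whenever the gap between consecutive captures
    exceeds a threshold (a simple, commonly used heuristic)."""
    events, current = [], [photos[0]]
    for prev, cur in zip(photos, photos[1:]):
        if cur["time"] - prev["time"] > gap:
            events.append(current)
            current = []
        current.append(cur)
    events.append(current)
    return events

def person_score(photo, event, w_face=0.6, w_body=0.3, w_event=0.1):
    """Late fusion of face, body-patch, and within-event cues. Weights are arbitrary;
    body-patch evidence is most useful within a single event, where clothing is
    unlikely to have changed."""
    event_prior = max(p["face_sim"] for p in event)   # person confidently seen in this event?
    return w_face * photo["face_sim"] + w_body * photo["body_sim"] + w_event * event_prior

for event in segment_events(photos):
    for photo in event:
        print(photo["time"], round(person_score(photo, event), 3))
```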