Autonomic behavioural framework for structural parallelism over heterogeneous multi-core systems.
With the continuous advancement in hardware technologies, significant research has been devoted to designing and developing high-level parallel programming models that allow programmers to exploit the latest developments in heterogeneous multi-core/many-core architectures. Structured programming paradigms offer a viable solution for efficiently programming modern heterogeneous multi-core architectures equipped with one or more programmable Graphics Processing Units (GPUs). Applying structured programming paradigms, it is possible to subdivide a system into building blocks (modules, skeletons or components) that can be created independently and then used in different systems to derive multiple functionalities. Exploiting such systematic divisions, it is possible to address extra-functional features such as application performance, portability and resource utilisation at the component level in heterogeneous multi-core architectures. While the computing function of a building block can vary for different applications, the behaviour (semantics) of the block remains intact. Therefore, by understanding the behaviour of building blocks and their structural compositions in parallel patterns, the process of constructing and coordinating a structured application can be automated. In this thesis we propose the Structural Composition and Interaction Protocol (SKIP) as a systematic methodology for exploiting the structured programming paradigm (the building-block approach in this case) to construct a structured application and to extract/inject information from/to it. Using the SKIP methodology, we have designed and developed the Performance Enhancement Infrastructure (PEI), a SKIP-compliant autonomic behavioural framework that automatically coordinates structured parallel applications based on extracted extra-functional properties related to the parallel computation patterns.
We have used 15 different PEI-based applications (ranging from large-scale applications with heavy input workloads that take hours to execute to small-scale applications that take seconds) to evaluate PEI in terms of overhead and performance improvement. The experiments were carried out on 3 different heterogeneous (CPU/GPU) multi-core architectures: one cluster with 4 symmetric nodes and one GPU per node, and 2 single machines with one GPU each. Our results demonstrate that, with less than 3% overhead, we can achieve up to one order of magnitude speed-up when using PEI to enhance application performance
Novel techniques of computational intelligence for analysis of astronomical structures
Gravitational forces cause the formation and evolution of a variety of cosmological structures. The detailed investigation and study of these structures is a crucial step towards our understanding of the universe. This thesis provides several solutions for the detection and classification of such structures. In the first part of the thesis, we focus on astronomical simulations, and we propose two algorithms to extract stellar structures. Although they follow different strategies (while the first one is a downsampling method, the second one keeps all samples), both techniques help to build more effective probabilistic models. In the second part, we consider observational data, and the goal is to overcome some of the common challenges in observational data such as noisy features and imbalanced classes. For instance, when not enough examples are present in the training set, two different strategies are used: a) nearest neighbor technique and b) outlier detection technique. In summary, both parts of the thesis show the effectiveness of automated algorithms in extracting valuable information from astronomical databases
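The nearest-neighbour strategy mentioned for classes with too few training examples can be illustrated by SMOTE-style interpolation, which synthesizes new minority-class points between existing ones. This is a minimal sketch of the general idea only; the function name and details are illustrative assumptions, not the thesis's actual algorithm:

```python
import numpy as np

def smote_like_sample(minority, seed=0):
    """Synthesize one new minority-class point by interpolating between a
    randomly chosen sample and its nearest neighbour (SMOTE-style sketch;
    illustrative only, not the thesis's exact method)."""
    rng = np.random.default_rng(seed)
    minority = np.asarray(minority, dtype=float)
    i = rng.integers(len(minority))
    x = minority[i]
    dists = np.linalg.norm(minority - x, axis=1)
    dists[i] = np.inf                  # exclude the point itself
    nn = minority[np.argmin(dists)]
    lam = rng.random()                 # random position along the segment
    return x + lam * (nn - x)
```

Because the synthetic point lies on the segment between a real sample and its neighbour, it stays inside the region already occupied by the minority class.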
Seismological data acquisition and signal processing using wavelets
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This work deals with two main fields:
a) The design, building, installation, testing, evaluation, deployment and maintenance of the Seismological Network of Crete (SNC) of the Laboratory of Geophysics and Seismology (LGS) at the Technological Educational Institute (TEI) at Chania.
b) The use of Wavelet Transform (WT) in several applications during the operation of the aforementioned network.
SNC began operation in 2003. It was designed and built to provide denser network coverage, real-time data transmission to CRC, real-time telemetry, use of wired ADSL lines and dedicated private satellite links, real-time data processing and estimation of source parameters, as well as rapid dissemination of results. All of the above were implemented using commercial hardware and software, modified where necessary; the author also designed and deployed additional software modules. Up to July 2008, SNC had recorded 5500 identified events (around 970 more than reported in the national bulletin for the same period), and its seismic catalogue is complete for magnitudes over 3.2, whereas the national catalogue was complete only for magnitudes over 3.7 before SNC began operation.
During its operation, several applications at SNC used WT as a signal-processing tool, benefiting from its suitability for non-stationary signals such as seismic signals. These applications are:
HVSR method: WT was used to reveal otherwise undetectable non-stationarities, eliminating errors in the estimation of a site's fundamental frequency.
Denoising: several wavelet denoising schemes were compared with band-pass filtering, the technique most widely used in seismology, to demonstrate the superiority of wavelet denoising and to select the most appropriate scheme for seismograms with different signal-to-noise ratios.
EEWS: WT was used to produce magnitude prediction equations and epicentral estimates from the first 5 s of the P-wave arrival.
Seismicity analysis: as an alternative analysis tool for detecting significant indicators in temporal patterns of seismicity, multiresolution wavelet analysis was used to estimate, over a period of several years, the time at which the maximum emitted earthquake energy was observed
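To illustrate the kind of wavelet denoising compared against band-pass filtering above, here is a one-level Haar soft-thresholding sketch. The Haar basis and the fixed threshold are assumptions for illustration; the thesis's actual wavelet bases and threshold-selection rules are not specified here:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoising (minimal sketch).
    Requires an even-length signal."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (average) coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (difference) coefficients
    # Soft threshold: shrink small detail coefficients (assumed to be noise) to zero
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    # Inverse Haar transform reconstructs the denoised signal
    y = np.empty_like(x)
    y[0::2] = (approx + detail) / np.sqrt(2)
    y[1::2] = (approx - detail) / np.sqrt(2)
    return y
```

Unlike a band-pass filter, the thresholding acts locally in time, so short transients (such as seismic onsets) survive while broadband noise between the pairs of samples is suppressed.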
Deep-Learning-Based 3-D Surface Reconstruction—A Survey
In the last decade, deep learning (DL) has significantly impacted industry and science. Initially largely motivated by computer vision tasks in 2-D imagery, the focus has shifted toward 3-D data analysis. In particular, 3-D surface reconstruction, i.e., reconstructing a 3-D shape from sparse input, is of great interest to a large variety of application fields. DL-based approaches show promising quantitative and qualitative surface reconstruction performance compared to traditional computer vision and geometric algorithms. This survey provides a comprehensive overview of these DL-based methods for 3-D surface reconstruction. To this end, we will first discuss input data modalities, such as volumetric data, point clouds, and RGB, single-view, multiview, and depth images, along with corresponding acquisition technologies and common benchmark datasets. For practical purposes, we also discuss evaluation metrics enabling us to judge the reconstructive performance of different methods. The main part of the document will introduce a methodological taxonomy ranging from point- and mesh-based techniques to volumetric and implicit neural approaches. Recent research trends, both methodological and for applications, are highlighted, pointing toward future developments
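One standard evaluation metric for surface reconstruction that such surveys discuss is the symmetric Chamfer distance between a reconstructed and a ground-truth point set. A minimal NumPy sketch (function name and exact normalisation are illustrative assumptions; published variants differ in squaring and averaging conventions):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N,3) and q (M,3):
    mean nearest-neighbour distance from p to q plus from q to p."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

The brute-force pairwise matrix is O(NM) in memory; practical implementations use a k-d tree or GPU batching for large point clouds.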
Detecting cow behaviours associated with parturition using computer vision
Monitoring of dairy cows and their calves during parturition is essential in determining whether there are any associated problems for mother and offspring. This is a critical period in the productive life of both. A difficult, assisted calving can impact the subsequent milk production, health and fertility of a cow, and its potential survival. Furthermore, an alert to the need for assistance would enhance animal and stockperson wellbeing. Manual monitoring of animal behaviour from images has been used for decades but is very labour intensive. Recent technological advances in Computer Vision, based on Deep Learning, now make automated monitoring of surveillance video feeds feasible. The benefits of image analysis compared to other monitoring systems are that it relies on neither transponder attachments nor invasive tools, and it may provide more information at a relatively low cost. Image analysis can also detect and track the calf, which is not possible with other monitoring methods. Cameras are commonly used to monitor animals; however, automated detection of behaviours is new, especially for livestock.
Using the latest state-of-the-art techniques in Computer Vision, and in particular the ground-breaking technique of Deep Learning, this thesis develops a vision-based model to detect the progress of parturition in dairy cows. A large-scale dataset of cow behaviour annotations was created, covering over 46 individual cow calvings and approximately 690 hours of video footage with over 2.5k video clips, each 3 to 10 seconds long. The model was trained on seven different behaviours: standing, walking, shuffling, lying, eating, drinking, and contractions while lying. The developed network correctly classified the seven behaviours with an accuracy of between 80% and 95%. The accuracy in predicting contractions while lying down was 83%, which in itself can serve as an early calving alert, as all cows start contractions one to two hours before giving birth. The performance of the model was also comparable to methods for human action classification on the Kinetics dataset
Enabling Technology in Optical Fiber Communications: From Device, System to Networking
This book explores the enabling technologies in optical fiber communications. It focuses on state-of-the-art advances from fundamental theories, devices, and subsystems to networking applications, as well as future perspectives of optical fiber communications. The topics covered include integrated photonics, fiber optics, fiber and free-space optical communications, and optical networking
Action recognition from RGB-D data
In recent years, action recognition based on RGB-D data has attracted increasing attention. Unlike traditional 2D action recognition, RGB-D data contains extra depth and skeleton modalities, each with its own characteristics. This thesis presents seven novel methods that take advantage of the three modalities for action recognition.
First, effective handcrafted features are designed, and a frequent pattern mining method is employed to mine the most discriminative, representative and non-redundant features for skeleton-based action recognition. Second, to take advantage of powerful Convolutional Neural Networks (ConvNets), it is proposed to represent the spatio-temporal information carried in 3D skeleton sequences as three 2D images, by encoding the joint trajectories and their dynamics into colour distributions in the images; ConvNets are then adopted to learn discriminative features for human action recognition. Third, for depth-based action recognition, three data augmentation strategies are proposed to apply ConvNets to small training datasets. Fourth, to take full advantage of the 3D structural information offered by the depth modality and its insensitivity to illumination variations, three simple, compact yet effective image-based representations are proposed, with ConvNets adopted for feature extraction and classification. However, both of the previous two methods are sensitive to noise and cannot differentiate fine-grained actions well. Fifth, to address this issue, it is proposed to represent a depth map sequence as three pairs of structured dynamic images at body, part and joint levels, respectively, through bidirectional rank pooling. The structured dynamic image preserves spatial-temporal information, enhances structure information across body parts/joints and different temporal scales, and takes advantage of ConvNets for action recognition. Sixth, it is proposed to extract and use scene flow for action recognition from RGB and depth data. Last, to exploit the joint information in multi-modal features arising from heterogeneous sources (RGB, depth), it is proposed to cooperatively train a single ConvNet (referred to as c-ConvNet) on both RGB and depth features, and to deeply aggregate the two modalities to achieve robust action recognition
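The rank pooling that produces a dynamic image can be approximated in closed form as a fixed linear weighting of the frames. The sketch below uses a commonly cited simplified weighting (later frames get larger weights) for the forward direction only; it is an illustration of the general idea, not necessarily the thesis's exact formulation:

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a frame sequence of shape (T, H, W) into a single (H, W)
    image via approximate rank pooling (simplified linear weights)."""
    frames = np.asarray(frames, dtype=float)
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    alpha = 2 * t - T - 1          # weights grow linearly with time
    # Weighted sum over the time axis encodes the temporal ordering
    return np.tensordot(alpha, frames, axes=1)
```

Bidirectional pooling, as described above, would additionally apply the same operation to the time-reversed sequence and keep both resulting images.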
Sensor Independent Deep Learning for Detection Tasks with Optical Satellites
The design of optical satellite sensors varies widely, and this variety is mirrored in the data they produce. Deep learning has become a popular method for automating tasks in remote sensing, but it is currently ill-equipped to deal with this diversity of satellite data. In this work, sensor-independent deep learning models are proposed, which can ingest data from multiple satellites without retraining. This strategy is applied to two tasks in remote sensing: cloud masking and crater detection. For cloud masking, a new dataset (the largest to date by number of scenes) is created for Sentinel-2. Combining this with other datasets from the Landsat missions yields a state-of-the-art deep learning model capable of masking clouds on a wide array of satellites, including ones it was not trained on. For small-crater detection on Mars, a dataset is also produced, and state-of-the-art deep learning approaches are compared. By combining datasets from sensors with different resolutions, a highly accurate sensor-independent model is trained. This is used to produce the largest database of crater detections for any solar system body to date, comprising 5.5 million craters across Isidis Planitia, Mars, using CTX imagery. Novel geospatial statistical techniques are used to explore this database of small craters, finding evidence for large populations of distant secondary impacts. Across these problems, sensor independence is shown to offer unique benefits, both for model performance and for scientific outcomes, and in the future it can aid in many problems relating to data fusion, time-series analysis, and on-board applications. Further work on a wider range of problems is needed to determine the generalisability of the proposed strategies for sensor independence, and extension from optical sensors to other kinds of remote sensing instruments could expand the possible applications of this new technique