22 research outputs found

    Slotted Aloha for Networked Base Stations

    We study multi-base-station, multi-access systems in which the user-base-station adjacency is induced by geographical proximity. At each slot, each user transmits (is active) with a certain probability, independently of the other users, and is heard by all base stations within distance r. Both the users and the base stations are placed uniformly at random over the (unit) area. We first consider non-cooperative decoding, where base stations work in isolation, but a user is decoded as soon as one of its nearby base stations reads a clean signal from it. We find the decoding probability and quantify the gains introduced by multiple base stations. Specifically, the peak throughput increases linearly with the number of base stations m and is roughly m/4 larger than the throughput of a single base station that uses standard slotted Aloha. Next, we propose cooperative decoding, where mutually close base stations inform each other whenever they decode a user inside their coverage overlap. At each base station, the messages received from the nearby stations help resolve collisions through the interference cancellation mechanism. Building on our exact formulas for the non-cooperative case, we provide a heuristic formula for the cooperative decoding probability that reflects the actual performance well. Finally, we demonstrate by simulation significant gains of cooperation with respect to non-cooperative decoding. Comment: conference; submitted on Dec 15, 201
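    The non-cooperative setting described above is straightforward to simulate. The sketch below is a Monte Carlo illustration, not the authors' code, and all parameter values are made-up defaults: users and base stations are placed uniformly on the unit square, and an active user counts as decoded when some base station within distance r hears no other active user in the same slot.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_decoding_prob(n_users=100, m_bs=10, p=0.02, r=0.15, slots=200):
    """Monte Carlo estimate of the non-cooperative decoding probability:
    an active user is decoded in a slot when some base station within
    distance r hears no other active user (a 'clean' signal)."""
    decoded, active_total = 0, 0
    for _ in range(slots):
        users = rng.random((n_users, 2))      # uniform on the unit square
        stations = rng.random((m_bs, 2))
        active = np.flatnonzero(rng.random(n_users) < p)
        active_total += active.size
        # adj[i, j]: is active user i within distance r of station j?
        d = np.linalg.norm(users[active][:, None, :] - stations[None, :, :], axis=2)
        adj = d <= r
        heard = adj.sum(axis=0)               # active users heard per station
        # decoded if some nearby station hears only that one user
        decoded += int(np.sum(np.any(adj & (heard == 1), axis=1)))
    return decoded / max(active_total, 1)
```

    Re-running with larger m_bs raises the estimated decoding probability, which is the multi-base-station gain the abstract quantifies.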

    Integrative clustering by non-negative matrix factorization can reveal coherent functional groups from gene profile data

    Recent developments in molecular biology and techniques for genome-wide data acquisition have resulted in an abundance of data with which to profile genes and predict their function. These data sets may come from diverse sources, and it is an open question how to jointly address them and fuse them into a single prediction model. A prevailing technique for identifying groups of related genes that exhibit similar profiles is profile-based clustering. Cluster inference may benefit from consensus across different clustering models. In this paper we propose a technique that develops separate gene clusters from each of the available data sources and then fuses them by means of non-negative matrix factorization. We use gene profile data on the budding yeast S. cerevisiae to demonstrate that this approach can successfully integrate heterogeneous data sets and yield high-quality clusters that could not otherwise be inferred by simply merging the gene profiles prior to clustering.
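    A minimal sketch of the fusion idea, assuming the per-source clusterings are already available as label vectors over the same genes; the function name and the plain multiplicative-update NMF are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_clusterings(labelings, k, iters=200):
    """Fuse per-source clusterings by NMF: stack the one-hot membership
    indicators from every source and factorize them into k consensus
    clusters (multiplicative updates, Lee & Seung style)."""
    n = len(labelings[0])
    blocks = []
    for lab in labelings:
        ind = np.zeros((n, lab.max() + 1))
        ind[np.arange(n), lab] = 1.0          # one-hot cluster membership
        blocks.append(ind)
    V = np.hstack(blocks)                     # genes x (total cluster count)
    W = rng.random((n, k)) + 1e-3             # fused soft memberships
    H = rng.random((k, V.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W.argmax(axis=1)                   # consensus cluster per gene
```

    Each row of W is a nonnegative soft membership over the k fused clusters, so the argmax gives a hard consensus assignment.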

    Classification of cattle behaviour using convolutional neural networks

    The monitoring of cattle behaviour through sensor systems is gaining importance in the improvement of animal health, fertility and the management of large herds. Commercial farms commonly implement accelerometer-based systems to monitor the time an animal spends ruminating and eating, as well as its overall activity, which informs farmers about the health and fertility status of individual cattle. Ill or injured cattle feed and ruminate less, so tracking the duration and frequency of these states provides key indicators of animal health. Activity is used as a metric for the detection of oestrus (heat), which promotes more efficient fertilisation of dairy and beef cattle, reducing operating costs and increasing profits for farmers. The aim of the study was to determine whether Convolutional Neural Networks (CNNs) can enhance the accuracy of the multiple behaviour classifications derived from acceleration-based activity collars. CNN models are typically used to classify objects within images, but have been demonstrated to be effective at classifying time-series data across different domains. To evaluate their effectiveness for cattle behaviour classification, acceleration data were collected from 18 cows across 3 farms using neck-mounted collars which provided 3-axis acceleration values at a 10 Hz sampling frequency. Each cow was also equipped with a pressure-sensor halter which provided ground-truth data on the animal's behavioural state, likewise at 10 Hz. The ground truth from the halter allowed the CNN model to be trained to predict a number of key cattle behaviours. The model was then tested on separate data to assess performance. The CNN was able to classify the 3 activity states (rumination, eating and other) with an overall F1 score of 82%, compared to the reported collar classifications with an overall F1 score of 72%.
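    To make the pipeline concrete, here is a toy forward pass over one window of 3-axis, 10 Hz data through a single 1-D convolution layer, written in plain NumPy with random weights. The filter sizes and layer layout are assumptions for illustration only, not the trained architecture from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_forward(x, kernels, bias):
    """Valid 1-D convolution over the time axis with ReLU.
    x: (time, channels); kernels: (n_filters, width, channels)."""
    n_f, width, _ = kernels.shape
    t_out = x.shape[0] - width + 1
    out = np.empty((t_out, n_f))
    for t in range(t_out):
        window = x[t:t + width]                 # (width, channels)
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1])) + bias
    return np.maximum(out, 0.0)

def classify_window(window, n_classes=3):
    """Toy CNN forward pass for one 10 s window (100 x 3 at 10 Hz).
    Weights are random: this sketches the shape of the computation,
    not the trained model."""
    k = rng.standard_normal((8, 5, 3)) * 0.1    # 8 filters of width 5
    b = np.zeros(8)
    h = conv1d_forward(window, k, b)            # (96, 8) feature maps
    pooled = h.mean(axis=0)                     # global average pooling
    w_out = rng.standard_normal((8, n_classes)) * 0.1
    logits = pooled @ w_out
    e = np.exp(logits - logits.max())
    return e / e.sum()                          # softmax class probabilities
```

    In practice the output classes would map to the three activity states (rumination, eating, other) and the weights would come from training against the halter ground truth.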

    Behavioural classification of cattle using neck-mounted accelerometer-equipped collars

    Monitoring and classification of dairy cattle behaviours is essential for optimising milk yields. Early detection of illness, days before critical conditions occur, together with automatic detection of the onset of oestrus cycles, is crucial for obviating prolonged cattle treatments and improving pregnancy rates. Accelerometer-based sensor systems are becoming increasingly popular, as they automatically provide information about key cattle behaviours at an individual animal level, such as the level of restlessness and the time spent ruminating and eating, proxy measurements that indicate the onset of heat events and overall welfare. This paper reports on an approach to the development of algorithms that classify key cattle states based on a systematic dimensionality-reduction process using two feature-selection techniques. These are based on Mutual Information and Backward Feature Elimination, applied to knowledge-specific and generic time-series features extracted from raw accelerometer data. The extracted features are then used to train classification models based on a Hidden Markov Model, Linear Discriminant Analysis and Partial Least Squares Discriminant Analysis. The proposed feature-engineering methodology permits model deployment within the computing and memory restrictions imposed by operational settings. The models were based on measurement data from 18 steers, each animal equipped with an accelerometer-based neck-mounted collar and a muzzle-mounted halter, the latter providing the ground-truth data. A total of 42 time-series features were initially extracted, and the trade-off between model performance, computational complexity and memory footprint was explored. Results show that the classification model that best balances performance and computational complexity is based on Linear Discriminant Analysis using features selected through Backward Feature Elimination. The final model requires 1.83 ± 1.00 ms for feature extraction and 0.05 ± 0.01 ms for inference, with an overall balanced accuracy of 0.83.
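    As an illustration of the Mutual Information selection step, the sketch below ranks time-series features by a histogram-based MI estimate against the behaviour labels. The bin count and function names are assumptions for illustration; the paper's exact estimator is not specified here.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """MI (in nats) between a continuous feature x and discrete labels y,
    estimated by histogram binning -- a simple stand-in for the paper's
    Mutual Information feature-selection step."""
    edges = np.histogram_bin_edges(x, bins=bins)
    xb = np.digitize(x, edges[1:-1])            # bin index 0..bins-1
    joint = np.zeros((bins, y.max() + 1))
    for xi, yi in zip(xb, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def rank_features(X, y, bins=8):
    """Rank the columns of X by MI with the labels y, best first."""
    mi = np.array([mutual_information(X[:, j], y, bins) for j in range(X.shape[1])])
    return np.argsort(mi)[::-1], mi
```

    Keeping only the top-ranked columns is what shrinks the feature set toward the compute and memory budget of a collar-class device.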

    Universal impulse noise filter based on genetic programming

    No full text
    In this paper, we present a novel method for impulse noise filter construction, based on a switching scheme with two cascaded detectors and two corresponding estimators. Genetic programming, as a supervised learning algorithm, is employed to build two detectors with complementary characteristics. The first detector identifies the majority of noisy pixels. The second detector searches for the remaining noise missed by the first detector, usually hidden in image details or with amplitudes close to those of its local neighborhood. Both detectors are based on the robust estimators of location and scale, the median and MAD. The filter produced by the proposed method is capable of effectively suppressing all kinds of impulse noise, in contrast to many existing filters, which are specialized for only a particular noise model. In addition, we propose the use of a new impulse noise model, mixed impulse noise, which is more realistic and harder to treat than existing impulse noise models. The proposed model is a combination of two commonly used noise models: the salt-and-pepper and uniform impulse noise models. Simulation results show that the proposed two-stage GP filter produces excellent results and outperforms existing state-of-the-art filters.
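    The median and MAD building blocks can be illustrated with a hand-written single-stage switching filter: flag a pixel as an impulse when it deviates from the local median by more than k local MADs, then replace flagged pixels with that median. Note that the paper's detectors are evolved by genetic programming; this fixed threshold rule is only a simplified stand-in.

```python
import numpy as np

def impulse_filter(img, k=3.0):
    """Single-stage switching filter sketch: detector = median/MAD
    deviation test over a 3x3 window, estimator = local median."""
    pad = np.pad(img.astype(float), 1, mode='edge')
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3].ravel()
            med = np.median(win)
            mad = np.median(np.abs(win - med))
            # max(mad, 1) guards against a zero MAD in flat regions
            if abs(img[i, j] - med) > k * max(mad, 1.0):
                out[i, j] = med          # replace impulse with local median
    return out
```

    The switching design leaves clean pixels untouched, which is what preserves detail compared to applying a plain median filter everywhere.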

    V. Crnojević, “Mining Web Videos for Video Quality Assessment”

    No full text
    Correlating estimates of objective measures related to the presence of different coding artifacts with the quality of video as perceived by human observers is a non-trivial task. There is no shortage of data to learn from, thanks to the Internet and websites such as YouTube™. There has, however, been little done in the research community to use such resources to advance our understanding of perceived video quality. The problem is that it is not easy to obtain the Mean Opinion Score (MOS), a standard measure of perceived video quality, for more than a handful of videos. The paper presents an approach to determining the quality of a relatively large number of videos obtained randomly from YouTube™. Several measures related to motion, saliency and coding artifacts are calculated for the frames of each video. Programmable graphics hardware is used to perform clustering: first, to create an artifact-related signature of each video; then, to cluster the videos according to their signatures. To estimate video quality, the MOS is obtained for the representative videos closest to the cluster centers. This is then used as an estimate of the quality of all other videos in the cluster. Results based on 2,107 videos containing some 90,000,000 frames are presented in the paper.
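    The signature-clustering and MOS-propagation steps can be sketched as follows, using plain k-means in place of the GPU implementation; the data layout and the `mos_of_representative` callback are hypothetical stand-ins for the human MOS ratings of the representative videos.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain k-means, standing in for the GPU clustering in the paper."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

def estimate_mos(signatures, k, mos_of_representative):
    """Cluster artifact signatures, rate only the video closest to each
    cluster center, and propagate that MOS to the whole cluster.
    `mos_of_representative` is a hypothetical callback; in the paper the
    scores for the representatives come from human observers."""
    labels, centers = kmeans(signatures, k)
    est = np.empty(len(signatures))
    for c in range(k):
        members = np.flatnonzero(labels == c)
        if members.size == 0:
            continue
        d = ((signatures[members] - centers[c]) ** 2).sum(axis=1)
        rep = members[np.argmin(d)]             # video nearest the center
        est[members] = mos_of_representative(rep)
    return est
```

    This is how a handful of human MOS ratings can be stretched to cover thousands of videos: one subjective score per cluster, propagated by signature similarity.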