10 research outputs found

    Encoding and Decoding Techniques for Distributed Data Storage Systems

    Dimensionality reduction is the conversion of high-dimensional data into a meaningful representation of reduced dimensionality. Preferably, the reduced representation has a dimensionality that corresponds to the essential dimensionality of the data, i.e. the minimum number of parameters needed to account for the observed properties of the data [4]. Dimensionality reduction is important in many domains, since it facilitates classification, visualization, and compression of high-dimensional data by mitigating the curse of dimensionality and other undesired properties of high-dimensional spaces [5]. Dimension reduction can be beneficial not only for reasons of computational efficiency but also because it can improve the accuracy of the analysis. In this research area, it also significantly reduces the required storage space.
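
    As a concrete illustration of the idea (not of the specific techniques covered in this output), the sketch below reduces toy data with PCA and picks the smallest dimensionality that retains most of the variance; the 0.95 threshold and the synthetic data are assumptions for the example.

        # Minimal PCA sketch: project the data onto its directions of largest
        # variance and read off an "essential" dimensionality from the
        # cumulative explained variance.
        import numpy as np

        def pca_reduce(X, var_threshold=0.95):
            Xc = X - X.mean(axis=0)                    # center the data
            U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
            var_ratio = S**2 / np.sum(S**2)            # variance per component
            k = int(np.searchsorted(np.cumsum(var_ratio), var_threshold)) + 1
            return Xc @ Vt[:k].T, k                    # reduced data, dimensionality

        # Toy data: 3-D points that essentially live on a 2-D plane.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 3))
        X += 0.01 * rng.normal(size=X.shape)
        Z, k = pca_reduce(X)
        print(k)                                       # expected: 2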

    Some applications of distributed signal processing

    In this work we review some earlier distributed algorithms developed by the authors and collaborators, based on two different approaches: distributed moment estimation and distributed stochastic approximation. We show applications of these algorithms to image compression, linear classification, and stochastic optimal control. In all cases, the benefit of cooperation is clear: even when the nodes have access to only small portions of the data, by exchanging their estimates they achieve the same performance as a centralized architecture that gathers all the data from all the nodes.
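
    A minimal sketch of the cooperation mechanism described above, assuming a simple average-consensus protocol; the ring topology, step size, and iteration count are illustrative assumptions, not the authors' settings.

        # Distributed average consensus: each node repeatedly mixes its estimate
        # with its 1-hop neighbors', so all nodes converge to the global average
        # (here, a distributed estimate of the data's first moment).
        import numpy as np

        def consensus_average(local_values, adjacency, step=0.1, iters=200):
            x = np.asarray(local_values, dtype=float)
            A = np.asarray(adjacency, dtype=float)
            deg = A.sum(axis=1)
            for _ in range(iters):
                x = x + step * (A @ x - deg * x)   # move toward neighbor average
            return x

        # Ring of 5 nodes, each holding one local sample mean.
        A = np.array([[0,1,0,0,1],[1,0,1,0,0],[0,1,0,1,0],[0,0,1,0,1],[1,0,0,1,0]])
        print(consensus_average([1.0, 2.0, 3.0, 4.0, 5.0], A))  # each entry -> 3.0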

    Distributed primary user identification from imprecise location information

    We study a cognitive radio scenario in which the network of secondary users wishes to identify which primary user, if any, is transmitting. To achieve this, the nodes rely on some form of location information. In our previous work we proposed two fully distributed algorithms for this task, with and without a pre-detection step, using propagation parameters as the only source of location information. In a real distributed deployment, each node must estimate its own position and/or propagation parameters. Hence, in this work we study the effect of uncertainty, or error, in these estimates on the proposed distributed identification algorithms. We show that the pre-detection step significantly increases robustness against uncertainty in nodes' locations.
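
    The robustness question can be mimicked with a toy Monte Carlo experiment; everything below (the path-loss model, noise levels, geometry, and decision rule) is an assumption for illustration, not the authors' model.

        # Toy study: identification accuracy vs. position-estimate error.
        # The node guesses the primary user whose predicted received power
        # best matches the measurement, using its own (noisy) position.
        import numpy as np

        rng = np.random.default_rng(1)
        primaries = np.array([[0.0, 0.0], [100.0, 0.0]])  # assumed PU positions
        node_true = np.array([30.0, 40.0])                # true SU position
        alpha = 2.0                                       # assumed path-loss exponent

        def power_from(pu, rx):
            # Received power (dB) under a log-distance path-loss model.
            return -10 * alpha * np.log10(np.linalg.norm(pu - rx))

        def accuracy(pos_error_std, trials=2000):
            hits = 0
            for _ in range(trials):
                true_pu = rng.integers(2)
                measured = power_from(primaries[true_pu], node_true) + rng.normal(0, 1.0)
                node_est = node_true + rng.normal(0, pos_error_std, size=2)
                preds = np.array([power_from(p, node_est) for p in primaries])
                guess = int(np.argmin(np.abs(preds - measured)))
                hits += (guess == true_pu)
            return hits / trials

        for std in (0.0, 5.0, 20.0):
            print(std, accuracy(std))  # identification degrades as location error grows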

    Location-aided Distributed Primary User Identification in a Cognitive Radio Scenario

    We address a cognitive radio scenario where a number of secondary users perform identification of which primary user, if any, is transmitting, in a distributed way and using limited location information. We propose two fully distributed algorithms: the first is a direct identification scheme; in the second, a distributed sub-optimal detection stage based on a simplified Neyman-Pearson energy detector precedes the identification scheme. Both algorithms are studied analytically in a realistic transmission scenario, and the advantage obtained by detection pre-processing is also verified via simulation. Finally, we give details of their fully distributed implementation via consensus averaging algorithms.
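
    A sketch of a simplified Neyman-Pearson energy detector of the general kind named above; this is an illustrative stand-in, not the paper's exact detector, and the noise model and false-alarm target are assumptions.

        # Flag "primary user present" when the sample energy exceeds a
        # threshold set for a target false-alarm probability.
        import numpy as np

        def energy_detector(samples, noise_var=1.0, z_alpha=1.645):
            n = len(samples)
            energy = np.sum(np.abs(samples) ** 2)
            # Gaussian approximation to the chi-square statistic under
            # "noise only"; z_alpha = 1.645 targets a 5% false-alarm rate.
            threshold = noise_var * (n + z_alpha * np.sqrt(2 * n))
            return energy > threshold

        rng = np.random.default_rng(2)
        noise = rng.normal(0, 1, 1000)              # noise only
        rx = noise + 0.5 * rng.normal(0, 1, 1000)   # noise + weak signal
        print(energy_detector(noise))               # usually False
        print(energy_detector(rx))                  # usually True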

    Distributed static linear Gaussian models using consensus

    Algorithms for distributed agreement are a powerful means for formulating distributed versions of existing centralized algorithms. We present a toolkit for this task and show how it can be used systematically to design fully distributed algorithms for static linear Gaussian models, including principal component analysis, factor analysis, and probabilistic principal component analysis. These algorithms do not rely on a fusion center, require only low-volume local (1-hop neighborhood) communications, and are thus efficient, scalable, and robust. We show how they are also guaranteed to asymptotically converge to the same solution as the corresponding existing centralized algorithms. Finally, we illustrate the functioning of our algorithms on two examples and examine the inherent cost-performance tradeoff.
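
    A hedged sketch of the general recipe, not the paper's exact algorithm: nodes consensus-average their local second-moment matrices, then each eigendecomposes the same agreed-upon matrix, recovering what a centralized PCA would. Equal per-node sample counts and an idealized consensus step are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        d, n_nodes = 3, 4
        data = [rng.normal(size=(50, d)) @ rng.normal(size=(d, d)) for _ in range(n_nodes)]
        covs = np.stack([X.T @ X / len(X) for X in data])   # local moment matrices

        # Idealized consensus: repeated averaging converges to the global mean.
        for _ in range(100):
            covs = 0.5 * covs + 0.5 * covs.mean(axis=0)

        # Compare node 0's principal directions with the centralized solution.
        Xall = np.concatenate(data)
        w_node = np.linalg.eigh(covs[0])[1]
        w_cent = np.linalg.eigh(Xall.T @ Xall / len(Xall))[1]
        print(np.allclose(np.abs(w_node), np.abs(w_cent)))  # True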

    Use of Linear Discriminant Analysis in Song Classification: Modeling Based on Wilco Albums

    The study of music recommender algorithms is a relatively new area of research. Although these algorithms serve a variety of functions, they primarily help advertise and suggest music to users of music streaming services. This thesis explores the use of linear discriminant analysis in music categorization, to serve as a cheaper and simpler content-based recommender algorithm. Linear discriminant analysis was tested by creating linear discriminant functions that classify Wilco’s songs into their respective albums, specifically A.M., Yankee Hotel Foxtrot, and Sky Blue Sky. Four sample songs were chosen from each album, and song data was collected from these samples to create the model. The models were then evaluated for accuracy on the remaining, non-sample songs from the albums; all proved to have an accuracy rate of over 80%. Not being able to write computer code for this algorithm was a limiting factor for testing applicability at a larger scale, but the small-scale model classifies accurately. I expect this accuracy to hold at a larger scale, because the model was tested on very similar music, whereas in practice these models would classify a diverse range of music.
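
    Since writing code was out of scope for the thesis, a minimal sketch of the same idea with scikit-learn is given below; the feature set and all values are hypothetical stand-ins, not the actual song data used in the thesis.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # Rows: songs; columns: hypothetical features (tempo, energy, acousticness).
        X_train = np.array([
            [120, 0.8, 0.2], [125, 0.7, 0.3], [118, 0.9, 0.1], [122, 0.8, 0.2],  # A.M.
            [90, 0.4, 0.6], [95, 0.3, 0.7], [88, 0.5, 0.5], [92, 0.4, 0.6],      # Yankee Hotel Foxtrot
            [105, 0.6, 0.4], [108, 0.5, 0.5], [102, 0.6, 0.3], [106, 0.7, 0.4],  # Sky Blue Sky
        ])
        y_train = ["A.M."] * 4 + ["Yankee Hotel Foxtrot"] * 4 + ["Sky Blue Sky"] * 4

        # Fit linear discriminant functions and classify a held-out song.
        lda = LinearDiscriminantAnalysis()
        lda.fit(X_train, y_train)
        print(lda.predict([[121, 0.85, 0.15]]))  # assigned to the closest album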

    An SSVEP Brain-Computer Interface: A Machine Learning Approach

    A Brain-Computer Interface (BCI) provides a bidirectional communication path for a human to control an external device using brain signals. Among the neurophysiological features used in BCI systems, steady-state visually evoked potentials (SSVEP), natural responses to visual stimulation at specific frequencies, have increasingly drawn attention because of their high temporal resolution and minimal user training, two important parameters in evaluating a BCI system. The performance of a BCI can be improved by a properly selected neurophysiological signal or by the introduction of machine learning techniques. With the help of machine learning methods, a BCI system can adapt to the user automatically. In this work, a machine learning approach is introduced to the design of an SSVEP-based BCI. The following open problems have been explored:
    1. Finding a waveform with a high success rate of eliciting SSVEP. SSVEP belongs to the evoked potentials, which require stimulation. By comparing square-wave, triangle-wave, and sine-wave light signals and their corresponding SSVEP, it was observed that square waves with 50% duty cycle have a significantly higher success rate of eliciting SSVEPs than either sine or triangle stimuli.
    2. The resolution of dual stimuli that elicit consistent SSVEP. Previous studies show that the frequency bandwidth of an SSVEP stimulus is limited, which affects the performance of the whole system. A dual stimulus, the overlay of two distinct single-frequency stimuli, can potentially expand the number of valid SSVEP stimuli. However, the improvement depends on the resolution of the dual stimuli. Our experimental results show that 4 Hz is the minimum difference between the two frequencies of a dual stimulus that elicits consistent SSVEP.
    3. Stimuli and color-space decomposition. It is known in the literature that although low-frequency stimuli (<30 Hz) elicit strong SSVEP, they may cause dizziness. In this work, we explored the design of a visually friendly stimulus from the perspective of color-space decomposition. In particular, a stimulus was designed with a fixed luminance component and variations in the other two dimensions of the HSL (Hue, Saturation, Luminance) color space. Our results show that the change of color alone evokes SSVEP, and the embedded frequencies in stimuli affect the harmonics. Also, subjects reported that a fixed luminance eases the feeling of dizziness caused by low-frequency flashing objects.
    4. Machine learning techniques have been applied to make a BCI adaptive to individuals. An SSVEP-based BCI brings new requirements to machine learning. Because of the non-stationarity of the brain signal, a classifier should adapt in real time to the time-varying statistical characteristics of a single user's brain wave. In this work, the potential function classifier is proposed to address this requirement, and achieves 38.2 bits/min on offline EEG data.
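
    A hedged sketch of the simplest SSVEP detection idea underlying such systems: score each candidate stimulus frequency by spectral power at the fundamental and one harmonic, then pick the strongest. The sampling rate, window length, and candidate frequencies are assumptions, and this is a toy stand-in for the classifiers discussed above.

        import numpy as np

        fs = 250.0                                   # assumed EEG sampling rate (Hz)
        t = np.arange(0, 4.0, 1.0 / fs)              # 4 s analysis window
        rng = np.random.default_rng(4)
        # Synthetic response to a 12 Hz stimulus: fundamental + harmonic + noise.
        eeg = (np.sin(2 * np.pi * 12.0 * t) + 0.5 * np.sin(2 * np.pi * 24.0 * t)
               + 2.0 * rng.normal(size=t.size))

        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

        def band_power(f, width=0.5):
            # Total power within +/- width Hz of the target frequency.
            return spectrum[np.abs(freqs - f) <= width].sum()

        candidates = [8.0, 10.0, 12.0, 15.0]         # hypothetical stimulus set
        scores = {f: band_power(f) + band_power(2 * f) for f in candidates}
        print(max(scores, key=scores.get))           # expected: 12.0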

    Computer-aided diagnosis of tuberculosis in paediatric chest X-rays using local textural analysis

    This report presents a computerised tool that analyses the appearance of the lung fields in paediatric chest X-rays to detect the presence of tuberculosis. The computer-aided diagnosis (CAD) tool consists of four phases: 1) lung field segmentation; 2) lung field subdivision; 3) feature extraction; and 4) classification. Lung field segmentation is performed using a semi-automatic implementation of the active shape model algorithm. Two approaches to subdividing the lung fields into regions of interest are compared. The first divides each lung field into 21 overlapping regions of varying sizes, for a total of 42 regions per image; this is called the big region approach. The second divides the lung fields into a large number of overlapping circular regions of interest with a radius of 32 pixels, placed on an 8 x 8 pixel grid; this is called the circular region approach. Textural features are extracted from each region using the moments of responses to a multiscale bank of Gaussian filters. Additional positional features are added to the circular regions.
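
    The feature-extraction step might be sketched as below, assuming a filter bank of Gaussians and their first derivatives at a few scales, with first- and second-moment summaries per circular region; the report's exact filters and moments may differ.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(5)
        image = rng.random((256, 256))               # stand-in for a chest X-ray

        def circular_mask(shape, center, radius=32):
            yy, xx = np.ogrid[:shape[0], :shape[1]]
            return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

        def region_features(img, center, scales=(1, 2, 4, 8)):
            mask = circular_mask(img.shape, center)
            feats = []
            for s in scales:
                for order in ((0, 0), (0, 1), (1, 0)):   # Gaussian + derivatives
                    resp = ndimage.gaussian_filter(img, sigma=s, order=order)
                    vals = resp[mask]
                    feats += [vals.mean(), vals.std()]   # first two moments
            return np.array(feats)

        # One circular region of radius 32; regions would tile an 8 x 8 grid.
        print(region_features(image, center=(64, 64)).shape)   # (24,)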

    Visual Scene Understanding by Deep Fisher Discriminant Learning

    Modern deep learning has recently revolutionized several fields of classic machine learning and computer vision, such as scene understanding, natural language processing, and machine translation. The substitution of hand-crafted features with automatic feature learning provides an excellent opportunity for gaining an in-depth understanding of large-scale data statistics. Deep neural networks generally train models with huge numbers of parameters, facilitating efficient search over highly non-convex objective functions for optimal and sub-optimal solutions. On the other hand, Fisher discriminant analysis has been widely employed to impose class discrepancy in segmentation, classification, and recognition tasks. This thesis bridges contemporary deep learning and classic discriminant analysis to address several important challenges in visual scene understanding: semantic segmentation, texture classification, and object recognition. The aim is to accomplish these tasks in new high-dimensional spaces covered by the statistical information of the datasets under study. Inspired by a new formulation of Fisher discriminant analysis, this thesis introduces novel arrangements of well-known deep learning architectures to achieve better performance on the targeted tasks. The theoretical justifications are based upon a large body of experimental work, which consolidates the contribution of the proposed idea, Deep Fisher Discriminant Learning, to several challenges in visual scene understanding.
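
    For illustration, the classic Fisher criterion that such methods build on can be computed as below: the ratio of between-class to within-class scatter of learned features, which a deep model would drive up (in some variant) during training. The scalar trace-ratio summary and the toy data are assumptions, not the thesis's formulation.

        import numpy as np

        def fisher_ratio(features, labels):
            mu = features.mean(axis=0)
            sb = np.zeros((features.shape[1],) * 2)   # between-class scatter
            sw = np.zeros_like(sb)                    # within-class scatter
            for c in np.unique(labels):
                Xc = features[labels == c]
                d = (Xc.mean(axis=0) - mu)[:, None]
                sb += len(Xc) * (d @ d.T)
                sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
            return np.trace(sb) / np.trace(sw)        # scalar summary (an assumption)

        rng = np.random.default_rng(6)
        tight = np.concatenate([rng.normal(c, 0.1, (50, 8)) for c in (0, 5)])  # well separated
        loose = np.concatenate([rng.normal(c, 3.0, (50, 8)) for c in (0, 5)])  # overlapping
        labels = np.array([0] * 50 + [1] * 50)
        print(fisher_ratio(tight, labels) > fisher_ratio(loose, labels))       # True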