
    Parallel, Self-Organizing, Hierarchical Neural Networks

    A new neural network architecture called the parallel self-organizing hierarchical neural network (PSHNN) is discussed. The PSHNN involves a number of stages in which each stage can be a particular neural network (SNN). At the end of each SNN, error detection is carried out and a number of input vectors are rejected. Between two SNNs there is a nonlinear transformation of the input vectors rejected by the first SNN. The PSHNN has many desirable properties, such as optimized system complexity in the sense of a self-organized, minimized number of stages; high classification accuracy; minimized learning and recall times; and a truly parallel architecture in which all SNNs operate simultaneously without waiting for data from each other during testing. Experiments performed in comparison to multilayered networks with backpropagation training indicated the superiority of the PSHNN.
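    As a rough illustration of the staging idea described above (not the authors' implementation), the sketch below chains stages that each classify, reject low-confidence inputs, and hand the rejected vectors, after a fixed nonlinear re-mapping, to the next stage; the class name, the tanh re-mapping, the rejection threshold, and the sklearn-style predict_proba interface are assumptions made for the example.

        import numpy as np

        class StageNN:
            """One stage: any trained classifier exposing a confidence score
            (an sklearn-style predict_proba is assumed here for illustration)."""
            def __init__(self, model, reject_threshold=0.8):
                self.model = model
                self.reject_threshold = reject_threshold

            def classify(self, x):
                probs = self.model.predict_proba(x.reshape(1, -1))[0]
                label, conf = int(np.argmax(probs)), float(np.max(probs))
                accepted = conf >= self.reject_threshold   # error detection / rejection
                return label, accepted

        def nonlinear_transform(x):
            # fixed pointwise nonlinearity applied to vectors a stage rejects
            return np.tanh(x)

        def pshnn_predict(stages, x):
            # at recall time the stages can run simultaneously; sequencing them here
            # just makes the accept/reject/transform flow explicit
            label = None
            for stage in stages:
                label, accepted = stage.classify(x)
                if accepted:
                    return label
                x = nonlinear_transform(x)
            return label                                   # fall back to the last stage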

    Fast Algorithms for the Real Discrete Fourier Transform

    Fast algorithms for the computation of the real discrete Fourier transform (RDFT) are discussed. Implementations based on the RDFT are always efficient, whereas implementations based on the DFT are efficient only when the signals to be processed are complex. The fast real Fourier transform (FRFT) algorithms discussed are the radix-2 decimation-in-time (DIT), the radix-2 decimation-in-frequency (DIF), the radix-4 DIT, the split-radix DIT, the split-radix DIF, the prime-factor, the Rader prime, and the Winograd FRFT algorithms.
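    The following small check (an illustration, not one of the FRFT algorithms listed above) shows the redundancy these algorithms exploit: for a real input the DFT is conjugate-symmetric, so only N/2 + 1 complex bins carry information, which is exactly what numpy's rfft returns.

        import numpy as np

        N = 8
        x = np.random.randn(N)        # real input signal

        full = np.fft.fft(x)          # length-N complex DFT
        half = np.fft.rfft(x)         # N/2 + 1 "real DFT" outputs

        # rfft keeps the non-redundant half of the spectrum ...
        assert np.allclose(half, full[:N // 2 + 1])
        # ... because the rest satisfies X[N - k] = conj(X[k]) for real x
        assert np.allclose(full[N // 2 + 1:], np.conj(full[1:N // 2][::-1]))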

    PRS43 RELIABILITY AND VALIDITY OF THE SMOKER COMPLAINT SCALE


    An Iterative Interlacing Approach for Synthesis of Computer-Generated Holograms

    A new approach to optimizing computer-generated holograms (CGHs) is discussed. The approach can be summarized most generally as hierarchically designing a number of holograms that add up coherently to a single desired reconstruction. In the case of binary holograms, this approach results in the interlacing (IT) and the iterative interlacing (IIT) techniques. In the IT technique, a number of subholograms are designed and interlaced together to generate the total binary hologram. The first subhologram is designed to reconstruct the desired image. The succeeding subholograms are designed to correct the remaining error image. In the IIT technique, the remaining error image after the last subhologram is circulated back to the first subhologram, and the process is continued for a number of sweeps until convergence. The IT and IIT techniques can be used together with most CGH synthesis algorithms and result in a substantial reduction in reconstruction error as well as increased speed of convergence in the case of iterative algorithms.
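    The toy numerical sketch below mimics the iterative interlacing loop described above; the column-interlaced pixel subsets and the thresholded inverse-FFT binarization are naive stand-ins for a real CGH synthesis step, so the sketch only illustrates the error-circulation structure of the IIT, not the paper's actual design algorithm.

        import numpy as np

        def iit_binary_hologram(target, n_sub=4, sweeps=10):
            # split the hologram plane into n_sub interlaced column subsets; in each
            # sweep, redesign every subset to reconstruct the error left by the others
            N = target.shape[0]
            cols = np.arange(N)
            masks = [np.tile(cols % n_sub == k, (N, 1)) for k in range(n_sub)]
            holo = np.zeros((N, N))                                  # binary hologram in {0, 1}
            for _ in range(sweeps):
                for mask in masks:
                    others = holo * ~mask                            # other subholograms stay fixed
                    error = target - np.abs(np.fft.fft2(others))     # remaining error image
                    update = np.real(np.fft.ifft2(error)) > 0        # naive binary design step
                    holo = np.where(mask, update.astype(float), holo)
            return holo

        target = np.zeros((64, 64)); target[28:36, 28:36] = 1.0      # toy desired reconstruction
        holo = iit_binary_hologram(target)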

    VLSI Implementation of Discrete Cosine Transform Based on the Shared-Multiplier Algorithm

    In this paper, a new algorithm for the discrete cosine transform (DCT) is proposed. This algorithm is especially efficient for VLSI implementation because each multiplier in the 1-D DCT is shared by two constants rather than one. This greatly reduces the chip area while retaining the high-speed characteristics. Based on this algorithm, we have developed the corresponding bit-parallel, fully pipelined architecture for the size-8 DCT. The core area of the chip is only 8.6 mm x 8.5 mm, using 1.2 um double-metal single-poly CMOS technology. The chip is simulated to operate at a maximum speed of 100 MHz, which far exceeds the speed requirement of the HDTV system (70 MHz).
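    For reference, the sketch below writes the orthonormal size-8 DCT-II, the transform such a chip computes, as an explicit cosine matrix so the fixed constants that a shared-multiplier datapath would reuse are visible; it is a plain software reference, not a model of the hardware datapath.

        import numpy as np

        N = 8
        n, k = np.meshgrid(np.arange(N), np.arange(N))              # sample and frequency indices
        C = np.cos((2 * n + 1) * k * np.pi / (2 * N))               # raw cosine kernel
        C *= np.where(k == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))   # orthonormal scaling

        x = np.arange(N, dtype=float)
        X = C @ x                                                   # 1-D DCT of eight samples

        assert np.allclose(C @ C.T, np.eye(N))                      # the DCT-II matrix is orthonormal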

    Advances in Nonlinear Matched Filtering

    Symmetric nonlinear matched filters (SNMFs) involve the transformation of the signal spectrum and the filter transfer function through pointwise nonlinearities before they are multiplied in the transform domain. The resulting system is analogous to a three-layer neural net. The experimental and theoretical results discussed indicate that SNMFs hold considerable potential for achieving high power of discrimination, high resolution, and large SNR. The statistical analysis of a particular SNMF in the two-class problem indicates that its performance coefficient is about four times larger than that of the classical matched filter. In terms of resolving closely spaced signals, there seems to be no limit to the achievable resolution. However, intermodulation noise has to be carefully monitored.
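    A minimal sketch of the idea, assuming a power-law magnitude nonlinearity (the particular nonlinearity analyzed in the paper may differ): both the signal spectrum and the reference spectrum pass through the same pointwise nonlinearity before being multiplied in the Fourier domain, and the classical matched filter is recovered when the exponent is 1.

        import numpy as np

        def pointwise_nonlinearity(spectrum, k=0.3):
            # keep the phase, raise the magnitude to the k-th power (k = 1 gives the classical case)
            return np.abs(spectrum) ** k * np.exp(1j * np.angle(spectrum))

        def snmf_correlate(signal, reference, k=0.3):
            n = len(signal) + len(reference) - 1
            S = pointwise_nonlinearity(np.fft.fft(signal, n), k)
            R = pointwise_nonlinearity(np.fft.fft(reference, n), k)
            return np.real(np.fft.ifft(S * np.conj(R)))     # correlation plane

        sig = np.random.randn(256)
        out = snmf_correlate(sig, sig[100:132])              # reference is a segment of the signal
        print(int(np.argmax(out)))                           # the correlation peak locates the reference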

    Parallel Multistage Wide Neural Network

    Deep learning networks have achieved great success in many areas, such as large-scale image processing. They usually need large computing resources and long training times, and they process easy and hard samples in the same way, which is inefficient. Another undesirable problem is that such a network generally needs to be retrained to learn new incoming data. Efforts have been made to reduce the computing resources and realize incremental learning by adjusting architectures, such as scalable-effort classifiers, multi-grained cascade forest (gcForest), conditional deep learning (CDL), Tree-CNN, decision tree structure with knowledge transfer (ERDK), and forest of decision trees with RBF networks and knowledge transfer (FDRK). In this paper, a parallel multistage wide neural network (PMWNN) is presented. It is composed of multiple stages that classify different parts of the data. First, a wide radial basis function (WRBF) network is designed to learn features efficiently in the wide direction. It can work on both vector and image instances and can be trained quickly in one epoch using subsampling and least squares (LS). Second, successive stages of WRBF networks are combined to make up the PMWNN. Each stage focuses on the misclassified samples of the previous stage. The network can stop growing at an early stage, and a stage can be added incrementally when new training data are acquired. Finally, the stages of the PMWNN can be tested in parallel, thus speeding up the testing process. To sum up, the proposed PMWNN has the advantages of (1) fast training, (2) optimized computing resources, (3) incremental learning, and (4) parallel testing with stages. Experimental results with MNIST, a number of large hyperspectral remote sensing data sets, CVL single digits, SVHN, and audio signal data sets show that the WRBF and PMWNN have competitive accuracy compared with learning models such as stacked autoencoders, deep belief nets, SVM, MLP, LeNet-5, RBF networks, the recently proposed CDL, broad learning, gcForest, etc. In fact, the PMWNN often has the best classification performance.
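    A minimal sketch of the single-stage WRBF training described above, assuming integer labels 0..C-1; the center count, the Gaussian width, and the one-hot least-squares targets are illustrative choices, not the paper's exact settings.

        import numpy as np

        def train_wrbf(X, y, n_centers=200, gamma=1.0, rng=None):
            # centers are a random subsample of the training set; output weights come
            # from a single least-squares solve (one "epoch"), as described above
            rng = np.random.default_rng(0) if rng is None else rng
            idx = rng.choice(len(X), size=min(n_centers, len(X)), replace=False)
            centers = X[idx]
            H = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))   # RBF activations
            T = np.eye(int(y.max()) + 1)[y]                                       # one-hot targets
            W, *_ = np.linalg.lstsq(H, T, rcond=None)
            return centers, W

        def predict_wrbf(X, centers, W, gamma=1.0):
            H = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
            return (H @ W).argmax(axis=1)

    Successive stages of this kind, each trained only on the samples the previous stage misclassifies, would then make up the PMWNN cascade, with new stages appended as new data arrives and all stages evaluated in parallel at test time.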

    Characterization of proliferative, glial and angiogenic responses after a CoCl2-induced injury of photoreceptor cells in the adult zebrafish retina

    The adult zebrafish is considered a useful model for studying mechanisms involved in tissue growth and regeneration. We have characterized cytotoxic damage to the retina of adult zebrafish caused by the injection of cobalt chloride (CoCl2) into the vitreous cavity. The CoCl2 concentration we used primarily caused injury to photoreceptors. We observed the complete disappearance of cones, followed by rods, across the retina surface from 28 to 96 hr after CoCl2 injury. The loss of 30% of bipolar cells was also observed by 50 hr after lesion (hpl). CoCl2 injury provoked a strong induction of the proliferative activity of multipotent Müller glia and derived progenitors. The effect of CoCl2 on retina cells was significantly reduced by treatment with glutamate ionotropic receptor antagonists. Cone photoreceptor regeneration occurred 25 days after injury. Moreover, a single dose of CoCl2 induced vascular damage and regeneration, whereas three injections of CoCl2 administered weekly provoked neovascular-like changes 20 days after injury. CoCl2 injury also caused microglial reactivity in the optic disc, retina periphery and fibre layer. CoCl2-induced damage enhanced pluripotency and proneural transcription factor gene expression in the mature retina 72 hpl. Tumour necrosis factor alpha, vascular endothelial growth factor (VEGF) and VEGF receptor mRNA levels were also significantly enhanced by 72 hpl. The injury paradigm we have described in this work may be useful for the discovery of signalling molecules and pathways that participate in the regenerative response and it may serve as a model to screen for compounds that could potentially treat aberrant angiogenesis.

    Parking as a loss leader at shopping malls

    This paper investigates the pricing of malls in an environment where shoppers choose between a car and public transportation in getting to a suburban mall. The mall implicitly engages in mixed bundling; it sells goods bundled with parking to shoppers who come by car, and only goods to shoppers who come by public transportation. There are external costs of discomfort in public transportation due to crowdedness. Thus, shoppers using public transportation deter each other. The mall internalizes these external costs, much like a policy maker. To do so, it raises the sales price of the good and sets a parking fee less than parking's marginal cost. Hence, parking is always a loss leader. Surprisingly, this pricing scheme is not necessarily distortionary. © 2016 Elsevier Ltd

    Border Feature Detection and Adaptation Algorithm for the Classification of Remote Sensing Images

    Various types of sensors collect very large amounts of data from the earth's surface. The characteristics of the data are related to the sensor type and its imaging geometry, so sensor types affect the processing techniques used in remote sensing. In general, the image processing techniques used in remote sensing are valid for multispectral data, which lie in a relatively low-dimensional feature space. Advanced algorithms are therefore needed for hyperspectral data, which have at least 100-200 features (attributes/bands). Additionally, in supervised learning the training process is very important and affects the generalization capability of a classifier. A sufficient number of training samples is required for proper classification, but in remote sensing collecting training samples is difficult and costly, so only a limited number is often available in practice. Conventional statistical classifiers assume that the data follow a specific distribution; for real-world data such assumptions may not be valid, and proper parameter estimation is difficult, especially for hyperspectral data. Normally, when the number of bands used in the classification process increases, more precise and detailed class determination is expected. In a high-dimensional feature space, however, when a new feature is added to the data the classification error decreases, but at the same time the bias of the classification error increases. If the increase in the bias is larger than the reduction in classification error, then using the additional feature degrades the performance of the decision algorithm. This phenomenon is called the Hughes effect, and it may be much more harmful with hyperspectral data than with multispectral data.

    Our motivation in this study is to overcome some of these general classification problems by developing a classification algorithm that is based directly on the available training data rather than on an assumed underlying statistical distribution. The proposed algorithm, Border Feature Detection and Adaptation (BFDA), uses border feature vectors near the decision boundaries, which are adapted to produce a precise partitioning of the feature space according to the maximum-margin principle. The BFDA algorithm, well suited to the classification of remote sensing images, is built on a new approach to choosing and adapting border feature vectors from the training data. This approach is especially effective when the information source has a limited number of data samples and the distribution of the data is not necessarily Gaussian. Training samples close to class borders are more prone to cause misclassification and are therefore significant feature vectors for reducing classification errors. The proposed algorithm searches for such error-causing training samples in a special way and adapts them to generate border feature vectors that are used as labeled feature vectors for classification.

    The BFDA algorithm can be considered in two parts. The first part defines initial border feature vectors using class centers and misclassified training vectors; with this approach, a manageable number of border feature vectors is obtained. The second part adapts the border feature vectors using a technique that has some similarity to the learning vector quantization (LVQ) algorithm. In this adaptation process, the border feature vectors are adaptively modified to maintain proper distances between them and the class centers, and to increase the margins between neighboring border features with different class labels; the class centers are also adapted during this process. Subsequent classification is based on the labeled border feature vectors and class centers. With this approach, a proper number of feature vectors is generated for each class.

    In supervised learning, the training process should be unbiased to reach more accurate results in testing. In the BFDA, accuracy depends on the initialization of the border feature vectors and the input ordering of the training samples; these dependencies make the classifier a biased decision maker. A consensus strategy can be applied with cross-validation to reduce these dependencies. In this study, the main performance analyses and comparisons were carried out on the AVIRIS data. Using the BFDA, we obtained satisfactory results with both multispectral and hyperspectral data sets. The BFDA is also robust against the Hughes effect. Additionally, rare class members are classified more accurately by the BFDA than by conventional statistical methods.

    Keywords: Remote sensing, hyperspectral data classification, consensual classification.

    Direct application of conventional image processing techniques to remote sensing may be valid only for multispectral data. Advanced algorithms are needed for the analysis of hyperspectral data, whose feature vectors have dimensions on the order of 100-200. Moreover, the generally limited number of training samples in remote sensing restricts the use of parametric classifiers, especially for hyperspectral data with high-dimensional feature vectors. The aim of this study is to overcome the general remote sensing classification problems summarized above by developing an algorithm that depends only on the available training samples rather than on a statistical distribution. The proposed Border Feature Detection and Adaptation (BFDA) algorithm uses border feature vectors close to the decision surfaces; these vectors are adapted to satisfy the maximum-margin principle, yielding a correct partitioning of the feature space. The BFDA algorithm consists of two parts. In the first stage, the initial values of the border feature vectors are assigned, in a manageable number, from suitable training set elements. The subsequent adaptation process carries out the learning and drives the border features to their final values. Classification is then performed according to the nearest-neighbor (1-NN) rule with respect to the final border feature vectors. In addition, because the BFDA produces acceptably different decision boundaries on each run, depending on the initialization of the border feature vectors and the order in which the training samples are presented, it is well suited for use in consensus structures. Thus, by combining the decisions of many BFDA runs with appropriate rules, decisions considerably more accurate than those of a single classifier can be obtained.

    Keywords: Remote sensing, hyperspectral data classification, consensus.
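    A compact sketch of the two BFDA phases summarized above, assuming integer class labels 0..C-1; the midpoint initialization of the border features and the attract/repel update follow the LVQ-like spirit of the description rather than the exact published equations.

        import numpy as np

        def bfda_train(X, y, lr=0.05, epochs=20):
            # assumes integer labels 0..C-1 so that class c's center sits at centers[c]
            classes = np.unique(y)
            centers = np.array([X[y == c].mean(axis=0) for c in classes])

            # Phase 1: initial border features from misclassified training vectors,
            # pulled halfway toward their own class center (illustrative choice)
            nearest = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            wrong = classes[nearest] != y
            borders = (X[wrong] + centers[y[wrong]]) / 2
            blabels = y[wrong]

            # Phase 2: LVQ-like adaptation of class centers and border features
            protos = np.vstack([centers, borders])
            plabels = np.concatenate([classes, blabels])
            for _ in range(epochs):
                for xi, yi in zip(X, y):
                    j = np.argmin(((protos - xi) ** 2).sum(-1))
                    step = lr if plabels[j] == yi else -lr     # attract same class, repel others
                    protos[j] += step * (xi - protos[j])
            return protos, plabels

        def bfda_predict(X, protos, plabels):
            # 1-NN decision over the adapted border features and class centers
            return plabels[np.argmin(((X[:, None] - protos[None]) ** 2).sum(-1), axis=1)]

    Several such classifiers, trained with different initializations and sample orderings, could then be combined under the consensus strategy mentioned above.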