
    Automatic Classification of Digital Modulations

    This dissertation deals with a new method for digital modulation recognition. The introduction summarizes the history and present state of the topic and describes existing methods together with their characteristic properties; recognition by means of artificial neural networks is covered in more detail. After the objectives of the dissertation are set out, the digital modulations chosen for recognition are described theoretically: FSK, MSK, BPSK, QPSK, and QAM-16, the modulations most commonly used in modern communication systems. The proposed method is based on the analysis of magnitude and phase spectrograms of the modulated signals. Histograms of the spectrograms are used to examine their properties: they provide information on the number of carrier frequencies in the signal, which serves to recognize FSK and MSK, and on the number of phase states, from which BPSK, QPSK, and QAM-16 are classified. Spectrograms in which the characteristic features of the modulations are visible are obtained with the segment length equal to the symbol length. It was found that, with a known symbol length, the modulation can be recognized correctly at a signal-to-noise ratio of at least 0 dB; the symbol length must therefore be detected before the spectrograms are computed. Four methods were designed for this purpose: the autocorrelation function, cepstrum analysis, the wavelet transform, and LPC coefficients. These methods were implemented and analyzed with signals disturbed by white Gaussian noise and phase noise, and with signals passed through a multipath channel with reflections and fading. Detection by means of cepstrum analysis proved the most suitable and reliable. Finally, the new recognition method was verified with signals passed through a channel whose properties are close to those of a real channel.
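
    As a rough illustration of the two steps described above, the Python sketch below first estimates the symbol length from the real cepstrum and then builds a phase-state histogram from an STFT whose segment length equals that symbol length. The parameter values and peak-picking details are assumptions for clarity, not the thesis's exact settings.

```python
# Illustrative sketch only: cepstrum-based symbol-length detection and
# a phase-state histogram, following the approach summarized above.
# Thresholds and windowing choices are assumptions, not the thesis's.
import numpy as np
from scipy.signal import stft

def symbol_length_cepstrum(signal, min_lag=8):
    """Estimate the symbol length in samples from the real cepstrum;
    the symbol period shows up as a peak at the matching quefrency."""
    spectrum = np.abs(np.fft.fft(signal))
    cepstrum = np.abs(np.fft.ifft(np.log(spectrum + 1e-12)))
    # Skip the low-quefrency region dominated by the spectral envelope.
    return int(np.argmax(cepstrum[min_lag:len(cepstrum) // 2])) + min_lag

def phase_state_histogram(signal, fs, symbol_len, n_bins=64):
    """Histogram of per-segment phase, with the STFT segment length
    equal to the symbol length, as the recognition method requires."""
    _, _, Z = stft(signal, fs=fs, window="boxcar",
                   nperseg=symbol_len, noverlap=0)
    # The phase of the strongest bin in each segment approximates the
    # transmitted symbol's phase state.
    peaks = np.abs(Z).argmax(axis=0)
    phases = np.angle(Z[peaks, np.arange(Z.shape[1])])
    return np.histogram(phases, bins=n_bins, range=(-np.pi, np.pi))

# Counting distinct peaks in the histogram suggests the number of phase
# states: roughly 2 for BPSK, 4 for QPSK, and more for QAM-16.
```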

    Safe Crossover of Neural Networks Through Neuron Alignment

    One of the main and largely unexplored challenges in evolving the weights of neural networks using genetic algorithms is to find a sensible crossover operation between parent networks. Indeed, naive crossover leads to functionally damaged offspring that do not retain information from the parents. This is because neural networks are invariant to permutations of neurons, giving rise to multiple ways of representing the same solution. This is often referred to as the competing conventions problem. In this paper, we propose a two-step safe crossover (SC) operator. First, the neurons of the parents are functionally aligned by computing how well they correlate, and only then are the parents recombined. We compare two ways of measuring relationships between neurons: Pairwise Correlation (PwC) and Canonical Correlation Analysis (CCA). We test our safe crossover operators (SC-PwC and SC-CCA) on MNIST and CIFAR-10 by performing arithmetic crossover on the weights of feed-forward neural network pairs. We show that they effectively transmit information from parents to offspring and significantly improve upon naive crossover. Our method is computationally fast, can serve as a way to explore the fitness landscape more efficiently, and makes safe crossover a potentially promising operator in future neuroevolution research and applications.
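
    A minimal sketch of the two-step idea, assuming a single hidden layer and the pairwise-correlation variant (SC-PwC); the probe batch, activation function, and sign handling are simplifications, not the paper's exact procedure:

```python
# Sketch of safe crossover: align parent 2's hidden neurons to parent
# 1's by activation correlation, then do arithmetic crossover.
import numpy as np
from scipy.optimize import linear_sum_assignment

def safe_crossover_pwc(W1_in, W1_out, W2_in, W2_out, X):
    """W*_in: (n_inputs, n_hidden), W*_out: (n_hidden, n_outputs).
    X is a probe batch of inputs used to measure hidden activations."""
    H1 = np.tanh(X @ W1_in)          # parent 1 hidden activations
    H2 = np.tanh(X @ W2_in)          # parent 2 hidden activations
    n = H1.shape[1]
    # Pairwise correlation between every parent-1 / parent-2 neuron.
    C = np.corrcoef(H1.T, H2.T)[:n, n:]
    # Best one-to-one matching (sign flips ignored for brevity).
    _, perm = linear_sum_assignment(-np.abs(C))
    # Permute parent 2 so functionally similar neurons line up.
    W2_in_a, W2_out_a = W2_in[:, perm], W2_out[perm, :]
    # Arithmetic crossover on the aligned weights.
    return 0.5 * (W1_in + W2_in_a), 0.5 * (W1_out + W2_out_a)
```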

    Evolving neural networks for static single-position automated trading

    This paper presents an approach to single-position, intraday automated trading based on a neurogenetic algorithm. An artificial neural network is evolved to provide trading signals to a simple automated trading agent. The neural network uses open, high, low, and close quotes of the selected financial instrument from the previous day, as well as a selection of the most popular technical indicators, to decide whether to take a single long or short position at market open. The position is then closed as soon as a given profit target is met or at market close. Experimental results indicate that, despite its simplicity in both input data and trading strategy, such an approach to automated trading may yield significant returns.
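
    The trading rule itself is simple enough to state directly. Below is a hedged sketch of the single-position logic, with the signal thresholds and profit target chosen for illustration rather than taken from the paper:

```python
# Illustrative single-position intraday rule: enter at the open on the
# network's signal, exit at a profit target or at the close.
def trade_day(signal, open_price, intraday_prices, close_price,
              profit_target=0.01, long_thr=0.5, short_thr=-0.5):
    """signal: network output; positive -> long, negative -> short.
    Returns the day's return under the stated rule."""
    if signal > long_thr:
        direction = 1      # long position at market open
    elif signal < short_thr:
        direction = -1     # short position at market open
    else:
        return 0.0         # stay out of the market today
    for p in intraday_prices:
        ret = direction * (p - open_price) / open_price
        if ret >= profit_target:
            return ret     # profit target hit: close the position
    # Otherwise the position is closed at market close.
    return direction * (close_price - open_price) / open_price
```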

    Barry Smith an sich

    Festschrift in honor of Barry Smith on the occasion of his 65th birthday. Published as issue 4:4 of the journal Cosmos + Taxis: Studies in Emergent Order and Organization. Includes contributions by Wolfgang Grassl, Nicola Guarino, John T. Kearns, Rudolf Lüthe, Luc Schneider, Peter Simons, Wojciech Żełaniec, and Jan Woleński.

    The 1995 Goddard Conference on Space Applications of Artificial Intelligence and Emerging Information Technologies

    This publication comprises the papers presented at the 1995 Goddard Conference on Space Applications of Artificial Intelligence and Emerging Information Technologies held at the NASA/Goddard Space Flight Center, Greenbelt, Maryland, on May 9-11, 1995. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed.

    Virginia Dental Journal (Vol. 78, no. 2, 2001)


    The 1992 Goddard Conference on Space Applications of Artificial Intelligence

    The purpose of this conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers fall into the following areas: planning and scheduling, control, fault monitoring/diagnosis and recovery, information management, tools, neural networks, and miscellaneous applications.

    Bridging the Geospatial Education-Workforce Divide: A Case Study on How Higher Education Can Address the Emerging Geospatial Drivers and Trends of the Intelligent Web Mapping Era

    The purpose of this exploratory collective case study is to discover how geospatial education can meet the geospatial workforce needs of the Commonwealth of Virginia in the emerging intelligent web mapping era. Geospatial education uses geographic information systems (GIS) to enable student learning by increasing in-depth spatial analysis and meaning using geotechnology tools (Baker & White, 2003). Bandura's (1977) self-efficacy theory and the geographic concept of spatial thinking form an integrated theoretical framework of spatial cognition for this study. Data collection included in-depth interviews of twelve geospatial stakeholders, documentation collection, and supporting Q methodology to determine the viewpoints of a total of 41 geospatial stakeholders. Q methodology is a data-collection technique that, when used as a qualitative method, has participants sort statements to reveal their preferences. Data analysis strategies included cross-case synthesis, direct interpretation, generalizations, and a correlation matrix to show similarities in participants' preferences. The results revealed four collaborative perceptions of the stakeholders, forming four themes: social education, early technology adoption, data collaboration, and urban fundamentals. Four strategies were identified for higher education to prepare students for the emerging geospatial workforce trends: teach fundamentals, develop agile faculty and curriculum, use an interdisciplinary approach, and collaborate. These strategies reflect the perceptions of stakeholders in this study on how higher education can meet the emerging drivers and trends of the geospatial workforce.
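
    For the quantitative step, a correlation matrix over participants' Q-sorts is straightforward to compute. The sketch below uses made-up sorts purely to illustrate the idea; clusters of high correlation correspond to shared viewpoints such as the four themes found here:

```python
# Illustrative Q-methodology step: correlate participants' Q-sorts to
# find shared viewpoints. The sort data below is invented for clarity.
import numpy as np

# Each row is one participant's Q-sort: a forced ranking of statements.
q_sorts = np.array([
    [ 2,  1,  0, -1, -2],   # participant A
    [ 2,  0,  1, -2, -1],   # participant B (similar to A)
    [-2, -1,  0,  1,  2],   # participant C (opposing viewpoint)
])
# Pearson correlation between participants; high off-diagonal values
# indicate participants who share a viewpoint.
corr = np.corrcoef(q_sorts)
print(np.round(corr, 2))
```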

    Optimal sensor placement for sewer capacity risk management

    Complex linear assets, such as those found in transportation and utilities, are vital to economies, and in some cases, to public health. Wastewater collection systems in the United States are vital to both. Yet effective approaches to remediating failures in these systems remain an unresolved shortfall for system operators. This shortfall is evident in the estimated 850 billion gallons of untreated sewage that escapes combined sewer pipes each year (US EPA 2004a) and the estimated 40,000 sanitary sewer overflows and 400,000 backups of untreated sewage into basements (US EPA 2001). Failures in wastewater collection systems can be prevented if they can be detected in time to apply intervention strategies such as pipe maintenance, repair, or rehabilitation. This is the essence of a risk management process. The International Council on Systems Engineering recommends that risks be prioritized as a function of severity and occurrence and that criteria be established for acceptable and unacceptable risks (INCOSE 2007). A significant impediment to applying generally accepted risk models to wastewater collection systems is the difficulty of quantifying risk likelihoods. These difficulties stem from the size and complexity of the systems, the lack of data and statistics characterizing the distribution of risk, the high cost of evaluating even a small number of components, and the lack of methods to quantify risk. This research investigates new methods to assess risk likelihood of failure through a novel approach to placement of sensors in wastewater collection systems. The hypothesis is that iterative movement of water level sensors, directed by a specialized metaheuristic search technique, can improve the efficiency of discovering locations of unacceptable risk. An agent-based simulation is constructed to validate the performance of this technique along with testing its sensitivity to varying environments. The results demonstrated that a multi-phase search strategy, with a varying number of sensors deployed in each phase, could efficiently discover locations of unacceptable risk that could be managed via a perpetual monitoring, analysis, and remediation process. A number of promising, well-defined future research opportunities also emerged from this research.
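
    As a rough sketch of the iterative sensor-relocation idea, the following uses a simple greedy move rule as a stand-in for the specialized metaheuristic; the risk threshold and scoring function are placeholders, not the dissertation's model:

```python
# Illustrative multi-phase sensor relocation: redeploy a small set of
# water-level sensors toward locations whose observed risk is high.
import random

def relocate_sensors(nodes, observe_risk, n_sensors=5, n_phases=10,
                     risk_threshold=0.8):
    """nodes: candidate pipe locations; observe_risk(node) -> float in
    [0, 1]. Returns the nodes flagged as unacceptable risk."""
    sensors = random.sample(nodes, n_sensors)
    flagged = set()
    for _ in range(n_phases):
        scores = {n: observe_risk(n) for n in sensors}
        flagged |= {n for n, s in scores.items() if s > risk_threshold}
        # Keep a sensor at the riskiest observed site; move the rest
        # to unflagged candidates (a greedy stand-in for the search).
        best = max(scores, key=scores.get)
        pool = [n for n in nodes if n not in flagged and n != best]
        sensors = [best] + random.sample(
            pool, min(len(pool), n_sensors - 1))
    return flagged
```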

    Temporal disambiguation of relative temporal expressions in clinical texts using temporally fine-tuned contextual word embeddings.

    Temporal reasoning is the ability to extract and assimilate temporal information to reconstruct a series of events such that they can be reasoned over to answer questions involving time. Temporal reasoning in the clinical domain is challenging due to specialized medical terms and nomenclature, shorthand notation, fragmented text, a variety of writing styles used by different medical units, redundancy of information that has to be reconciled, and an increased number of temporal references as compared to general domain texts. Work in the area of clinical temporal reasoning has progressed, but the current state of the art still has some way to go before practical application in the clinical setting will be possible. Much of the current work in this field is focused on direct and explicit temporal expressions and identifying temporal relations. However, there is little work focused on relative temporal expressions, which can be difficult to normalize but are vital to ordering events on a timeline. This work introduces a new temporal expression recognition and normalization tool, Chrono, that normalizes temporal expressions into both SCATE and TimeML schemes. Chrono advances clinical timeline extraction as it is capable of identifying more vague and relative temporal expressions than the current state of the art and utilizes contextualized word embeddings from fine-tuned BERT models to disambiguate temporal types, which achieves state-of-the-art performance on relative temporal expressions. In addition, this work shows that fine-tuning BERT models on temporal tasks modifies the contextualized embeddings so that they achieve improved performance in classical SVM and CNN classifiers. Finally, this work provides a new tool for linking temporal expressions to events or other entities by introducing a novel method to identify which tokens an entire temporal expression is paying the most attention to, by summarizing the attention weight matrices output by BERT models.
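
    The attention-summarization step can be sketched with off-the-shelf tools. The example below averages BERT's attention weights across layers and heads to see which tokens a temporal expression attends to most; the model name, example sentence, and averaging scheme are assumptions, not necessarily Chrono's exact method:

```python
# Illustrative attention summarization: which tokens does the span
# "three days" attend to most, averaged over all layers and heads?
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased",
                                  output_attentions=True)

text = "The patient was discharged three days after the operation."
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions: one (batch, heads, seq, seq) tensor per layer.
att = torch.stack(out.attentions).mean(dim=(0, 2))[0]  # (seq, seq)
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
# Attention *from* the temporal span onto every other token.
span = [i for i, t in enumerate(tokens) if t in ("three", "days")]
weights = att[span].mean(dim=0)
for t, w in sorted(zip(tokens, weights.tolist()),
                   key=lambda x: -x[1])[:5]:
    print(f"{t:12s} {w:.3f}")
```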