
    The development of hybrid intelligent systems for technical analysis based equivolume charting

    This dissertation proposes the development of a hybrid intelligent system applied to technical analysis-based equivolume charting for stock trading. A Neuro-Fuzzy based Genetic Algorithm (NF-GA) system for the Volume Adjusted Moving Average (VAMA) membership functions is introduced to evaluate the effectiveness of a hybrid intelligent system that integrates neural network, fuzzy logic, and genetic algorithm techniques for increasing the efficiency of technical analysis-based equivolume charting for trading stocks. --Introduction, page 1
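
    The VAMA indicator named above weights each bar by its traded volume rather than treating all bars equally. The sketch below is a minimal volume-weighted moving average in that spirit, not the dissertation's exact VAMA construction or its fuzzy membership functions.

```python
def volume_adjusted_ma(prices, volumes, window):
    """Volume-weighted moving average: each bar contributes in
    proportion to its traded volume (a simplified stand-in for VAMA)."""
    out = []
    for i in range(window - 1, len(prices)):
        p = prices[i - window + 1: i + 1]
        v = volumes[i - window + 1: i + 1]
        total_volume = sum(v)
        out.append(sum(pi * vi for pi, vi in zip(p, v)) / total_volume)
    return out

# Example: the heavy-volume bar at price 101 pulls the averages toward it.
print(volume_adjusted_ma([100, 101, 102, 103], [10, 50, 10, 10], window=3))
```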

    Learning with Latent Language

    The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter's loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.
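
    The search step described above, choosing a natural-language description that minimizes the interpreter's loss on the training examples, can be sketched as follows. The interpreter, candidate pool, and toy task are placeholders rather than the paper's actual models.

```python
def fit_by_description(candidates, interpret, train_set):
    """Pick the natural-language description whose induced classifier
    makes the fewest mistakes on the training examples."""
    def loss(description):
        return sum(interpret(description, x) != y for x, y in train_set)
    return min(candidates, key=loss)

# Toy usage with a hand-written "interpreter" over string inputs.
def interpret(description, x):
    keyword = description.split()[-1]          # e.g. "contains the word red"
    return keyword in x

train = [("red apple", True), ("green pear", False)]
best = fit_by_description(["contains the word red",
                           "contains the word pear"], interpret, train)
print(best)  # -> "contains the word red"
```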

    Process analytical technology in food biotechnology

    Biotechnology is an area where precision and reproducibility are vital, because its products are often foods, pharmaceuticals, or cosmetics and therefore very close to the human being. To avoid human error during production and quality evaluation, and to make optimal use of raw materials, a very high degree of automation is desired. Tools in the food and chemical industries that aim to reach this higher degree of automation are summarized in an initiative called Process Analytical Technology (PAT). The scope of PAT is to provide new measurement technologies for closed-loop control of biotechnological processes. These processes are among the most demanding in terms of control because a biological component is very often the rate-determining factor. Most important for any automation attempt is deep process knowledge, which can only be achieved via appropriate measurements. These measurements can either be carried out directly, by measuring a crucial physical value, or, if that value is not accessible due to a lack of technology or a complicated sample state, via a soft-sensor. Even after several years, the ideal aim of the PAT initiative is not fully implemented in industry or in many production processes. On the one hand, a lot of effort still needs to be put into the development of more general algorithms that are easier to implement and, especially, more reliable. On the other hand, not all the available advances in this field are employed yet: this work also presents algorithms, control strategies, and original approaches to a novel sensor and a soft-sensor that show great potential. Nevertheless, potential users seem to stick to approved methods and show certain reservations towards new technologies.
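
    Since the abstract above leans on soft-sensors for quantities that cannot be measured online, the sketch below illustrates the idea under simple assumptions: an ordinary least-squares model maps easily measured process variables to a lab-measured target. The variable names and numbers are illustrative only, not taken from the dissertation.

```python
import numpy as np

# Historical batches: rows are easy online measurements
# (temperature, pH, stirrer speed); the target is an offline lab value
# such as biomass concentration.
X = np.array([[30.0, 6.8, 200], [32.0, 6.9, 220],
              [31.0, 7.0, 210], [33.0, 7.1, 230]])
y = np.array([4.1, 4.8, 4.5, 5.2])

# Fit a linear soft-sensor by least squares (with an intercept column).
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def soft_sensor(measurement):
    """Estimate the unmeasured quantity from online measurements."""
    return np.append(measurement, 1.0) @ coef

print(soft_sensor([31.5, 6.95, 215]))  # estimated target for a new sample
```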

    Transfer learning: bridging the gap between deep learning and domain-specific text mining

    Inspired by the success of deep learning techniques in Natural Language Processing (NLP), this dissertation tackles domain-specific text mining problems for which generic deep learning approaches fail. More specifically, the domain-specific problems are: (1) success prediction in crowdfunding, (2) variant identification in biomedical literature, and (3) text data augmentation for low-resource domains. In the first part, transfer learning in a multimodal perspective is utilized to facilitate project success prediction in the crowdfunding application. Even though the information in a project profile can be of different modalities, such as text, images, and metadata, most existing prediction approaches leverage only the text modality. It is therefore promising to utilize the visual images in project profiles and find out how they contribute to success prediction. An advanced neural network scheme that combines information learned from the different modalities is designed and evaluated for project success prediction. In the second part, transfer learning is combined with deep learning techniques to solve genomic variant Named Entity Recognition (NER) problems in biomedical literature. Most advanced generic NER algorithms can fail due to the restricted training corpus; however, those generic deep learning algorithms are capable of learning from a canonical corpus without any effort on feature engineering. This work aims to build an end-to-end deep learning approach that transfers domain-specific knowledge to those advanced generic NER algorithms, addressing the challenges of low-resource training while requiring neither hand-crafted features nor post-processing rules. For the last part, transfer learning with knowledge distillation and active learning is utilized to solve text augmentation for low-resource domains. Most recent text augmentation methods rely heavily on large external resources. This work is dedicated to solving the text augmentation problem adaptively and consistently with minimal resources for token-level tasks such as NER. The solution can also assure the reliability of machine labels for noisy data and can enhance training consistency with noisy labels. All the works are evaluated on their respective domain-specific benchmarks. Experimental results demonstrate the effectiveness of the proposed methods, and the advantages indicate promising potential for transfer learning in domain-specific applications.
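
    The last part above combines knowledge distillation with hard labels. As a rough illustration (not the dissertation's actual loss or hyperparameters), a distillation objective can mix cross-entropy against gold labels with cross-entropy against a teacher's temperature-softened predictions:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    """Mix cross-entropy against hard labels with cross-entropy against
    the teacher's temperature-softened distribution."""
    p_student = softmax(student_logits)
    p_student_t = softmax(student_logits, temperature)
    p_teacher_t = softmax(teacher_logits, temperature)
    hard = -np.log(p_student[np.arange(len(hard_labels)), hard_labels]).mean()
    soft = -(p_teacher_t * np.log(p_student_t)).sum(axis=-1).mean()
    return alpha * hard + (1 - alpha) * soft

student = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
teacher = np.array([[3.0, 0.2, 0.1], [0.1, 2.5, 0.4]])
print(distillation_loss(student, teacher, hard_labels=np.array([0, 1])))
```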

    Evolutionary Learning of Fuzzy Rules for Regression

    The objective of this PhD thesis is to design Genetic Fuzzy Systems (GFSs) that learn Fuzzy Rule Based Systems to solve regression problems in a general manner. In particular, the aim is to obtain models of low complexity that maintain high precision without using expert knowledge about the problem to be solved. This means that the GFSs have to work with raw data, that is, without any preprocessing that helps the learning process solve a particular problem. This is of particular interest when no knowledge about the input data is available, or as a first approximation to the problem. Moreover, within this objective, GFSs have to cope with large-scale problems, so the algorithms have to scale with the data.
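
    As a concrete, if simplified, illustration of a genetic fuzzy system for regression (not the thesis's algorithm), the sketch below evolves a small base of zero-order Takagi-Sugeno rules with Gaussian memberships to fit a toy one-dimensional target; the rule count, genetic operators, and parameters are all assumptions.

```python
import random, math

# Toy data: approximate y = sin(x) on [0, 3] with a small fuzzy rule base.
DATA = [(x / 10.0, math.sin(x / 10.0)) for x in range(0, 31)]
N_RULES = 4  # each rule: (center, width, output value)

def predict(rules, x):
    """Zero-order Takagi-Sugeno inference with Gaussian memberships."""
    num = den = 0.0
    for c, s, w in rules:
        mu = math.exp(-((x - c) ** 2) / (2 * s ** 2 + 1e-9))
        num += mu * w
        den += mu
    return num / (den + 1e-9)

def mse(rules):
    return sum((predict(rules, x) - y) ** 2 for x, y in DATA) / len(DATA)

def random_rules():
    return [(random.uniform(0, 3), random.uniform(0.2, 1.0),
             random.uniform(-1, 1)) for _ in range(N_RULES)]

def mutate(rules, sigma=0.1):
    return [(c + random.gauss(0, sigma),
             abs(s + random.gauss(0, sigma)) + 1e-3,
             w + random.gauss(0, sigma)) for c, s, w in rules]

def crossover(a, b):
    cut = random.randrange(1, N_RULES)
    return a[:cut] + b[cut:]

population = [random_rules() for _ in range(40)]
for generation in range(100):
    population.sort(key=mse)
    parents = population[:10]                       # elitist selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(30)]
    population = parents + children

best = min(population, key=mse)
print("MSE of best rule base:", round(mse(best), 4))
```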

    FAKE NEWS DETECTION ON THE WEB: A DEEP LEARNING BASED APPROACH

    The acceptance and popularity of social media platforms for the dispersion and proliferation of news articles have led to the spread of questionable and untrusted information, in part due to the ease with which misleading content can be created and shared among communities. While prior research has attempted to automatically classify news articles and tweets as credible or non-credible, this work complements such research by proposing an approach that combines Natural Language Processing (NLP) with deep learning techniques such as Long Short-Term Memory (LSTM). Moreover, in the Information Systems paradigm, design science research methodology (DSRM) has become the major stream that focuses on building and evaluating an artifact to solve emerging problems; hence, DSRM can accommodate deep learning-based models when adequate datasets are available. Two publicly available datasets that contain labeled news articles and tweets have been used to validate the proposed model’s effectiveness. This work presents two distinct experiments, and the results demonstrate that the proposed model works well for both long-sequence news articles and short-sequence texts such as tweets. Finally, the findings suggest that sentiment, tagging, linguistic, syntactic, and text-embedding features have the potential to foster fake news detection by training the proposed model on various dimensionalities to learn the contextual meaning of the news content.
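
    For readers unfamiliar with the LSTM classifier mentioned above, the minimal PyTorch sketch below shows the general shape of such a model: token ids are embedded, run through an LSTM, and the final hidden state is mapped to a credibility score. The vocabulary size, dimensions, and single-score head are illustrative assumptions, and none of the dissertation's engineered features are included.

```python
import torch
import torch.nn as nn

class NewsLSTM(nn.Module):
    """Token ids -> embedding -> LSTM -> credible / non-credible score."""
    def __init__(self, vocab_size=20000, embed_dim=100, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):
        x = self.embed(token_ids)              # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)             # h_n: (1, batch, hidden_dim)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

model = NewsLSTM()
fake_batch = torch.randint(1, 20000, (8, 120))   # 8 articles, 120 tokens each
print(model(fake_batch).shape)                   # -> torch.Size([8])
```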

    Evolutionary algorithms in artificial intelligence: a comparative study through applications

    For many years, research in artificial intelligence followed a symbolic paradigm which required a level of knowledge described in terms of rules. More recently, subsymbolic approaches have been adopted as a suitable means for studying many problems. There are many search mechanisms which can be used to manipulate subsymbolic components, and in recent years general search methods based on models of natural evolution have become increasingly popular. This thesis examines a hybrid symbolic/subsymbolic approach and the application of evolutionary algorithms to a problem from each of the fields of shape representation (finding an iterated function system for an arbitrary shape), natural language dialogue (tuning parameters so that a particular behaviour can be achieved) and speech recognition (selecting the penalties used by a dynamic programming algorithm in creating a word lattice). These problems were selected on the basis that each should have fundamentally different interactions at the subsymbolic level. Results demonstrate that, for the experiments conducted, the evolutionary algorithms performed well in most cases. However, the type of subsymbolic interaction that may occur influences the relative performance of evolutionary algorithms which emphasise either top-down (evolutionary programming - EP) or bottom-up (genetic algorithm - GA) means of solution discovery. For the shape representation problem, EP is seen to perform significantly better than a GA, and reasons for this disparity are discussed. Furthermore, EP appears to offer a powerful means of finding solutions to this problem, and so the background and details of the problem are discussed at length. Some novel constraints on the problem's search space are also presented which could be used in related work. For the dialogue and speech recognition problems, a GA and EP both produce good results, with EP performing slightly better. Results achieved with EP have been used to improve the performance of a speech recognition system.
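
    To make the EP-versus-GA contrast drawn above concrete, the toy comparison below pits a mutation-only, EP-style search against a crossover-plus-mutation, GA-style search on a simple continuous objective. The objective, population sizes, and operators are illustrative assumptions and do not reproduce the thesis's problems or settings.

```python
import random

def fitness(x):
    """Toy objective: minimise the sphere function."""
    return sum(v * v for v in x)

def evolutionary_programming(dim=5, pop=30, gens=200):
    """EP-style search: mutation only, each parent spawns one offspring."""
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        offspring = [[v + random.gauss(0, 0.3) for v in ind] for ind in population]
        population = sorted(population + offspring, key=fitness)[:pop]
    return fitness(population[0])

def genetic_algorithm(dim=5, pop=30, gens=200):
    """GA-style search: recombination of parents plus a small mutation."""
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)
            children.append([v + random.gauss(0, 0.05) for v in a[:cut] + b[cut:]])
        population = parents + children
    return fitness(population[0])

print("EP best:", round(evolutionary_programming(), 4))
print("GA best:", round(genetic_algorithm(), 4))
```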