
    Multi-input distributed classifiers for synthetic genetic circuits

    For the practical construction of complex synthetic genetic networks able to perform elaborate functions, it is important to have a pool of relatively simple "bio-bricks" with different functionality which can be compounded together. To complement the engineering of existing synthetic genetic devices such as switches, oscillators or logic gates, we propose and develop here a design for a synthetic multi-input distributed classifier with learning ability. The proposed classifier is able to separate multi-input data which are inseparable for single-input classifiers. Additionally, the data classes can occupy a region of any shape in the space of inputs. We study two approaches to classification, hard and soft, and confirm the schemes of the genetic networks by analytical and numerical results.
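
The hard/soft distinction described above can be illustrated with a toy ensemble of simple threshold "cells" voting on two-input data. This is a hypothetical numerical sketch, not the paper's genetic circuit design: the cells, weights, and the correlation-based "learning" step are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-input data: class 1 occupies a diagonal band, so neither
# input alone separates the classes (a single-input classifier fails here).
X = rng.uniform(0, 1, size=(200, 2))
y = (np.abs(X[:, 0] - X[:, 1]) < 0.25).astype(int)

# A "distributed classifier": many simple cells, each comparing a random
# linear combination of both inputs against a random threshold.
n_cells = 50
W = rng.normal(size=(n_cells, 2))
b = rng.normal(size=n_cells)

def cell_outputs(x):
    """Graded (soft) response of every cell to one input vector."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))   # sigmoid activation

def classify_soft(x, weights):
    """Soft classification: weighted average of graded cell responses."""
    return float(cell_outputs(x) @ weights / weights.sum())

def classify_hard(x, weights, threshold=0.5):
    """Hard classification: weighted majority vote of binarised cells."""
    votes = (cell_outputs(x) > 0.5).astype(float)
    return int(votes @ weights / weights.sum() > threshold)

# Toy "learning": weight each cell by how strongly its mean response
# differs between the two classes (an invented stand-in for the paper's
# learning procedure).
responses = np.array([cell_outputs(x) for x in X])        # (200, n_cells)
agreement = np.abs(responses[y == 1].mean(axis=0)
                   - responses[y == 0].mean(axis=0))
weights = agreement + 1e-9

p = classify_soft(X[0], weights)       # soft output in [0, 1]
label = classify_hard(X[0], weights)   # hard output in {0, 1}
```

The soft classifier returns a graded confidence, while the hard classifier commits each cell to a binary vote before aggregation, mirroring the two approaches studied in the abstract.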

    Facet Analysis Using Grammar

    Basic grammar can achieve most, if not all, of the goals of facet analysis without requiring the use of facet indicators. Facet analysis is thus rendered far simpler for the classificationist, classifier, and user. We compare facet analysis and grammar, and show how various facets can be represented grammatically. We then address potential challenges in employing grammar as subject classification. A detailed review of basic grammar supports the hypothesis that it is feasible to usefully employ grammatical construction in subject classification. A manageable, and programmable, set of adjustments is required as classifiers move fairly directly from sentences in a document (or object or idea) description to formulating a subject classification. The user can likewise move fairly quickly from a query to the identification of relevant works. A review of theories in linguistics indicates that a grammatical approach should reduce ambiguity while encouraging ease of use. This paper applies the recommended approach to a small sample of recently published books. It finds that the approach is feasible and results in a more precise subject description than the subject headings assigned at present. It then explores PRECIS, an indexing system developed in the 1970s. Though our approach differs from PRECIS in many important ways, the experience of PRECIS supports our conclusions regarding both feasibility and precision.
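
The programmability claimed above can be sketched as a mapping from grammatical roles to facets. The role names, facet labels, and example sentence below are invented for illustration and are not the paper's actual scheme:

```python
# Hypothetical role-to-facet mapping: grammatical roles of a short subject
# description are mapped onto classical facet categories.
ROLE_TO_FACET = {
    "subject": "agent",            # who or what acts
    "verb": "action/process",
    "object": "thing acted upon",
    "prep_phrase": "space/time/manner",
}

def facets_from_roles(tagged):
    """tagged: list of (phrase, grammatical_role) pairs -> facet dict."""
    facets = {}
    for phrase, role in tagged:
        facet = ROLE_TO_FACET.get(role)
        if facet:
            facets.setdefault(facet, []).append(phrase)
    return facets

# "Farmers cultivate wheat in winter", pre-tagged with grammatical roles.
desc = [("farmers", "subject"), ("cultivate", "verb"),
        ("wheat", "object"), ("in winter", "prep_phrase")]
classification = facets_from_roles(desc)
```

The point of the sketch is that once a sentence is grammatically tagged, deriving a faceted subject classification is a direct, mechanical translation, which is the feasibility claim the abstract makes.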

    Algorithm for the Parkinson’s disease behavioural models characterization using a biosensor

    Dissertation submitted for the degree of Master in Biomedical Engineering. The neurodegenerative disease Parkinson's Disease (PD) constitutes a major health problem in the modern world, and its impact on public health and society is expected to increase with the ongoing ageing of the human population. This disease is characterized by motor and non-motor manifestations that are progressive and ultimately refractory to therapeutic interventions. The degeneration of dopaminergic neurons emanating from the substantia nigra is largely responsible for the motor manifestations. Thus, understanding the behaviour related to this disease is an added value for the diagnosis and treatment of PD. Also, in vivo models are essential tools for deciphering the molecular mechanisms underpinning the neurodegenerative process. Zebrafish has several features that make this species a good candidate to study PD. In particular, the occurrence of behavioural phenotypes in animals treated with neurotoxin drugs that mimic the disease has been investigated, and an electric biosensor, the Marine On-line Biomonitor System (MOBS), is being used for the real-time quantification of such behaviour. This equipment allows quantifying fish movements through signal processing algorithms. Specifically, the algorithm evaluates fish locomotion detected as a series of bursts in the MOBS signal that correspond to zebrafish tail-flip activity. In this thesis we developed an algorithm affording an electrical signal discrimination between "healthy" and "ill" zebrafish, consequently improving the detection of parkinsonism-like phenotypes in zebrafish. The first approach was the improvement of the existing algorithm. However, this first analysis failed to distinguish between different behavioural phenotypes when fish were treated with the neurotoxin 6-hydroxydopamine (6-OHDA). Consequently, we generated a new algorithm based on Machine Learning techniques.
    As a result, the novel algorithm provides a classification of the health condition of the fish, stating whether it is "healthy" or "ill" with the respective probability, together with the activity level of the fish in number of tail-flips per minute. The Support Vector Machine (SVM) method was useful for the classification of the fish events, and the zero-crossing rate parameter was used for the characterization of the swimming activities. The algorithm was also integrated into the OpenSignals platform, and, for a faster evaluation of the signals, the implementation included parallel programming methods. This algorithm is a useful tool to study behaviour in zebrafish: not only will it allow more realistic studies in PD research, but it will also support testing and assessing new drugs using zebrafish as an animal model.
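
The two signal features named in the abstract, the zero-crossing rate and the tail-flip (burst) rate, can be sketched on synthetic signals. The thesis pairs these features with an SVM classifier; the sketch below covers only the feature extraction, and the sampling rate, burst threshold, and test signals are invented for illustration.

```python
import numpy as np

def zero_crossing_rate(signal):
    """Fraction of consecutive samples where the signal changes sign --
    the parameter used to characterise tail-flip swimming activity."""
    signs = np.sign(signal)
    signs[signs == 0] = 1                      # treat exact zeros as positive
    return float(np.mean(signs[1:] != signs[:-1]))

def tail_flips_per_minute(signal, fs, burst_threshold):
    """Count bursts (rising threshold crossings of the rectified signal)
    and scale to events per minute. `burst_threshold` is an assumed
    tuning parameter, not a value from the thesis."""
    above = np.abs(signal) > burst_threshold
    onsets = int(np.sum(above[1:] & ~above[:-1]))   # rising edges
    duration_min = len(signal) / fs / 60.0
    return onsets / duration_min

# Synthetic example: a "healthy" fish produces frequent high-frequency
# bursts; an "ill" (6-OHDA-like) fish a slower, sparser signal.
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 100.0)                # 10 s at 100 Hz (assumed fs)
healthy = np.sin(2 * np.pi * 8 * t) * (rng.random(t.size) > 0.5)
ill = 0.3 * np.sin(2 * np.pi * 1 * t)

zcr_healthy = zero_crossing_rate(healthy)
zcr_ill = zero_crossing_rate(ill)
```

On these synthetic signals the healthy trace yields a markedly higher zero-crossing rate, which is exactly the separation an SVM would then exploit as a decision boundary in feature space.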

    Reviewing agent-based modelling of socio-ecosystems: a methodology for the analysis of climate change adaptation and sustainability

    The integrated - environmental, economic and social - analysis of climate change calls for a paradigm shift, as it is fundamentally a problem of complex, bottom-up and multi-agent human behaviour. There is a growing awareness that global environmental change dynamics and the related socio-economic implications involve a degree of complexity that requires an innovative modelling of combined social and ecological systems. Climate change policy can no longer be addressed separately from a broader context of adaptation and sustainability strategies. A vast body of literature on agent-based modelling (ABM) shows its potential to couple social and environmental models, to incorporate the influence of micro-level decision making in the system dynamics, and to study the emergence of collective responses to policies. However, few publications concretely apply this methodology to the study of climate change related issues. The analysis of the state of the art reported in this paper supports the idea that ABM is today an appropriate methodology for the bottom-up exploration of climate policies, especially because it can take into account adaptive behaviour and the heterogeneity of the system's components. Keywords: review, agent-based modelling, socio-ecosystems, climate change, adaptation, complexity.
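
The bottom-up, heterogeneous-agent character of ABM that the review emphasises can be shown in a minimal sketch. Everything here (the `Household` agent, its adaptation threshold, the rising hazard signal) is a hypothetical illustration, not a model from the reviewed literature:

```python
import random

class Household:
    """A heterogeneous agent: each household has its own risk threshold
    above which it adopts an adaptation measure (names are illustrative)."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.adapted = False

    def step(self, hazard_level):
        # Micro-level decision rule: adapt once perceived hazard exceeds
        # the household's individual tolerance.
        if not self.adapted and hazard_level > self.threshold:
            self.adapted = True

def run_model(n_agents=100, n_steps=50, seed=42):
    rng = random.Random(seed)
    # Heterogeneity: thresholds drawn from a distribution, not identical.
    agents = [Household(rng.uniform(0.2, 1.0)) for _ in range(n_agents)]
    adoption = []
    for t in range(n_steps):
        hazard = t / n_steps          # slowly rising climate hazard
        for a in agents:
            a.step(hazard)
        adoption.append(sum(a.adapted for a in agents) / n_agents)
    return adoption    # emergent, bottom-up adoption curve

curve = run_model()
```

No agent knows the aggregate outcome, yet a collective S-shaped adoption curve emerges from individual threshold decisions, which is the kind of emergent policy-relevant dynamics the review argues ABM captures.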

    Adaptive rule-based malware detection employing learning classifier systems

    Efficient and accurate malware detection is increasingly becoming a necessity for society to operate. Existing malware detection systems have excellent performance in identifying known malware for which signatures are available, but poor performance in anomaly detection for zero-day exploits, for which signatures have not yet been made available, or for targeted attacks against a specific entity. The primary goal of this thesis is to provide evidence for the potential of learning classifier systems to improve the accuracy of malware detection. A customized system based on a state-of-the-art learning classifier system is presented for adaptive rule-based malware detection, which combines a rule-based expert system with evolutionary-algorithm-based reinforcement learning, thus creating a self-training adaptive malware detection system which dynamically evolves detection rules. This system is analyzed on a benchmark of malicious and non-malicious files. Experimental results show that the system can outperform C4.5, a well-known non-adaptive machine learning algorithm, under certain conditions. The results demonstrate the system's ability to learn effective rules from repeated presentations of a tagged training set and show the degree of generalization achieved on an independent test set. This thesis is an extension and expansion of the work published in the Security, Trust, and Privacy for Software Applications workshop at COMPSAC 2011, the 35th Annual IEEE Signature Conference on Computer Software and Applications. --Abstract, page iii
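
The core mechanics of a learning classifier system, ternary condition matching and reinforcement of rule fitness, can be sketched briefly. This is a minimal LCS-flavoured illustration, not the thesis's customized system; the feature encoding, rules, and default action are invented:

```python
# Rules have ternary conditions over binary feature vectors; '#' is a
# wildcard matching either bit. Fitness is updated from detection reward.

def matches(condition, features):
    """True if every non-wildcard position of the condition equals the
    corresponding feature bit."""
    return all(c == '#' or c == f for c, f in zip(condition, features))

class Rule:
    def __init__(self, condition, action):
        self.condition = condition      # e.g. '1#0#'
        self.action = action            # 'malware' or 'benign'
        self.fitness = 0.5              # initial fitness (assumed)

    def reinforce(self, reward, beta=0.2):
        # Widrow-Hoff style update used in many LCS variants: move
        # fitness toward the received reward.
        self.fitness += beta * (reward - self.fitness)

def classify(rules, features):
    """Fitness-weighted vote over all rules matching the feature vector."""
    matched = [r for r in rules if matches(r.condition, features)]
    if not matched:
        return 'benign'                 # assumed default action
    score = {}
    for r in matched:
        score[r.action] = score.get(r.action, 0.0) + r.fitness
    return max(score, key=score.get)

rules = [Rule('1#1#', 'malware'), Rule('0###', 'benign')]
label = classify(rules, '1010')         # only '1#1#' matches
```

A full LCS would additionally evolve the rule population with a genetic algorithm (crossover and mutation over conditions), which is the "dynamically evolves detection rules" part of the abstract; the sketch shows only the matching and reinforcement loop.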

    Framework of hierarchy for neural theory


    Constructivism, epistemology and information processing

    The author analyzes the main artificial intelligence models that deal with the transition from one stage to another, a central problem in development. He describes the contributions of rule-based systems and connectionist systems to an explanation of this transition. He considers that artificial intelligence models, in spite of their limitations, establish fruitful points of contact with the constructivist position.

    Automatically identifying and predicting unplanned wind turbine stoppages using SCADA and alarms system data: case study and results

    Using 10-minute wind turbine SCADA data for fault prediction offers an attractive way of gaining additional prognostic capabilities without needing to invest in extra hardware. To use these data-driven methods effectively, the historical SCADA data must be labelled with the periods when the turbine was in faulty operation, as well as the sub-system the fault was attributed to. Manually identifying faults using maintenance logs can be effective, but it is also highly time-consuming and tedious due to the disparate nature of these logs across manufacturers, operators and even individual maintenance events. Turbine alarm systems can help to identify these periods, but the sheer volume of alarms and false positives generated makes analysing them on an individual basis ineffective. In this work, we present a new method for automatically identifying historical stoppages on the turbine using SCADA and alarms data. Each stoppage is associated with either a fault in one of the turbine's sub-systems, a routine maintenance activity, a grid-related event or one of a number of other categories. This labelling is then checked against maintenance logs for accuracy, and the labelled data are fed into a classifier for predicting when these stoppages will occur. Results show that the automated labelling process correctly identifies each type of stoppage and can be effectively used for SCADA-based prediction of turbine faults.
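
The labelling idea, detecting zero-power periods in usable wind and attributing them to the category of the concurrent alarm, can be sketched as follows. The alarm codes, category names, and cut-in wind speed below are invented for illustration; the paper's actual rules and taxonomy will differ.

```python
# Hypothetical alarm-code-to-category table (invented codes).
ALARM_CATEGORY = {
    101: "pitch-system fault",
    202: "grid event",
    303: "scheduled maintenance",
}

def find_stoppages(samples, min_wind=4.0):
    """samples: list of (timestamp, wind_speed_ms, power_kW).
    Return (start, end) index pairs where the turbine produced no power
    despite wind above an assumed cut-in speed."""
    stoppages, start = [], None
    for i, (_, wind, power) in enumerate(samples):
        stopped = power <= 0.0 and wind >= min_wind
        if stopped and start is None:
            start = i
        elif not stopped and start is not None:
            stoppages.append((start, i - 1))
            start = None
    if start is not None:                     # stoppage runs to end of data
        stoppages.append((start, len(samples) - 1))
    return stoppages

def label_stoppage(stoppage, alarms, samples):
    """Label a stoppage with the category of an alarm active at its start;
    'unknown' if no alarm explains it."""
    start_ts = samples[stoppage[0]][0]
    for ts, code in alarms:
        if ts == start_ts and code in ALARM_CATEGORY:
            return ALARM_CATEGORY[code]
    return "unknown"

# Four 10-minute samples: one stoppage at indices 1-2, with a grid alarm.
samples = [(0, 8.0, 1500.0), (1, 8.2, 0.0), (2, 8.1, 0.0), (3, 7.9, 1400.0)]
alarms = [(1, 202)]
stops = find_stoppages(samples)
```

The resulting labelled periods are exactly the training targets the abstract describes feeding into a downstream fault-prediction classifier.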

    Investigating the causes and consequences of altered subcellular spatial composition in the immune system and beyond

    NLRP3 priming by translocation

    Working on the role of NEK7 in NLRP3 activation, we had discovered that, in contrast to the role of NEK7 in mouse cells, human cells activate NLRP3 independently of NEK7. "Transplanting" mouse NLRP3 into a model of human monocytes rescued the activity of mouse NLRP3 in the absence of NEK7. From this result we concluded that rather than a difference between the two NLRP3 orthologues, a difference between cellular signalling must be responsible for the differential requirement of NEK7 for NLRP3 activation. Coupled with the finding that TLR4 stimulation via LPS can bypass the requirement for NEK7 in mouse cells, we concluded that a pathway activated downstream of TLR4 can bypass NEK7 by priming NLRP3. Tracing the signalling cascade of TLR4 by genetically knocking out its components, we arrived at the kinase IKKβ. Indeed, experiments with knockouts of IKKβ in mouse and human cells explained both phenotypes: LPS could no longer bypass NEK7 in mouse cells, and NLRP3 signalling in human cells was blunted. Why human cells are incapable of using NEK7 to prime NLRP3 in the absence of IKKβ remains unclear. Using human induced pluripotent stem cell-derived macrophages that we could genetically engineer to lack NEK7 as a model system, we confirmed that human cells, in contrast to mouse cells, do not require NEK7, but instead fully rely on IKKβ to prime NLRP3. Elucidating the mechanism by which IKKβ primes NLRP3 for NEK7-independent inflammasome activation, we found that IKKβ activity recruits a fluorescently tagged NLRP3 variant to the trans-Golgi network, a finding we corroborated by mass spectrometry analysis of subcellular fractions. Our results define recruitment of NLRP3 to a specific organelle as a new priming modality of the NLRP3 inflammasome.
    CRISPR screening for subcellular spatial phenotypes at genome scale

    The development of charge-coupled device (CCD) chips has enabled the acquisition of digital images at high resolution (Boyle and Smith 1970). In combination with modern microscopes, the latest generation of such chips has enabled the collection of large digital datasets representing the spatial composition of cells. A technology that can profile this composition and connect it to the genetic identity of individual cells at scale could generate insights into complex cellular biology. Here we developed a new genetic screening technology for image-based phenotypes. We first generated a library of 40 million human U2OS cells with one genetic knockout in each cell using CRISPR/Cas9. The cells in this library had been genetically engineered to express the fluorescently labelled autophagosome marker LC3 (mCherry-LC3). We stimulated these cells with the mTOR inhibitor Torin-1 to induce autophagy, during which LC3 is redistributed onto autophagosomes. We then acquired microscopy images of this library and segmented them into single cells, using a nuclear stain to identify individual cells and a membrane stain to associate the cytosol of a cell with its nucleus. This resulted in a dataset of single-cell images across three channels: the nucleus and membrane channels used for segmentation, and an image corresponding to the distribution of LC3 in each cell. Given that each cell in this library harboured a different genetic knockout, we expected some cells to have been unable to redistribute LC3 onto autophagosomes following Torin-1 stimulation, owing to the lack of a gene essential for this process. We then sought to identify these cells based on their LC3 images. Since these data are inherently large and complex, we made use of the recent breakthrough in image analysis by machine learning models (LeCun et al. 2015).
    Using a knockout of ATG5, an essential autophagy gene, as a positive control, we trained a binary classifier based on a convolutional neural network to differentiate between images of cells undergoing autophagy and images of cells that had a blunted response to Torin-1 or were left unstimulated, and were therefore not undergoing autophagy. With this classifier we were able to identify individual cells in our library that were incapable of forming autophagosomes in response to Torin-1. We then used fully automated laser microdissection to isolate the nuclei of these cells and subsequently sequenced their genetic perturbations. Here we found almost all genes known to be essential for autophagy to be defective in this pool of selected cells. This experiment demonstrates that our technology is capable of associating image-based phenotypes with the genotype of individual cells at genome scale.
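
The classification step above uses a trained convolutional neural network; as a minimal stand-in with the same input/output contract, the sketch below scores the "punctateness" of a single-cell LC3 image with a hand-crafted feature (bright autophagosome puncta raise the intensity variance relative to the mean) and thresholds it. The feature, threshold, and synthetic images are all invented for illustration, not part of the thesis.

```python
import numpy as np

def punctateness(cell_image):
    """Coefficient of variation of pixel intensity: high for images with
    discrete LC3 puncta (autophagosomes), low for diffuse cytosolic LC3."""
    img = np.asarray(cell_image, dtype=float)
    return img.std() / (img.mean() + 1e-9)

def classify_cells(images, threshold=1.0):
    """Return indices of cells scored as NOT forming autophagosomes --
    candidates carrying a knockout in a gene essential for autophagy.
    The threshold is an assumed tuning parameter."""
    return [i for i, img in enumerate(images)
            if punctateness(img) < threshold]

# Synthetic 32x32 single-cell LC3 images.
rng = np.random.default_rng(7)
diffuse = rng.normal(100, 5, size=(32, 32))      # ATG5-KO-like: diffuse LC3
punctate = np.full((32, 32), 10.0)
punctate[rng.integers(0, 32, 20), rng.integers(0, 32, 20)] = 250.0  # puncta

hits = classify_cells([punctate, diffuse])       # indices of "blocked" cells
```

In the actual screen the CNN learns this discrimination from the ATG5-knockout positive control rather than from a hand-crafted feature, and the selected "hit" cells are the ones whose nuclei are then isolated by laser microdissection for sequencing.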