
    Simulations in statistical physics and biology: some applications

    One of the most active areas of physics in recent decades has been that of critical phenomena, and Monte Carlo simulations have played an important role as a guide for the validation and prediction of system properties close to critical points. The phase transitions occurring on the Betts lattice (the lattice constructed by removing 1/7 of the sites from the triangular lattice) have been studied before with the Potts model for q=3, in both the ferromagnetic and antiferromagnetic regimes. Here we extend this line of research to the ferromagnetic cases q=4 and q=5. In the former case, the critical exponents of the second-order transition are estimated, whereas in the latter the histogram method is applied to the first-order transition that occurs. Additionally, Domany's Monte Carlo-based clustering technique, used mainly to group genes with similar expression levels, is reviewed. Finally, a control-theory tool, an adaptive observer, is applied to estimate the exponent parameter of the well-known Gompertz curve. In treating all these subjects, our aim is to stress the importance of cooperation between distinct disciplines in addressing the complex problems arising in biology. Contents: Chapter 1 - Monte Carlo simulations in statistical physics; Chapter 2 - MC simulations in biology; Chapter 3 - Gompertz equation. Comment: 82 pages, 33 figures, 4 tables; somewhat reduced version of the M.Sc. thesis defended in Jan. 2006 at IPICyT, San Luis Potosi, Mx. (Supervisors: Drs. R. Lopez-Sandoval and H.C. Rosu). The last sections, 3.3 and 3.4, can be found at http://lanl.arxiv.org/abs/physics/041108
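    To make the simulation setting concrete, the following is a minimal sketch of one Metropolis sweep for a ferromagnetic q-state Potts model. It uses a square lattice with periodic boundaries as a simplifying assumption (the thesis works on the Betts lattice), and the temperature in the usage example is the square-lattice critical value, not a Betts-lattice result.

```python
import numpy as np

def metropolis_sweep(spins, q, T, J=1.0, rng=None):
    """One Metropolis sweep of a ferromagnetic q-state Potts model.

    spins is an (L, L) integer array with values in {0, ..., q-1}.
    The square lattice with periodic boundaries used here is a
    simplifying assumption; the thesis works on the Betts lattice.
    """
    rng = rng if rng is not None else np.random.default_rng()
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        old, new = spins[i, j], rng.integers(0, q)
        if new == old:
            continue
        # Nearest neighbours under periodic boundary conditions.
        nn = (spins[(i + 1) % L, j], spins[(i - 1) % L, j],
              spins[i, (j + 1) % L], spins[i, (j - 1) % L])
        # H = -J * sum over bonds of delta(s_i, s_j), so the energy change is:
        dE = -J * (sum(s == new for s in nn) - sum(s == old for s in nn))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new

# Usage: relax a 32x32, q=4 lattice near the square-lattice critical
# temperature T_c = 1 / ln(1 + sqrt(q)) (in units of J/k_B).
q, L = 4, 32
rng = np.random.default_rng(0)
spins = rng.integers(0, q, size=(L, L))
for _ in range(100):
    metropolis_sweep(spins, q, T=1.0 / np.log(1.0 + np.sqrt(q)), rng=rng)
```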

    Pattern Recognition

    Pattern recognition is a very broad research field. It involves elements as diverse as sensors, feature extraction, pattern classification, decision fusion, and applications. The signals processed are commonly one-, two-, or three-dimensional; the processing may run in real time or take hours or days; some systems look for a single narrow object class, while others search huge databases for entries with even a small degree of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms, and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments in pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented work. The authors of these 25 contributions present and advocate recent achievements of their research in pattern recognition.

    Statistical Data Modeling and Machine Learning with Applications

    The modeling and processing of empirical data is one of the main subjects and goals of statistics. Nowadays, with the development of computer science, these goals have been extended to the extraction of useful and often hidden information and patterns from data sets of varying volume and complexity, including data warehouses. New and powerful statistical techniques based on machine learning (ML) and data mining paradigms have been developed. To one degree or another, all of these techniques and algorithms rest on a rigorous mathematical basis, including probability theory, mathematical statistics, operational research, mathematical analysis, and numerical methods. Popular ML methods, such as artificial neural networks (ANN), support vector machines (SVM), decision trees, and random forests (RF), have generated models that can be considered straightforward applications of optimization theory and statistical estimation. The wide arsenal of classical statistical approaches, combined with powerful ML techniques, allows many challenging practical problems to be solved. This Special Issue belongs to the section “Mathematics and Computer Science”. Its aim is to present a concise collection of carefully selected papers on new and original methods, data analyses, case studies, comparative studies, and other research on statistical data modeling and ML, as well as their applications. Particular attention is given, but not limited, to theories and applications in diverse areas such as computer science, medicine, engineering, banking, education, sociology, and economics. The resulting palette of methods, algorithms, and applications for statistical modeling and ML presented in this Special Issue is expected to contribute to further research in this area. We also believe that the new knowledge and applied results collected here will be attractive and useful to young scientists, doctoral students, and researchers from various scientific specialties.
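    As a small, hedged illustration of two of the ML methods named above (random forests and SVMs), the scikit-learn sketch below cross-validates both on a synthetic classification data set; the data, hyperparameters, and scoring are arbitrary choices for demonstration only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic data standing in for an empirical data set.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM (RBF kernel)": SVC(kernel="rbf", C=1.0, gamma="scale"),
}

for name, model in models.items():
    # 5-fold cross-validated accuracy as a simple point of comparison.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```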

    Deep representation learning: Fundamentals, Perspectives, Applications, and Open Challenges

    Machine learning algorithms have had a profound impact on the field of computer science over the past few decades. Their performance is greatly influenced by the representations derived from the data during the learning process. The representations learned in a successful learning process should be concise, discrete, meaningful, and applicable across a variety of tasks. Recent effort has been directed toward developing deep learning models, which have proven particularly effective at capturing high-dimensional, non-linear, and multi-modal characteristics. In this work, we discuss the principles of and developments in learning representations and converting them into desirable applications. In addition, for each framework or model, the key issues and open challenges, as well as the advantages, are examined.
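    As a hedged illustration of representation learning (not a method taken from the surveyed work itself), the sketch below trains a small autoencoder whose bottleneck acts as a learned representation; the architecture, dimensions, and random input batch are illustrative assumptions.

```python
import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Tiny autoencoder: the bottleneck 'code' is the learned representation."""

    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        code = self.encoder(x)              # compact representation of x
        return self.decoder(code), code

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(64, 784)                 # random stand-in for real data
for _ in range(10):                         # a few illustrative training steps
    reconstruction, code = model(batch)
    loss = nn.functional.mse_loss(reconstruction, batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```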

    From Molecules to the Masses: Visual Exploration, Analysis, and Communication of Human Physiology

    The overarching theme of this thesis is the cross-disciplinary application of medical illustration and visualization techniques to address challenges in exploring, analyzing, and communicating aspects of physiology to audiences with differing expertise. Describing the myriad biological processes occurring in living beings over time, the science of physiology is complex and critical to our understanding of how life works. It spans many spatio-temporal scales to combine and bridge the basic sciences (biology, physics, and chemistry) to medicine. Recent years have seen an explosion of new and finer-grained experimental and acquisition methods to characterize these data. The volume and complexity of these data necessitate effective visualizations to complement standard analysis practice.
Visualization approaches must carefully consider and be adaptable to the user's main task, be it exploratory, analytical, or communication-oriented. This thesis contributes to the areas of theory, empirical findings, methods, applications, and research replicability in visualizing physiology. Our contributions open with a state-of-the-art report exploring the challenges and opportunities in visualization for physiology. This report is motivated by the need for visualization researchers, as well as researchers in various application domains, to have a centralized, multiscale overview of visualization tasks and techniques. Using a mixed-methods search approach, this is the first report of its kind to broadly survey the space of visualization for physiology. Our approach to organizing the literature in this report enables the lookup of topics of interest according to spatio-temporal scale. It further subdivides works according to any combination of three high-level visualization tasks: exploration, analysis, and communication. This provides an easily-navigable foundation for discussion and future research opportunities for audience- and task-appropriate visualization for physiology. From this report, we identify two key areas for continued research that begin narrowly and subsequently broaden in scope: (1) exploratory analysis of multifaceted physiology data for expert users, and (2) communication for experts and non-experts alike. Our investigation of multifaceted physiology data takes place over two studies. Each targets processes occurring at different spatio-temporal scales and includes a case study with experts to assess the applicability of our proposed method. At the molecular scale, we examine data from magnetic resonance spectroscopy (MRS), an advanced biochemical technique used to identify small molecules (metabolites) in living tissue that are indicative of metabolic pathway activity. Although highly sensitive and specific, the output of this modality is abstract and difficult to interpret. Our design study investigating the tasks and requirements for expert exploratory analysis of these data led to SpectraMosaic, a novel application enabling domain researchers to analyze any permutation of metabolites in ratio form for an entire cohort, or by sample region, individual, acquisition date, or brain activity status at the time of acquisition. A second approach considers the exploratory analysis of multidimensional physiological data at the opposite end of the spatio-temporal scale: population. An effective exploratory data analysis workflow critically must identify interesting patterns and relationships, which becomes increasingly difficult as data dimensionality increases. Although this can be partially addressed with existing dimensionality reduction techniques, the nature of these techniques means that subtle patterns may be lost in the process. In this approach, we describe DimLift, an iterative dimensionality reduction technique enabling user identification of interesting patterns and relationships that may lie subtly within a dataset through dimensional bundles. Key to this method is the user's ability to steer the dimensionality reduction technique to follow their own lines of inquiry. Our third question considers the crafting of visualizations for communication to audiences with different levels of expertise. It is natural to expect that experts in a topic may have different preferences and criteria to evaluate a visual communication relative to a non-expert audience. 
This impacts the success of an image in communicating a given scenario. Drawing from diverse techniques in biomedical illustration and visualization, we conducted an exploratory study of the criteria that audiences use when evaluating a biomedical process visualization targeted for communication. From this study, we identify opportunities for further convergence of biomedical illustration and visualization techniques for more targeted visual communication design. One opportunity that we discuss in greater depth is the development of semantically consistent guidelines for the coloring of molecular scenes. The intent of such guidelines is to elevate the scientific literacy of non-expert audiences in the context of molecular visualization, which is particularly relevant to public health communication. All application code and empirical findings are open-sourced and available for reuse by the scientific community and public. The methods and findings presented in this thesis contribute to a foundation of cross-disciplinary biomedical illustration and visualization research, opening several opportunities for continued work in visualization for physiology.
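    The abstract does not give implementation details for DimLift, so the following is only a loose, hypothetical illustration of the underlying idea of "dimensional bundles": user-defined groups of variables are each reduced separately (here with PCA from scikit-learn), and the per-bundle summaries can then be inspected or re-bundled iteratively. The bundle names and data are invented for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy population-level table: 200 individuals, 12 measured variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))

# Hypothetical user-defined "dimensional bundles": groups of related variables.
bundles = {"anthropometry": [0, 1, 2, 3],
           "blood_markers": [4, 5, 6, 7, 8],
           "lifestyle":     [9, 10, 11]}

# Reduce each bundle to one summary component and report how much of the
# bundle's variance that component captures.
summaries = {}
for name, cols in bundles.items():
    pca = PCA(n_components=1)
    summaries[name] = pca.fit_transform(X[:, cols]).ravel()
    print(f"{name}: {pca.explained_variance_ratio_[0]:.2f} of bundle variance")

# The per-bundle summaries can be inspected, plotted, or re-bundled in a
# further iteration, letting the user steer the reduction.
Z = np.column_stack([summaries[name] for name in bundles])
```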

    Machine learning methods for genomic high-content screen data analysis applied to deduce organization of endocytic network

    High-content screens are widely used to gain insight into the mechanistic organization of biological systems. Chemical and/or genomic interferences are used to modulate the molecular machinery; light microscopy and quantitative image analysis then yield a large number of parameters describing the phenotype. However, extracting functional information from such high-content datasets (e.g. links between cellular processes or the functions of unknown genes) remains challenging. This work is devoted to the analysis of a multi-parametric, image-based genomic screen of endocytosis, the process whereby cells take up cargoes (signals and nutrients) and distribute them into different subcellular compartments. The complexity of the quantitative endocytic data was approached using several machine learning techniques, namely clustering methods, Bayesian networks, principal and independent component analysis, and artificial neural networks. The main goals of the analysis are to predict possible modes of action of the screened genes and to find candidate genes that may be involved in a process of interest. The degrees of freedom of the multidimensional phenotypic space were identified from the data distributions, and the high-content data were then deconvolved into separate signals originating from different cellular modules (a minimal sketch of this deconvolution step follows the contents listing below). Some of these basic signals (phenotypic traits) were straightforward to interpret in terms of known molecular processes; the other components pointed to interesting directions for further research. The phenotypic profiles of individual gene perturbations are sparse in the coordinates of the basic signals and therefore intrinsically suggest the genes' functional roles in cellular processes. Being a very fundamental process, endocytosis is specifically modulated by a variety of different pathways in the cell; endocytic phenotyping can therefore also be used to analyze non-endocytic modules in the cell.
The proposed approach can also be generalized to the analysis of other high-content screens.
Contents:
Objectives
Chapter 1 Introduction: 1.1 High-content biological data (1.1.1 Different perturbation types for HCS; 1.1.2 Types of observations in HTS; 1.1.3 Goals and outcomes of MP HTS; 1.1.4 An overview of the classical methods of analysis of biological HT- and HCS data); 1.2 Machine learning for systems biology (1.2.1 Feature selection; 1.2.2 Unsupervised learning; 1.2.3 Supervised learning; 1.2.4 Artificial neural networks); 1.3 Endocytosis as a system process (1.3.1 Endocytic compartments and main players; 1.3.2 Relation to other cellular processes)
Chapter 2 Experimental and analytical techniques: 2.1 Experimental methods (2.1.1 RNA interference; 2.1.2 Quantitative multiparametric image analysis); 2.2 Detailed description of the endocytic HCS dataset (2.2.1 Basic properties of the endocytic dataset; 2.2.2 Control subset of genes); 2.3 Machine learning methods (2.3.1 Latent variables models; 2.3.2 Clustering; 2.3.3 Bayesian networks; 2.3.4 Neural networks)
Chapter 3 Results: 3.1 Selection of labeled data for training and validation based on KEGG information about gene pathways; 3.2 Clustering of genes (3.2.1 Comparison of clustering techniques on the control dataset; 3.2.2 Clustering results); 3.3 Independent components as basic phenotypes (3.3.1 Algorithm for identification of the best number of independent components; 3.3.2 Application of ICA on the full dataset and on separate assays of the screen; 3.3.3 Gene annotation based on revealed phenotypes; 3.3.4 Searching for genes with target function); 3.4 Bayesian network on endocytic parameters (3.4.1 Prediction of pathway based on parameter values using a Naïve Bayesian Classifier; 3.4.2 General Bayesian Networks); 3.5 Neural networks (3.5.1 Autoencoders as nonlinear ICA; 3.5.2 siRNA sequence motif discovery with deep NN); 3.6 Biological results (3.6.1 Rab11 ZNF-specific phenotype found by ICA; 3.6.2 Structure of BN revealed dependency between endocytosis and cell adhesion)
Chapter 4 Discussion: 4.1 Machine learning approaches for discovery of phenotypic patterns (4.1.1 Functional annotation of unknown genes based on phenotypic profiles; 4.1.2 Candidate gene search); 4.2 Adaptation to other HCS data and generalization
Chapter 5 Outlook and future perspectives: 5.1 Handling sequence-dependent off-target effects with neural networks; 5.2 Transition between machine learning and systems biology models
Acknowledgements; References; Appendix: A.1 Full list of cellular and endocytic parameters; A.2 Description of independent components of the full dataset; A.3 Description of independent components extracted from separate assays of the HC
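    As referenced above, here is a minimal sketch of ICA-based deconvolution of a multiparametric phenotype matrix into independent "basic phenotypic traits", using scikit-learn's FastICA; the matrix dimensions, component count, and candidate-ranking step are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical phenotype matrix: rows are gene knockdowns, columns are
# image-derived endocytic parameters (random stand-in values here).
rng = np.random.default_rng(1)
phenotypes = rng.normal(size=(1000, 40))

# Deconvolve the multiparametric profiles into statistically independent
# components, interpreted as basic phenotypic traits.
ica = FastICA(n_components=8, random_state=1)
traits = ica.fit_transform(phenotypes)   # per-gene activation of each trait
mixing = ica.mixing_                     # how each trait loads on the parameters

# A gene whose profile is sparse in trait space loads strongly on few traits;
# ranking genes by one trait suggests candidates involved in that process.
candidates = np.argsort(-np.abs(traits[:, 0]))[:20]
print("top candidate rows for trait 0:", candidates)
```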

    Statistical Modelling

    This book collects the proceedings of the 19th International Workshop on Statistical Modelling, held in Florence in July 2004. Statistical modelling is an important cornerstone of many scientific disciplines, and the workshop has provided a rich environment for cross-fertilization of ideas across them. The volume consists of four invited lectures, 48 contributed papers, and 47 posters. The contributions are arranged in sessions: Statistical Modelling; Statistical Modelling in Genomics; Semi-parametric Regression Models; Generalized Linear Mixed Models; Correlated Data Modelling; Missing Data, Measurement Error and Survival Analysis; Spatial Data Modelling; and Time Series and Econometrics.