
    An Open Source Digital Twin Framework

    In this thesis, the utility and ideal composition of high-level programming frameworks for digital twin experiments were studied. Digital twins are a specific class of simulation artefacts that exist in the cyber domain in parallel with their physical counterparts, reflecting their lives in particular detail. As such, digital twins are regarded as one of the key enabling technologies for intelligent life cycle management of industrial equipment, and open source solutions with which digital twins can be built, executed and evaluated are therefore likely to see increasing demand in the coming years. A theoretical framework for the digital twin is first established by reviewing the concepts of simulation, co-simulation and tool integration. Based on the findings, the digital twin is formulated as a specific co-simulation class consisting of software agents that interact with one of two possible types of external actors: sensory measurement streams originating from physical assets, or simulation models that use those streams as inputs. The empirical part of the thesis describes ModelConductor, an original Python library that supports the development of digital twin co-simulation experiments in the presence of online input data. Along with the main features, a selection of illustrative use cases is presented. From a software engineering point of view, the examples demonstrate a high-level programmatic syntax that facilitates rapid prototyping and experimentation with various types of digital twin setups. As a major contribution, the thesis demonstrates that an object-oriented software engineering approach is a plausible means to construct and execute digital twins. Such an approach could lead to digital twin related tasks being increasingly performed by software engineers in addition to domain experts in the various engineering disciplines.
In particular, the development of intelligent life cycle services such as predictive maintenance could benefit from workflow harmonization between the digital twin and artificial intelligence communities, in which high-level open source solutions are today used almost exclusively.
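The two actor types described in the abstract (a measurement stream from a physical asset, and a model consuming that stream) can be sketched in plain Python. This is an illustrative sketch only; the class names, attributes, and the toy telemetry are our assumptions and do not reproduce ModelConductor's actual API.

```python
import random

class MeasurementStream:
    """Stands in for an online sensor feed from a physical asset."""
    def __iter__(self):
        for _ in range(5):
            # Hypothetical telemetry: engine speed in rpm with sensor noise.
            yield {"rpm": 3000 + random.gauss(0, 50)}

class SimulationModel:
    """Stands in for a simulation model that uses the stream as input."""
    def step(self, measurement):
        # Toy model: estimate a derived quantity from rpm (illustrative only).
        return 0.01 * measurement["rpm"]

class Experiment:
    """Couples one stream to one model, mirroring the two actor types."""
    def __init__(self, source, model):
        self.source, self.model = source, model

    def run(self):
        # One model evaluation per received measurement.
        return [self.model.step(m) for m in self.source]

results = Experiment(MeasurementStream(), SimulationModel()).run()
print(len(results))  # 5
```

The point of the sketch is the object-oriented coupling: the experiment object is agnostic to whether the source is a live sensor stream or a replayed log, which is what makes rapid prototyping of different digital twin setups feasible.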

    MODELING HYPERBARIC CHAMBER ENVIRONMENT AND CONTROL SYSTEM

    Deep water activities are essential in many industrial fields, for instance the repair and installation of underwater cables, pipes and structures, and marine salvage and rescue operations. In some cases these activities must be performed in deep water and hence require special equipment and trained, experienced personnel. In some critical situations remotely operated vehicles (ROVs) cannot be used and human diver intervention is required. In that case, divers must perform work at great depths, as much as 300 m below the water surface. This is usually the limiting depth for commercial diving; when operations must be carried out even deeper, ROVs remain the only way to perform them. In the past, safety regulations were less strict and numerous operations at depths of 300-350 metres of seawater were conducted. In the early 1990s, however, governments and companies began to impose limits on operational depth; in Norway, for instance, the maximum operational depth for saturation divers is limited to 180 metres of seawater (Imbert et al., 2019). Harsh environmental conditions obviously impose various limitations on the activities performed; low temperature, poor visibility and high pressure make it difficult not only to operate at depth, but even to reach the point of intervention. One of the main problems is the elevated pressure, which rises by about 1 bar for every 10 metres of water depth and can reach 20-25 bar at the required depth, while the pressure inside divers' atmospheric diving suits must be nearly the same. This implies several evident limitations. The first is that at high ambient pressure oxygen becomes poisonous to the human body, and special breathing gas mixtures are required to avoid health issues.
The second is the maximum rate of pressure variation that the human body can tolerate; rapid compression or decompression can easily cause severe injury and even the death of divers. Furthermore, surveys have found that about one third of divers experience headache during decompression, usually lasting from several hours up to several days (Imbert et al., 2019). The same study indicates that the majority of divers experience fatigue after saturation, lasting on average more than four days before they return to normal. The risk of accidents obviously increases with the number of compression-decompression cycles. To address these issues, the common practice in commercial deep water diving is to pressurize only once before the start of the work activity, which typically lasts 20-30 days, and to depressurize after it ends. Divers therefore live for several weeks in an isolated pressurized environment, typically placed on board a Dive Support Vessel (DSV), usually a barge or a ship, and travel to and from the workplace in a submersible decompression chamber, also known as the bell. While long-term work shifts provide numerous advantages, there is still a need for life support supervision of the plant, the bell and the diving suits, which requires the presence of well qualified personnel. Currently, most training activities are performed on an empty plant during idle time, but this approach is inefficient and costly, as well as accompanied by the risk of damaging equipment. To address these issues, this research project proposes a simulator of the plant and its life support system, devoted to training future Life-Support Supervisors (LSS) and taking into account gas dynamics, human behaviour and physiology, as well as various aspects of the operation of saturation diving plants.
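The rule of thumb quoted above (pressure rises by about 1 bar per 10 m of seawater, on top of roughly 1 bar at the surface) can be expressed as a one-line helper; the function name and default are ours, for illustration only.

```python
def absolute_pressure_bar(depth_m, surface_pressure_bar=1.0):
    """Approximate absolute pressure at depth in seawater.

    Applies the rule of thumb from the text: hydrostatic pressure
    rises by about 1 bar for every 10 m of water depth.
    """
    return surface_pressure_bar + depth_m / 10.0

# At the 180 m Norwegian saturation-diving limit and at a 250 m worksite:
print(absolute_pressure_bar(180))  # 19.0
print(absolute_pressure_bar(250))  # 26.0
```

These values match the 20-25 bar range the text cites for typical commercial diving depths; a diving suit or chamber must hold nearly the same pressure internally.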

    An Integrated Immunopeptidomics and Proteogenomics Framework to Discover Non-Canonical Targets for Cancer Immunotherapy

    Un Ă©lĂ©ment essentiel de l’immunothĂ©rapie appliquĂ©e au cancer est l’identification de peptides liant les antigĂšnes des leucocytes humains (HLA) et capables d’induire une puissante rĂ©ponse T anti-tumorale. La spectromĂ©trie de masse (MS) constitue actuellement la seule mĂ©thode non-biaisĂ©e permettant une analyse dĂ©taillĂ©e du panel d’antigĂšnes susceptibles d’ĂȘtre prĂ©sentĂ©s aux lymphocytes T in vivo. L’utilisation de cette mĂ©thode en clinique requiert toutefois des amĂ©liorations significatives de la mĂ©thodologie utilisĂ©e lors de l’identification des peptides HLA. Un consortium multidisciplinaire de chercheurs a rĂ©cemment mis en lumiĂšre les problĂšmes actuellement liĂ©s Ă  l’utilisation de la MS en immunopeptidomique, soulignant le besoin de dĂ©velopper de nouvelles mĂ©thodes et mettant en Ă©vidence le dĂ©fi que reprĂ©sente la standardisation de l’immuno-purification des molĂ©cules HLA. La premiĂšre partie de cette thĂšse vise Ă  optimiser les mĂ©thodes expĂ©rimentales permettant l’extraction des peptides apprĂȘtĂ©s aux HLA. L’optimisation de la mĂ©thodologie de base a permis des amĂ©liorations notables en terme de dĂ©bit, de reproductibilitĂ©, de sensibilitĂ© et a permis une purification sĂ©quentielle des molĂ©cules de HLA de classe I de classe II ainsi que de leurs peptides, Ă  partir de lignĂ©es cellulaires ou de tissus. En comparaison avec les mĂ©thodes existantes, ce protocole comprend moins d’étapes et permet de limiter la manipulation des Ă©chantillons ainsi que le temps de purification. Cette mĂ©thode, pour les peptides HLA extraits, a permis d’obtenir des taux de reproductibilitĂ© et de sensibilitĂ© sans prĂ©cĂ©dents (corrĂ©lations de Pearson jusqu'Ă  0,98 et 0,97 pour les HLA de classe I et de classe II, respectivement). De plus, la faisabilitĂ© d’études comparatives robustes a Ă©tĂ© dĂ©montrĂ©e Ă  partir d’une lignĂ©e cellulaire de cancer de l’ovaire, traitĂ©e Ă  l'interfĂ©ron gamma. 
En effet, cette nouvelle mĂ©thode a mis en Ă©vidence des changements quantitatifs et qualitatifs du catalogue de peptides prĂ©sentĂ©s aux HLA. Les rĂ©sultats obtenus ont mis en avant une augmentation de la prĂ©sentation de longs ligands chymotryptiques de classe I. Ce phĂ©nomĂšne est probablement liĂ© Ă  la modulation de la machinerie de traitement et de prĂ©sentation des antigĂšnes. Dans cette premiĂšre partie de thĂšse, nous avons dĂ©veloppĂ© une mĂ©thodologie robuste et rationalisĂ©e, facilitant la purification des HLA et pouvant ĂȘtre appliquĂ©e en recherche fondamentale et translationnelle. Bien que les nĂ©oantigĂšnes reprĂ©sentent une cible attractive, des Ă©tudes rĂ©centes ont mis en Ă©vidence l’existence des antigĂšnes non canoniques. Ces antigĂšnes tumoraux, bien que non mutĂ©s, sont aussi spĂ©cifiques aux cellules cancĂ©reuses et semblent jouer un rĂŽle important dans l’immunitĂ© anti-tumorale. La seconde partie de cette thĂšse a pour objectif le dĂ©veloppement d’une mĂ©thodologie d’analyse permettant l’identification ainsi que la validation de ces antigĂšnes particuliers. Les antigĂšnes non canoniques sont d'origine prĂ©sumĂ©e non codante et ne sont, par consĂ©quent, que rarement inclus dans les bases de donnĂ©es des sĂ©quences de protĂ©ines de rĂ©fĂ©rence. De ce fait, ils ne sont gĂ©nĂ©ralement pas pris en compte lors des recherches de MS utilisant de telles bases de donnĂ©es. Afin de palier ce problĂšme et de permettre leur identification par MS, le sĂ©quençage de l'exome entier, le sĂ©quençage de l'ARN sur une population de cellules et sur des cellules uniques, ainsi que le profilage des ribosomes ont Ă©tĂ© intĂ©grĂ©s aux donnĂ©es d’immunopeptidomique. Ainsi, NewAnce, un programme informatique permettant de combiner les donnĂ©es de deux outils de recherche MS en tandem, a Ă©tĂ© dĂ©veloppĂ© afin de calculer le taux d’antigĂšnes non canoniques identifiĂ©s comme faux positifs. 
L’utilisation de NewAnce sur des lignĂ©es cellulaires provenant de patients atteints de mĂ©lanomes ainsi que sur des biopsies de cancer du poumon a permis l’identification prĂ©cise de centaines de peptides HLA non classiques, spĂ©cifiques aux cellules tumorales et communs Ă  plusieurs patients. Le niveau de confirmation des peptides non canoniques a ensuite Ă©tĂ© testĂ© Ă  l’aide d’une approche de MS ciblĂ©e. Les peptides rĂ©sultant de ces analyses ont Ă©tĂ© minutieusement validĂ©s pour un des Ă©chantillons de mĂ©lanome disponibles. De plus, le profilage des ribosomes a rĂ©vĂ©lĂ© que les nouveaux cadres de lecture ouverts, desquels rĂ©sultent certains de ces peptides non classiques, sont activement traduits. L’évaluation de l’immunogenicitĂ© de ces peptides a Ă©tĂ© Ă©valuĂ©e avec des cellules immunitaires autologues et a rĂ©vĂ©lĂ© un Ă©pitope immunogĂšne non canonique, provenant d'un cadre de lecture ouvert alternatif du gĂšne ABCB5, un marqueur des cellules souches du mĂ©lanome. De maniĂšre globale, les rĂ©sultats obtenus au cours de cette thĂšse soulignent la possibilitĂ© d’inclure ce type d’analyse de proteogĂ©nomique dans un protocole d’identification de nĂ©oantigĂšnes existant. Cela permettrait d’inclure et prioriser des antigĂšnes tumoraux non classiques et de proposer aux patients en impasse thĂ©rapeutique des immunothĂ©rapies anti-tumorales personnalisĂ©es. -- A central factor to the development of cancer immunotherapy is the identification of clinically relevant human leukocyte antigen (HLA)-bound peptides that elicit potent anti-tumor T cell responses. Mass spectrometry (MS) is the only unbiased technique that captures the in vivo presented HLA repertoire. However, significant improvements in MS-based HLA peptide discovery methodologies are necessary to enable the smooth transition to the clinic. 
Recently, a consortium of multidisciplinary researchers presented current issues in clinical MS-based immunopeptidomics, highlighting method development and standardization challenges in HLA immunoaffinity purification. The first part of this thesis addresses improvements to the experimental method for HLA peptide extraction. The approach was optimized with several new developments, facilitating high-throughput, reproducible, scalable, and sensitive sequential immunoaffinity purification of HLA class I and class II peptides from cell lines and tissue samples. The method showed increased speed and reduced sample handling when compared to previous methods. Unprecedented depth and high reproducibility were achieved for the obtained HLA peptides (Pearson correlations up to 0.98 and 0.97 for HLA class I and HLA class II, respectively). Additionally, the feasibility of performing robust comparative studies was demonstrated on an ovarian cancer cell line treated with interferon gamma. Both quantitative and qualitative changes were detected in the cancer HLA repertoire upon treatment. Specifically, a yet unreported and interesting phenomenon was the upregulated presentation of longer and chymotryptic-like HLA class I ligands, likely related to the modulation of the antigen processing and presentation machinery. Taken together, a robust and streamlined framework was built that facilitates peptide purification and its application in basic and translational research. Furthermore, recent studies have shown that, along with the highly attractive mutated neoantigens, other non-mutated, yet tumor-specific, non-canonical antigens may also play an important role in anti-tumor immunity. Non-canonical antigens are of presumed non-coding origin and not commonly included in protein reference databases, and are therefore typically disregarded in database-dependent MS searches.
The second part of this thesis develops an analytical workflow enabling the confident identification and validation of non-canonical tumor antigens. For this purpose, whole exome sequencing, bulk and single-cell RNA sequencing and ribosome profiling were integrated with MS-based immunopeptidomics for personalized non-canonical HLA peptide discovery. A computational module called NewAnce was designed, which combines the results of two tandem MS search tools and implements group-specific false discovery rate calculations to control the error specifically for the non-canonical peptide group. When applied to patient-derived melanoma cell lines and paired lung cancer and normal tissues, NewAnce resulted in the accurate identification of hundreds of shared and tumor-specific non-canonical HLA peptides. Next, the level of non-canonical peptide confirmation was tested in a targeted MS-based approach, and selected non-canonical peptides were extensively validated for one melanoma sample. Furthermore, the novel open reading frames that generate a selection of these non-canonical peptides were found to be actively translated by ribosome profiling. Importantly, these peptides were assessed with autologous immune cells and a non-canonical immunogenic epitope was discovered from an alternative open reading frame of melanoma stem cell marker gene ABCB5. This thesis concludes by highlighting the possibility of incorporating the proteogenomics pipeline into existing neoantigen discovery engines in order to prioritize tumor-specific non-canonical peptides for cancer immunotherapy. -- A highly heterogeneous and multifactorial disease, cancer is today the second leading cause of death worldwide. Although the immune system can recognize and eliminate cancer cells, these cells can in turn adapt and accumulate mutations that allow them to escape this recognition.
Anti-tumor immunotherapy demonstrates the key role of immunity in eradicating tumors. However, these promising therapies are effective in only a small proportion of treated patients. A major step in establishing an anti-tumor immune response is the recognition of tumor-associated antigens. Recent studies have shown that tumor antigens derived from non-coding regions of the genome (non-canonical antigens) can play a key role in inducing immune responses. Identifying these particular tumor antigens would therefore help guide the development of personalized anti-cancer immunotherapies such as vaccination or the adoptive transfer of T cells recognizing these targets. Mass spectrometry (MS) is an unbiased technique for identifying and analyzing the repertoire of antigens presented in vivo. However, this technique must be optimized and standardized before it can be used in the clinic. The first part of this thesis was therefore dedicated to the experimental optimization of this method on tissue samples and cell lines. Compared with standard protocols, this technique provides more complete, faster and more reproducible coverage of the repertoire of HLA-presented peptides. The second part of this thesis was devoted to developing a method for identifying non-canonical tumor antigens via bulk RNA sequencing, ribosome profiling and our optimized immunopeptidomics method. To control the identification of false positives, we developed a new computational module. This module enabled the identification of several hundred non-canonical HLA peptides, shared and specific to melanoma and lung cancer.
Ribosome profiling revealed the translation of novel open reading frames from which new non-canonical peptides are produced. This approach allowed us to identify an immunogenic epitope derived from the ABCB5 gene, a cancer stem cell marker previously identified in melanoma. Overall, this thesis work, combining immunopeptidomics and proteogenomics, produced an experimental method that improves the identification of tumor antigens. We hope these results will improve the identification and prioritization of relevant targets for clinical anti-cancer immunotherapy.

    Kriging: applying geostatistical techniques to the genetic study of complex diseases

    Complex diseases often display geographic distribution patterns. Therefore, integrating genetic and environmental factors using geographic information systems (GIS) and statistical analyses that consider the spatial dimension of the data greatly assists the study of their gene-environment interactions (GxE). The objectives of the present work were to assess the application of a geostatistical interpolation technique (kriging) to the study of complex diseases with a distinctly heterogeneous geographic distribution, and to test its performance as an alternative to conventional genetic imputation methods. Using multiple sclerosis as a case study, kriging proved to be a flexible and valuable tool for integrating information from various sources and at different spatial resolutions into a model that made it easy to visualize the heterogeneous geographic distribution of the disease in Europe and to explore the intertwined interactions between several known genetic and environmental risk factors. Even though the performance of kriging did not surpass the results obtained with current imputation techniques, this pilot study revealed a worse performance of the latter for rare variants in chromosomal regions with a low density of markers.
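Ordinary kriging itself can be sketched in a few lines of NumPy: weights are obtained by solving the kriging system built from a covariance model, subject to the unbiasedness constraint that the weights sum to one. The exponential covariance model and its parameters below are illustrative assumptions, not the model fitted in the study.

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, range_=1.0, sill=1.0):
    """Ordinary kriging prediction at x0 from 1-D samples (xs, zs).

    Uses an exponential covariance model C(h) = sill * exp(-h / range_);
    model and parameters are illustrative, not fitted to real data.
    """
    xs = np.asarray(xs, dtype=float)
    n = len(xs)
    # Covariances between all sample pairs, bordered by the
    # unbiasedness row/column enforcing sum(weights) == 1.
    h = np.abs(xs[:, None] - xs[None, :])
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = sill * np.exp(-h / range_)
    A[n, n] = 0.0
    # Covariances between the samples and the prediction location.
    b = np.ones(n + 1)
    b[:n] = sill * np.exp(-np.abs(xs - x0) / range_)
    w = np.linalg.solve(A, b)[:n]  # kriging weights (Lagrange term dropped)
    return float(w @ zs)

# Interpolating between two stations: the estimate lies between the data
# values and is pulled toward the nearer station.
print(ordinary_kriging([0.0, 1.0], [10.0, 20.0], 0.25))
```

Kriging is an exact interpolator, so predicting at a sampled location returns the observed value; between samples, the covariance model controls how quickly the influence of each station decays, which is what lets heterogeneous, multi-resolution data be merged into one surface.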

    Distributed Simulation in Industry

    Csaba Attila Boer was born in Satu Mare, Romania, on 29 October, 1975. He completed his secondary education at Kölcsey Ferenc High School, in Satu Mare, in 1994. In the same year he started his higher education at BabeƟ-Bolyai University, Faculty of Mathematics and Computer Science, Cluj-Napoca, Romania, where he received his B.Sc. degree in Computer Science, in 1998, and his M.Sc. degree with major in Information Systems, specialization Designing and Implementing Complex Systems, in 1999. During these years, he obtained fellowships at the Eötvös LĂłrĂĄnd University, and at the Computer and Automation Research Institute of the Hungarian Academy of Sciences, Budapest, Hungary within the Central European Exchange Program for University Studies (CEEPUS). Since 1999, he has been affiliated with the Computer Science Department, Faculty of Economics at Erasmus University Rotterdam, The Netherlands. There, he worked as a researcher for one year, studying the storage and retrieval of discrete event simulation models, research that resulted in three scientific articles. Between 2000 and 2004, he was associated with the same department as a Ph.D. candidate aiming to research the area of distributed simulation and its application in industry. His topic being close to the research carried out at the Faculty of Technology, Policy and Management, Delft University of Technology, and the BETADE research program, he started to collaborate with researchers from these groups, getting involved in two joint practical case study projects. This collaboration resulted in seven joint scientific articles, presented at various international conferences. Furthermore, Csaba has maintained international contacts with researchers from the distributed simulation area. He has been invited twice to Brunel University, London to give a presentation concerning the application of distributed simulation in industry. 
Currently, he is working as a simulation consultant.

Distributed simulation is widely accepted and applied within defence, but it has not gained a foothold in industry. In this thesis we investigate the reasons for this phenomenon by studying what industry expects from distributed simulation. In industry, simulation models are generally designed and developed with commercial-off-the-shelf (COTS) simulation packages. The existing distributed simulation architectures from the defence domain, however, are not aimed at coupling models built with COTS simulation packages. To motivate industry to accept and use distributed simulation, one should therefore strive to make it possible to couple models built with these packages without demanding too much effort from the model builders. Based on a survey among experts in this domain, this thesis proposes a set of requirements for the design and development of distributed simulation architectures that will motivate the industrial community to accept and apply distributed simulation. In addition, it presents a lightweight architecture for distributed simulation that has been successfully applied in two industrial projects and largely satisfies the proposed set of requirements.

While distributed simulation is widely accepted and applied in defence, it has not gained ground yet in industry. In this thesis we investigate the reasons behind this phenomenon by surveying the expectations of industry with respect to distributed simulation solutions. Simulation models in industry are mainly designed and developed in commercial-off-the-shelf (COTS) simulation packages. The existing distributed simulation architectures in defence, however, do not focus on coupling models created in COTS simulation packages.
Therefore, in order to motivate the industrial community to accept and use distributed simulation, one should strive to make it possible to couple models built in these packages, and coupling them should not require too much extra effort from modellers. In this thesis, based on a survey of experts in the domain, we propose a list of requirements for designing and developing distributed simulation architectures that would encourage the industrial community to accept and apply distributed simulation. Furthermore, we present a lightweight distributed simulation architecture which has been successfully applied in two industrial projects and satisfies the proposed requirements to a large extent.
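The kind of model coupling the abstract argues for can be sketched as a conservative, time-stepped coordinator: each wrapped model advances in lockstep and messages are exchanged between steps. This is a generic illustration under our own naming, not the architecture proposed in the thesis, and the adapters here wrap plain Python callables where a real deployment would wrap COTS simulation packages.

```python
class Federate:
    """Wraps one simulation model behind a minimal coupling interface;
    a real adapter would wrap a COTS package instead of a callable."""
    def __init__(self, name, model):
        self.name, self.model = name, model
        self.outbox = {}

    def advance(self, t, inbox):
        # Step the wrapped model to time t using messages from peers.
        self.outbox = self.model(t, inbox)

def run_coupled(federates, steps):
    """Conservative time-stepped coordinator: all federates advance in
    lockstep, and outputs are delivered to peers between steps."""
    inboxes = {f.name: {} for f in federates}
    for t in range(steps):
        for f in federates:
            f.advance(t, inboxes[f.name])
        # Deliver each federate's outputs to all of its peers.
        inboxes = {f.name: {g.name: g.outbox for g in federates if g is not f}
                   for f in federates}

# Two toy models: a source that emits the step count and a sink that logs
# what it receives from the source.
log = []
source = Federate("source", lambda t, inbox: {"count": t})
sink = Federate("sink",
                lambda t, inbox: log.append(inbox.get("source", {})) or {})
run_coupled([source, sink], 3)
print(log)  # the sink sees the source's output with a one-step delay
```

The modeller-facing surface is deliberately small (one `advance` method per wrapped model), reflecting the thesis's requirement that coupling COTS models must not demand much extra effort from model builders.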

    V Jornadas de InvestigaciĂłn de la Facultad de Ciencia y TecnologĂ­a. 2016

    171 p. -- I. Abstracts. Oral communications: 1. Biosciences: Molecular Aspects. 2. Biosciences: Environmental Aspects. 3. Physics and Electronic Engineering. 4. Geology. 5. Mathematics. 6. Chemistry. 7. Chemical Engineering and Chemistry. II. Abstracts. Written communications (posters): 1. Biosciences. 2. Physics and Electronic Engineering. 3. Geology. 4. Mathematics. 5. Chemistry. 6. Chemical Engineering.

    2021 - The Second Annual Fall Symposium of Student Scholars

    The full program book from the Fall 2021 Symposium of Student Scholars, held on November 18, 2021. Includes abstracts from the presentations and posters.

    ADAPTIVE IMMUNITY AND THE TUMOR IMMUNE MICROENVIRONMENT

    The adaptive immune system is essential for the production of anti-tumor immune responses, and the majority of current immunotherapeutics are designed to modulate the interaction between adaptive immunity and tumor cells within the tumor-immune microenvironment. This dissertation addresses three translational goals regarding our understanding and modulation of anti-tumor adaptive immunity: 1) improving our understanding of existing immunotherapies such as checkpoint inhibitor therapy (Chapter 2.1); 2) improving the efficacy of novel immunotherapeutics currently in development, including tumor neoantigen vaccines (Chapter 4); and 3) developing next-generation immunotherapies through the identification of novel anti-tumor vaccine targets (Chapter 3), as well as developing diagnostic tools, including biomarkers of immunotherapy response (Chapter 3) and immune-imaging modalities (Chapter 2.1). Doctor of Philosophy.

    White Paper 2: Origins, (Co)Evolution, Diversity & Synthesis Of Life

    Published in Madrid, 185 p.; 17 cm. How life appeared on Earth, and how it then diversified into the different currently existing forms of life, are the unanswered questions discussed in this volume. These questions delve into the deep past of our planet, where biology intermingles with geology and chemistry, to explore the origin of life and understand its evolution, since "nothing makes sense in biology except in the light of evolution" (Dobzhansky, 1964). The eight challenges that compose this volume summarize our current knowledge and future research directions, touching on different aspects of the study of evolution, which can be considered a fundamental discipline of life science. The volume discusses recent theories on how the first molecules arose, became organized and acquired their structure, enabling the first forms of life. It also attempts to explain how this life has changed over time, giving rise, from very similar molecular bases, to an immense biological diversity, and to understand the phylogenetic relationship among all the different life forms. The volume further analyzes human evolution, its relationship with the environment and its implications for human health and society. Closing the circle, the volume discusses the possibility of designing new biological machines, thus creating a cell prototype from its components, and whether this knowledge can be applied to improve our ecosystem. With effective coordination among its three main areas of knowledge, the CSIC can become an international benchmark for research in this field.
    • 

    corecore