
    Causality, Information and Biological Computation: An algorithmic software approach to life, disease and the immune system

    Biology has taken strong steps towards becoming a computer science that aims at reprogramming nature, following the realisation that nature herself has reprogrammed organisms by harnessing the power of natural selection and the digital, prescriptive nature of replicating DNA. Here we further unpack ideas related to computability, algorithmic information theory and software engineering, in the context of the extent to which biology can be (re)programmed and of how we might go about doing so more systematically, using the tools and concepts offered by theoretical computer science in a translation exercise from computing to molecular biology and back. These concepts provide a means of hierarchical organisation, blurring previously clear-cut lines between concepts like matter and life, or between tumour types that are otherwise taken as different yet may not have different causes. This does not diminish the properties of life or make its components and functions less interesting. On the contrary, this approach makes for a more encompassing and integrated view of nature, one that subsumes observer and observed within the same system, and can generate new perspectives and tools with which to view complex diseases like cancer, approaching them afresh from a software-engineering viewpoint that casts evolution in the role of programmer, cells as computing machines, DNA and genes as instructions and computer programs, viruses as hacking devices, the immune system as a software debugging tool, and diseases as an information-theoretic battlefield where all these forces deploy. We show how information theory and algorithmic programming may explain fundamental mechanisms of life and death. Comment: 30 pages, 8 figures. Invited chapter contribution to Information and Causality: From Matter to Life, Sara I. Walker, Paul C.W. Davies and George Ellis (eds.), Cambridge University Press.
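    For reference, the central quantity behind the algorithmic-information view this chapter builds on is the Kolmogorov-Chaitin complexity of a string; a minimal statement of the standard definition (not specific to this work) is:

        K_U(s) = \min \{\, |p| : U(p) = s \,\}

    where U is a fixed universal prefix Turing machine, p ranges over binary programs and |p| is the program length in bits; by the invariance theorem, changing U alters K_U(s) only by an additive constant independent of s.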

    Approximations of Algorithmic and Structural Complexity Validate Cognitive-behavioural Experimental Results

    We apply methods for estimating the algorithmic complexity of sequences to behavioural sequences from three landmark studies of animal behaviour, each of increasing sophistication: foraging communication by ants, flight patterns of fruit flies, and tactical deception and competition strategies in rodents. In each case, we demonstrate that approximations of Logical Depth and Kolmogorov-Chaitin complexity capture and validate previously reported results, in contrast to other measures such as Shannon entropy, compression or ad hoc scores. Our method is practically useful when dealing with short sequences, such as those often encountered in cognitive-behavioural research. Our analysis supports and reveals non-random behaviour (by LD and K complexity) in flies even in the absence of external stimuli, and confirms the "stochastic" behaviour of transgenic rats when faced with opponents they cannot defeat by counter-prediction. The method constitutes a formal approach for testing hypotheses about the mechanisms underlying animal behaviour. Comment: 28 pages, 7 figures and 2 tables.
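    As an illustration of the baseline measures the abstract contrasts with (Shannon entropy and compressed length), a minimal Python sketch over short symbolic behavioural sequences might look as follows; the example sequences and the use of zlib are illustrative assumptions, not the authors' estimator, which instead approximates Kolmogorov-Chaitin complexity and Logical Depth:

    import math
    import zlib
    from collections import Counter

    def shannon_entropy(seq):
        """Shannon entropy (bits per symbol) of a symbolic sequence."""
        counts = Counter(seq)
        n = len(seq)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def compressed_length(seq):
        """Bytes of the zlib-compressed sequence: a crude upper-bound proxy
        for algorithmic complexity, unreliable on short strings, which is
        the limitation the paper addresses."""
        return len(zlib.compress(seq.encode("utf-8"), 9))

    # Hypothetical short behavioural sequences (symbols code discrete actions).
    regular = "LRLRLRLRLRLRLRLR"    # highly structured
    irregular = "LRRLLLRLRRLRLLRR"  # same symbol frequencies, less structure

    for name, s in [("regular", regular), ("irregular", irregular)]:
        print(name, round(shannon_entropy(s), 3), compressed_length(s))

    Because both sequences have identical symbol frequencies, Shannon entropy cannot distinguish them, and the compressed lengths of 16-symbol strings are dominated by compressor overhead, which is why measures tailored to short sequences are needed.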

    The Thermodynamics of Network Coding, and an Algorithmic Refinement of the Principle of Maximum Entropy

    The principle of maximum entropy (Maxent) is often used to obtain prior probability distributions, as a method to obtain a Gibbs measure under some restriction, giving the probability that a system will be in a certain state compared to the rest of the elements in the distribution. Because classical entropy-based Maxent collapses cases, confounding all distinct degrees of randomness and pseudo-randomness, here we take into consideration the generative mechanism of the systems in the ensemble. This allows us to separate objects that may comply with the principle under some restriction, and whose entropy is maximal but which may be generated recursively, from those that are actually algorithmically random, offering a refinement of classical Maxent. We take advantage of a causal algorithmic calculus to derive a thermodynamic-like result based on how difficult it is to reprogram a computer code. Using the distinction between computable and algorithmic randomness, we quantify the cost in information loss associated with reprogramming. To illustrate this we apply the algorithmic refinement of Maxent to graphs and introduce a Maximal Algorithmic Randomness Preferential Attachment (MARPA) algorithm, a generalisation over previous approaches. We discuss practical implications of evaluating network randomness. Our analysis provides insight into how the reprogrammability asymmetry appears to originate from a non-monotonic relationship to algorithmic probability, and it motivates further analysis of the origin and consequences of this asymmetry, of reprogrammability and of computation. Comment: 30 pages.
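    For context, the classical entropy-based Maxent distribution that this refinement targets takes the standard Gibbs form; under a normalisation constraint and one expected-value constraint, maximising the Shannon entropy

        H(p) = -\sum_i p_i \ln p_i
        \text{subject to } \sum_i p_i = 1, \qquad \sum_i p_i E_i = \bar{E}

    yields

        p_i = \frac{e^{-\lambda E_i}}{Z}, \qquad Z = \sum_j e^{-\lambda E_j},

    where the Lagrange multiplier \lambda is fixed by the constraint value \bar{E}. The algorithmic refinement described above then distinguishes, among objects satisfying such constraints, those that can be generated recursively from those that are genuinely algorithmically random.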

    Optimization of a PEGylation process

    PEGylation is the covalent attachment of poly(ethylene glycol) (PEG) to a protein (the pharmaceutical), and since its introduction in 1977 PEGylation has been used to improve pharmaceuticals. PEGylation of a pharmaceutical gives it improved properties such as greater solubility in water, longer residence time in vivo and extended shelf life. The PEGylation process is generally conducted in a batch reactor connected to a size exclusion chromatography (SEC) column or, more commonly, an ion exchange chromatography (IEC) column. The batch reactor achieves a yield of monoPEGylated protein of approximately 60 % and a 10 % yield of multiPEGylated proteins. Other processes, such as size exclusion reaction chromatography (SERC), are still under development. The report contains two parts, an experimental part and a simulation part. The experimental part tests the batch reactor in order to calibrate the kinetic constants; experiments with a SERC column were also conducted. The simulation part develops models for the batch reactor, the SEC column and the SERC column. The batch reactor model includes four reactions: three PEGylation reactions and one deactivation reaction. Both the SEC column and the SERC column are described with the general rate model. The SERC column is combined with a recirculation cycle and optimised for different objectives. The experimental results show fast PEGylation kinetics, which are suitable for the SERC column. The SERC column experiments resulted in selective production of monoPEGylated protein. The simulations gave a monoPEGylated protein yield of 82.3 % when recirculating the unPEGylated protein nine times. In future work, a more detailed recirculation cycle could be simulated and validated experimentally. An automated injection loop, in which the reactants are mixed as they enter the SERC column, could also further improve the results.

    Today's demands on pharmaceuticals are high: they should, for example, remain in the body for a long time, have a long shelf life and be easy to dose. PEGylating a pharmaceutical can provide these properties, but the manufacturing process used today is slow and gives a low yield. Size Exclusion Reaction Chromatography (SERC) is a new technique that will hopefully improve this process. Proteins are nowadays common as pharmaceuticals: they are easy to produce and the body readily takes up the medicine. The drawback is that the body, using the kidneys among other organs, can filter the proteins out before the medicine takes full effect, and the proteins can also have short shelf lives. Researchers have solved this by attaching a long carbon chain to the protein. This carbon chain is called poly(ethylene glycol) (PEG), hence the name PEGylation for the process itself. PEG together with the protein forms a molecule that is not filtered out by the kidneys and can therefore remain longer in the body; the molecule also becomes easier to dissolve in water and gains a longer shelf life. In the process used today, PEG and the protein are mixed in a batch reactor, where they are allowed to react for an extended time. The result is that some protein comes out unreacted, some has formed the desired combination of one PEG chain and one protein, but some protein has also been coupled to several PEG chains, so-called multiPEGylated protein. This means that only about 60 % of the protein added to the reactor can be used as a pharmaceutical; the rest (40 %) goes straight into the bin, since these molecules are not approved by the medicines agency. To avoid producing large amounts of multiPEGylated protein, the SERC process can be used. The SERC process exploits the size of the molecules to separate the product before a new PEG chain can attach. A scaled-up SERC column can be pictured as a cylinder filled with hollow, perforated balls: atoms are represented by grains of sand, molecules such as PEG and protein by small stones, and large molecules such as PEGylated proteins by large stones. Grains of sand and small stones can easily enter the holes in the balls, and the reaction between a PEG chain and a protein means that two small stones form a large one. Because grains of sand and small stones can move freely into the balls, they take a long time to pass through the cylinder, whereas the large stones cannot enter the balls, can only move between them, and therefore pass through the cylinder much faster. The PEG chain and the protein can thus react inside the SERC column for an extended time while being transported through it, and the product, monoPEGylated protein, moves faster through the column thanks to its larger size and thereby avoids being PEGylated a second time. The result of this new method is a process that not only can reduce the amount of multiPEGylated protein but also increase the amount of usable pharmaceutical to about 80 %. Unlike a batch reactor, which requires the product to be purified from the other components after the reaction, this purification already takes place inside the SERC column itself.
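    To make the structure of the batch-reactor model above concrete, here is a minimal kinetic sketch in Python with three consecutive PEGylation reactions and one deactivation reaction; the rate constants, initial concentrations and time horizon are hypothetical placeholders, not the calibrated values from this work:

    import numpy as np
    from scipy.integrate import solve_ivp

    k1, k2, k3 = 0.8, 0.4, 0.2   # assumed PEGylation rate constants [1/(mM h)]
    kd = 0.05                    # assumed PEG deactivation rate constant [1/h]

    def batch_rhs(t, y):
        """y = [protein, monoPEG, diPEG, triPEG, active PEG], in mM."""
        P, M1, M2, M3, PEG = y
        r1 = k1 * P * PEG    # protein + PEG -> monoPEGylated protein
        r2 = k2 * M1 * PEG   # monoPEG + PEG -> diPEGylated protein
        r3 = k3 * M2 * PEG   # diPEG + PEG   -> triPEGylated protein
        rd = kd * PEG        # deactivation of the reactive PEG group
        return [-r1, r1 - r2, r2 - r3, r3, -(r1 + r2 + r3) - rd]

    y0 = [1.0, 0.0, 0.0, 0.0, 2.0]   # assumed initial concentrations [mM]
    sol = solve_ivp(batch_rhs, (0.0, 8.0), y0, t_eval=np.linspace(0.0, 8.0, 81))

    mono_yield = sol.y[1][-1] / y0[0]   # fraction of protein ending as monoPEG
    print(f"monoPEGylated yield after 8 h: {mono_yield:.1%}")

    In the same spirit, the SERC column could be represented by adding a size-dependent transport term for each species, which is what allows the larger monoPEGylated product to escape further reaction; the report describes both columns with the general rate model.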
