
    On The Error-Prone Substructures for The Binary-Input Ternary-Output Channel and Its Corresponding Exhaustive Search Algorithm

    Abstract—The error floor performance of a low-density parity-check (LDPC) code is highly related to the presence of error-prone substructures (EPSs). In general, existing characterizations of EPSs are inspired by the LDPC decoding behavior under simple binary erasure channel (BEC) and binary symmetric channel (BSC) models. In this work, we first introduce a new class of EPSs: the 1-shot EPSs and static EPSs for the binary-input ternary-output channel (BITOC). By focusing on BITOCs, which are a step closer to the additive white Gaussian noise channel (AWGNC), the proposed EPSs better characterize the decoding behavior over the AWGNC than the existing BEC- or BSC-based definitions. We then develop an efficient search algorithm that can exhaustively enumerate all small BITOC EPSs. The new exhaustive algorithm enables us to order the harmfulness of the EPSs and to distinguish, within a given EPS, which bits are more prone to which types of errors. The proposed algorithm can also be regarded as a unified search method for existing EPSs such as cycles, codewords, stopping sets, and fully absorbing sets. The proposed methodology is potentially generalizable to the binary-input m-ary output channel.
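To illustrate what such an exhaustive EPS enumeration entails (this is not the authors' BITOC algorithm), the sketch below brute-forces one classical EPS type mentioned above, stopping sets, over a toy parity-check matrix. The matrix `H`, the size bound, and all names are illustrative assumptions:

```python
from itertools import combinations

# Toy parity-check matrix (rows = check nodes, columns = variable nodes/bits).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
    [0, 0, 0, 1, 1, 1],
]

def is_stopping_set(H, S):
    """S is a stopping set if no check node touches exactly one bit of S:
    every check connected to S is connected to it at least twice."""
    for row in H:
        if sum(row[j] for j in S) == 1:
            return False
    return True

def enumerate_stopping_sets(H, max_size):
    """Exhaustively enumerate all stopping sets up to max_size bits."""
    n = len(H[0])
    found = []
    for size in range(1, max_size + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, S):
                found.append(S)
    return found
```

For this toy `H` the smallest stopping sets have three bits; real exhaustive searches replace the brute-force subset loop with branch-and-bound pruning, since the number of subsets grows exponentially in the block length.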

    On Lowering the Error Floor of Short-to-Medium Block Length Irregular Low Density Parity Check Codes

    Edited version embargoed until 22.03.2019. Full version: access restricted permanently due to third-party copyright restrictions (restriction set on 22.03.2018 by SE, Doctoral College). Gallager proposed and developed low density parity check (LDPC) codes in the early 1960s. LDPC codes were rediscovered in the early 1990s and shown to be capacity-approaching over the additive white Gaussian noise (AWGN) channel. Subsequently, density evolution (DE) optimized symbol node degree distributions were used to significantly improve the decoding performance of short-to-medium length irregular LDPC codes. Currently, the short-to-medium length LDPC codes with the lowest error floors are DE-optimized irregular LDPC codes constructed using progressive edge growth (PEG) algorithm modifications designed to increase the approximate cycle extrinsic message degree (ACE) in the constructed LDPC code graphs. The aim of the present work is to find efficient means of improving on the error floor performance published in the literature for short-to-medium length irregular LDPC codes over AWGN channels. An efficient algorithm for determining the girth and ACE distributions in short-to-medium length LDPC code Tanner graphs is proposed. A cyclic PEG (CPEG) algorithm, which uses an edge connection sequence that results in LDPC codes with improved girth and ACE distributions, is presented. LDPC codes with DE-optimized/'good' degree distributions which have larger minimum distances and stopping distances than previously published for LDPC codes of similar length and rate have been found. It is shown that increasing the minimum distance of LDPC codes lowers their error floor over AWGN channels; however, there are threshold minimum distance values above which there is no further lowering of the error floor.
A minimum local girth (edge skipping) (MLG (ES)) PEG algorithm is presented; the algorithm controls the minimum local girth (global girth) connected in the Tanner graphs of the constructed LDPC codes by forfeiting some edge connections. A technique for constructing optimal low correlated edge density (OED) LDPC codes, based on modified DE-optimized symbol node degree distributions and the MLG (ES) PEG algorithm modification, is presented. OED rate-½ (n, k) = (512, 256) LDPC codes are shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate. Similarly, consequent to an improved symbol node degree distribution, rate-½ (n, k) = (1024, 512) LDPC codes are shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate. An improved BP/SPA (IBP/SPA) decoder, obtained by making two simple modifications to the standard BP/SPA decoder, is shown to result in a generalized improvement in the performance of short-to-medium length irregular LDPC codes under iterative message passing decoding. The superiority of the Slepian-Wolf distributed source coding model over other LDPC-based distributed source coding models is also shown.
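The girth computation mentioned above can be sketched with a standard breadth-first search over the Tanner graph: run BFS from every node and, whenever a non-tree edge closes a cycle, record its length. This is a generic illustration, not the thesis's own girth/ACE distribution algorithm; the example matrices are assumptions:

```python
from collections import deque

def tanner_girth(H):
    """Length of the shortest cycle in the Tanner graph of parity-check
    matrix H. Nodes 'v{j}' are variable nodes, 'c{i}' are check nodes."""
    m, n = len(H), len(H[0])
    adj = {f"v{j}": [] for j in range(n)}
    adj.update({f"c{i}": [] for i in range(m)})
    for i, row in enumerate(H):
        for j, bit in enumerate(row):
            if bit:
                adj[f"v{j}"].append(f"c{i}")
                adj[f"c{i}"].append(f"v{j}")
    best = float("inf")
    for start in adj:                      # BFS from every node; the minimum
        dist, parent = {start: 0}, {start: None}  # over all roots is exact
        q = deque([start])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    parent[w] = u
                    q.append(w)
                elif parent[u] != w:       # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best
```

In a bipartite Tanner graph the girth is always even and at least 4; PEG-style constructions try to push it higher, since short cycles weaken message-passing decoding.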

    Technological developments in Virtual Screening for the discovery of small molecules with novel mechanisms of action

    Programa de Doctorat en Recerca, Desenvolupament i Control de Medicaments. [eng] Advances in structural and molecular biology have favoured the rational development of novel drugs through structure-based drug design (SBDD). In particular, computational tools have proven rapid and efficient for hit discovery and optimization. The main motivation of this thesis is to improve and develop new methods in the area of computer-based drug discovery in order to study challenging targets. Specifically, this thesis focuses on docking and Virtual Screening (VS) methodologies that can exploit non-standard sites, like protein-protein interfaces or allosteric sites, and discover bioactive molecules with novel mechanisms of action. First, I developed an automatic pipeline for binding mode prediction that applies knowledge-based restraints and validated the approach by participating in the CELPP Challenge, a blind pose prediction challenge. The aim of the first VS in this thesis is to find small molecules able not only to disrupt the RANK-RANKL interaction but also to inhibit the constitutive activation of the receptor. With a combination of computational, biophysical, and cell-based assays we were able to identify the first small molecule binders for RANK, which could be used as a treatment for Triple Negative Breast Cancer. When working with challenging targets, or with non-standard mechanisms of action, the relationship between binding and the biological response is unpredictable, because the biological response (if any) depends on the biological function of the particular allosteric site, which is generally unknown. For this reason, we then tested the applicability of combining ultrahigh-throughput VS with a low-throughput high-content assay. This allowed us to characterize a novel allosteric pocket in PTEN and to describe the first allosteric modulators for this protein.
Finally, as the accessible chemical space grows at a rapid pace, we developed an algorithm to efficiently explore ultra-large chemical collections using a bottom-up approach. We prospectively validated the approach on BRD4 and identified novel BRD4 inhibitors with an affinity comparable to advanced drug candidates for this target.
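The funnel described above (ultrahigh-throughput VS feeding a low-throughput assay) can be reduced to a minimal sketch: score everything cheaply, then pass only the top-ranked candidates to the expensive stage. The ligand names and scores are purely hypothetical:

```python
# Hypothetical docking scores (kcal/mol; more negative = tighter predicted binding).
scores = {
    "lig_a": -9.2, "lig_b": -6.1, "lig_c": -10.4,
    "lig_d": -7.8, "lig_e": -5.0, "lig_f": -8.9,
}

def vs_funnel(scores, n_hits):
    """Stage 1: rank the whole library by docking score (cheap, exhaustive).
    Stage 2: forward only the n_hits best-ranked ligands to the
    low-throughput experimental assay."""
    ranked = sorted(scores, key=scores.get)   # ascending: best scores first
    return ranked[:n_hits]
```

In practice the first stage also applies property filters and clustering to keep the shortlist chemically diverse, but the ranking-and-cutoff structure is the core of the funnel.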

    Image Quality Assessment for Population Cardiac MRI: From Detection to Synthesis

    Cardiac magnetic resonance (CMR) images play a growing role in diagnostic imaging of cardiovascular diseases. Left Ventricular (LV) cardiac anatomy and function are widely used for diagnosis and monitoring disease progression in cardiology and to assess the patient's response to cardiac surgery and interventional procedures. For population imaging studies, CMR is arguably the most comprehensive modality for non-invasive and non-ionising imaging of the heart and great vessels and, hence, best suited for population imaging cohorts. Due to insufficient radiographer experience in planning a scan, natural cardiac muscle contraction, breathing motion, and imperfect triggering, CMR can display incomplete LV coverage, which hampers quantitative LV characterization and diagnostic accuracy. To tackle this limitation and enhance the accuracy and robustness of automated cardiac volume and functional assessment, this thesis focuses on the development and application of state-of-the-art deep learning (DL) techniques in cardiac imaging. Specifically, we propose new image feature representation types that are learnt with DL models and aimed at highlighting CMR image quality across datasets. These representations are also intended to estimate CMR image quality for better interpretation and analysis. Moreover, we investigate how quantitative analysis can benefit when these learnt image representations are used in image synthesis. Specifically, a 3D Fisher discriminative representation is introduced to identify CMR image quality in the UK Biobank cardiac data. Additionally, a novel adversarial learning (AL) framework is introduced for cross-dataset CMR image quality assessment, and we show that the common representations learnt by AL can be useful and informative for cross-dataset CMR image analysis. Finally, we utilize the dataset-invariant (DI) representations for CMR volume interpolation by introducing a novel generative adversarial network (GAN)-based image synthesis framework, which enhances cross-dataset CMR image quality.
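The Fisher discriminative idea behind the representation above can be illustrated on a single scalar feature: a feature is discriminative for quality classification when the separation between class means is large relative to the within-class spread. This is a textbook Fisher score, not the thesis's 3D representation; the sample values are invented:

```python
def fisher_score(good, bad):
    """Fisher discriminant ratio for a scalar feature: squared distance
    between class means divided by the summed within-class variances.
    Higher values mean the feature separates the two classes better."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return (mean(good) - mean(bad)) ** 2 / (var(good) + var(bad))
```

A quality-assessment pipeline would compute such scores per feature (or learn a projection maximizing the ratio, as in Fisher's linear discriminant) and keep the most discriminative ones for classifying full-coverage versus incomplete-coverage scans.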

    BNAIC 2008: Proceedings of BNAIC 2008, the twentieth Belgian-Dutch Artificial Intelligence Conference


    Holistic interpretation of visual data based on topology: semantic segmentation of architectural facades

    The work presented in this dissertation is a step towards effectively incorporating contextual knowledge into the task of semantic segmentation. To date, the use of context has been confined to the genre of the scene, with a few exceptions in the field; research has instead been directed towards enhancing appearance descriptors. While this is unarguably important, recent studies show that computer vision has reached near-human performance when relying on these descriptors, provided objects have stable, distinctive surface properties and imaging conditions are adequate. When these conditions are not met, humans exploit their knowledge of the intrinsic geometric layout of the scene to make local decisions; computer vision lags behind in this respect. For this reason, we aim to bridge the gap by presenting algorithms for semantic segmentation of building facades that make use of topological aspects of the scene. We provide a classification scheme that carries out segmentation and recognition simultaneously. The algorithm solves a single optimization function and yields a semantic interpretation of facades, relying on the modeling power of probabilistic graphs and efficient discrete combinatorial optimization tools. We also tackle the same problem of semantic facade segmentation with a neural network approach, attaining accuracy figures on par with the state of the art in a fully automated pipeline: pixelwise classifications obtained via Convolutional Neural Networks (CNNs) are structurally validated through a cascade of Restricted Boltzmann Machines (RBMs) and a Multi-Layer Perceptron (MLP) that regenerates the most likely layout. In the domain of architectural modeling we address geometric multi-model fitting, introducing a novel guided sampling algorithm based on Minimum Spanning Trees (MSTs) which surpasses other propagation techniques in terms of robustness to noise. We make a number of additional contributions, such as a measure of model deviation which captures variations among fitted models.
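The MST-guided sampling idea above can be sketched generically: build a minimum spanning tree over the data points, then draw minimal samples along tree edges, so sampled points are spatially close and more likely to belong to the same geometric model. This is an illustrative reduction (Prim's algorithm plus edge sampling), not the dissertation's algorithm; the point set is an assumption:

```python
import math
import random

def mst_edges(points):
    """Prim's algorithm over the complete Euclidean graph of 2-D points.
    Returns a list of (weight, u, v) index edges forming the MST."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:                       # cheapest edge leaving the tree
            for v in range(n):
                if v in in_tree:
                    continue
                d = math.dist(points[u], points[v])
                if best is None or d < best[0]:
                    best = (d, u, v)
        edges.append(best)
        in_tree.add(best[2])
    return edges

def guided_sample(points, rng):
    """Draw a minimal sample biased towards MST neighbours: picking a
    random MST edge yields two nearby points, which for multi-model
    fitting are more likely to lie on the same model than a uniform pair."""
    _, u, v = rng.choice(mst_edges(points))
    return points[u], points[v]
```

Compared with uniform random sampling (as in plain RANSAC), neighbourhood-guided sampling raises the probability that all points in a minimal sample are inliers of the same model, which is where the claimed robustness to noise comes from.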

    Towards secure distributed computations

    Nowadays, many networks work in a cooperative way and form what we know as grids of computers. These grids serve many purposes and are used with good results for intensive calculation, because the combined computing power aids in solving complex functions. To cope with these new requirements and facilities, programming languages had to evolve to new paradigms, including facilities for doing distributed computing in a straightforward way. Functional programming is a paradigm that treats computation as the evaluation of mathematical functions, and functional programming languages implement the concepts introduced by this paradigm. Usually they are modeled using the λ-calculus, but other variants exist; in this line we have languages like ML, Haskell and (Pure) Lisp. This work focuses on ML-like languages. As part of the evolution of grid computing, some functional programming languages were adapted to handle these new requirements. To be used in distributed contexts, the calculi had to be extended with new paradigms, and theoretical support for concurrent and distributed programming was conceived. For concurrent programming the π-calculus was created, and this formalism was extended for mobility in the Ambient calculus. From these approaches new functional languages were created: examples of concurrent programming languages are Pict, occam-pi and Concurrent Haskell, while among distributed programming languages we can mention Nomadic Pict, Alice and Acute. After the creation and utilization of such languages, one aspect remaining to be addressed is the security properties of these computations. The security properties of languages that execute on a single machine are already difficult to achieve.
Increased precautions must be taken when dealing with many hosts and complex networks. Distributed programming languages must achieve, among other properties, correctness in their own abstractions: they must satisfy type safety and abstraction safety. This work is concerned with correctness and safety in distributed languages, with a focus on ML-like languages and the properties they have. To this aim, we have focused on a language called Acute. This language was born for doing research in distributed programming and was created as a joint effort of the University of Cambridge and INRIA Rocquencourt. Acute has modern primitives for interaction between cooperating programs; two primitives, marshal and unmarshal, have been introduced with this in mind. Acute has powerful properties: type and abstraction safety are guaranteed across the distributed system. But this only holds when no entity can tamper with data transmitted between hosts; if such tampering occurs, safety can no longer be guaranteed. The Acute language typechecks values at unmarshal time to ensure their correctness. This can be done with values of concrete types, but with values of abstract types the situation is different: we may have only partial information to check, or the representation may not be available at all. So, how can values of abstract types be secured in the context of a distributed programming language? We propose the use of a novel technique called Proof Carrying Results (PCR), based on Necula's proof-carrying code. Basically, the result of some computation comes equipped with a certificate, or witness, that can be used with abstract types. If the value comes with a witness that the computation was performed correctly, the caller can verify this witness and know that the value was generated correctly. Throughout this thesis, we show how to add the PCR technique to a distributed programming language.
The supporting infrastructure for the technique is introduced along with it. To check the values and associated witnesses produced by a host, we use the Coq proof checker for precise and reliable verification.
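The result-plus-witness pattern described above can be sketched in a few lines: the producer ships a result together with evidence, and the receiver validates the evidence instead of trusting the producer or re-running the computation. This is only a toy analogue of PCR (sortedness witnessed by a permutation, with no Coq-checked proof terms); all names are illustrative:

```python
def sort_with_witness(xs):
    """Producer side: return the sorted result together with a witness,
    here the permutation mapping input positions to output positions."""
    perm = sorted(range(len(xs)), key=lambda i: xs[i])
    return [xs[i] for i in perm], perm

def verify(xs, result, perm):
    """Receiver side, in the spirit of proof-carrying results: validate
    the witness rather than redo (or blindly trust) the computation."""
    if sorted(perm) != list(range(len(xs))):
        return False                                   # not a permutation of the input
    if [xs[i] for i in perm] != result:
        return False                                   # witness does not produce result
    return all(a <= b for a, b in zip(result, result[1:]))  # result is ordered
```

Checking the witness is cheap and requires no access to the producer's code, which is exactly why the pattern suits abstract types whose representation the receiver cannot inspect; in the thesis the witness is a machine-checkable proof term rather than a permutation.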