
    A treatment of stereochemistry in computer aided organic synthesis

    This thesis describes the author's contributions to a new stereochemical processing module constructed for the ARChem retrosynthesis program. The purpose of the module is to add the ability to perform enantioselective and diastereoselective retrosynthetic disconnections and generate appropriate precursor molecules. The module uses evidence-based rules generated from a large database of literature reactions. Chapter 1 provides an introduction and critical review of the published body of work for computer-aided synthesis design. The role of computer perception of key structural features (rings, functional groups, etc.) and the construction and use of reaction transforms for generating precursors is discussed. Emphasis is also given to the application of strategies in retrosynthetic analysis. The availability of large reaction databases has enabled a new generation of retrosynthesis design programs to be developed that use automatically generated transforms assembled from published reactions. A brief description of the transform generation method employed by ARChem is given. Chapter 2 describes the algorithms devised by the author for handling the computer recognition and representation of the stereochemical features found in molecule and reaction scheme diagrams. The approach is generalised and uses flexible recognition patterns to transform information found in chemical diagrams into concise stereo descriptors for computer processing. An algorithm for efficiently comparing and classifying pairs of stereo descriptors is described. This algorithm is central to solving the stereochemical constraints in a variety of substructure matching problems addressed in Chapter 3. The concise representation of reactions and transform rules as hyperstructure graphs is described. Chapter 3 is concerned with the efficient and reliable detection of stereochemical symmetry in molecules, reactions and rules. A novel symmetry perception algorithm, based on a constraint satisfaction problem (CSP) solver, is described. The use of a CSP solver to implement an isomorph-free matching algorithm for stereochemical substructure matching is detailed. The prime function of this algorithm is to seek out unique retron locations in target molecules and then to generate precursor molecules without duplications due to symmetry. Novel algorithms are described for classifying asymmetric, pseudo-asymmetric and symmetric stereocentres; meso, centro- and C2-symmetric molecules; and the stereotopicity of trigonal (sp2) centres. Chapter 4 introduces and formalises the annotated structural language used to create both retrosynthetic rules and the patterns used for functional group recognition. A novel functional group recognition package is described, along with its use to detect important electronic features such as electron-withdrawing or -donating groups and leaving groups. The functional groups and electronic features are used as constraints in retron rules to improve transform relevance. Chapter 5 details the approach taken to design detailed stereoselective and substrate-controlled transforms from organised hierarchies of rules. The rules employ a rich set of constraint annotations that concisely describe the keying retrons. The application of the transforms for collating evidence-based scoring parameters from published reaction examples is described. A survey of available reaction databases and of techniques for mining stereoselective reactions is presented. A data mining tool was developed for finding the best reputable stereoselective reaction types for coding as transforms. For various reasons it was not possible during the research period to fully integrate this work with the ARChem program. Instead, Chapter 6 introduces a novel one-step retrosynthesis module to test the developed transforms. The retrosynthesis algorithms use the organisation of the transform rule hierarchy to efficiently locate the best retron matches using all applicable stereoselective transforms. This module was tested using a small set of selected target molecules, and the generated routes were ranked using a series of measured parameters including stereocentre clearance and bond cleavage; example reputation; estimated stereoselectivity with reliability; and evidence of tolerated functional groups. In addition, a method for detecting regioselectivity issues is presented. This work presents a number of algorithms using common set- and graph-theory operations and notations. Appendix A lists the set theory symbols and their meanings. Appendix B summarises and defines the common graph theory terminology used throughout this thesis.
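    To make the Chapter 3 idea of CSP-based symmetry perception concrete, the sketch below enumerates graph automorphisms of a toy molecular graph with a small backtracking constraint solver: atoms are variables, candidate images are their domains, and element labels plus bond preservation act as constraints. This is a minimal, hypothetical illustration, not the ARChem module or its data structures.

```python
# Minimal sketch of symmetry perception as constraint satisfaction
# (illustrative only; not the ARChem implementation or its data model).
def automorphisms(elements, bonds):
    """elements: list of atom symbols; bonds: set of frozenset({i, j}) index pairs."""
    n = len(elements)
    adj = [set() for _ in range(n)]
    for bond in bonds:
        i, j = tuple(bond)
        adj[i].add(j)
        adj[j].add(i)

    def consistent(mapping, atom, image):
        if elements[atom] != elements[image]:
            return False                      # element labels must agree
        if len(adj[atom]) != len(adj[image]):
            return False                      # degrees must agree
        for other, other_image in mapping.items():
            if (other in adj[atom]) != (other_image in adj[image]):
                return False                  # bonds must be preserved
        return True

    def backtrack(mapping, used):
        atom = len(mapping)
        if atom == n:                         # every atom mapped: one automorphism
            yield dict(mapping)
            return
        for image in range(n):
            if image not in used and consistent(mapping, atom, image):
                mapping[atom] = image
                used.add(image)
                yield from backtrack(mapping, used)
                del mapping[atom]
                used.discard(image)

    return list(backtrack({}, set()))

# Tiny symmetric fragment: two equivalent carbons, each bearing one oxygen.
elements = ["C", "C", "O", "O"]
bonds = {frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 3})}
print(len(automorphisms(elements, bonds)))  # 2: identity plus the swap (0<->1, 2<->3)
```

    Non-trivial automorphisms such as the swap above are exactly what a retrosynthesis engine must detect in order to avoid generating duplicate precursors from symmetry-equivalent retron matches.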

    LookBook: pioneering Inclusive beauty with artificial intelligence and machine learning algorithms

    Technology's imperfections, and the biases it inherits from historical norms, are crucial to acknowledge. The rapid perpetuation and amplification of these biases necessitate transparency and proactive measures to mitigate their impact. Online visual culture reinforces Eurocentric beauty ideals through prioritized algorithms and augmented reality filters, distorting reality and perpetuating unrealistic standards of beauty. Narrow beauty standards in technology pose a significant challenge to overcome. Algorithms personalize content, creating "filter bubbles" that reinforce these ideals and limit exposure to diverse representations of beauty. This cycle compels individuals to conform, hindering the embrace of their unique features and alternative definitions of beauty. LookBook counters prevalent narrow beauty standards in technology. It promotes inclusivity and representation through self-expression, community engagement, and diverse visibility. LookBook comprises three core sections: Dash, Books, and Community. In Dash, users curate their experience through personalization algorithms. Books allow users to collect curated content for inspiration and creativity, while Community fosters connections with like-minded individuals. Through LookBook, users create a reality aligned with their unique vision. They control the content they consume, nurturing individualism through preferences and creativity. This personalization empowers individuals to break free from narrow beauty standards and embrace their distinctiveness. LookBook stands out with its algorithmic training and data representation. It offers transparency on how personalization algorithms operate and ensures a balanced and diverse representation of physicalities and ethnicities. By addressing biases and embracing a wide range of identities, LookBook sparks a conversation for a technology landscape that amplifies all voices, fostering an environment that celebrates diversity and prioritizes inclusivity.
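    The abstract does not spell out how personalization is balanced against diverse representation; purely as a hypothetical illustration (not LookBook's actual algorithm or data model), a feed could be re-ranked greedily so that items from groups already shown receive a shrinking diversity bonus.

```python
# Hypothetical sketch only: a greedy re-ranker that trades off a personalization
# score against representation of groups not yet shown in the feed.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    score: float   # personalization score from some upstream model
    group: str     # e.g. a self-described style or identity tag

def diversified_feed(items, k, diversity_weight=0.3):
    """Pick k items, boosting groups that are still under-represented."""
    remaining = list(items)
    chosen, shown_per_group = [], {}
    while remaining and len(chosen) < k:
        def utility(it):
            seen = shown_per_group.get(it.group, 0)
            # Penalize groups already shown; unseen groups keep the full boost.
            return it.score + diversity_weight / (1 + seen)
        best = max(remaining, key=utility)
        remaining.remove(best)
        chosen.append(best)
        shown_per_group[best.group] = shown_per_group.get(best.group, 0) + 1
    return chosen

feed = diversified_feed(
    [Item("a", 0.9, "look1"), Item("b", 0.88, "look1"), Item("c", 0.8, "look2")],
    k=2,
)
print([it.item_id for it in feed])  # ['a', 'c'] rather than two items from the same group
```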

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. (Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision.)
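    As a toy illustration of the sparse coding step described above (a sketch under simplified assumptions, not code from the monograph), the snippet below computes a sparse code for a signal against a fixed random dictionary with ISTA, i.e. proximal gradient descent on the l1-regularized reconstruction error.

```python
# Toy sketch of sparse coding with a fixed dictionary via ISTA
# (iterative soft-thresholding); illustrative only.
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Minimize 0.5*||x - D @ a||^2 + lam*||a||_1 over the code a."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, ord=2) ** 2        # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
true_code = np.zeros(256)
true_code[[3, 57, 101]] = [1.5, -2.0, 0.8]   # a few active atoms
x = D @ true_code + 0.01 * rng.standard_normal(64)

a = sparse_code(x, D, lam=0.05)
print(np.count_nonzero(np.abs(a) > 1e-3))    # only a handful of atoms remain active
```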

    Science of Facial Attractiveness


    Varieties of Attractiveness and their Brain Responses


    Global currencies for tomorrow: a European perspective

    This report examines how the international monetary system (IMS) might evolve and the implications of different scenarios for the euro area over the next fifteen years. After the collapse of the Bretton Woods system forty years ago, the IMS gradually developed into its present state: a hybrid mix of exchange-rate flexibility, capital mobility and monetary independence. The US dollar retains a dominant, but not exclusive, role, and the IMS governance system blends regional and multilateral surveillance. It combines IMF-based and ad hoc liquidity provision. Although it has proved resilient during the crisis, partly thanks to ad hoc arrangements, the IMS has serious flaws, which are likely to be magnified by the rapid transformation of the global economy and the increasing economic power of emerging economies.

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments. This means that images might be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks, and it has therefore attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. Such an operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
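    A rough sketch of the pipeline shape follows: each image is mapped to a contour-like delineation map before being fed to an AlexNet classifier. Because the CORF push-pull operator itself is not reproduced here, a simple difference-of-Gaussians edge map stands in for it; everything below is an assumption-laden illustration rather than the authors' code.

```python
# Sketch of the pre-processing idea only: transform images into an edge/delineation
# map before classification. A difference-of-Gaussians edge map stands in for the
# CORF push-pull operator, whose actual implementation is not reproduced here.
import numpy as np
import torch
from PIL import Image
from scipy.ndimage import gaussian_filter
from torchvision import transforms
from torchvision.models import alexnet

def delineation_map(img: Image.Image) -> Image.Image:
    """Stand-in delineation step: difference of Gaussians, rescaled to [0, 255]."""
    gray = np.asarray(img.convert("L"), dtype=np.float32)
    dog = gaussian_filter(gray, sigma=1.0) - gaussian_filter(gray, sigma=2.0)
    dog = (dog - dog.min()) / (np.ptp(dog) + 1e-8) * 255.0
    return Image.fromarray(dog.astype(np.uint8)).convert("RGB")

preprocess = transforms.Compose([
    transforms.Lambda(delineation_map),   # contour-like representation first
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = alexnet(num_classes=10)           # e.g. the 10 Fashion-MNIST classes
model.eval()

img = Image.fromarray(np.random.randint(0, 255, (28, 28), dtype=np.uint8))
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
print(logits.shape)                       # torch.Size([1, 10])
```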

    The Complex Role of Sequence and Structure in the Stability and Function of the TIM Barrel Proteins

    Sequence divergence of orthologous proteins enables adaptation to a plethora of environmental stresses and promotes the evolution of novel functions. As one of the most common motifs in biology capable of diverse enzymatic functions, the TIM barrel represents an ideal model system for mapping the phenotypic manifestations of protein sequence. Limits on evolution imposed by constraints on sequence and structure were investigated using a model TIM barrel protein, indole-3-glycerol phosphate synthase (IGPS). Exploration of the fitness landscapes of phylogenetically distant orthologs provides a strategy for elucidating the complex interrelationship between sequence and structure in the context of a protein fold. Fitness effects of point mutations in three phylogenetically divergent IGPS proteins during adaptation to temperature stress were probed by auxotrophic complementation of yeast with prokaryotic, thermophilic IGPS. Significant correlations between the fitness landscapes of distant orthologs implicate both sequence and structure as primary forces in defining the TIM barrel fitness landscape. These results suggest that fitness landscapes of point mutants can be successfully translocated in sequence space, where knowledge of one landscape may be predictive of the landscape of another ortholog. Analysis of a surprising class of beneficial mutations in all three IGPS orthologs pointed to a long-range allosteric pathway towards the active site of the protein. Biophysical and biochemical analyses provided insights into the molecular mechanism of these beneficial fitness effects. Epistatic interactions suggest that the helical shell may be involved in the observed allostery. Taken together, knowledge of the fundamental properties of the TIM protein architecture will provide new strategies for the de novo design of a highly targeted protein fold.
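    The claim that fitness landscapes of distant orthologs correlate can be made concrete with a small sketch (illustrative only, with made-up numbers, not the study's data or analysis pipeline): given per-mutation fitness scores for two orthologs at aligned positions, a rank correlation quantifies how well one landscape predicts the other.

```python
# Illustrative sketch with hypothetical values: compare fitness effects of the
# same point mutations measured in two IGPS orthologs.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-mutation fitness scores keyed by "position+substitution".
ortholog_a = {"S58A": 0.92, "G81V": 0.15, "L104F": 1.08, "D120N": 0.40, "A156T": 0.97}
ortholog_b = {"S58A": 0.88, "G81V": 0.22, "L104F": 1.15, "D120N": 0.35, "A156T": 1.02}

shared = sorted(set(ortholog_a) & set(ortholog_b))
a = np.array([ortholog_a[m] for m in shared])
b = np.array([ortholog_b[m] for m in shared])

rho, pval = spearmanr(a, b)
print(f"Spearman rho = {rho:.2f} over {len(shared)} shared mutations (p = {pval:.3f})")
```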

    A Study of Accommodation of Prosodic and Temporal Features in Spoken Dialogues in View of Speech Technology Applications

    Inter-speaker accommodation is a well-known property of human speech and human interaction in general. Broadly, it refers to the behavioural patterns of two (or more) interactants and the effect of the (verbal and non-verbal) behaviour of each on that of the other(s). Implementation of this behaviour in spoken dialogue systems is desirable as an improvement on the naturalness of human-machine interaction. However, traditional qualitative descriptions of accommodation phenomena do not provide sufficient information for such an implementation. Therefore, a quantitative description of inter-speaker accommodation is required. This thesis proposes a methodology for monitoring accommodation during a human or human-computer dialogue, which utilizes a moving average filter over sequential frames for each speaker. These frames are time-aligned across the speakers, hence the name Time Aligned Moving Average (TAMA). Analysis of spontaneous human dialogue recordings by means of the TAMA methodology reveals ubiquitous accommodation of prosodic features (pitch, intensity and speech rate) across interlocutors, and allows for statistical (time series) modelling of the behaviour, in a way which is meaningful for implementation in spoken dialogue system (SDS) environments. In addition, a novel dialogue representation is proposed that provides an additional point of view to that of TAMA in monitoring accommodation of temporal features (inter-speaker pause length and overlap frequency). This representation is a percentage turn distribution of individual speaker contributions in a dialogue frame, which circumvents strict attribution of speaker turns by considering both interlocutors as synchronously active. Both the TAMA and turn distribution metrics indicate that correlation of average pause length and overlap frequency between speakers can be attributed to accommodation (a debated issue), and point to possible improvements in SDS "turn-taking" behaviour. Although the findings of the prosodic and temporal analyses can directly inform SDS implementations, further work is required in order to describe inter-speaker accommodation sufficiently, as well as to develop an adequate testing platform for evaluating the magnitude of perceived improvement in human-machine interaction. Therefore, this thesis constitutes a first step towards a convincingly useful implementation of accommodation in spoken dialogue systems.
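    A minimal sketch of the TAMA idea follows (illustrative, with synthetic pitch values, not the thesis code): average a prosodic feature over fixed-length, time-aligned frames for each speaker, smooth the per-speaker series with a moving average, and correlate the two series as a simple accommodation indicator.

```python
# Minimal, illustrative TAMA-style sketch with synthetic data (not the thesis code):
# per-speaker frame averages of a prosodic feature, a moving-average smoother,
# and a cross-speaker correlation as a crude accommodation indicator.
import numpy as np

def frame_average(times, values, frame_len, n_frames):
    """Average a feature (e.g. f0 in Hz) within fixed-length, time-aligned frames."""
    out = np.full(n_frames, np.nan)
    for k in range(n_frames):
        mask = (times >= k * frame_len) & (times < (k + 1) * frame_len)
        if mask.any():
            out[k] = values[mask].mean()
    return out

def moving_average(series, window=3):
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

rng = np.random.default_rng(1)
t = np.arange(0.0, 120.0, 0.01)                  # 2 minutes of speech, 10 ms steps
trend = 180 + 20 * np.sin(t / 30)                # shared slow drift in pitch
f0_spk_a = trend + rng.normal(0, 5, t.size)
f0_spk_b = trend - 15 + rng.normal(0, 5, t.size) # speaker B tracks A's drift

frames_a = frame_average(t, f0_spk_a, frame_len=10.0, n_frames=12)
frames_b = frame_average(t, f0_spk_b, frame_len=10.0, n_frames=12)
smooth_a = moving_average(frames_a)
smooth_b = moving_average(frames_b)

r = np.corrcoef(smooth_a, smooth_b)[0, 1]
print(f"cross-speaker pitch correlation over aligned frames: {r:.2f}")
```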