
    Structural and dynamic changes associated with beneficial engineered single-amino-acid deletion mutations in enhanced green fluorescent protein.

    Single-amino-acid deletions are a common part of the natural evolutionary landscape but are rarely sampled during protein engineering, owing to a limited and biased molecular understanding of mutations that shorten the protein backbone. Single-amino-acid deletion variants of enhanced green fluorescent protein (EGFP) have been identified by directed evolution with the beneficial effect of imparting increased cellular fluorescence. Biophysical characterization revealed that increased production of functional protein, rather than changes to the fluorescence parameters, was the mechanism most likely responsible. The structure of EGFP(D190Δ), containing a deletion within a loop, revealed structural changes propagating only after the deleted residue. The structure of EGFP(A227Δ) revealed that a 'flipping' mechanism was used to adjust for residue deletion at the end of a β-strand, with amino acids C-terminal to the deletion site repositioning to take the place of the deleted amino acid. In both variants, new networks of short-range and long-range interactions are generated while the integrity of the hydrophobic core is maintained. Both deletion variants also displayed significant local and long-range changes in dynamics, as evidenced by changes in B factors compared with EGFP. Rather than being detrimental, deletion mutations can introduce beneficial structural effects by altering core protein properties, folding and dynamics, as well as function.

    Food Loss and Waste in Distribution: A Retailer’s Perspective

    Food waste is an expected and inevitable byproduct of any supermarket retail operation. In the US, the supermarket industry is held to high standards and expected to provide high-quality fresh products and make them readily available to consumers during regular shopping hours and, at times, around the clock. These expectations, coupled with the ongoing dilemma of product shelf life ("Best used by XXX date" or "Sell by XXX date") and issues related to efficient packaging, have added to the complexity of resolving the food waste problem. In addition to the waste generated by in-store food preparation processes, the quality-control process of daily culling raises the question of what to do with unsold food and how to divert it from the waste stream. This presentation addresses how Ahold USA and its retail divisions operating under the Stop & Shop banner in NE and NY, Giant Martin's in PA and VA, and Giant in MD are dealing with this issue. The practices currently in place include marking down unsold product, reducing waste by repurposing product, donating to food banks, composting, providing animal feed and, last but not least, planning to convert waste into a renewable energy source (an anaerobic digester) used to generate electricity.

    Expanding Economic Opportunities in Lebanon

    Following years of devastation from war, the infrastructure of the district of Jizzine in southern Lebanon was in shambles, and its residents were left without employment and dependent on agricultural products from outside the region. In February 2002, in an effort to re-establish self-sufficiency in the district, the World Rehabilitation Fund, with support from the United States Agency for International Development and the Leahy War Victims Fund, formed The Development Cooperative in Jizzine (Co-op). By providing technical and material assistance to war/landmine victims, the Co-op has proven increasingly capable of addressing the multiple socioeconomic needs of landmine survivors and other war victims.

    Computer-language based data prefetching techniques

    Data prefetching has long been used as a technique to improve access times to persistent data. It is based on retrieving data records from persistent storage into main memory before the records are needed. Data prefetching has been applied to a wide variety of persistent storage systems, from file systems to relational database management systems and NoSQL databases, with the aim of reducing access times to the data maintained by the system and thus improving the execution times of the applications using this data. However, most existing solutions to data prefetching are based on information that can be retrieved from the storage system itself, whether in the form of heuristics based on the data schema or of data access patterns detected by monitoring access to the system. These approaches have multiple disadvantages in terms of the rigidity of the heuristics they use, the accuracy of the predictions they make and/or the time they need to make these predictions, a process often performed while the applications are accessing the data, causing considerable overhead. In light of the above, this thesis proposes two novel approaches to data prefetching based on predictions made by analyzing the instructions and statements of the computer languages used to access persistent data. The proposed approaches take into consideration how the data is accessed by the higher-level applications, make accurate predictions and operate without causing any additional overhead. The first approach analyzes the instructions of applications written in object-oriented languages in order to prefetch data from Persistent Object Stores. It is based on static code analysis performed prior to application execution and hence adds no overhead; it also includes various strategies to deal with cases that require runtime information unavailable before execution. We integrate this analysis approach into an existing Persistent Object Store and run a series of extensive experiments to measure the improvement obtained by prefetching the objects predicted by the approach. The second approach analyzes statements and historic logs of the declarative query language SPARQL in order to prefetch data from RDF triplestores. It measures two types of similarity between SPARQL queries to detect recurring query patterns in the historic logs, then uses the detected patterns to predict subsequent queries and launches them before they are requested, prefetching the data they need. Our evaluation shows that the approach makes high-accuracy predictions and can achieve a high cache hit rate when caching the results of the predicted queries.
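    The log-driven SPARQL approach lends itself to a compact illustration. The sketch below is a minimal, hypothetical simplification in Python, not the thesis's implementation: it groups near-identical queries from a historic log into pattern classes using a single string-level similarity (the thesis uses two dedicated similarity measures), counts which pattern tends to follow which, and launches the most likely successor early. The `run_query` callback, the similarity threshold and the log format are all assumptions.

```python
from collections import Counter, defaultdict
from difflib import SequenceMatcher

def similarity(q1: str, q2: str) -> float:
    """String-level similarity between two SPARQL queries (0..1).
    A stand-in for the thesis's two dedicated similarity measures."""
    return SequenceMatcher(None, q1, q2).ratio()

def build_successor_model(log: list[str], threshold: float = 0.8):
    """Group near-identical queries into pattern classes, then count
    which pattern tends to follow which in the historic log."""
    patterns: list[str] = []  # one representative query per class

    def classify(q: str) -> int:
        for i, rep in enumerate(patterns):
            if similarity(q, rep) >= threshold:
                return i
        patterns.append(q)
        return len(patterns) - 1

    classes = [classify(q) for q in log]
    successors: dict[int, Counter] = defaultdict(Counter)
    for cur, nxt in zip(classes, classes[1:]):
        successors[cur][nxt] += 1
    return patterns, classify, successors

def prefetch_next(query: str, patterns, classify, successors, run_query):
    """Predict the most likely next query class and launch it early,
    so its results are already cached when the client asks."""
    cls = classify(query)
    if successors[cls]:
        predicted_cls, _ = successors[cls].most_common(1)[0]
        return run_query(patterns[predicted_cls])  # prefetched results
    return None
```

    In the thesis the recurring patterns are detected offline over historic logs, so the classification cost is not paid on the query path; the sketch inlines it only for brevity.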

    A Simpl Shortest Path Checker Verification


    Dropping out in an Irish and German schooling system

    Considering the European efforts to keep the newly defined school dropout rate under 9% in all European jurisdictions, this work studies the dropout phenomenon in two countries: Germany and Ireland. The aims of this work are, on the one hand, to understand the connection between the school system as a whole and the personal decision to drop out and, on the other hand, to compare that connection across the two countries so that each may learn from the other. In the last ten years, Ireland has managed, through several educational restructuring steps, to cut its school dropout rate in half, to about 5% of young people aged 18 to 24, whereas Germany's rate has remained just above 10%. Germany, however, has lower unemployment rates than Ireland and excels at offering a wide range of apprenticeships and dual training to its pupils. The first chapters allow the reader to understand the historical education backgrounds in Germany and Ireland and pinpoint the struggles which school dropouts face daily. For instance, in Germany each state is responsible for its own education system, offering different educational paths to its pupils, while Ireland has a more unified system, with the Department of Education and Skills offering a single vision to all its schools. The thesis then becomes more qualitative, following Galletta's semi-structured interviewing method in meetings with fourteen educational personnel and school dropouts from each jurisdiction. This interviewing method, together with Kuckartz's thematic qualitative text analysis used to evaluate the findings of the interviews, is outlined in a separate chapter to establish the validity, reliability, and structure of the research. The findings of these interviews are subsequently compared with the literature presented earlier, as well as with the preventative and curative measures undertaken by the education departments in both countries. This synopsis ultimately reveals the actual concerns, struggles, and aspirations of the school dropouts. It highlights the similarities as well as the differences of the German and Irish dropout phenomenon, rooted in the complexities of personal backgrounds, school systems, societal make-up, labour markets, and other factors. The last section of this thesis reveals the necessity of focusing on a single goal: strengthening the bond between pupil and school. That goal can be achieved by implementing several steps at national and local level.

    Transformées basées graphes pour la compression de nouvelles modalités d’image

    Due to the wide availability of new camera types capturing extra geometric information, as well as the emergence of new image modalities such as light fields and omnidirectional images, a huge amount of high-dimensional data has to be stored and delivered. The ever-growing streaming and storage requirements of these new image modalities call for novel image coding tools that exploit the complex structure of the data. This thesis explores novel graph-based approaches for adapting traditional image transform coding techniques to emerging data types in which the sampled information lies on irregular structures. In a first contribution, novel local graph-based transforms are designed for compact light field representations. By carefully designing the local transform supports and optimizing the local basis functions, significant improvements in energy compaction can be obtained. Nevertheless, the locality of the supports does not permit exploiting long-term dependencies in the signal. This led to a second contribution in which different sampling strategies are investigated; coupled with novel prediction methods, they yield very significant results for the quasi-lossless compression of light fields. The third part of the thesis focuses on the definition of rate-distortion-optimized subgraphs for the coding of omnidirectional content. If we go further and give more degrees of freedom to the graphs we wish to use, we can learn or define a model (a set of weights on the edges) that might not be entirely reliable for transform design. The last part of the thesis is therefore dedicated to a theoretical analysis of the effect of this uncertainty on the efficiency of graph transforms.
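    The core machinery behind such graph-based transforms can be sketched briefly. The example below is a minimal illustration in Python with NumPy, assuming the classical graph Fourier transform rather than the thesis's optimized local transforms: it builds the combinatorial Laplacian of a small weighted graph, takes its eigenvectors as the transform basis, and shows how a smooth signal on the graph compacts its energy into the low-frequency coefficients. The path-graph weights are an illustrative assumption.

```python
import numpy as np

def graph_fourier_basis(W: np.ndarray) -> np.ndarray:
    """Eigenvectors of the combinatorial Laplacian L = D - W,
    ordered by increasing eigenvalue (graph frequency)."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    eigvals, eigvecs = np.linalg.eigh(L)  # L is symmetric
    return eigvecs                         # columns = basis vectors

# A 4-node path graph: samples along a scan line, with unit edge
# weights standing in for the expected signal correlation.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

U = graph_fourier_basis(W)
signal = np.array([1.0, 1.1, 1.2, 1.3])  # smooth on the graph
coeffs = U.T @ signal                     # forward graph transform

# Energy compaction: almost all the energy lands in the first
# (lowest-frequency) coefficient, so the rest quantize cheaply.
print(np.round(coeffs, 3))
reconstructed = U @ coeffs                # inverse transform
assert np.allclose(reconstructed, signal)
```

    Choosing the edge weights W to match the actual geometry of light-field or omnidirectional samples is precisely where the thesis's local support design and basis optimization come in; an unreliable W is the source of the uncertainty analyzed in the last part.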

    Because we cannot walk on water

    Climate-induced migration has become a recognized phenomenon. Due to the adverse effects of climate change, populations in affected areas will start to move, either as a form of adaptation or because of a failure to adapt. The main questions raised concern the rights of the people displaced and, on the other hand, the obligations of states and the international community to provide protection for these populations. This paper argues that climate migrants are protected from neither the causes nor the effects of climate change. The international environmental governance system does not appear to have regulated the process in a way that guarantees global environmental protection, and if people start to move due to the effects of climate change, they will fall through the gaps in the existing international protection system. This paper also looks specifically at the moral dimension of the phenomenon of climate change and presents why moral questioning is of value when dealing with such a contentious issue. It also speaks to the no-harm principle as a fundamental principle of international law, and of environmental law in particular. Despite its importance, this principle is usually neglected when formulating policies on climate change. The paper argues that the no-harm principle was missing from the Paris Agreement, which thus kept the prospect of harm in place through its commitment to industrial growth and by avoiding emission-reduction targets. It also discusses how climate migrants are not adequately addressed in the Paris Agreement due to certain geopolitical settings, sustaining the possibility that they will remain highly vulnerable. The paper highlights the legal and moral failure of international society towards climate change at large and towards climate migrants in particular.