2,532 research outputs found

    The Gender Inequalities Index (GII) as a New Way to Understand Gender Inequality Issues in Developing Countries

    The measurement of gender inequalities has become an important topic in the academic literature. First, appropriate indicators are needed to compare the relative situation of women in developing countries. Second, there is renewed attention to the relationship between gender inequality and economic growth. Measuring gender inequalities helps establish whether greater inequality promotes or hampers growth. The aim of this paper is twofold. First, the Gender Inequalities Index (GII) is built with a new methodology based on Multiple Correspondence Analysis (MCA), which determines the weight of each variable endogenously. The GII avoids cross-country comparison and ranking. Second, the GII is used to study the relationship between gender inequalities and economic growth using seemingly unrelated regressions. Results show large variations between regions: South Asia has the worst score with an average of 0.63, followed by Sub-Saharan Africa and the Middle East and North Africa with averages of 0.48 and 0.46, respectively. These inequalities reduce the potential growth rate by 4% in South Asia and by 3% in Sub-Saharan Africa and Middle East and North Africa countries. Keywords: composite index, gender inequality, development economics.
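
    To make the weighting idea concrete, here is a minimal Python sketch of an endogenously weighted composite index, using one-hot encoding plus a truncated SVD as a simple stand-in for Multiple Correspondence Analysis; the indicators, categories and countries are illustrative placeholders, not the paper's data or code.

```python
# Illustrative sketch only: a composite inequality score whose category weights
# come from the data itself. One-hot encoding + truncated SVD is a simple
# stand-in for MCA; proper MCA additionally applies chi-square centering and
# weighting. All indicators and countries below are made up.
import pandas as pd
from sklearn.decomposition import TruncatedSVD

# Hypothetical categorical indicators (e.g. binned gaps in literacy, labour
# force participation and parliamentary representation) for four countries.
df = pd.DataFrame(
    {
        "literacy_gap": ["low", "high", "medium", "high"],
        "labour_gap": ["medium", "high", "medium", "low"],
        "representation_gap": ["high", "low", "medium", "low"],
    },
    index=["A", "B", "C", "D"],
)

# Disjunctive (indicator) matrix of the categorical variables.
Z = pd.get_dummies(df).to_numpy(dtype=float)

# The first SVD dimension plays the role of the MCA axis that determines the
# weight of each category endogenously, instead of fixing weights a priori.
svd = TruncatedSVD(n_components=1, random_state=0)
scores = svd.fit_transform(Z).ravel()

# Rescale to [0, 1] so the result reads like the index values quoted above.
gii_like = (scores - scores.min()) / (scores.max() - scores.min())
print(pd.Series(gii_like, index=df.index, name="GII-like score"))
```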

    Examining Protein Localization via Fluorescence Microscopy in Saccharomyces cerevisiae

    Twenty-seven years ago, Saccharomyces cerevisiae became the first eukaryote to have its full genome sequenced, leading to the discovery that 30% of genes related to human diseases are orthologous to genes in S. cerevisiae. Since then, 75% of the proteome has had its localization classified, and we sought to fill some of the remaining gaps by identifying the localization of three proteins: Fsh3, Gid10, and Ade13, which function as a serine hydrolase, a ubiquitin ligase, and an adenylosuccinate lyase, respectively. To visualize cellular localization, we used a C-terminal GFP tagging strategy followed by fluorescence microscopy. Through colocalization analyses, we identified the cellular localization of Ade13 as mitochondrial, while Fsh3 and Gid10 localize to distinct puncta predicted to be peroxisomal. Interestingly, we observed that the localization of Gid10 changes depending on glucose availability in the medium. As each of the investigated yeast proteins has human orthologues whose malfunction leads to disease, our findings lay an important foundation for future studies comparing S. cerevisiae and human proteins in conserved processes.
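
    As an illustration of the kind of colocalization measurement described here, the following NumPy sketch computes a Pearson correlation between a GFP channel and an organelle-marker channel over a foreground mask; the image arrays are synthetic placeholders rather than actual microscopy data.

```python
# Illustrative colocalization sketch: Pearson correlation between a GFP channel
# and an organelle-marker channel over a foreground mask. The image arrays are
# synthetic placeholders; real data would come from registered microscopy files.
import numpy as np

rng = np.random.default_rng(0)
gfp = rng.random((512, 512))                       # stand-in for a GFP channel
marker = 0.7 * gfp + 0.3 * rng.random((512, 512))  # partially overlapping marker

# Restrict the measurement to pixels that plausibly contain signal rather than
# background (a real analysis would segment cells instead of thresholding).
mask = (gfp > gfp.mean()) | (marker > marker.mean())

# Pearson coefficient over masked pixels: values near 1 suggest colocalization,
# values near 0 suggest independent subcellular distributions.
r = np.corrcoef(gfp[mask], marker[mask])[0, 1]
print(f"Pearson colocalization coefficient: {r:.2f}")
```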

    Decision support system for cardiovascular problems

    The DISHEART project aims at developing a new computer-based decision support system (DSS) integrating medical image data, modelling, simulation, computational Grid technologies and artificial intelligence methods for assisting clinical diagnosis and intervention in cardiovascular problems. The RTD goal is to improve and link existing state-of-the-art technologies in order to build a computerised cardiovascular model for the analysis of the heart and blood vessels. The resulting DISHEART DSS interfaces computational biomechanical analysis tools with the information coming from multimodal medical images. The computational model is coupled to an artificial neural network (ANN) based decision model that can be trained for each particular patient with data coming from that patient's images and/or analyses. The DISHEART DSS is validated in trials of clinical diagnosis, surgical intervention and subject-specific design of medical devices in the cardiovascular domain. The DISHEART DSS also contributes to a better understanding of cardiovascular morphology and function as inferred from routine imaging examinations. Four reputable medical centers in Europe took an active role in the validation and dissemination of the DISHEART DSS, as well as in the elaboration of computational material and medical images. The integrated DISHEART DSS supports health professionals in promptly taking the best possible decision for prevention, diagnosis and treatment. Emphasis was placed on the development of user-friendly, fast and reliable tools and interfaces providing access to heterogeneous health information sources, as well as on new methods for decision support and risk analysis. The use of Grid computing technology is essential in order to optimise and distribute the heavy computational work required for physical modelling and numerical simulation, and especially for the parametric analyses required to train the DSS for each particular application. The four end-user SMEs participating in the project benefit from the new DISHEART DSS. The companies COMPASS, QUANTECH and Heartcore will market the DSS among public and private organizations related to the cardiovascular field. EndoArt will exploit the DISHEART DSS as a support for enhanced design and production of clinical devices. The partnership was assembled to gather the maximum complementarity of skills for the successful development of the DISHEART DSS, requiring experts in mechanical sciences, medical sciences, informatics, and finite element (FEM) techniques to carry out the tests.
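
    As a rough illustration of coupling simulation-derived features to an ANN decision model, the sketch below trains a small multilayer perceptron on synthetic per-patient features; the feature set, labels and network are hypothetical and are not the project's actual DSS.

```python
# Toy sketch of the DSS pattern: an ANN that maps features derived from images
# and biomechanical simulations to a clinical decision. Features, labels and
# model are synthetic placeholders, not the DISHEART project's actual system.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n = 300

# Hypothetical per-patient features: vessel diameter (mm), peak wall stress
# from an FEM simulation (kPa), and ejection fraction from image analysis (%).
X = np.column_stack([
    rng.normal(25, 4, n),
    rng.normal(180, 40, n),
    rng.normal(55, 8, n),
])
# Synthetic "intervention recommended" label loosely tied to the features.
y = ((X[:, 1] > 200) | (X[:, 2] < 45)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small multilayer perceptron stands in for the ANN decision model that the
# project trains from image- and simulation-derived data.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("Held-out accuracy:", round(clf.score(X_test, y_test), 2))
```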

    Historical collaborative geocoding

    Recent developments in digitisation have provided large data sets that can increasingly easily be accessed and used. These data sets often contain indirect localisation information, such as historical addresses. Historical geocoding is the process of transforming this indirect localisation information into direct localisation that can be placed on a map, which enables spatial analysis and cross-referencing. Many efficient geocoders exist for current addresses, but they do not deal with the temporal aspect and are based on a strict hierarchy (..., city, street, house number) that is hard or impossible to use with historical data. Indeed, historical data are full of uncertainties (temporal aspect, semantic aspect, spatial precision, confidence in the historical source, ...) that cannot be resolved, as there is no way to go back in time to check. We propose an open source, open data, extensible solution for geocoding that is based on building gazetteers composed of geohistorical objects extracted from historical topographical maps. Once the gazetteers are available, geocoding a historical address is a matter of finding the geohistorical object in the gazetteers that best matches the historical address. The matching criteria are customisable and cover several dimensions (fuzzy semantic, fuzzy temporal, scale, spatial precision, ...). As the goal is to facilitate historical work, we also propose web-based user interfaces that help geocode (a single address or in batch mode) and display results over current or historical topographical maps, so that they can be checked and collaboratively edited. The system is tested on the city of Paris for the 19th and 20th centuries; it shows a high return rate and is fast enough to be used interactively.
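
    The sketch below illustrates the multi-dimensional matching idea with a toy scoring function that combines fuzzy name similarity and temporal overlap; the gazetteer entries, weights and decay constant are illustrative assumptions, not the system's actual implementation.

```python
# Toy sketch of multi-criteria matching between a historical address and
# gazetteer entries: fuzzy name similarity plus temporal overlap, equally
# weighted. Entries, weights and the 50-year decay are illustrative choices.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class GeohistoricalObject:
    name: str
    valid_from: int  # first year the object is attested on a map
    valid_to: int    # last year the object is attested on a map
    x: float         # illustrative coordinates (spatial precision omitted here)
    y: float


def match_score(query_name: str, query_year: int, obj: GeohistoricalObject) -> float:
    # Fuzzy semantic dimension: normalised string similarity of the names.
    semantic = SequenceMatcher(None, query_name.lower(), obj.name.lower()).ratio()
    # Fuzzy temporal dimension: 1 inside the validity interval, decaying outside.
    if obj.valid_from <= query_year <= obj.valid_to:
        temporal = 1.0
    else:
        gap = min(abs(query_year - obj.valid_from), abs(query_year - obj.valid_to))
        temporal = max(0.0, 1.0 - gap / 50.0)
    # Equal weights here; a real geocoder would expose them as parameters.
    return 0.5 * semantic + 0.5 * temporal


gazetteer = [
    GeohistoricalObject("Rue de la Chaussée d'Antin", 1790, 1860, 2.331, 48.872),
    GeohistoricalObject("Rue Mirabeau", 1791, 1793, 2.330, 48.871),
]
query_name, query_year = "rue de la chaussee d'antin", 1845
best = max(gazetteer, key=lambda o: match_score(query_name, query_year, o))
print(best.name, round(match_score(query_name, query_year, best), 2))
```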

    Applying Semantic Web Technologies to Medieval Manuscript Research

    Medieval manuscript research is a complex, fragmented, multilingual field of knowledge, which is difficult to navigate, analyse and exploit. Though printed sources are still of great importance and value to researchers, there are now many services on the Web, some commercial and many in the public domain. At present, these services have to be consulted separately and individually. They employ a range of different descriptive standards and vocabularies, and use a variety of technologies to make their information available on the Web. This chapter proposes a new approach to organizing the international collaborative infrastructure for interlinking knowledge and research about medieval European manuscripts, based on technologies associated with the Semantic Web and the Linked Data movement. This collaborative infrastructure will be an open space on the Web where information about medieval manuscripts can be shared, stored, exchanged and updated for research purposes. It will be possible to ask large-scale research questions across the virtual global manuscript collection, in a quicker and more effective way than has ever been feasible in the past. The proposed infrastructure will focus on building links between data and will provide the basis for new kinds of services which exploit these data. It will not aim to impose a single metadata standard on existing manuscript services, but will build on existing databases and vocabularies. The chapter describes the architecture, services and data which will comprise this infrastructure, and discusses strategies for making this challenging and exciting goal a reality.
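
    A small rdflib sketch of the Linked Data pattern being proposed is shown below; the namespace, properties and manuscript record are illustrative placeholders rather than the infrastructure's actual ontology or data.

```python
# Small rdflib sketch of the Linked Data pattern: manuscripts as RDF resources
# queried with SPARQL. The namespace, properties and record are illustrative
# placeholders, not an actual manuscript ontology or dataset.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/manuscripts/")

g = Graph()
ms = URIRef(EX["ms-0042"])
g.add((ms, RDF.type, EX.Manuscript))
g.add((ms, RDFS.label, Literal("Psalter, 13th century")))
g.add((ms, EX.heldBy, Literal("Example Library")))

# A SPARQL query of the kind that could, in principle, run across many linked
# collections once they share (or are mapped between) vocabularies.
results = g.query("""
    PREFIX ex: <http://example.org/manuscripts/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?ms ?label ?holder WHERE {
        ?ms a ex:Manuscript ;
            rdfs:label ?label ;
            ex:heldBy ?holder .
    }
""")
for row in results:
    print(row.ms, row.label, row.holder)
```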

    Undergraduate Bulletin, 2022-2023

    https://red.mnstate.edu/bulletins/1106/thumbnail.jp

    Undergraduate Bulletin, 2023-2024

    https://red.mnstate.edu/bulletins/1107/thumbnail.jp

    Undergraduate Bulletin, 2021-2022

    https://red.mnstate.edu/bulletins/1105/thumbnail.jp

    GTO: A toolkit to unify pipelines in genomic and proteomic research

    Next-generation sequencing has triggered the production of a massive volume of publicly available data and the development of new specialised tools. These tools are dispersed over different frameworks, making the management and analysis of the data a challenging task. Additionally, new targeted tools are needed, given the dynamics and specificities of the field. We present GTO, a comprehensive toolkit designed to unify pipelines in genomic and proteomic research, which combines specialised tools for analysis, simulation, compression, development, visualisation, and transformation of the data. The toolkit couples novel tools with a modular architecture, making it an excellent platform for experimental scientists as well as a useful resource for teaching bioinformatics enquiry to students in the life sciences. GTO is implemented in the C language and is available, under the MIT license, at https://bioinformatics.ua.pt/gto. (C) 2020 The Authors. Published by Elsevier B.V.
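
    As a generic illustration of the pipeline style such toolkits support, the sketch below chains three command-line stages through pipes from Python; the commands are ordinary Unix stand-ins, not actual GTO tools, and the file names are hypothetical.

```python
# Generic sketch of the Unix-style pipeline pattern that such toolkits support:
# small single-purpose programs chained through pipes. The commands below are
# ordinary Unix stand-ins, NOT actual GTO tools, and "reads.fasta" is a
# hypothetical input file.
import subprocess

stages = [
    ["cat", "reads.fasta"],  # stand-in for a tool that reads/converts a FASTA file
    ["grep", "-v", "^>"],    # stand-in for a tool that extracts sequence lines
    ["gzip", "-c"],          # stand-in for a compression tool
]

procs, prev_stdout = [], None
for cmd in stages:
    p = subprocess.Popen(cmd, stdin=prev_stdout, stdout=subprocess.PIPE)
    if prev_stdout is not None:
        prev_stdout.close()  # let the upstream process receive SIGPIPE if needed
    procs.append(p)
    prev_stdout = p.stdout

with open("sequences_only.gz", "wb") as out:
    out.write(procs[-1].stdout.read())
for p in procs:
    p.wait()
```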

    A Study on Variational Component Splitting Approach for Mixture Models

    Get PDF
    The increasing use of mobile devices and the introduction of cloud-based services have resulted in the generation of enormous amounts of data every day. This calls for the need to group these data appropriately into proper categories. Various clustering techniques have been introduced over the years to learn the patterns in data that might better facilitate the classification process. The finite mixture model is one of the crucial methods used for this task. The basic idea of mixture models is to fit the data at hand to an appropriate distribution. The design of mixture models hence involves finding the appropriate parameters of the distribution and estimating the number of clusters in the data. We use a variational component splitting framework for this, which can simultaneously learn the parameters of the model and estimate the number of components. The variational algorithm helps to overcome the computational complexity of purely Bayesian approaches and the overfitting problems experienced with maximum likelihood approaches, while guaranteeing convergence. The choice of distribution remains the core concern of mixture models in recent research. The efficiency of the Dirichlet family of distributions for this purpose has been demonstrated in recent studies, especially for non-Gaussian data. This led us to study the impact of the variational component splitting approach on mixture models based on several distributions. Hence, our contribution is the application of the variational component splitting approach to design finite mixture models based on the inverted Dirichlet, generalized inverted Dirichlet and inverted Beta-Liouville distributions. In addition, as a further experimental contribution, we incorporate a simultaneous feature selection approach for the generalized inverted Dirichlet mixture model along with component splitting. We evaluate the performance of our models on various real-life applications such as object, scene, texture, speech and video categorization.
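
    For a concrete, if simplified, picture of variational component estimation, the sketch below uses scikit-learn's BayesianGaussianMixture as a Gaussian analogue; the inverted Dirichlet, generalized inverted Dirichlet and inverted Beta-Liouville mixtures and the component-splitting scheme itself are not available in scikit-learn, so this only illustrates how a variational mixture discards surplus components.

```python
# Gaussian analogue of the component-estimation idea: a variational mixture is
# given more components than needed and the inference drives surplus weights
# towards zero. scikit-learn has no inverted Dirichlet or Beta-Liouville
# mixtures, so this only illustrates the variational mechanism, not the thesis
# models or the splitting scheme itself.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic data drawn from three well-separated clusters.
X = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(200, 2)),
    rng.normal(loc=(5.0, 5.0), scale=0.5, size=(200, 2)),
    rng.normal(loc=(0.0, 5.0), scale=0.5, size=(200, 2)),
])

# Deliberately over-specify the number of components; the Dirichlet-process
# prior shrinks the weights of components the data do not support.
bgm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.01,
    max_iter=500,
    random_state=0,
).fit(X)

print("Estimated number of components:", int(np.sum(bgm.weights_ > 0.01)))
print("Component weights:", np.round(bgm.weights_, 3))
```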