
    Applying Science Models for Search

    The paper proposes three different kinds of science models as value-added services that are integrated in the retrieval process to enhance retrieval quality. The paper discusses the approaches Search Term Recommendation, Bradfordizing and Author Centrality on a general level and addresses implementation issues of the models within a real-life retrieval environment.
    Comment: 14 pages, 3 figures, ISI 201
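    Of the three models, Bradfordizing is the most mechanical to illustrate: a result list is re-ranked so that documents from the most productive ("core") journals surface first. A minimal sketch in Python, with hypothetical document/journal pairs (the paper's actual implementation is not reproduced here):

```python
from collections import Counter

def bradfordize(docs):
    """Re-rank a result list so that documents from the most productive
    ('core') journals come first, preserving the original relevance
    order within each journal.

    `docs` is a list of (doc_id, journal) pairs in their original
    ranking; a journal's productivity is the number of hits it
    contributes to this result set."""
    freq = Counter(journal for _, journal in docs)
    # sorted() is stable, so ties keep their original relevance order.
    return sorted(docs, key=lambda doc: -freq[doc[1]])

results = [("d1", "J-A"), ("d2", "J-B"), ("d3", "J-A"),
           ("d4", "J-C"), ("d5", "J-A")]
# J-A contributes three hits, so its documents move to the front:
print(bradfordize(results))
# → [('d1', 'J-A'), ('d3', 'J-A'), ('d5', 'J-A'), ('d2', 'J-B'), ('d4', 'J-C')]
```

    A production system would compute journal productivity over the whole collection rather than one result set, but the re-ranking step is the same.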

    Quantitative modelling of the human–Earth System: a new kind of science?

    The five grand challenges set out for Earth System Science by the International Council for Science in 2010 require a true fusion of social science, economics and natural science—a fusion that has not yet been achieved. In this paper we propose that constructing quantitative models of the dynamics of the human–Earth system can serve as a catalyst for this fusion. We confront well-known objections to modelling societal dynamics by drawing lessons from the development of natural science over the last four centuries and applying them to social and economic science. First, we pose three questions that require real integration of the three fields of science. They concern the coupling of physical planetary boundaries via social processes; the extension of the concept of planetary boundaries to the human–Earth System; and the possibly self-defeating nature of the United Nations' Millennium Development Goals. Second, we ask whether there are regularities or 'attractors' in the human–Earth System analogous to those that prompted the search for laws of nature. We nominate some candidates and discuss why we should observe them given that human actors with foresight and intentionality play a fundamental role in the human–Earth System. We conclude that, at sufficiently large time and space scales, social processes are predictable in some sense. Third, we canvass some essential mathematical techniques that this research fusion must incorporate, and we ask what kind of data would be needed to validate or falsify our models. Finally, we briefly review the state of the art in quantitative modelling of the human–Earth System today and highlight a gap between so-called integrated assessment models applied at regional and global scale, which could be filled by a new scale of model

    GAN Hyperparameters search through Genetic Algorithm

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science.
    Recent developments in Deep Learning are remarkable when it comes to generative models, largely because of Generative Adversarial Networks (GANs) [1]. Introduced in a paper by Ian Goodfellow in 2014, GANs are machine learning models made of two neural networks, a Generator and a Discriminator, which compete with each other to generate new, synthetic instances of data that resemble the real data. Despite their great potential, their training presents challenges, including training instability, mode collapse, and vanishing gradients. Much research has addressed how to overcome these challenges; however, no significant evidence has been found that modern techniques consistently outperform the vanilla GAN. The performance of GANs is also highly dependent on the dataset they are trained on. One of the main challenges is the search for hyperparameters. In this thesis, we try to overcome this challenge by applying an evolutionary algorithm to search for the best hyperparameters for a WGAN. We use the Kullback-Leibler divergence to calculate the fitness of individuals and, in the end, select the best set of parameters generated by the evolutionary algorithm. The parameters of the best-selected individuals are maintained throughout the generations. We compare our approach with the standard hyperparameters given by the state of the art.
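    The evolutionary search the thesis describes can be sketched generically: a population of hyperparameter settings evolves through selection, crossover and mutation, with elitism carrying the best individuals forward. In the sketch below the search space and the fitness function are illustrative stand-ins; the thesis scores individuals with a Kullback-Leibler divergence computed from actual WGAN training, which is far too expensive to reproduce here:

```python
import random

# Illustrative search space; the thesis tunes WGAN hyperparameters
# such as the learning rate, batch size and critic updates per step.
SPACE = {
    "lr": [1e-5, 5e-5, 1e-4, 5e-4],
    "batch_size": [32, 64, 128],
    "n_critic": [1, 5, 10],
}

def fitness(ind):
    # Stand-in score (lower is better). In the thesis this would be
    # the KL divergence between the real and generated distributions
    # after training a WGAN with these hyperparameters.
    return abs(ind["lr"] - 5e-5) * 1e4 + abs(ind["n_critic"] - 5)

def evolve(generations=20, pop_size=10, seed=0):
    rng = random.Random(seed)
    pop = [{k: rng.choice(v) for k, v in SPACE.items()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]           # elitism: keep the best half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)        # uniform crossover
            child = {k: rng.choice([a[k], b[k]]) for k in SPACE}
            if rng.random() < 0.2:             # point mutation
                gene = rng.choice(list(SPACE))
                child[gene] = rng.choice(SPACE[gene])
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print(best)
```

    Because the elite always survives, the best fitness found never worsens across generations, which matches the thesis's note that the best individuals' parameters are maintained.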

    Bias and Error Mitigation in Software-Generated Data: An Advanced Search and Optimization Framework Leveraging Generative Code Models

    Data generation and analysis are fundamental to many industries and disciplines, from strategic decision-making in business to research in the physical and social sciences. However, data generated using software and algorithms can be subject to biases and errors. These can be due to problems with the original software, default settings that do not align with the specific needs of the situation, or even deeper problems with the underlying theories and models. This paper proposes an advanced search and optimization framework aimed at generating and choosing optimal source code capable of correcting the errors and biases of previous versions, addressing typical problems in software systems specializing in data analysis and generation, especially those in the corporate and data science worlds. Applying this framework repeatedly to the same software system would incrementally improve the quality of its output. It uses Solomonoff Induction as a sound theoretical basis, extending it with Kolmogorov Conditional Complexity, a novel adaptation, to evaluate a set of candidate programs. We propose the use of generative models for the creation of this set of programs, with special emphasis on the capabilities of Large Language Models (LLMs) to generate high-quality code.
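    Solomonoff induction itself is uncomputable, but the selection idea the paper builds on can be illustrated with a crude, computable proxy: among candidate programs that behave correctly on reference cases, prefer the one with the shortest compressed description. Everything below (the candidate sources, the zlib complexity proxy, the reference cases) is an illustrative assumption, not the paper's actual framework:

```python
import zlib

def complexity(source: str) -> int:
    # Crude, computable stand-in for Kolmogorov complexity:
    # the compressed length of the program text.
    return len(zlib.compress(source.encode()))

def score(source, fn, reference_cases):
    """Lexicographic score: first minimise behavioural errors on the
    reference cases, then minimise description length."""
    errors = sum(1 for arg, want in reference_cases if fn(arg) != want)
    return (errors, complexity(source))

# Two hypothetical generated candidates for squaring an integer.
src_a = "def f(x):\n    return x * x\n"
src_b = "def f(x):\n    return sum(x for _ in range(x))\n"  # wrong for x < 0

candidates = []
for src in (src_a, src_b):
    ns = {}
    exec(src, ns)          # compile the candidate source into a callable
    candidates.append((src, ns["f"]))

cases = [(2, 4), (-3, 9), (0, 0)]
best_src, _ = min(candidates, key=lambda c: score(c[0], c[1], cases))
print(best_src == src_a)
# → True  (src_b fails the x = -3 case)
```

    Re-running such a selection loop over successive generations of candidates is the "incremental improvement" the abstract describes.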

    Telecare acceptance as sticky entrapment: A realist review

    Background: Telecare is important in future governmental health and social care plans, and telecare acceptance is one of the factors that appear vital for uptake and thus important to understand. Different technology acceptance models have been applied but judged insufficient for assessing telecare acceptance among older people. The purpose of this paper is to review and evaluate why the existing technology acceptance models fall short when applied to telecare, and to propose an improved approach for assessing telecare acceptance. Methods: This is a realist review with iterative searches. Four search engines covering approximately 50 databases in health, social science and technology were used in each of three stepwise searches. The searches started wide, funnelling down to pursue the interesting results that emerged. In line with the realist approach, particular focus has been on context, and transparency is ensured by explicitly documenting the reasons for decisions so that readers can make their own judgments. Results & Discussion: This literature review provides evidence for the shortcomings of the existing technology acceptance models when used for assessing telecare acceptance. Applying entanglement theory to issues where technology acceptance models have proven inadequate brings new perspectives to light. These perspectives are significant for users' acceptance of telecare yet are not highlighted by technology acceptance models; they include dealing with imagined situations, fear of not handling the technology, the significance of context, and users' adjustments of the technology to better suit their needs. The identification of these dependencies appears essential for assessing telecare acceptance and has not previously been captured by technology acceptance models.

    Learning positive-negative rule-based fuzzy associative classifiers with a good trade-off between complexity and accuracy

    Nowadays, the call for transparency in Artificial Intelligence models is growing due to the need to understand how decisions derived from these methods are made when they ultimately affect human life and health. Fuzzy Rule-Based Classification Systems have been used successfully because they produce models that are easily understood by humans. However, complex search spaces hinder the learning process and, in most cases, lead to problems of complexity (coverage and specificity). This problem directly undermines the goal of enabling the user to analyze and understand the model. Because of this, we propose a fuzzy associative classification method to learn classifiers with an improved trade-off between accuracy and complexity. This method learns the most appropriate granularity of each variable to generate a set of simple fuzzy association rules with a reduced number of associations; the rules consider positive and negative dependencies so that an instance can be classified depending on the presence or absence of certain items. The proposal also chooses the most interesting rules based on several interestingness measures and finally performs a genetic rule selection and adjustment to reach the most suitable context for the selected rule set. The quality of our proposal has been analyzed using 23 real-world datasets, comparing it with other proposals by applying statistical analysis. Moreover, a study carried out on a real biomedical research problem, childhood obesity, shows the improved trade-off between the accuracy and complexity of the models generated by our proposal.
    Funding: open access charge funded by Universidad de Granada / CBUA; ERDF and the Regional Government of Andalusia/Ministry of Economic Transformation, Industry, Knowledge and Universities (grant numbers P18-RT-2248 and B-CTS-536-UGR20); ERDF and Health Institute Carlos III/Spanish Ministry of Science, Innovation and Universities (grant number PI20/00711); Spanish Ministry of Science and Innovation (grant number PID2019-107793GB-I00).
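    The positive/negative distinction the method relies on can be shown with a toy rule base: a positive rule supports its class to the degree its fuzzy antecedent is present, while a negative rule supports its class to the degree the antecedent is absent. The membership functions, rules and weights below are invented for illustration and are unrelated to the rules the proposal actually learns:

```python
def tri(a, b, c):
    """Triangular membership function peaking at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

low, high = tri(-1, 0, 5), tri(0, 5, 10)

# Each rule: (antecedents, class, weight, positive?). A negative rule
# fires to the degree its antecedent is ABSENT (1 - membership).
rules = [
    ([(0, low)],  "A", 1.0, True),   # IF x0 is low      THEN class A
    ([(0, high)], "B", 1.0, True),   # IF x0 is high     THEN class B
    ([(1, high)], "A", 0.5, False),  # IF x1 is NOT high THEN class A
]

def classify(x, rules):
    scores = {}
    for antecedents, cls, weight, positive in rules:
        deg = min(mu(x[i]) for i, mu in antecedents)
        if not positive:
            deg = 1.0 - deg
        scores[cls] = scores.get(cls, 0.0) + weight * deg
    return max(scores, key=scores.get)

print(classify([1, 8], rules))   # → A
print(classify([5, 5], rules))   # → B
```

    The actual method additionally searches over the granularity of each variable and tunes the rule base genetically; this sketch only shows how presence and absence both contribute evidence.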

    Potential Molecular Targets of Oleanolic Acid in Insulin Resistance and Underlying Oxidative Stress: A Systematic Review

    Oleanolic acid (OA) is a natural triterpene widely found in olive leaves that possesses antioxidant, anti-inflammatory, and insulin-sensitizing properties, among others. These characteristics could be of special interest for the treatment and prevention of insulin resistance (IR), but greater in-depth knowledge of the pathways involved is still needed. We aimed to systematically review the effects of OA on the molecular mechanisms and signaling pathways involved in the development of IR and the underlying oxidative stress in insulin-resistant animal models or cell lines. The bibliographic search was carried out in the PubMed, Web of Science, Scopus, Cochrane, and CINAHL databases between January 2001 and May 2022. The electronic search produced 5034 articles but, after applying the inclusion criteria, 13 animal studies and 3 cell experiments were identified; SYRCLE's Risk of Bias tool was used to assess the risk of bias of the animal studies. OA was found to enhance insulin sensitivity and glucose uptake and to suppress hepatic glucose production, probably by modulating the IRS/PI3K/Akt/FoxO1 signaling pathway and by mitigating oxidative stress through regulation of MAPK pathways. Future randomized controlled clinical trials to assess the potential benefit of OA as a new therapeutic and preventive strategy for IR are warranted.
    Funding: Andalusia 2014-2020 European Regional Development Fund (ERDF) Operative Program B-AGR-287-UGR1

    A systematic review of the applications of Expert Systems (ES) and machine learning (ML) in clinical urology.

    Background: Testing a hypothesis for a 'factors-outcome effect' is a common quest, but standard statistical regression analysis tools are rendered ineffective by data contaminated with too many noisy variables. Expert Systems (ES) can provide an alternative methodology for analysing data to identify the variables with the highest correlation to the outcome. By applying their effective machine learning (ML) abilities, significant research time and costs can be saved. The study aims to systematically review the applications of ES in urological research and their methodological models for effective multivariate analysis; their domains, development and validity are identified. Methods: The PRISMA methodology was applied to formulate an effective method for data gathering and analysis. The search included the seven most relevant information sources: Web of Science, EMBASE, BIOSIS Citation Index, Scopus, PubMed, Google Scholar and MEDLINE. Articles were eligible if they applied one of the known ML models to a clear urological research question involving multivariate analysis; only articles with pertinent research methods in ES models were included. The analysed data included the system model, applications, input/output variables, target user, validation, and outcomes. Both the ML models and the variable analysis were comparatively reported for each system. Results: The search identified n = 1087 articles from all databases, of which n = 712 were eligible for examination against the inclusion criteria. A total of 168 systems were finally included and systematically analysed, demonstrating a recent increase in the uptake of ES in academic urology, in particular artificial neural networks (31 systems). Most of the systems were applied in urological oncology (prostate cancer = 15, bladder cancer = 13), where diagnostic, prognostic and survival predictor markers were investigated. Due to the heterogeneity of the models and their statistical tests, a meta-analysis was not feasible. Conclusion: ES offer effective ML potential, and their applications in research have demonstrated a valid model for multivariate analysis. The complexity of their development can challenge their uptake in urological clinics, whilst the limitations of the statistical tools in this domain have created a gap for further research. The integration of computer scientists into academic units has promoted the use of ES in clinical urological research.
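    The screening task the review describes, identifying the variables most correlated with an outcome in noisy data, can be sketched with a plain Pearson-correlation ranking. The reviewed systems use far richer ML models, and the variable names here are hypothetical:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def rank_by_correlation(variables, outcome):
    """Rank candidate predictors by |r| with the outcome: the
    noisy-variable screening the review attributes to ES/ML."""
    return sorted(variables,
                  key=lambda v: -abs(pearson(variables[v], outcome)))

outcome = [1, 2, 3, 4, 5]
variables = {
    "age":   [2, 4, 6, 8, 10],   # perfectly correlated
    "noise": [5, 1, 4, 2, 3],    # weakly (anti-)correlated
    "psa":   [10, 8, 6, 4, 2],   # perfectly anti-correlated
}
print(rank_by_correlation(variables, outcome))
# → ['age', 'psa', 'noise']
```

    A variable screened out this way may still matter through interactions, which is exactly why the reviewed systems move beyond univariate correlation to models such as artificial neural networks.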
