3,140 research outputs found

    Improving the Prediction of Clinical Success Using Machine Learning

    In pharmaceutical research, assessing drug candidates’ odds of success as they move through clinical research often relies on crude methods based on historical data. However, the rapid progress of machine learning offers a new tool for identifying the more promising projects. To evaluate its usefulness, we trained and validated several machine learning algorithms on a large database of projects. Using various project descriptors as input data, we were able to predict the clinical success and failure rates of projects with an average balanced accuracy of 83% to 89%, which compares favorably with the 56% to 70% balanced accuracy of the method based on historical data. We also identified the variables that contributed most to trial success and used the algorithm to predict the success (or failure) of assets currently in the industry pipeline. We conclude by discussing how pharmaceutical companies can use such a model to improve the quantity and quality of their new drugs, and how the broad adoption of this technology could reduce the industry’s risk profile, with important consequences for industry structure, R&D investment, and the cost of innovation.
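    The headline metric in this abstract, balanced accuracy, is simply the mean of the per-class recalls (sensitivity and specificity for a binary success/failure outcome), which makes it robust to the class imbalance typical of clinical-outcome data. A minimal sketch in plain Python, using toy labels rather than the paper's actual project descriptors:

```python
# Balanced accuracy = mean of per-class recall, so a classifier that
# always predicts the majority class scores only 0.5 (toy data below).
y_true = [1, 1, 1, 0, 0, 0, 0, 0]  # 1 = clinical success, 0 = failure
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

sensitivity = tp / (tp + fn)   # recall on successes
specificity = tn / (tn + fp)   # recall on failures
balanced_accuracy = (sensitivity + specificity) / 2
print(balanced_accuracy)  # → 0.7333333333333334
```

    On these toy labels, sensitivity is 2/3 and specificity is 4/5, giving a balanced accuracy of about 0.733; plain accuracy would be 6/8 = 0.75, which overweights the larger failure class.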

    Spatial prediction of landslide susceptibility/intensity through advanced statistical approaches implementation: applications to the Cinque Terre (Eastern Liguria, Italy)

    Landslides are frequently responsible for considerable economic losses and casualties in mountainous regions, especially nowadays as development expands into unstable hillslope areas under the pressures of increasing population size and urbanization (Di Martire et al. 2012). People are not the only vulnerable targets of landslides: mass movements can easily lay waste to everything in their path, threatening human property, infrastructure and natural environments. Italy is among the European countries most severely affected by landslide phenomena, and it is particularly concerned with forecasting landslide effects (Calcaterra et al. 2003b), in compliance with National Law n. 267/98, enforced after the devastating landslide event of Sarno (Campania, Southern Italy). According to the latest report on "hydrogeological instability" by the Superior Institute for Environmental Protection and Research (ISPRA, 2018), the population exposed to landslide risk exceeds 5 million, of which almost half a million live in very high hazard zones. Slope stability can be compromised by both natural and human-induced changes in the environment. The main natural causes are heavy rainfall, earthquakes, rapid snow-melt, slope undercutting by erosion, and variations in groundwater level, whereas slope steepening through construction, quarrying, house building, and farming along the foot of mountain slopes make up the human component. This Ph.D. thesis was carried out in the Liguria region, inside the Cinque Terre National Park. This area was chosen for its abundance of different types of landslides and its geological, geomorphological and urban characteristics. The Cinque Terre area can be considered one of the most representative examples of a human-modified landscape.
Starting from the early centuries of the Middle Ages, local farmers almost completely modified the original slope topography through the construction of dry-stone walls, creating an outstanding terraced coastal landscape (Terranova 1984, 1989; Terranova et al. 2006; Brandolini 2017). This territory is extremely dynamic, since it is characterized by a complex geological and geomorphological setting in which many surficial geomorphic processes coexist, along with peculiar weather conditions (Cevasco et al. 2015). For this reason, part of this research focused on analyzing the disaster that hit the Cinque Terre on October 25th, 2011, when hundreds of shallow landslides were triggered almost simultaneously within 5-6 hours, causing 13 victims and severe structural and economic damage (Cevasco et al. 2012; D'Amato Avanzi et al. 2013). Moreover, this artificial landscape has experienced important land-use changes over the last century (Cevasco et al. 2014; Brandolini 2017), mostly related to the abandonment of agricultural activity. It is known that terraced landscapes, when no longer properly maintained, become more prone to erosion processes and mass movements (Lesschen et al. 2008; Brandolini et al. 2018a; Moreno-de-las-Heras et al. 2019; Seeger et al. 2019). Within the context of slope instability, the international community has focused over the last decade on recognising the landslide susceptibility/hazard of a given area of interest. Landslide susceptibility predicts "where" landslides are likely to occur, whereas landslide hazard evaluates future spatial and temporal mass movement occurrence (Guzzetti et al., 1999), although the two terms are often, and incorrectly, used interchangeably. This recognition phase is crucial for land use planning activities aimed at the protection of people and infrastructure.
In fact, only with proper risk assessment can governments, regional institutions, and municipalities prepare the appropriate countermeasures at different scales. Thus, landslide susceptibility is the keystone of a long chain of procedures actively implemented to manage landslide risk at all levels, especially in vulnerable areas such as Liguria. The methods implemented in this dissertation have the overall objective of evaluating advanced algorithms for modeling landslide susceptibility. The thesis is structured in six chapters. The first chapter introduces and motivates the work conducted over the three years of the project, including the research objectives. The second chapter gives the basic concepts related to landslides (definition, classification and causes, and landslide inventories), along with the derived products: susceptibility, hazard and risk zoning, with particular attention to the evaluation of landslide susceptibility. The third chapter defines the different methodologies, algorithms and procedures applied during the research activity. The fourth chapter deals with the geographical, geological and geomorphological features of the study area. The fifth chapter presents the results of applying these methodologies to the study area: machine learning algorithms, a runout method and a Bayesian approach, together with critical discussion of the outcomes obtained. The sixth chapter presents the discussion and conclusions of this research, critically analysing the role of this work within the general panorama of the scientific community and illustrating possible future perspectives.

    Amygdala Modeling with Context and Motivation Using Spiking Neural Networks for Robotics Applications

    Cognitive capabilities for robotic applications are furthered by developing an artificial amygdala that mimics biology. The amygdala portion of the brain is commonly understood to control mood and behavior based upon sensory inputs, motivation, and context. This research builds upon prior work in creating artificial intelligence for robotics that focused on mood-generated actions; however, recent amygdala research suggests that greater functionality is still missing. This work developed a computational model of an amygdala, integrated this model into a robot model, and comprehensively integrated the robot for both simulation and live embodiment. The developed amygdala, instantiated in the Nengo Brain Maker environment, leveraged spiking neural networks and the semantic pointer architecture to allow the abstraction of neuron ensembles into high-level concept vocabularies. Testing and validation were performed on a TurtleBot in both simulated (Gazebo) and live settings, and results were compared to a baseline with a simplistic, amygdala-like model, using metrics of nearest distance and nearest time. The amygdala model is shown to outperform the baseline in simulation, with a 70.8% improvement in nearest distance and a 4% improvement in nearest time, and in live testing, with a 62.4% improvement in nearest distance. Notably, this performance occurred despite a five-fold increase in architecture size and complexity.

    On Experimentation in Software-Intensive Systems

    Context: Delivering software that has value to customers is a primary concern of every software company. Prevalent in web-facing companies, controlled experiments are used to validate and deliver value in incremental deployments. At the same time that web-facing companies are aiming to automate and reduce the cost of each experiment iteration, embedded systems companies are starting to adopt experimentation practices and to build on the automation developments made in the online domain. Objective: This thesis has two main objectives. The first objective is to analyze how software companies can run and optimize their systems through automated experiments. This objective is investigated from the perspectives of the software architecture, the algorithms for experiment execution, and the experimentation process. The second objective is to analyze how non-web-facing companies can adopt experimentation as part of their development process to validate and deliver value to their customers continuously. This objective is investigated from the perspective of the software development process and focuses on the experimentation aspects that are distinct from web-facing companies. Method: To achieve these objectives, we conducted research in close collaboration with industry and used a combination of different empirical research methods: case studies, literature reviews, simulations, and empirical evaluations. Results: This thesis provides six main results. First, it proposes an architecture framework for automated experimentation that can be used with different types of experimental designs in both embedded systems and web-facing systems. Second, it proposes a new experimentation process to capture the details of a trustworthy experimentation process that can be used as the basis for an automated experimentation process. Third, it identifies the restrictions and pitfalls of different multi-armed bandit algorithms for automating experiments in industry.
This thesis also proposes a set of guidelines to help practitioners select a technique that minimizes the occurrence of these pitfalls. Fourth, it proposes statistical models to analyze optimization algorithms that can be used in automated experimentation. Fifth, it identifies the key challenges faced by embedded systems companies when adopting controlled experimentation, and it proposes a set of strategies to address these challenges. Sixth, it identifies experimentation techniques and proposes a new continuous experimentation model for mission-critical and business-to-business systems. Conclusion: The results presented in this thesis indicate that the trustworthiness of the experimentation process and the selection of algorithms still need to be addressed before automated experimentation can be used at scale in industry. The embedded systems industry faces challenges in adopting experimentation as part of its development process. In part, this is due to the low number of users and devices that can be used in experiments and the diversity of the experimental designs required for each new situation. This limitation increases both the complexity of the experimentation process and the number of techniques needed to address this constraint.
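    The multi-armed bandit pitfalls this abstract refers to concern how an algorithm trades exploration against exploitation while an experiment is still running. A minimal epsilon-greedy sketch in plain Python (an illustrative algorithm with hypothetical conversion rates, not the thesis's actual techniques or data):

```python
import random

def epsilon_greedy(rewards_by_arm, n_rounds=10_000, epsilon=0.1, seed=42):
    """Simulate epsilon-greedy assignment over Bernoulli arms.

    rewards_by_arm: true success probability of each variant
    (hypothetical values chosen only for illustration).
    """
    rng = random.Random(seed)
    counts = [0] * len(rewards_by_arm)   # pulls per arm
    values = [0.0] * len(rewards_by_arm) # running mean reward per arm
    for _ in range(n_rounds):
        if rng.random() < epsilon:                # explore: random arm
            arm = rng.randrange(len(rewards_by_arm))
        else:                                     # exploit: current best arm
            arm = max(range(len(values)), key=values.__getitem__)
        reward = 1.0 if rng.random() < rewards_by_arm[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts, values

counts, values = epsilon_greedy([0.05, 0.04, 0.07])
```

    Because the greedy step always favors the arm with the highest running mean, a few unlucky early draws can starve a genuinely better arm of traffic — exactly the kind of pitfall that practitioner guidelines for automated experimentation have to address.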

    Choreographic and Somatic Approaches for the Development of Expressive Robotic Systems

    As robotic systems move out of factory work cells into human-facing environments, questions of choreography become central to their design, placement, and application. With a human viewer or counterpart present, a system's context, style of movement, and form factor will automatically be interpreted by human beings, who read it as an animate element of their environment. This interpretation by the human counterpart is critical to the success of the system's integration: knobs on the system need to make sense to a human counterpart; an artificial agent should have a way of notifying a human counterpart of a change in system state, possibly through motion profiles; and the motion of a human counterpart may carry important contextual clues for task completion. Thus, professional choreographers, dance practitioners, and movement analysts are critical to research in robotics. They have design methods for movement that align with human audience perception, can identify simplified features of movement for human-robot interaction goals, and have detailed knowledge of the capacity of human movement. This article presents approaches employed by one research lab, their specific impacts on technical and artistic projects within it, and principles that may guide future such work. The background section reports on choreography, somatic perspectives, improvisation, the Laban/Bartenieff Movement System, and robotics. From this context, methods including embodied exercises, writing prompts, and community building activities have been developed to facilitate interdisciplinary research. The results of this work are presented as an overview of a range of projects in areas like high-level motion planning, software development for rapid prototyping of movement, artistic output, and user studies that help understand how people interpret movement.
Finally, guiding principles for other groups to adopt are posited.
    Comment: Under review at MDPI Arts Special Issue "The Machine as Artist (for the 21st Century)" http://www.mdpi.com/journal/arts/special_issues/Machine_Artis

    Integrated Applications of Geo-Information in Environmental Monitoring

    This book focuses on fundamental and applied research on geo-information technology, notably optical and radar remote sensing and algorithm improvements, and their applications in environmental monitoring. This Special Issue presents ten high-quality research papers covering up-to-date research in land cover change and desertification analyses, geo-disaster risk and damage evaluation, mining area restoration assessments, the improvement and development of algorithms, and coastal environmental monitoring and object targeting. The purpose of this Special Issue is to promote exchange and communication, to share the research outcomes of scientists worldwide, and to bridge the gap between scientific research and its applications, thereby advancing and improving society.

    The Effect Of Delayed Comparison In The Language Laboratory On Phoneme Discrimination And Pronunciation Accuracy

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/98178/1/j.1467-1770.1970.tb00046.x.pd

    Using contextual information to understand searching and browsing behavior

    There is a great imbalance between the richness of information on the web and the succinctness and poverty of web users' search requests, which makes their queries only a partial description of the underlying complex information needs. Finding ways to better leverage contextual information and make search context-aware holds the promise of dramatically improving the search experience of users. We conducted a series of studies to discover, model and utilize contextual information in order to understand and improve users' searching and browsing behavior on the web. Our results capture important aspects of context under the realistic conditions of different online search services, aiming to ensure that our scientific insights and solutions transfer to the operational settings of real-world applications.

    Statistical Challenges in Online Controlled Experiments: A Review of A/B Testing Methodology

    The rise of internet-based services and products in the late 1990s brought about an unprecedented opportunity for online businesses to engage in large-scale data-driven decision making. Over the past two decades, organizations such as Airbnb, Alibaba, Amazon, Baidu, Booking, Alphabet's Google, LinkedIn, Lyft, Meta's Facebook, Microsoft, Netflix, Twitter, Uber, and Yandex have invested tremendous resources in online controlled experiments (OCEs) to assess the impact of innovation on their customers and businesses. Running OCEs at scale has presented a host of challenges requiring solutions from many domains. In this paper we review challenges that require new statistical methodologies to address them. In particular, we discuss the practice and culture of online experimentation, as well as its statistics literature, placing the current methodologies within their relevant statistical lineages and providing illustrative examples of OCE applications. Our goal is to raise academic statisticians' awareness of these new research opportunities to increase collaboration between academia and the online industry.
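    The basic analysis behind the simplest OCE in this review's scope is a two-sample test on a metric such as conversion rate. A minimal sketch of a pooled two-proportion z-test in plain Python (toy counts for illustration, not data from any company named above):

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between
    control (A) and treatment (B), using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 5.0% vs 5.5% conversion with 20,000 users per arm
z, p = two_proportion_ztest(conv_a=1000, n_a=20_000, conv_b=1100, n_b=20_000)
```

    On these toy counts the lift is significant at the 5% level (z ≈ 2.24, p ≈ 0.025). The statistical challenges the paper reviews arise precisely where this textbook setup breaks down: interference between units, sequential monitoring, heterogeneous effects, and variance reduction.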