35 research outputs found

    Connecting with Youth at Risk: Indigenous Organizations Use of Facebook

    A qualitative study in which we conducted four interviews with two communication managers and two youth program managers from three indigenous organizations with offices in Ottawa. The data generated from the interviews were coded according to factors identified through thematic analysis. Indigenous organizations use Facebook for two main reasons. The first is to promote the organizations' work to the public and, in turn, to listen to the public's opinions on news related to indigenous peoples' wellbeing. The second is to engage urban indigenous youth at risk with indigenous organizations that provide social programs and outreach. Indigenous organizations use Facebook because many urban indigenous youth in Ottawa use it, and it is the fastest way to connect with them when they are, or feel, at risk.

    Sources of Irreproducibility in Machine Learning: A Review

    Lately, several benchmark studies have shown that the state of the art in some sub-fields of machine learning has not actually progressed, despite progress being reported in the literature. This lack of progress is partly caused by the irreproducibility of many model comparison studies: such studies are often conducted without controlling for known sources of irreproducibility, which leads to results that cannot be verified by third parties. Our objective is to provide an overview of the sources of irreproducibility reported in the literature. We review the literature to provide an overview and a taxonomy, together with a discussion of the identified sources of irreproducibility. Finally, we identify three lines of further inquiry.
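
    The sources discussed in such reviews typically include things like unseeded randomness and shifting data splits. As a minimal illustration (not taken from the paper), the sketch below pins two such sources, the random seed and the train/test split, so that a simple two-model comparison can be rerun identically by a third party; the dataset and models are placeholders.

        # Minimal sketch: pinning two common sources of irreproducibility
        # (the random seed and the train/test split) in a model comparison.
        # The dataset and models are placeholders, not those from the paper.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        SEED = 42  # fixed seed so the comparison can be rerun by third parties

        X, y = load_breast_cancer(return_X_y=True)
        # A fixed split: both models see exactly the same data partition.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=SEED)

        models = {
            "logreg": LogisticRegression(max_iter=5000, random_state=SEED),
            "forest": RandomForestClassifier(n_estimators=200, random_state=SEED),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            print(name, accuracy_score(y_te, model.predict(X_te)))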

    REFORMS: Reporting Standards for Machine Learning Based Science

    Machine learning (ML) methods are proliferating in scientific research. However, the adoption of these methods has been accompanied by failures of validity, reproducibility, and generalizability. These failures can hinder scientific progress, lead to false consensus around invalid claims, and undermine the credibility of ML-based science. ML methods are often applied and fail in similar ways across disciplines. Motivated by this observation, our goal is to provide clear reporting standards for ML-based science. Drawing from an extensive review of past literature, we present the REFORMS checklist (Reporting Standards For Machine Learning Based Science). It consists of 32 questions and a paired set of guidelines. REFORMS was developed based on a consensus of 19 researchers across computer science, data science, mathematics, social sciences, and biomedical sciences. REFORMS can serve as a resource for researchers when designing and implementing a study, for referees when reviewing papers, and for journals when enforcing standards for transparency and reproducibility.
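
    One way such a checklist could be used in practice is as a machine-readable artifact that authors or referees tick off. The sketch below is purely illustrative: the three questions are hypothetical paraphrases, not items from the actual 32-question REFORMS checklist.

        # Hypothetical sketch of a machine-readable reporting checklist in the
        # spirit of REFORMS; these three items are illustrative, not the actual
        # 32 questions from the paper.
        checklist = [
            "Is the data collection process described?",
            "Are train/test splits reported?",
            "Is the code needed to reproduce the results available?",
        ]

        def report_coverage(answers):
            """answers maps each checklist question to True/False."""
            missing = [q for q in checklist if not answers.get(q)]
            print(f"{len(checklist) - len(missing)}/{len(checklist)} items addressed")
            for q in missing:
                print("missing:", q)

        report_coverage({checklist[0]: True, checklist[2]: True})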

    An Ensemble Method: Case-Based Reasoning and the Inverse Problems in Investigating Financial Bubbles

    This paper presents an ensemble approach and model, IPCBR, that leverages the capabilities of Case-Based Reasoning (CBR) and Inverse Problem Techniques (IPTs) to describe and model abnormal stock market fluctuations (often associated with asset bubbles) in time series datasets of historical stock market prices. The framework uses a rich set of past observations and geometric pattern descriptions, applying CBR to formulate the forward problem; an inverse problem formulation is then applied to identify a set of parameters that can be statistically associated with the occurrence of the observed patterns. The technique brings a novel perspective to the problem of asset bubble predictability. Conventional research practice uses traditional forward approaches to predict abnormal fluctuations in financial time series; conversely, this work proposes a formative strategy aimed at determining the causes of the observed behaviour rather than predicting future time series points. This marks a deviation from the existing research trend.
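
    As a rough illustration of the case-retrieval step that CBR relies on, the sketch below stores past price windows as cases and retrieves the windows most similar in shape to a query window. The window length, distance measure, and synthetic prices are assumptions; the paper's inverse-problem parameter estimation is not shown.

        # Minimal sketch of the case-retrieval step only: past price windows are
        # stored as cases and the nearest ones to a query window are retrieved.
        # Window length, distance measure, and the synthetic prices are assumptions;
        # the inverse-problem parameter estimation described in the paper is not shown.
        import numpy as np

        def to_windows(prices, length=30):
            prices = np.asarray(prices, dtype=float)
            return np.array([prices[i:i + length] for i in range(len(prices) - length + 1)])

        def retrieve_cases(case_base, query, k=3):
            # normalise each window so retrieval compares shape (the geometric
            # pattern), not absolute price level
            def norm(w):
                return (w - w.mean(axis=-1, keepdims=True)) / (w.std(axis=-1, keepdims=True) + 1e-9)
            dists = np.linalg.norm(norm(case_base) - norm(query), axis=1)
            return np.argsort(dists)[:k]

        rng = np.random.default_rng(0)
        history = np.cumsum(rng.normal(0, 1, 500)) + 100   # synthetic price history
        cases = to_windows(history)
        query = cases[-1]                                   # most recent window
        print("nearest past windows:", retrieve_cases(cases[:-1], query))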

    Standing on the feet of giants - Reproducibility in AI

    A recent study implies that research presented at top artificial intelligence conferences is not documented well enough for the research to be reproduced. My objective was to investigate whether the quality of the documentation is the same for industry and academic research, or whether differences actually exist. My hypothesis is that industry and academic research presented at top artificial intelligence conferences is equally well documented. A total of 325 International Joint Conferences on Artificial Intelligence and Association for the Advancement of Artificial Intelligence research papers reporting empirical studies were surveyed. Of these, 268 were conducted by academia, 47 were collaborations, and 10 were conducted by industry. A set of 16 variables specifying how well the research is documented was reviewed for each paper, and each variable was analyzed individually. Three reproducibility metrics were used to assess the documentation quality of each paper. The findings indicate that academic research scores higher than industry and collaborations on all three reproducibility metrics. Academic research also scores highest on 15 out of the 16 surveyed variables. The result is statistically significant for 3 of the 16 variables, but for none of the reproducibility metrics. The conclusion is that the results are not statistically significant, but they still indicate that my hypothesis should probably be refuted. This is surprising, as the conferences use double-blind peer review and all research is judged according to the same standards.
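
    For the per-variable significance testing, one plausible reading is a contingency-table test on how often a given documentation variable is satisfied in each group. The sketch below uses Fisher's exact test with made-up counts; the paper's actual counts and choice of test are not reproduced here.

        # Hypothetical sketch of a per-variable significance test: compare how often
        # one documentation variable (e.g. "training data described") is satisfied
        # in academic vs. industry papers. The counts below are invented.
        from scipy.stats import fisher_exact

        # [variable reported, variable not reported]
        academia = [210, 58]   # hypothetical counts for the 268 academic papers
        industry = [5, 5]      # hypothetical counts for the 10 industry papers

        odds_ratio, p_value = fisher_exact([academia, industry])
        print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")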

    Out-of-the-Box Reproducibility: A Survey of Machine Learning Platforms

    Even machine learning experiments that are conducted entirely on computers are not necessarily reproducible. An increasing number of open-source and commercial, closed-source machine learning platforms are being developed to help address this problem. However, there is no standard for assessing and comparing which features are required to fully support reproducibility. We propose a quantitative method that alleviates this problem. Based on the proposed method, we assess and compare current state-of-the-art machine learning platforms on how well they support making empirical results reproducible. Our results show that BEAT and Floydhub have the best support for reproducibility, with Codalab and Kaggle as close contenders. The most commonly used machine learning platforms, provided by the big tech companies, have poor support for reproducibility.
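
    A minimal sketch of what such a quantitative assessment could look like: each platform gets a score from the fraction of reproducibility-related features it supports. The feature list, platform names, and answers below are hypothetical, not the paper's actual criteria or results.

        # Hypothetical sketch of a quantitative scoring of platforms: each platform
        # is scored on whether it supports a set of reproducibility features.
        # The feature list and the platform answers are illustrative only.
        features = ["versioned code", "versioned data", "pinned environment", "logged results"]

        platforms = {
            "PlatformA": {"versioned code": 1, "versioned data": 1, "pinned environment": 1, "logged results": 0},
            "PlatformB": {"versioned code": 1, "versioned data": 0, "pinned environment": 0, "logged results": 1},
        }

        for name, support in platforms.items():
            score = sum(support.get(f, 0) for f in features) / len(features)
            print(f"{name}: {score:.2f}")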

    Multiagent Based Problem-solving in a Mobile Environment

    As users become more mobile and information services are increasingly personalised, the need for computer systems to adapt to the user is also increasing. One important aspect of this adaptation is the ability to solve problems in a dynamic environment. We propose a multiagent-based approach to dynamic planning and problem solving. This approach allows plans to be constructed from the resources available in the environment. We demonstrate the capabilities of this system with an implemented prototype.
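
    As a loose sketch of the idea (not the paper's actual protocol), plan construction from available resources can be pictured as matching each step of a goal to an agent that currently advertises the required capability; agent names and capabilities below are hypothetical.

        # Hypothetical sketch of plan construction from the resources currently
        # available in the environment: each agent advertises capabilities, and a
        # plan is assembled by matching goal steps to agents that can perform them.
        agents = {
            "printer_agent":  {"print_document"},
            "map_agent":      {"find_route"},
            "calendar_agent": {"book_meeting"},
        }

        def build_plan(goal_steps):
            plan = []
            for step in goal_steps:
                provider = next((a for a, caps in agents.items() if step in caps), None)
                if provider is None:
                    return None  # no agent offers this step; replanning would be needed
                plan.append((step, provider))
            return plan

        print(build_plan(["find_route", "book_meeting"]))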