
    High-Dimensional Analysis of Single-Cell Flow Cytometry Data Predicts Relapse in Childhood Acute Lymphoblastic Leukaemia

    B-cell Acute Lymphoblastic Leukaemia is one of the most common cancers in childhood, with 20% of patients eventually relapsing. Flow cytometry is routinely used for diagnosis and follow-up, but it currently provides no prognostic value at diagnosis. The volume and high-dimensional character of these data make them well suited to analysis with Artificial Intelligence methods. We collected flow cytometry data from 56 patients from two hospitals and analysed differences in marker expression intensity in order to predict relapse at the moment of diagnosis. We finally correlated these data with biomolecular information, constructing a classifier based on CD38 expression.

    Artificial intelligence methods may help unveil information hidden in high-dimensional oncological data. Flow cytometry studies of haematological malignancies provide quantitative data with the potential to be used for the construction of response biomarkers. Many computational methods from the bioinformatics toolbox can be applied to these data, but they have not been exploited to their full potential in leukaemias, specifically in the case of childhood B-cell Acute Lymphoblastic Leukaemia. In this paper, we analysed flow cytometry data obtained at diagnosis from 56 paediatric B-cell Acute Lymphoblastic Leukaemia patients from two local institutions. Our aim was to assess the prognostic potential of immunophenotypical marker expression intensity. We constructed classifiers based on Fisher's ratio to quantify differences between patients with relapsing and non-relapsing disease, and we correlated this with genetic information. The main result arising from the data is an association between subexpression of the marker CD38 and the probability of relapse.
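    As a minimal sketch of the Fisher's-ratio ranking the abstract describes: for each marker, the ratio compares the squared difference of class means to the sum of class variances, so markers that separate relapsing from non-relapsing patients score highly. The data below are synthetic and the marker list, relapse rate and threshold-free ranking are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def fisher_ratio(x_pos, x_neg):
    """Fisher's ratio: (difference of class means)^2 / (sum of class variances)."""
    return (x_pos.mean() - x_neg.mean()) ** 2 / (x_pos.var() + x_neg.var())

def rank_markers(X, relapse, marker_names):
    """Score each marker column by how well it separates the two patient groups."""
    scores = {name: fisher_ratio(X[relapse, j], X[~relapse, j])
              for j, name in enumerate(marker_names)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Synthetic stand-in data: 56 patients (as in the study), ~20% relapse rate.
rng = np.random.default_rng(0)
markers = ["CD38", "CD10", "CD19", "CD45"]          # illustrative marker panel
X = rng.normal(size=(56, len(markers)))             # marker expression intensities
relapse = rng.random(56) < 0.2                      # boolean relapse labels
X[relapse, 0] -= 1.0                                # simulate CD38 subexpression in relapse
print(rank_markers(X, relapse, markers))            # CD38 should rank first
```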

    Design reuse research: a computational perspective

    This paper gives an overview of some computer-based systems that focus on supporting engineering design reuse. Design reuse is considered here to reflect the utilisation of any knowledge gained from a design activity, not just past designs of artefacts. A design reuse process model, containing three main processes and six knowledge components, is used as a basis to identify the main areas of contribution from the systems. From this it can be concluded that while reuse libraries and design by reuse have received the most attention, design for reuse, domain exploration and five of the other knowledge components lack research effort.

    Parameterized Algorithmics for Computational Social Choice: Nine Research Challenges

    Computational Social Choice is an interdisciplinary research area involving Economics, Political Science, and Social Science on the one side, and Mathematics and Computer Science (including Artificial Intelligence and Multiagent Systems) on the other side. Typical computational problems studied in this field include the vulnerability of voting procedures to attacks and preference aggregation in multi-agent systems. Parameterized Algorithmics is a subfield of Theoretical Computer Science that seeks to exploit meaningful problem-specific parameters in order to identify tractable special cases of problems that are computationally hard in general. In this paper, we propose nine of our favorite research challenges concerning the parameterized complexity of problems appearing in this context.
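    To illustrate the parameterized viewpoint on a typical problem from this field, the sketch below brute-forces single-voter manipulation under the Borda rule: can one strategic voter make a distinguished candidate a co-winner? Exhaustive search over all m! ballots is exponential only in the number of candidates m, a natural parameter that is often small, giving an f(m)·poly(n) algorithm. The profile, rule and function names are illustrative assumptions, not taken from the paper.

```python
from itertools import permutations

def borda_scores(profile, candidates):
    """Borda rule: a candidate ranked i-th (0-based) among m candidates gets m-1-i points."""
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in profile:
        for i, c in enumerate(ranking):
            scores[c] += m - 1 - i
    return scores

def one_manipulator_suffices(profile, candidates, p):
    """Try every possible ballot for a single strategic voter.
    Running time O(m! * n * m): fixed-parameter tractable in the number
    of candidates m, however many voters n the profile contains."""
    for ballot in permutations(candidates):
        scores = borda_scores(profile + [ballot], candidates)
        if scores[p] == max(scores.values()):   # p becomes a co-winner
            return True
    return False

# Toy profile: three honest voters over candidates a, b, c.
profile = [("a", "b", "c"), ("a", "c", "b"), ("b", "c", "a")]
print(one_manipulator_suffices(profile, ("a", "b", "c"), "c"))  # True
```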

    Multi-layer Architecture For Storing Visual Data Based on WCF and Microsoft SQL Server Database

    In this paper we present a novel architecture for storing visual data. Effective storing, browsing and searching of image collections is one of the most important challenges of computer science. The design of an architecture for storing such data requires a set of tools and frameworks, such as SQL database management systems and service-oriented frameworks. The proposed solution is based on a multi-layer architecture, which allows any component to be replaced without recompiling the other components. The approach contains five components, i.e. Model, Base Engine, Concrete Engine, CBIR service and Presentation, based on two well-known design patterns: Dependency Injection and Inversion of Control. For experimental purposes we implemented the SURF local interest point detector as a feature extractor and k-means clustering as an indexer. The presented architecture is intended for content-based retrieval system simulation purposes as well as for real-world CBIR tasks.

    Comment: Accepted for the 14th International Conference on Artificial Intelligence and Soft Computing, ICAISC, June 14-18, 2015, Zakopane, Poland
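    A minimal sketch of the dependency-injection idea behind this layered design: the CBIR service receives its feature extractor and indexer through its constructor, so either "concrete engine" can be swapped without touching the service. This is an illustrative reconstruction, not the authors' WCF/SQL Server implementation; ORB stands in for SURF (which requires the non-free opencv-contrib build), and all class names are assumptions.

```python
from abc import ABC, abstractmethod
import cv2
import numpy as np
from sklearn.cluster import KMeans

class FeatureExtractor(ABC):
    """Abstract 'Base Engine' contract the service depends on."""
    @abstractmethod
    def extract(self, image_path):
        """Return an (n, d) float32 array of local descriptors, or None."""

class OrbExtractor(FeatureExtractor):
    """Stand-in 'Concrete Engine': ORB local features instead of SURF."""
    def __init__(self):
        self._orb = cv2.ORB_create()
    def extract(self, image_path):
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        _, desc = self._orb.detectAndCompute(img, None)
        return None if desc is None else desc.astype(np.float32)

class KMeansIndexer:
    """Bag-of-visual-words indexer: quantise descriptors against k centroids."""
    def __init__(self, k=64):
        self.k = k
        self._kmeans = KMeans(n_clusters=k, n_init=10)
    def fit(self, descriptor_arrays):
        self._kmeans.fit(np.vstack(descriptor_arrays))
    def histogram(self, descriptors):
        words = self._kmeans.predict(descriptors)
        hist = np.bincount(words, minlength=self.k).astype(float)
        return hist / hist.sum()

class CbirService:
    """Depends only on the injected components; swapping engines needs no edits here."""
    def __init__(self, extractor, indexer):
        self.extractor = extractor
        self.indexer = indexer
    def build_index(self, image_paths):
        pairs = []
        for path in image_paths:
            desc = self.extractor.extract(path)
            if desc is not None:
                pairs.append((path, desc))
        self.indexer.fit([d for _, d in pairs])
        return {path: self.indexer.histogram(d) for path, d in pairs}

# Usage: service = CbirService(OrbExtractor(), KMeansIndexer(k=64))
#        index = service.build_index(["img1.jpg", "img2.jpg"])
```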