
    DMN for Data Quality Measurement and Assessment

    Data quality assessment evaluates the suitability of a dataset for an intended task. The extensive literature on data quality describes methodologies for assessing quality by profiling whole datasets. Our work addresses the need to automatically assess the quality level of the individual records of a dataset, where data profiling tools do not provide an adequate level of information. Since it is often easier to describe when a record has sufficient quality than to calculate a quality indicator, we propose a semi-automatic, business-rule-guided data quality assessment methodology that operates on every record. It involves first listing the business rules that describe the data (data requirements), then those describing how to produce measures (business rules for data quality measurements), and finally those defining how to assess the level of data quality of a dataset (business rules for data quality assessment). The main contribution of this paper is the adoption of the OMG standard DMN (Decision Model and Notation) to describe data quality requirements and to assess them automatically using existing DMN engines.

    Funding: Ministerio de Ciencia y Tecnología RTI2018-094283-B-C33 and RTI2018-094283-B-C31; European Regional Development Fund SBPLY/17/180501/00029.
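    To make the three rule layers concrete, here is a minimal Python sketch of record-level, rule-guided quality assessment in the spirit of the abstract. The record fields, rules, and threshold are hypothetical, and a real deployment would express the rules as DMN decision tables evaluated by a DMN engine rather than as Python functions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Record:
    name: str
    age: int
    email: str

# 1. Data requirements: business rules that say when a field value is
# acceptable. Rule names and checks are illustrative placeholders.
requirements: dict[str, Callable[[Record], bool]] = {
    "name_present": lambda r: bool(r.name.strip()),
    "age_plausible": lambda r: 0 <= r.age <= 120,
    "email_wellformed": lambda r: "@" in r.email,
}

def measure(record: Record) -> float:
    # 2. Business rules for data quality measurement: here, simply the
    # fraction of requirements the record satisfies.
    passed = sum(rule(record) for rule in requirements.values())
    return passed / len(requirements)

def assess(record: Record, threshold: float = 1.0) -> bool:
    # 3. Business rules for data quality assessment: the record "has quality
    # enough" when its measure reaches the agreed threshold.
    return measure(record) >= threshold

print(assess(Record("Ada", 36, "ada@example.org")))  # True
print(assess(Record("", 200, "not-an-email")))       # False
```

    Keeping the three layers separate mirrors the methodology: the requirements can change per dataset while the measurement and assessment logic stay fixed.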

    Brain Connectivity Patterns Dissociate Action of Specific Acupressure Treatments in Fatigued Breast Cancer Survivors

    Funding: This work was supported by grants R01 CA151445 and 2UL1 TR000433-06 from the National Institutes of Health. The funding source had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; the preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication. We thank Dr. Bradley Foerster for his expert assistance in the acquisition of 1H-MRS and fMRI data.

    Dual tasking in Parkinson's disease: cognitive consequences while walking

    Published in final edited form as: Neuropsychology. 2017 September; 31(6): 613–623. doi:10.1037/neu0000331.

    OBJECTIVE: Cognitive deficits are common in Parkinson's disease (PD) and exacerbate the functional limitations imposed by PD's hallmark motor symptoms, including impairments in walking. Although much research has addressed the effect of dual cognitive-locomotor tasks on walking, less is known about their effect on cognition. The purpose of this study was to investigate the relation between gait and executive function, with the hypothesis that dual tasking would exacerbate cognitive vulnerabilities in PD as well as gait disturbances. METHOD: Nineteen individuals with mild-to-moderate PD without dementia and 13 age- and education-matched normal control adults (NC) participated. Executive function (set-shifting) and walking were assessed singly and during dual tasking. RESULTS: Dual tasking had a significant effect on cognition (reduced set-shifting) and on walking (speed, stride length) for both PD and NC, and on stride frequency for PD only. The impact of dual tasking on walking speed and stride frequency was significantly greater for PD than for NC. Although the group-by-condition interaction was not significant, PD made fewer set-shifts than NC under dual tasking. Further, relative to NC, PD showed significantly greater variability in cognitive performance under dual tasking, whereas variability in motor performance was unaffected by dual tasking. CONCLUSIONS: Dual tasking had a significantly greater effect in PD than in NC on cognition as well as on walking. The results suggest that the assessment and treatment of PD should consider the cognitive as well as the gait components of PD-related deficits under dual-task conditions.

    How can neuroscience contribute to moral philosophy, psychology and education based on Aristotelian virtue ethics?

    The present essay discusses the relationship between moral philosophy, psychology, and education based on virtue ethics and contemporary neuroscience, and how neuroscientific methods can contribute to studies of moral virtue and character. First, it considers whether the mechanism of moral motivation and the developmental model of virtue and character are well supported by neuroscientific evidence; in particular, it examines whether the evidence provided by neuroscientific studies can support the core argument of virtue ethics, that is, motivational externalism. Second, it discusses how the experimental methods of neuroscience can be applied to studies of human morality; in particular, it examines how functional and structural neuroimaging methods can contribute to the development of these fields by reviewing the findings of recent social and developmental neuroimaging experiments. Finally, the essay considers some limitations embedded in such discussions of the relationship between the fields and suggests directions for future studies to address them.

    On the enhancement of Big Data Pipelines through Data Preparation, Data Quality, and the distribution of Optimisation Problems

    Nowadays, data are fundamental for companies, providing operational support by facilitating daily transactions. Data have also become the cornerstone of strategic decision-making processes in businesses. Numerous techniques exist to extract knowledge and value from data; for example, optimisation algorithms excel at supporting decision-making processes that improve the use of resources, time, and costs in an organisation. In the current industrial context, organisations usually rely on business processes to orchestrate their daily activities while collecting large amounts of information from heterogeneous sources. The support of Big Data technologies (which are based on distributed environments) is therefore required, given the volume, variety, and velocity of the data. To extract value from the data, a set of techniques or activities is then applied in an orderly way and at different stages. This set of techniques or activities, which facilitates the acquisition, preparation, and analysis of data, is known in the literature as a Big Data pipeline.

    This thesis tackles the improvement of three stages of Big Data pipelines: Data Preparation, Data Quality assessment, and Data Analysis. These improvements can be addressed from an individual perspective, by focusing on each stage, or from a more complex and global perspective that implies coordinating these stages to create data workflows.

    The first stage to improve is Data Preparation, by supporting the preparation of data with complex structures (i.e., data with various levels of nesting, such as arrays). Shortcomings have been found in the literature and in current technologies for transforming complex data in a simple way. This thesis therefore aims to improve the Data Preparation stage through Domain-Specific Languages (DSLs). Specifically, two DSLs are proposed for different use cases: one is a general-purpose data transformation language, while the other is aimed at extracting event logs in a standard format for process mining algorithms.

    The second area for improvement is the assessment of Data Quality. Depending on the type of Data Analysis algorithm, poor-quality data can seriously skew the results; optimisation algorithms are a clear example, since data that are not sufficiently accurate and complete can severely distort the search space. This thesis therefore formulates a methodology for modelling Data Quality rules adjusted to the context of use, together with a tool that automates their assessment, making it possible to discard data that do not meet the quality criteria defined by the organisation. In addition, the proposal includes a framework that helps select actions to improve the usability of the data.

    The third and last proposal involves the Data Analysis stage. Here, the thesis faces the challenge of supporting the use of optimisation problems in Big Data pipelines. There is a lack of methodological solutions for computing exhaustive optimisation problems (i.e., those that guarantee finding an optimal solution by exploring the whole search space) in distributed environments. Solving this type of problem in a Big Data context is computationally complex, and can be NP-complete, for two reasons. On the one hand, the search space can grow significantly as the amount of data processed by the optimisation algorithms increases; this challenge is addressed through a technique to generate and group subproblems over distributed data. On the other hand, processing optimisation problems with complex models and large search spaces in distributed environments is not trivial, so a proposal is presented for a particular case of this type of scenario.

    As a result, this thesis develops methodologies that have been published in scientific journals and conferences. The methodologies have been implemented in software tools integrated with the Apache Spark data processing engine, and the solutions have been validated through tests and use cases with real datasets.
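    To illustrate the distribution idea behind the third proposal, here is a minimal PySpark sketch that splits an exhaustive optimisation problem into independent subproblems, solves each in parallel, and reduces the partial optima to a global optimum. The knapsack-style instance and the grouping of subproblems by subset size are hypothetical stand-ins, not the thesis's actual generation-and-grouping technique.

```python
from itertools import combinations
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("distributed-exhaustive-search").getOrCreate()
sc = spark.sparkContext

# Hypothetical knapsack instance: (id, weight, value) triples.
items = [(i, i * 3 % 17 + 1, i * 7 % 23 + 1) for i in range(20)]
CAPACITY = 40

def best_subset_of_size(k):
    """Exhaustively score every subset of size k; each k is one subproblem."""
    best = (0, ())
    for subset in combinations(items, k):
        weight = sum(w for _, w, _ in subset)
        value = sum(v for _, _, v in subset)
        if weight <= CAPACITY and value > best[0]:
            best = (value, tuple(i for i, _, _ in subset))
    return best

# Distribute the independent subproblems across the cluster and reduce the
# partial optima to the global optimum. Every subset is examined, so the
# search is exhaustive and the result is guaranteed optimal.
global_best = (
    sc.parallelize(range(1, len(items) + 1))
      .map(best_subset_of_size)
      .reduce(max)
)
print("best value and item ids:", global_best)
spark.stop()
```

    The point of the sketch is the shape of the computation: because the subproblems share no state, they can be scheduled on any executor, and the final reduce compares only one candidate per subproblem rather than the whole search space.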
