
    Re-Shape: A Method to Teach Data Ethics for Data Science Education

    Data has become central to the technologies and services that human-computer interaction (HCI) designers create, and the ethical use of data in and through these technologies should be given critical attention throughout the design process. However, there is little research on ethics education in computer science that explicitly addresses data ethics. We present and analyze Re-Shape, a method to teach students about the ethical implications of data collection and use. Re-Shape, as part of an educational environment, builds upon the idea of cultivating care and allows students to collect, process, and visualize their physical movement data in ways that support critical reflection and coordinated classroom activities about data, data privacy, and human-centered systems for data science. We also use a case study of Re-Shape in an undergraduate computer science course to explore the prospects and limitations of instructional designs and educational technologies, such as Re-Shape, that leverage personal data to teach data ethics.

    Convergence Of Numerical Method For Multistate Stochastic Dynamic Programming

    Convergence of corrections is examined for a predictor-corrector method for solving the Bellman equations of multi-state stochastic optimal control in continuous time. Quadratic costs and constrained control are assumed. A heuristically linearized comparison equation makes the nonlinear, discontinuous Bellman equation amenable to linear convergence analysis. Convergence is studied using the Fourier stability method, yielding a uniform mesh-ratio-type condition for convergence. The results are valid for both Gaussian and Poisson type stochastic noise. The convergence criterion has proved extremely useful for solving larger multi-state problems on vector supercomputers and massively parallel processors.

    Evaluation of Binning Strategies for Tissue Classification in Computed Tomography Images

    Binning strategies have been widely used for image compression, feature extraction, classification, segmentation, and other tasks, but rarely is there any rigorous investigation into which binning strategy is best; binning becomes a "hidden parameter" of the research method. This work rigorously investigates the results of three binning strategies (linear, clipped, and nonlinear binning) for co-occurrence texture-based classification of the backbone, liver, heart, renal, and splenic parenchyma in high-resolution DICOM Computed Tomography (CT) images of the human chest and abdomen. Linear binning divides the gray-level range [0..4095] into k1 equally sized bins, while clipped binning allocates one large bin for the low-intensity gray levels [0..855] (air), one for the high intensities [1368..4095] (bone), and k2 equally sized bins for the soft tissues in between [856..1368]. Nonlinear binning divides the gray-level range [0..4095] into k3 bins of differing sizes. These bins are then used to calculate the co-occurrence statistical model and its ten Haralick descriptors for texture quantification of gray-level images. The texture quantification results for each of the three strategies, and for different values of k1, k2, and k3, are evaluated with respect to their discriminating power using a decision-tree classification algorithm and four classification performance metrics (sensitivity, specificity, precision, and accuracy). Our preliminary results on 1368 segmented DICOM images show that the optimal number of gray levels is 128 for linear binning, 512 for clipped binning, and 256 for nonlinear binning. Furthermore, when comparing the three approaches, nonlinear binning shows a significant improvement for the heart and spleen.
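    The three binning strategies could be sketched roughly as below. The bin boundaries for the clipped scheme follow the ranges stated in the abstract; the quantile-based edge choice for nonlinear binning is an assumption, since the abstract does not say how the unequal bin sizes are chosen, and the function names are illustrative only.

    ```python
    import numpy as np

    def linear_binning(img, k1):
        # Split the full gray-level range [0..4095] into k1 equally sized bins.
        edges = np.linspace(0, 4096, k1 + 1)[1:-1]  # interior bin edges
        return np.digitize(img, edges)              # bin indices 0 .. k1-1

    def clipped_binning(img, k2):
        # One bin for air [0..855], one for bone [1368..4095], and k2 equally
        # sized bins for the soft tissues in [856..1368] (ranges per the abstract).
        edges = np.linspace(856, 1368, k2 + 1)      # soft-tissue bin edges
        return np.digitize(img, edges)              # 0 = air, 1..k2 = soft, k2+1 = bone

    def nonlinear_binning(img, k3):
        # Assumed scheme: quantile-based edges give k3 bins of unequal width
        # but roughly equal population.
        edges = np.quantile(img, np.linspace(0, 1, k3 + 1))[1:-1]
        return np.digitize(img, edges)              # bin indices 0 .. k3-1

    ct = np.array([0, 500, 855, 856, 1367, 1368, 4095])  # toy CT gray levels
    print(linear_binning(ct, 128))    # 128 bins of width 32
    print(clipped_binning(ct, 512))   # air / 512 soft-tissue bins / bone
    ```

    The binned image, rather than the raw 12-bit values, would then feed the gray-level co-occurrence matrix from which the ten Haralick descriptors are computed.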