31 research outputs found

    Pregnancy with chronic myeloid leukemia: case report and literature review

    Chronic myeloid leukemia (CML) is rare during the reproductive years, yet women may present with pre-existing or newly diagnosed CML during pregnancy. Managing CML in pregnancy requires balancing the well-being of the mother with that of the fetus. Tyrosine kinase inhibitors are the most effective drugs against CML, but they are not considered safe during pregnancy or breastfeeding, so alternative drugs are needed to manage CML in pregnant patients. Here we report the case of a 26-year-old woman who was diagnosed with CML at 20 weeks of gestation and had an atypical chromosomal translocation, t(9;22). She was managed jointly by an obstetrician and a haemato-oncologist for the remainder of her pregnancy and eventually delivered a healthy baby at term.

    CrossCheck:toward passive sensing and detection of mental health changes in people with schizophrenia

    Early detection of mental health changes in individuals with serious mental illness is critical for effective intervention. CrossCheck is a first step towards passive monitoring of mental health indicators in patients with schizophrenia and paves the way towards relapse prediction and early intervention. In this paper, we present initial results from an ongoing randomized controlled trial in which passive smartphone sensor data were collected from 21 outpatients with schizophrenia, recently discharged from hospital, over periods ranging from 2 to 8.5 months. Our results indicate statistically significant associations between automatically tracked behavioral features related to sleep, mobility, conversations, and smartphone usage and self-reported indicators of mental health in schizophrenia. Using these features, we build inference models capable of accurately predicting aggregated scores of mental health indicators in schizophrenia with a mean error of 7.6% of the score range. Finally, we discuss the level of personalization needed to account for known variations between people. We show that by leveraging knowledge from a population with schizophrenia, it is possible to train accurate personalized models that require less individual-specific data to adapt quickly to new users.
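The error metric reported above (mean error as a percentage of the score range) can be sketched as follows. The scores and the 0-30 range are illustrative placeholders, not the study's actual data or symptom scale:

```python
import numpy as np

def mean_error_pct_of_range(y_true, y_pred, score_min, score_max):
    """Mean absolute error expressed as a percentage of the score range."""
    mae = np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
    return 100.0 * mae / (score_max - score_min)

# Hypothetical self-reported indicator scored on a 0-30 scale
y_true = [10, 14, 22, 8]
y_pred = [12, 13, 19, 8]
print(mean_error_pct_of_range(y_true, y_pred, 0, 30))  # 5.0
```

Normalizing by the score range makes errors comparable across indicators that use different scales, which is presumably why the paper reports 7.6% of range rather than a raw score error.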

    Talking Less during Social Interactions Predicts Enjoyment: A Mobile Sensing Pilot Study

    Can we predict which conversations are enjoyable without hearing the words that are spoken? A total of 36 participants used a mobile app, My Social Ties, which collected data about 473 conversations they engaged in as they went about their daily lives. We tested whether conversational properties (conversation length, rate of turn taking, proportion of speaking time) and acoustical properties (volume, pitch) could predict enjoyment of a conversation. Surprisingly, people enjoyed their conversations more when they spoke a smaller proportion of the time. This pilot study demonstrates how conversational properties of social interactions can predict psychologically meaningful outcomes, such as how much a person enjoys a conversation. It also illustrates how mobile phones can provide a window into everyday social experiences and well-being.
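A minimal version of the association tested above (speaking proportion versus enjoyment) can be sketched with a simple correlation. The numbers below are invented to illustrate the reported direction of the effect, not the study's data:

```python
import numpy as np

# Hypothetical per-conversation measurements
speak_prop = np.array([0.2, 0.35, 0.5, 0.65, 0.8])  # proportion of speaking time
enjoyment = np.array([4.5, 4.0, 3.6, 3.1, 2.8])     # self-rated enjoyment (1-5)

# Pearson correlation between speaking proportion and enjoyment
r = np.corrcoef(speak_prop, enjoyment)[0, 1]
print(f"Pearson r = {r:.2f}")  # negative: talking less tracks with enjoying more
```

In the actual study, each conversational and acoustical property would be entered as a predictor of enjoyment across the 473 recorded conversations.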

    Adherence to antiretroviral therapy in young children in Cape Town, South Africa, measured by medication return and caregiver self-report: a prospective cohort study

    Background: Antiretroviral therapy (ART) dramatically improves outcomes for children in Africa; however, excellent adherence is required for treatment success. This study describes the utility of different measures of adherence in detecting lapses in infants and young children in Cape Town, South Africa.
    Methods: In a prospective cohort of 122 HIV-infected children commenced on ART, adherence was measured monthly during the first year of treatment by medication return (MR) for both syrups and tablets/capsules. A questionnaire was administered to caregivers after 3 months of treatment to assess their experience with giving medication and self-reported adherence. Viral and immune responses to treatment were assessed at the end of one year, and associations with measured adherence were determined.
    Results: Medication was returned for 115/122 (94%) children, with a median age (IQR) of 37 (16–61) months. Ninety-one (79%) children achieved annual average MR adherence ≥ 90%. This was an important covariate associated with viral suppression after adjustment for disease severity (OR = 5.5 [95% CI: 0.8–35.6], p = 0.075), but it was not associated with immunological response to ART. By 3 months on ART, 13 (10%) children had died and 11 (10%) were lost to follow-up. Questionnaires were completed by 87/98 (90%) of caregivers of children who remained in care. Sensitivity of poor reported adherence (missing ≥ 1 dose in the previous 3 days) for MR adherence < 90% was only 31.8% (95% CI: 10.7–53.0%). Caregivers of 33/87 (38.4%) children reported difficulties with giving medication, most commonly poor palatability (21.8%). Independent socio-demographic predictors of MR adherence ≥ 90% were secondary education of the caregiver (OR = 4.49; 95% CI: 1.10–18.24) and access to water and electricity (OR = 2.65; 95% CI: 0.93–7.55). Taking ritonavir was negatively associated with MR adherence ≥ 90% (OR = 0.37; 95% CI: 0.13–1.02).
    Conclusion: Excellent adherence to ART is possible in African infants and young children, and the relatively simple, low-technology measure of adherence by MR strongly predicts viral response. Better socio-economic status and more palatable regimens are associated with better adherence.
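The low sensitivity of caregiver self-report for detecting MR adherence below 90% is a straightforward confusion-matrix quantity. The counts below are illustrative (chosen to reproduce the reported 31.8%), not the study's actual cell counts:

```python
def sensitivity(true_pos, false_neg):
    """Proportion of true poor-adherence cases that self-report flagged."""
    return true_pos / (true_pos + false_neg)

# Hypothetical: of 22 children with MR adherence < 90%,
# only 7 had a caregiver-reported missed dose in the prior 3 days
print(round(100 * sensitivity(7, 15), 1))  # 31.8
```

A sensitivity this low means self-report missed roughly two-thirds of the children whom medication return identified as poorly adherent, which is why the authors favor MR as the adherence measure.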

    Machine learning for passive mental health symptom prediction: Generalization across different longitudinal mobile sensing studies.

    Mobile sensing data processed using machine learning models can passively and remotely assess mental health symptoms in the context of patients' lives. Prior work has trained models using data from single longitudinal studies, collected from demographically homogeneous populations over short time periods using a single data collection platform or mobile application; the generalizability of model performance across studies has not been assessed. This study presents a first analysis of whether models trained on combined longitudinal study data to predict mental health symptoms generalize across currently available public data. We combined data from the CrossCheck (individuals living with schizophrenia) and StudentLife (university students) studies. In addition to assessing generalizability, we explored whether personalizing models to align mobile sensing data, and oversampling less-represented severe symptoms, improved model performance. Leave-one-subject-out cross-validation (LOSO-CV) results are reported. Two symptoms (sleep quality and stress) had similar question-response structures across studies and were used as outcomes to explore cross-dataset prediction. Models trained with combined data were more likely to be predictive (a significant improvement over predicting the training data mean) than models trained with single-study data. Expected model performance improved as the distance between training and validation feature distributions decreased using combined versus single-study data. Personalization aligned each LOSO-CV participant with the training data but only improved prediction of CrossCheck stress. Oversampling significantly improved severe-symptom classification sensitivity and positive predictive value but decreased model specificity. Taken together, these results show that machine learning models trained on combined longitudinal study data may generalize across heterogeneous datasets. We encourage researchers to disseminate collected de-identified mobile sensing and mental health symptom data, and to further standardize the data types collected across studies, to enable better assessment of model generalizability.
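The leave-one-subject-out evaluation described above can be sketched as follows. The synthetic features, symptom scores, and subject layout are placeholders standing in for pooled sensing data; the studies' actual feature sets and models are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, per_subj = 10, 6
X = rng.normal(size=(n_subjects * per_subj, 3))          # mobile sensing features
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=len(X))   # synthetic symptom score
subjects = np.repeat(np.arange(n_subjects), per_subj)

def fit_least_squares(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# LOSO-CV: hold out all observations from one subject at a time,
# train on the remaining subjects, and score on the held-out subject.
errors = []
for held_out in np.unique(subjects):
    train, test = subjects != held_out, subjects == held_out
    coef = fit_least_squares(X[train], y[train])
    pred = coef[0] + X[test] @ coef[1:]
    errors.append(np.mean(np.abs(pred - y[test])))
print(f"LOSO-CV mean absolute error: {np.mean(errors):.3f}")
```

Holding out whole subjects rather than random rows is what makes the estimate honest for new users: no observation from the evaluated person ever appears in training, which matters when per-person behavior is highly correlated across time.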

    Understanding Mental Health Clinicians’ Perceptions and Concerns Regarding Using Passive Patient-Generated Health Data for Clinical Decision-Making: Qualitative Semistructured Interview Study

    Background: Digital health-tracking tools are changing mental health care by giving patients the ability to collect passively measured patient-generated health data (PGHD; ie, data collected from connected devices with little to no patient effort). Although there are existing clinical guidelines for how mental health clinicians should use more traditional, active forms of PGHD for clinical decision-making, there is less clarity on how passive PGHD can be used.
    Objective: We conducted a qualitative study to understand mental health clinicians’ perceptions and concerns regarding the use of technology-enabled, passively collected PGHD for clinical decision-making. Our interviews sought to understand participants’ current experiences with and visions for using passive PGHD.
    Methods: Mental health clinicians providing outpatient services were recruited to participate in semistructured interviews. Interview recordings were deidentified, transcribed, and qualitatively coded to identify overarching themes.
    Results: Overall, 12 mental health clinicians (n=11, 92% psychiatrists and n=1, 8% clinical psychologist) were interviewed. We identified 4 overarching themes. First, passive PGHD are patient driven: current passive PGHD use was patient driven, not clinician driven; participating clinicians only considered passive PGHD for clinical decision-making when patients brought passive data to clinical encounters. The second theme was active versus passive data as subjective versus objective data: participants viewed the contrast between active and passive PGHD as a contrast between interpretive data on patients’ mental health and objective information on behavior. Participants believed that prioritizing passive over self-reported, active PGHD would reduce opportunities for patients to reflect upon their mental health, reducing treatment engagement and raising questions about how passive data can best complement active data for clinical decision-making. Third, passive PGHD must be delivered at appropriate times for action: participants were concerned with the real-time nature of passive PGHD; they believed that it would be infeasible to use passive PGHD for real-time patient monitoring outside clinical encounters and more feasible to use passive PGHD during clinical encounters, when clinicians can make treatment decisions. The fourth theme was protecting patient privacy: participating clinicians wanted to protect patient privacy within passive PGHD-sharing programs and discussed opportunities to refine data-sharing consent to improve transparency surrounding passive PGHD collection and use.
    Conclusions: Although passive PGHD has the potential to enable more contextualized measurement, this study highlights the need for building and disseminating an evidence base describing how and when passive measures should be used for clinical decision-making. This evidence base should clarify how to use passive data alongside more traditional forms of active PGHD, when clinicians should view passive PGHD to make treatment decisions, and how to protect patient privacy within passive data–sharing programs. Clear evidence would more effectively support the uptake and effective use of these novel tools for both patients and their clinicians.

    Measuring algorithmic bias to analyze the reliability of AI tools that predict depression risk using smartphone sensed-behavioral data

    AI tools aim to transform mental healthcare by providing remote estimates of depression risk using behavioral data collected by sensors embedded in smartphones. While these tools accurately predict elevated depression symptoms in small, homogeneous populations, recent studies show that they are less accurate in larger, more diverse populations. In this work, we show that accuracy is reduced because sensed behaviors are unreliable predictors of depression across individuals: the sensed behaviors that predict depression risk are inconsistent across demographic and socioeconomic subgroups. We first identified subgroups where a developed AI tool underperformed by measuring algorithmic bias, finding subgroups with depression that were incorrectly predicted to be at lower risk than healthier subgroups. We then found inconsistencies in the sensed behaviors predictive of depression across these subgroups. Our findings suggest that researchers developing AI tools that predict mental health from sensed behaviors should think critically about the generalizability of these tools and consider tailored solutions for targeted populations.
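The subgroup-level bias measurement described above amounts to computing model error separately per demographic or socioeconomic group and comparing. The labels, scores, and predictions below are illustrative, not the paper's data:

```python
import numpy as np

def subgroup_mae(y_true, y_pred, groups):
    """Mean absolute error computed separately for each subgroup."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {str(g): float(np.mean(np.abs(y_true[groups == g] - y_pred[groups == g])))
            for g in np.unique(groups)}

# Hypothetical depression scores vs. model predictions for two subgroups
y_true = [20, 18, 5, 4, 19, 6]
y_pred = [12, 11, 4, 5, 13, 5]
groups = ["A", "A", "B", "B", "A", "B"]
print(subgroup_mae(y_true, y_pred, groups))  # {'A': 7.0, 'B': 1.0}
```

In this toy example, group A's elevated scores are systematically underpredicted while group B is tracked closely; a gap of this kind is the signature of the algorithmic bias the study measures, where depressed members of one subgroup are predicted to be at lower risk than they are.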