
    The highest frequency of BRCA1 c.3700_3704del detected among Albanians from Kosovo

    BACKGROUND The spectrum of BRCA1 and BRCA2 mutations varies among populations; however, some mutations may be frequent in particular ethnic groups due to the "founder" effect. The BRCA1 c.3700_3704del mutation was previously described as a recurrent variant in Eastern European countries. This study aimed to investigate the frequency of the c.3700_3704del mutation in Albanian breast and ovarian cancer patients from North Macedonia and Kosovo. MATERIALS AND METHODS A total of 327 patients with invasive breast and/or ovarian cancer (111 Albanian women from North Macedonia and 216 from Kosovo) were screened for 13 recurrent BRCA1/2 mutations. Targeted NGS with a panel of 94 cancer-associated genes, including BRCA1 and BRCA2, was performed in a selected group of 118 patients. RESULTS We identified 21 BRCA1/2 pathogenic variants: 17 (14 BRCA1 and 3 BRCA2) in patients from Kosovo (7.9%) and 4 (1 BRCA1 and 3 BRCA2) in patients from North Macedonia (3.6%). All mutations were found in one patient each, except for the c.3700_3704del mutation, which was observed in 14 unrelated families, all except one originating from Kosovo. The c.3700_3704del mutation accounts for 93% of BRCA1 mutation-positive cases and is present with a frequency of 6% among breast cancer patients from Kosovo. CONCLUSIONS This is the first report of BRCA1/2 mutations among breast and ovarian cancer patients from Kosovo. The finding that c.3700_3704del represents a founder mutation in Kosovo, with the highest frequency reported worldwide, supports the implementation of a fast and low-cost screening protocol regardless of family history, and even a pilot population-based screening in at-risk populations.
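    The reported percentages follow directly from the counts above. The short check below is ours, not part of the study; the split of 13 c.3700_3704del families in Kosovo and 14 of 15 BRCA1-positive cases carrying the variant is inferred from the abstract rather than stated explicitly.

```python
# Quick arithmetic check of the reported frequencies (assuming, as inferred from
# the abstract, that 13 of the 14 c.3700_3704del families are from Kosovo and
# that 14 of the 15 BRCA1-positive cases carry this variant).
kosovo_patients, macedonia_patients = 216, 111
kosovo_positive, macedonia_positive = 17, 4
del_families_kosovo, del_families_total, brca1_positive_total = 13, 14, 15

print(f"Kosovo BRCA1/2 carrier rate:    {kosovo_positive / kosovo_patients:.1%}")       # ~7.9%
print(f"N. Macedonia carrier rate:      {macedonia_positive / macedonia_patients:.1%}") # ~3.6%
print(f"c.3700_3704del among Kosovo BC: {del_families_kosovo / kosovo_patients:.1%}")   # ~6%
print(f"share of BRCA1-positive cases:  {del_families_total / brca1_positive_total:.1%}")  # ~93%
```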

    Breathing Rate Estimation from Head-Worn Photoplethysmography Sensor Data Using Machine Learning

    Breathing rate is considered one of the fundamental vital signs and a highly informative indicator of physiological state. Given that the monitoring of heart activity is less complex than the monitoring of breathing, a variety of algorithms have been developed to estimate breathing activity from heart activity. However, estimating breathing rate from heart activity outside of laboratory conditions is still a challenge. The challenge is even greater when new wearable devices with novel sensor placements are being used. In this paper, we present a novel algorithm for breathing rate estimation from photoplethysmography (PPG) data acquired from a head-worn virtual reality mask equipped with a PPG sensor placed on the forehead of a subject. The algorithm is based on advanced signal processing and machine learning techniques and includes a novel quality assessment and motion artifact removal procedure. The proposed algorithm is evaluated and compared to existing approaches from the related work using two separate datasets containing data from a total of 37 subjects. Numerous experiments show that the proposed algorithm outperforms the compared algorithms, achieving a mean absolute error of 1.38 breaths per minute and a Pearson's correlation coefficient of 0.86. These results indicate that reliable estimation of breathing rate is possible based on PPG data acquired from a head-worn device.
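    The paper's algorithm itself is not reproduced here, but the underlying idea of spectral breathing-rate estimation from a PPG window can be sketched as follows; the band limits, window length, and helper name are illustrative assumptions, and the quality-assessment and machine-learning stages described above are omitted.

```python
# Minimal sketch of spectral breathing-rate estimation from a PPG window.
# Not the authors' algorithm: it illustrates a common baseline of band-passing
# the signal to the respiratory band and picking the dominant FFT peak.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_breathing_rate(ppg: np.ndarray, fs: float) -> float:
    """Return breaths per minute estimated from one PPG window (hypothetical helper)."""
    # Respiratory band: roughly 0.1-0.7 Hz (6-42 breaths per minute).
    b, a = butter(N=2, Wn=[0.1, 0.7], btype="bandpass", fs=fs)
    resp = filtfilt(b, a, ppg - np.mean(ppg))

    # Dominant frequency of the filtered signal within the respiratory band.
    spectrum = np.abs(np.fft.rfft(resp * np.hanning(len(resp))))
    freqs = np.fft.rfftfreq(len(resp), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return f_peak * 60.0  # Hz -> breaths per minute

# Example: 32 s window at 64 Hz with a simulated 15 breaths/min respiratory component.
fs = 64.0
t = np.arange(0, 32, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)
print(round(estimate_breathing_rate(ppg, fs), 1))  # ~15.0
```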

    Towards smart glasses for facial expression recognition using OMG and machine learning

    This study aimed to evaluate the use of novel optomyography (OMG) based smart glasses, OCOsense, for the monitoring and recognition of facial expressions. Experiments were conducted on data gathered from 27 young adult participants, who performed facial expressions varying in intensity, duration, and head movement. The facial expressions included smiling, frowning, raising the eyebrows, and squeezing the eyes. The statistical analysis demonstrated that: (i) OCO sensors based on the principles of OMG can capture distinct variations in cheek and brow movements with a high degree of accuracy and specificity; (ii) head movement does not have a significant impact on how well these facial expressions are detected. The collected data were also used to train a machine learning model to recognise the four facial expressions and when the face is in a neutral state. We evaluated this model in conditions intended to simulate real-world use, including variations in expression intensity, head movement, and glasses position relative to the face. The model demonstrated an overall accuracy of 93% (0.90 F1-score), evaluated using a leave-one-subject-out cross-validation technique.
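    A minimal sketch of the leave-one-subject-out evaluation protocol mentioned above is given below; the synthetic features, five-class label set, and random-forest classifier are placeholder assumptions rather than the paper's actual pipeline.

```python
# Sketch of leave-one-subject-out (LOSO) evaluation. The feature matrix, labels,
# and classifier are placeholders; the paper's own features and model are not
# reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
n_windows, n_features, n_subjects = 2700, 16, 27
X = rng.normal(size=(n_windows, n_features))          # e.g. per-window OMG features
y = rng.integers(0, 5, size=n_windows)                # 4 expressions + neutral
groups = rng.integers(0, n_subjects, size=n_windows)  # subject ID per window

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(f1_score(y[test_idx], pred, average="macro"))

print(f"mean macro F1 across held-out subjects: {np.mean(scores):.2f}")
```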

    Comparative Proteomics Analysis of Urine Reveals Down-Regulation of Acute Phase Response Signaling and LXR/RXR Activation Pathways in Prostate Cancer

    Detecting prostate cancer (PCa) using non-invasive diagnostic markers remains a challenge. The aim of this study was the identification of urine proteins that are sufficiently sensitive and specific to detect PCa in the early stages. Comparative proteomics profiling of urine from patients with PCa, benign prostate hyperplasia, bladder cancer, and renal cancer, coupled with bioinformatics analysis, was performed. Statistically significant differences in abundance were found for 20 and 85 proteins in the 2-D DIGE/MS and label-free LC-MS/MS experiments, respectively. In silico analysis indicated activation, binding, and cell movement of a subset of immune cells as the top affected cellular functions in PCa, together with the down-regulation of the Acute Phase Response Signaling and Liver X Receptor/Retinoid X Receptor (LXR/RXR) activation pathways. The 35 most promising biomarkers were those altered in PCa when compared to more than one group. Half of these have confirmed localization in normal or PCa tissues. Twenty proteins (CD14, AHSG, ENO1, ANXA1, CLU, COL6A1, C3, FGA, FGG, HPX, PTGDS, S100A9, LMAN2, ITIH4, ACTA2, GRN, HBB, PEBP1, CTSB, SPP1) are oncogenes, tumor suppressors, and multifunctional proteins with confirmed involvement in PCa, while 9 (AZU1, IGHG1, RNASE2, PZP, REG1A, AMY1A, AMY2A, ACTG2, COL18A1) have been associated with other cancers, but not with PCa so far, and may represent novel findings. LC-MS/MS data are available via ProteomeXchange with identifier PXD008407.
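    As a rough illustration of how differential protein abundance is commonly tested in such comparisons, the sketch below runs a per-protein Welch's t-test with Benjamini-Hochberg correction on simulated log-intensities; it is not the statistical workflow used in the study.

```python
# Sketch of a per-protein differential-abundance test of the kind used to flag
# proteins with significant changes between groups. The data are simulated; the
# paper's 2-D DIGE and label-free LC-MS/MS pipelines are not reproduced here.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_proteins = 500
pca = rng.normal(loc=0.0, scale=1.0, size=(n_proteins, 20))      # log2 intensities, PCa urine
control = rng.normal(loc=0.0, scale=1.0, size=(n_proteins, 20))  # log2 intensities, controls
pca[:30] += 1.5  # simulate 30 truly up-regulated proteins

# Welch's t-test per protein, then Benjamini-Hochberg FDR correction.
t, p = stats.ttest_ind(pca, control, axis=1, equal_var=False)
reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
print(f"proteins significant at 5% FDR: {reject.sum()}")
```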

    What Actually Works for Activity Recognition in Scenarios with Significant Domain Shift: Lessons Learned from the 2019 and 2020 Sussex-Huawei Challenges

    From 2018 to 2021, the Sussex-Huawei Locomotion-Transportation Recognition Challenge presented different scenarios in which participants were tasked with recognizing eight different modes of locomotion and transportation using sensor data from smartphones. In 2019, the main challenge was using sensor data from one location to recognize activities with sensors in another location, while in the following year, the main challenge was using the sensor data of one person to recognize the activities of other persons. We use these two challenge scenarios as a framework in which to analyze the effectiveness of different components of a machine-learning pipeline for activity recognition. We show that: (i) selecting an appropriate (location-specific) portion of the available data for training can improve the F1 score by up to 10 percentage points (p.p.) compared to a more naive approach, (ii) separate models for human locomotion and for transportation in vehicles can yield an increase of roughly 1 p.p., (iii) using semi-supervised learning can, again, yield an increase of roughly 1 p.p., and (iv) temporal smoothing of predictions with Hidden Markov models, when applicable, can bring an improvement of almost 10 p.p. Our experiments also indicate that the usefulness of advanced feature selection techniques and of clustering to create person-specific models is inconclusive and should be explored separately in each use case.
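    As an illustration of component (iv), the sketch below applies Viterbi decoding with a self-transition-biased matrix to per-window class probabilities; the probabilities, number of classes, and stay probability are illustrative assumptions, not the challenge pipeline itself.

```python
# Sketch of HMM-style temporal smoothing: Viterbi decoding over per-window class
# probabilities with a transition matrix that favours staying in the same
# activity. In practice the probabilities come from the per-window classifier.
import numpy as np

def hmm_smooth(probs: np.ndarray, stay: float = 0.95) -> np.ndarray:
    """Return the most likely label sequence given per-window class probabilities."""
    n_windows, n_classes = probs.shape
    # Transition matrix: high probability of keeping the current activity.
    trans = np.full((n_classes, n_classes), (1 - stay) / (n_classes - 1))
    np.fill_diagonal(trans, stay)

    log_p = np.log(probs + 1e-12)
    log_t = np.log(trans)
    score = np.zeros((n_windows, n_classes))
    back = np.zeros((n_windows, n_classes), dtype=int)
    score[0] = log_p[0]
    for t in range(1, n_windows):
        cand = score[t - 1][:, None] + log_t          # (previous class, current class)
        back[t] = np.argmax(cand, axis=0)
        score[t] = cand[back[t], np.arange(n_classes)] + log_p[t]

    # Backtrack the best path.
    path = np.zeros(n_windows, dtype=int)
    path[-1] = np.argmax(score[-1])
    for t in range(n_windows - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Example: a prediction sequence with one briefly flipped window.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.85, 0.15], [0.9, 0.1]])
print(hmm_smooth(probs))  # the isolated flip is smoothed away -> [0 0 0 0 0]
```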