    Seeking Optimum System Settings for Physical Activity Recognition on Smartwatches

    Physical activity recognition (PAR) using wearable devices can provide valuable information regarding an individual's degree of functional ability and lifestyle. In this regard, smartphone-based physical activity recognition is a well-studied area. Research on smartwatch-based PAR, on the other hand, is still in its infancy. Through a large-scale exploratory study, this work investigates the smartwatch-based PAR domain. A detailed analysis of various feature banks and classification methods is carried out to find the optimum system settings for the best performance of a smartwatch-based PAR system under both personal and impersonal models. To further validate our hypothesis for both the personal model (the classifier is built using data only from the specific user under study) and the impersonal model (the classifier is built using data from every user except the one under study), we tested a single-subject validation process for smartwatch-based activity recognition. Comment: 15 pages, 2 figures, Accepted in CVC'1
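
    A minimal sketch of the personal vs. impersonal evaluation protocols described above, assuming windowed feature vectors X, activity labels y, and a per-window array of subject ids have already been extracted from the smartwatch data; the classifier choice and all names here are illustrative assumptions, not the authors' code.

        # Personal vs. impersonal evaluation splits for activity recognition.
        # Assumes X (n_windows x n_features), y (activity labels) and subjects
        # (one subject id per window) have already been computed.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold, cross_val_score

        def impersonal_scores(X, y, subjects):
            # Impersonal model: train on every user except the one under study.
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            return cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())

        def personal_scores(X, y, subjects, n_splits=5):
            # Personal model: train and test on data from one specific user only.
            scores = []
            for s in np.unique(subjects):
                mask = subjects == s
                clf = RandomForestClassifier(n_estimators=100, random_state=0)
                cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
                scores.append(cross_val_score(clf, X[mask], y[mask], cv=cv).mean())
            return np.array(scores)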

    Human Activity Recognition Using Deep Models and Its Analysis from Domain Adaptation Perspective

    © 2019, Springer Nature Switzerland AG. Human activity recognition (HAR) is a broad area of research which solves the problem of determining a user’s activity from a set of observations recorded on video or by low-level sensors (accelerometer, gyroscope, etc.). HAR has important applications in medical care and entertainment. In this paper, we address sensor-based HAR, because it can be deployed on a smartphone and eliminates the need for additional equipment. Using machine learning methods for HAR is common. However, such methods are vulnerable to changes in the domain of the training and test data. More specifically, a model trained on data collected by one user loses accuracy when used by another user, because of the domain gap (differences in devices and movement patterns result in differences in sensor readings). Despite the significant results achieved in HAR, it has not been well investigated from a domain adaptation (DA) perspective. In this paper, we implement a CNN-LSTM based architecture along with several classical machine learning methods for HAR and conduct a series of cross-domain tests. The result of this work is a collection of statistics on the performance of our model under the DA task. We believe that our findings will serve as a foundation for future research on the DA problem for HAR.
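
    A minimal sketch of a CNN-LSTM classifier for windowed inertial-sensor data, written in PyTorch; the layer sizes, channel counts and class count are illustrative assumptions, not the architecture reported in the paper.

        import torch
        import torch.nn as nn

        class CNNLSTM(nn.Module):
            # 1-D convolutions extract local motion features per time step, the LSTM
            # models longer-range temporal structure, and a linear head classifies.
            def __init__(self, n_channels=6, n_classes=6, hidden=64):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                )
                self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):             # x: (batch, channels, time)
                f = self.conv(x)              # (batch, 64, time)
                f = f.transpose(1, 2)         # (batch, time, 64)
                _, (h, _) = self.lstm(f)      # h: (1, batch, hidden)
                return self.head(h[-1])       # class logits

    A cross-domain test of the kind described above would then train such a model on windows from one user or device and evaluate it on windows from another.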

    Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV

    The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV is above 95% over the whole region of pseudorapidity covered by the CMS muon system, abs(eta) < 2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV is higher than 90% over the full eta range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation. Comment: Replaced with published version. Added journal reference and DO
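
    For readers outside collider physics, the kinematic variables quoted above (pT, eta) carry their standard collider definitions; this is general background, not a formula taken from the paper:

        \[
          p_{\mathrm{T}} = \sqrt{p_x^2 + p_y^2}, \qquad
          \eta = -\ln\!\left[\tan\left(\tfrac{\theta}{2}\right)\right],
        \]

    where theta is the polar angle of the muon with respect to the beam axis, so abs(eta) < 2.4 corresponds to the angular acceptance of the CMS muon system.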

    Azimuthal anisotropy of charged particles at high transverse momenta in PbPb collisions at sqrt(s[NN]) = 2.76 TeV

    The azimuthal anisotropy of charged particles in PbPb collisions at nucleon-nucleon center-of-mass energy of 2.76 TeV is measured with the CMS detector at the LHC over an extended transverse momentum (pt) range up to approximately 60 GeV. The data cover both the low-pt region associated with hydrodynamic flow phenomena and the high-pt region where the anisotropies may reflect the path-length dependence of parton energy loss in the created medium. The anisotropy parameter (v2) of the particles is extracted by correlating charged tracks with respect to the event-plane reconstructed by using the energy deposited in forward-angle calorimeters. For the six bins of collision centrality studied, spanning the range of 0-60% most-central events, the observed v2 values are found to first increase with pt, reaching a maximum around pt = 3 GeV, and then to gradually decrease to almost zero, with the decline persisting up to at least pt = 40 GeV over the full centrality range measured. Comment: Replaced with published version. Added journal reference and DO
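
    In outline, the event-plane method referred to above extracts v2 from the azimuthal distribution of tracks relative to the reconstructed event-plane angle Psi_2; the resolution correction R_2 is a standard ingredient of the method (general background, not a formula quoted from the paper):

        \[
          \frac{dN}{d\phi} \propto 1 + 2 v_2 \cos\!\big[2(\phi - \Psi_2)\big]
          \quad\Longrightarrow\quad
          v_2 = \frac{\big\langle \cos[2(\phi - \Psi_2)] \big\rangle}{R_2},
        \]

    where the average runs over charged tracks and R_2 accounts for the finite precision of the event-plane reconstruction in the forward calorimeters.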

    Search for new physics with same-sign isolated dilepton events with jets and missing transverse energy

    A search for new physics is performed in events with two same-sign isolated leptons, hadronic jets, and missing transverse energy in the final state. The analysis is based on a data sample corresponding to an integrated luminosity of 4.98 inverse femtobarns produced in pp collisions at a center-of-mass energy of 7 TeV collected by the CMS experiment at the LHC. This constitutes a factor of 140 increase in integrated luminosity over previously published results. The observed yields agree with the standard model predictions and thus no evidence for new physics is found. The observations are used to set upper limits on possible new physics contributions and to constrain supersymmetric models. To facilitate the interpretation of the data in a broader range of new physics scenarios, information on the event selection, detector response, and efficiencies is provided. Comment: Published in Physical Review Letter

    Compressed representation of a partially defined integer function over multiple arguments

    In OLAP (OnLine Analytical Processing), data are analysed in an n-dimensional cube. The cube may be represented as a partially defined function over n arguments. Considering that the function is often not defined everywhere, we ask: is there a known way of representing the function, or the points at which it is defined, in a more compact manner than the trivial one?
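
    A minimal sketch of the contrast the question draws, under assumed names and sizes: the "trivial" representation reserves one cell for every possible argument tuple, while a sparse map keeps only the points at which the function is defined.

        # A partially defined integer function over n arguments (an OLAP cube),
        # stored two ways. Sizes and the sentinel value are illustrative assumptions.
        import numpy as np

        dims = (100, 100, 100)                         # hypothetical 10^6-cell cube
        defined = {(3, 41, 7): 12, (99, 0, 5): -4}     # the few defined points

        # Trivial representation: one slot per cell, with a sentinel for "undefined".
        UNDEFINED = np.iinfo(np.int32).min
        dense = np.full(dims, UNDEFINED, dtype=np.int32)
        for point, value in defined.items():
            dense[point] = value

        # Compact representation: store only (argument tuple -> value) pairs.
        sparse = dict(defined)

        def lookup(point):
            # Returns the value at `point`, or None where the function is undefined.
            return sparse.get(point)

        print(dense.nbytes, "bytes dense vs.", len(sparse), "stored points sparse")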

    Measurement of jet fragmentation into charged particles in pp and PbPb collisions at sqrt(s[NN]) = 2.76 TeV

    Jet fragmentation in pp and PbPb collisions at a centre-of-mass energy of 2.76 TeV per nucleon pair was studied using data collected with the CMS detector at the LHC. Fragmentation functions are constructed using charged-particle tracks with transverse momenta pt > 4 GeV for dijet events with a leading jet of pt > 100 GeV. The fragmentation functions in PbPb events are compared to those in pp data as a function of collision centrality, as well as dijet-pt imbalance. Special emphasis is placed on the most central PbPb events including dijets with unbalanced momentum, indicative of energy loss of the hard scattered parent partons. The fragmentation patterns for both the leading and subleading jets in PbPb collisions agree with those seen in pp data at 2.76 TeV. The results provide evidence that, despite the large parton energy loss observed in PbPb collisions, the partition of the remaining momentum within the jet cone into high-pt particles is not strongly modified in comparison to that observed for jets in vacuum. Comment: Submitted to the Journal of High Energy Physic
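
    One common convention for the fragmentation variable, included here as background (the exact convention used in this analysis may differ), projects each charged track's momentum onto the jet axis:

        \[
          z = \frac{\vec{p}_{\mathrm{track}} \cdot \vec{p}_{\mathrm{jet}}}{|\vec{p}_{\mathrm{jet}}|^{2}},
          \qquad \xi = \ln\frac{1}{z},
        \]

    with the fragmentation function then reported as the per-jet distribution of tracks in xi (or in z).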

    Investigation of tumor hypoxia using a two-enzyme system for in vitro generation of oxygen deficiency

    Background: Oxygen deficiency in tumor tissue is associated with a malign phenotype, characterized by high invasiveness, increased metastatic potential and poor prognosis. Hypoxia chambers are the established standard model for in vitro studies on tumor hypoxia. An enzymatic hypoxia system (GOX/CAT), based on the use of glucose oxidase (GOX) and catalase (CAT), which allows induction of stable hypoxia for in vitro approaches more rapidly and with less operating expense, has been introduced recently. The aim of this work is to compare the enzymatic system with the established hypoxia-chamber technique with respect to gene expression, glucose metabolism and radioresistance, prior to its application for in vitro investigation of oxygen deficiency.

    Methods: Human head and neck squamous cell carcinoma HNO97 cells were incubated under normoxic and hypoxic conditions using both a hypoxia chamber and the enzymatic model. Gene expression was investigated using Agilent microarray chips and real-time PCR analysis. 14C-fluoro-deoxy-glucose uptake experiments were performed in order to evaluate cellular metabolism. Cell proliferation after photon irradiation was investigated to evaluate radioresistance under normoxia and hypoxia using both a hypoxia chamber and the enzymatic system.

    Results: The microarray analysis revealed a similar trend in the expression of known HIF-1 target genes between the two hypoxia systems for HNO97 cells. Quantitative RT-PCR demonstrated different kinetic patterns in the expression of carbonic anhydrase IX and lysyl oxidase, which might be due to the faster induction of hypoxia by the enzymatic system. 14C-fluoro-deoxy-glucose uptake assays showed higher glucose metabolism under hypoxic conditions, especially for the enzymatic system. Proliferation experiments after photon irradiation revealed increased survival rates for the enzymatic model compared to the hypoxia chamber and normoxia, indicating enhanced resistance to irradiation. While the GOX/CAT system allows independent investigation of hypoxia and oxidative stress, care must be taken to prevent acidification during longer incubation.

    Conclusion: The results of our study indicate that the enzymatic model can find application for in vitro investigation of tumor hypoxia, despite limitations that need to be considered in the experimental design.
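
    In outline, the GOX/CAT system depletes dissolved oxygen through two coupled enzymatic reactions; this is standard enzymology, summarized here for context rather than taken from the paper:

        \[
          \text{glucose} + \mathrm{O_2} \xrightarrow{\;\mathrm{GOX}\;} \text{gluconic acid} + \mathrm{H_2O_2},
          \qquad
          2\,\mathrm{H_2O_2} \xrightarrow{\;\mathrm{CAT}\;} 2\,\mathrm{H_2O} + \mathrm{O_2}.
        \]

    The catalase step returns only half of the oxygen consumed by glucose oxidase, so dissolved oxygen is steadily depleted while hydrogen peroxide is kept low, and the accumulating gluconic acid accounts for the acidification that the authors caution against during longer incubations.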