2,453 research outputs found

    Genetic characterization of outbred Sprague Dawley rats and utility for genome-wide association studies

    Sprague Dawley (SD) rats are among the most widely used outbred laboratory rat populations. Despite this, the genetic characteristics of SD rats have not been clearly described, and SD rats are rarely used for experiments aimed at exploring genotype-phenotype relationships. In order to use SD rats to perform a genome-wide association study (GWAS), we collected behavioral data from 4,625 SD rats that were predominantly obtained from two commercial vendors, Charles River Laboratories and Harlan Sprague Dawley Inc. Using double-digest genotyping-by-sequencing (ddGBS), we obtained dense, high-quality genotypes at 291,438 SNPs across 4,061 rats. These genetic data allowed us to characterize the variation present in Charles River versus Harlan SD rats. We found that the two populations are highly diverged (FST > 0.4). Furthermore, even for rats obtained from the same vendor, there was strong population structure across breeding facilities and even between rooms at the same facility. We performed multiple separate GWAS by fitting a linear mixed model that accounted for population structure, and used meta-analysis to jointly analyze all cohorts. Our study examined Pavlovian conditioned approach (PavCA) behavior, which assesses the propensity for rats to attribute incentive salience to reward-associated cues. We identified 46 significant associations for the various metrics used to define PavCA. The surprising degree of population structure among SD rats from different sources has important implications for their use in both genetic and non-genetic studies.
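    The abstract does not spell out the model, but a standard GWAS linear mixed model that accounts for population structure via a genetic relatedness matrix takes the following form (a sketch in conventional notation, not the paper's exact specification):

    $$ y = X\beta + g + \epsilon, \qquad g \sim \mathcal{N}(0, \sigma_g^2 K), \quad \epsilon \sim \mathcal{N}(0, \sigma_e^2 I), $$

    where $y$ holds the phenotypes, $X$ the tested SNP genotypes and covariates with fixed effects $\beta$, $K$ a relatedness matrix estimated from the genome-wide SNPs, and $g$ a random polygenic effect that absorbs population structure; each SNP is tested for a nonzero fixed effect and cohort-level results are combined by meta-analysis. For context on the divergence statistic, $F_{ST} = (H_T - H_S)/H_T$ is the proportion of total heterozygosity attributable to between-population differences, so $F_{ST} > 0.4$ indicates very strong divergence.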

    The seventh visual object tracking VOT2019 challenge results

    The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on 'real-time' short-term tracking in RGB, and (iii) VOT-LT2019 focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking, and tracking with multi-channel imagery. The performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.

    Application of advanced technology to space automation

    Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for the future. The results of this study strongly support this statement and should provide further incentive for the immediate development of the specific automation technology defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits.

    Interunit, environmental and interspecific influences on silverback-group dynamics in western lowland gorillas (Gorilla gorilla gorilla)

    While a major benefit of female-male associations in gorillas is protection from infanticidal males, a silverback is also responsible for providing overall group stability and protection from predation and other environmental or interspecific risks and disturbances. A silverback's reproductive success will be a function of his group's survival, his females' reproductive rates and the survival of his progeny. Here, I evaluate the western lowland silverback's role as the protective leader of his group and provide the first detailed behavioural study of silverback-group dynamics in western lowland gorillas from a holistic perspective: in both forested and bai environments, from nest to nest. Behavioural data were collected from one single-male habituated western lowland gorilla group over 12 months, starting January 2007, at the Bai Hokou Primate Habituation Camp, Central African Republic. Data collection (instantaneous scans, continuous written records of all auditory signals, nesting data, and ad libitum notes on interunit interactions) focused on the silverback and the individuals in his immediate proximity. Analyses were conducted over 258 morning or afternoon sessions, on 3,252 silverback behaviour scans (plus 1,053 additional smell scans), 22,343 auditory signals and 166 nest sites. Evidence from analyses of the silverback's neighbours, group spread, progression, ranging, nesting, human-directed aggression and silverback chemosignalling suggests that silverbacks and their groups have developed complex, strategic spatial and social strategies to cope with perceived risk in rainforest environments, strategies that respond to differing habitats and to differing intensities of interunit interactions and interspecific disturbance. I also show that the release of pungent extreme- and high-level silverback odours may function as both an acute and a chronic indicator of arousal, designed to intimidate extragroup rival males and attract adult females by expressing dominance, strength and health. Higher-level silverback odours may also provide cues for group members to increase vigilance in risky situations, whereas low-level smells may function as a baseline identification marker and provide both self and intragroup reassurance. Western lowland silverback-group relationships appear to be centred on providing a strong protective (rather than socially interactive) and stabilizing role to ensure group cohesion and safety, which ultimately increases the likelihood of male reproductive success.

    The eighth visual object tracking VOT2020 challenge results

    The Visual Object Tracking challenge VOT2020 is the eighth annual tracker benchmarking activity organized by the VOT initiative. Results of 58 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The VOT2020 challenge was composed of five sub-challenges focusing on different tracking domains: (i) the VOT-ST2020 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2020 challenge focused on “real-time” short-term tracking in RGB, (iii) VOT-LT2020 focused on long-term tracking, namely coping with target disappearance and reappearance, (iv) the VOT-RGBT2020 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2020 challenge focused on long-term tracking in RGB and depth imagery. Only the VOT-ST2020 datasets were refreshed. A significant novelty is the introduction of a new VOT short-term tracking evaluation methodology and of segmentation ground truth in the VOT-ST2020 challenge; bounding boxes will no longer be used in the VOT-ST challenges. A new VOT Python toolkit that implements all these novelties was introduced. The performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.
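    As an illustration of segmentation-based scoring (a minimal sketch under assumed mask formats; the actual VOT2020 protocol, with its anchor-based re-initialization and accuracy/robustness decomposition, is more involved), per-frame overlap can be computed as the intersection-over-union of predicted and ground-truth masks:

    ```python
    import numpy as np

    def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
        """Intersection-over-union of two boolean segmentation masks."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        union = np.logical_or(pred, gt).sum()
        if union == 0:          # both masks empty: treat overlap as perfect
            return 1.0
        return np.logical_and(pred, gt).sum() / union

    # Toy example with invented masks; real masks come from the VOT toolkit.
    rng = np.random.default_rng(0)
    frames = [(rng.random((480, 640)) > 0.5, rng.random((480, 640)) > 0.5)
              for _ in range(5)]
    print(f"average overlap: {np.mean([mask_iou(p, g) for p, g in frames]):.3f}")
    ```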

    The impacts of new technologies on physical activities: Based on fitness app use and fitness social media postings

    Focusing on fitness app use and the social context of fitness postings on social media, this study examined the implications of mHealth technology use for fitness. The study collected descriptive information about respondents' use of fitness app features such as self-monitoring, self-regulation, social facilitators, and rewards, and also explored respondents' experience of posting about fitness. For respondents who saw others' fitness posts, this study examined how viewers' social comparison with fitness postings (upward and downward) related to their physical activity (PA) self-efficacy, motivation, and participation. For those who posted their fitness information on social media, this study investigated how fitness posters' styles of self-presentation related to the supportive feedback they received, and how that supportive feedback related to their PA motivation and participation. Fitness app users were recruited from a crowdsourcing internet marketplace, and quantitative data analysis examined the role of social comparison, self-presentation, and supportive feedback in respondents' PA self-efficacy, motivation, and participation. The results revealed that people mostly used fitness apps for physical activity-related self-monitoring and self-regulation. Those who engaged in upward social comparison tended to have more PA self-efficacy and PA motivation, and therefore participated more in PA. Both positive and negative self-presenters received more supportive feedback from others. The more supportive feedback fitness posters received, the more PA self-efficacy they had; and the more PA self-efficacy they had, the more PA motivation they had. The results also showed that people received more esteem support and emotional support when they presented their fitness positively on social media, while fitness posters with negative self-presentation received more emotional support and informational support.

    Beyond counting steps: Measuring physical behavior with wearable technology in rehabilitation

    Augmenting Vision-Based Human Pose Estimation with Rotation Matrix

    Fitness applications are commonly used to monitor physical activities, but they often fail to automatically track indoor activities inside the gym. This study proposes a model that utilizes pose estimation combined with a novel data augmentation method, the rotation matrix, with the aim of enhancing the classification accuracy of activity recognition based on pose estimation data. We experiment with different classification algorithms along with image augmentation approaches. Our findings demonstrate that an SVM with SGD optimization, using rotation matrix data augmentation, yields the most accurate results, achieving 96% accuracy in classifying five physical activities. Without the data augmentation techniques, the baseline accuracy remains at a modest 64%.
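    A minimal sketch of the two ingredients named above, with data shapes, class labels, and rotation angles assumed for illustration rather than taken from the paper: 2D pose keypoints are augmented by multiplying them with a rotation matrix, and a linear SVM is trained via SGD (hinge loss):

    ```python
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def rotate_keypoints(kps: np.ndarray, angle_deg: float) -> np.ndarray:
        """Rotate arrays of (x, y) keypoints (shape (..., 2)) about the origin."""
        t = np.deg2rad(angle_deg)
        R = np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])
        return kps @ R.T

    def augment_with_rotations(X, y, angles=(-15.0, -5.0, 5.0, 15.0)):
        """Expand flattened poses (n, 2k) with rotated copies of each sample."""
        n, d = X.shape
        Xs, ys = [X], [y]
        for a in angles:
            Xs.append(rotate_keypoints(X.reshape(n, -1, 2), a).reshape(n, d))
            ys.append(y)
        return np.vstack(Xs), np.concatenate(ys)

    # Hypothetical data: 200 poses of 17 (x, y) keypoints, 5 activity classes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 34))
    y = rng.integers(0, 5, size=200)

    X_aug, y_aug = augment_with_rotations(X, y)
    clf = SGDClassifier(loss="hinge", max_iter=1000)   # linear SVM trained by SGD
    clf.fit(X_aug, y_aug)
    print(f"training accuracy: {clf.score(X_aug, y_aug):.2f}")
    ```

    Rotating at the keypoint level rather than the image level is cheap and label-preserving, which is presumably why augmenting pose data directly pays off.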

    Validating and improving the correction of ocular artifacts in electro-encephalography

    For modern applications of electro-encephalography, including brain-computer interfaces and single-trial event-related potential (ERP) detection, it is becoming increasingly important that artifacts are accurately removed from a recorded electro-encephalogram (EEG) without affecting the part of the EEG that reflects cerebral activity. Ocular artifacts are caused by movement of the eyes and the eyelids. They occur frequently in the raw EEG and are often the most prominent artifacts in EEG recordings. Their accurate removal is therefore an important procedure in nearly all electro-encephalographic research, and a considerable number of ocular artifact correction methods have been introduced over the past decades. A selection of these methods, containing some of the most frequently used correction methods, is given in Section 1.5. When two different correction methods are applied to the same raw EEG, this usually results in two different corrected EEGs. A measure of correction accuracy should indicate how well each of these corrected EEGs recovers the part of the raw EEG that truly reflects cerebral activity. The fact that this accuracy cannot be determined directly from a raw EEG is intrinsic to the need for artifact removal: if it were possible, based on a raw EEG alone, to derive an exact reference for what the corrected EEG should be, there would be no need for artifact correction methods. Estimating the accuracy of correction methods is therefore mostly done either by using models to simulate EEGs and artifacts, or by manipulating experimental data in such a way that the effects of artifacts on the raw EEG can be isolated. In this thesis, modeling of EEG and artifact is used to validate correction methods on simulated data. A new correction method is introduced which, unlike all existing methods, uses a camera to monitor eye(lid) movements as a basis for ocular artifact correction. The simulated data are used to estimate the accuracy of this new correction method and to compare it against the estimated accuracy of existing correction methods. The results of this comparison suggest that the new method significantly increases correction accuracy compared to the other methods. Next, an experiment is performed that allows correction accuracy to be estimated on raw EEGs. Results on these experimental data agree very well with the results on the simulated data. It is therefore concluded that using a camera during EEG recordings provides valuable extra information that can be used in the process of ocular artifact correction.

    In Chapter 2, a model is introduced that assists in estimating the accuracy of eye movement artifact correction on simulated EEG recordings. This model simulates EEG and eye movement artifacts simultaneously. It uses a realistic representation of the head, multiple dipoles to model cerebral and ocular electrical activity, and the boundary element method to calculate changes in electrical potential at different positions on the scalp. With the model, it is possible to simulate data sets as if they were recorded using different electrode configurations. Signal-to-noise ratios (SNRs), computed before and after correction, are used to assess the accuracy of six different correction methods for various electrode configurations.
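    As a worked note on this measure (the standard definition, given here as a sketch rather than the thesis's exact formula): with $P_{\mathrm{signal}}$ the power of the artifact-free EEG and $P_{\mathrm{noise}}$ the power of the residual error after correction,

    $$ \mathrm{SNR}_{\mathrm{dB}} = 10 \log_{10}\!\frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}}, \qquad 9\,\mathrm{dB} \;\Rightarrow\; \frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}} = 10^{9/10} \approx 7.9, $$

    which is why the 9 dB average reported below corresponds to a corrected signal whose power is roughly eight times that of the remaining noise, and 10 dB to roughly ten times.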
    Results show that, of the six methods, second-order blind identification (SOBI) and multiple linear regression (MLR) correct most accurately overall, as they achieve the highest rise in signal-to-noise ratio.

    The occurrence of ocular artifacts is linked to changes in eyeball orientation. In Chapter 2, an eye tracker is used to record pupil position, which is closely linked to eyeball orientation, and the pupil position information is used in the model to simulate eye movements. Recognizing the potential benefit of using an eye tracker not only for simulation but also for correction, Chapter 3 introduces an eye movement artifact correction method that exploits the pupil position information provided by an eye tracker. Other correction methods use the electrooculogram (EOG) and/or the EEG to estimate ocular artifacts; because both the EEG and the EOG recordings are susceptible to cerebral as well as ocular activity, these methods are at risk of overcorrecting the raw EEG. Pupil position provides a reference that is linked to the ocular artifact in the EEG but cannot be affected by cerebral activity, and as a result the new correction method avoids traditionally problematic issues such as forward/backward propagation and evaluating the accuracy of component extraction. Using both simulated and experimental data, it is determined how pupil position influences the raw EEG, and this relation is found to be linear or quadratic. A Kalman filter is used to tune the parameters that specify the relation. On simulated data, the new method performs very well, resulting in an SNR after correction of over 10 dB for various patterns of eye movements. When compared to the three methods that performed best in the evaluation of Chapter 2, only SOBI, the best performer in that evaluation, shows similar results for some of the eye movement patterns. A serious limitation of the new correction method, however, is its inability to correct blink artifacts.

    To increase the variety of applications for which the new method can be used, it should be extended so that it can also correct the raw EEG for blink artifacts. Chapter 4 implements such improvements, based on the idea that a more advanced eye tracker should be able to detect both the pupil position and the eyelid position. The improved eye tracker-based ocular artifact correction method is named EYE. Driven by practical limitations of the eye tracking device currently available to us, an alternative way to estimate eyelid position is suggested, based on an EOG recorded above one eye; the EYE method can be used with either the eye tracker information or this EOG substitute. On simulated data, the accuracy of the EYE method is estimated using the EOG-based eyelid reference and again compared against the six other correction methods. Two different SNR-based measures of accuracy are proposed: one quantifies the correction of the entire simulated data set, and the other focuses on the segments containing simulated blink artifacts. After applying EYE, an average SNR of at least 9 dB is achieved for both measures, implying that the power of the corrected signal is at least eight times the power of the remaining noise. The simulated data sets contain a wide range of eye movements and blink frequencies.
    For almost all of these data sets (16 out of 20), the correction results for EYE are better than for any of the other evaluated correction methods. On experimental data, the EYE method appears to adequately correct for ocular artifacts as well. As detecting eyelid position from the EOG is in principle inferior to detecting it with an eye tracker, these results should also be seen as an indication of the even higher accuracies that could be obtained with a more advanced eye tracker. Considering the simplicity of the MLR method, it also performs remarkably well, which may explain why EOG-based regression is still often used for correction.

    In Chapter 5, the simulation model of Chapter 2 is put aside; instead, experimentally recorded data are manipulated in a way that highlights correction inaccuracies. The correction accuracies of eight methods, including EYE, are estimated on data recorded during stop-signal tasks. In the analysis of these tasks it is essential that ocular artifacts are adequately removed, because the task-related ERPs are located mostly at frontal electrode positions and have low amplitudes. These data are corrected and subsequently evaluated. For the eight methods, the overall ranking of estimated accuracy in Figure 5.3 corresponds very well with the correction accuracy found on simulated data in Chapter 4. In a single-trial correction comparison, the results suggest that the EYE-corrected EEG is not susceptible to overcorrection, whereas the other corrected EEGs are.
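    To make the regression family concrete, here is a minimal sketch of EOG-based correction in the spirit of MLR, together with the SNR measure used above (channel counts, data, and mixing weights are invented for illustration; real implementations estimate per-channel propagation factors in essentially this least-squares way):

    ```python
    import numpy as np

    def regress_out_eog(eeg: np.ndarray, eog: np.ndarray) -> np.ndarray:
        """Subtract the least-squares EOG contribution from each EEG channel.

        eeg: (n_channels, n_samples) raw EEG
        eog: (n_eog, n_samples) EOG reference channels
        """
        # Propagation factors B minimize ||eeg - B @ eog||^2.
        B = eeg @ eog.T @ np.linalg.inv(eog @ eog.T)
        return eeg - B @ eog

    def snr_db(clean: np.ndarray, corrected: np.ndarray) -> float:
        """SNR (dB) of a corrected signal against a known clean reference."""
        noise = corrected - clean
        return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

    # Simulated example: clean EEG plus a linearly mixed EOG artifact.
    rng = np.random.default_rng(1)
    clean = rng.normal(size=(4, 1000))          # 4 EEG channels
    eog = rng.normal(size=(2, 1000))            # 2 EOG channels
    mixing = np.array([[0.5, 0.1],
                       [0.3, 0.2],
                       [0.2, 0.0],
                       [0.1, 0.4]])             # invented propagation factors
    raw = clean + mixing @ eog
    print(f"SNR before correction: {snr_db(clean, raw):.1f} dB")
    corrected = regress_out_eog(raw, eog)
    print(f"SNR after correction:  {snr_db(clean, corrected):.1f} dB")
    ```

    Note that because a real EOG also picks up cerebral activity, such regression can overcorrect the EEG; this is exactly the risk, discussed above, that motivates the camera-based EYE method, whose pupil reference cannot contain cerebral signal.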