
    Self-tracking modes: reflexive self-monitoring and data practices

    The concept of ‘self-tracking’ (also referred to as life-logging, the quantified self, personal analytics and personal informatics) has recently begun to emerge in discussions of ways in which people can voluntarily monitor and record specific features of their lives, often using digital technologies. There is evidence that the personal data that are derived from individuals engaging in such reflexive self-monitoring are now beginning to be used by actors, agencies and organisations beyond the personal and privatised realm. Self-tracking rationales and sites are proliferating as part of a ‘function creep’ of the technology and ethos of self-tracking. The detail offered by these data on individuals, and the growing commodification and commercial value of digital data, have led government, managerial and commercial enterprises to explore ways of appropriating self-tracking for their own purposes. In some contexts people are encouraged, ‘nudged’, obliged or coerced into using digital devices to produce personal data which are then used by others. This paper examines these issues, outlining five modes of self-tracking that have emerged: private, communal, pushed, imposed and exploited. The analysis draws upon theoretical perspectives on concepts of selfhood, citizenship, biopolitics and data practices and assemblages in discussing the wider sociocultural implications of the emergence and development of these modes of self-tracking.

    Big Data Analysis for PV Applications

    With increasing photovoltaic (PV) installations, large amounts of time series data from utility-scale PV systems, such as meteorological data and string-level measurements, are collected [1, 2]. Due to fluctuations in irradiance and temperature, PV data are highly stochastic. The data also exhibit spatio-temporal differences with potential time-lagged correlations, as wind direction affects cloud movement [3]. When these variations are coupled with differences between PV systems in power output and wiring configuration, as well as localised PV effects like partial shading and module mismatch, the lengthy time series data from solar systems become highly multi-dimensional and challenging to process. In addition, these raw datasets can rarely be used directly due to the potentially high noise and irrelevant information embedded in them. Moreover, it is challenging to operate directly on the raw datasets, especially when it comes to visualizing and analyzing these data. Here the Pareto principle, better known as the 80/20 rule, commonly applies: researchers and solar engineers often spend most of their time collecting, cleaning, filtering, reducing and formatting the data. In this work, a data analytics algorithm is applied to mitigate some of these complexities and make sense of the large time series data in PV systems. Each time series is treated as an individual entity which can be characterized by a set of generic or application-specific features. This reduces the dimension of the data, i.e., from hundreds of samples in a time series to a few descriptive features. It is also easier to visualize big time series data in the feature space, as compared to traditional time series visualization methods, such as the spaghetti plot and horizon plot, which are informative but not very scalable.
The time series data are processed to extract features, which are then clustered to identify correspondences between specific measurements and the geographical locations of the PV systems. This characterisation of the time series data can be used for several PV applications, namely, (1) PV fault identification, (2) PV network design and (3) PV type pre-design for PV installation in locations with different geographical attributes.
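The core idea of this abstract, mapping each long time series to a handful of descriptive features so it can be visualized and clustered in a low-dimensional feature space, can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the specific feature set (mean, standard deviation, mean absolute step change, lag-1 autocorrelation) and the synthetic "string-current" traces are assumptions for the example.

```python
import numpy as np

def extract_features(ts):
    """Reduce a long time series to a few descriptive features.

    Returns [mean, std, mean absolute step change, lag-1 autocorrelation];
    an illustrative generic feature set, not the paper's.
    """
    ts = np.asarray(ts, dtype=float)
    diffs = np.diff(ts)
    centered = ts - ts.mean()
    denom = np.dot(centered, centered)
    # Lag-1 autocorrelation: close to 1 for smooth series, lower for noisy ones
    acf1 = np.dot(centered[:-1], centered[1:]) / denom if denom > 0 else 0.0
    return np.array([ts.mean(), ts.std(), np.abs(diffs).mean(), acf1])

# Two hypothetical PV traces sampled at 5-min resolution over daylight hours:
# a clear-sky day (smooth) and a day with heavy cloud-induced fluctuation (noisy)
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, np.pi, 288))
smooth = base + 0.01 * rng.standard_normal(288)
noisy = base + 0.30 * rng.standard_normal(288)

f_smooth = extract_features(smooth)
f_noisy = extract_features(noisy)
```

Instead of comparing two 288-sample curves directly, one now compares two 4-dimensional feature vectors, which is what makes scatter-style visualization and clustering of many PV systems tractable.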

    Detect or Track: Towards Cost-Effective Video Object Detection/Tracking

    State-of-the-art object detectors and trackers are developing fast. Trackers are in general more efficient than detectors but bear the risk of drifting. A question is hence raised -- how to improve the accuracy of video object detection/tracking by utilizing existing detectors and trackers within a given time budget? A baseline is frame skipping -- detecting every N-th frame and tracking the frames in between. This baseline, however, is suboptimal, since the detection frequency should depend on the tracking quality. To this end, we propose a scheduler network, a generalization of Siamese trackers, which decides whether to detect or to track at each frame. Although light-weight and simple in structure, the scheduler network is more effective than the frame-skipping baselines and flow-based approaches, as validated on the ImageNet VID dataset for video object detection/tracking.
    Comment: Accepted to AAAI 201
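The frame-skipping baseline the abstract argues against can be sketched in a few lines: run the expensive detector on every N-th frame and propagate its result with a cheap tracker in between. The `detect` and `track` callables below are hypothetical stand-ins, not the paper's models; the point is only the fixed scheduling pattern whose rigidity (detection frequency independent of tracking quality) motivates the learned scheduler.

```python
def frame_skipping(frames, detect, track, n=5):
    """Baseline scheduler: detect every n-th frame, track in between.

    detect(frame) -> result          (expensive, accurate)
    track(frame, prev_result) -> result  (cheap, may drift)
    """
    results, prev = [], None
    for i, frame in enumerate(frames):
        if prev is None or i % n == 0:
            prev = detect(frame)        # refresh with the detector
        else:
            prev = track(frame, prev)   # propagate the last result
        results.append(prev)
    return results

# Toy stand-ins to show the schedule: tag each output with the op used
frames = list(range(12))
detect = lambda f: ("det", f)
track = lambda f, prev: ("trk", prev[1])
out = frame_skipping(frames, detect, track, n=4)
```

With `n=4` and 12 frames, the detector fires on frames 0, 4 and 8 regardless of how well tracking is going; the proposed scheduler network instead makes this detect-or-track choice adaptively per frame.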