
    PUBLIC SECTOR CONTINUITY PLANNING: PREPARING THE BUREAUCRACY IN THE AGE OF THE NEW “NORMAL”

    This paper is both a theoretical and an empirical discourse on the responsiveness of the bureaucratic norms of the governmental response system in the aftermath of disasters. It starts by discussing the contemporary context, i.e., the “Age of the New Normal,” in which unexpected catastrophic disasters occur with increasing frequency and intensity and are seemingly becoming an everyday staple of life that humankind must learn to deal with. It then argues that to become responsive, bureaucracies must innovate so as to restore normalcy immediately. The challenge becomes complicated, however, when the bureaucracy itself becomes a victim. First, the paper summarizes existing knowledge from the current literature on the challenges and problems that the “Age of the New Normal” poses to Public Administration and how the latter responds to them. Second, it discusses how the main properties of bureaucracy serve as either facilitating or hindering factors during disaster/crisis situations. Empirical evidence is provided by showcasing four government agencies that prepared for the onslaught of Super Typhoon Haiyan on November 8, 2013 in Tacloban City, Philippines. Third, the paper presents public service continuity planning that will enable government agencies to provide continuous service in the aftermath of disasters.

    Difficulty Classification of Mountainbike Downhill Trails utilizing Deep Neural Networks

    The difficulty of mountainbike downhill trails is a subjective perception. However, sports associations and mountainbike park operators attempt to group trails into different levels of difficulty with scales like the Singletrail-Skala (S0-S5) or colored scales (blue, red, black, ...) as proposed by The International Mountain Bicycling Association. Inconsistencies in difficulty grading occur due to the various scales, different people grading the trails, differences in topography, and more. We propose an end-to-end deep learning approach to classify trails into three difficulty levels (easy, medium, and hard) using sensor data. With mbientlab Meta Motion r0.2 sensor units, we record accelerometer and gyroscope data of one rider on multiple trail segments. A 2D convolutional neural network is trained with a stacked and concatenated representation of the aforementioned data as its input. We run experiments with five different sample sizes and five different kernel sizes and achieve a maximum Sparse Categorical Accuracy of 0.9097. To the best of our knowledge, this is the first work targeting computational difficulty classification of mountainbike downhill trails.
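    The “stacked and concatenated representation” of the two sensor streams can be illustrated with a short sketch. The 128-sample window length, the channel ordering, and the function name are assumptions for illustration, not details taken from the paper:

    ```python
    import numpy as np

    def stack_sensor_window(accel, gyro):
        """Stack tri-axial accelerometer and gyroscope samples into a
        single 2D array (channels x time) suitable as input to a 2D CNN."""
        # accel, gyro: arrays of shape (n_samples, 3)
        assert accel.shape == gyro.shape and accel.shape[1] == 3
        # Concatenate along the channel axis: rows are ax, ay, az, gx, gy, gz.
        return np.concatenate([accel.T, gyro.T], axis=0)

    # Example: one 128-sample window of synthetic readings.
    rng = np.random.default_rng(0)
    window = stack_sensor_window(rng.normal(size=(128, 3)),
                                 rng.normal(size=(128, 3)))
    print(window.shape)  # (6, 128)
    ```

    Each such channels-by-time array would then be one training example for the convolutional network, with the kernel size swept as described in the abstract.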

    Human Activity Recognition Using Deep Models and Its Analysis from Domain Adaptation Perspective

    © 2019, Springer Nature Switzerland AG. Human activity recognition (HAR) is a broad area of research which solves the problem of determining a user’s activity from a set of observations recorded on video or low-level sensors (accelerometer, gyroscope, etc.). HAR has important applications in medical care and entertainment. In this paper, we address sensor-based HAR, because it can be deployed on a smartphone and eliminates the need for additional equipment. Using machine learning methods for HAR is common. However, such methods are vulnerable to changes in the domain of the training and test data. More specifically, a model trained on data collected by one user loses accuracy when utilised by another user because of the domain gap (differences in devices and movement patterns result in differences in sensor readings). Despite significant results achieved in HAR, it is not well investigated from a domain adaptation (DA) perspective. In this paper, we implement a CNN-LSTM based architecture along with several classical machine learning methods for HAR and conduct a series of cross-domain tests. The result of this work is a collection of statistics on the performance of our model under the DA task. We believe that our findings will serve as a foundation for future research on solving the DA problem for HAR.
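    The cross-domain tests described here hinge on splitting data by user rather than at random. A minimal sketch of such a leave-one-user-out split follows; the function name and interface are illustrative, not the authors’ code:

    ```python
    import numpy as np

    def leave_one_user_out(user_ids):
        """Yield (user, train_idx, test_idx) triples where the test set is
        all samples from one held-out user -- the kind of cross-domain
        split that exposes the per-user domain gap."""
        user_ids = np.asarray(user_ids)
        for user in np.unique(user_ids):
            test = np.where(user_ids == user)[0]
            train = np.where(user_ids != user)[0]
            yield user, train, test

    # Five samples recorded by three users.
    users = [1, 1, 2, 2, 3]
    splits = {u: (tr.tolist(), te.tolist())
              for u, tr, te in leave_one_user_out(users)}
    print(splits[3])  # ([0, 1, 2, 3], [4])
    ```

    A model trained on `train` and scored on `test` for each held-out user yields exactly the kind of per-user accuracy statistics the paper collects.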

    A genetic algorithm approach to optimising random forests applied to class engineered data

    In numerous applications, and especially in the life science domain, examples are labelled at a higher level of granularity. For example, binary classification is dominant in many of these data sets, with the positive class denoting the existence of a particular disease in medical diagnosis applications. Such labelling does not depict the reality of having different categories of the same disease; a fact evidenced by the continuous research into root causes and variations of symptoms in a number of diseases. In a quest to enhance such diagnosis, data sets were decomposed using clustering of each class to reveal hidden categories. We then apply the widely adopted ensemble classification technique Random Forests. Such class decomposition has two advantages: (1) diversification of the input, which enhances the ensemble classification; and (2) improved class separability, easing the follow-up classification process. However, to apply Random Forests to such class-decomposed data, three main parameters need to be set: the number of trees forming the ensemble, the number of features to split on at each node, and a vector representing the number of clusters in each class. The large search space for tuning these parameters has motivated the use of a Genetic Algorithm to optimise the solution. A thorough experimental study on 22 real data sets was conducted, predominantly in a variety of life science applications. To prove the applicability of the method to other areas, the proposed method was also tested on a number of data sets from other domains. Three variations of Random Forests, including the proposed method, as well as a boosting ensemble classifier were used in the experimental study. The results prove the superiority of the proposed method in boosting the accuracy.
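    The three tuned quantities map naturally onto a single GA chromosome. The sketch below shows one plausible encoding and a point mutation; the gene bounds, mutation scheme, and names are assumptions, not the paper’s actual operators:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def random_chromosome(n_classes, max_trees=200, max_features=20,
                          max_clusters=5):
        """Encode the three tuned quantities as one integer vector:
        [n_trees, n_features_per_split, k_1 ... k_C], where k_i is the
        number of clusters used to decompose class i."""
        return np.concatenate([
            rng.integers(10, max_trees, size=1),
            rng.integers(1, max_features, size=1),
            rng.integers(1, max_clusters + 1, size=n_classes),
        ])

    def mutate(chrom, rate=0.3):
        """Point mutation: perturb each gene with probability `rate`,
        then clamp so every gene stays a valid positive count."""
        out = chrom.copy()
        mask = rng.random(len(out)) < rate
        out[mask] += rng.integers(-2, 3, size=mask.sum())
        return np.maximum(out, 1)

    c = random_chromosome(n_classes=2)
    print(len(c))  # 4: n_trees, n_features, and one cluster count per class
    ```

    A fitness function would decode each chromosome, cluster each class into its k_i sub-classes, train the Random Forest with the encoded tree and feature counts, and return validation accuracy.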

    Modelling of interactions for the recognition of activities in groups of people

    In this research study we adopt a probabilistic modelling of interactions in groups of people, using video sequences, leading to the recognition of their activities. Firstly, we model short smooth streams of localised movement. Afterwards, we partition the scene into regions of distinct movement using maximum a posteriori estimation, fitting Gaussian Mixture Models (GMM) to the movement statistics. Interactions between moving regions are modelled using the Kullback–Leibler (KL) divergence between pairs of statistical representations of the moving regions. Such interactions are considered with respect to relative movement, moving-region location and relative size, and the inter-dependencies of the movement and location dynamics. The proposed methodology is assessed on two different data sets showing different categories of human interactions and group activities.
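    The KL divergence between two multivariate Gaussians has a well-known closed form, shown below for single Gaussian components. This is only an illustrative building block: the KL divergence between full mixtures has no closed form and must be approximated, and the paper’s exact statistical representations are not reproduced here.

    ```python
    import numpy as np

    def kl_gaussian(mu0, S0, mu1, S1):
        """Closed-form KL divergence D(N0 || N1) between two multivariate
        Gaussians with means mu0, mu1 and covariances S0, S1 -- one way to
        compare the movement statistics of a pair of moving regions."""
        k = len(mu0)
        S1_inv = np.linalg.inv(S1)
        diff = mu1 - mu0
        return 0.5 * (np.trace(S1_inv @ S0)
                      + diff @ S1_inv @ diff
                      - k
                      + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

    I = np.eye(2)
    print(kl_gaussian(np.zeros(2), I, np.zeros(2), I))  # 0.0 for identical Gaussians
    ```

    Note the asymmetry: D(N0 || N1) generally differs from D(N1 || N0), so the pairing order of the two regions matters.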

    Smartphone-Based Human Activity Recognition Using CNN in Frequency Domain

    Human activity recognition (HAR) based on smartphone sensors provides an efficient way to study the connection between human physical activities and health issues. In this paper, three feature sets are involved: tri-axial angular velocity data collected from the gyroscope sensor, tri-axial total acceleration data collected from the accelerometer sensor, and the estimated tri-axial body acceleration data. The FFT components of the three feature sets are used to classify activities into six types: walking, walking upstairs, walking downstairs, sitting, standing, and lying. Two kinds of CNN architectures are designed for HAR: Architecture A, in which only one set of features is used at the first convolution layer, and Architecture B, in which two sets of features are combined at the first convolution layer. The validation data set is used to automatically determine the iteration number during the training process. Architecture B is shown to perform better than Architecture A, and it is further improved by varying the number of feature maps at each convolution layer; the configuration producing the best result is selected. Compared with five other HAR methods using CNNs, the proposed method achieves a better recognition accuracy of 97.5% on the UCI HAR dataset.
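    Computing FFT components of a sensor window is the core preprocessing step described here. A minimal sketch follows, assuming the 128-sample, 50 Hz windowing used by the UCI HAR dataset; the function name is illustrative:

    ```python
    import numpy as np

    def fft_features(window):
        """Magnitude of the real FFT of each axis of a sensor window --
        a frequency-domain representation that could feed a CNN."""
        # window: (n_samples, 3) tri-axial signal; rfft along time axis.
        return np.abs(np.fft.rfft(window, axis=0))

    t = np.arange(128) / 50.0                       # 50 Hz sampling
    sig = np.stack([np.sin(2 * np.pi * 5 * t)] * 3, axis=1)  # 5 Hz motion
    feats = fft_features(sig)
    print(feats.shape)  # (65, 3): 128 real samples -> 65 frequency bins
    ```

    Each feature set (gyroscope, total acceleration, body acceleration) transformed this way yields one frequency-domain channel group, which the two architectures then combine at the first convolution layer in different ways.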