5 research outputs found
Implementation of Winnowing Algorithm for Document Plagiarism Detection
Plagiarism prevention efforts are evolving across various sectors. The purpose of this paper is to design and develop a plagiarism checker application, specifically one that reports the percentage of similarity between an original document and a test document. The winnowing algorithm is used because it can detect plagiarism down to the sub-section level of a document. Three validators, consisting of a computational mathematician, a software engineering expert, and end users, test the feasibility of the application. In experiments using several scenarios, the resulting similarity score computed with the winnowing algorithm is 90.12%
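The winnowing fingerprinting idea the abstract relies on can be sketched as follows. This is a minimal illustration, not the paper's implementation: MD5-hashed k-grams, rightmost-minimum window selection, and Jaccard similarity over fingerprints are standard choices, and the parameter values `k` and `w` are illustrative assumptions.

```python
import hashlib

def kgram_hashes(text, k=5):
    """Hash every k-gram of the whitespace-stripped, lowercased text."""
    text = "".join(text.lower().split())
    return [int(hashlib.md5(text[i:i + k].encode()).hexdigest(), 16) % (1 << 32)
            for i in range(len(text) - k + 1)]

def winnow(hashes, w=4):
    """Slide a window of size w over the hashes and keep the rightmost
    minimum of each window; the set of kept hashes is the fingerprint."""
    fingerprint = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        lo = min(window)
        # rightmost index attaining the window minimum
        j = max(x for x in range(w) if window[x] == lo)
        fingerprint.add(hashes[i + j])
    return fingerprint

def similarity(a, b, k=5, w=4):
    """Jaccard similarity between the two documents' fingerprints."""
    fa = winnow(kgram_hashes(a, k), w)
    fb = winnow(kgram_hashes(b, k), w)
    return len(fa & fb) / len(fa | fb)
```

Identical documents score 1.0; unrelated documents share few k-grams and score near 0, so the percentage reported in the paper corresponds to the fingerprint overlap.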
Abnormal driving detection with normalized driving behavior data: a deep learning approach
Abnormal driving may pose serious danger to both the driver and the public. Existing detectors of abnormal driving behavior are mainly based on shallow models, which require large quantities of labelled data. The acquisition and labelling of abnormal driving data are, however, difficult, labour-intensive and time-consuming. This situation inspires us to rethink the abnormal driving detection problem and to apply deep architecture models. In this study, we establish a novel deep-learning-based model for abnormal driving detection. A stacked sparse autoencoder model is used to learn generic driving behavior features. The model is trained in a greedy layer-wise fashion. As far as the authors know, this is the first time a deep learning approach using autoencoders as building blocks has been applied to represent driving features for abnormal driving detection. In addition, a denoising model is added to the algorithm to increase the robustness of feature expression. The dropout technique is introduced into the entire training process to avoid overfitting. Experiments carried out on our self-created driving behavior dataset demonstrate that the proposed scheme achieves superior performance for abnormal driving detection compared to the state of the art
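The greedy layer-wise training the abstract describes can be sketched in plain NumPy: each sigmoid autoencoder is trained to reconstruct its input (optionally with masking noise, for the denoising variant), and the next layer then trains on the previous layer's codes. This is an illustrative toy on random data, not the authors' network; the layer sizes, learning rate, and the omission of the sparsity penalty and dropout are simplifying assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, hidden, epochs=200, lr=0.5, noise=0.0, rng=None):
    """Train one sigmoid autoencoder to reconstruct X by plain gradient
    descent on squared error; noise > 0 masks inputs (denoising).
    Returns the encoder parameters (W, b)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        Xin = X * (rng.random(X.shape) > noise) if noise else X
        H = sigmoid(Xin @ W1 + b1)   # encode
        Y = sigmoid(H @ W2 + b2)     # decode
        # backprop of the squared-error loss through both sigmoids
        dY = (Y - X) * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dY / n;  b2 -= lr * dY.mean(0)
        W1 -= lr * Xin.T @ dH / n; b1 -= lr * dH.mean(0)
    return W1, b1

def encode(X, params):
    W, b = params
    return sigmoid(X @ W + b)

# Greedy layer-wise stacking: layer 2 trains on layer 1's codes.
rng = np.random.default_rng(42)
X = rng.random((200, 16))                       # toy "driving behavior" data
layer1 = train_autoencoder(X, hidden=8, noise=0.1, rng=rng)
codes1 = encode(X, layer1)
layer2 = train_autoencoder(codes1, hidden=4, rng=rng)
features = encode(codes1, layer2)               # final learned representation
```

In the paper's setting, the final features would feed a classifier for normal versus abnormal driving; here the point is only the layer-by-layer pre-training schedule.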
Beyond Cumulative Sum Charting in Non-Stationarity Detection and Estimation
In computer science, stochastic processes, and industrial engineering, stationarity is often taken to imply a stable, predictable flow of events, and non-stationarity, consequently, a departure from such a flow. Efficient detection and accurate estimation of non-stationarity are crucial in understanding the evolution of the governing dynamics. Pragmatic considerations include protecting human lives and property in the context of devastating processes such as earthquakes or hurricanes. Cumulative Sum (CUSUM) charting, the prevalent technique for weeding out such non-stationarities, suffers from assumptions of a priori knowledge of the pre- and post-change process parameters and constructs such as time discretization. In this paper, we propose two new ways in which non-stationarity may enter an evolving system: an easily detectable way, which we term strong corruption, where the post-change probability distribution is deterministically governed, and an imperceptible way, which we term hard detection, where the post-change distribution is a probabilistic mixture of several densities. In addition, by combining the ordinary and switched trends of incoming observations, we develop a new trend ratio statistic to detect whether a stationary environment has changed. Surveying a variety of distance metrics, we examine several parametric and non-parametric options in addition to the established CUSUM and find that the trend ratio statistic performs better under the especially difficult scenario of hard detection. Simulations (from both deterministic and mixed inter-event time densities), sensitivity-specificity type analyses, and estimated time-of-change distributions enable us to track the ideal detection candidate under various non-stationarities. Applications to two real data sets, sampled from volcanology and weather science, demonstrate how the estimated change points agree with those obtained in some of our previous works using different methods.
Incidentally, this study sheds light on the inverse nature of dependence between the Hawaiian volcanoes Kilauea and Mauna Loa and demonstrates how inhabitants of the now-restless Kilauea may be relocated to Mauna Loa to minimize the loss of lives and moving costs
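For reference, the baseline CUSUM procedure the paper improves upon can be sketched as below. Note that it needs both the pre-change and post-change means up front, which is exactly the a priori knowledge the abstract criticizes. The data, means, and threshold here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cusum_detect(x, mu0, mu1, threshold):
    """One-sided CUSUM for an upward shift in mean.
    Accumulates the drift statistic g_t = max(0, g_{t-1} + x_t - k),
    where k = (mu0 + mu1)/2, and raises an alarm once g_t exceeds
    `threshold`. Requires mu0 and mu1 to be known in advance."""
    g = 0.0
    k = (mu0 + mu1) / 2.0
    for t, xt in enumerate(x):
        g = max(0.0, g + (xt - k))
        if g > threshold:
            return t  # index of the first alarm
    return None       # no change detected

rng = np.random.default_rng(1)
# stationary segment (mean 0) followed by a shifted segment (mean 2)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
alarm = cusum_detect(x, mu0=0.0, mu1=2.0, threshold=10.0)
```

With the change injected at index 100, the statistic drifts upward at roughly one unit per observation after the change, so the alarm fires shortly afterwards; the trend ratio statistic of the paper targets the harder mixture-distribution case where mu1 is not a single known value.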
Shortest Route at Dynamic Location with Node Combination-Dijkstra Algorithm
Abstract— Online transportation has become a basic requirement of the general public, supporting all activities such as going to work, to school, or on vacation to the sights. Public transportation services compete to provide the best service so that consumers feel comfortable using what is offered; one aspect of this is the search for the shortest route when picking up a buyer or delivering to a destination. The node combination method can minimize memory usage and is more optimal than A* and Ant Colony for shortest-route search in the manner of Dijkstra's algorithm, but it cannot store the history of nodes that have been passed. Consequently, the node combination algorithm is very good at finding the shortest distance, but not the shortest route. This paper modifies the node combination algorithm to solve the problem of finding the shortest route at a dynamic location obtained from the transport fleet, displaying the nodes that form the shortest distance; the result is implemented in a geographic information system in the form of a map to make the system easy to use.
Keywords— Shortest Path, Dijkstra Algorithm, Node Combination, Dynamic Location
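The distance-versus-route distinction the abstract draws can be illustrated with a standard Dijkstra implementation: keeping a predecessor map is precisely what allows the actual route, and not just its length, to be recovered. This is a generic textbook sketch, not the paper's modified node-combination algorithm, and the example graph is invented for illustration.

```python
import heapq

def dijkstra(graph, src, dst):
    """Standard Dijkstra over an adjacency dict {u: {v: weight}}.
    `prev` records each node's predecessor on the best path found,
    i.e. the node history that plain node combination does not keep."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already relaxed via a shorter path
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # walk the predecessor map back from dst to reconstruct the route
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return dist[dst], path[::-1]

graph = {
    "A": {"B": 4, "C": 1},
    "C": {"B": 2, "D": 5},
    "B": {"D": 1},
    "D": {},
}
length, route = dijkstra(graph, "A", "D")  # shortest path A -> C -> B -> D
```

Dropping the `prev` bookkeeping still yields the correct shortest distance, which is the trade-off the abstract attributes to the unmodified node combination method.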