No NAT'd User left Behind: Fingerprinting Users behind NAT from NetFlow Records alone
It is generally recognized that the traffic generated by an individual
connected to a network acts as a biometric signature. Several tools exploit
this fact to fingerprint and monitor users. Often, though, these tools assume
access to the entire traffic stream, including IP addresses and payloads. This
is rarely feasible, since both performance and privacy would be negatively
affected. In reality, most ISPs convert user traffic into NetFlow records, a
concise representation that omits, for instance, any payloads. More
importantly, large and distributed networks are usually NAT'd, so a few IP
addresses may be associated with thousands of users. We devised a new
fingerprinting framework that overcomes these hurdles. Our system analyzes
large volumes of network traffic represented as NetFlow records in order to
track people. It does so by accurately inferring when users are connected to
the network and which IP addresses they are using, even when thousands of
users are hidden behind NAT. Our prototype implementation was deployed and
tested within an existing large metropolitan WiFi network serving about
200,000 users, with an average load of more than 1,000 users simultaneously
connected behind only 2 NAT'd IP addresses. Our solution turned out to be very
effective, with an accuracy greater than 90%. We also devised new tools, and
refined existing ones, that may be applied to other contexts related to
NetFlow analysis.
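The abstract does not detail the fingerprinting algorithm, but the core idea, matching behavioral traffic profiles rather than IP addresses, can be sketched. The following is an illustrative assumption, not the authors' method: each user signature is a normalized histogram of (destination port, protocol) pairs taken from NetFlow-like records, and an unlabeled batch of flows is matched to the closest known signature by cosine similarity.

```python
# Minimal sketch of NetFlow-based user fingerprinting (illustrative only;
# not the paper's published algorithm). A user's "signature" is modeled as
# a normalized histogram of (dst_port, proto) pairs, and an unlabeled batch
# of flows is matched to the nearest known signature.
from collections import Counter
from math import sqrt

def signature(flows):
    """Build a normalized (dst_port, proto) histogram from NetFlow-like dicts."""
    counts = Counter((f["dst_port"], f["proto"]) for f in flows)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def cosine(a, b):
    """Cosine similarity between two sparse histograms."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_user(unknown_flows, known_signatures):
    """Return the known user whose signature best matches the unknown flows."""
    sig = signature(unknown_flows)
    return max(known_signatures, key=lambda u: cosine(sig, known_signatures[u]))

# Hypothetical usage with two toy flow records.
known = {"alice": signature([{"dst_port": 443, "proto": "tcp"},
                             {"dst_port": 53, "proto": "udp"}])}
print(match_user([{"dst_port": 443, "proto": "tcp"}], known))
```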
Development and Applications of Similarity Measures for Spatial-Temporal Event and Setting Sequences
Similarity or distance measures between data objects are applied frequently in many fields, such as geography, environmental science, biology, economics, computer science, linguistics, logic, business analytics, and statistics. One area where similarity measures are particularly important is the analysis of spatiotemporal event sequences and their associated environs or settings. This dissertation develops a framework of modeling, representation, and new similarity measure construction for sequences of spatiotemporal events and corresponding settings, which can be applied to different event data types and used in different areas of data science.

The first core part of this dissertation presents a matrix-based spatiotemporal event sequence representation that unifies punctual and interval-based representations of events. This framework supports different event data types and provides support for data mining, sequence classification, and clustering. The similarity measure is based on a modified Jaccard index with temporal order constraints and accommodates different event data types. The approach is demonstrated on simulated data, and the performance of the similarity measures is evaluated with a k-nearest neighbor (k-NN) classification test on synthetic datasets. The similarity measures are then incorporated into a clustering method, and their usefulness is demonstrated in a case study of event sequences extracted from space-time series of a water quality monitoring system.

The dissertation further proposes a new similarity measure for event setting sequences, which describe the space and time in which events occur. While similarity measures for spatiotemporal event sequences have been studied, settings and setting sequences have not yet been considered. In modeling event setting sequences, spatial and temporal scales are used to define the bounds of the setting, and dynamic variables are incorporated along with static ones. Using a matrix-based representation and an extended Jaccard index, new similarity measures are developed that allow for all variable data types. Coupling these similarity measures with other multivariate statistical analysis approaches, results from a case study involving setting sequences and pollution event sequences associated with the same monitoring stations support the hypothesis that more similar spatial-temporal settings or setting sequences may generate more similar events or event sequences.

To test the scalability of the spatiotemporal event sequence (STES) similarity measure on a larger dataset, and its application in a different field, the dissertation compares and contrasts the prospective space-time scan statistic with the STES similarity approach for identifying COVID-19 hotspots. The COVID-19 pandemic has highlighted the importance of detecting hotspots or clusters of COVID-19 to give decision makers at various levels better information for managing the distribution of human and technical resources as the outbreak in the USA continues to grow. The prospective space-time scan statistic has been used to help identify emerging disease clusters, yet results from this approach can encounter limitations imposed by the spatial constraints of the scanning window.

The STES-based approach adapted to this pandemic context computes the similarity of evolving normalized COVID-19 daily cases by county and clusters these to identify counties with similarly evolving COVID-19 case histories. The dissertation analyzes the spread of COVID-19 within the continental US through four periods beginning in late January 2020, using the COVID-19 datasets maintained by the Johns Hopkins University Center for Systems Science and Engineering (CSSE). Results of the two approaches complement each other and, taken together, can aid in tracking the progression of the pandemic. Overall, the dissertation highlights the importance of developing similarity measures for analyzing spatiotemporal event sequences and associated settings, which can be applied to different event data types and used for data mining, sequence classification, and clustering.
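As an illustration of the kind of measures described, here is a minimal sketch of a plain Jaccard index and an extended (Tanimoto) Jaccard on matrix-based event sequence representations. It omits the dissertation's temporal order constraints, and the matrices shown are synthetic examples.

```python
# Illustrative sketch of Jaccard-style similarities on matrix-based event
# sequence representations (rows = event types, columns = time steps). This
# mirrors the general idea only, not the dissertation's exact
# temporal-order-constrained formulation.
import numpy as np

def binary_jaccard(A, B):
    """Jaccard index |A and B| / |A or B| for binary event matrices."""
    A, B = A.astype(bool), B.astype(bool)
    union = np.logical_or(A, B).sum()
    return np.logical_and(A, B).sum() / union if union else 1.0

def extended_jaccard(A, B):
    """Extended (Tanimoto) Jaccard for real-valued matrices, allowing
    continuous setting variables alongside binary event indicators."""
    a, b = A.ravel(), B.ravel()
    dot = a @ b
    denom = a @ a + b @ b - dot
    return dot / denom if denom else 1.0

# Example: two sequences of 3 event types over 5 time steps.
A = np.array([[1, 0, 1, 0, 0], [0, 1, 0, 0, 1], [0, 0, 0, 1, 0]])
B = np.array([[1, 0, 1, 0, 0], [0, 1, 0, 1, 0], [0, 0, 0, 1, 0]])
print(binary_jaccard(A, B), extended_jaccard(A, B))  # both 4/6 here
```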
User-centered visual analysis using a hybrid reasoning architecture for intensive care units
One problem with Intensive Care Unit information systems is that, in some cases, they can produce a very dense display of data. To keep the increasing volumes of data readable and to preserve an overview, special features are required (e.g., data prioritization, clustering, and selection mechanisms), together with analytical methods (e.g., temporal data abstraction, principal component analysis, and event detection). This paper addresses the problem of improving the integration of the visual and analytical methods applied to medical monitoring systems. We present a knowledge- and machine learning-based approach that supports the knowledge discovery process with appropriate analytical and visual methods. It can benefit the development of user interfaces for intelligent monitors that assist with the detection and explanation of new, potentially threatening medical events. The proposed hybrid reasoning architecture provides an interactive graphical user interface for adjusting the parameters of the analytical methods based on the user's task at hand. The action sequences the user performs on the graphical user interface are consolidated in a dynamic knowledge base through a specific hybrid reasoning scheme that integrates symbolic and connectionist approaches. These acquired sequences of expert knowledge can ease the emergence of knowledge during similar experiences and positively impact the monitoring of critical situations. The resulting graphical user interface, incorporating user-centered visual analysis, facilitates a natural and effective representation of clinical information for patient care.
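As a rough illustration of consolidating GUI action sequences into a dynamic knowledge base, the hypothetical sketch below stores each expert action sequence under a clinical context and retrieves the most similar past sequence when a new one begins. All names and the similarity function are assumptions, not the paper's architecture.

```python
# Hypothetical sketch of a "dynamic knowledge base" of GUI action sequences:
# sequences observed during an expert's analysis are stored with their
# clinical context, and the closest past case is suggested when a similar
# situation recurs. Context labels and actions are illustrative.
from difflib import SequenceMatcher

class ActionKnowledgeBase:
    def __init__(self):
        self.cases = []  # (context_label, action_sequence) pairs

    def record(self, context, actions):
        """Consolidate an expert's action sequence under its clinical context."""
        self.cases.append((context, list(actions)))

    def suggest(self, actions):
        """Return the stored case whose sequence is most similar to the current one."""
        ratio = lambda seq: SequenceMatcher(None, actions, seq).ratio()
        return max(self.cases, key=lambda c: ratio(c[1]), default=None)

kb = ActionKnowledgeBase()
kb.record("tachycardia_alarm", ["zoom_hr", "overlay_bp", "flag_event"])
print(kb.suggest(["zoom_hr", "overlay_bp"]))
```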
Big Data Preprocessing for Multivariate Time Series Forecast
Big data platforms ease the collection and organization of large datasets of varying content. A downside is the heavy preprocessing required before their data can be analyzed with conventional techniques. Time series data in particular is challenging to transform from the platform-provided raw format into the tables of feature and target values required by supervised machine learning models. This thesis presents an experiment in preprocessing a collection of multivariate time series extracted from a data platform and forecasting it with machine learning models such as neural networks and support vector machines. Established techniques from the data preprocessing and time series analysis literature are used, but custom solutions are also developed, such as a log-level-based target variable and value-distribution-based feature elimination. No significant forecasting accuracy is achieved, which indicates the difficulty of modelling big data. The suspected reason is inadequate validation of model parameters and preprocessing decisions, which would require excessive testing to improve.
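The windowing step the thesis describes, transforming a multivariate time series into a table of lagged features and a future target, might look like the following sketch; the column names and the lag and horizon parameters are placeholders, not the thesis's actual configuration.

```python
# Minimal sketch of turning a multivariate time series into a
# supervised-learning table: lagged copies of each column become features,
# and a future value of one column becomes the target.
import pandas as pd

def make_supervised(df, target_col, n_lags=3, horizon=1):
    """Build lagged feature columns and a `horizon`-steps-ahead target."""
    out = pd.DataFrame(index=df.index)
    for col in df.columns:
        for lag in range(1, n_lags + 1):
            out[f"{col}_lag{lag}"] = df[col].shift(lag)
    out["target"] = df[target_col].shift(-horizon)
    return out.dropna()  # drop rows left incomplete by the shifting

# Example with two hypothetical channels, including a log-level series.
ts = pd.DataFrame({"cpu_load": range(10),
                   "log_level": [0, 1, 0, 2, 1, 0, 0, 3, 1, 0]})
print(make_supervised(ts, target_col="log_level").head())
```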
Design of monitoring applications and prediction of key industrial metrics: IIoT + AI
Global industry has undergone deep changes in recent years owing to the successful development and integration of new technologies. Industry 4.0 has emerged as a new standard for achieving efficiency and improving processes. Among the technologies used in Industry 4.0, the Industrial Internet of Things (IIoT) enables real-time, intelligent, and autonomous access, collection, analysis, communication, and exchange of process, product, and/or service information within the industrial environment, so as to optimize overall production value. Given its importance, this project proposes a methodology for extracting, analyzing, and using the data gathered by IIoT devices in order to extract meaningful information and to predict key industrial metrics with Artificial Intelligence. To fully validate the proposed methodology, a practical implementation of all the mentioned aspects is carried out: a study of an industrial process in the wastewater treatment field, using data collected by an Industrial Internet of Things infrastructure. Key time series metrics, such as total organic carbon (TOC) and carbon removal performance (CRP), are modelled with the Machine Learning models XGBoost Regressor, Multi-Layer Perceptron (MLP) Regressor, and Support Vector Regressor (SVR), and a dashboard with an operational panel and a decision-making panel is implemented to help anticipate possible deviations in the performance of the industrial process.
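A minimal sketch of the modelling step, fitting the three regressor families the project names to a synthetic stand-in for a metric such as TOC, could look like this; the feature set and data are placeholders, not the project's dataset.

```python
# Sketch of fitting the three regressors named in the abstract (XGBoost,
# MLP, SVR) to predict a key metric. The features and target here are
# synthetic placeholders standing in for IIoT sensor data and TOC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # e.g., flow, pH, temperature, turbidity
y = X @ [0.5, -0.2, 0.1, 0.3] + rng.normal(scale=0.1, size=500)  # TOC proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "XGBoost": XGBRegressor(n_estimators=200),
    "MLP": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000),
    "SVR": SVR(C=10.0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R^2:", round(model.score(X_te, y_te), 3))
```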