11 research outputs found

    Towards interactive betweenness centrality estimation for transportation network using capsule network

    Includes bibliographical references. 2022 Fall. The node importance of a graph needs to be estimated for many graph-based applications. One of the most popular metrics for measuring node importance is betweenness centrality, which measures the amount of influence a node has over the flow of information in a graph. However, the computational complexity of calculating betweenness centrality is extremely high for large-scale graphs. This is especially true when analyzing the road networks of states with millions of nodes and edges, making it infeasible to calculate their betweenness centrality (BC) in real time using traditional iterative methods. Applying a machine learning model to predict the importance of nodes provides an opportunity to address this issue. Graph Neural Networks (GNNs), which have been gaining popularity in recent years, are particularly well suited for graph analysis. In this study, we propose RoadCaps, a deep learning architecture that estimates BC by merging Capsule Neural Networks with Graph Convolutional Networks (GCNs), a convolution-based class of GNNs. We target the effective aggregation of features from neighboring nodes to approximate the correct BC of a node. We leverage the pattern-capturing strength of the capsule network to estimate node-level BC from the high-level representations generated by the GCN block. We further compare the accuracy and effectiveness of RoadCaps with two other GCN-based models, and we analyze its efficiency and effectiveness along aspects such as scalability and robustness. We perform an empirical benchmark on the road network of the entire state of California. The overall analysis shows that our proposed network provides more accurate road importance estimates, which is helpful for rapid response planning such as evacuations during wildfires and floods.
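
    As a reference point for the iterative baseline the abstract contrasts with, the sketch below computes exact betweenness centrality with networkx (Brandes' algorithm); the small grid graph is an illustrative stand-in, not the California road network used in the study.

        # Exact betweenness centrality via Brandes' algorithm in networkx.
        import networkx as nx

        # A small grid stands in for a road network (illustrative assumption).
        G = nx.grid_2d_graph(20, 20)

        # One shortest-path pass per source node, roughly O(|V||E|) on
        # unweighted graphs -- infeasible in real time at state scale.
        bc = nx.betweenness_centrality(G, normalized=True)

        # Report the five most central "intersections".
        top5 = sorted(bc.items(), key=lambda kv: kv[1], reverse=True)[:5]
        for node, score in top5:
            print(node, round(score, 4))

    Learned models such as the one proposed here aim to approximate these scores in a single forward pass instead of re-running the full algorithm whenever the network changes.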

    Hierarchical Syntactic Models for Human Activity Recognition through Mobility Traces

    Recognizing users' daily life activities without disrupting their lifestyle is a key functionality for enabling a broad variety of advanced Smart City services, from energy-efficient management of urban spaces to mobility optimization. In this paper, we propose a novel method for human activity recognition from a collection of outdoor mobility traces acquired through wearable devices. Our method exploits the regularities naturally present in human mobility patterns to construct syntactic models in the form of finite state automata, using an approach known as grammatical inference. We also introduce a similarity measure that accounts for the intrinsic hierarchical nature of such models and allows us to identify the common traits in the paths induced by different activities at various granularity levels. Our method has been validated on a dataset of real traces representing movements of users in a large metropolitan area. The experimental results show the effectiveness of our similarity measure in correctly identifying a set of common coarse-grained activities, as well as their refinement at a finer level of granularity.
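
    For illustration, the sketch below builds a prefix tree acceptor, a common starting point for grammatical inference over symbol sequences; the location symbols and traces are hypothetical, and the authors' hierarchical models and similarity measure are not reproduced here.

        # Build a prefix tree acceptor (PTA) from symbolized mobility traces;
        # state-merging inference (e.g., ALERGIA/RPNI-style) would then
        # generalize it into a smaller automaton.
        def build_pta(traces):
            """Return (transitions, accepting) for a prefix tree acceptor."""
            transitions = {}          # (state, symbol) -> state
            accepting = set()
            next_state = 1            # state 0 is the root
            for trace in traces:
                state = 0
                for symbol in trace:
                    key = (state, symbol)
                    if key not in transitions:
                        transitions[key] = next_state
                        next_state += 1
                    state = transitions[key]
                accepting.add(state)  # end of a trace marks an accepting state
            return transitions, accepting

        # Hypothetical traces: sequences of visited place types.
        traces = [["home", "cafe", "office"],
                  ["home", "cafe", "gym"],
                  ["home", "park"]]
        delta, finals = build_pta(traces)
        print(len(finals), "accepting states;", len(delta), "transitions")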

    Decentralised location-based reputation management system in IoT using blockchain

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. The Internet of Things (IoT) allows an object to connect to the internet and observe or interact with a physical phenomenon. Communication technologies allow an IoT device to discover and communicate with another device to exchange services, much as humans do in their social networks. Knowing the reputation of another device is important for deciding whether to trust it before establishing a new connection, so as to avoid unexpected behaviour. The reputation of a device can also vary depending on its geographical location. This thesis therefore proposes an architecture to manage the reputation values of end devices in an IoT system based on the area in which they are located. To avoid a heavy workload in the cloud layer, the proposed architecture follows the cloud-fog-edge concept by adding an intermediate layer called the fog layer. In this layer, multiple smaller devices are distributed, so blockchain technology is used to keep reputation management consistent and fault-tolerant across the different nodes in the layer. Ethereum, a blockchain implementation, was used in this work to ease the management functionality, because it allows the blockchain network to run a decentralised application through Smart Contracts. The location-based part of the system stores geographical areas in the Smart Contracts and makes the reputation values subject to different regions depending on a device's geographical location. To reduce the spatial computation complexity in the Smart Contracts, the geographical data are geocoded by one of two spatial indexing techniques, Geohash and S2. This work presents three experiments: testing the proposed architecture, deploying the architecture on IoT devices, and comparing the two geocoding techniques in the Smart Contracts. It additionally proposes a compression algorithm for the geocoded data. The results show that the proposed architecture is able to manage reputation values based on location in a decentralised way. The test case scenario also demonstrated that the IoT devices were able to work as blockchain nodes, and that they could discover the service providers in an area and obtain their reputation values by querying through the fog layer. Lastly, the comparison experiment showed that Geohash performed better inside the developed Smart Contracts, while S2 encoded the data much faster outside the Smart Contracts. The proposed compression algorithm for geocoded data achieved a significant size reduction, but it was computationally heavier in the developed Smart Contracts.
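
    As a minimal sketch of the Geohash step, the pure-Python encoder below shows how a latitude/longitude pair is geocoded into a short, prefix-comparable string before being stored in a Smart Contract; this illustrates the standard Geohash algorithm, not the thesis implementation, and the sample coordinates are hypothetical.

        # Minimal Geohash encoder: interleave longitude/latitude bisection
        # bits and emit base32 characters, 5 bits per character.
        _BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

        def geohash_encode(lat, lon, precision=7):
            lat_lo, lat_hi = -90.0, 90.0
            lon_lo, lon_hi = -180.0, 180.0
            code, bits, ch = [], 0, 0
            even = True                      # even bit -> longitude, odd -> latitude
            while len(code) < precision:
                if even:
                    mid = (lon_lo + lon_hi) / 2
                    if lon >= mid:
                        ch, lon_lo = ch * 2 + 1, mid
                    else:
                        ch, lon_hi = ch * 2, mid
                else:
                    mid = (lat_lo + lat_hi) / 2
                    if lat >= mid:
                        ch, lat_lo = ch * 2 + 1, mid
                    else:
                        ch, lat_hi = ch * 2, mid
                even = not even
                bits += 1
                if bits == 5:                # flush one base32 character
                    code.append(_BASE32[ch])
                    bits, ch = 0, 0
            return "".join(code)

        # Hypothetical device location (Lisbon).
        print(geohash_encode(38.7223, -9.1393))

    Because nearby points share hash prefixes, region-membership checks inside a contract can reduce to cheap string-prefix comparisons instead of full geometric computation.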

    Supercritical CO2 extraction pilot plant design - towards IoT integration

    Interest in high-pressure technology has increased intensively over recent decades. Supercritical Fluid Extraction (SFE) is a process of growing importance as an alternative to conventional separation processes. SFE uses environmentally friendly CO2 as the extracting agent because of its relatively low critical pressure (7.38 MPa), low critical temperature (304 K), non-hazardous character, and low cost. The process requires the use of high pressures. The extractor vessel (pressure vessel) is the most important piece of equipment in the system; it is where the supercritical conditions must be established and where the extraction occurs. The other devices in the process (separator vessel, heat exchangers, valves, etc.) must likewise be designed for the high pressures used. Safety is the most important factor when dealing with SFE systems, and designing such equipment with full process safety is a very difficult task. Therefore, to achieve the desired high safety level, a reliable control system must be designed as the control and data communication segment of the plant. Various process parameters, such as CO2 mass flow rate and extraction pressure and temperature, affect the extraction process and the quality of the extract; hence these parameters need to be precisely controlled and monitored during extraction. This paper presents the design of a supercritical CO2 extraction laboratory pilot plant and the development of its remote control and supervision system. The developed SFE system (mechanical and electrical components) is compared with existing commercial systems, and its main advantages over them are presented. By enabling remote control and supervision, classical process control is joined with the concept of the Internet of Things (IoT), where the information becomes omnipresent in the vast realm of the Internet.
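
    A minimal sketch of the kind of control-and-telemetry loop such a plant needs, assuming a simulated pressure sensor and a simple proportional controller; the setpoint, gain, and plant response are illustrative values, and a real rig would publish the telemetry over MQTT or HTTP rather than print it.

        # Toy supervisory loop for an SFE rig: hold extractor pressure above
        # the CO2 critical point (7.38 MPa) and emit telemetry records that
        # an IoT gateway could forward. All interfaces are simulated.
        import json, random

        SETPOINT_MPA = 8.0      # operating pressure above the 7.38 MPa critical point
        KP = 0.5                # proportional gain (illustrative value)

        def read_pressure(current):
            """Simulated sensor: true pressure plus measurement noise."""
            return current + random.uniform(-0.05, 0.05)

        pressure = 7.0          # start below setpoint, as during pressurization
        for step in range(10):
            measured = read_pressure(pressure)
            error = SETPOINT_MPA - measured
            pump_command = max(0.0, KP * error)   # pump can only add pressure
            pressure += 0.8 * pump_command        # crude plant response model
            telemetry = {"step": step,
                         "pressure_MPa": round(measured, 3),
                         "pump_cmd": round(pump_command, 3)}
            print(json.dumps(telemetry))          # stand-in for a remote publish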

    Anonymization of Event Logs for Network Security Monitoring

    A managed security service provider (MSSP) must collect security event logs from its customers' networks for monitoring and cybersecurity protection. These logs need to be processed by the MSSP before being displayed to security operation center (SOC) analysts. Employees generate event logs during their working hours at the customers' sites. One challenge is that the collected event logs contain personally identifiable information (PII), visible in clear text to the SOC analysts or to any user with access to the SIEM platform. We explore how pseudonymization can be applied to security event logs to help protect individuals' identities from the SOC analysts while preserving data utility where possible. We compare the impact of different pseudonymization functions on sensitive information or PII: non-deterministic methods provide a higher level of privacy but reduce the utility of the data. The contribution of this thesis is threefold. First, we study the available architectures under different threat models, including their strengths and weaknesses. Second, we study pseudonymization functions and their application to PII fields, benchmarking them individually as well as in our experimental platform. Last, we gather valuable feedback and lessons from SOC analysts based on their experience. Existing works [43, 44, 48, 39] are generally restricted to the anonymization of IP traces, which covers only one part of the SOC analysts' PCAP file inspections. In one of the closest works [47], the authors provide useful, practical anonymization methods for IP addresses, ports, and raw logs.
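
    A minimal sketch of the deterministic versus non-deterministic trade-off discussed above, using Python's standard hmac and secrets modules; the field value and token lengths are illustrative assumptions, not the thesis's benchmarked functions.

        # Deterministic vs non-deterministic pseudonymization of a PII field
        # (e.g., a username in an event log).
        import hmac, hashlib, secrets

        KEY = secrets.token_bytes(32)   # held by the MSSP, never by the analysts

        def pseudo_deterministic(value: str) -> str:
            """Same input -> same token, so events stay correlatable."""
            return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

        def pseudo_random(value: str) -> str:
            """Fresh token each time: stronger privacy, correlation is lost."""
            return secrets.token_hex(8)

        for fn in (pseudo_deterministic, pseudo_random):
            # The second call shows whether the mapping is repeatable.
            print(fn.__name__, fn("alice"), fn("alice"))

    Deterministic tokens let an analyst correlate events from the same user across log lines without ever seeing the underlying identity, which is the utility that non-deterministic methods give up.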

    Understanding the Socio-infrastructure Systems During Disaster from Social Media Data

    Our socio-infrastructure systems are becoming more and more vulnerable due to the increased severity and frequency of extreme events every year. Effective disaster management can minimize the damaging impacts of a disaster to a large extent. The ubiquitous use of social media platforms on GPS-enabled smartphones offers a unique opportunity to observe, model, and predict human behavior during a disaster. This dissertation explores the opportunity of using social media data and different modeling techniques towards understanding and managing disasters more dynamically. We focus on four objectives. First, we develop a method to infer individual evacuation behaviors (e.g., evacuation decision, timing, destination) from social media data, building an input-output hidden Markov model to infer evacuation decisions from user tweets. Our findings show that, using geo-tagged posts and text data, a hidden Markov model can capture the dynamics of hurricane evacuation decisions. Second, we develop an evacuation demand prediction model using social media and traffic data. We find that a deep learning model trained on social media and traffic data can predict evacuation traffic demand well up to 24 hours ahead. Third, we present a multi-label classification approach to identify the co-occurrence of multiple types of infrastructure disruptions, considering the sentiment towards a disruption: whether a post reports an actual disruption (negative), a disruption in general (neutral), or that the poster is not affected by a disruption (positive). We validate our approach on data collected during multiple hurricanes. Fourth, we develop an agent-based model to understand the influence of multiple information sources on risk perception dynamics and evacuation decisions. In this part of the study, we explore the effects of socio-demographic factors and of information sources, such as social connectivity, neighborhood observation, and weather information and its credibility, in forming risk perception dynamics and evacuation decisions.
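
    A minimal sketch of the multi-label disruption classification step using scikit-learn; the example posts, label sets, and pipeline choices are illustrative assumptions, not the dissertation's actual models or hurricane data.

        # Multi-label classification of infrastructure disruptions from post
        # text: one binary column per disruption type, so co-occurring
        # disruptions can be predicted for a single post.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import MultiLabelBinarizer

        posts = ["power is out and no cell signal in our block",
                 "roads flooded near the bridge, power still on",
                 "everything fine here, stores open"]
        labels = [{"power", "telecom"}, {"transport"}, set()]

        mlb = MultiLabelBinarizer()
        Y = mlb.fit_transform(labels)       # indicator matrix, one column per type

        clf = make_pipeline(TfidfVectorizer(),
                            OneVsRestClassifier(LogisticRegression()))
        clf.fit(posts, Y)

        pred = clf.predict(["no electricity and the highway is closed"])
        print(mlb.inverse_transform(pred))  # co-occurring disruption types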