21 research outputs found

    Institutional vs. Non-institutional use of Social Media during Emergency Response: A Case of Twitter in 2014 Australian Bush Fire

    Full text link
    Social media plays a significant role in the rapid propagation of information when disasters occur. Among the four phases of the disaster management life cycle (prevention, preparedness, response, and recovery), this paper focuses on the use of social media during the response phase. It empirically examines the use of microblogging platforms by Emergency Response Organisations (EROs) during extreme natural events, and distinguishes the use of Twitter by EROs from its use by digital volunteers during a bushfire that occurred in the Australian state of Victoria in early February 2014. We analysed 7,982 tweets on this event. While theories such as World System Theory and Institutional Theory have traditionally focused on the role of powerful institutional information outlets, we found that platforms like Twitter challenge this notion by sharing power between large institutional players (e.g. EROs) and smaller non-institutional ones (e.g. digital volunteers) in the dissemination of disaster information. Our results highlight that both large EROs and individual digital volunteers proactively used Twitter to disseminate and distribute fire-related information. We also found that the contents of tweets were more informative than directive, and that while the total number of messages posted by the top EROs was higher than that of the non-institutional accounts, the non-institutional accounts attracted a greater number of retweets.

    Co-location detection on the Cloud

    Get PDF
    In this work, we focus on the problem of co-location as a first step toward conducting cross-VM attacks such as Prime+Probe or Flush+Reload in commercial clouds. We demonstrate and compare three co-location detection methods: a cooperative last-level cache (LLC) covert channel, software profiling on the LLC, and memory bus locking. We conduct our experiments on three commercial clouds: Amazon EC2, Google Compute Engine, and Microsoft Azure. Finally, we show that both cooperative and non-cooperative co-location with specific targets is still possible on major cloud services.
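
    All three detection methods in this abstract reduce to the same decision: do memory operations slow down when a suspected neighbour stresses a shared resource? Below is a minimal, hypothetical Python sketch of just that thresholding logic over synthetic latency samples; the actual detectors rely on native code, hardware timestamp counters, and carefully chosen eviction sets, and every name and number here is illustrative.

        # Hypothetical sketch: deciding co-location from probe latencies.
        # Synthetic nanosecond samples stand in for cycle counts that a
        # real detector would gather with rdtsc or similar.
        import statistics

        def detect_colocation(baseline_ns, probe_ns, factor=1.5):
            """Flag co-location if latency while the cooperating 'sender'
            VM stresses the shared resource (e.g. locks the memory bus)
            rises well above the quiet baseline."""
            return statistics.mean(probe_ns) > factor * statistics.mean(baseline_ns)

        # Toy traces: bus locking by a co-located sender should inflate
        # the receiver's memory access latencies.
        baseline = [80, 82, 79, 85, 81]      # sender idle
        probe = [150, 160, 148, 155, 162]    # sender active
        print(detect_colocation(baseline, probe))  # True -> likely co-located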

    Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments

    Get PDF
    Decentralized systems are a subset of distributed systems in which multiple authorities control different components and no single authority is fully trusted by all. This implies that any component in a decentralized system is potentially adversarial. We review fifteen years of research on decentralization and privacy, and provide an overview of key systems as well as key insights for designers of future systems. We show that decentralized designs can enhance privacy, integrity, and availability, but also require careful trade-offs in terms of system complexity, properties provided, and degree of decentralization; these trade-offs need to be understood and navigated by designers. We argue that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, is necessary to build scalable and successful privacy-preserving decentralized systems.

    Spam Reviews Detection in the Time of COVID-19 Pandemic: Background, Definitions, Methods and Literature Analysis

    Get PDF
    This work has been partially funded by projects PID2020-113462RB-I00 (ANIMALICOS), granted by the Ministerio Espanol de Economia y Competitividad; projects P18-RT-4830 and A-TIC-608-UGR20, granted by the Junta de Andalucia; and project B-TIC-402-UGR18 (FEDER and Junta de Andalucia). During the recent COVID-19 pandemic, people were forced to stay at home to protect their own and others' lives. As a result, remote technology is being considered more in all aspects of life. One important example of this is online reviews, where the number of reviews increased rapidly over the last two years according to Statista and Rize reports. People started to depend more on these reviews as a result of the mandatory physical distancing employed in all countries. With no one to speak to about feedback on products and services, reading and posting online reviews has become an important part of discussion and decision-making, especially for individuals and organizations. However, the growth in the use of online reviews also provoked an increase in spam reviews. Spam reviews can be characterized as fraudulent, malicious, or fake reviews written for profit or publicity. A number of spam detection methods have been proposed to solve this problem. As part of this study, we outline the concepts and detection methods of spam reviews, along with their implications in the environment of online reviews. The study covers all spam review detection studies from 2020 and 2021; in other words, we analyze and examine all works presented during the COVID-19 situation. We then highlight the differences between the works before and after the pandemic in terms of review behavior and research findings. Furthermore, nine different detection approaches are classified in order to investigate their specific advantages, limitations, and ways to improve their performance. A literature analysis, discussion, and future directions are also presented.
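
    As a concrete illustration of the supervised detection family surveyed above, the following hedged Python sketch trains a TF-IDF plus logistic-regression classifier on a handful of made-up reviews. It is not any specific method from the surveyed literature; the toy reviews and labels exist only to make the snippet runnable.

        # Toy supervised spam-review detector: TF-IDF features feeding a
        # linear classifier. Reviews and labels are invented for the demo.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        reviews = [
            "Great product, works exactly as described.",
            "BEST DEAL EVER!!! Visit my profile for discount codes!!!",
            "Battery life is shorter than advertised but acceptable.",
            "Amazing amazing amazing, five stars, buy buy buy now!!!",
        ]
        labels = [0, 1, 0, 1]  # 0 = genuine, 1 = spam (toy labels)

        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression())
        model.fit(reviews, labels)
        print(model.predict(["Unbelievable offer, click now for free gifts!!!"]))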

    Cloud service discovery and analysis: a unified framework

    Get PDF
    Over the past few years, cloud computing has become increasingly attractive as a computing paradigm due to its high flexibility in provisioning on-demand computing resources that are consumed as services over the Internet. Issues around cloud service discovery have been considered by many researchers in recent years; given the highly dynamic and distributed nature of cloud computing, the lack of standardized description languages, the diversity of services offered at different levels, and the non-transparent nature of cloud services, this research area has gained significant attention. Robust cloud service discovery approaches will not only assist the promotion and growth of cloud service customers and providers, but also make a meaningful contribution to the acceptance and development of cloud computing. In this dissertation, we propose an automated approach for the discovery of cloud services, and we conduct extensive experiments to validate it. The results demonstrate the applicability of our approach and its capability to effectively identify and categorize cloud services on the Internet. First, we develop a novel approach to build a cloud service ontology. The ontology is initially built on the National Institute of Standards and Technology (NIST) cloud computing standard; we then add new concepts by automatically analyzing real cloud services with a cloud service ontology algorithm. We also propose a cloud service categorization method that uses term frequency to weigh cloud service ontology concepts and cosine similarity to measure the similarity between cloud services; the categorization algorithm groups cloud services into clusters for effective categorization. In addition, we use machine learning techniques to identify cloud services in real environments. Our cloud service identifier is built using features extracted from real cloud service providers; we determine several features, such as the similarity function, semantic ontology, cloud service description, and cloud service components, that can be used effectively to identify cloud services on the Web. We also build a unified model that exposes a cloud service's features to search users, easing the process of searching and comparing among a large number of cloud services by building cloud service profiles. Furthermore, we develop a cloud service discovery engine with the capability to crawl the Web automatically and collect cloud services. The collected datasets include metadata on nearly 7,500 real-world cloud service providers and nearly 15,000 services (2.45 GB). The experimental results show that our approach (i) is able to build a cloud service ontology automatically and effectively, (ii) is robust in identifying cloud services in real environments, and (iii) is scalable in providing more details about cloud services. Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 201
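
    The categorization step above hinges on term-frequency weighting of ontology concepts and cosine similarity between the resulting vectors. A minimal Python sketch of that computation follows; the concept list and service descriptions are invented, and the dissertation's actual ontology and weighting are richer than this.

        # Sketch of TF weighting over ontology concepts plus cosine
        # similarity between services. Concepts and texts are illustrative.
        import math
        from collections import Counter

        CONCEPTS = ["storage", "compute", "saas", "paas", "iaas", "backup"]

        def tf_vector(description):
            """Count how often each ontology concept occurs in the text."""
            counts = Counter(description.lower().split())
            return [counts[c] for c in CONCEPTS]

        def cosine(u, v):
            dot = sum(a * b for a, b in zip(u, v))
            norm = math.hypot(*u) * math.hypot(*v)
            return dot / norm if norm else 0.0

        s1 = tf_vector("cloud storage and backup storage service iaas")
        s2 = tf_vector("object storage with backup and iaas pricing")
        s3 = tf_vector("managed saas email and compute dashboard")
        print(cosine(s1, s2), cosine(s1, s3))  # s1 clusters with s2, not s3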

    Micro-architectural Threats to Modern Computing Systems

    Get PDF
    With the abundance of cheap computing power and high-speed internet, cloud and mobile computing have replaced traditional computers. As computing models evolved, newer CPUs were fitted with additional cores and larger caches to run multiple processes concurrently. In direct relation to these changes, shared hardware resources emerged and became a source of side-channel leakage. Although side-channel attacks have been known for a long time, these changes made them practical on shared hardware systems. In addition to side-channels, concurrent execution also opened the door to practical quality-of-service (QoS) attacks. The goal of this dissertation is to identify side-channel leakages and architectural bottlenecks on modern computing systems and to introduce exploits. To that end, we introduce side-channel attacks on cloud systems to recover sensitive information such as code execution, software identity, and cryptographic secrets. Moreover, we introduce a hard-to-detect QoS attack that can cause over 90% slowdown, and we demonstrate it by designing an Android app that causes degradation via memory bus locking. While practical and quite powerful, mounting side-channel attacks is akin to listening in on a private conversation in a crowded train station: significant manual labor is required to de-noise and synchronize the leakage traces and extract features. With this motivation, we apply machine learning (ML) to automate and scale the data analysis. We show that classical machine learning methods, as well as more complicated convolutional neural networks (CNNs), can be trained to extract useful information from side-channel leakage traces. Finally, we propose the DeepCloak framework as a countermeasure against side-channel attacks. We argue that by exploiting adversarial learning (AL), an inherent weakness of ML, as a defensive tool, we can cloak the side-channel trace of a process. With DeepCloak, we show that it is possible to trick highly accurate (over 99% accuracy) CNN classifiers. Moreover, we investigate defenses against AL to determine whether an attacker can overcome DeepCloak by applying adversarial re-training and defensive distillation, and we show that even in the presence of an intelligent adversary that employs such techniques, DeepCloak still succeeds.
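
    To make the ML-for-leakage-analysis claim concrete, here is a hedged, self-contained Python sketch: a stock classifier learns to separate two kinds of synthetic "timing traces", one flat and one with periodic contention spikes. Real traces come from hardware probes and the dissertation's CNNs are far more capable; everything below is synthetic.

        # Toy illustration of ML-driven leakage classification: tell two
        # "processes" apart from synthetic timing traces.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n, length = 200, 64
        flat = rng.normal(100, 5, size=(n, length))   # no secret activity
        spiky = rng.normal(100, 5, size=(n, length))
        spiky[:, ::8] += 40   # periodic spikes, e.g. cache contention

        X = np.vstack([flat, spiky])
        y = np.array([0] * n + [1] * n)
        clf = RandomForestClassifier(n_estimators=50).fit(X[::2], y[::2])
        print("held-out accuracy:", clf.score(X[1::2], y[1::2]))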

    Requirements engineering aspects for sustainable eLearning systems

    Get PDF
    Sustainability in software engineering is about (1) continued functionality and maintainability under changing circumstances, and (2) the effect of that functionality on the surrounding environment, economy, and people. Frequent changes to software requirements negatively affect the sustainability of software systems. To reduce the number of requirement changes and improve sustainability, sustainability requirements have to be considered from the beginning of the requirements engineering stage of software development. Sustainability in requirements engineering has five dimensions: individual, social, technical, economic, and environmental. Most of the existing work analyses only one or two dimensions and ignores the interrelated effects among the others. To address this issue, we selected eLearning systems because they provide a comprehensive example to study. This thesis focuses on analysing the sustainability requirements of eLearning systems with regard to the five sustainability dimensions. The following studies were performed: (1) identifying theoretically the sustainability requirements of eLearning systems, (2) investigating empirically the sustainability of eLearning systems, (3) constructing a methodology for the analysis and evaluation of sustainability requirements of eLearning systems, and (4) evaluating the constructed methodology. To the best of our knowledge, this is the first research investigating the sustainability requirements of eLearning systems across all five sustainability dimensions. Our findings highlight that (1) technical, economic, and environmental sustainability requirements are similar to those of other software domains, whereas individual and social sustainability requirements are specific to the domain of eLearning systems, (2) individual and social sustainability requirements need to be considered and analysed together because of their strong correlation, and (3) culture and gender diversity play an important role in sustainability requirements. On this basis, we developed a framework for analysing the sustainability requirements of software systems, as well as a web-based tool, SuSoftPro (the name stands for Software Sustainability Profiling), that allows requirements engineers to investigate the sustainability of software systems based on their requirements, analyse the sustainability dimensions of software systems, measure the sustainability of each individual requirement, visualise analysis results to support decision-making towards high-quality software, involve stakeholders in rating their requirements against one or more of the five sustainability dimensions, and manage requirement and stakeholder details easily. We evaluated the SuSoftPro framework through case studies, comparative evaluation, and a quantitative questionnaire. Our framework successfully provides a comprehensive view of sustainability requirements analysis, improving attention to sustainability and allowing practitioners to develop sustainable software.
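
    The abstract does not spell out how SuSoftPro turns stakeholder ratings into per-requirement sustainability measures, so the following Python sketch is purely hypothetical: stakeholders rate a requirement against each dimension, and a simple mean stands in for whatever aggregation the tool actually uses.

        # Hypothetical aggregation (the real SuSoftPro formula is not
        # given in the abstract): mean stakeholder rating per dimension.
        from statistics import mean

        DIMENSIONS = ["individual", "social", "technical",
                      "economic", "environmental"]

        def requirement_profile(ratings):
            """ratings maps a dimension to stakeholder scores (1-5)."""
            return {d: mean(ratings[d]) for d in DIMENSIONS if d in ratings}

        # Toy requirement rated by two stakeholders on three dimensions.
        login_req = {"individual": [4, 5], "social": [3, 4], "technical": [5, 5]}
        print(requirement_profile(login_req))
        # -> {'individual': 4.5, 'social': 3.5, 'technical': 5}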

    AI-enabled Automation for Completeness Checking of Privacy Policies

    Get PDF
    Technological advances in information sharing have raised concerns about data protection. Privacy policies contain privacy-related requirements about how the personal data of individuals will be handled by an organization or a software system (e.g., a web service or an app). In Europe, privacy policies are subject to compliance with the General Data Protection Regulation (GDPR). A prerequisite for GDPR compliance checking is to verify whether the content of a privacy policy is complete according to the provisions of GDPR. Incomplete privacy policies might result in large fines for the violating organization, as well as incomplete privacy-related software specifications. Manual completeness checking is both time-consuming and error-prone. In this paper, we propose AI-based automation for the completeness checking of privacy policies. Through systematic qualitative methods, we first build two artifacts to characterize the privacy-related provisions of GDPR, namely a conceptual model and a set of completeness criteria. Then, we develop an automated solution on top of these artifacts by leveraging a combination of natural language processing and supervised machine learning. Specifically, we identify the GDPR-relevant information content in privacy policies and subsequently check it against the completeness criteria. To evaluate our approach, we collected 234 real privacy policies from the fund industry. Over a set of 48 unseen privacy policies, our approach correctly detected 300 of the 334 violations of the completeness criteria, while producing 23 false positives. The approach thus has a precision of 92.9% and a recall of 89.8%. Compared to a baseline that applies keyword search only, our approach yields an improvement of 24.5% in precision and 38% in recall.
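
    Two things are easy to make concrete here. First, the reported figures are internally consistent: precision = 300 / (300 + 23) ≈ 92.9% and recall = 300 / 334 ≈ 89.8%. Second, the shape of the completeness check can be sketched in Python; the cue phrases below stand in for the paper's NLP-plus-ML pipeline, and the criteria names are illustrative placeholders, not the paper's actual criteria.

        # Sketch of completeness checking: find which GDPR-related
        # criteria a policy satisfies, report the rest as violations.
        # Criteria names and cue phrases are illustrative placeholders.
        CRITERIA = {
            "controller identity": ["data controller", "who we are"],
            "processing purpose": ["purpose", "we use your data"],
            "data subject rights": ["right to access", "right to erasure"],
        }

        def unsatisfied(policy_text):
            text = policy_text.lower()
            return [name for name, cues in CRITERIA.items()
                    if not any(cue in text for cue in cues)]

        policy = ("We use your data to provide the service. "
                  "You have the right to access your data at any time.")
        print(unsatisfied(policy))  # -> ['controller identity']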

    Advanced traffic video analytics for robust traffic accident detection

    Get PDF
    Automatic traffic accident detection is an important task in traffic video analysis due to its key applications in developing intelligent transportation systems. Reducing the time delay between the occurrence of an accident and the dispatch of the first responders to the scene may help lower the mortality rate and save lives. Since 1980, many approaches have been presented for the automatic detection of incidents in traffic videos. In this dissertation, some challenging problems for accident detection in traffic videos are discussed, and a new framework is presented to automatically detect single-vehicle and intersection traffic accidents in real time. First, a new foreground detection method is applied to detect the moving vehicles and subtract the ever-changing background in traffic video frames captured by static or non-stationary cameras. For traffic videos captured during daytime, cast shadows degrade the performance of foreground detection and road segmentation; a novel cast shadow detection method is therefore presented to detect and remove the shadows cast by moving vehicles, as well as those cast by static objects on the road. Second, a new method is presented to detect the region of interest (ROI), which uses the locations of the moving vehicles and initial road samples and extracts discriminating features to segment the road region. After detecting the ROI, the moving direction of the traffic is estimated based on the rationale that crashed vehicles often make rapid changes of direction. Lastly, single-vehicle traffic accidents and trajectory conflicts are detected using a first-order logic decision-making system. Experimental results on publicly available videos and a dataset provided by the New Jersey Department of Transportation (NJDOT) demonstrate the feasibility of the proposed methods. Additionally, the main challenges and future directions are discussed regarding (i) improving the performance of foreground segmentation, (ii) reducing the computational complexity, and (iii) detecting other types of traffic accidents.
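
    The first stage above is a foreground/background separation problem. As a hedged starting point, the Python sketch below uses OpenCV's stock MOG2 subtractor, with its built-in shadow marking, rather than the dissertation's own foreground and shadow methods; "traffic.mp4" is a placeholder path.

        # Baseline foreground detection with OpenCV's MOG2 subtractor.
        # Stock components only; not the dissertation's methods.
        import cv2

        cap = cv2.VideoCapture("traffic.mp4")  # placeholder video path
        # detectShadows=True marks shadow pixels with value 127 so they
        # can be thresholded away, echoing the cast-shadow issue above.
        mog2 = cv2.createBackgroundSubtractorMOG2(history=500,
                                                  varThreshold=16,
                                                  detectShadows=True)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = mog2.apply(frame)
            # Keep confident foreground only; drop shadow pixels (127).
            _, fg = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
            cv2.imshow("foreground", fg)
            if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
                break
        cap.release()
        cv2.destroyAllWindows()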