
    Modeling an ontology on accessible evacuation routes for emergencies

    Providing alert communication in emergency situations is vital to reduce the number of victims. However, this is a challenging goal for researchers and professionals due to the diverse pool of prospective users, e.g., people with disabilities as well as other vulnerable groups. Moreover, in the event of an emergency, many people could become vulnerable because of exceptional circumstances such as stress, an unknown environment or even temporary visual impairment (e.g., smoke from a fire). Within this scope, a crucial activity is notifying affected people about safe places and available evacuation routes. To address this need, we propose to extend an ontology, called SEMA4A (Simple EMergency Alert 4 [for] All), developed in previous work for managing knowledge about accessibility guidelines, emergency situations and communication technologies. In this paper, we introduce a semi-automatic technique for knowledge acquisition and modeling of accessible evacuation routes. We present a use case to show applications of the ontology and conclude with an evaluation involving several experts in evacuation procedures. © 2014 Elsevier Ltd. All rights reserved.
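    As an illustration of the kind of knowledge such an extended ontology captures, the sketch below models an accessible evacuation route in RDF using Python's rdflib. The namespace, class and property names are invented stand-ins, not the actual SEMA4A vocabulary.

```python
# Minimal sketch of modeling an accessible evacuation route with rdflib.
# All term names below are hypothetical illustrations, not SEMA4A terms.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

SEMA = Namespace("http://example.org/sema4a#")
g = Graph()
g.bind("sema", SEMA)

# Declare an EvacuationRoute class and an accessibility property.
g.add((SEMA.EvacuationRoute, RDF.type, RDFS.Class))
g.add((SEMA.accessibleFor, RDF.type, RDF.Property))
g.add((SEMA.accessibleFor, RDFS.domain, SEMA.EvacuationRoute))

# Instantiate a route marked as accessible for wheelchair users.
g.add((SEMA.routeA, RDF.type, SEMA.EvacuationRoute))
g.add((SEMA.routeA, SEMA.accessibleFor, SEMA.WheelchairUser))
g.add((SEMA.routeA, RDFS.label, Literal("Route A: ground-floor exit, no stairs")))

print(g.serialize(format="turtle"))
```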

    Bluetooth familiarity: methods of calculation, applications and limitations

    We present an approach for utilising a mobile device's Bluetooth sensor to automatically identify social interactions and relationships between individuals in the real world. We show that a high degree of accuracy is achievable in automatically identifying the mobile devices of familiar individuals. This has implications for mobile device security, social networking and context-aware information access on a mobile device.
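    The abstract does not reproduce the calculation itself; the sketch below shows one plausible reading, scoring each observed device by the fraction of Bluetooth scan windows in which it co-occurs with the user's device and flagging frequently co-present devices as familiar. The threshold value is an arbitrary illustration.

```python
# One plausible familiarity calculation (not the paper's exact formula):
# devices seen in more than 60% of scan windows are deemed familiar.
from collections import defaultdict

def familiar_devices(scans, threshold=0.6):
    """scans: list of sets, each the device addresses seen in one scan window."""
    counts = defaultdict(int)
    for window in scans:
        for addr in window:
            counts[addr] += 1
    total = len(scans)
    return {addr for addr, n in counts.items() if n / total > threshold}

scans = [{"aa:01", "bb:02"}, {"aa:01"}, {"aa:01", "cc:03"}, {"bb:02"}]
print(familiar_devices(scans))  # {'aa:01'} -- seen in 3 of 4 windows
```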

    A Multi-view Context-aware Approach to Android Malware Detection and Malicious Code Localization

    Existing Android malware detection approaches use a variety of features such as security-sensitive APIs, system calls, control-flow structures and information flows in conjunction with Machine Learning classifiers to achieve accurate detection. Each of these feature sets provides a unique semantic perspective (or view) of apps' behaviours with inherent strengths and limitations. That is, some views are better suited to detecting certain attacks but may not be suitable for characterising several other attacks. Most of the existing malware detection approaches use only one (or a selected few) of the aforementioned feature sets, which prevents them from detecting a vast majority of attacks. Addressing this limitation, we propose MKLDroid, a unified framework that systematically integrates multiple views of apps for performing comprehensive malware detection and malicious code localisation. The rationale is that, while a malware app can disguise itself in some views, disguising in every view while maintaining malicious intent will be much harder. MKLDroid uses a graph kernel to capture structural and contextual information from apps' dependency graphs and identify malice code patterns in each view. Subsequently, it employs Multiple Kernel Learning (MKL) to find a weighted combination of the views which yields the best detection accuracy. Besides multi-view learning, MKLDroid's unique and salient trait is its ability to locate fine-grained malice code portions in dependency graphs (e.g., methods/classes). Through our large-scale experiments on several datasets (incl. wild apps), we demonstrate that MKLDroid consistently outperforms three state-of-the-art techniques in terms of accuracy while maintaining comparable efficiency. In our malicious code localisation experiments on a dataset of repackaged malware, MKLDroid was able to identify all the malice classes with 94% average recall.
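    The MKL step can be pictured as learning a weighted sum of per-view Gram matrices and training a kernel classifier on the result. The sketch below fixes the weights by hand and uses random stand-in matrices purely for shape; MKLDroid learns the weights and builds its kernels from apps' dependency graphs.

```python
# Hedged sketch of multiple kernel learning as a weighted kernel combination.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 20
labels = np.array([0, 1] * (n // 2))  # toy benign/malware labels

def random_psd(n):
    """Random symmetric positive semi-definite matrix, standing in for a
    per-view graph-kernel Gram matrix (e.g. API, permission, info-flow view)."""
    a = rng.normal(size=(n, n))
    return a @ a.T

views = [random_psd(n) for _ in range(3)]
weights = [0.5, 0.3, 0.2]  # in MKL these weights would be learned, not fixed

K = sum(w * k for w, k in zip(weights, views))  # combined kernel matrix
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K[:5]))  # predictions for the first five apps
```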

    RFID Localisation For Internet Of Things Smart Homes: A Survey

    The Internet of Things (IoT) enables numerous business opportunities in fields as diverse as e-health, smart cities and smart homes, among many others. The IoT incorporates multiple long-range, short-range and personal-area wireless networks and technologies into the designs of IoT applications. Localisation in indoor positioning systems plays an important role in the IoT. Location-based IoT applications range from tracking objects and people in real time to asset management, agriculture, assisted monitoring technologies for healthcare and smart homes, to name a few. Radio-frequency-based systems for indoor positioning, such as Radio Frequency Identification (RFID), are a key enabling technology for the IoT due to their cost-effectiveness, high readability rates, automatic identification and, importantly, their energy efficiency. This paper reviews the state-of-the-art RFID technologies in IoT smart home applications. It presents several comparable studies of RFID-based projects in smart homes and discusses the applications, techniques, algorithms and challenges of adopting RFID technologies in IoT smart home systems. Comment: 18 pages, 2 figures, 3 tables.
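    Among the localisation techniques such surveys cover, k-nearest-neighbour fingerprinting over received signal strength (RSSI) is one of the simplest; the sketch below illustrates it with made-up readings from three reference tags.

```python
# Illustrative RSSI fingerprinting for indoor positioning; all numbers are
# fabricated. Offline phase: one fingerprint row per reference location.
import numpy as np

fingerprints = np.array([[-40, -70, -65],   # kitchen
                         [-72, -45, -60],   # bedroom
                         [-68, -66, -42]])  # living room
locations = ["kitchen", "bedroom", "living room"]

def locate(reading, k=1):
    # Online phase: return the location(s) with the closest fingerprint(s).
    dists = np.linalg.norm(fingerprints - reading, axis=1)
    nearest = np.argsort(dists)[:k]
    return [locations[i] for i in nearest]

print(locate(np.array([-42, -69, -63])))  # ['kitchen']
```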

    A Survey of Location Prediction on Twitter

    Locations, e.g., countries, states, cities and points of interest, are central to news, emergency events and people's daily lives. Automatic identification of locations associated with or mentioned in documents has been explored for decades. As one of the most popular online social network platforms, Twitter has attracted a large number of users who send millions of tweets on a daily basis. Due to the worldwide coverage of its users and the real-time freshness of tweets, location prediction on Twitter has gained significant attention in recent years. Research efforts have been devoted to dealing with the new challenges and opportunities brought by the noisy, short and context-rich nature of tweets. In this survey, we aim to offer an overall picture of location prediction on Twitter. Specifically, we concentrate on the prediction of user home locations, tweet locations and mentioned locations. We first define the three tasks and review the evaluation metrics. By summarizing the Twitter network, tweet content and tweet context as potential inputs, we then structurally highlight how the problems depend on these inputs. Each dependency is illustrated by a comprehensive review of the corresponding strategies adopted in state-of-the-art approaches. In addition, we briefly review two related problems, i.e., semantic location prediction and point-of-interest recommendation. Finally, we list future research directions. Comment: Accepted to TKDE. 30 pages, 1 figure.
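    As a toy illustration of the tweet-content strategies the survey reviews, the sketch below classifies a user's home city from the words in their tweets. The training tweets are fabricated and far smaller than any realistic dataset.

```python
# Toy content-based home-location prediction: city label from tweet text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["stuck on the subway to brooklyn again",
          "bagels in manhattan before work",
          "surfing at bondi this morning",
          "harbour bridge views never get old"]
cities = ["New York", "New York", "Sydney", "Sydney"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, cities)
print(model.predict(["bondi beach sunrise"]))  # likely ['Sydney']
```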

    Technology Integration around the Geographic Information: A State of the Art

    One of the elements that has popularized and facilitated the use of geographic information in a variety of computational applications is the Web map; this has opened new research challenges on subjects ranging from locating places and people, to the study of social behavior, to the analysis of the hidden structure of the terms used in a natural language query for locating a place. However, the technological use of geographic information is not new; rather, it has been part of a broader process of development and technological integration. This paper presents a state-of-the-art review of the application of geographic information from different approaches: its use in location-based services, collaborative user participation, context awareness, its use in the Semantic Web and the challenges of its use in natural language queries. Finally, a prototype that integrates most of these areas is presented.

    Context aware advertising

    IP Television (IPTV) has created a new arena for digital advertising that has not yet been explored to its full potential. IPTV allows users to retrieve on-demand and recommended content; however, very little research has addressed advertising in IPTV systems. The diversity of the field has produced mature efforts in the related areas of content recommendation and mobile advertising, and the introduction of IPTV and smart devices has made it possible to gather context information that was not previously a subject of study. This research studies the different contextual parameters, how to enrich the advertising context to tailor better ads for users, devising a recommendation engine that utilizes the new context, building a prototype to prove the viability of the system, and evaluating it on different quality-of-service and quality-of-experience measures.

    To tackle this problem, the state of the art in context-aware advertising, as well as the related field of context-aware multimedia, was reviewed with the aim of identifying the contextual parameters most likely to yield higher precision when recommending advertisements to users. A prototype application was then developed to validate the feasibility and viability of the approach. The prototype gathers contextual information on the number of viewers and their ages, genders, viewing angles and emotions. The gathered context is dispatched to a web service, which generates advertisement recommendations and sends them back to the user. A scheduler identifies the most suitable time to push advertisements to users based on their attention span.

    To support these contributions, a corpus of 421 ads was gathered and processed for streaming, and the advertisements were displayed in practice during the holy month of Ramadan, 2016. In a data-gathering application, sample users were presented with 10 random ads and asked to rate and evaluate them according to predetermined criteria. The gathered data was used to train the recommendation engine and compute the latent context-item preferences; it also served to measure the performance of a system that sends advertisements to users at random, which is used as a benchmark for our results.

    For the recommendation engine itself, several implementation options were considered concerning both the vector representation of an advertisement and the metric used to measure the similarity between two advertisement vectors. The goal is a representation of advertisements that circumvents the cold-start problem, together with the best similarity measure to use with the different vectorization techniques. A set of experiments was designed and executed to identify the right vectorization methodology and similarity measure for this problem domain.

    To evaluate the overall performance of the system, further experiments covered quality of service, quality of experience and quality of context. Our results show that our recommendation engine significantly outperforms the other mechanisms for pushing ads to users employed in existing systems: random ad generation and targeted ad generation, where targeted ads rely on the viewer's demographic information while disregarding his/her historical consumption. Our system achieved a precision of 69.70%, meaning that roughly 7 out of 10 recommended ads were liked and viewed to the end by the viewer; randomly generated ads yielded 41.11% precision (only about 4 out of 10 ads liked), and targeted ads 51.39%. These results show that a significant improvement can be achieved by employing context within a recommendation engine. When introducing emotion context, results improved significantly when the user's emotion was happiness but degraded when it was sadness; across all emotions, the overall results did not show a significant improvement. It is worth noting, though, that ads recommended based on detected emotions always proved relevant to the user's current mood.
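    The similarity step described above can be sketched as follows: represent each advertisement as a vector (here TF-IDF over a short description, one of several plausible vectorizations) and rank catalogue ads by cosine similarity to an ad the viewer liked. The ad texts are invented placeholders, not items from the 421-ad corpus.

```python
# Hedged sketch of ad vectorization plus cosine-similarity ranking.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = ["family car with top safety rating",
             "fast sports car, leather seats",
             "budget smartphone with long battery life",
             "flagship smartphone, great camera"]

vectorizer = TfidfVectorizer()
ad_vectors = vectorizer.fit_transform(catalogue)

# Rank catalogue ads by similarity to an ad the viewer liked.
liked = vectorizer.transform(["smartphone with a great camera"])
scores = cosine_similarity(liked, ad_vectors).ravel()
ranked = scores.argsort()[::-1]
print([catalogue[i] for i in ranked[:2]])  # the two smartphone ads first
```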

    Information Extraction in Illicit Domains

    Extracting useful entities and attribute values from illicit domains such as human trafficking is a challenging problem with the potential for widespread social impact. Such domains employ atypical language models, have 'long tails' and suffer from the problem of concept drift. In this paper, we propose a lightweight, feature-agnostic Information Extraction (IE) paradigm specifically designed for such domains. Our approach uses raw, unlabeled text from an initial corpus, and a few (12-120) seed annotations per domain-specific attribute, to learn robust IE models for unobserved pages and websites. Empirically, we demonstrate that our approach can outperform feature-centric Conditional Random Field baselines by over 18% F-measure on five annotated sets of real-world human trafficking datasets in both low-supervision and high-supervision settings. We also show that our approach is demonstrably robust to concept drift, and can be efficiently bootstrapped even in a serial computing environment. Comment: 10 pages, ACM WWW 2017.
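    A much-simplified sketch in the spirit of the approach: featurize each candidate token by its surrounding context words and train a classifier from a handful of seed annotations. The paper derives its features from unsupervised word representations learned on the raw corpus; a plain bag of context words stands in here for brevity, and the seed sentences are invented.

```python
# Context-based attribute extraction from a few seed annotations (sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def context(tokens, i, width=2):
    """Bag of words around position i, excluding the candidate token itself."""
    return " ".join(tokens[max(0, i - width):i] + tokens[i + 1:i + 1 + width])

# Hypothetical seed annotations: (sentence tokens, candidate index, is-a-city).
seeds = [("new in town near downtown chicago area".split(), 5, 1),
         ("call me anytime after midnight tonight".split(), 4, 0),
         ("just arrived in dallas this week".split(), 3, 1),
         ("ask for rose when you call".split(), 2, 0)]

vec = CountVectorizer()
X = vec.fit_transform([context(t, i) for t, i, _ in seeds])
y = [label for _, _, label in seeds]
clf = LogisticRegression().fit(X, y)

# Classify a candidate token from an unseen sentence by its context alone.
test = "just moved in chicago area yesterday".split()
print(clf.predict(vec.transform([context(test, 3)])))  # expect [1]: city
```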