
    Reasoning about ideal interruptible moments: A soft computing implementation of an interruption classifier in free-form task environments

    Current trends in society and technology make interruption a central human-computer interaction problem. In this work, a novel soft-computing Interruption Classifier was designed, developed and evaluated; it draws on a user model and real-time observations of the user's actions during computer-based tasks to determine ideal times to interact with the user. This research is timely, as the number of interruptions people experience daily has grown considerably over the last decade, so systems are needed that manage interruptions by reasoning about the ideal timing of interactions. This research shows that: (1) the classifier incorporates a user model in its reasoning process, whereas most prior work in this area has relied on task-based contextual information alone; (2) the classifier performed at 96% accuracy in experimental test scenarios and significantly outperformed comparable systems; (3) the classifier is implemented with an advanced machine-learning technique, an Adaptive Neuro-Fuzzy Inference System, which is unique since other systems use Bayesian networks or other machine-learning tools; (4) the classifier requires no direct user involvement, whereas in other systems users must annotate interruptions while reviewing video sessions so the system can learn; and (5) the approach is a promising direction for reasoning about interruptions in free-form tasks, which remains a largely unsolved problem.
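    The abstract gives no implementation detail, but the core of an Adaptive Neuro-Fuzzy Inference System is a Takagi-Sugeno fuzzy rule base whose membership and consequent parameters are tuned by learning. The following minimal sketch of the inference (forward) pass uses hypothetical inputs such as task engagement and idle time; the feature names, membership parameters, and rule set are illustrative assumptions, not the classifier described above.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership value of x for a fuzzy set centered at c with width s."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def interruptibility(engagement, idle_minutes):
    """Toy first-order Sugeno inference: two inputs -> interruptibility score in [0, 1].

    All membership centers/widths and consequent coefficients below are
    made-up illustrative values, not learned ANFIS parameters.
    """
    # Layer 1: fuzzify each input (low/high engagement, short/long idle time).
    eng_low, eng_high = gauss(engagement, 0.2, 0.25), gauss(engagement, 0.8, 0.25)
    idle_short, idle_long = gauss(idle_minutes, 1.0, 2.0), gauss(idle_minutes, 10.0, 4.0)

    # Layer 2: rule firing strengths (product T-norm).
    w = np.array([
        eng_low * idle_long,    # rule 1: low engagement, long idle  -> interruptible
        eng_low * idle_short,   # rule 2: low engagement, short idle -> somewhat
        eng_high * idle_long,   # rule 3: high engagement, long idle -> somewhat
        eng_high * idle_short,  # rule 4: high engagement, short idle -> do not interrupt
    ])

    # Layers 3-5: first-order consequents, normalized weighted average.
    f = np.array([
        0.1 * engagement + 0.05 * idle_minutes + 0.8,
        0.1 * engagement + 0.05 * idle_minutes + 0.5,
        -0.2 * engagement + 0.03 * idle_minutes + 0.5,
        -0.3 * engagement + 0.01 * idle_minutes + 0.1,
    ])
    score = float(np.dot(w, f) / w.sum())
    return min(max(score, 0.0), 1.0)

print(interruptibility(engagement=0.9, idle_minutes=0.5))  # low score: busy user
print(interruptibility(engagement=0.1, idle_minutes=12))   # high score: good moment
```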

    Anomaly-based botnet detection for 10 Gb/s networks

    Current network data rates have made it increasingly difficult for cyber-security specialists to protect the information stored on private systems. Greater throughput not only allows for higher productivity, but also creates a "larger" security hole that may allow numerous malicious applications (e.g. bots) to enter a private network. Software-based intrusion detection/prevention systems are not fast enough to be fully effective on the massive traffic volumes of 1 Gb/s and 10 Gb/s networks. Consequently, businesses accept more risk and are forced to make a conscious trade-off between threat and performance. A solution that can handle a much broader view of large-scale, high-speed systems will allow us to increase maximum throughput and network productivity. This paper describes a novel method of solving this problem by joining a pre-existing signature-based intrusion prevention system with an anomaly-based botnet detection algorithm in a hybrid hardware/software implementation. Our contributions include the addition of an anomaly detection engine to a pre-existing signature detection engine in hardware. This hybrid system is capable of processing full-duplex 10 Gb/s traffic in real time with no packet loss. The behavior-based algorithm and user interface are customizable. This research has also led to improvements to the vendor-supplied signal and programming interface specifications, which we have made readily available.
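    The paper's anomaly engine runs in hardware and its behavior-based algorithm is customizable, so no single listing can represent it; the sketch below only illustrates the general anomaly-detection idea, flagging a host whose flow rate jumps far above its learned baseline. The EWMA smoothing factor and threshold are invented values.

```python
from collections import defaultdict

class EwmaAnomalyDetector:
    """Per-host baseline of flows-per-interval via an exponentially weighted
    moving average; intervals far above baseline are flagged as anomalous.
    The alpha and threshold values are illustrative, not tuned."""

    def __init__(self, alpha=0.1, threshold=4.0):
        self.alpha = alpha
        self.threshold = threshold
        self.mean = defaultdict(float)
        self.var = defaultdict(lambda: 1.0)

    def observe(self, host, flow_count):
        m, v = self.mean[host], self.var[host]
        # Score first: anomalous if the deviation exceeds `threshold` sigmas.
        anomalous = m > 0 and (flow_count - m) ** 2 > self.threshold ** 2 * v
        # Then update the running mean/variance (EWMA).
        d = flow_count - m
        self.mean[host] = m + self.alpha * d
        self.var[host] = (1 - self.alpha) * (v + self.alpha * d * d)
        return anomalous

det = EwmaAnomalyDetector()
for t, count in enumerate([12, 9, 11, 10, 240, 10]):  # flows per interval, one host
    if det.observe("10.0.0.5", count):
        print(f"interval {t}: suspicious burst of {count} flows")
```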

    Securing Arm Platform: From Software-Based To Hardware-Based Approaches

    With the rapid proliferation of the ARM architecture on smartphones and Internet of Things (IoT) devices, the security of the ARM platform has become a pressing problem. In recent years, the number of malware samples identified on ARM platforms, especially on Android, has grown explosively, and this malware increasingly uses evasion techniques to escape detection by existing analysis systems. In our research, we first present a software-based mechanism that increases the accuracy of existing static analysis tools through reassembleable bytecode extraction. Our solution collects bytecode and data at runtime, then reassembles them offline so that static analysis tools can reveal the hidden behavior of an application. Further, we implement a hardware-based transparent malware analysis framework for general ARM platforms to defend against traditional evasion techniques. Our framework leverages hardware debugging features and the Trusted Execution Environment (TEE) to achieve transparent tracing and debugging with reasonable overhead. To assess the security of the hardware debugging features involved, we perform a comprehensive study of the ARM debugging features and summarize their security implications. Based on these implications, we design a novel attack scenario that achieves privilege escalation by misusing the debugging features in the inter-processor debugging model. The attack raises concerns about the security of TEEs and cyber-physical systems (CPS). For a better understanding of TEE security, we investigate the security of various TEEs on different architectures and platforms and state the outstanding security challenges; we also present a study of deploying TEEs on edge platforms. For the security of CPS, we analyze real-world traffic-signal infrastructure and summarize its security problems.
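    As a rough illustration of the first contribution, "reassembleable bytecode extraction" can be pictured as stitching runtime-collected code dumps back into a contiguous image that static tools can analyze offline. The sketch below is a conceptual toy, not the dissertation's tooling; the chunk format and padding policy are assumptions.

```python
def reassemble(chunks):
    """Merge runtime-collected (address, bytes) dumps of an app's bytecode into
    one contiguous image for offline static analysis, padding unseen gaps.

    `chunks` is a list of (start_address, bytes) pairs as a hypothetical
    runtime collector might emit them; overlapping dumps keep the later one.
    """
    chunks = sorted(chunks, key=lambda c: c[0])
    base = chunks[0][0]
    end = max(addr + len(data) for addr, data in chunks)
    image = bytearray(end - base)                  # zero-pad regions never observed
    for addr, data in chunks:
        image[addr - base: addr - base + len(data)] = data
    return base, bytes(image)

base, image = reassemble([(0x4000, b"\x01\x02"), (0x4006, b"\xff\xee")])
print(hex(base), image.hex())  # 0x4000 01020000 0000ffee
```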

    FROM SMALL-WORLDS TO BIG DATA: TEMPORAL AND MULTIDIMENSIONAL ASPECTS OF HUMAN NETWORKS

    In this thesis we address the close interplay among mobility, offline relationships and online interactions, and the related human networks, at different dimensional scales and temporal granularities. Adopting a data-driven approach throughout, we move from small datasets of physical interactions mediated by human-carried devices, describing small social realities, to large-scale graphs that evolve over time, and from human mobility trajectories to face-to-face contacts occurring in different geographical contexts. We explore in depth the relation between human mobility and the social structure induced by the overlapping of different people's trajectories, using GPS traces collected in urban and metropolitan areas. We define the notions of geo-location and geo-community, which describe both the spatial and the social aspects of human behavior within a single framework. Through the concept of geo-community we model human mobility as a bipartite graph, and from this graph representation we can generate a social structure that is plausible with respect to the real interactions. More generally, the approach has the merit of casting mobility in a graph-theoretic framework, making the study of the mobility/sociality interplay more tractable and intuitive. Our modeling approach also yields a mobility model, Geo-CoMM, which builds on the idea of geo-community. The model is a particular instance of a general framework we provide, in which the social structure behind preferred-location mobility models emerges. We validate Geo-CoMM on spatial, temporal, pairwise-connectivity and social features, showing that it reproduces the main statistical properties observed in real traces. Concerning the offline/online interplay, we provide a complete overview of the close connection between online and offline sociality. To this end we gather data about the offline contacts and Facebook interactions of a group of students, and we propose a multidimensional network analysis that lets us understand in depth how the characteristics of users in the distinct networks influence each other. The results show that offline and Facebook friends differ, confirming and indeed strengthening the general intuition that online social networks have shifted away from their original goal of mirroring the offline sociality of individuals. As for roles and social importance, it becomes apparent that social features such as user popularity or community structure do not transfer across social dimensions, as confirmed by our correlation analysis of the network layers and by the comparison among the communities. In the last chapters we analyze the evolution of an online social network from a physical-time perspective, i.e., treating the graph evolution as a time series rather than as a function of the network's basic properties (number of nodes or links). Taking a user-centric view of physical time, we investigate the bursty nature of the link creation process in online social networks: we show not only that it is a highly inhomogeneous process, but also identify patterns of burstiness common to all nodes. We then focus on the dynamic formation of two fundamental network building blocks, dyads and triads, and propose two new metrics for temporal analysis on physical time: link creation delay and triangle closure delay.
    These two metrics enable us to study the dynamic creation of dyads and triads, and to highlight network behavior that would otherwise remain hidden. In our analysis we find that link delays are generally very low in absolute time and largely independent of the dates people join the network. To highlight the social nature of this metric, we introduce the term "peerness" to quantify how much the lifetimes of linked users overlap. For triangle closure delay, we first introduce an algorithm for extracting temporal triangles, which enables us to monitor the triangle formation process and to detect sudden changes in triangle formation behavior, possibly related to external events. In particular, we show that the introduction of new service functionalities had a disruptive impact on the triangle creation process in the network.
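    Both metrics admit a direct computation. Below is a minimal sketch under one plausible reading, where link creation delay is the time from the later of two users' join dates to the creation of their edge, and triangle closure delay is the time from the second side of a wedge to the edge that closes it; the thesis' exact definitions may differ, and the toy timestamps are invented.

```python
from itertools import combinations

joins = {"a": 0, "b": 5, "c": 8}                        # user -> join time
edges = [("a", "b", 9), ("a", "c", 10), ("b", "c", 30)]  # (u, v, creation time)

# Link creation delay: edge time minus the later join time of its endpoints.
link_delay = {(u, v): t - max(joins[u], joins[v]) for u, v, t in edges}
print(link_delay)  # {('a','b'): 4, ('a','c'): 2, ('b','c'): 22}

# Triangle closure delay: for each triangle, time of the last edge minus
# the time of the second edge (i.e. when the open wedge appeared).
etime = {frozenset((u, v)): t for u, v, t in edges}
for x, y, z in combinations(joins, 3):
    sides = [frozenset(p) for p in ((x, y), (y, z), (x, z))]
    if all(s in etime for s in sides):
        times = sorted(etime[s] for s in sides)
        print((x, y, z), "closure delay:", times[2] - times[1])  # 30 - 10 = 20
```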

    Wireless temperature sensing in hostile environments using a microcontroller powered by optical fiber

    One of the greatest risks in industry is the failure of the machines and devices that make it up. A fault, whatever its cause, can have fatal consequences, not only for the company but also for its entire surroundings. These machines work with large amounts of energy, so controlling and monitoring them reduces risk and makes working with them safer. Transformers are one example of such machines: they operate electrical circuits that exchange large amounts of power for electrical operation and distribution. Several parameters can be measured to monitor the state of these machines, but one of the main ones is temperature, and that is the focus of this project. Monitoring the temperature of a transformer amounts to monitoring its interior, verifying that it operates correctly and remains within its useful life, since ageing and wear can lead to serious consequences. Temperature is measured with an instrumentation sensor whose main design requirement is to withstand the hostile environment that surrounds transformers. For this reason a fiber-optic sensor is used: it is immune to electromagnetic and radio-frequency interference while remaining low-cost. The sensor's output is read by a microcontroller connected at the sensor's signal output; it acquires the data and passes it to a communication module, which transmits the results to the control unit. The communication uses a wireless protocol: ZigBee provides robustness and fast start-up, as well as a simple, straightforward design. Finally, the computer interface is designed in LabVIEW. It acts as a control point that can activate the sensor network and monitor it almost immediately: the interface acquires the data emitted by the sensor and analyzes it, giving the user the corresponding temperature information in near real time, so that the temperature at the sensor, and hence in the transformer, is known almost instantly. If a system fully immune to electromagnetic interference is required, the sensor can be powered via Power over Fiber (PoF) technology; using a PoF system already designed and implemented at the university, its parameters are adapted to the requirements of this system and its results are studied both theoretically and experimentally. The project consists of the design and implementation of all the components of the temperature sensor: the optical fiber and its conditioning circuits, the programming of the microcontroller, the establishment of the wireless communication, and the design of the interface. Once the whole system is implemented, a series of tests subjects the sensor to abrupt temperature variations to study its response. After verifying that the entire system works correctly, the voltage supply is replaced by the PoF technology, and the results and its possible future adoption in sensor development are assessed.
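    The firmware is not reproduced in the abstract, but the acquisition loop it describes (read the conditioned sensor voltage, convert it to a temperature, push the value to a UART-attached ZigBee radio for the LabVIEW control unit) is simple to sketch. The MicroPython-flavored code below assumes a board such as the Raspberry Pi Pico, a ZigBee module in transparent serial mode, and a made-up linear volts-to-degrees calibration; none of these details come from the project itself.

```python
# MicroPython sketch (e.g. Raspberry Pi Pico); the pin numbers and the linear
# calibration constants below are illustrative assumptions, not project values.
import time
from machine import ADC, UART, Pin

adc = ADC(Pin(26))                 # output of the sensor's conditioning circuit
zigbee = UART(0, baudrate=9600)    # ZigBee module on UART0, transparent mode

V_REF = 3.3
GAIN_C_PER_V = 50.0                # hypothetical transducer: 50 degC per volt
OFFSET_C = -10.0

while True:
    volts = adc.read_u16() * V_REF / 65535      # 16-bit reading -> volts
    temp_c = GAIN_C_PER_V * volts + OFFSET_C    # volts -> degrees Celsius
    zigbee.write("T={:.2f}\n".format(temp_c))   # LabVIEW side parses this line
    time.sleep(1)
```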

    Between security, law enforcement and harm reduction: drug policing at commercial music festivals in England

    In this thesis, I use an ethnographic methodology to explore the implementation of drug policing at commercial music festivals in England. I argue that festival drug policing is primarily concerned with the anticipation and mitigation of drug-related risk, and that festivals adopt an array of security, enforcement and harm-reduction approaches under the '3 Ps' (Prevent, Pursue and Protect) in pursuit of this. With a lens on the in-situ decision-making of policing, security and management actors on the ground, I illustrate how drug policies are negotiated between agencies in order to satisfy their sometimes competing risk perceptions and interests in their pursuit of drug security.

    Development of an ammonia portable low-cost air quality station

    Integrated master's thesis, Engenharia da Energia e do Ambiente, Universidade de Lisboa, Faculdade de Ciências, 2019. Deteriorating air quality is an increasingly significant problem for human health and the environment. Monitoring pollutants, whether indoors or outdoors, is therefore ever more necessary, both to raise public awareness and to foster the creation of effective mitigation measures. Reference air quality stations have a very low spatial density (one station per 1299 km²) because of their high cost. With this in mind, the present study set out to develop a low-cost air quality station and to test its validity and viability. The station, QAPT (an acronym for Qualidade de Ar Para Todos, "Air Quality for All"), monitors ammonia together with secondary variables (temperature, T, and relative humidity, RH) that are used to improve the performance of the ammonia sensor and hence of the system as a whole. QAPT was built as a proof of concept for a node that could, in the future, belong to an air quality monitoring network of much higher spatial density. Ammonia was chosen because its concentrations and emission sources need better characterization: unlike better-known pollutants such as ozone or particulate matter, it is monitored at very few reference stations, yet it has a high eutrophication and acidification potential and is an important precursor of secondary particulate matter. This dissertation studies the subsystems of the low-cost station in turn, namely the air sampling system, the electronics, the power supply, and the data visualization and processing chain, examining the behavior and limitations of each component and its impact on the system as a whole. The station's validity was supported by results consistent with the existing literature across performance studies in varied scenarios, with particular emphasis on monitoring campaigns in a beauty salon and in a poultry house. The salon campaign also showed that the professionals working there are frequently exposed to very high ammonia concentrations, a hazard to their health. Monitoring near heavily trafficked roads and in stables also exposed one of the most recurrent problems of low-cost sensors: their lack of selectivity. Several existing studies have focused on overcoming this problem, and their methods could be explored in future work. Economically, the developed station is very satisfactory, costing only 133 € in its current state; with future improvements, using a Raspberry Pi for cloud communication and additional memory, the prototype would come to about 165 €, which is still very satisfactory for an air quality station.
    The cloud connection was not deployed because it was found to jeopardize the station's portability through increased power consumption, cutting the prototype's autonomy from 28 hours of monitoring to at best 3 hours. This work also studied the impact of temperature and relative humidity on the low-cost ammonia sensor (MQ137), which led to a routine (a calibration code using T and RH) that minimizes the impact of both variables on the sensor, applied with proven success. The air sampling system developed minimizes the drawbacks of placing the sensors inside a case, allowing the system to perform satisfactorily both in static measurements and on the move. The electrical connections were designed to minimize the occupied space and the weight of the system as well as to ensure its stability, for which a PCB was made. Regarding the power supply, of the several alternatives studied, and considering their impacts on the system (thermal behavior, imposed voltages, capacity, among others), a solar power bank proved the most favorable option. Finally, integrating the collected data with a geographic information system proved to be of particular interest for portable low-cost air quality stations, easing the spatial visualization of the data and consequently increasing the awareness-raising potential of the developed system.
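    The calibration routine itself is not listed, but T/RH compensation for MQ-series sensors typically divides the measured Rs/R0 ratio by a correction factor fitted to the datasheet's temperature/humidity curves before applying a power-law ppm conversion. The sketch below follows that pattern; every coefficient in it is an illustrative placeholder, not a value from this thesis.

```python
def mq137_ppm(v_out, temp_c, rh, v_cc=5.0, r_load=10_000, r0=25_000):
    """Estimate NH3 ppm from an MQ137 in a voltage divider, compensating for
    temperature and relative humidity. All fitted coefficients below are
    illustrative placeholders; real values must come from calibrating the
    sensor against the datasheet curves or a reference instrument.
    """
    rs = r_load * (v_cc - v_out) / v_out        # sensor resistance from the divider
    ratio = rs / r0

    # Hypothetical correction factor fitted to Rs/R0 vs T/RH curves
    # (quadratic in T, linear in RH, normalized to 20 degC / 55 % RH).
    corr = (1.0 - 0.003 * (temp_c - 20) + 0.0001 * (temp_c - 20) ** 2
            - 0.002 * (rh - 55))
    ratio /= corr

    # Hypothetical power-law curve: ratio = a * ppm**b  =>  ppm = (ratio/a)**(1/b)
    a, b = 0.6, -0.35
    return (ratio / a) ** (1 / b)

print(round(mq137_ppm(v_out=1.2, temp_c=30, rh=70), 2))
```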

    Application of Mathematical and Computational Models to Mitigate the Overutilization of Healthcare Systems

    The overutilization of the healthcare system has been a significant financial and political issue, placing burdens on the government, patients, providers and individual payers. In this dissertation, we study how mathematical and computational models can support healthcare decision-making and generate effective interventions against healthcare overcrowding. We focus on applying operations research and data mining methods to mitigate the overutilization of emergency department and inpatient services in four scenarios. First, we systematically review research articles that apply analytical queueing models to the study of the emergency department, with an additional focus on comparing simulation models with queueing models when they are applied to similar research questions. Second, we present an agent-based simulation model of epidemic and bioterrorism transmission, and develop a prediction scheme to differentiate the simulated transmission patterns during the initial stage of an event. Third, we develop a machine-learning framework for effectively selecting enrollees for case management from Medicaid claims data, and demonstrate the importance of enrolling currently infrequent users whose emergency visits might increase significantly in the future. Lastly, we study the role of temporal features in predicting future health outcomes for diabetes patients, and identify the aggregation levels at which those features are most informative.
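    As a concrete instance of the analytical queueing models the first study reviews, the standard M/M/c (Erlang C) formulas yield an emergency department's probability of waiting and mean queue wait from its arrival rate, service rate, and number of servers. The staffing numbers in the example are invented for illustration.

```python
from math import factorial

def mmc_wait(lam, mu, c):
    """M/M/c queue: return (P(wait), mean wait in queue) via the Erlang C formula.
    lam: arrival rate, mu: per-server service rate, c: servers; needs lam < c*mu."""
    a = lam / mu                       # offered load in Erlangs
    rho = a / c                        # server utilization, must be < 1
    if rho >= 1:
        raise ValueError("unstable queue: need lam < c * mu")
    erlang_c = (a**c / (factorial(c) * (1 - rho))) / (
        sum(a**k / factorial(k) for k in range(c))
        + a**c / (factorial(c) * (1 - rho))
    )
    wq = erlang_c / (c * mu - lam)     # mean time spent waiting in the queue
    return erlang_c, wq

# Illustrative ED: 8 patients/hour, each provider treats 1.5/hour, 6 providers.
p_wait, wq = mmc_wait(lam=8, mu=1.5, c=6)
print(f"P(wait) = {p_wait:.2f}, mean queue wait = {60 * wq:.1f} minutes")
```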

    From the Editor


    From the Editor
