
    xLED: Covert Data Exfiltration from Air-Gapped Networks via Router LEDs

    In this paper we show how attackers can covertly leak data (e.g., encryption keys, passwords, and files) from highly secure or air-gapped networks via the row of status LEDs found on networking equipment such as LAN switches and routers. Although it is known that some network equipment emanates optical signals correlated with the information being processed by the device (a 'side-channel'), intentionally controlling the status LEDs to carry arbitrary data (a 'covert channel') has never been studied before. Malicious code executed on the LAN switch or router gains full control of the status LEDs; sensitive data can then be encoded and modulated over the blinking of the LEDs, and the generated signals recorded by various types of remote cameras and optical sensors. We provide the technical background on the internal architecture of switches and routers (at both the hardware and software levels) that enables this type of attack. We also present amplitude- and frequency-based modulation and encoding schemes, along with a simple transmission protocol. We implement a prototype of the exfiltration malware and discuss its design and implementation. We evaluate this method with several routers and different types of LEDs, test various receivers (remote cameras, security cameras, smartphone cameras, and optical sensors), and discuss detection and prevention countermeasures. Our experiments show that sensitive data can be covertly leaked via the status LEDs of switches and routers at bit rates of 10 bit/s to more than 1 Kbit/s per LED.
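
    As a concrete illustration of the modulation described above, the following is a minimal sketch of on-off keying a single status LED, assuming a hypothetical led_set() primitive in place of the device-specific LED control (on real switches this would be a platform register or, on Linux-based firmware, an entry under /sys/class/leds/). It sketches only the encoding idea, not the paper's actual malware.

```python
# Minimal on-off keying (OOK) sketch: LED on = 1, LED off = 0.
# led_set() is a hypothetical stand-in for the device-specific
# LED control primitive.
import time

BIT_DURATION = 0.1  # 10 bit/s, the lower end of the reported range

def led_set(on: bool) -> None:
    """Placeholder for the platform-specific LED control call."""
    print("LED ON " if on else "LED OFF")

def transmit(data: bytes) -> None:
    # Preamble: alternating bits so the remote camera/sensor can lock on.
    for bit in (1, 0, 1, 0, 1, 0, 1, 0):
        led_set(bool(bit))
        time.sleep(BIT_DURATION)
    # Payload: each byte sent most-significant bit first.
    for byte in data:
        for i in range(7, -1, -1):
            led_set(bool((byte >> i) & 1))
            time.sleep(BIT_DURATION)
    led_set(False)  # leave the LED dark between frames

transmit(b"key")
```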

    Performance analysis and application development of hybrid WiMAX-WiFi IP video surveillance systems

    Traditional Closed Circuit Television (CCTV) analogue cameras installed in buildings and other areas of security interest necessitate the use of cable lines. However, analogue systems are limited by distance, and storing analogue data requires a huge amount of space or bandwidth. Wired systems are also prone to vandalism; they cannot be installed in hostile terrain, nor in heritage sites where cabling would distort the original design. Currently, there is a paradigm shift towards wireless solutions (WiMAX, Wi-Fi, 3G, 4G) to complement and in some cases replace wired systems. A wireless solution for a Fourth-Generation Surveillance System (4GSS) is proposed in this thesis: a hybrid WiMAX-WiFi video surveillance system. The performance of the hybrid WiMAX-WiFi system is analysed and compared with conventional WiMAX surveillance models. Video surveillance models and an algorithm that exploit the advantages of both WiMAX and Wi-Fi, for scenarios with fixed and mobile wireless cameras, have been proposed, simulated and compared with mathematical/analytical models. The hybrid WiMAX-WiFi video surveillance model has been extended to include a wireless mesh configuration on the Wi-Fi side, improving scalability and reliability. A performance analysis of the hybrid WiMAX-WiFi system with an appropriate mobility model has been carried out for the case of mobile cameras. A security software application for mobile smartphones that sends surveillance images to either local or remote servers has been developed; it has been tested, evaluated and deployed in low-bandwidth Wi-Fi network environments. WiMAX is a wireless metropolitan area network technology that provides broadband services to connected customers. The major modules and units of WiMAX include the Customer Premises Equipment (CPE), the Access Service Network (ASN), which consists of one or more Base Stations (BS), and the Connectivity Service Network (CSN); various interfaces exist between these units and modules. WiMAX is based on the IEEE 802.16 family of standards. Wi-Fi, on the other hand, is a wireless access technology operating at the local area network level, based on the IEEE 802.11 standards.
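
    The abstract above mentions a smartphone application that uploads surveillance images over low-bandwidth Wi-Fi. The sketch below shows one plausible shape for such a client in Python: pick a JPEG quality from the measured uplink capacity, then POST the image to a server. The endpoint URL, quality thresholds and function names are assumptions for illustration, not the thesis implementation.

```python
# Bandwidth-aware image upload sketch for a mobile surveillance client.
import urllib.request

SERVER_URL = "http://surveillance.example.local/upload"  # assumed endpoint

def pick_jpeg_quality(uplink_kbps: float) -> int:
    """Trade image quality against the measured uplink capacity
    (illustrative thresholds)."""
    if uplink_kbps < 100:
        return 30
    if uplink_kbps < 500:
        return 60
    return 85

def upload_image(jpeg_bytes: bytes) -> int:
    """POST one JPEG frame to the (local or remote) server."""
    req = urllib.request.Request(
        SERVER_URL,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

# Example: a constrained 250 kbit/s uplink maps to medium quality.
quality = pick_jpeg_quality(uplink_kbps=250)  # -> 60
```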

    The crowd as a cameraman : on-stage display of crowdsourced mobile video at large-scale events

    Recording videos with smartphones at large-scale events such as concerts and festivals is very common nowadays. These videos register the atmosphere of the event as experienced by the crowd and offer a perspective that is hard to capture with the professional cameras installed throughout the venue. In this article, we present a framework to collect videos from smartphones in the audience and blend them into a mosaic that can be readily mixed with professional camera footage and shown on displays during the event. Video upload is prioritized by matching the event director's requests against video metadata, while taking into account the available wireless network capacity. The proposed framework's main novelty is its scalability, supporting real-time transmission, processing and display of videos recorded by hundreds of simultaneous users in ultra-dense Wi-Fi environments, as well as its proven integration into commercial production environments. The framework has been extensively validated in a controlled lab setting with up to 1,000 clients, as well as in a field trial in which 1,183 videos were collected from 135 participants recruited from an audience of 8,050 people; 90% of those videos were uploaded within 6.8 minutes.
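
    To make the prioritization step concrete, here is a rough Python sketch of how a director's request might be matched against clip metadata under a network capacity budget. The metadata fields, weights and greedy admission policy are illustrative assumptions; the paper's actual scheme may differ.

```python
# Sketch: rank pending clips by match with the director's request,
# then admit uploads greedily within the estimated Wi-Fi capacity.
from dataclasses import dataclass

@dataclass
class Clip:
    client_id: str
    zone: str          # where in the venue it was shot (assumed field)
    timestamp: float   # seconds since the event started
    size_mb: float

def score(clip: Clip, wanted_zone: str, wanted_time: float) -> float:
    """Higher score = better match with the director's request."""
    zone_match = 1.0 if clip.zone == wanted_zone else 0.0
    # Decay with temporal distance from the requested moment.
    time_match = 1.0 / (1.0 + abs(clip.timestamp - wanted_time))
    return 0.7 * zone_match + 0.3 * time_match  # assumed weights

def schedule(clips: list[Clip], wanted_zone: str, wanted_time: float,
             capacity_mb: float) -> list[Clip]:
    """Greedily admit the best-matching clips until capacity is spent."""
    ranked = sorted(clips, key=lambda c: score(c, wanted_zone, wanted_time),
                    reverse=True)
    chosen, used = [], 0.0
    for clip in ranked:
        if used + clip.size_mb <= capacity_mb:
            chosen.append(clip)
            used += clip.size_mb
    return chosen
```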

    MonitorMe: an Android-based activity recognition system

    MSc in Computer and Telematics Engineering. Monitoring a given person can be important in various day-to-day scenarios. One form of monitoring is the recognition of activities as they are carried out. Various sensors with potential for activity recognition are now integrated into mobile devices, which makes them particularly interesting for this type of monitoring. A complementary form of monitoring is recording video of the subject's surrounding environment. However, given the large size of the videos, whether for transmission over wireless links or for storage on the device, it is necessary to compress and reduce the associated information. One way to achieve this is to adapt the frame rate of the captured images to the speed of the person being monitored. This dissertation proposes an online monitoring system, MonitorMe, which performs activity recognition and records video of a subject's surroundings. The system comprises an Android smartphone, kept in a shirt pocket, and a MARG (Magnetic, Angular Rate and Gravity) module placed in a trouser pocket. A smartphone application was developed that collects data from the sensors integrated in both devices to perform online recognition of six different activities (standing, sitting, lying, walking, running and falling). Recognition is achieved using an algorithm of low computational cost, designed with the processing power and battery life constraints of mobile phones in mind. In parallel with activity recognition, the smartphone camera captures images at a frame rate that varies with the user's speed, the latter estimated from the same sensor data processed for activity recognition. This demonstrates the possibility of reducing the required transmission bandwidth or on-device storage at low computational cost. The MonitorMe system was trained and then tested with data collected in two experiments involving 10 subjects, comprising 440 different events with a total duration of 45 minutes (2/3 used for training and 1/3 for testing). The overall results show a sensitivity above 93% and a specificity above 98% for activity recognition, and a mean relative error of 8.6% for speed estimation.
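
    A small sketch of the speed-adaptive capture policy described above: the estimated speed (derived from the same sensor stream used for activity recognition) is mapped to an interval between captured frames. The thresholds are illustrative assumptions, not the dissertation's trained parameters.

```python
# Map estimated user speed to a camera capture interval: faster
# movement -> shorter interval -> more frames, as in the speed-adaptive
# recording described above. Thresholds are assumed for illustration.

def frame_interval_s(speed_m_s: float) -> float:
    """Return seconds between captured frames for a given speed."""
    if speed_m_s < 0.2:      # roughly standing / sitting / lying
        return 10.0
    if speed_m_s < 1.5:      # roughly walking
        return 2.0
    return 0.5               # roughly running

for speed in (0.0, 1.0, 3.0):
    print(f"{speed:.1f} m/s -> capture every {frame_interval_s(speed)} s")
```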

    VANET Applications: Hot Use Cases

    The current challenges for car manufacturers are to make roads safe, to achieve free-flowing traffic with little congestion, and to reduce pollution through efficient fuel use. To reach these goals, many improvements are made in-car, but more and more approaches rely on connected cars with communication capabilities between cars, with infrastructure, or with IoT devices. Monitoring and coordinating vehicles then makes it possible to compute intelligent modes of transportation. Connected cars have introduced a new way of thinking about cars: not only as a means for a driver to get from A to B, but as smart cars, a user extension like the smartphone today. In this report, we introduce concepts and specific vocabulary in order to classify current innovations and ideas on the emerging topic of the smart car. We present a graphical categorization showing this evolution as a function of societal evolution. Different perspectives are adopted: a vehicle-centric view, a vehicle-network view, and a user-centric view, described through simple and complex use cases and illustrated by a list of emerging and current projects from academia and industry. We identified a gap in innovation between the user and his car: paradoxically, even though the two interact constantly, they are separated by different application uses. The future challenge is to interweave the user's social concerns with intelligent and efficient driving.