251 research outputs found

    Security of Electrical, Optical and Wireless On-Chip Interconnects: A Survey

    The advancement of manufacturing technologies has enabled the integration of more intellectual property (IP) cores on the same system-on-chip (SoC). Scalable, high-throughput on-chip communication architectures have become a vital component of today's SoCs. Diverse technologies such as electrical, wireless, optical, and hybrid are available for on-chip communication, with different architectures supporting them. Security of the on-chip communication is crucial, because exploiting any vulnerability in it would be a goldmine for an attacker. In this survey, we provide a comprehensive review of threat models, attacks, and countermeasures over diverse on-chip communication technologies as well as sophisticated architectures. (41 pages, 24 figures, 4 tables)

    Content distribution in vehicular networks with filtering mechanisms

    Master's in Electronics and Telecommunications Engineering. Connectivity has been one of people's great needs since the beginning of time: people desire to be connected to each other and to the rest of the world. This has not changed in modern times, especially in an era of new technologies where connecting with someone is only a few clicks away. From the point of view of telecommunications engineers, the rapid development of wireless communications has been especially striking. Due to this constant need for communication, VANETs (Vehicular Ad-Hoc Networks) are currently attracting significant attention, both because vehicular networks can make driving potentially safer and because they can provide passengers with Internet access. Vehicular networks have specific characteristics compared to other types of networks, such as a high number of vehicles or nodes, unpredictable routes, and constant loss of connectivity between nodes, which raises several challenges calling for further study. The solution adopted for intermittent connectivity is the use of DTNs (Delay-Tolerant Networks), whose architecture ensures the delivery of information even without knowledge of the complete path it must travel. This Master's dissertation focuses on the dissemination of non-urgent content through DTNs, ensuring that dissemination happens within the shortest possible time frame and with the minimum possible congestion in the network. Although information delivery in the network is already achieved within a satisfactory time frame, the strategies implemented so far impose considerable congestion (overhead) on the network. To overcome this effect, a dissemination strategy was developed using Bloom filters, a data structure capable of eliminating most unnecessary memory accesses by telling a node, with a certain probability, whether a specific packet exists among all the information its neighbours hold. This strategy was implemented and tested in the DTN emulator mOVERS, developed by Instituto de Telecomunicações in Aveiro (IT) and Veniam®. The emulator simulates a vehicular network based on information gathered from the vehicular network of the city of Porto. Analysis of the results showed that the proposed dissemination strategy, named FILTER, fulfilled its primary objective, namely reducing the vehicular network's overhead, at the cost of a small loss in information delivery rate. As future work, it is advisable to study more extensively methods related to the usefulness of information to a neighbour, in order to optimize that delivery rate.
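    The scheme described above rests on the probabilistic membership test a Bloom filter provides: a node can ask, without exchanging full packet lists, whether a neighbour probably holds a given packet. The minimal Python sketch below illustrates that test only; the filter size, hash count, and packet identifiers are illustrative assumptions, not the parameters used by FILTER or mOVERS.

        import hashlib

        class BloomFilter:
            # No false negatives; the false-positive rate is tuned by m (bits) and k (hashes).
            def __init__(self, m=1024, k=3):
                self.m, self.k = m, k
                self.bits = [False] * m

            def _indices(self, item):
                # Derive k bit positions from salted SHA-256 digests of the item.
                for i in range(self.k):
                    digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                    yield int(digest, 16) % self.m

            def add(self, item):
                for idx in self._indices(item):
                    self.bits[idx] = True

            def might_contain(self, item):
                # True may rarely be a false positive; False is always correct.
                return all(self.bits[idx] for idx in self._indices(item))

        # A node advertises its stored packets as a compact filter; a neighbour
        # consults the filter before requesting a transfer (hypothetical IDs).
        stored = BloomFilter()
        for packet_id in ("pkt-17", "pkt-42"):
            stored.add(packet_id)
        print(stored.might_contain("pkt-42"))   # True: definitely stored
        print(stored.might_contain("pkt-99"))   # almost certainly False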

    Elastic phone: towards detecting and mitigating computation and energy inefficiencies in mobile apps

    Mobile devices have become ubiquitous, and their ever-evolving capabilities are bringing them closer to personal computers. Nonetheless, due to their mobility and small form factor, they still present many hardware and software challenges. Their limited battery lifetime has led to the design of mobile networks that are inherently different from previous networks (e.g., Wi-Fi) and to more restrictive task scheduling. Additionally, mobile device ecosystems are more susceptible to hardware heterogeneity and to the conflicting interests of distributors, Internet service providers, manufacturers, developers, etc. The high number of stakeholders ultimately responsible for the performance of a device results in inconsistent behavior and makes it very challenging to build a solution that improves resource usage in most cases. The focus of this thesis is the study and development of techniques to detect and mitigate computation and energy inefficiencies in mobile apps. It follows a bottom-up approach, starting from the challenges of detecting inefficient execution scheduling by looking only at apps' implementations. It shows that scheduling APIs are largely misused and have a great impact on devices' wake-up frequency and on the efficiency of existing energy-saving techniques (e.g., batching scheduled executions). It then addresses challenges of app testing in the dynamic-analysis field, specifically how to scale mobile app testing with realistic user input and how to analyze closed-source apps' code at runtime, showing that introducing humans into the app-testing loop improves both the coverage of app code and the volume of network traffic generated. Finally, combining static and dynamic analysis, it focuses on identifying the resource-hungry sections of apps and on improving their execution via offloading, with special attention to non-intrusive offloading that is transparent to existing apps, and to in-network computation offloading and distribution. It shows that, even without a custom OS or app modifications, in-network offloading is still possible, greatly improving execution times and energy consumption while reducing both end-user latency and request drop rates. It concludes with a measurement study of real apps, showing that a good portion of the most popular apps' code can indeed be offloaded, and proposes future directions for the app-testing and computation-offloading fields.
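    The offloading trade-off in this abstract is commonly framed as an energy break-even test: ship the work to the network only when transmitting the input and idling costs less than computing locally. Below is a minimal sketch of such a test; the cycle counts, power draws, and the should_offload helper are hypothetical illustrations, not measurements or APIs from the thesis.

        def should_offload(cycles, data_bytes,
                           local_speed_hz, remote_speed_hz,
                           bandwidth_bps, p_compute_w, p_tx_w, p_idle_w):
            # Local cost: run the computation on the phone's own CPU.
            e_local = (cycles / local_speed_hz) * p_compute_w
            # Remote cost: transmit the input, then idle while the server computes.
            t_tx = (data_bytes * 8) / bandwidth_bps
            t_remote = cycles / remote_speed_hz
            e_offload = t_tx * p_tx_w + t_remote * p_idle_w
            return e_offload < e_local

        # Example: a 2e9-cycle task with 1 MB of input over a 10 Mbit/s link.
        print(should_offload(cycles=2e9, data_bytes=1_000_000,
                             local_speed_hz=1.5e9, remote_speed_hz=3e9,
                             bandwidth_bps=10e6,
                             p_compute_w=2.0, p_tx_w=1.2, p_idle_w=0.3))

    Under these assumed numbers offloading wins (about 1.2 J versus 2.7 J locally); on a slower link the transmission term dominates and the same task stays on the device.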

    Privacy in characterizing and recruiting patients for IoHT-aided digital clinical trials

    Nowadays there is a tremendous number of smart, connected devices that produce data. The so-called IoT is so pervasive that its devices (in particular the ones we carry throughout the day: wearables, smartphones...) often provide third parties with insights into our lives. People habitually exchange some of their private data in order to obtain services, discounts, and advantages. Sharing personal data is commonly accepted in contexts like social networks, but individuals suddenly become more than concerned when a third party is interested in accessing personal health data. Healthcare systems worldwide, however, have begun to take advantage of the data produced by eHealth solutions. It is clear that while technology has proved to be a great ally of modern medicine and can lead to notable benefits, these processes also pose serious threats to our privacy. The process of testing, validating, and bringing to market a new drug or medical treatment is called a clinical trial. These trials are deeply affected by technological advancements and greatly benefit from the use of eHealth solutions. Clinical research institutes are the entities in charge of leading the trials and need to access as much of the patients' health data as possible. However, at every phase of a clinical trial, the personal information of the participants should be preserved and kept private as long as possible. In this thesis, we introduce an architecture that protects the privacy of personal data during the first phases of digital clinical trials (namely the characterization phase and the recruiting phase), allowing potential participants to freely join trials without disclosing their personal health information in the absence of a proper reward and/or prior agreement. We illustrate the trusted environment, which is the most common approach in eHealth, and then dig into the untrusted environment, where privacy is more challenging to protect while maintaining the usability of the data. Our architecture keeps individuals in full control over the flow of their personal health data. Moreover, it allows clinical research institutes to characterize a population of potential users without direct access to their personal data. We validated our architecture with a proof of concept that includes all the involved entities, from the low-level hardware up to the end application. We designed and built hardware capable of sensing, processing, and transmitting personal health data in a privacy-preserving fashion that requires little to no maintenance.
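    The abstract does not name the mechanism that lets institutes characterize a population without raw data access, so the sketch below is purely illustrative: it uses randomized response, a standard local-perturbation technique, to estimate the prevalence of a condition from deliberately noisy per-device reports. All names and parameters here are assumptions, not the thesis's design.

        import random

        def randomized_response(has_condition, p_truth=0.75):
            # Each device reports its own bit truthfully with probability p_truth,
            # otherwise flips it, so no single report reveals the true value.
            return has_condition if random.random() < p_truth else not has_condition

        def estimate_prevalence(reports, p_truth=0.75):
            # Invert the known noise: observed = (2*p - 1)*true + (1 - p).
            observed = sum(reports) / len(reports)
            return (observed - (1 - p_truth)) / (2 * p_truth - 1)

        # 10,000 simulated wearables with a true prevalence of 30%.
        truth = [random.random() < 0.30 for _ in range(10_000)]
        reports = [randomized_response(t) for t in truth]
        print(round(estimate_prevalence(reports), 3))   # close to 0.30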

    Development of a Security-Focused Multi-Channel Communication Protocol and Associated Quality of Secure Service (QoSS) Metrics

    The threat of eavesdropping and the difficulty of recognizing and correcting corrupted or suppressed information are persistent challenges in communication systems. Effectively managing protection mechanisms requires the ability to accurately gauge the likelihood or severity of a threat and to adapt the security features available in a system to mitigate it. This research focuses on the design and development of a security-focused, session-layer communication protocol based on a re-prioritized communication architecture model and associated metrics. From a probabilistic model that treats data leakage and data corruption as surrogates for breaches of confidentiality and integrity, a set of metrics allows the direct and repeatable quantification of the security available in single- or multi-channel networks. The quantification of security is based directly on the probabilities that adversarial listeners and malicious disruptors are able to access or alter the original message. Fragmenting data across multiple channels demonstrates potential improvements to confidentiality, while duplication improves the integrity of the data against disruption. Finally, the model and metrics are exercised in simulation. The ultimate goal is to minimize the information available to adversaries.
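    The confidentiality and integrity claims above reduce, under an independence assumption, to simple channel-compromise probabilities: an eavesdropper must capture every fragment, while a disruptor must corrupt every duplicate. The sketch below illustrates only that arithmetic; the per-channel probabilities are assumed values, not the report's metrics.

        def leakage_probability(p_eavesdrop, n_fragments):
            # Confidentiality loss: the adversary needs every fragment, so
            # splitting across independent channels drives leakage down.
            return p_eavesdrop ** n_fragments

        def corruption_probability(p_disrupt, n_copies):
            # Integrity loss: the disruptor must corrupt every duplicate, so
            # replication across channels drives corruption down.
            return p_disrupt ** n_copies

        for n in (1, 2, 3):
            print(n, leakage_probability(0.4, n), corruption_probability(0.2, n))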

    Evaluation of Spatio-Temporal Queries in Sensor Networks


    Artificial immune system for the Internet

    We investigate the usability of the Artificial Immune System (AIS) approach for solving selected problems in computer networks. Artificial immune systems are built from concepts and algorithms inspired by the theory of how the Human Immune System (HIS) works. We consider two applications: detection of routing misbehavior in mobile ad hoc networks, and email spam filtering. In mobile ad hoc networks, multi-hop connectivity is provided by the collaboration of independent nodes. The nodes follow a common protocol in order to build their routing tables and forward the packets of other nodes. As there is no central control, some nodes may deviate from the common protocol, which has a negative impact on the overall connectivity of the network. We build an AIS for the detection of routing misbehavior by directly mapping the standard concepts and algorithms used to explain how the HIS works. Implementation and evaluation in a simulator show that the AIS mimics well most of the effects observed in the HIS, e.g. the faster secondary reaction to already encountered misbehavior. However, its effectiveness and practical usability are very constrained, because some particularities of the problem cannot be accounted for by the approach, and because of the computational constraints (reported also in the AIS literature) of the negative selection algorithm used. For the spam filtering problem, we apply the AIS concepts and algorithms much more selectively and in a less standard way, and we obtain much better results. We build our AIS for antispam on top of a standard technique for digest-based collaborative email spam filtering. We identify an advantageous and underemphasized technological difference between AISs and the HIS, and we exploit this difference to incorporate negative selection in an innovative and computationally efficient way. We also improve the representation of the email digests used by the standard collaborative spam filtering scheme. We show that this new representation and the negative selection, when used together, significantly improve the filtering performance of the standard scheme on top of which we build our AIS. Our complete AIS for antispam integrates various innate and adaptive AIS mechanisms, including the specific use of negative selection mentioned above and the use of innate signalling mechanisms (PAMP and danger signals). In this way the AIS takes into account users' profiles, implicit or explicit feedback from the users, and the bulkiness of spam. We show by simulation that the overall AIS is very good both at detecting spam and at avoiding misdetection of good emails. Interestingly, both the innate and adaptive mechanisms prove to be crucial for achieving good overall performance. We develop and test (within a simulator) our AIS for collaborative spam filtering in the case of email communications. The solution, however, seems well applicable to other types of Internet communication: Internet telephony, chat/SMS, forums, news, blogs, or the web. In all these cases, the aim is to allow the wanted communications (content) and to prevent the unwanted ones from reaching the end users and occupying their time and communication resources. The filtering problems faced, or likely to be faced in the near future, by these applications have settings similar to those of the email case: the need for openness to unknown senders (creators of content, initiators of communication); the bulkiness of spam (many recipients are usually affected by the same spam content); the tolerance of the system to small damage (to small amounts of unfiltered spam); the possibility of cheaply obtaining implicit or explicit feedback from recipients about the damage (about the spam they receive); and the need for strong tolerance of wanted (non-spam) content. Our experiments with email spam filtering show that our AIS, i.e. the way we build it, is well fitted to such problem settings.
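    For readers unfamiliar with the negative selection algorithm discussed above, the following minimal sketch shows the core idea in the antispam setting: candidate detectors are generated at random and kept only if they match no digest of wanted ("self") email, after which any incoming digest matched by a surviving detector is flagged. The digest length, similarity rule, and thresholds are illustrative assumptions, not the thesis's representation.

        import random

        DIGEST_LEN = 32   # bits per digest (illustrative)

        def similar(a, b, threshold=26):
            # Stand-in matching rule: digests agreeing in >= threshold bit positions.
            return sum(x == y for x, y in zip(a, b)) >= threshold

        def random_digest():
            return tuple(random.randint(0, 1) for _ in range(DIGEST_LEN))

        def negative_selection(self_set, n_detectors=200):
            # Keep only candidate detectors that match no "self" (good-email) digest.
            detectors = []
            while len(detectors) < n_detectors:
                candidate = random_digest()
                if not any(similar(candidate, s) for s in self_set):
                    detectors.append(candidate)
            return detectors

        good_mail = [random_digest() for _ in range(50)]   # the user's wanted email
        detectors = negative_selection(good_mail)
        incoming = random_digest()
        print(any(similar(incoming, d) for d in detectors))   # True flags a spam suspect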

    The Eavesdropper's Dilemma

    This paper examines the problem of surreptitious Internet interception from the eavesdropper's point of view. We introduce the notion of fidelity in digital eavesdropping. In particular, we formalize several kinds of network noise that might degrade fidelity, most notably confusion, and show that reliable network interception may not be as simple as previously thought or even always possible. Finally, we suggest requirements for high-fidelity network interception, and show how systems that do not meet these requirements can be vulnerable to countermeasures, which in some cases can be performed entirely by a third party without the cooperation or even knowledge of the communicating parties.
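    One concrete way such confusion can arise (a classic example, offered here as an illustration rather than as the paper's own construction) is hop-limited traffic: a packet whose TTL expires between an on-path monitor and the endpoint is recorded by the eavesdropper but never delivered. The toy model below assumes hypothetical hop distances.

        def reaches(ttl, hops_away):
            # A packet survives only if its TTL covers the hop distance.
            return ttl >= hops_away

        EAVESDROPPER_HOP = 3   # monitor sits 3 hops from the sender (assumed)
        DESTINATION_HOP = 8    # real endpoint is 8 hops away (assumed)

        for ttl in (2, 5, 64):
            seen = reaches(ttl, EAVESDROPPER_HOP)
            delivered = reaches(ttl, DESTINATION_HOP)
            print(f"TTL={ttl:2}: eavesdropper sees={seen}, endpoint receives={delivered}")

        # A TTL of 5 inflates the eavesdropper's transcript with data the receiver
        # never processes, so the captured stream no longer matches reality.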