
    TrustShadow: Secure Execution of Unmodified Applications with ARM TrustZone

    Full text link
    The rapid evolution of Internet-of-Things (IoT) technologies has created an emerging need to make devices smarter. A variety of applications now run simultaneously on ARM-based processors; for example, devices at the edge of the Internet are given more horsepower so that they can be entrusted with storing, processing and analyzing data collected from IoT devices. This significantly improves efficiency and reduces the amount of data that must be transported to the cloud for processing, analysis and storage. However, commodity OSes are prone to compromise, and once they are exploited, attackers can access the data on these devices. Since the data stored and processed on the devices can be sensitive, this threat, left untackled, is particularly disconcerting. In this paper, we propose a new system, TrustShadow, that shields legacy applications from untrusted OSes. TrustShadow takes advantage of ARM TrustZone technology and partitions resources into a secure world and a normal world. In the secure world, TrustShadow constructs a trusted execution environment for security-critical applications. This trusted environment is maintained by a lightweight runtime system that coordinates communication between applications and the ordinary OS running in the normal world. The runtime system does not provide system services itself; rather, it forwards requests for system services to the ordinary OS and verifies the correctness of the responses. To demonstrate the efficiency of this design, we prototyped TrustShadow on a real chip board with ARM TrustZone support and evaluated its performance using both microbenchmarks and real-world applications. We showed that TrustShadow introduces only negligible overhead to real-world applications. Comment: MobiSys 201
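The forward-and-verify pattern described in the abstract can be sketched as follows. This is an illustrative model only, not TrustShadow's actual code: the class names (`UntrustedOS`, `SecureRuntime`), the file-read service, and the SHA-256 integrity metadata are all assumptions chosen to show the idea that the secure-world runtime forwards a request to the normal-world OS and then checks the reply against metadata it keeps itself.

```python
# Hypothetical sketch of a forward-and-verify runtime: the secure world
# implements no services of its own; it forwards each request to the
# untrusted OS and validates the response against trusted metadata.
import hashlib

class UntrustedOS:
    """Stand-in for the normal-world OS that services forwarded requests."""
    def __init__(self):
        self._files = {"config.txt": b"threshold=42\n"}

    def read_file(self, path: str) -> bytes:
        return self._files.get(path, b"")

class SecureRuntime:
    """Stand-in for the secure-world runtime: forward, then verify."""
    def __init__(self, os_, known_hashes):
        self._os = os_
        self._known = known_hashes  # integrity metadata kept in the secure world

    def read_verified(self, path: str) -> bytes:
        data = self._os.read_file(path)           # forward to the untrusted OS
        digest = hashlib.sha256(data).hexdigest()
        if self._known.get(path) != digest:       # verify the response
            raise ValueError(f"integrity check failed for {path}")
        return data

expected = {"config.txt": hashlib.sha256(b"threshold=42\n").hexdigest()}
rt = SecureRuntime(UntrustedOS(), expected)
assert rt.read_verified("config.txt") == b"threshold=42\n"
```

A tampered response (or stale metadata) makes the verification step raise instead of silently returning attacker-controlled data, which is the property the paper's runtime relies on.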

    Internet of Things and Neural Network Based Energy Optimization and Predictive Maintenance Techniques in Heterogeneous Data Centers

    Full text link
    Rapid growth of cloud-based systems is accelerating the growth of data centers. Private and public cloud service providers are increasingly deploying data centers around the world, and their need for edge locations has created large demand for leasing space and power from midsize data centers in smaller cities. Midsize data centers are typically modular and heterogeneous, demanding 100% availability along with high service-level agreements. Data centers account for an increasingly troublesome share of electricity consumption, and growing energy costs and environmental responsibility have placed the industry, particularly midsize data centers, under increasing pressure to improve operational efficiency. Power consumption is driven mainly by servers and networking devices on the computing side and by cooling systems on the facility side. The facility-side systems have complex interactions with each other: static control logic, a large number of configurations and nonlinear interdependencies create challenges in understanding and optimizing energy efficiency. Determining the optimum configuration analytically or experimentally is very challenging; however, machine learning methodologies have proven effective for optimizing such complex systems. In this thesis, we utilize a learning engine that learns from operationally collected data to accurately predict Power Usage Effectiveness (PUE), and we create an intelligent method to validate and test the results. We explore new techniques for designing and implementing an Internet of Things (IoT) platform to collect, store and analyze data. First, we study using a machine learning framework to predictively detect issues in facility-side systems in a modular midsize data center, and we propose ways to recognize gaps between optimal and operational values in order to identify potential issues.
Second, we study using machine learning techniques to optimize power usage in facility-side systems in a modular midsize data center. We experimented with neural network controllers to further optimize the data suite cooling system's energy consumption in real time. We designed, implemented, and deployed an Internet of Things framework to collect relevant information from facility-side infrastructure, with flexible configuration controllers connecting all facility-side infrastructure within the data center ecosystem. We addressed resiliency by creating a redundant controls network and mission-critical alerting via an edge device. The data collected was also used to enhance service processes, with high impact on operational service-level metrics: response time improved by 77% and first-time resolution went up by 32%. Further, our experimental results show that we can predictively identify issues in the cooling systems, with anomalies identified 30 to 60 days ahead. We also see the potential to optimize power usage efficiency in the range of 3% to 6%. In the future, more samples of issues and corrective actions can be analyzed to create a practical neural-network-based controller for real-time optimization.
Ph.D. dissertation, Information Systems Engineering, College of Engineering and Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/136074/1/Final Dissertation Vishal Singh.pdf
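The metric at the center of the thesis, Power Usage Effectiveness, is the ratio of total facility power to IT equipment power (ideal value 1.0). A minimal sketch of the PUE computation and the "gap between optimal and operational values" check the abstract describes, with made-up sample numbers and an assumed 5% alert threshold:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power (ideal = 1.0)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative operational sample (numbers are made up, not from the thesis):
measured = pue(total_facility_kw=1500.0, it_equipment_kw=1000.0)  # 1.5
predicted_optimal = 1.4  # e.g., output of a trained prediction model

# Flag the gap between operational and predicted-optimal PUE.
gap_pct = (measured - predicted_optimal) / predicted_optimal * 100
if gap_pct > 5.0:  # assumed alert threshold
    print(f"PUE gap {gap_pct:.1f}% above predicted optimum: investigate cooling")
```

In the thesis's setting, `predicted_optimal` would come from a model trained on operationally collected IoT data rather than being a constant.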

    Navigating to New Shores: Seizing the Future for Sustainable and Resilient U.S. Freshwater Resources

    Get PDF
    After more than six years of intensive, solution-oriented work on U.S. freshwater issues, The Johnson Foundation at Wingspread is concluding its Charting New Waters initiative. Through convening hundreds of experts representing different sectors and perspectives, we have amplified important ideas and innovations that can make a difference. This executive summary of our final report synthesizes insights from the full arc of Charting New Waters and is meant to provide a platform for our many partners and other leaders as they continue to address water resource and infrastructure challenges. Without significant changes, existing water systems will soon no longer be able to provide the services that citizens have come to expect. Recent water crises have illustrated that the economic and social consequences of inaction are far too great for this nation and its communities. It is time to accelerate the adoption and implementation of the transformative solutions we know are possible. Our full report leads with a vision that illustrates what The Johnson Foundation believes is both possible and necessary to achieve if our nation is to successfully navigate our water challenges. It then presents a set of principles, summarized below, to help guide the efforts of leaders in various sectors as they act upon the recommendations we offer. The recommendations themselves, also summarized briefly below, fall under the following five key ideas:
    - Optimize the use of available water supplies
    - Transition to next-generation wastewater systems
    - Integrate the management of water, energy and food production
    - Institutionalize the value of water
    - Create integrated utilities
    The Johnson Foundation selected the recommendations presented in the report because of their timeliness and promise for leveraging existing momentum. We hope the recommendations shed light on what is needed to catalyze transformative change and the benefits we can reap as a result. We also hope they will inspire additional action to seize the future for sustainable and resilient U.S. freshwater resources

    Delay Performance and Cybersecurity of Smart Grid Infrastructure

    Get PDF
    To address major challenges to conventional electric grids (e.g., generation diversification and optimal deployment of expensive assets), full visibility and pervasive control over utilities' assets and services are being realized through the integration

    The 14th Overture Workshop: Towards Analytical Tool Chains

    Get PDF
    This report contains the proceedings from the 14th Overture workshop organized in connection with the Formal Methods 2016 symposium. This includes nine papers describing different technological progress in relation to the Overture/VDM tool support and its connection with other tools such as Crescendo, Symphony, INTO-CPS, TASTE and ViennaTalk

    A study of the applicability of software-defined networking in industrial networks

    Get PDF
    Industrial networks interconnect sensors and actuators to carry out monitoring, control and protection functions in environments such as transportation systems or industrial automation systems. These cyber-physical systems are generally supported by multiple data networks, whether wired or wireless, of which new capabilities are demanded, so that the control and management of such networks must be coupled to the conditions of the industrial system itself. Requirements thus arise concerning flexibility, maintainability and adaptability, while quality-of-service constraints must not be affected. However, traditional network control strategies generally do not adapt efficiently to increasingly dynamic and heterogeneous environments. After defining a set of network requirements and analyzing the limitations of current solutions, we conclude that control provided independently of the network devices themselves would add flexibility to these networks. Consequently, this thesis explores the applicability of Software-Defined Networking (SDN) to industrial automation systems. As a case study, we take automation networks based on the IEC 61850 standard, which is widely used in the design of communication networks for power distribution systems such as electrical substations. IEC 61850 defines different services and protocols with strict requirements in terms of network latency and availability, which must be satisfied through traffic-engineering techniques.
As a result, taking advantage of the flexibility and programmability offered by software-defined networks, this thesis proposes a control architecture based on the OpenFlow protocol which, incorporating network management and monitoring technologies, allows traffic policies to be established according to traffic priority and network state. Moreover, electrical substations are a representative example of critical infrastructure, in which a failure can result in severe economic losses and physical and material damage. Such systems must therefore be extremely secure and robust, which calls for redundant topologies offering minimal reaction time to failures. To this end, the IEC 62439-3 standard defines the Parallel Redundancy Protocol (PRP) and High-availability Seamless Redundancy (HSR), which guarantee zero recovery time in case of failure through active data redundancy in Ethernet networks. However, the management of PRP- and HSR-based networks is static and inflexible, which, together with the bandwidth reduction caused by data duplication, makes efficient control of the available resources difficult. In this regard, this thesis proposes SDN-based redundancy control for the efficient use of meshed topologies while guaranteeing the availability of control and monitoring applications. In particular, we discuss how the OpenFlow protocol allows an external controller to configure multiple redundant paths between devices with several network interfaces, as well as in wireless environments, so that critical services can be protected under interference and mobility. The suitability of the proposed solutions has been evaluated mainly by emulating different topologies and traffic types.
We have also studied, analytically and experimentally, how latency is affected by reducing the number of hops in communications compared with using a spanning tree, and by balancing load in a layer-2 network. In addition, we have analyzed the gains in network resource efficiency and the robustness achieved by combining the PRP and HSR protocols with OpenFlow-based control. These results show that the SDN model could significantly improve the performance of a mission-critical industrial network
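The zero-recovery-time guarantee of PRP rests on a simple receiver-side mechanism: the sender transmits each frame, tagged with a sequence number, over both redundant LANs, and the receiver delivers the first copy and discards the duplicate. A minimal sketch of that duplicate-discard logic (simplified: real IEC 62439-3 receivers use a bounded drop window rather than an unbounded set, and the class and method names here are illustrative):

```python
class PrpDiscard:
    """Simplified PRP receiver: deliver the first copy of each frame,
    discard the duplicate arriving over the other LAN."""

    def __init__(self):
        self._seen = set()  # (source, sequence-number) pairs already delivered

    def accept(self, source: str, seq: int) -> bool:
        key = (source, seq)
        if key in self._seen:
            return False        # duplicate from the redundant LAN: discard
        self._seen.add(key)
        return True             # first copy: deliver to upper layers

rx = PrpDiscard()
assert rx.accept("ied-1", 7) is True    # copy from LAN A delivered
assert rx.accept("ied-1", 7) is False   # copy from LAN B discarded
assert rx.accept("ied-1", 8) is True    # next frame delivered normally
```

If either LAN fails, the copy on the surviving LAN is still the "first copy" and is delivered immediately, which is why no recovery time is needed; the cost is the duplicated bandwidth the thesis proposes to manage with SDN.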

    Software-Defined Networking: A Comprehensive Survey

    Get PDF
    The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. 
In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms—with a focus on aspects such as resiliency, scalability, performance, security, and dependability—as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment
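The control/data-plane separation the survey describes reduces, at the switch, to a match-action flow table: the controller installs prioritized rules, the switch only looks them up, and a table miss is punted back to the control plane. A minimal sketch of that abstraction (the field names and action strings are illustrative, not the OpenFlow wire format):

```python
class FlowTable:
    """Toy data plane: prioritized match-action rules installed by a controller."""

    def __init__(self):
        self._rules = []  # (priority, match_dict, action); highest priority wins

    def install(self, priority: int, match: dict, action: str) -> None:
        self._rules.append((priority, match, action))
        self._rules.sort(key=lambda rule: -rule[0])  # keep highest priority first

    def lookup(self, packet: dict) -> str:
        for _, match, action in self._rules:
            if all(packet.get(field) == value for field, value in match.items()):
                return action
        return "send_to_controller"  # table miss: punt to the control plane

table = FlowTable()
table.install(10, {"dst_ip": "10.0.0.2"}, "output:2")
table.install(20, {"dst_ip": "10.0.0.2", "tcp_port": 22}, "drop")

assert table.lookup({"dst_ip": "10.0.0.2", "tcp_port": 80}) == "output:2"
assert table.lookup({"dst_ip": "10.0.0.2", "tcp_port": 22}) == "drop"
assert table.lookup({"dst_ip": "10.0.0.9"}) == "send_to_controller"
```

The "send_to_controller" miss path is what lets a logically centralized controller learn about new flows and install rules reactively, which is the programmability the survey identifies as SDN's key contribution.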