
    An Analysis of the Conventional Wire Maintenance Methods and Transition Wire Integrity Programs Utilized in the Aviation Industry.

    Aging aircraft wiring poses a significant threat to both commercial and military aircraft. Recent air disasters involving aging aircraft wiring have made it clear that wiring degradation can be catastrophic. Aging of an electrical wiring system can result in loss of critical equipment functions or loss of information about equipment operation. Either result can lead to an electrical failure causing smoke and fire, endangering public health and aircraft safety. Conventional maintenance practices do not effectively manage aging wiring problems. More proactive methods are needed so that aircraft wiring failures can be anticipated and wiring systems can be repaired or replaced before failures occur. This thesis identifies the effects of aging wiring systems, the potential degradation of aircraft safety, and the regulations governing aircraft wire safety, and evaluates the conventional wire maintenance practices and transition wire integrity programs used in the aviation industry.

    Trusted Computing and Secure Virtualization in Cloud Computing

    Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding protection of data handled by cloud computing providers. One consequence of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine instance will run, as well as to ensure that the virtual machine image has not been tampered with, are steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored on, or calculations made on, the VM instance offered by the CS provider. This thesis combines three components -- trusted computing, virtualization technology and cloud computing platforms -- to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone for the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module. The initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage and network resources in order to serve a large number of customers using a multi-tenant multiplexing model, offering on-demand self-service over a broad network.
Open source cloud computing platforms are, similar to trusted computing, a fairly recent technology in active development. The issue of trust in public cloud environments is addressed by examining the state of the art within cloud computing security and subsequently addressing the issues of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the Trusted Platform Module (TPM) for key generation and data protection. The TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis also describes a detailed implementation design of the protocol using the OpenStack cloud computing platform. In order to verify the implementability of the proposed protocol, a prototype implementation has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure using generic virtual machine images, it presents a step towards the creation of a secure and trusted public cloud computing environment.
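The client-side checks of such a trusted launch can be sketched as follows. This is an illustrative toy, not the thesis's actual OpenStack implementation: real TPM attestation verifies a signed quote over platform configuration registers (PCRs), whereas here both checks are reduced to hash comparisons, and all function names and inputs are assumptions.

```python
import hashlib
import hmac

def image_digest(image_bytes: bytes) -> str:
    """Digest of the VM image the client intends to launch."""
    return hashlib.sha256(image_bytes).hexdigest()

def verify_host_attestation(quote: bytes, nonce: bytes, expected_digest: str) -> bool:
    """Toy stand-in for TPM quote verification: a real verifier checks a
    signature over PCR values; here we just hash the quote plus a fresh nonce."""
    measured = hashlib.sha256(quote + nonce).hexdigest()
    return hmac.compare_digest(measured, expected_digest)

def trusted_launch(image: bytes, known_image_digest: str,
                   quote: bytes, nonce: bytes, known_host_digest: str) -> bool:
    """Proceed with the launch only if BOTH the VM image and the host
    attestation match known-good values."""
    image_ok = hmac.compare_digest(image_digest(image), known_image_digest)
    host_ok = verify_host_attestation(quote, nonce, known_host_digest)
    return image_ok and host_ok

# Example: the launch is approved only when image and host match expectations.
img = b"generic-vm-image"
q, n = b"pcr-quote", b"fresh-nonce"
ok = trusted_launch(img, image_digest(img), q, n,
                    hashlib.sha256(q + n).hexdigest())
```

The constant-time `hmac.compare_digest` is used so that the comparison itself does not leak information about the expected digests.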

    A survey on cost-effective context-aware distribution of social data streams over energy-efficient data centres

    Social media have emerged in the last decade as a viable and ubiquitous means of communication. The ease of user content generation within these platforms, e.g. check-in information, multimedia data, etc., along with the proliferation of Global Positioning System (GPS)-enabled, always-connected capture devices, has led to data streams of unprecedented volume and a radical change in information sharing. Social data streams raise a variety of practical challenges, including the derivation of real-time meaningful insights from effectively gathered social information, as well as a paradigm shift for content distribution that leverages contextual data associated with user preferences, geographical characteristics and devices in general. In this article we present a comprehensive survey that outlines the state of the art and organizes challenges concerning social media streams and the infrastructure of the data centres supporting efficient access to data streams in terms of content distribution, data diffusion, data replication, energy efficiency and network infrastructure. We systematize the existing literature and proceed to identify and analyse the main research points and industrial efforts in the area as far as modelling, simulation and performance evaluation are concerned.

    Big Data and the Digitalizing Society in China

    This thesis investigates the development of big data and the smart city, and the relationship between humans, digital technologies, and cities in the context of China. Contributing to the emerging interest of human geography in how big data and other digital technologies reshape the urban space and everyday life, the thesis presents a distinct data story about a digitalizing society of China. In a big data era, accompanying the ubiquity of digital devices and technologies is the lack of consciousness of their socio-political consequences, which nonetheless constitute an important productive aspect of society. Engaging with the discussions in human geography and beyond about the relationships between digital technologies and Deleuzian ‘societies of control’, Maurizio Lazzarato’s work on the production of subjectivity and Gilles Deleuze and Félix Guattari’s conception of the machine and the organism, I argue for further understandings of the coexistence of control and discipline as distinct yet dependent modes of social control. I place specific emphasis upon the coexisting processes of dividualisation and individualisation in the operation of big data and other digital technologies. The thesis further illustrates this through the empirical analysis of the development of two smart urbanism projects, the City Brain and the Health Code, and of short video platforms in China, which for me represent two different aspects of everyday life influenced by big data that concern two different political relations, that is, biopolitics, as understood by Michel Foucault, and noopolitics (i.e., politics of the mind) as understood by Lazzarato. In order to de-fetishize big data, the thesis proceeds to discuss its technicity by characterising big data as mnemotechnics, a real-time technology, and a cosmotechnology respectively through the work of philosophers Bernard Stiegler and Yuk Hui. 
This intervention is also a proposal to rethink and reinvent the relations between humans and digital technology. Turning to Foucault’s ‘aesthetic of existence’, the thesis discusses the possibility of alternative ways of life in a big data era and, drawing on Deleuze and Guattari’s work, proposes ‘becoming a digital nomad’ as a methodology to live with digital technologies, explore new possibilities and events, embrace unplanned encounters, and make new, temporary connections in the big data era.

    Remain

    In a world undergoing constant media-driven change, the infrastructures, materialities, and temporalities of remains have become urgent. This book engages with the remains and remainders of media cultures through the lens both of theater and performance studies and of media archaeology. By taking "remain" as a verb, noun, state, and process of becoming, the authors explore its epistemological, social, and political implications.

    Process Control for Biological Nutrient Removal Processes in Fluidized Beds Treating Low Carbon to Nitrogen Municipal Wastewater

    Conventional wastewater treatment techniques - utilizing microorganisms to remove organics and nutrients (i.e. nitrogen and phosphorus) from a water stream and partially incorporate them into their cell structure - struggle to adapt to increasing urbanization due to land and infrastructure requirements. The circulating fluidized-bed bioreactor (CFBBR) was developed as a way to provide biological treatment in an urbanized area by cultivating high-density bacteria on an inert media. The technology operates as a pre-anoxic nitrification/denitrification wastewater treatment process. The system is initially loaded with media, providing a platform for microbial growth. Internal recycle streams in the system provide the energy to fluidize the media, increasing mass transfer and accelerating microbial growth and pollutant removal rates. A pilot-scale CFBBR unit operated in Guangzhou, China, at an organic loading rate of 0.50 kg COD/day and a nitrogen loading rate of 0.075 kg N/day, was able to achieve a 93% reduction in carbon and an 88% reduction in nitrogen. In addition, an innovative sensor network was constructed from open source hardware to monitor and adjust dissolved oxygen (DO) levels inside a 15 L lab-scale partial nitrification fluidized bed. The treatment strategy for this biological process was to create reactor conditions that favour the nitrifying bacteria that convert ammonia to nitrite, called ammonia oxidizing bacteria (AOB), over the nitrifying bacteria that convert nitrite to nitrate, called nitrite oxidizing bacteria (NOB). The CFBBR, by virtue of its unique ability to control biofilm thickness and accordingly biological solids retention time, offers significant advantages over other emerging nitrogen removal processes. The control system was designed to automatically adjust the air flow to the bioreactor to maintain a DO level of approximately 1 mg/L, conditions that favour AOB activity over that of NOB.
The unit operated continuously for 40 days as the bioreactor was fed with 200 mg/L of synthetic ammonia wastewater (devoid of carbon) to a maximum nitrogen loading rate of 6 g NH4-N/day. The control system was able to maintain an ambient DO level of 1.30 mg/L. At this loading rate, the effluent nitrate concentration was approximately 5% of the influent feed, indicating low NOB populations in the reactor.
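The feedback loop described above, where air flow is adjusted to hold DO near a 1 mg/L setpoint, can be sketched as a simple proportional controller. This is an illustrative toy, not the thesis's actual control system: the gain, flow limits, and reactor dynamics below are all assumed values.

```python
SETPOINT_DO = 1.0              # mg/L, target favouring AOB over NOB
KP = 0.5                       # proportional gain (assumed value)
MIN_FLOW, MAX_FLOW = 0.0, 5.0  # air flow limits in L/min (assumed)

def adjust_air_flow(current_flow: float, measured_do: float) -> float:
    """Raise air flow when DO is below the setpoint, lower it when above,
    clamped to the actuator's physical limits."""
    error = SETPOINT_DO - measured_do
    new_flow = current_flow + KP * error
    return max(MIN_FLOW, min(MAX_FLOW, new_flow))

# Simulated control loop with a toy first-order DO response to air flow.
flow, do = 1.0, 0.4
for _ in range(50):
    flow = adjust_air_flow(flow, do)
    do += 0.2 * (0.6 * flow - do)  # hypothetical reactor dynamics
```

In practice such a loop would read DO from the open-source sensor hardware and drive an air valve or blower, with the sampling interval and gain tuned against the reactor's actual oxygen-transfer response.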