2,439 research outputs found
Waste Reduction, Construction and Demolition Debris: Guide for Building, Construction and Environmental Professionals, Revised November 2008
This document is intended to lay the foundation for resource reduction strategies in new construction, renovation and demolition. If you have an innovative idea or information that you believe should be included in future updates of this manual, please email Shelly Codner at [email protected] or Jan Loyson at [email protected].
Throughout this manual, we use the term “waste reduction” to describe waste management initiatives that will result in less waste going to the landfill. In accordance with the waste management hierarchy, these practices include reducing (waste prevention), reusing (deconstruction and salvage), recycling, and renewing (making old things new again), in that order. This manual will explain what these practices are and how to incorporate them into your projects.
Trusted Computing and Secure Virtualization in Cloud Computing
Large-scale deployment and use of cloud computing in industry
is accompanied, and at the same time hampered, by concerns regarding the protection of
data handled by cloud computing providers. One of the consequences of moving
data processing and storage off company premises is that organizations have
less control over their infrastructure. As a result, cloud service (CS) clients
must trust that the CS provider is able to protect their data and
infrastructure from both external and internal attacks. Currently, however, such
trust can only rely on organizational processes declared by the CS
provider and cannot be remotely verified and validated by an external party.
Enabling the CS client to verify the integrity of the host where the
virtual machine instance will run, as well as to ensure that the virtual
machine image has not been tampered with, are some steps towards building
trust in the CS provider. Having the tools to perform such
verifications prior to the launch of the VM instance allows CS
clients to decide at runtime whether certain data should be stored, or certain
calculations performed, on the VM instance offered by the CS provider.
This thesis combines three components -- trusted computing, virtualization technology
and cloud computing platforms -- to address issues of trust and
security in public cloud computing environments. Of the three components,
virtualization technology has had the longest evolution and is a cornerstone
for the realization of cloud computing. Trusted computing is a recent
industry initiative that aims to implement the root of trust in a hardware
component, the trusted platform module. The initiative has been formalized
in a set of specifications and is currently at version 1.2. Cloud computing
platforms pool virtualized computing, storage and network resources in
order to serve a large number of customers through a multi-tenant
multiplexing model, offering on-demand self-service over broad network access.
Open source cloud computing platforms are, similar to trusted computing, a
fairly recent technology in active development.
The issue of trust in public cloud environments is addressed
by examining the state of the art within cloud computing security and
subsequently addressing the issues of establishing trust in the launch of a
generic virtual machine in a public cloud environment. As a result, the thesis
proposes a trusted launch protocol that allows CS clients
to verify and ensure the integrity of the VM instance at launch time, as
well as the integrity of the host where the VM instance is launched. The protocol
relies on the use of a Trusted Platform Module (TPM) for key generation and data protection.
The TPM also plays an essential part in the integrity attestation of the
VM instance host. Along with a theoretical, platform-agnostic protocol,
the thesis also describes a detailed implementation design of the protocol
using the OpenStack cloud computing platform.
In order to verify the implementability of the proposed protocol, a prototype
implementation has been built using a distributed deployment of OpenStack.
While the protocol covers only the trusted launch procedure using generic
virtual machine images, it represents a step towards
the creation of a secure and trusted public cloud computing environment.
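The launch-time verification described above can be sketched as a simple challenge-response exchange. The snippet below is a hypothetical illustration, not the thesis protocol itself: the TPM quote is simulated with an HMAC over the client's nonce and the host's platform measurements, and all keys, reference digests and image contents are invented for the example.

```python
import hashlib, hmac, os

# Hypothetical known-good reference values that the CS client trusts.
TRUSTED_PCR_DIGEST = hashlib.sha256(b"known-good-hypervisor-stack").hexdigest()
TRUSTED_IMAGE_HASH = hashlib.sha256(b"generic-vm-image-contents").hexdigest()

def tpm_quote(nonce: bytes, pcr_digest: str, attestation_key: bytes) -> bytes:
    """Stand-in for a TPM quote: binds the fresh nonce to the platform
    measurements (here via HMAC instead of a real TPM signature)."""
    return hmac.new(attestation_key, nonce + pcr_digest.encode(), hashlib.sha256).digest()

def trusted_launch(image: bytes, attestation_key: bytes) -> bool:
    # 1. Client sends a fresh nonce to the host to prevent replay.
    nonce = os.urandom(16)
    # 2. Host returns a quote over its current platform measurements
    #    (simulated here as an untampered host).
    host_pcr = hashlib.sha256(b"known-good-hypervisor-stack").hexdigest()
    quote = tpm_quote(nonce, host_pcr, attestation_key)
    # 3. Client verifies the quote against the trusted reference values.
    expected = hmac.new(attestation_key, nonce + TRUSTED_PCR_DIGEST.encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(quote, expected):
        return False  # host integrity check failed
    # 4. Client checks that the VM image has not been tampered with.
    return hashlib.sha256(image).hexdigest() == TRUSTED_IMAGE_HASH

key = b"shared-attestation-key"
print(trusted_launch(b"generic-vm-image-contents", key))  # True
print(trusted_launch(b"tampered-image", key))             # False
```

The nonce rules out replayed quotes, and the image hash check covers the second requirement (an untampered VM image) independently of the host attestation.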
Can Carbon Sinks be Operational? An RFF Workshop Summary
An RFF Workshop brought together experts from around the world to assess the feasibility of using biological sinks to sequester carbon as part of a global atmospheric mitigation effort. The chapters of this proceeding are a result of that effort. Although the intent of the workshop was not to generate a consensus, a number of studies suggest that sinks could be a relatively inexpensive and effective carbon management tool. The chapters cover a variety of aspects and topics related to the monitoring and measurement of carbon in biological systems. They tend to support the view that carbon sequestration using biological systems is technically feasible with relatively good precision and at relatively low cost. Thus carbon sinks can be operational.
Keywords: carbon, sinks, global warming, sequestration, forests
Design for Test and Hardware Security Utilizing Tester Authentication Techniques
Design-for-Test (DFT) techniques have been developed to improve the testability of integrated circuits. Among the known DFT techniques, scan-based testing is considered an efficient solution for digital circuits. However, the scan architecture can be exploited to launch a side-channel attack. Scan chains can be used to access a cryptographic core inside a system-on-chip to extract critical information such as a private encryption key. For a scan-enabled chip, if an attacker is given unlimited access to apply all sorts of inputs to the Circuit-Under-Test (CUT) and observe the outputs, the probability of gaining access to critical information increases. In this thesis, solutions are presented to improve hardware security and protect circuits against attacks that use the scan architecture. A solution based on tester authentication is presented in which the CUT requests the tester to provide a secret code for authentication. The tester authentication circuit limits access to the scan architecture to known testers. Moreover, in the proposed solution the number of attempts to apply test vectors and observe the results through the scan architecture is limited, making brute-force attacks practically impossible. A tester authentication scheme utilizing a Phase Locked Loop (PLL) to encrypt the operating frequency of both DUT and tester has also been presented. In this method, access to critical security circuits such as crypto-cores is not granted in the test mode. Instead, a built-in self-test method is used in the test mode to protect the circuit against scan-based attacks. Security for the new generation of three-dimensional (3D) integrated circuits has been investigated through 3D simulations in the COMSOL Multiphysics environment. It is shown that the process of wafer thinning for 3D stacked IC integration reduces the leakage current, which increases the chip's security against side-channel attacks.
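The tester-authentication idea, in which the CUT challenges the tester and locks out after a fixed number of failed attempts, can be sketched in software. This is a hypothetical behavioural model, not the thesis's circuit: the shared secret, the HMAC-based response and the attempt limit are illustrative stand-ins for the on-chip authentication logic.

```python
import hashlib, hmac, os

class TesterAuthGate:
    """Hypothetical gate in front of the scan chain: the CUT issues a
    challenge, and only a tester that knows the shared secret can enable
    scan access. After MAX_ATTEMPTS failures the gate locks permanently."""
    MAX_ATTEMPTS = 3

    def __init__(self, shared_secret: bytes):
        self._secret = shared_secret
        self._failures = 0
        self.scan_enabled = False

    def challenge(self) -> bytes:
        # Fresh nonce per session so responses cannot be replayed.
        self._nonce = os.urandom(8)
        return self._nonce

    def respond(self, tester_code: bytes) -> bool:
        if self._failures >= self.MAX_ATTEMPTS:
            return False  # locked out: brute force becomes impractical
        expected = hmac.new(self._secret, self._nonce, hashlib.sha256).digest()
        if hmac.compare_digest(tester_code, expected):
            self.scan_enabled = True
            return True
        self._failures += 1
        return False

# A legitimate tester derives the code from the shared secret:
secret = b"factory-provisioned-secret"
gate = TesterAuthGate(secret)
nonce = gate.challenge()
legit_code = hmac.new(secret, nonce, hashlib.sha256).digest()
print(gate.respond(legit_code))  # True
```

The attempt counter models the thesis's second measure: even with scan access exposed, an attacker gets only a handful of guesses before the interface is disabled.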
Hobson Manufacturing Corp. v. SE/Z Const. Augmentation Record 2 Dckt. 38202
https://digitalcommons.law.uidaho.edu/idaho_supreme_court_record_briefs/4488/thumbnail.jp
New readout and data-acquisition system in an electron-tracking Compton camera for MeV gamma-ray astronomy (SMILE-II)
For MeV gamma-ray astronomy, we have developed an electron-tracking Compton
camera (ETCC) as a MeV gamma-ray telescope capable of rejecting the radiation
background and attaining a high sensitivity of nearly 1 mCrab in space. Our
ETCC comprises a gaseous time-projection chamber (TPC) with a micro pattern gas
detector for tracking recoil electrons and a position-sensitive scintillation
camera for detecting scattered gamma rays. After the success of a first balloon
experiment in 2006 with a small ETCC (using a 10 × 10 × 15 cm³
TPC) for measuring diffuse cosmic and atmospheric sub-MeV gamma rays (Sub-MeV
gamma-ray Imaging Loaded-on-balloon Experiment I; SMILE-I), a (30 cm)³
medium-sized ETCC was developed to measure MeV gamma-ray spectra from celestial
sources, such as the Crab Nebula, with single-day balloon flights (SMILE-II).
To achieve this goal, a 100-times-larger detection area compared with that of
SMILE-I is required without changing the weight or power consumption of the
detector system. In addition, the event rate is also expected to dramatically
increase during observation. Here, we describe both the concept and the
performance of the new data-acquisition system with this (30 cm)³ ETCC to
manage 100 times more data while satisfying the severe restrictions regarding
the weight and power consumption imposed by a balloon-borne observation. In
particular, to improve the detection efficiency of the fine tracks in the TPC
from 10% to 100%, we introduce a new data-handling algorithm in
the TPC. Therefore, for efficient management of such large amounts of data, we
developed a data-acquisition system with parallel data flow.
Comment: 11 pages, 24 figures.
Creation of backdoors in quantum communications via laser damage
Practical quantum communication (QC) protocols are assumed to be secure
provided implemented devices are properly characterized and all known side
channels are closed. We show that this is not always true. We demonstrate a
laser-damage attack capable of modifying device behaviour on demand. We test it
on two practical QC systems for key distribution and coin-tossing, and show
that newly created deviations lead to side channels. This reveals that laser
damage is a potential security risk to existing QC systems, and necessitates
their testing to guarantee security.
Comment: Changed the title to match the journal version. 9 pages, 5 figures.
SystemC Model of Power Side-Channel Attacks Against AI Accelerators: Superstition or not?
As training artificial intelligence (AI) models is a lengthy and hence costly
process, leakage of such a model's internal parameters is highly undesirable.
In the case of AI accelerators, side-channel information leakage opens up the
threat scenario of extracting the internal secrets of pre-trained models.
Therefore, sufficiently elaborate methods for design verification as well as
fault and security evaluation at the electronic system level are in demand. In
this paper, we propose estimating information leakage from the early design
steps of AI accelerators to aid in a more robust architectural design. We first
introduce the threat scenario before diving into SystemC as a standard method
for early design evaluation and how this can be applied to threat modeling. We
present two successful side-channel attack methods executed via SystemC-based
power modeling: correlation power analysis and template attack, both leading to
total information leakage. The presented models are verified against an
industry-standard netlist-level power estimation to prove general feasibility
and determine accuracy. Consequently, we explore the impact of additive noise
in our simulation to establish indicators for early threat evaluation. The
presented approach is again validated via a model-vs-netlist comparison,
showing high accuracy of the achieved results. This work hence is a solid step
towards fast attack deployment and, subsequently, the design of
attack-resilient AI accelerators.
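Correlation power analysis, one of the two attack methods demonstrated, can be illustrated with a small simulation. The sketch below is a toy under stated assumptions, not the paper's SystemC power model: leakage is modelled as the Hamming weight of a 4-bit S-box output (the published PRESENT S-box) plus Gaussian noise, and for each key guess the predicted leakage is correlated against the traces; the correct key yields the highest correlation.

```python
import numpy as np

# PRESENT cipher 4-bit S-box (published values) and a Hamming-weight table.
SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])
HW = np.array([bin(v).count("1") for v in range(16)])

def simulate_traces(plaintexts, key, noise=0.5, rng=None):
    """Toy leakage model: Hamming weight of the S-box output plus noise."""
    rng = rng or np.random.default_rng(0)
    return HW[SBOX[plaintexts ^ key]] + rng.normal(0, noise, len(plaintexts))

def cpa_recover_key(plaintexts, traces):
    """Correlate each key hypothesis against the traces; the hypothesis
    with the highest absolute Pearson correlation is the recovered key."""
    scores = [abs(np.corrcoef(HW[SBOX[plaintexts ^ g]], traces)[0, 1])
              for g in range(16)]
    return int(np.argmax(scores))

rng = np.random.default_rng(42)
pts = rng.integers(0, 16, 2000)
traces = simulate_traces(pts, key=0xB, rng=rng)
print(cpa_recover_key(pts, traces))  # 11 (0xB)
```

The same structure scales to the accelerator setting: swap the S-box prediction for a model of the accelerator's intermediate values and the simulated noise for SystemC- or netlist-level power estimates.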
How Insurers Benefit from the Housing Rehabilitation Efforts of NeighborWorks Organizations
Insurance companies have a vested interest in communities and homes that are safe and secure. Through their successful but underutilized housing rehabilitation expertise, NeighborWorks organizations seek to improve the quality of older, unsafe and/or vacant and abandoned properties in the communities they serve.
Low-power CMOS rectifier and Chien search design for RFID tags
Automotive sensors implemented in radio frequency identification (RFID) tags can correct data errors by using a BCH (Bose-Chaudhuri-Hocquenghem) decoder, for which the Chien search is a computation-intensive key step. Existing low-power approaches suffer drastically degraded performance for multiple-bit-correcting codes. This thesis presents a novel approach of using register-transfer-level (RTL) power management in the search process, leading to significant power savings for BCH codes with higher correction capability. An example for the (255, 187, 9) BCH code has been implemented in 0.18 μm CMOS technology.
We also consider ways of conserving power for the sole power harvester on a passive tag – the rectifier. With ST 90 nm CMOS technology, a three-stage differential-drive CMOS rectifier is designed using a new transistor scaling method and a piece-wise linear matching technique. For the standard 915 MHz band, simulation indicates a high power conversion efficiency (PCE) of 74% and a significantly increased output power of 30.3 μW at 10 meters.
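The Chien search named above exhaustively evaluates the error-locator polynomial at successive field elements; a zero result marks an error position. The sketch below is a toy illustration, working over GF(2^4) rather than the GF(2^8) field of the (255, 187, 9) code, with an error-locator polynomial constructed to have known roots.

```python
# Minimal GF(2^4) arithmetic, primitive polynomial x^4 + x + 1 (0b10011).
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0b10011
for i in range(15, 30):          # duplicate so exponents need no reduction
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def chien_search(locator, n=15):
    """Evaluate the error-locator polynomial at alpha^(-i) for i = 0..n-1;
    a zero value marks an error at codeword position i."""
    positions = []
    for i in range(n):
        xi = EXP[(-i) % 15]       # alpha^{-i}
        acc, xp = 0, 1
        for coeff in locator:     # locator = [c0, c1, c2, ...], lowest first
            acc ^= gf_mul(coeff, xp)
            xp = gf_mul(xp, xi)
        if acc == 0:
            positions.append(i)
    return positions

# Locator with known roots: Lambda(x) = (1 + alpha^3 x)(1 + alpha^7 x),
# so errors sit at positions 3 and 7.
a3, a7 = EXP[3], EXP[7]
locator = [1, a3 ^ a7, gf_mul(a3, a7)]
print(chien_search(locator))  # [3, 7]
```

In hardware the per-position evaluation is done incrementally (each coefficient register is multiplied by a constant power of alpha every cycle), which is exactly the repetitive datapath the thesis's RTL power management targets.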