Performance sensor for CMOS memory cells
We live in a time when almost everything contains a small electronic component, and that component in turn needs a memory to store its instructions. Among the various types of memories, Complementary Metal Oxide Semiconductor (CMOS) memories are the most widely used in integrated circuits and, as technology scales down to ever smaller feature sizes, performance and reliability problems become a constant concern. Effects such as BTI (Bias Temperature Instability), TDDB (Time Dependent Dielectric Breakdown), HCI (Hot Carrier Injection) and EM (Electromigration) progressively degrade the physical parameters of the field-effect transistors (MOSFETs) over time, changing their electrical properties. Associated with BTI we have the PBTI (Positive BTI) effect, which mostly affects NMOS transistors, and the NBTI (Negative BTI) effect, which mostly affects PMOS transistors. While for technology nodes down to 32 nanometres the NBTI effect is dominant, for smaller nodes the two effects are equally important. In addition, other variations in performance can compromise the correct operation of circuits, such as process (P), voltage (V) and temperature (T) variations, or, considering all these variations together in a generic way, PVTA (Process, Voltage, Temperature and Aging).
Random Access Memory (RAM) cells, in particular Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM), have precise read and write times and are therefore highly exposed to the aging of their components. As a consequence, their performance decreases, resulting in slower transitions, which in turn cause slower read and write operations and may lead to errors in those operations. On top of these phenomena, the Static Noise Margin (SNM) also decreases, putting the reliability of the memory at risk.
The aging of CMOS memories therefore translates into the occurrence of memory errors over time, which is undesirable, especially in critical systems, where a memory error or failure can endanger systems of high importance (for example, in safety systems an error may trigger a set of unwanted actions).
Some solutions for monitoring memory errors have been presented before in the literature, such as the On-Chip Aging Sensor (OCAS), an aging sensor embedded in the circuit that detects NBTI-induced aging in an SRAM. However, this sensor has some limitations: it applies only to a set of SRAM cells connected to a bit line, it cannot be applied individually to other memory cells such as a DRAM, and it does not cover the PBTI effect. Another previously presented solution is the Aging Sensor for CMOS Memory Cells, which shows some evolution over the OCAS sensor. It still has limitations, however, such as being strongly dependent on synchronization with the memory and not allowing any calibration of the system during operation.
The work presented in this dissertation solves many of the problems of previous works. It presents a performance sensor for memories capable of recognizing when the memory may be on the verge of failing, due to factors that affect the memory's performance in write and read operations. In other words, it signals failures predictively.
This sensor is divided into three main parts, described next. The Transition Detector works as a "converter" of the transitions on the memory bit line for the sensor, creating pulses whose duration is proportional to the duration of the bit line transition: a fast transition results in short pulses and a slow transition results in long pulses. This part of the circuit has two configurations for the case where it is applied to an SRAM, one for SRAM memories initialized at VDD and a second for SRAM memories initialized at VDD/2. A third configuration is also presented for the case where the detector is applied to a DRAM. The transition detector is based on a set of unbalanced inverters (that is, with different drive strengths between the N and P transistors in the inverter), creating N-type inverters (with the N transistor more conductive than the P) and P-type inverters (with the P transistor more conductive than the N), which respond differently to 1-to-0 and 0-to-1 transitions. These differences are crucial to create the final pulse that enters the Pulse Detector. This second block of the sensor is responsible for charging a capacitor with a voltage proportional to the time the bit line took to transition. It is here that a new and important feature appears, when compared with existing solutions: the ability to calibrate the sensor. To that end, a set of transistors is used to charge the capacitor during the pulse generated by the transition detector; these transistors allow the capacitor's charging resistance to be increased or decreased, so the capacitor ends up with more or less voltage (the voltage proportional to the bit line transition time) to be used in the following comparison. The third main block of this sensor is, in short, a comparator, which compares the voltage stored in the capacitor with a reference voltage available in the sensor and defined at design time. This comparator identifies which of these two voltages is higher (the capacitor voltage, proportional to the bit line transition time, or the reference voltage) and "fires" it to VDD, while the lower voltage is driven to VSS. In this way the sensor signals whether the transition under evaluation should be considered an error or not.
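The decision chain of the three blocks can be sketched in software. In this minimal Python model (the RC values, supply voltage and reference threshold are illustrative assumptions, not values from the dissertation), the bit line transition time charges a capacitor and the comparator flags slow transitions:

```python
# Illustrative model of the sensor's decision chain (all component values
# are hypothetical): a bit line transition of a given duration charges a
# capacitor, and a comparator checks the result against a reference.
import math

def capacitor_voltage(pulse_s: float, vdd: float = 1.0,
                      r_ohm: float = 50e3, c_f: float = 1e-12) -> float:
    """Voltage reached by an RC charge during a pulse of length pulse_s."""
    return vdd * (1.0 - math.exp(-pulse_s / (r_ohm * c_f)))

def predictive_error(transition_s: float, v_ref: float = 0.6) -> bool:
    """Flag the transition as a predictive error when the capacitor
    voltage (proportional to transition time) exceeds the reference."""
    return capacitor_voltage(transition_s) > v_ref

fast, slow = 10e-9, 200e-9        # a fast and a slow bit line transition
print(predictive_error(fast))     # fast transition: output not activated
print(predictive_error(slow))     # slow transition: predictive error
```

A fast transition yields a short pulse, a low capacitor voltage and no error flag; a slow one charges the capacitor past the reference and activates the output.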
To control the whole process, the sensor relies at its core on a controller (a finite state machine with three states). The first state of the controller is Reset, which puts all the circuit nodes at the voltages required to start operation. The second state is Sample, which waits for a bit line transition to be validated by the sensor, moving it to the third state, Compare, where the sensor's comparator is activated and the result of the comparison is driven to the output. Thus, if a transition on the bit line is detected as too slow, which is a sign of error, this is signalled externally by activating the output signal. If the sensor does not detect any error in the transitions, the output signal is not activated.
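The three-state control flow described above can be sketched as a small state machine (a minimal Python sketch; the signal names and the list-based interface are assumptions for illustration):

```python
# Minimal sketch of the three-state controller (Reset -> Sample -> Compare).
# Names and return values are illustrative, not the dissertation's design.
RESET, SAMPLE, COMPARE = "Reset", "Sample", "Compare"

def controller(transitions, is_too_slow):
    """Run the FSM over a list of observed bit line transition times.
    is_too_slow(t) stands in for the comparator's verdict."""
    state, outputs = RESET, []
    for t in transitions:
        if state == RESET:
            state = SAMPLE                 # nodes initialized, start waiting
        if state == SAMPLE:
            state = COMPARE                # a transition arrived, validate it
        if state == COMPARE:
            outputs.append(is_too_slow(t)) # drive comparison result outward
            state = RESET                  # re-arm for the next transition
    return outputs

flags = controller([10e-9, 200e-9], lambda t: t > 100e-9)
print(flags)  # [False, True]: only the slow transition raises the output
```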
The sensor can operate in on-line mode, meaning that the memory circuit does not need to be disconnected from its normal operation in order to be tested. Furthermore, it can be used internally in the memory, as a local sensor (monitoring the real memory cells), or externally, as a global sensor, if it is set to monitor a dummy memory cell.

Within the several types of memories, the Complementary Metal Oxide
Semiconductor (CMOS) memories are the most widely used in integrated circuits and, as technology advances and scales become increasingly smaller, performance and reliability become a constant problem. Effects such as BTI (Bias Temperature Instability), both positive (PBTI) and negative (NBTI), TDDB (Time Dependent Dielectric Breakdown), HCI (Hot Carrier Injection), EM (Electromigration), etc., are aging effects that contribute to a cumulative degradation of the transistors. Moreover, other parametric variations may also jeopardize the proper functioning of circuits and reduce their performance, such as process variations (P), power-supply voltage variations (V) and temperature variations (T), or, considering all these variations together in a generic way, PVTA (Process, Voltage, Temperature and Aging).
The sensor proposed in this work aims to signal these problems so that the user knows when the memory operation may be compromised. The sensor is made up of three important parts, the Transition Detector, the Pulse Detector and the Comparator, creating a sensor that converts the bit line transition created in a memory operation (read or write) into a pulse and then a voltage, which can be compared with a reference voltage available in the sensor. If the reference voltage is higher than the voltage proportional to the bit line transition time, the sensor output is not activated; but if the bit line transition time is long enough to generate a voltage higher than the reference voltage, the sensor output signals a predictive error, denoting that the memory performance is in a critical state that may lead to an error if corrective measures are not taken.
One important feature of this sensor topology is that it can be calibrated during operation, by controlling the sensor's sensitivity to the bit line transition. Another important feature is that it can be applied locally, to monitor the online operation of the memory, or globally, by monitoring a dummy memory in pre-defined conditions. Moreover, it can be applied to SRAM or DRAM, being the first online sensor available for DRAM memories.
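As a rough illustration of the calibration idea (the component values are invented for the sketch), selecting a different charging resistance changes the voltage the capacitor reaches for the same pulse, and hence the sensor's sensitivity:

```python
# Hypothetical illustration of calibration: the same pulse charges the
# capacitor to different voltages depending on the selected resistance.
import math

def charge(pulse_s, r_ohm, c_f=1e-12, vdd=1.0):
    """RC charging voltage reached during a pulse of length pulse_s."""
    return vdd * (1.0 - math.exp(-pulse_s / (r_ohm * c_f)))

pulse = 50e-9
for r in (25e3, 50e3, 100e3):   # calibration selects one of these paths
    print(f"R={r:>8.0f} ohm -> Vcap={charge(pulse, r):.3f} V")
# A smaller resistance charges the capacitor faster, so the same transition
# produces a higher voltage and trips the reference sooner (higher
# sensitivity); a larger resistance makes the sensor more tolerant.
```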
Low-toxic chemical solution deposition methods for the preparation of multi-functional (Pb1-xCax)TiO3 thin films
Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Facultad de Ciencias, Departamento de Química Inorgánica. Date of defence: 18-09-200
Physical Unclonability Framework for the Internet of Things
Ph.D. Thesis
The rise of the Internet of Things (IoT) creates a tendency to construct unified architectures
with a great number of edge nodes and inherent security risks due to centralisation.
At the same time, security and privacy defenders advocate for decentralised solutions
which divide the control and the responsibility among the entirety of the network nodes.
However, spreading secrets among several parties also expands the attack surface.
This conflict is in part due to the difficulty in differentiating between instances of the
same hardware, which leads to treating physically distinct devices as identical. Harnessing
the uniqueness of each connected device and injecting it into security protocols can provide
solutions to several common issues of the IoT. Secrets can be generated directly from this
uniqueness without the need to manually embed them into devices, reducing both the risk
of exposure and the cost of managing great numbers of devices.
Uniqueness can then lead to the primitive of unclonability. Unclonability refers to
ensuring the difficulty of producing an exact duplicate of an entity via observing and
measuring the entity’s features and behaviour. Unclonability has been realised on a physical
level via the use of Physical Unclonable Functions (PUFs). PUFs are constructions
that extract the inherent unclonable features of objects and compound them into a usable
form, often that of binary data. PUFs are also exceptionally useful in IoT applications
since they are low-cost, easy to integrate into existing designs, and have the potential to
replace expensive cryptographic operations. Thus, a great number of solutions have been
developed to integrate PUFs in various security scenarios. However, methods to expand
unclonability into a complete security framework have not been thoroughly studied.
In this work, the foundations are set for the development of such a framework through
the formulation of an unclonability stack, in the paradigm of the OSI reference model. The
stack comprises layers propagating the primitive from the unclonable PUF ICs, to devices,
network links and eventually unclonable systems. Those layers are introduced, and work
towards the design of protocols and methods for several of the layers is presented.
A collection of protocols based on one or more unclonable tokens or authority devices
is proposed, to enable the secure introduction of network nodes into groups or neighbourhoods.
The role of the authority devices is that of a consolidated, observable root of
ownership, whose physical state can be verified. After their introduction, nodes are able
to identify and interact with their peers, exchange keys and form relationships, without
the need of continued interaction with the authority device.
Building on this introduction scheme, methods for establishing and maintaining unclonable
links between pairs of nodes are introduced. These pairwise links are essential for
the construction of relationships among multiple network nodes, in a variety of topologies.
Those topologies and the resulting relationships are formulated and discussed.
While the framework does not depend on specific PUF hardware, SRAM PUFs are
chosen as a case study since they are commonly used and based on components that
are already present in the majority of IoT devices. In the context of SRAM PUFs and
with a view to the proposed framework, practical issues affecting the adoption of PUFs in
security protocols are discussed. Methods of improving the capabilities of SRAM PUFs
are also proposed, based on experimental data.
School of Engineering, Newcastle University
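The SRAM PUF idea discussed above can be sketched roughly: the power-up values of SRAM cells form a device-unique fingerprint, and a device is recognised when a fresh reading lies within a noise threshold of the enrolled one. The bit count, noise rate and threshold below are illustrative assumptions, not values from the thesis:

```python
# Rough sketch of SRAM PUF enrollment and matching (illustrative parameters).
import random

def sram_powerup(seed, noise=0.05, nbits=256):
    """Simulate a power-up read: a device-unique pattern plus bit noise."""
    rng = random.Random(seed)                 # seed stands in for the device
    bits = [rng.randint(0, 1) for _ in range(nbits)]
    flip = random.Random()                    # fresh randomness models noise
    return [b ^ (flip.random() < noise) for b in bits]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

enrolled = sram_powerup(seed=42)       # stored at enrollment
fresh_same = sram_powerup(seed=42)     # same device, noisy re-read
fresh_other = sram_powerup(seed=7)     # physically different device
threshold = 0.25 * len(enrolled)
print(hamming(enrolled, fresh_same) < threshold)    # True: same device
print(hamming(enrolled, fresh_other) < threshold)   # False: different device
```

In practice a fuzzy extractor, rather than a raw Hamming-distance check, would turn the noisy response into a stable key.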
Notes on pre-Nightingale nursing: what it was and what it was not
Tanya Langtree studied the evolution of nursing praxis between the sixteenth and mid-nineteenth centuries. Tanya found pre-professionalised nursing practice was scientifically informed and aimed to restore health, promote comfort and prevent complications. These findings disrupt assumptions about early nursing and encourage the profession to reframe its understanding of the past.
Error Detecting Refreshment for Embedded DRAMs
This paper presents a new technique for on-line consistency checking of embedded DRAMs. The basic idea is to use the periodic refresh operation to concurrently compute a test characteristic of the memory contents and compare it to a precomputed reference characteristic. Experiments show that the proposed technique significantly reduces the time between the occurrence of an error and its detection (error detection latency). It also achieves a very high error coverage at low hardware cost. It therefore perfectly complements standard on-line checking approaches relying on error detecting codes, where the detection of certain types of errors is guaranteed, but only during READ operations accessing the erroneous data.
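The refresh-time checking idea can be illustrated with a short sketch. The characteristic here is a simple XOR signature chosen for illustration; the paper's actual characteristic computation may differ:

```python
# Sketch of error-detecting refreshment: while "refreshing" each word,
# fold it into a running signature and compare against a precomputed
# reference. The XOR signature is an illustrative choice; it can miss
# errors that cancel out, which is why real schemes pick stronger codes.
from functools import reduce

def signature(words):
    return reduce(lambda acc, w: acc ^ w, words, 0)

memory = [0x3A, 0x7F, 0x01, 0xC8]
reference = signature(memory)    # precomputed while contents are known-good

def refresh_and_check(mem, ref):
    """One refresh sweep: rewrite each cell and accumulate the signature."""
    acc = 0
    for i, w in enumerate(mem):
        mem[i] = w               # the refresh write-back itself
        acc ^= w                 # concurrent characteristic computation
    return acc == ref            # mismatch => error detected this sweep

print(refresh_and_check(memory, reference))   # True: contents intact
memory[2] ^= 0x10                             # inject a single bit flip
print(refresh_and_check(memory, reference))   # False: caught at next refresh
```

Because the check piggybacks on a sweep the DRAM must perform anyway, the error is detected at the next refresh rather than at the next READ of the corrupted word.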
Alcohol treatment policy 1950-1990: from alcohol treatment to alcohol problems management.
The thesis draws on historical and social policy perspectives to examine the factors influencing development and change in alcohol treatment policy between 1950 and 1990. The study uses data from primary and secondary documentation and from taped interviews.
Three themes are highlighted as particularly relevant to an examination of policy trends. The first of these is the emergence and evolution of a `policy community'. Spearheaded by psychiatrists in the 1960s, the `policy community' broadened to include other professional groups and the voluntary sector by the 1990s. The second theme concerns the role of research in influencing the nature and direction of treatment policy. The study indicates increasing use of research as the rationale for policy and illustrates the move towards a `contractor' relationship between research workers and policy makers. The final theme deals with the influence on policy of ideological frames and changing conceptualisations of the alcohol problem. Two major shifts were important for treatment: the re-discovery of the disease concept of alcoholism in the 1950s and the emergence of a new public health model of alcohol problems in the 1970s.
Within these broad themes, the study includes an examination of tensions - between different professional perspectives, between government departments with differing responsibilities, between different ideologies - and of moves to secure consensus in the formulation and implementation of treatment policy.
The final chapter addresses shifts in thinking from the re-emergence of a `disease' model of alcoholism in the 1950s, to a `consumptionist' (population-based) model in the 1970s, towards a `harm reduction' approach to alcohol problems management in the 1990s. The thesis concludes that over the past forty years competing paradigms of the alcohol problem have emerged and gained policy salience within particular historical-social contexts, in the search for policy consensus to manage the problematic aspects of alcohol consumption.
Op. 48 : composition as re-creation
What is it to write 'new' music? Music is not written in a vacuum, and Op. 48 investigates how one small Bach piece's (re)sourcefulness can result in a variety of musics. The collection (of 48 pieces) explores not only scientific areas of musicology and analysis, but subjective and intuitive areas of performance, resonances with other art forms and more fantastical elements such as virtual history and humour. More challengingly, the amount of music (some 2 hours) presents an issue over the language used in discourse, for the linearity of words is partial and even misleading. Op. 48 is a criticism of what Bach notated and an economic way of talking about how music talks. Drawing on poetic and philosophic insights, 'Bach' is played with re-creatively: the precedents and parallel developments of the procedures I employ form a further stage of possible development. Rather than repeating empty encomia in this Bach Festschrift, Op. 48 honours Bach's invention by creating further music. Op. 48 is arguably not subservient to the Bach, and asks when (if ever) pieces grow up and become independent organisms. For while Op. 48 exhibits a wide-ranging diversity, it does not (and perhaps cannot) claim to be exhaustive, since the music seeds further pieces, which questions if it is viable to talk of an art work as discrete at all.
ULTRARAM™:Design, Modelling, Fabrication and Testing of Ultra-low-power III-V Memory Devices and Arrays
In this thesis, a novel memory based on III-V compound semiconductors is studied, both theoretically and experimentally, with the aim of developing a technology with superior performance capabilities to established and emerging rival memories. This technology is known as ULTRARAM™. The memory concept is based on quantum resonant tunnelling through InAs/AlSb heterostructures, which are engineered to only allow electron tunnelling at precise energy alignment(s) when a bias is applied. The memory device features a floating gate (FG) as the storage medium, where electrons that tunnel through the InAs/AlSb heterostructure are confined in the FG to define the memory logic (0 or 1). The large conduction band offset of the InAs/AlSb heterojunction (2.1 eV) keeps electrons in the FG indefinitely, constituting a non-volatile logic state. Electrons can be removed from the FG via a similar resonant tunnelling process by reversing the voltage polarity. This concept shares similarities with flash memory, however the resonant tunnelling mechanism provides ultra-low-power, low-voltage, high-endurance and high-speed switching capability. The quantum tunnelling junction is studied in detail using the non-equilibrium Green’s function (NEGF) method. Then, Poisson-Schrödinger simulations are used to design a high-contrast readout procedure for the memory using the unusual type-III band-offset of the InAs/GaSb heterojunction. With the theoretical groundwork for the technology laid out, the memory performance is modelled and a high-density ULTRARAM™ memory architecture is proposed for random-access memory applications. Later, NEGF calculations are used for a detailed study of the process tolerances in the tunnelling region required for ULTRARAM™ large-scale wafer manufacture. Using interfacial misfit array growth techniques, III-V layers (InAs, AlSb and GaSb) for ULTRARAM™ were successfully implemented on both GaAs and Si substrates. 
Single devices and 2×2 arrays were then fabricated using a top-down processing approach. The memories demonstrated outstanding performance on both substrate materials at 10, 20 and 50 µm gate lengths at room temperature. Non-volatile switching was obtained with ≤ 2.5 V pulses, corresponding to a switching energy per unit area that is lower than DRAM and flash by factors of 100 and 1000 respectively. Memory logic was retained for over 24 hours whilst undergoing over 10^6 readout operations. Analysis of the retention data suggests a storage time exceeding 1000 years. Devices showed promising durability results, enduring over 10^7 cycles without degradation, at least two orders of magnitude improvement over flash memory. Switching of the cell's logic was possible at 500 µs pulse durations for a 20 µm gate length, suggesting a sub-ns switching time if scaled to modern-day feature sizes. The proposed half-voltage architecture is shown to operate in principle, where the memory state is preserved during a disturbance test of > 10^5 half-cycles. With regard to the device physics, these findings point towards ULTRARAM™ as a universal memory candidate. The path towards future commercial viability relies on process development for aggressive device and array-size scaling and implementation on larger Si wafers.
The fisheries and fishery industries of the United States, Part 6
Fishery Industry of the U.S. 18 July. SMD 124 (pts. 1-7), 47-1, v6-11, 3569p. [1998-2003] Indian porpoise, sea-otter, and whale hunting; Indian shell middens; use of mussels, shell-fish, clams, and oysters; sealing by Makah Indians