2,064 research outputs found
Sex and the city: A postmodern reading
Sex and the City was a television show that aired on Home Box Office from 1998 to 2004. The show succeeded because feminists wanted a modern woman's drama rich in episodes about consumption, women's sexuality, financial independence, fashion, and contemporary relationship dynamics, and the characters captured and perpetuated just that. Its modern take on the ideologies that drive women's perceptions of personal fulfillment, body image, consumerism, social behavior and values, and romantic relationship dynamics made the show the phenomenon it became.
Whole genome metagenomic analysis of the gut microbiome of differently fed infants identifies differences in microbial composition and functional genes, including an absent CRISPR/Cas9 gene in the formula-fed cohort
Background: Advancements in sequencing capabilities have enhanced the study of the human microbiome. There are limited studies focused on the gastro-intestinal (gut) microbiome of infants, particularly on the impact of diet in breast-fed (BF) versus formula-fed (FF) infants. It is unclear what effect, if any, early feeding has on the short-term or long-term composition and function of the gut microbiome.
Results: Using a shotgun metagenomics approach, differences in the gut microbiome between BF (n = 10) and FF (n = 5) infants were detected. A Jaccard-distance principal coordinate analysis was able to cluster BF versus FF infants based on the presence or absence of species identified in their gut microbiomes. Thirty-two genera were identified as statistically different between the gut microbiomes sequenced from BF and FF infants. Furthermore, the computational workflow identified 371 bacterial genes that differed statistically in abundance between the BF and FF cohorts. Only seven genes were lower in abundance (or absent) in the FF cohort compared to the BF cohort, including CRISPR/Cas9, whereas the remaining candidates, including autotransporter adhesins, were higher in abundance in the FF cohort compared to the BF cohort.
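The ordination step described above can be sketched as follows. The presence/absence matrix is randomly generated here, and the PCoA is a plain classical-scaling routine built on numpy/scipy only; it is not the study's actual pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Hypothetical presence/absence matrix: rows = infants, cols = species
# (1 = species detected in that infant's gut metagenome, 0 = absent).
rng = np.random.default_rng(0)
profiles = rng.integers(0, 2, size=(15, 40))  # e.g. 10 BF + 5 FF infants, 40 species

# Pairwise Jaccard distances on the presence/absence profiles.
D = squareform(pdist(profiles.astype(bool), metric="jaccard"))

def pcoa(D, k=2):
    """Classical principal coordinate analysis (metric MDS) on a distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]             # largest eigenvalues first
    vals, vecs = vals[order][:k], vecs[:, order][:, :k]
    return vecs * np.sqrt(np.maximum(vals, 0)) # sample coordinates

coords = pcoa(D)      # 2-D ordination; BF vs FF samples may separate along these axes
print(coords.shape)   # (15, 2)
```

Plotting the two coordinate columns and coloring points by feeding group is then enough to see whether the cohorts cluster apart.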
Conclusions: These studies demonstrate that FF infants have, at an early age, a significantly different gut microbiome, with potential implications for the function of the fecal microbiota. Interactions between the fecal microbiota and the host, hinted at here, have been linked to numerous diseases. Determining whether these non-abundant or more abundant genes have biological consequences related to infant feeding may aid in understanding the adult gut microbiome and the pathogenesis of obesity.
Magnetic resonance imaging tumor regression shrinkage patterns after neoadjuvant chemotherapy in patients with locally advanced breast cancer: correlation with tumor biological subtypes and pathological response after therapy
The objective of this study is to analyze the magnetic resonance imaging shrinkage pattern of tumor regression after neoadjuvant chemotherapy and to evaluate its relationship with biological subtypes and pathological response. We reviewed the magnetic resonance imaging studies of 51 patients with single mass-enhancing lesions (performed at time 0 and at the II and last cycles of neoadjuvant chemotherapy). Tumors were classified as Luminal A, Luminal B, HER2+, and Triple Negative based on biological and immunohistochemical analysis after core needle biopsy. We classified the shrinkage pattern, based on tumor regression morphology on magnetic resonance imaging at the II cycle, as concentric, nodular, or mixed. We assigned a numeric score (0: none; 1: low; 2: medium; 3: high) to the enhancement intensity decrease. Pathological response on the surgical specimen was classified as complete (grade 5), partial (grades 4-3), or non-response (grades 1-2) according to the Miller and Payne system. Fisher's test was used to relate shrinkage pattern to biological subtypes and final pathological response. Seventeen patients achieved complete response, 25 partial response, and 9 non-response. A total of 13 lesions showed a nodular pattern, 20 concentric, and 18 mixed. We found an association between the concentric pattern and HER2+ lesions (p < 0.001) and between the mixed pattern and Luminal A lesions (p < 0.001). We observed a statistically significant correlation between the concentric pattern and complete response (p < 0.001) and between the mixed pattern and non-response (p = 0.005). An enhancement intensity decrease score of 3 was associated with complete response (p < 0.001). Shrinkage pattern and enhancement intensity decrease may serve as early response indicators after neoadjuvant chemotherapy. Shrinkage pattern correlates with tumor biological subtypes.
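An association test of the kind reported above can be illustrated with SciPy's Fisher's exact test on a 2x2 table. The counts below are invented for illustration and are not the study's data:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (illustrative counts only):
# rows: concentric shrinkage pattern yes/no
# cols: pathological complete response yes/no
table = [[14, 6],
         [3, 28]]

odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(odds_ratio > 1, p < 0.05)  # strong association in this toy table
```

With multi-level factors such as four biological subtypes against three patterns, the same idea extends to a Fisher test on an r x c table (e.g. via a Monte Carlo version), since `fisher_exact` itself handles only 2x2 tables.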
Realizability of embedded controllers: from hybrid models to correct implementations
An embedded controller is a reactive device (e.g., a suitable combination of hardware and software components) that is embedded in a dynamical environment and has to react to environment changes in real time. Embedded controllers are
widely adopted in many contexts of modern life, from automotive to avionics, from consumer electronics to medical equipment. Notably, the correctness of such controllers is crucial. When designing and verifying an embedded controller, the need often arises to model both the controller and its surrounding environment.
The nature of the obtained system is hybrid because of the inclusion of both discrete-event (i.e., controller) and continuous-time (i.e., environment) processes, whose dynamics cannot be characterized faithfully using either a discrete or a continuous model only. Systems of this kind are named cyber-physical systems (CPS) or hybrid systems.
Different types of models may be used to describe hybrid systems and they focus on different objectives: detailed models are excellent for simulation but not suitable for verification, high-level models are excellent for verification but not convenient for refinement, and so forth. Among all these models, hybrid automata (HA) [8, 77] have been proposed as a powerful formalism for the design, simulation and verification of hybrid systems. In particular, a hybrid automaton represents discrete-event processes by means of finite state machines (FSM), whereas
continuous-time processes are represented by using real-valued variables whose dynamics are specified by ordinary differential equations (ODEs) or their generalizations (e.g., differential inclusions).
Unfortunately, when the high-level model of the hybrid system is a hybrid automaton, several difficulties must be overcome in order to automate the refinement phase in the design flow, because of the classical semantics of hybrid automata. In fact, hybrid automata can be considered perfect and instantaneous devices. They adopt a notion of time and evaluation of continuous variables based on dense sets of values (usually R, i.e., the reals). Thus, they can sample the state (i.e., the value assignments of the variables) of the hybrid system at any instant in such a dense set (e.g., R≥0). Further, they are capable of instantaneously evaluating guard constraints or reacting to incoming events by performing changes in the operating mode of the hybrid system without any delay. While these aspects are convenient at the modeling level, any model of an embedded controller that relies for its correctness on such precision and instantaneity cannot be implemented by any hardware/software device, no matter how fast it is. In other words, the controller is unrealizable, i.e., unimplementable.
This thesis proposes a complete methodology and a framework that allow one to derive, from hybrid automata proved correct in the hybrid domain, correct realizable models of embedded controllers and the related discrete implementations. In a realizable model, the controller samples the state of the environment at periodic
discrete time instants which, typically, are fixed by the clock frequency of the processor implementing the controller. The state of the environment consists of the current values of the relevant variables as observed by the sensors. These values are digitized with finite precision and reported to the controller that may decide
to switch the operating mode of the environment. In such a case, the controller generates suitable output signals that, once transmitted to the actuators, will effect the desired change in the operating mode. It is worth noting that the sensors will report the current values of the variables and the actuators will effect changes in the rates of evolution of the variables with bounded delays.
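The sampled, quantized, delayed control loop described above can be sketched with a toy thermostat-like example. All names, constants, and dynamics below are hypothetical, chosen only to contrast a realizable controller with an idealized hybrid automaton:

```python
# Toy sketch (all values hypothetical): a thermostat-like controller that,
# unlike an idealized hybrid automaton, sees the plant only at discrete clock
# ticks, through a finite-precision sensor, and with a bounded actuation delay.
SAMPLE_PERIOD = 0.1       # seconds between controller invocations (clock)
SENSOR_RESOLUTION = 0.5   # quantization step of the digitized reading
ACTUATION_DELAY = 1       # ticks before a mode switch takes effect

def plant(x, heating, dt):
    """Continuous-time environment, integrated here with explicit Euler."""
    rate = 2.0 if heating else -1.0
    return x + rate * dt

def controller(reading, heating):
    """Discrete-event controller: switches mode based on a quantized sample."""
    if reading <= 18.0:
        return True       # switch heating on
    if reading >= 22.0:
        return False      # switch heating off
    return heating        # keep current mode

x, heating, pending = 17.0, False, []
for tick in range(100):
    reading = round(x / SENSOR_RESOLUTION) * SENSOR_RESOLUTION  # finite precision
    pending.append(controller(reading, heating))
    if len(pending) > ACTUATION_DELAY:                          # bounded delay
        heating = pending.pop(0)
    x = plant(x, heating, SAMPLE_PERIOD)

print(16.0 <= x <= 24.0)  # stays near the band despite sampling/quantization
```

The interesting point is that the overshoot past the 18.0/22.0 thresholds is caused precisely by the sampling period, the sensor quantization, and the actuation delay: the effects a realizable model must account for and an idealized hybrid automaton ignores.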
Control of the chemiluminescence spectrum with porous Bragg mirrors
Tunable, battery-free light emission is demonstrated in a solid-state device that is compatible with lab-on-a-chip technology and easily fabricated via solution-processing techniques. A porous one-dimensional (1D) photonic crystal (also called a Bragg stack or mirror) is infiltrated by chemiluminescent rubrene-based reagents. The Bragg mirror has been designed so that its photonic band gap overlaps the emission spectrum of rubrene. The chemiluminescence reaction occurs in the intrapores of the photonic crystal, and the emission spectrum of the dye is modulated according to the photonic band gap position. This is a compact, powerless emitting source that can be exploited in disposable photonic chips for sensing and point-of-care applications.
Comment: 8 pages, 3 figures
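The band-gap/emission overlap mentioned above follows from a standard quarter-wave design rule for Bragg stacks. The refractive indices and emission wavelength below are assumed, illustrative values, not the paper's measured ones:

```python
# Back-of-the-envelope sketch (values hypothetical): a quarter-wave Bragg stack
# has its photonic band gap centred at lambda_B = 2 * (n1*d1 + n2*d2), so the
# layer thicknesses can be chosen to overlap rubrene's yellow-orange emission.
n1, n2 = 1.23, 1.75             # assumed effective indices of the two porous layers
lambda_target = 560e-9          # approximate rubrene emission peak, metres

d1 = lambda_target / (4 * n1)   # quarter-wave thickness of layer 1
d2 = lambda_target / (4 * n2)   # quarter-wave thickness of layer 2

lambda_B = 2 * (n1 * d1 + n2 * d2)
print(round(lambda_B * 1e9))    # band-gap centre in nm (matches the target)
```

By construction each layer contributes an optical thickness of lambda/4, so the stopband lands exactly on the target wavelength; porosity matters because infiltration by the liquid reagents shifts the effective indices and hence the band-gap position.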
Test Generation Based on CLP
Functional ATPGs based on simulation are fast but, in general, unable to cover corner cases, and they cannot prove untestability. On the contrary, functional ATPGs exploiting formal methods, being exhaustive, cover corner cases, but they tend to suffer from the state-explosion problem when adopted for verifying large designs.
In this context, we have defined a functional ATPG
that relies on the joint use of pseudo-deterministic simulation
and Constraint Logic Programming (CLP), to
generate high-quality test sequences for solving complex
problems. Thus, the advantages of both simulation-based
and static-based verification techniques are preserved, while
their respective drawbacks are limited. In particular, CLP,
a form of constraint programming in which logic programming
is extended to include concepts from constraint satisfaction,
is well-suited to be jointly used with simulation. In
fact, information learned during design exploration by simulation
can be effectively exploited for guiding the search of
a CLP solver towards DUV areas not covered yet. CLP techniques are exploited in different phases of the test generation procedure.
The ATPG framework is composed of three functional
ATPG engines working on three different models of the
same DUV: the hardware description language (HDL)
model of the DUV, a set of concurrent EFSMs extracted
from the HDL description, and a set of logic constraints
modeling the EFSMs. The EFSM paradigm has been selected
since it allows a compact representation of the DUV
state space that limits the state explosion problem typical
of more traditional FSMs. The first engine is random-based, the second is transition-oriented, while the last is fault-oriented.
The test generation is guided by means of transition coverage and fault coverage. In particular, 100% transition
coverage is desired as a necessary condition for fault
detection, while the bit coverage functional fault model
is used to evaluate the effectiveness of the generated test
patterns by measuring the related fault coverage.
A random engine is first used to explore the DUV state
space by performing a simulation-based random walk. This
allows us to quickly fire easy-to-traverse (ETT) transitions
and, consequently, to quickly cover easy-to-detect (ETD)
faults. However, the majority of hard-to-traverse (HTT)
transitions remain, generally, uncovered.
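The random engine's behavior can be sketched as a random walk over a toy EFSM. The automaton, transition names, and firing probabilities below are invented for illustration:

```python
import random

# Toy sketch of the random engine (EFSM and probabilities hypothetical): a
# simulation-based random walk quickly fires easy-to-traverse transitions but
# rarely activates tightly guarded, hard-to-traverse ones.
EFSM = {
    "S0": [("t0", "S1", 0.9), ("t1", "S2", 0.1)],    # (name, target, firing prob.)
    "S1": [("t2", "S0", 0.95), ("t3", "S2", 0.05)],  # t3 ~ hard-to-traverse
    "S2": [("t4", "S0", 1.0)],
}

def random_walk(steps, seed=0):
    rng = random.Random(seed)
    state, covered = "S0", set()
    for _ in range(steps):
        name, target, p = rng.choice(EFSM[state])
        if rng.random() < p:          # guard satisfied by the random stimuli?
            covered.add(name)
            state = target
    return covered

covered = random_walk(1000)
print(sorted(covered))  # ETT transitions fall quickly; HTT ones may be missing
```

Transition coverage is then `len(covered)` over the total transition count; whatever the random walk leaves uncovered is handed to the deterministic transition-oriented engine.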
Thus, a transition-oriented engine is applied to
cover the remaining HTT transitions by exploiting a
learning/backjumping-based strategy.
The ATPG works on a special kind of EFSM, called SSEFSM, whose transitions present the most uniformly distributed probability of being activated, and which can be effectively integrated with CLP, since it allows the ATPG to invoke the constraint solver when moving between EFSM states.
A CLP-based strategy is adopted to deterministically generate test vectors that satisfy the guards of the EFSM transitions selected to be traversed. Given a transition of the SSEFSM, the solver is required to generate suitable values for the PIs that enable the SSEFSM to move across that transition.
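A minimal sketch of this guard-solving step, assuming a tiny finite PI domain and an invented guard; the thesis uses the ECLiPSe CLP solver, not this generate-and-test toy:

```python
from itertools import product

# Illustrative sketch (not the thesis's ECLiPSe model): find primary-input (PI)
# values that satisfy the guard of a selected EFSM transition by exhaustive
# search over small finite domains.
DOMAIN = range(0, 8)            # hypothetical 3-bit primary inputs

def guard(a, b, reset):
    """Hypothetical enabling function of a hard-to-traverse transition."""
    return reset == 0 and a + b == 9 and a > b

def solve(guard, n_inputs=3):
    """Tiny generate-and-test 'solver': first PI assignment enabling the guard."""
    for assignment in product(DOMAIN, repeat=n_inputs):
        if guard(*assignment):
            return assignment
    return None                  # guard unsatisfiable over the PI domains

print(solve(guard))  # (5, 4, 0): values the ATPG would drive on the PIs
```

A real CLP solver does the same job by constraint propagation rather than enumeration, which is what makes guards over wide registers tractable.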
Moreover, backjumping, also known as non-chronological backtracking, is a special kind of backtracking strategy that rolls back from an unsuccessful situation directly to the cause of the failure. Thus, the transition-oriented engine deterministically backjumps to the source of the failure when a transition, whose guard depends on previously set registers, cannot be traversed. Next, it modifies the EFSM configuration to satisfy the condition on the registers and successfully comes back to the target state to activate the transition.
The transition-oriented engine generally allows us to achieve 100% transition coverage. However, 100% transition coverage does not guarantee that all DUV corner cases are explored, so some hard-to-detect (HTD) faults can escape detection, preventing the achievement of 100% fault coverage.
Therefore, the CLP-based fault-oriented engine is finally
applied to focus on the remaining HTD faults.
The CLP solver is used to deterministically search for
sequences that propagate the HTD faults observed, but not
detected, by the random and the transition-oriented engines.
The fault-oriented engine needs a CLP-based representation
of the DUV, and some searching functions to generate
test sequences. The CLP-based representation is automatically
derived from the S2EFSM models according to the
defined rules, which follow the syntax of the ECLiPSe CLP
solver. This is not a trivial task, since modeling the evolution in time of an EFSM by using logic constraints is quite different from modeling the same behavior by means of a traditional hardware description language. First, the concept of time steps is introduced, which is required to model the SSEFSM evolution through time via CLP. Then, this study deals with the modeling of logical variables and constraints to represent the enabling functions and update functions of the SSEFSM.
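The time-step idea can be sketched as follows: the update function is applied once per step, and we search for an input sequence that drives a register to the value a guard needs. The update function, domains, and reset value are hypothetical, and real CLP solving replaces this brute-force search:

```python
from itertools import product

# Hypothetical sketch of time-step unrolling: the EFSM's update function is
# applied step by step, and we search for a PI sequence that drives a register
# into the value required by a guard (all names and domains are illustrative).
def step(reg, pi):
    """Update function: next register value as a function of state and input."""
    return (reg + pi) % 5

def unroll_search(target_reg, max_steps=4):
    """Find an input sequence reaching target_reg within max_steps time steps."""
    for k in range(1, max_steps + 1):           # unroll depth, shortest first
        for inputs in product(range(4), repeat=k):  # PI domain per step
            reg = 0                             # reset value of the register
            for pi in inputs:
                reg = step(reg, pi)
            if reg == target_reg:
                return inputs
    return None

print(unroll_search(3))  # shortest driving sequence found first: (3,)
```

In the CLP encoding, each time step gets its own copy of the register and input variables, with constraints tying step t to step t+1; the solver then finds the whole sequence at once instead of enumerating it.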
Formal tools that exhaustively search for a solution frequently
run out of resources when the state space to be analyzed
is too large. The same happens to the CLP solver when it is asked to find a propagation sequence on large sequential designs. Therefore, we have defined a set of strategies that allow us to prune the search space and to manage the complexity problem for the solver.