99,197 research outputs found
Intervention strategies for children and adolescents with disorders: from an intrapsychic to a transactional perspective
A large body of studies and clinical evidence documents the importance of infancy and early childhood influences on long-term developmental trajectories toward mental health or psychopathology (Sameroff, 2000, 2010). Without healthy, productive adults no culture could continue to be successful. This concern is the main motivation for society to support child development research. Although the academic interests of contemporary developmental researchers range widely across cognitive and social-emotional domains, the political justification for supporting such studies is that they will lead to the understanding and ultimate prevention of behavioural problems that are costly to society. With these motivations and support, there have been major advances in our understanding of the intellectual, emotional, and social behaviour of children, adolescents and adults.
This progress has forced conceptual reorientations from a unidirectional understanding of development (e.g., parents affect children and not vice versa) toward a bidirectional conceptualization of development. Children are now assumed to affect and even select their environments as much as their environments affect their behaviour. Indeed, key among many of the most influential developmental theories of the past several decades is the assumption that children have bidirectional, or reciprocal, relationships with their environments (Bandura, 1977; Bronfenbrenner, 1979).
To date, it is widely accepted that children's healthy development is shaped by complex transactional processes among a variety of risk and
protective factors, with cumulative risk factors increasing the prediction of emotional and behavioural problems (Anda et al., 2007; Rutter & Sroufe,
2000; Sameroff, 2000). Risk and protective factors include individual child characteristics such as genetic and constitutional propensities and
cognitive strengths and vulnerabilities; parent characteristics such as mental health, education level, sense of efficacy, and resourcefulness; family
factors such as quality of the parent-child relationship, emotional climate, and marital quality; community connectedness factors such as parental
social support, social resources, and children's peer relationships; and neighbourhood factors such as availability of resources, adequacy of housing,
and levels of crime and violence (Sameroff & Fiese, 2000). The predictive value of these factors across many studies led to the development
of transactional-bioecological models that attempt to conceptualize the relative contributions of proximal and distal risk and protective factors to
children's developmental outcomes (Bronfenbrenner & Morris, 2006). In 1975, Sameroff and Chandler proposed the transactional model.
This theoretical framework has become central to understanding the interplay between nature and nurture in explaining the development of positive
and negative outcomes for children. The transactional model is a model of qualitative change: Sameroff asserted that it concerned qualitative rather than incremental change, and that the underlying process was dialectical rather than mechanistic in nature.
The aim of this chapter is to explore this theoretical framework and its intervention strategies.
In the first part, the transactional model will be described, after a brief summary illustrating the transition from the intrapsychic to the transactional perspective. In the second part, intervention strategies for children and adolescents will be described. Research attention to environmental risk and protective factors has fostered a more comprehensive understanding of what is necessary to improve the cognitive and social-emotional welfare of children and adolescents.
Noise-Resilient Group Testing: Limitations and Constructions
We study combinatorial group testing schemes for learning $d$-sparse Boolean vectors using highly unreliable disjunctive measurements. We consider an
adversarial noise model that only limits the number of false observations, and
show that any noise-resilient scheme in this model can only approximately
reconstruct the sparse vector. On the positive side, we take this barrier to
our advantage and show that approximate reconstruction (within a satisfactory
degree of approximation) allows us to break the information theoretic lower
bound of $\tilde{\Omega}(d^2 \log n)$ that is known for exact reconstruction of $d$-sparse vectors of length $n$ via non-adaptive measurements, by a multiplicative factor of $\tilde{\Omega}(d)$.
Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with $m = O(d \log n)$ measurements, that allow efficient reconstruction of $d$-sparse vectors up to $O(d)$ false positives even in the presence of $\delta m$ false positives and $O(d/\log d)$ false negatives within the measurement outcomes, for any constant $\delta < 1$. We show that, information
theoretically, none of these parameters can be substantially improved without
dramatically affecting the others. Furthermore, we obtain several explicit
constructions, in particular one matching the randomized trade-off but using $m = O(d^{1+o(1)} \log n)$ measurements. We also obtain explicit constructions
that allow fast reconstruction in time $\mathrm{poly}(m)$, which would be sublinear in $n$ for sufficiently sparse vectors. The main tool used in our constructions is
the list-decoding view of randomness condensers and extractors.
Comment: Full version. A preliminary summary of this work appears (under the same title) in the proceedings of the 17th International Symposium on Fundamentals of Computation Theory (FCT 2009).
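To make the measurement model concrete, here is a minimal Python simulation of noisy non-adaptive group testing: a random Bernoulli pooling design, adversarial flips of a fraction of the disjunctive outcomes, and a threshold decoder that settles for approximate reconstruction. The design, the 5% flip rate, and the 0.85 threshold are illustrative assumptions, not the paper's condenser-based construction.

    import numpy as np

    rng = np.random.default_rng(0)

    n, d = 1000, 10                       # vector length and sparsity (assumed)
    m = 10 * d * int(np.log(n))           # O(d log n) pools

    A = rng.random((m, n)) < 1.0 / d      # each item joins each pool w.p. ~1/d
    x = np.zeros(n, dtype=bool)
    x[rng.choice(n, d, replace=False)] = True

    y = (A & x).any(axis=1)               # noiseless disjunctive (OR) outcomes
    y_noisy = y.copy()                    # adversary flips ~5% of outcomes
    y_noisy[rng.choice(m, m // 20, replace=False)] ^= True

    # Declare item j present if most pools containing it tested positive;
    # exact recovery is impossible under such noise, so a few false
    # positives are tolerated by design.
    pools = np.maximum(A.sum(axis=0), 1)
    score = (A & y_noisy[:, None]).sum(axis=0) / pools
    x_hat = score > 0.85

    print("false positives:", int((x_hat & ~x).sum()),
          "| false negatives:", int((~x_hat & x).sum()))

With these toy parameters the decoder should typically recover all $d$ items with at most a handful of spurious ones, mirroring the exact-versus-approximate trade-off described above.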
Automated Discharging Arguments for Density Problems in Grids
Discharging arguments demonstrate a connection between local structure and global averages, which makes them an effective tool for proving lower bounds on
the density of special sets in infinite grids. However, the minimum density of an identifying code in the hexagonal grid remains open, with an upper bound of $3/7$ and a lower bound of $5/12$. We present a new, experimental framework for producing discharging
arguments using an algorithm. This algorithm replaces the lengthy case analysis
of human-written discharging arguments with a linear program that produces the
best possible lower bound using the specified set of discharging rules. We use
this framework to present a lower bound of $23/55$ on
the density of an identifying code in the hexagonal grid, and also find several
sharp lower bounds for variations on identifying codes in the hexagonal,
square, and triangular grids.
Comment: This is an extended abstract, with 10 pages, 2 appendices, 5 tables, and 2 figures.
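One way to picture the framework (the notation here is assumed for illustration and is not taken from the paper): fix a finite set of candidate discharging rules, and let a linear program choose the amount $y_r$ moved by each rule $r$ so as to maximize the certified density bound $\alpha$:

    \[
    \begin{aligned}
    \text{maximize}\quad & \alpha \\
    \text{subject to}\quad & \mathrm{ch}_0(c) + \sum_{r} a_{c,r}\, y_r \ \ge\ \alpha
        \qquad \text{for every local configuration } c, \\
    & y_r \ \ge\ 0 \qquad \text{for every discharging rule } r,
    \end{aligned}
    \]

where $\mathrm{ch}_0(c)$ is the initial charge of a vertex in configuration $c$ (say, 1 on code vertices and 0 elsewhere) and $a_{c,r}$ is the net charge the configuration gains or loses per unit of rule $r$. Any feasible solution certifies that every vertex ends with charge at least $\alpha$, hence that the code has density at least $\alpha$; maximizing makes that certificate as strong as the chosen rule set allows.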
A precise asteroseismic age and radius for the evolved Sun-like star KIC 11026764
The primary science goal of the Kepler Mission is to provide a census of
exoplanets in the solar neighborhood, including the identification and
characterization of habitable Earth-like planets. The asteroseismic
capabilities of the mission are being used to determine precise radii and ages
for the target stars from their solar-like oscillations. Chaplin et al. (2010)
published observations of three bright G-type stars, which were monitored
during the first 33.5 days of science operations. One of these stars, the
subgiant KIC 11026764, exhibits a characteristic pattern of oscillation
frequencies suggesting that it has evolved significantly. We have derived
asteroseismic estimates of the properties of KIC 11026764 from Kepler
photometry combined with ground-based spectroscopic data. We present the
results of detailed modeling for this star, employing a variety of independent
codes and analyses that attempt to match the asteroseismic and spectroscopic
constraints simultaneously. We determine both the radius and the age of KIC
11026764 with a precision near 1%, and an accuracy near 2% for the radius and
15% for the age. Continued observations of this star promise to reveal
additional oscillation frequencies that will further improve the determination
of its fundamental properties.
Comment: 16 pages, 6 figures, 4 tables. ApJ, in press.
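For a sense of how such oscillation observables pin down a radius (the paper relies on detailed model grids rather than this shortcut), the standard asteroseismic scaling relations give a quick estimate; the sketch below uses assumed round-number inputs, not the measured values for KIC 11026764.

    # Back-of-envelope radius from the asteroseismic scaling relations:
    # R/Rsun = (nu_max/nu_max_sun) * (delta_nu/delta_nu_sun)**-2
    #          * (Teff/Teff_sun)**0.5
    NU_MAX_SUN = 3090.0    # muHz, solar frequency of maximum power
    DELTA_NU_SUN = 135.1   # muHz, solar large frequency separation
    TEFF_SUN = 5777.0      # K

    def scaling_radius(nu_max, delta_nu, teff):
        return ((nu_max / NU_MAX_SUN)
                * (delta_nu / DELTA_NU_SUN) ** -2
                * (teff / TEFF_SUN) ** 0.5)

    # Hypothetical subgiant-like observables (round numbers):
    print(f"R ~ {scaling_radius(nu_max=900.0, delta_nu=50.0, teff=5600.0):.2f} Rsun")

For subgiant-like inputs this lands near 2 solar radii, the regime where detailed modeling of individual frequencies, as in this paper, sharpens the estimate to the percent level.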
Results on the Redundancy of Universal Compression for Finite-Length Sequences
In this paper, we investigate the redundancy of universal coding schemes on
smooth parametric sources in the finite-length regime. We derive an upper bound
on the probability of the event that a sequence of length $n$, chosen using Jeffreys' prior from the family of parametric sources with $d$ unknown parameters, is compressed with a redundancy smaller than $(1-\epsilon)\frac{d}{2}\log n$ for any $\epsilon > 0$. Our results also confirm that for large enough $n$ and $d$, the average minimax redundancy provides a
good estimate for the redundancy of most sources. Our result may be used to
evaluate the performance of universal source coding schemes on finite-length
sequences. Additionally, we precisely characterize the minimax redundancy for
two-stage codes. We demonstrate that the two-stage assumption incurs a
negligible redundancy especially when the number of source parameters is large.
Finally, we show that the redundancy is significant in the compression of small
sequences.
Comment: Accepted to the 2011 IEEE International Symposium on Information Theory (ISIT 2011).
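To see why the finite-length regime matters, recall that the minimax redundancy for a smooth family with $d$ parameters grows like $(d/2)\log n$ bits, so the per-symbol overhead vanishes only slowly. A toy calculation (leading term only; the constant-order terms are deliberately dropped here):

    import math

    def leading_redundancy_bits(n, d):
        # Leading (d/2) * log2(n) term of the minimax redundancy;
        # constant-order terms are omitted in this rough estimate.
        return 0.5 * d * math.log2(n)

    for n in (128, 1024, 65536):
        r = leading_redundancy_bits(n, d=4)
        print(f"n={n:6d}: ~{r:5.1f} bits total, {r / n:.4f} bits/symbol")

For $d = 4$ this gives roughly 0.11 bits per symbol at $n = 128$ but under 0.001 at $n = 65536$, consistent with the paper's point that redundancy is significant mainly in the compression of small sequences.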
Optimal Iris Fuzzy Sketches
Fuzzy sketches, introduced as a link between biometry and cryptography, are a
way of handling biometric data matching as an error correction issue. We focus
here on iris biometrics and look for the best error-correcting code in that
respect. We show that two-dimensional iterative min-sum decoding leads to
results near the theoretical limits. In particular, we evaluate our
techniques on the Iris Challenge Evaluation (ICE) database and validate our
findings.
Comment: 9 pages. Submitted to the IEEE Conference on Biometrics: Theory, Applications and Systems, 2007, Washington D.C.
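A minimal illustration of "biometric matching as error correction" via the classic code-offset construction: store only the offset of the template from a random codeword, and let decoding absorb the noise in a fresh reading. The 3-fold repetition code, the 1% toy error rate, and all sizes are assumptions chosen for readability; real iris codes are far noisier, which is exactly why the paper looks for much stronger codes with iterative min-sum decoding.

    import numpy as np

    rng = np.random.default_rng(1)
    K = 64                                   # message bits (assumed size)

    def rep_encode(msg):                     # 3-fold repetition code
        return np.repeat(msg, 3)

    def rep_decode(word):                    # majority vote per 3-bit block
        return word.reshape(-1, 3).sum(axis=1) >= 2

    w = rng.integers(0, 2, 3 * K).astype(bool)   # enrolled template (toy)
    sketch = w ^ rep_encode(rng.integers(0, 2, K).astype(bool))

    # Fresh reading of the same biometric, with a toy 1% bit-error rate:
    w_prime = w ^ (rng.random(3 * K) < 0.01)

    # Recovery: the noisy offset decodes back to the hidden codeword.
    w_hat = sketch ^ rep_encode(rep_decode(w_prime ^ sketch))
    print("template recovered exactly:", bool((w_hat == w).all()))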
Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data
We provide formal definitions and efficient secure techniques for
- turning noisy information into keys usable for any cryptographic
application, and, in particular,
- reliably and securely authenticating biometric data.
Our techniques apply not just to biometric information, but to any keying
material that, unlike traditional cryptographic keys, is (1) not reproducible
precisely and (2) not distributed uniformly. We propose two primitives: a
"fuzzy extractor" reliably extracts nearly uniform randomness R from its input;
the extraction is error-tolerant in the sense that R will be the same even if
the input changes, as long as it remains reasonably close to the original.
Thus, R can be used as a key in a cryptographic application. A "secure sketch"
produces public information about its input w that does not reveal w, and yet
allows exact recovery of w given another value that is close to w. Thus, it can
be used to reliably reproduce error-prone biometric inputs without incurring
the security risk inherent in storing them.
We define the primitives to be both formally secure and versatile,
generalizing much prior work. In addition, we provide nearly optimal
constructions of both primitives for various measures of "closeness" of input
data, such as Hamming distance, edit distance, and set difference.
Comment: 47 pp., 3 figures. Preliminary version in Eurocrypt 2004, Springer LNCS 3027, pp. 523-540. Differences from version 3: minor edits for grammar, clarity, and typos.
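A toy end-to-end fuzzy extractor in the Gen/Rep shape described above, combining a code-offset secure sketch (an assumed repetition code, as in the iris example earlier) with SHA-256 standing in for a seeded strong extractor; names and parameters are illustrative assumptions, not the paper's formal construction.

    import hashlib
    import numpy as np

    rng = np.random.default_rng(2)
    K = 64                                   # message bits (assumed)

    def rep_encode(msg):                     # 3-fold repetition code
        return np.repeat(msg, 3)

    def rep_decode(word):                    # majority vote per 3-bit block
        return word.reshape(-1, 3).sum(axis=1) >= 2

    def extract(bits, seed):
        # Hash as an illustrative stand-in for a seeded strong extractor.
        return hashlib.sha256(seed + np.packbits(bits).tobytes()).hexdigest()

    # Gen(w): output key R and public helper data P = (sketch, seed).
    w = rng.integers(0, 2, 3 * K).astype(bool)   # enrolled input (toy)
    sketch = w ^ rep_encode(rng.integers(0, 2, K).astype(bool))
    seed = b"public-extractor-seed"
    R = extract(w, seed)

    # Rep(w', P): reproduce R from any w' close enough to w.
    w_prime = w ^ (rng.random(3 * K) < 0.01)
    w_hat = sketch ^ rep_encode(rep_decode(w_prime ^ sketch))
    print("key reproduced:", extract(w_hat, seed) == R)

Storing only (sketch, seed) reveals little about w, while R stays usable as a cryptographic key; those are exactly the security properties the paper formalizes.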
- …