Growing islands of interest: nurturing the development of young children’s working theories
This presentation draws on the work from a 2-year collaborative practitioner research project, Moments of wonder, every day events: how are young children theorising and making sense of their world. The project aimed to contribute perspectives to the discussion around the ways young children express and develop working theories, how practitioners understand these and how best to respond to this learning in five Playcentres (parent-led early childhood education settings) in Canterbury, New Zealand.
Children’s working theories, as described in Te Whāriki (the New Zealand early childhood education curriculum), are derived from Claxton’s view that knowledge consists of a large number of purpose-built situation specific packages called ‘mini theories’, and that ‘learning involved a gradual process of editing these mini theories so that they come to contain better knowledge and skill and be better located with respect to the area of experience for which they are suitable’. When children are engaged with others in complex thinking they are forming and strengthening their working theories.
In exploring working theories we recognise that children have many interests. Some of these are fleeting, while others are more connected or revisited more frequently by children. Over the course of our research, we have come to think of these interests as ‘islands’ and in doing so have adopted this as a metaphor for working theories. We were keen to see how we can grow some of these islands of interest: making them more complex, more connected, and more compelling to children.
The research team explored the different ways opportunities can be created for children to express and develop working theories, and the resulting outcomes for children's learning. The presentation will focus on some of the strategies implemented and the ways these have contributed to children's 'working theories' learning as the practitioner researchers attempted to build communities of thinkers and 'wonderers'.
Developing Cross-cultural Understanding through Sociolinguistic Dissemination: A Practice in Multicultural Education
The use of language cannot be separated from the culture of its speakers. Most experts agree that language is the cultural reflection of a social community. Therefore, language learning must involve learning the culture related to the language being learnt. This paper describes my personal experience in teaching Sociolinguistics II to students of the English Language and Literature Study Program, Yogyakarta State University, through sociolinguistic dissemination to develop their cross-cultural understanding. One of the main issues in the teaching of sociolinguistics is to see how cultural aspects are reflected in the use of language. Recognising the importance of this course for understanding the relationship between language and culture, the English Language and Literature Study Program at Yogyakarta State University provides its students with this course over two semesters, Sociolinguistics I and II. Unlike Sociolinguistics I, which is more theoretical, Sociolinguistics II is more practical. The students are expected to gain an overview of, and experience in, conducting mini research that will benefit them when writing their theses. In my experience teaching this subject, the students were assigned to conduct mini research on the issue of cross-cultural understanding. In this case, they were asked to observe multicultural films from different points of view, namely: language and society, bilingualism, language variation, choosing a code, language and sex, and politeness and solidarity. At the end, they had to disseminate their observation results. The teaching of this course prioritized the process approach: the students were given the chance to consult on their observations, present the research report, and revise it.
In fact, the implementation of sociolinguistic dissemination not only showed the students the significance of cross-cultural understanding in the process of communication but also gave them experience in doing mini research, group work, writing a paper, consultation, and reporting the results. Key words: developing, cross-cultural understanding, sociolinguistic dissemination
Development of an oceanographic application in HPC
High Performance Computing (HPC) is used for running advanced application programs
efficiently, reliably, and quickly.
In earlier decades, performance analysis of HPC applications focused on speed, thread scalability, and the memory hierarchy. Now, it is also essential to consider the energy, or power, consumed by the system while executing an application.
In fact, High Power Consumption (HPC) is one of the biggest problems for the High Performance Computing (HPC) community and one of the major obstacles to exascale system design. New generations of HPC systems aim to achieve exaflop performance and will demand even more energy for processing and cooling; nowadays, the growth of HPC systems is limited by energy issues.
Recently, many research centres have focused on the automatic tuning of HPC applications, which requires a broad study of HPC applications in terms of power efficiency.
In this context, this paper proposes the study of an oceanographic application, named OceanVar, which implements a Domain Decomposition based 4D Variational model (DD-4DVar), one of the most commonly used HPC applications, evaluating not only classic performance aspects but also power efficiency across several case studies.
This work was carried out at BSC (Barcelona Supercomputing Center), Spain, within the Mont-Blanc project, performing the tests first on an HCA server with Intel technology and then on the Thunder mini-cluster with ARM technology.
This thesis first explains the concept of data assimilation, the context in which it is developed, and briefly describes the 4DVAR mathematical model. After a close examination of the problem, the Matlab description of the data-assimilation problem was ported to a sequential version in the C language.
Secondly, after identifying the most time-consuming computational kernels, a parallel version of the application was developed in a multiprocessor programming style, using the MPI (Message Passing Interface) protocol.
In terms of performance, the experimental results show that, when running on the HCA server (an Intel architecture), the efficiency of the two most expensive functions, as the number of processes grows, is approximately 80%.
When running on the ARM architecture instead, specifically on the Thunder mini-cluster, the observed trend is a "superlinear speedup", which in our case can be explained by more efficient use of resources (cache memory access) compared with the sequential run.
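As an illustrative aside (the timings below are invented, not taken from the thesis), speedup and parallel efficiency are derived from measured run times, and efficiency above 100% is exactly the superlinear regime described here:

```python
# Illustrative sketch: computing parallel speedup and efficiency from
# measured wall-clock times. The timings below are hypothetical.

def speedup(t_serial, t_parallel):
    """Speedup S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Efficiency E(p) = S(p) / p; E(p) > 1 indicates superlinear speedup."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical timings for one kernel, in seconds
t1 = 100.0   # sequential run
t16 = 7.8    # 16 processes

s = speedup(t1, t16)
e = efficiency(t1, t16, 16)
print(f"speedup = {s:.2f}, efficiency = {e:.1%}")
# Efficiency above 100% (e.g. from better per-process cache reuse)
# corresponds to the "superlinear" regime.
```

With these made-up numbers the efficiency lands near the ~80% figure quoted above; superlinear cases would push the ratio past 1.0.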
The second part of this thesis presents an analysis of some aspects of this application that affect its energy efficiency.
After a brief discussion of the energy consumption characteristics of the Thunder chip within the technological landscape, the energy consumption of the Thunder mini-cluster was measured using a power consumption detector, the Yokogawa Power Meter, in order to build an overview of the power-to-solution of this application, to be used as a baseline for subsequent analyses with other parallel styles.
Finally, a comprehensive performance evaluation, aimed at assessing the quality of the MPI parallelization, was conducted using a performance tool named Paraver, developed by BSC. Paraver is a performance analysis and visualisation tool that can be used to analyse MPI, threaded, or mixed-mode programmes, and it is key to profiling parallel code and optimising it for High Performance Computing.
A set of graphical representations of these statistics makes it easy for a developer to identify performance problems. Problems that can be easily identified include load-imbalanced decompositions, excessive communication overheads, and poor average floating-point operations per second.
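As a hedged illustration (the per-process timings are invented, and this metric is a common convention rather than anything specific to Paraver), load imbalance in a decomposition can be quantified from per-process compute times:

```python
# Illustrative sketch: quantifying load imbalance from per-process
# compute times (values are made up). A load-balance efficiency of 1.0
# means perfectly balanced; lower values mean some processes sit idle
# waiting for the slowest one.

def load_balance_efficiency(times):
    """Average compute time across processes divided by the maximum."""
    return sum(times) / len(times) / max(times)

per_process_seconds = [9.8, 10.1, 10.0, 14.3]  # hypothetical 4-rank run
lb = load_balance_efficiency(per_process_seconds)
print(f"load-balance efficiency = {lb:.2f}")
```

A value well below 1.0, as here, flags the kind of load-imbalanced decomposition that a trace visualiser makes visible at a glance.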
Paraver can also report statistics based on hardware counters, which are provided by the
underlying hardware.
This project aimed to use Paraver configuration files to allow certain metrics to be
analysed for this application.
To explain the performance trend observed on the Thunder mini-cluster, traces were extracted from the various case studies; the results match expectations, namely a drastic drop in cache misses from ppn (processes per node) = 1 to ppn = 16. This helps explain the more efficient use of cluster resources as the number of processes increases.
Partition strategies for incremental Mini-Bucket
Probabilistic graphical models such as Markov random fields and Bayesian networks
provide powerful frameworks for knowledge representation and reasoning
over models with large numbers of variables. Unfortunately, exact inference
problems on graphical models are generally NP-hard, which has led to significant interest in approximate inference algorithms.
Incremental mini-bucket is a framework for approximate inference that provides
upper and lower bounds on the exact partition function by, starting from
a model with completely relaxed constraints, i.e. with the smallest possible
regions, incrementally adding larger regions to the approximation. Current
approximate inference algorithms provide tight upper bounds on the exact partition
function but loose or trivial lower bounds.
This project focuses on researching partitioning strategies that improve the
lower bounds obtained with mini-bucket elimination, working within the framework
of incremental mini-bucket.
We start from the idea that variables that are highly correlated should be
reasoned about together, and we develop a strategy for region selection based
on that idea. We implement the strategy and explore ways to improve it, and
finally we measure the results obtained using the strategy and compare them to
several baselines.
We find that our strategy performs better than both of our baselines. We
also rule out several possible explanations for the improvement.
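As a hedged sketch of the underlying mechanism (the abstract gives no code; the function name, the greedy first-fit rule, and the toy scopes below are illustrative, not the project's strategy), mini-bucket elimination partitions the functions in a variable's bucket into mini-buckets whose combined variable scope stays within an i-bound:

```python
# Illustrative sketch: partitioning a bucket's functions into mini-buckets
# so that each mini-bucket's combined scope has at most i_bound variables.
# The greedy first-fit rule here is just one possible partitioning strategy;
# choosing which functions to group (e.g. by variable correlation, as the
# project proposes) is exactly what determines the tightness of the bounds.

def mini_bucket_partition(scopes, i_bound):
    """scopes: list of variable sets, one per function in the bucket.
    Returns a list of [functions_in_bucket, union_scope] pairs."""
    buckets = []
    for scope in scopes:
        placed = False
        for bucket in buckets:
            union = bucket[1] | scope
            if len(union) <= i_bound:   # fits without exceeding the i-bound
                bucket[0].append(scope)
                bucket[1] = union
                placed = True
                break
        if not placed:                  # start a new mini-bucket
            buckets.append([[scope], set(scope)])
    return buckets

# Toy bucket for a variable X with four neighbouring function scopes
scopes = [{"X", "A"}, {"X", "B"}, {"X", "A", "C"}, {"X", "D"}]
parts = mini_bucket_partition(scopes, i_bound=3)
print(len(parts), "mini-buckets")
```

Fewer, larger mini-buckets (a higher i-bound) approximate the exact bucket more closely; within a fixed i-bound, the choice of which scopes share a mini-bucket is the partitioning strategy the project investigates.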
Downstream reactions and engineering in the microbially reconstituted pathway for Taxol
Taxol (a trademarked product of Bristol-Myers Squibb) is a complex isoprenoid natural product which has displayed potent anticancer activity. Originally isolated from the Pacific yew tree (Taxus brevifolia), Taxol has been mass-produced through processes reliant on plant-derived biosynthesis. Recently, there have been alternative efforts to reconstitute the biosynthetic process through technically convenient microbial hosts, which offer unmatched growth kinetics and engineering potential. Such an approach is made challenging by the need to successfully introduce the significantly foreign enzymatic steps responsible for eventual biosynthesis. Doing so, however, offers the potential to engineer more efficient and economical production processes and the opportunity to design and produce tailored analog compounds with enhanced properties. This mini review will specifically focus on heterologous biosynthesis as it applies to Taxol, with an emphasis on the challenges associated with introducing and reconstituting the downstream reaction steps needed for final bioactivity.
National Institutes of Health (U.S.) (GM085323); Milheim Foundation (2006-2017)
Keeping conceptualizations simple: Examples with family carers of people with dementia
This paper forms the second in a series of three articles on conceptualizations of older people's distress. The focus is on simple and concrete "mini-formulations" that keep the amount of information in them to a minimum, yet retain explanatory and predictive power. Such formulations can be used as the basis for action plans for intervention, while avoiding overburdening the cognitive capacity of the client or therapist. Simple linear and cyclical models are described, as are cognitive triad and dyad models. The uses of "mini-formulations" in group and individual settings are illustrated in a case example of a lady caring for her husband, who has dementia.