Automatic skeleton-driven performance optimizations for transactional memory
The recent shift toward multi-core chips has pushed the burden of extracting performance to the programmer. In fact, programmers now have to be able to uncover more
coarse-grain parallelism with every new generation of processors, or the performance
of their applications will remain roughly the same or even degrade. Unfortunately,
parallel programming is still hard and error-prone. This has driven the development of
many new parallel programming models that aim to make this process efficient.
This thesis first combines the skeleton-based and transactional memory programming models in a new framework, called OpenSkel, in order to improve performance
and programmability of parallel applications. This framework provides a single skeleton that allows the implementation of transactional worklist applications. Skeleton or
pattern-based programming allows parallel programs to be expressed as specialized instances of generic communication and computation patterns. This leaves the programmer with only the implementation of the particular operations required to solve the
problem at hand. Thus, this approach simplifies parallel programming
by eliminating some of its major challenges, namely thread
communication, scheduling and orchestration. However, the application programmer
still has to correctly synchronize threads on data races. This commonly requires the
use of locks to guarantee atomic access to shared data. In particular, lock programming
is vulnerable to deadlocks and also limits coarse-grain parallelism by blocking threads
that could otherwise execute in parallel.
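The worklist-skeleton idea described above can be sketched in Python (a hypothetical illustration, not OpenSkel's actual API; a plain lock stands in for the TM runtime). The framework owns the threads and the shared worklist, and the user supplies only the per-item operation:

```python
import queue
import threading

def worklist_skeleton(initial_items, process, num_workers=4):
    # The skeleton owns the threads and the shared worklist; the user
    # supplies only `process`, which handles one item and may return
    # new items to enqueue plus a result.
    work = queue.Queue()
    for item in initial_items:
        work.put(item)
    results, results_lock = [], threading.Lock()  # lock stands in for TM

    def worker():
        while True:
            item = work.get()
            if item is None:              # poison pill: shut down
                work.task_done()
                return
            new_items, result = process(item)
            for n in new_items:           # dynamically grow the worklist
                work.put(n)
            with results_lock:
                results.append(result)
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    work.join()                           # wait until the worklist drains
    for _ in threads:
        work.put(None)
    for t in threads:
        t.join()
    return results
```

For example, a `process` that halves its item until reaching 1 grows and drains the worklist dynamically, which is the behaviour a transactional worklist application relies on.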
Transactional Memory (TM) thus emerges as an attractive alternative model to simplify parallel programming by removing this burden of handling data races explicitly.
This model allows programmers to write parallel code as transactions, which are then
guaranteed by the runtime system to execute atomically and in isolation regardless of
any data races. TM programming thus frees the application from deadlocks and
enables the exploitation of coarse-grain parallelism when transactions do not conflict
very often. Nevertheless, thread management and orchestration are left to the application programmer. Fortunately, this can be naturally handled by a skeleton framework.
This fact makes the combination of skeleton-based and transactional programming a
natural step to improve programmability since these models complement each other.
In fact, this combination releases the application programmer from dealing with thread
management and data races, and also inherits the performance improvements of both
models. In addition, a skeleton framework is also amenable to skeleton-driven
performance optimizations that exploit the application pattern and system information.
This thesis thus also presents a set of pattern-oriented optimizations that are automatically selected and applied in a significant subset of transactional memory applications that share a common pattern called worklist. These optimizations exploit the
knowledge about the worklist pattern and the TM nature of the applications to avoid
transaction conflicts, prefetch data and reduce contention, among others. Using a novel autotuning mechanism, OpenSkel dynamically selects the most suitable set of these pattern-oriented performance optimizations for each application and adjusts them accordingly.
Experimental results on a subset of five applications from the STAMP benchmark suite
show that the proposed autotuning mechanism can achieve performance improvements
within 2%, on average, of a static oracle for a 16-core UMA (Uniform Memory Access) platform and surpasses it by 7% on average for a 32-core NUMA (Non-Uniform
Memory Access) platform.
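The measurement-driven selection idea behind such an autotuner can be sketched as follows (a naive exhaustive variant for illustration only; OpenSkel's actual mechanism selects and adjusts optimizations incrementally at runtime):

```python
from itertools import combinations

def autotune(run, optimizations):
    # Measure the workload under every subset of the candidate
    # optimizations and keep the fastest configuration. `run(active)`
    # executes the workload with the given optimizations enabled and
    # returns its measured cost (e.g. elapsed time).
    best_set, best_cost = frozenset(), run(frozenset())
    for k in range(1, len(optimizations) + 1):
        for combo in combinations(optimizations, k):
            cost = run(frozenset(combo))
            if cost < best_cost:
                best_set, best_cost = frozenset(combo), cost
    return best_set, best_cost
```

An exhaustive search is exponential in the number of candidate optimizations, which is exactly why a dynamic, incremental mechanism like the one described above is preferable in practice.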
Finally, this thesis also investigates skeleton-driven system-oriented performance
optimizations such as thread mapping and memory page allocation. To this end, the OpenSkel system and the autotuning mechanism are extended to accommodate these optimizations. Experimental results on a subset of five
applications from the STAMP benchmark show that the OpenSkel framework with the
extended autotuning mechanism driving both pattern and system-oriented optimizations can achieve performance improvements of up to 88%, with an average of 46%,
over a baseline version for a 16-core UMA platform and up to 162%, with an average
of 91%, for a 32-core NUMA platform.
Educational projects in community schools of Espírito Santo that contribute to rural education
Rural Education (Educação do Campo) opens new pedagogical possibilities for the education offered in the rural schools of Espírito Santo. This paper discusses two pedagogical projects carried out at the Pedra Torta municipal elementary school in the municipality of Águia Branca (ES), one of the so-called Agroecological Community Schools, which follow an alternation model of schooling outside the framework of the Agricultural Family Schools and the Integrated State Centers for Rural Education. This model of education seeks to value rural culture, the cultivation of the land, quality of life and the harmonious balance of the environment. Within this context are the projects "Horta Comunitária" and "Mãos Que Fazem". This is a qualitative study conducted as a case study, in which field observations and analysis of documents related to the development of the pedagogical projects were carried out between 8 April and 22 June 2016. The results indicate that these experiences foster awareness of the various dimensions of rural reality and the valorization of rural culture.
Keywords: Pedagogical Projects, Rural Education, Pedagogy of Alternation.
Crowd score: a method for the evaluation of jokes using Large Language Model AI voters as judges
This paper presents the Crowd Score, a novel method to assess the funniness of jokes using large language models (LLMs) as AI judges. Our method relies on inducing different personalities into the LLM and aggregating the votes of the AI judges into a single score to rate jokes. We validate the votes using an auditing technique that checks, using the LLM, whether the explanation for a particular vote is reasonable. We tested our methodology on 52 jokes in a crowd of four AI voters with different humour types: affiliative, self-enhancing, aggressive and self-defeating. Our results show that few-shot prompting leads to better results than zero-shot for the voting question. Personality induction showed that aggressive and self-defeating voters are significantly more inclined than affiliative and self-enhancing voters to find jokes from a set of aggressive/self-defeating jokes funny. The Crowd Score follows the same trend as human judges, assigning higher scores to jokes that human judges also consider funnier. We believe that our methodology could be applied to other creative domains such as stories, poetry and slogans. It could help the CC community adopt a flexible and accurate standard approach to comparing different work under a common metric and, by minimizing human participation in assessing creative artefacts, it could accelerate the prototyping of creative artefacts and reduce the cost of hiring human raters.
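The vote-aggregation step of such a crowd can be sketched as follows (the voter functions are hypothetical stand-ins for LLM calls with induced humour personalities, and the paper's exact aggregation may differ):

```python
def crowd_score(joke, voters):
    # Collect one binary funny/not-funny vote per AI judge and map the
    # share of "funny" votes onto a 0-100 score.
    votes = [voter(joke) for voter in voters]
    return 100.0 * sum(votes) / len(votes)

# Hypothetical stand-ins for LLM judges with induced humour personalities:
# real voters would prompt an LLM and parse its vote.
def aggressive(joke):      return 1 if "!" in joke else 0
def affiliative(joke):     return 1
def self_enhancing(joke):  return 0
def self_defeating(joke):  return 1 if len(joke) > 20 else 0
```

With these four mock judges, a joke that satisfies three of them scores 75, illustrating how individual personality-driven votes fold into one number.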
Pushing GPT’s creativity to its limits: alternative uses and Torrance Tests
In this paper, we investigate the potential of Large Language Models (LLMs), specifically GPT-4, to improve their creative responses in well-known creativity tests, such as Guilford's Alternative Uses Test (AUT) and an adapted version of the Torrance Test of Creative Thinking (TTCT) visual completion tests. We exploit GPT-4's self-improving ability by using a sequence of forceful interactive prompts in a multi-step conversation, aiming to accelerate the convergence process towards more creative responses. Our contributions include an automated approach to enhance GPT's responses in the AUT and TTCT visual completion test and a series of prompts to generate and evaluate GPT's responses in these tests. Our results show that the creativity of GPT's responses can be improved through the use of forceful prompts. This paper opens up possibilities for future research on different sets of prompts to further improve the creativity convergence of LLM-generated responses and the application of similar interactive processes to tasks involving other cognitive skills.
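The multi-step pressure loop can be sketched as follows (`ask` is a placeholder for a chat-model call taking the conversation so far; the paper's actual prompt wording is not reproduced here):

```python
def push_creativity(ask, seed_prompt, rounds=3):
    # Multi-step conversation: after the initial answer, repeatedly press
    # the model to produce something more original. `ask(history)` stands
    # in for a chat-model call over the conversation history.
    history = [("user", seed_prompt)]
    history.append(("assistant", ask(history)))
    for _ in range(rounds):
        history.append(("user", "That is not creative enough. "
                                "Give a far more original answer."))
        history.append(("assistant", ask(history)))
    return history[-1][1]  # final, hopefully most creative, answer
```

The whole conversation is resent on each turn, so the model sees its own earlier answers and the repeated demand to surpass them.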
Is GPT-4 good enough to evaluate jokes?
In this paper, we investigate the ability of large language models (LLMs), specifically GPT-4, to assess the funniness of jokes in comparison to human ratings. We use a dataset of jokes annotated with human ratings and explore different system descriptions in GPT-4 to imitate human judges with various types of humour. We propose a novel method to create a system description using many-shot prompting, providing numerous examples of jokes and their evaluation scores. Additionally, we examine the performance of different system descriptions when given varying amounts of instructions and examples on how to evaluate jokes. Our main contributions include a new method for creating a system description in LLMs to evaluate jokes and a comprehensive methodology to assess LLMs' ability to evaluate jokes using rankings rather than individual scores.
Bits of grass: does GPT already know how to write like Whitman?
This study examines the ability of GPT-3.5, GPT-3.5-turbo (ChatGPT) and GPT-4 models to generate poems in the style of specific authors using zero-shot and many-shot prompts (which use the maximum context length of 8192 tokens). We assess the performance of models that are not fine-tuned for generating poetry in the style of specific authors, via automated evaluation. Our findings indicate that without fine-tuning, even when provided with the maximum number of 17 poem examples (8192 tokens) in the prompt, these models do not generate poetry in the desired style.
Alcohol consumption among adolescents at a public school in the municipality of Ponta Grossa (PR)
This study analyzed alcohol consumption among adolescents at a public school. Sixty-three students from a public school in the municipality of Ponta Grossa (PR) took part (39 female and 24 male). Alcohol consumption was measured with the AUDIT, and internal consistency was assessed with Cronbach's alpha coefficient. Descriptive statistics were used to analyze alcohol consumption, and the chi-square test was used to check for possible associations between alcohol use and sex. The results show that alcohol consumption was low-risk and that no association between alcohol use and sex was found in this study.
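For reference, the internal-consistency statistic used above, Cronbach's alpha, is (k / (k - 1)) * (1 - sum of item variances / variance of total scores); a minimal sketch:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    # Cronbach's alpha for internal consistency. `item_scores` is a list
    # of items, each a list of the respondents' scores on that item.
    k = len(item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-respondent total
    item_var = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))
```

When every item gives identical scores the statistic reaches its maximum of 1.0, and it falls as items agree less with each other.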
Automatic skeleton-driven performance optimizations for transactional memory
EThOS - Electronic Theses Online Service, United Kingdom