Development of a visualization application for the BCC trace module
The goal of this thesis is to build a web-based user interface for the BCC trace module. On this interface, the user can select the application to observe and mark which of its functions should be monitored. When the run finishes, an interactively browsable graph of the call chains and their frequencies is generated. The program also includes features that assist with searching the graph.
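For readers unfamiliar with how such call-frequency data is gathered, below is a minimal sketch, using the bcc Python bindings that ship with BCC, of attaching uprobes to user-selected functions of a target binary and counting their calls. The binary path and symbol names are hypothetical placeholders; the tool described above wraps this kind of collection in a web UI and a call-graph visualization.

```python
import time
from bcc import BPF

# Illustrative inputs: in the described tool these come from the web UI.
BINARY = "/usr/local/bin/target_app"                 # hypothetical binary
SYMBOLS = ["main", "parse_request", "handle_reply"]  # hypothetical functions

# Generate one tiny BPF probe per selected function; each bumps its own
# slot in a shared per-function counter array.
prog = f"BPF_ARRAY(counts, u64, {len(SYMBOLS)});\n"
for i in range(len(SYMBOLS)):
    prog += f"""
int probe_{i}(struct pt_regs *ctx) {{
    counts.increment({i});
    return 0;
}}
"""

b = BPF(text=prog)
for i, sym in enumerate(SYMBOLS):
    b.attach_uprobe(name=BINARY, sym=sym, fn_name=f"probe_{i}")

time.sleep(10)  # trace for a fixed window

# Dump call frequencies: the edge weights a call-graph UI would render.
for k, v in b["counts"].items():
    print(SYMBOLS[k.value], v.value)
```

A real implementation would also record caller/callee pairs (e.g., by walking the user stack) to reconstruct the call chains, not just per-function counts.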
Northern Eurasia Future Initiative (NEFI): facing the challenges and pathways of global change in the 21st century
During the past several decades, the Earth system has changed significantly, especially across Northern Eurasia. Changes in the socio-economic conditions of the larger countries in the region have also resulted in a variety of regional environmental changes that can have global consequences. The Northern Eurasia Future Initiative (NEFI) has been designed as an essential continuation of the Northern Eurasia Earth Science Partnership Initiative (NEESPI), which was launched in 2004. NEESPI sought to elucidate all aspects of ongoing environmental change in order to inform societies and thus better prepare them for future developments. A key principle of NEFI is that these developments must now be secured through science-based strategies co-designed with regional decision makers to lead their societies to prosperity in the face of environmental and institutional challenges. NEESPI scientific research, data, and models have created a solid knowledge base to support the NEFI program. This paper presents the consensus NEFI research vision built on that knowledge base. It provides the reader with samples of recent accomplishments in regional studies and formulates new NEFI science questions. To address these questions, nine research foci are identified and their selection is briefly justified. These foci include: warming of the Arctic; changing frequency, pattern, and intensity of extreme and inclement environmental conditions; retreat of the cryosphere; changes in terrestrial water cycles; changes in the biosphere; pressures on land use; changes in infrastructure; societal actions in response to environmental change; and quantification of Northern Eurasia's role in the global Earth system. Powerful feedbacks between the Earth and human systems in Northern Eurasia (e.g., mega-fires, droughts, depletion of the cryosphere essential for water supply, retreat of sea ice) result from past and current human activities (e.g., large-scale water withdrawals, land-use and governance change) and may restrict future human activities or open new opportunities for them. Therefore, we propose that Integrated Assessment Models are needed as the final stage of global change assessment. The overarching goal of this NEFI modeling effort is to enable the evaluation of economic decisions in response to changing environmental conditions and to justify mitigation and adaptation efforts.
Towards Causal Credit Assignment
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising but still largely unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits and its key points to improve. We then apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
Comment: In case I write a paper about this thesis, I do not want them to be duplicates of each other.
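The return-conditioned form of Hindsight Credit Assignment (Harutyunyan et al., 2019) estimates an action's advantage as A(x, a) = E[(1 - pi(a|x) / h(a|x, Z)) Z], where h is the hindsight distribution of the action given the observed return Z. Below is a minimal sketch of that estimator on a toy two-armed bandit; the environment and the count-based estimate of h are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-armed bandit with Bernoulli returns (illustrative only):
# P(Z = 1 | a = 0) = 0.8, P(Z = 1 | a = 1) = 0.2.
P_SUCCESS = np.array([0.8, 0.2])
pi = np.array([0.5, 0.5])  # uniform behaviour policy

# Collect one-step episodes under pi.
n = 200_000
actions = rng.choice(2, size=n, p=pi)
returns = (rng.random(n) < P_SUCCESS[actions]).astype(float)

# Estimate the hindsight distribution h(a | z) by counting: among
# episodes that ended with return z, how often was each action taken?
h = np.zeros((2, 2))  # h[z, a]
for z in (0, 1):
    mask = returns == z
    for a in (0, 1):
        h[z, a] = (actions[mask] == a).mean()

# Return-conditioned HCA advantage:
#   A(a) = E_{Z ~ P(.|a)} [ (1 - pi(a) / h(a | Z)) * Z ]
for a in (0, 1):
    mask = actions == a
    z = returns[mask]
    ratio = pi[a] / h[z.astype(int), a]
    print(f"action {a}: HCA advantage ~ {np.mean((1.0 - ratio) * z):+.3f}")

# True advantages for comparison: Q = P_SUCCESS, V = pi @ P_SUCCESS.
print("true advantages:", P_SUCCESS - pi @ P_SUCCESS)
```

Note that the hindsight estimate recovers both advantages (about +0.3 and -0.3 here) even though each estimate only uses episodes where that action was actually taken; the identity requires h(a|z) > 0 wherever the return z is reachable, which holds in this toy by construction.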
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-bench
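As a rough illustration of how such a benchmark is consumed programmatically: simple BIG-bench tasks are stored as JSON files with an examples list of input/target pairs. The sketch below assumes that simplified schema, a hypothetical file path, and a stand-in model; the real benchmark API in the repository above is considerably richer (multiple-choice scoring, multi-metric tasks, and so on).

```python
import json

def load_task(path):
    """Load a BIG-bench-style JSON task (assumed simplified schema)."""
    with open(path) as f:
        task = json.load(f)
    return task["examples"]  # [{"input": ..., "target": ...}, ...]

def dummy_model(prompt):
    # Stand-in for a real language model call.
    return "42"

def exact_match_score(examples, model):
    """Fraction of examples where the model output equals the target."""
    correct = sum(
        model(ex["input"]).strip() == str(ex["target"]).strip()
        for ex in examples
    )
    return correct / len(examples)

if __name__ == "__main__":
    examples = load_task("task.json")  # illustrative path
    print(f"exact match: {exact_match_score(examples, dummy_model):.3f}")
```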