High temporal resolution estimates of Arctic snowfall rates emphasizing gauge and radar-based retrievals from the MOSAiC expedition
This article presents the results of snowfall rate and accumulation estimates from a vertically pointing 35-GHz radar and other sensors deployed during the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) expedition. The radar-based retrievals are the most consistent in terms of data availability and are largely immune to blowing snow. The total liquid-equivalent accumulation during the snow accumulation season is around 110 mm, with more abundant precipitation during spring months. About half of the total accumulation came from weak snowfall with rates less than approximately 0.2 mm h−1. The total snowfall estimates from a Vaisala optical sensor aboard the icebreaker are similar to those from radar retrievals, though their daily and monthly accumulations and instantaneous rates varied significantly. Compared to radar retrievals and the icebreaker optical sensor data, measurements from an identical optical sensor at an ice camp are biased high. Blowing snow effects, in part, explain these differences. Weighing gauge measurements significantly overestimate snowfall during February–April 2020 compared to other sensors and are not well suited for estimating instantaneous snowfall rates. The icebreaker optical disdrometer estimates of snowfall rates show, on average, relatively little bias compared to radar retrievals when raw particle counts are available and appropriate snowflake mass–size relations are used. These counts, however, are not available during periods that produced more than a third of the total snowfall. While there are uncertainties in the radar-based retrievals due to the choice of reflectivity–snowfall rate relations, the major error contributor is the uncertainty in the radar absolute calibration. The MOSAiC radar calibration is evaluated using comparisons with other radars and liquid water cloud–drizzle processes observed during summer. Overall, this study describes a consistent, radar-based snowfall rate product for MOSAiC that provides significant insight into Central Arctic snowfall and can be used for many other purposes.
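For readers unfamiliar with reflectivity–snowfall rate (Ze–S) retrievals, the sketch below illustrates the general idea: liquid-equivalent snowfall rate is derived from radar reflectivity through a power law and integrated over time to give accumulation. The coefficients a and b and the input array reflectivity_dbz are illustrative assumptions, not values or data from the paper; actual Ka-band relations depend on snowflake habit and the assumed mass–size relation.

```python
import numpy as np

def snowfall_rate_from_reflectivity(ze_dbz, a=56.0, b=1.2):
    """Convert radar reflectivity (dBZ) to liquid-equivalent snowfall rate (mm/h)
    via a generic Ze = a * S**b power law. Coefficients are placeholders only."""
    ze_linear = 10.0 ** (ze_dbz / 10.0)      # dBZ -> linear units (mm^6 m^-3)
    return (ze_linear / a) ** (1.0 / b)      # invert Ze = a * S**b

# Hypothetical 1-min reflectivity samples at the lowest usable range gate
reflectivity_dbz = np.array([-5.0, 0.0, 5.0, 10.0])            # dBZ
rates = snowfall_rate_from_reflectivity(reflectivity_dbz)       # mm/h
accumulation_mm = rates.sum() * (1.0 / 60.0)                    # 1-min samples -> mm
print(rates, accumulation_mm)
```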
Spatial Encoding Strategy Theory: The Relationship between Spatial Skill and STEM Achievement
Learners’ spatial skill is a reliable and significant predictor of achievement in STEM education, including computing. Spatial skill is also malleable, meaning it can be improved through training. Most cognitive skill training improves performance on only a narrow set of similar tasks, but researchers have found ample evidence that spatial training can broadly improve STEM achievement. We do not yet know the cognitive mechanisms that make spatial skill training broadly transferable when other cognitive training is not, but understanding these mechanisms is important for developing training and instruction that consistently benefits learners, especially those starting with low spatial skill. This paper proposes the spatial encoding strategy (SpES) theory to explain the cognitive mechanisms connecting spatial skill and STEM achievement. To motivate SpES theory, the paper reviews research from STEM education, learning sciences, and psychology. SpES theory provides compelling post hoc explanations for the findings from this literature and aligns with neuroscience models about the functions of brain structures. The paper concludes with a plan for testing the theory’s validity and using it to inform future research and instruction. The paper focuses on implications for computing education, but the transferability of spatial skill to STEM performance makes the proposed theory relevant to many education communities.
Psychophysics, Gestalts and Games
Many psychophysical studies are dedicated to evaluating human gestalt detection on dot or Gabor patterns and to modeling its dependence on the pattern and background parameters. Nevertheless, even for these constrained percepts, psychophysics has not yet reached the challenging prediction stage, where human detection would be quantitatively predicted by a (generic) model. On the other hand, Computer Vision has attempted to define automatic detection thresholds. This chapter sketches a procedure, inspired by gestaltism, to confront these two methodologies. Using a computational quantitative version of the non-accidentalness principle, we raise the possibility that the psychophysical and the (older) gestaltist setups, both applicable to dot or Gabor patterns, find a useful complement in a Turing test. In our perceptual Turing test, human performance is compared by the scientist to the detection result given by a computer. This confrontation makes it possible to revive the abandoned method of gestaltic games. We sketch the elaboration of such a game, in which the subjects of the experiment are confronted with an alignment detection algorithm and are invited to draw examples that will fool it. We show that in this way a more precise definition of the alignment gestalt and of its computational formulation seems to emerge. Detection algorithms might also be relevant to more classic psychophysical setups, where they can again play the role of a Turing test. To a visual experiment in which subjects were invited to detect alignments in Gabor patterns, we associated a single function measuring alignment detectability in the form of a number of false alarms (NFA). The first results indicate that the values of the NFA, as a function of all simulation parameters, are highly correlated with human detection. This fact, which we intend to support by further experiments, might end up confirming that human alignment detection is the result of a single mechanism.
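For context, the NFA in a contrario detection frameworks is typically defined as the expected number of equally good detections under a background noise model, with a grouping declared meaningful when this expectation falls below 1. The sketch below shows the generic binomial form with made-up numbers (num_tests, n, k, p are all assumptions); the exact NFA formulation used for the Gabor-alignment experiments in the chapter may differ.

```python
from math import comb

def binomial_tail(n, k, p):
    """P[X >= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(num_tests, n, k, p):
    """Expected number of equally good detections under the noise model.
    A candidate grouping is accepted when NFA < 1."""
    return num_tests * binomial_tail(n, k, p)

# Hypothetical example: a candidate line covering n = 20 Gabor elements,
# k = 12 of which are oriented within +/-10 degrees of it (p = 20/180).
print(nfa(num_tests=10_000, n=20, k=12, p=20 / 180))   # well below 1 -> detected
```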
FIRE Arctic Clouds Experiment
An overview is given of the First ISCCP Regional Experiment (FIRE) Arctic Clouds Experiment, which was conducted in the Arctic from April through July 1998. The principal goal of the field experiment was to gather the data needed to examine the impact of arctic clouds on the radiation exchange between the surface, atmosphere, and space, and to study how the surface influences the evolution of boundary layer clouds. The observations will be used to evaluate and improve climate model parameterizations of cloud and radiation processes, satellite remote sensing of cloud and surface characteristics, and understanding of cloud-radiation feedbacks in the Arctic. The experiment utilized four research aircraft that flew over surface-based observational sites in the Arctic Ocean and Barrow, Alaska. In this paper we describe the programmatic and science objectives of the project, the experimental design (including research platforms and instrumentation), the conditions encountered during the field experiment, and some highlights of preliminary observations, modelling, and satellite remote sensing studies.
Overview of the MOSAiC expedition: Atmosphere
With the Arctic rapidly changing, the need to observe, understand, and model these changes is essential. To support this need, an annual cycle of observations of atmospheric properties, processes, and interactions was made while drifting with the sea ice across the central Arctic during the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) expedition from October 2019 to September 2020. An international team designed and implemented the comprehensive program to document and characterize all aspects of the Arctic atmospheric system in unprecedented detail, using a variety of approaches and across multiple scales. These measurements were coordinated with other observational teams to explore crosscutting and coupled interactions with the Arctic Ocean, sea ice, and ecosystem through a variety of physical and biogeochemical processes. This overview outlines the breadth and complexity of the atmospheric research program, which was organized into four subgroups: atmospheric state, clouds and precipitation, gases and aerosols, and energy budgets. Atmospheric variability over the annual cycle revealed important influences from a persistent large-scale winter circulation pattern, leading to some storms with pressure and winds that were outside the interquartile range of past conditions suggested by long-term reanalysis. Similarly, the MOSAiC location was warmer and wetter in summer than the reanalysis climatology, in part due to its proximity to the sea ice edge. The comprehensiveness of the observational program for characterizing and analyzing atmospheric phenomena is demonstrated via a winter case study examining air mass transitions and a summer case study examining vertical atmospheric evolution. Overall, the MOSAiC atmospheric program successfully met its objectives and was the most comprehensive atmospheric measurement program to date conducted over the Arctic sea ice. The obtained data will support a broad range of coupled-system scientific research and provide an important foundation for advancing multiscale modeling capabilities in the Arctic.
Emergence of qualia from brain activity or from an interaction of proto-consciousness with the brain: which one is the weirder? Available evidence and a research agenda
This contribution to the science of consciousness aims at comparing how two different theories can explain the emergence of different qualia experiences, meta-awareness, meta-cognition, the placebo effect, out-of-body experiences, cognitive therapy and meditation-induced brain changes, etc. The first theory postulates that qualia experiences derive from specific neural patterns, the second one, that qualia experiences derive from the interaction of a proto-consciousness with the brain's neural activity. From this comparison it will be possible to judge which one seems to better explain the different qualia experiences and to offer a more promising research agenda.
Visual Exploration and Object Recognition by Lattice Deformation
Mechanisms of explicit object recognition are often difficult to investigate and require stimuli with controlled features whose expression can be manipulated in a precise quantitative fashion. Here, we developed a novel method (called “Dots”) for generating visual stimuli, which is based on the progressive deformation of a regular lattice of dots, driven by local contour information from images of objects. By applying progressively larger deformation to the lattice, the latter conveys progressively more information about the target object. Stimuli generated with the presented method enable precise control of object-related information content while preserving low-level image statistics globally and affecting them only minimally at the local level. We show that such stimuli are useful for investigating object recognition under a naturalistic setting – free visual exploration – enabling a clear dissociation between object detection and explicit recognition. Using the introduced stimuli, we show that top-down modulation induced by previous exposure to target objects can greatly influence perceptual decisions, lowering perceptual thresholds not only for object recognition but also for object detection (visual hysteresis). Visual hysteresis is target-specific, its expression and magnitude depending on the identity of individual objects. Relying on the particular features of dot stimuli and on eye-tracking measurements, we further demonstrate that top-down processes guide visual exploration, controlling how visual information is integrated by successive fixations. Prior knowledge about objects can guide saccades/fixations to sample locations that are expected to be highly informative, even when the actual information is missing from those locations in the stimulus. The duration of individual fixations is modulated by the novelty and difficulty of the stimulus, likely reflecting cognitive demand.
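As a rough illustration of this kind of lattice-deformation stimulus (a conceptual sketch, not the authors' actual "Dots" implementation), the snippet below displaces each dot of a regular grid toward the nearest point on an object contour, with a single strength parameter controlling how much object information is conveyed. The circular contour and the deformation fraction are hypothetical inputs chosen for illustration.

```python
import numpy as np

def deform_lattice(grid_xy, contour_xy, strength=0.3):
    """Displace each lattice dot toward its nearest contour point.

    grid_xy    : (N, 2) array of regular lattice dot positions
    contour_xy : (M, 2) array of points sampled along an object contour
    strength   : fraction of the dot-to-contour distance to move
                 (0 = undisturbed lattice, 1 = dots collapse onto the contour)
    """
    # Pairwise distances between every dot and every contour point
    d = np.linalg.norm(grid_xy[:, None, :] - contour_xy[None, :, :], axis=2)
    nearest = contour_xy[np.argmin(d, axis=1)]       # closest contour point per dot
    return grid_xy + strength * (nearest - grid_xy)  # partial displacement

# Hypothetical example: 20x20 lattice deformed toward a circular contour
xs, ys = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid = np.column_stack([xs.ravel(), ys.ravel()])
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = 0.5 + 0.3 * np.column_stack([np.cos(theta), np.sin(theta)])
stimulus = deform_lattice(grid, circle, strength=0.3)
```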
The De-Icing Comparison Experiment (D-ICE): a study of broadband radiometric measurements under icing conditions in the Arctic
Surface-based measurements of broadband shortwave (solar) and longwave (infrared) radiative fluxes using thermopile radiometers are made regularly around the globe for scientific and operational environmental monitoring. The occurrence of ice on sensor windows in cold environments – whether snow, rime, or frost – is a common problem that is difficult to prevent as well as difficult to correct in post-processing. The Baseline Surface Radiation Network (BSRN) community recognizes radiometer icing as a major outstanding measurement uncertainty. Towards constraining this uncertainty, the De-Icing Comparison Experiment (D-ICE) was carried out at the NOAA Atmospheric Baseline Observatory in Utqiaġvik (formerly Barrow), Alaska, from August 2017 to July 2018. The purpose of D-ICE was to evaluate existing ventilation and heating technologies developed to mitigate radiometer icing. D-ICE consisted of 20 pyranometers and 5 pyrgeometers operating in various ventilator housings alongside operational systems that are part of NOAA's Barrow BSRN station and the US Department of Energy Atmospheric Radiation Measurement (ARM) program North Slope of Alaska and Oliktok Point observatories. To detect icing, radiometers were monitored continuously using cameras, with a total of more than 1 million images of radiometer domes archived. Ventilator and ventilator–heater performance overall was skillful, with the average of the systems mitigating ice formation 77 % (many >90 %) of the time during which icing conditions were present. Ventilators without heating elements were also effective and capable of providing heat through roughly equal contributions of waste energy from the ventilator fan and adiabatic heating downstream of the fan. This provided ∼0.6 °C of warming, enough to subsaturate the air up to a relative humidity (with respect to ice) of ∼105 %. Because the mitigation technologies performed well, a near complete record of verified ice-free radiometric fluxes was assembled for the duration of the campaign. This well-characterized data set is suitable for model evaluation, in particular for the Year of Polar Prediction (YOPP) first Special Observing Period (SOP1). We used the data set to calculate short- and long-term biases in iced sensors, finding that biases can be up to +60 W m−2 (longwave) and −211 to +188 W m−2 (shortwave). However, because of the frequency of icing, mitigation of ice by ventilators, cloud conditions, and the timing of icing relative to available sunlight, the biases in the monthly means were generally less than the aggregate uncertainty attributed to other conventional sources in both the shortwave and longwave.
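To see why roughly 0.6 °C of warming can subsaturate air that is slightly supersaturated with respect to ice, one can compare the saturation vapor pressure over ice before and after warming at constant vapor pressure. The short sketch below uses a standard Magnus-type approximation over ice; the ambient temperature of −20 °C is an assumed value for illustration, not a number from the study.

```python
from math import exp

def e_si(t_c):
    """Saturation vapor pressure over ice (hPa), Magnus-type approximation."""
    return 6.112 * exp(22.46 * t_c / (272.62 + t_c))

t0 = -20.0       # assumed ambient temperature (deg C)
rh_ice0 = 105.0  # relative humidity w.r.t. ice before warming (%)
dt = 0.6         # warming supplied inside the ventilator (deg C)

# The vapor pressure itself is unchanged by warming; only the saturation value rises.
e = rh_ice0 / 100.0 * e_si(t0)
rh_ice1 = 100.0 * e / e_si(t0 + dt)
print(f"RH_ice drops from {rh_ice0:.0f}% to {rh_ice1:.1f}%")   # ~99%, i.e. subsaturated
```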
Current model capabilities for simulating black carbon and sulfate concentrations in the Arctic atmosphere: a multi-model evaluation using a comprehensive measurement data set
The concentrations of sulfate, black carbon (BC) and other aerosols in the Arctic are characterized by high values in late winter and spring (so-called Arctic Haze) and low values in summer. Models have long been struggling to capture this seasonality and especially the high concentrations associated with Arctic Haze. In this study, we evaluate sulfate and BC concentrations from eleven different models driven with the same emission inventory against a comprehensive pan-Arctic measurement data set over a time period of 2 years (2008–2009). The set of models consisted of one Lagrangian particle dispersion model, four chemistry transport models (CTMs), one atmospheric chemistry-weather forecast model and five chemistry climate models (CCMs), of which two were nudged to meteorological analyses and three were running freely. The measurement data set consisted of surface measurements of equivalent BC (eBC) from five stations (Alert, Barrow, Pallas, Tiksi and Zeppelin), elemental carbon (EC) from Station Nord and Alert and aircraft measurements of refractory BC (rBC) from six different campaigns. We find that the models generally captured the measured eBC or rBC and sulfate concentrations quite well, compared to previous comparisons. However, the aerosol seasonality at the surface is still too weak in most models. Concentrations of eBC and sulfate averaged over three surface sites are underestimated in winter/spring in all but one model (model means for January–March underestimated by 59 and 37 % for BC and sulfate, respectively), whereas concentrations in summer are overestimated in the model mean (by 88 and 44 % for July–September), but with overestimates as well as underestimates present in individual models. The most pronounced eBC underestimates, not included in the above multi-site average, are found for the station Tiksi in Siberia where the measured annual mean eBC concentration is 3 times higher than the average annual mean for all other stations. This suggests an underestimate of BC sources in Russia in the emission inventory used. Based on the campaign data, biomass burning was identified as another cause of the modeling problems. For sulfate, very large differences were found in the model ensemble, with an apparent anti-correlation between modeled surface concentrations and total atmospheric columns. There is a strong correlation between observed sulfate and eBC concentrations with consistent sulfate/eBC slopes found for all Arctic stations, indicating that the sources contributing to sulfate and BC are similar throughout the Arctic and that the aerosols are internally mixed and undergo similar removal. However, only three models reproduced this finding, whereas sulfate and BC are weakly correlated in the other models. Overall, no class of models (e.g., CTMs, CCMs) performed better than the others, and differences are independent of model resolution.