
    Groundwater level assessment and prediction in the Nebraska Sand Hills using LIDAR-derived lake water level

    The spatial variability of groundwater levels is often inferred from sparsely located hydraulic head observations in wells. The spatial correlation structure derived from sparse observations is associated with uncertainties that propagate to estimates at unsampled locations. In areas where surface water reflects the nearby groundwater level, remote sensing techniques can be used to increase the number of hydraulic head measurements. This research uses light detection and ranging (LIDAR) to estimate lake surface water levels and thereby characterize the groundwater level in the Nebraska Sand Hills (NSH), an area with few observation wells. The LIDAR-derived lake groundwater level accuracy was within 40 cm mean square error (MSE) of the nearest observation wells. The lake groundwater level estimates were then used to predict the groundwater level at unsampled locations using universal kriging (UK) and kriging with an external drift (KED). The results indicate unbiased estimates of groundwater level in the NSH: UK captured the influence of regional trends in the groundwater level, while KED revealed the local variation present in the groundwater level. A 10-fold cross-validation showed that KED outperformed UK in mean error (ME) [-0.003, 0.007], root mean square error (RMSE) [2.39, 4.46], residual prediction deviation (RPD) [1.32, 0.71], and mean squared deviation ratio (MSDR) [1.01, 1.49]. The research highlights that lake-derived groundwater levels provide an accurate and cost-effective approach to measure and monitor subtle changes in groundwater level in the NSH. This methodology can be applied to other locations where surface water bodies represent the water level of the unconfined aquifer, and the results can aid in groundwater management and modeling.
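    As a rough illustration of the four cross-validation diagnostics quoted above (ME, RMSE, RPD, MSDR), the Python sketch below computes them from held-out observations, kriging predictions, and kriging variances. The function name, array names, and numbers are illustrative assumptions, not the study's data or code.

```python
# Minimal sketch of common kriging cross-validation diagnostics (definitions assumed).
import numpy as np

def cross_validation_metrics(observed, predicted, kriging_variance):
    """ME, RMSE, RPD, and MSDR for held-out kriging predictions."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    kriging_variance = np.asarray(kriging_variance, dtype=float)

    residuals = observed - predicted
    me = residuals.mean()                               # mean error (bias)
    rmse = np.sqrt(np.mean(residuals ** 2))             # root mean square error
    rpd = observed.std(ddof=1) / rmse                   # residual prediction deviation
    msdr = np.mean(residuals ** 2 / kriging_variance)   # ~1 if kriging variances are well calibrated
    return {"ME": me, "RMSE": rmse, "RPD": rpd, "MSDR": msdr}

# Synthetic example values (not from the study):
obs = np.array([1102.4, 1098.7, 1105.1, 1101.0])
pred = np.array([1102.9, 1098.1, 1104.3, 1101.5])
var = np.array([0.36, 0.42, 0.31, 0.40])
print(cross_validation_metrics(obs, pred, var))
```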

    Cracking the code of oscillatory activity

    Neural oscillations are ubiquitous measurements of cognitive processes and of the dynamic routing and gating of information. A fundamental and so far unresolved problem for neuroscience is to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information theory analysis to quantify how the three parameters of oscillations (i.e., power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points. First, we demonstrate that phase codes considerably more information (2.4 times) relating to the cognitive task than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for the behavioral response, that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential role of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.
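    To make the kind of comparison described above concrete, the sketch below extracts single-trial phase and power from a band-passed EEG signal and estimates their mutual information with the stimulus category. Bin counts, the time sample, the frequency band, and all variable names are illustrative assumptions, not the authors' analysis pipeline.

```python
# Sketch: mutual information between discretized oscillatory phase/power and stimulus category.
import numpy as np
from scipy.signal import hilbert

def mutual_information(x_binned, labels):
    """Plug-in MI estimate (in bits) between a discretized variable and category labels."""
    joint = np.zeros((x_binned.max() + 1, labels.max() + 1))
    for xb, lb in zip(x_binned, labels):
        joint[xb, lb] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 500
eeg = rng.standard_normal((n_trials, n_samples))           # band-passed single trials (assumed)
labels = rng.integers(0, 7, n_trials)                      # e.g. 7 expression categories (assumed)

analytic = hilbert(eeg, axis=1)                            # analytic signal per trial
t = 250                                                    # time sample of interest (assumed)
phase = np.angle(analytic[:, t])
power = np.abs(analytic[:, t]) ** 2

phase_bins = np.digitize(phase, np.linspace(-np.pi, np.pi, 9)[1:-1])    # 8 phase bins
power_bins = np.digitize(power, np.quantile(power, [0.25, 0.5, 0.75]))  # 4 power quartiles

print("MI(phase; category) =", mutual_information(phase_bins, labels))
print("MI(power; category) =", mutual_information(power_bins, labels))
```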

    The gray matter volume of the amygdala is correlated with the perception of melodic intervals: a voxel-based morphometry study

    Music is not simply a series of organized pitches, rhythms, and timbres; it is also capable of evoking emotions. In the present study, voxel-based morphometry (VBM) was employed to explore the neural basis that may link music to emotion. To do this, we identified the neuroanatomical correlates of the ability to extract pitch interval size in a music segment (i.e., interval perception) in a large population of healthy young adults (N = 264). Behaviorally, we found that interval perception was correlated with daily emotional experiences, indicating an intrinsic link between music and emotion. Neurally, and as expected, we found that interval perception was positively correlated with the gray matter volume (GMV) of the bilateral temporal cortex. More importantly, a larger GMV of the bilateral amygdala was associated with better interval perception, suggesting that the amygdala, a neural substrate of emotional processing, is also involved in music processing. In sum, our study provides some of the first neuroanatomical evidence of an association between the amygdala and music, which contributes to our understanding of exactly how music evokes emotional responses.
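    The mass-univariate logic behind this kind of VBM correlation can be sketched as a voxel-wise correlation between gray matter volume and a behavioral score across subjects. The data shapes, names, and threshold below are illustrative assumptions; a real analysis would include covariates (age, sex, total intracranial volume) and multiple-comparison correction.

```python
# Sketch: voxel-wise correlation of gray matter volume with a behavioral score.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 264, 10_000
gmv = rng.standard_normal((n_subjects, n_voxels))    # smoothed, modulated GM maps, flattened (assumed)
score = rng.standard_normal(n_subjects)              # interval-perception accuracy (assumed)

# Vectorized Pearson correlation of the score with every voxel column.
gmv_z = (gmv - gmv.mean(axis=0)) / gmv.std(axis=0)
score_z = (score - score.mean()) / score.std()
r = gmv_z.T @ score_z / n_subjects                   # one r value per voxel

# Convert to t statistics and count voxels above an (uncorrected) threshold.
t = r * np.sqrt((n_subjects - 2) / (1 - r ** 2))
print("voxels with |t| > 3:", int(np.sum(np.abs(t) > 3)))
```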

    Mirror Mirror on the Wall, Which Is the Most Convincing of Them All? Exploring Anti-Domestic Violence Posters.

    Although domestic abuse of women by men has received significant media, police, and research attention, domestic violence directed toward men has been marginalized across the board and is still rarely treated seriously. The purpose of this research, then, is to examine and compare different anti-domestic violence messages in which the abuser's gender is not always clear. In Study 1, 200 U.K. participants (100 females and 100 males, aged 18-67, M = 28.98, SD = 9.613) evaluated posters that varied across three levels: the subject (male or female) was depicted as being silenced, bruised, or experiencing live abuse. The results showed that the posters featuring female victims were all rated as more effective than posters showing male victims. In Study 2, 140 different U.K. participants (95 females, 45 males) aged 18 to 59 (M = 27.27, SD = 10.662) evaluated cartoon facial images of Disney characters that had been altered to look like victims of violence, alongside corresponding real-life photos of human models. The results showed that the realistic posters were found to be more believable, emotional, and effective than the cartoons. The implications of such perceptions are discussed.

    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how, and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and we report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions and then dynamically converges onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g., the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g., the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170 to leave only the detailed information important for perceptual decisions over the P300.
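    A classification image in this spirit can be sketched as a per-pixel correlation between trial-by-trial random sampling masks of a face (as in the Bubbles technique) and a single-trial measure such as a behavioral response or an EEG amplitude at one electrode and time point. All shapes, names, and data below are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: reverse-correlation classification image from sampling masks and single-trial responses.
import numpy as np

rng = np.random.default_rng(2)
n_trials, height, width = 1000, 64, 64
masks = rng.random((n_trials, height, width))        # per-trial sampling masks (assumed)
response = rng.standard_normal(n_trials)             # e.g. single-trial N170 amplitude (assumed)

# Classification image = per-pixel correlation between mask value and response.
masks_flat = masks.reshape(n_trials, -1)
masks_z = (masks_flat - masks_flat.mean(axis=0)) / masks_flat.std(axis=0)
resp_z = (response - response.mean()) / response.std()
classification_image = (masks_z.T @ resp_z / n_trials).reshape(height, width)

# Pixels whose visibility reliably drives the response stand out as extreme values.
print("peak |correlation|:", float(np.abs(classification_image).max()))
```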