Literature Review Reveals a Global Access Inequity to Urban Green Spaces
Differences in access to urban resources between racial and socioeconomic groups have put pressure on effective planning and management for sustainable urban development. However, few studies have examined the multiple factors that may influence the mitigation of inequity in access to urban green spaces (UGS). This study reports the results of a systematic mapping of access-inequity research, using correspondence analysis (CA) to reveal critical trends, knowledge gaps, and clusters in a sample of 49 empirical studies screened from 563 selected papers. Our findings suggest that although the scale of cities with UGS access inequity varies between countries, large cities (more than 1,000,000 inhabitants), especially in low- and middle-income countries (LMICs), are particularly affected. Moreover, the number of cities in which high socioeconomic status (high-SES) groups (e.g., young, affluent, or employed) enjoy better access to UGS is substantially higher than the number of cities showing better accessibility for low-SES groups. Across the reviewed papers, analyses of mitigating interventions are sparse, and among the few studies that touch on them, we found that the central issues in local mitigation strategies differ between high-income countries (HICs) and LMICs. We offer an explanatory framework for the interaction between UGS access inequity and local mitigating measures.
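The correspondence analysis step mentioned above can be sketched in a few lines. The counts below are purely hypothetical (a region-by-theme cross-tabulation invented for illustration), and the implementation is the textbook SVD formulation of CA, not necessarily the authors' exact procedure.

```python
import numpy as np

def correspondence_analysis(table):
    """Classical correspondence analysis (CA) of a contingency table.

    Rows and columns whose principal coordinates lie close together on
    the leading axes are associated, which is how CA reveals clusters
    in a systematic mapping of studies.
    """
    P = table / table.sum()                    # correspondence matrix
    r = P.sum(axis=1)                          # row masses
    c = P.sum(axis=0)                          # column masses
    # Standardized residuals from the independence model r c^T
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * sv) / np.sqrt(r)[:, None]      # principal row coordinates
    cols = (Vt.T * sv) / np.sqrt(c)[:, None]   # principal column coordinates
    return rows, cols, sv ** 2                 # sv**2 are the principal inertias

# Hypothetical counts: studies cross-tabulated by region and mitigation theme
counts = np.array([[12.0, 3.0, 5.0],
                   [4.0, 9.0, 2.0],
                   [6.0, 2.0, 6.0]])
row_coords, col_coords, inertias = correspondence_analysis(counts)
```

Plotting the first two columns of `row_coords` and `col_coords` on shared axes gives the usual CA biplot; the trivial dimension is already removed by subtracting the independence model, so the last inertia is zero up to rounding.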
Neural Responses During Trace Conditioning with Face and Non-Face Stimuli Recorded with Magnetoencephalography
During fear conditioning, a subject is presented with an initially innocuous stimulus, such as an image (conditioned stimulus; CS), that predicts an aversive outcome, such as a mild electric shock (unconditioned stimulus; UCS). Subjects rapidly learn that the CS predicts the UCS and show autonomic fear responses (conditioned responses; CRs) during presentation of the CS. When the CS and the UCS coterminate, as is the case in delay conditioning, individuals can acquire CRs even if they are unable to predict the occurrence of the UCS. However, when there is a temporal gap between the CS and the UCS, CR expression is typically dependent upon explicit awareness of the CS-UCS pairing. Research with non-human animals suggests that both the hippocampus and the prefrontal cortex are needed for trace but not delay fear conditioning, and that communication between these areas may help to maintain a representation of the CS during the trace interval. We tested this hypothesis by exposing subjects to differential delay and trace fear conditioning while recording their brain activity with magnetoencephalography. Faces and houses served as CSs, and an aversive electrical stimulation served as the UCS. As predicted, subjects showed evidence of conditioning on both implicit and explicit measures. In addition, there was a learning-related increase in theta coherence between the left parahippocampal gyrus and several frontal and parietal cortical regions for trace but not delay conditioning. These results suggest that trace conditioning recruits a network of cortical regions, and that the activity of these regions is coordinated by the medial temporal lobe.
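Theta-band coherence between two recorded time courses, the measure this abstract reports, is commonly computed from Welch-style spectral estimates. A minimal sketch with simulated signals follows; the sampling rate, band limits, and signal construction are illustrative assumptions, not details from the study.

```python
import numpy as np
from scipy.signal import coherence

fs = 600.0                        # hypothetical MEG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Two simulated source time courses sharing a 6 Hz (theta) rhythm
shared = np.sin(2 * np.pi * 6 * t)
parahippocampal = shared + 0.5 * rng.standard_normal(t.size)
frontal = shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence via Welch's method
f, Cxy = coherence(parahippocampal, frontal, fs=fs, nperseg=1024)

# Average coherence over the theta band (4-8 Hz), the kind of summary
# that would be compared between trace and delay conditions
theta = (f >= 4) & (f <= 8)
theta_coherence = Cxy[theta].mean()
```

Because the two signals share only the 6 Hz component, the theta-band average comes out well above the coherence in bands containing noise alone.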
Economic Research on Ethanol Feed-Use Coproducts: A Review, Synthesis, and Path Forward
From the mid-2000s to the early 2010s, the domestic ethanol industry witnessed substantial growth, with ethanol coproducts emerging as vital elements of plant profitability and livestock feeding. Initially serving as supplementary revenue streams, coproducts from ethanol production have evolved into diverse value-added offerings, bolstering revenue and sustaining profit margins. This study reviews existing economic research on ethanol coproducts, detailing methodologies, product focus, and research locations. From an initial pool of 972 articles gathered from 9 databases, 110 articles were synthesized. We find that most studies primarily examined the growth and future of the ethanol industry, with limited focus on specific coproducts. Feed-use distillers' grains, especially dried distillers' grains, were the most widely published on, while newer coproducts such as pelletized, deoiled, and high-protein distillers' grains were relatively understudied. Non-feed-use products were notably overlooked, highlighting the need for exploration beyond conventional applications. The evolving market landscape for ethanol coproducts has outpaced published academic understanding of the economic tradeoffs, necessitating further research into product dynamics, pricing, marketing, market structures, and regulatory frameworks. This underscores the importance of investigating value-added grains across diverse commodities and geographic contexts to inform strategic decision-making and policy formulation.
Influence of the Algorithmization Process on the Mathematical Competence: A Case Study of Trainee Teachers Assessing ABN- and CBC-Instructed Schoolchildren by Gamification
In this manuscript, schoolchildren's mathematical competencies are assessed using educational gamification methodologies, specifically Educational Escape Rooms (EERs). To ease the interpretation of results, Spanish schoolchildren trained under two different methodologies (ABN and CBC) were selected to participate in the experience. The gamified environment used as the assessment tool was co-designed by trainee teachers, in-service teachers, and university researchers. The design was implemented in different educational centers, and the results were transcribed for a didactic analysis. Among the findings of this study: (i) a reduction in math anxiety; (ii) a difference in performance between the schoolchildren involved (ABN students showed an additional, positive 10% development of certain mathematical competencies); and (iii) a positive didactic-mathematical development of the participating trainee teachers.
Visualizing data mining results with the Brede tools
A few neuroinformatics databases now exist that record results from neuroimaging studies in the form of brain coordinates in stereotaxic space. The Brede Toolbox was originally developed to extract, analyze, and visualize data from one of them: the BrainMap database. Since then, the Brede Toolbox has expanded and now includes its own database of coordinates, along with ontologies for brain regions and functions: the Brede Database. With the Brede Toolbox and Database combined, we set up automated workflows for data extraction, mass meta-analytic data mining, and visualization. Most of the Web presence of the Brede Database is generated by a single script executing a workflow that performs these steps and finally generates Web pages with embedded visualizations and links to interactive three-dimensional models in the Virtual Reality Modeling Language. Apart from the Brede tools, I briefly review alternative visualization tools and methods for Internet-based visualization and information visualization, as well as portals for visualization tools.
Towards precise classification of cancers based on robust gene functional expression profiles
BACKGROUND: Development of robust and efficient methods for analyzing and interpreting high-dimension gene expression profiles continues to be a focus in computational biology. The accumulated experimental evidence supports the assumption that genes express and perform their functions in a modular fashion in cells. There is therefore room for the development of timely and relevant computational algorithms that use robust functional expression profiles for precise classification of complex human diseases at the modular level. RESULTS: Inspired by the insight that genes act as modules to carry out highly integrated cellular functions, we define a low-dimension functional expression profile for data reduction. After annotating each individual gene to the functional categories defined in a suitable gene function classification system (the Gene Ontology in this study), we identify those functional categories enriched with differentially expressed genes. For each functional category, or functional module, we compute a summary measure (s) over the raw expression values of the annotated genes to capture the overall activity level of the module. In this way, we can treat the gene expressions within a functional module as one integrative data point, replacing the multiple values of the individual genes. We compare the classification performance of decision trees based on functional expression profiles against that based on conventional gene expression profiles using four publicly available datasets; the comparison indicates that precise classification of tumour types and improved interpretation can be achieved with the reduced functional expression profiles. CONCLUSION: This modular approach is demonstrated to be a powerful alternative approach to analyzing high-dimension microarray data and is robust to the high measurement noise and intrinsic biological variance inherent in microarray data.
Furthermore, efficient integration with current biological knowledge has facilitated the interpretation of the underlying molecular mechanisms for complex human diseases at the modular level.
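The module summarization step described above, collapsing the annotated genes of each functional category into one value per sample, can be sketched as follows. The gene names, GO labels, and the choice of the mean as the summary measure s are illustrative assumptions.

```python
import numpy as np

def module_expression_profile(expr, gene_names, modules, summary=np.mean):
    """Collapse a genes-by-samples expression matrix into a
    modules-by-samples functional expression profile.

    `modules` maps a functional category (e.g. a GO term) to its list of
    annotated genes; `summary` is the per-module summary measure s (the
    mean here, though a median or first principal component also fits
    the description in the abstract).
    """
    index = {g: i for i, g in enumerate(gene_names)}
    profile = np.empty((len(modules), expr.shape[1]))
    names = []
    for row, (module, genes) in enumerate(sorted(modules.items())):
        idx = [index[g] for g in genes if g in index]
        profile[row] = summary(expr[idx], axis=0)  # one value per sample
        names.append(module)
    return names, profile

# Toy data: 4 genes, 3 samples, 2 hypothetical functional modules
expr = np.array([[1.0, 2.0, 3.0],
                 [3.0, 2.0, 1.0],
                 [5.0, 5.0, 5.0],
                 [1.0, 1.0, 1.0]])
genes = ["g1", "g2", "g3", "g4"]
modules = {"GO:apoptosis": ["g1", "g2"], "GO:cell_cycle": ["g3", "g4"]}
names, profile = module_expression_profile(expr, genes, modules)
```

A classifier such as a decision tree is then trained on the rows of `profile` instead of the raw per-gene values, which is the dimensionality reduction the abstract evaluates.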
Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification
Recent work has shown that language models' (LMs) prompt-based learning capabilities make them well suited for automating data labeling in domains where manual annotation is expensive. The challenge is that while writing an initial prompt is cheap, improving a prompt is costly: practitioners often require significant labeled data in order to evaluate the impact of prompt modifications. Our work asks whether it is possible to improve prompt-based learning without additional labeled data. We approach this problem by attempting to modify the predictions of a prompt, rather than the prompt itself. Our intuition is that accurate predictions should also be consistent: samples which are similar under some feature representation should receive the same prompt prediction. We propose Embroid, a method which computes multiple representations of a dataset under different embedding functions, and uses the consistency between the LM predictions for neighboring samples to identify mispredictions. Embroid then uses these neighborhoods to create additional predictions for each sample, and combines these predictions with a simple latent variable graphical model in order to generate a final corrected prediction. In addition to providing a theoretical analysis of Embroid, we conduct a rigorous empirical evaluation across six different LMs and up to 95 different tasks. We find that (1) Embroid substantially improves performance over original prompts (e.g., by an average of 7.3 points on GPT-JT), (2) also realizes improvements for more sophisticated prompting strategies (e.g., chain-of-thought), and (3) can be specialized to domains like law through the embedding functions.
Comment: 38 pages, 22 figures, 8 tables
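The neighborhood-consistency idea can be sketched with plain k-nearest-neighbor voting. Note that this replaces Embroid's latent variable graphical model with simple averaging, so it illustrates the intuition rather than reproducing the paper's method; the data and embedding below are toy inventions.

```python
import numpy as np

def smooth_predictions(preds, embeddings, k=3):
    """Simplified Embroid-style smoothing of binary LM predictions.

    For each embedding space, each sample gets the average label of its
    k nearest neighbors; these neighborhood votes are then averaged
    across embedding spaces and thresholded. (Embroid instead combines
    the neighborhood votes and the original predictions with a latent
    variable graphical model.)
    """
    preds = np.asarray(preds)
    votes = []
    for emb in embeddings:                 # one vote per embedding function
        # Pairwise squared distances between samples
        d = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d, np.inf)        # exclude the sample itself
        nn = np.argsort(d, axis=1)[:, :k]  # indices of k nearest neighbors
        votes.append(preds[nn].mean(axis=1))
    return (np.mean(votes, axis=0) > 0.5).astype(int)

# Toy binary task: two tight clusters, one mislabeled point in each
emb = np.array([[0.0], [0.1], [0.2], [0.3], [5.0], [5.1], [5.2], [5.3]])
preds = [1, 1, 1, 0, 0, 0, 0, 1]          # two LM mispredictions
corrected = smooth_predictions(preds, [emb], k=3)
```

Because each mislabeled point sits among consistently labeled neighbors, the neighborhood vote flips both mispredictions while leaving the rest unchanged.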
Learning and validating clinically meaningful phenotypes from electronic health data
The ever-growing adoption of electronic health records (EHRs) to record patients' health journeys has resulted in vast amounts of heterogeneous, complex, and unwieldy information [Hripcsak and Albers, 2013]. Distilling this raw data into clinical insights presents great opportunities and challenges for the research and medical communities. One approach to this distillation is computational phenotyping: the process of extracting clinically relevant and interesting characteristics from a set of clinical documentation, such as that recorded in EHRs. Clinicians can use computational phenotyping, which can be viewed as a form of dimensionality reduction in which a set of phenotypes forms a latent space, to reason about populations, identify patients for randomized case-control studies, and extrapolate patient disease trajectories. In recent years, high-throughput computational approaches have made strides in extracting potentially clinically interesting phenotypes from the data contained in EHR systems.
Tensor factorization methods have shown particular promise in deriving phenotypes. However, phenotyping via tensor factorization has the following weaknesses: 1) the extracted phenotypes can lack diversity, which makes them harder for clinicians to reason about and use in practice; 2) many tensor factorization methods are unsupervised and do not utilize side information that may be available about the population or about the relationships between the clinical characteristics in the data (e.g., diagnoses and medications); and 3) validating the clinical relevance of the extracted phenotypes requires domain training and expertise. This dissertation addresses all three of these limitations. First, we present tensor factorization methods that discover sparse and concise phenotypes in unsupervised, supervised, and semi-supervised settings. Second, via two tools we built, we show how to leverage domain expertise, in the form of publicly available medical articles, to evaluate the clinical validity of the discovered phenotypes. Third, we combine tensor factorization and the phenotype validation tools to guide the discovery process toward more clinically relevant phenotypes.
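The tensor factorization underlying such phenotyping can be illustrated with a plain CP decomposition fit by alternating least squares. The patients-by-diagnoses-by-medications framing and all dimensions below are hypothetical, and this generic ALS is not the dissertation's sparse or supervised methods.

```python
import numpy as np

def khatri_rao(a, b):
    """Column-wise Kronecker product of two factor matrices."""
    return (a[:, None, :] * b[None, :, :]).reshape(-1, a.shape[1])

def cp_als(X, rank, iters=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor by alternating
    least squares. For phenotyping, X could be a patients x diagnoses x
    medications count tensor; each rank-1 component is then read as a
    candidate phenotype (diagnoses co-occurring with medications in a
    subpopulation of patients).
    """
    rng = np.random.default_rng(seed)
    factors = [rng.random((n, rank)) for n in X.shape]
    for _ in range(iters):
        for mode in range(3):
            others = [f for m, f in enumerate(factors) if m != mode]
            kr = khatri_rao(others[0], others[1])
            # Mode-n unfolding consistent with the C-ordered Khatri-Rao above
            unfolded = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
            factors[mode] = unfolded @ kr @ np.linalg.pinv(kr.T @ kr)
    return factors

# Toy tensor built from 2 planted rank-1 "phenotypes"
rng = np.random.default_rng(1)
true = [rng.random((6, 2)), rng.random((5, 2)), rng.random((4, 2))]
X = np.einsum('ir,jr,kr->ijk', *true)
A, B, C = cp_als(X, rank=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
```

On this exactly rank-2 tensor, ALS recovers a factorization whose reconstruction error is small; on real EHR counts, nonnegativity and sparsity constraints (as in the dissertation) make the components far more interpretable.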