
    Big Data in Economics

    Big Data refers to data sets of much larger size, higher frequency, and often more personalized information than traditional sources. Examples include data collected by smart sensors in homes or aggregated tweets on Twitter. In small data sets, traditional econometric methods tend to outperform more complex machine-learning techniques; in large data sets, however, machine-learning methods shine. New analytic approaches are needed to make the most of Big Data in economics, and researchers and policymakers should pay close attention to recent developments in machine-learning techniques if they want to take full advantage of these new data sources.
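
    As a rough illustration of the small-versus-large-sample claim above, the sketch below (not from the paper; it assumes scikit-learn and a purely simulated, hypothetical data-generating process) cross-validates an OLS baseline against a random forest at two sample sizes.

```python
# Hedged sketch: compare a classical linear regression with a flexible
# ML method as the sample grows. All data are simulated; nothing here
# comes from the paper itself.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def simulate(n, p=10):
    """Hypothetical nonlinear data-generating process."""
    X = rng.normal(size=(n, p))
    y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(scale=0.5, size=n)
    return X, y

for n in (100, 10_000):
    X, y = simulate(n)
    for name, model in [("OLS", LinearRegression()),
                        ("Random forest", RandomForestRegressor(
                            n_estimators=100, random_state=0))]:
        # 5-fold cross-validated R^2 as a simple out-of-sample yardstick
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"n={n:>6}  {name:<13}  CV R^2 = {r2:.2f}")
```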

    Hybrid U-Net: Semantic Segmentation of High-Resolution Satellite Images to Detect War Destruction

    Destruction caused by violent conflict plays an important role in understanding the dynamics and consequences of conflict, now the focus of a large body of ongoing literature in economics and political science. However, existing data on conflict largely come from news or eyewitness reports, which makes them incomplete, potentially unreliable, and biased for ongoing conflicts. Using satellite images and deep learning techniques, we can automatically extract objective information on violent events. To automate this process, we created a dataset of high-resolution satellite images of Syria and manually annotated the destroyed areas pixel-wise. We then used this dataset to train and test semantic segmentation networks to detect building damage of various sizes. We specifically utilized a U-Net model for this task due to its promising performance on small and imbalanced datasets. However, the raw U-Net architecture does not fully exploit multi-scale feature maps, which are among the important factors for generating fine-grained segmentation maps, especially for high-resolution images. To address this deficiency, we propose a multi-scale feature fusion approach and design a multi-scale skip-connected Hybrid U-Net for segmenting high-resolution satellite images. In our experiments, U-Net and its variants demonstrated promising segmentation results for detecting various types of war-related building destruction. In addition, Hybrid U-Net yielded a significant improvement in segmentation performance over U-Net and other baselines: the mean intersection over union and mean Dice score improved by 7.05% and 8.09%, respectively, compared to the raw U-Net.
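
    To make the multi-scale skip-connection idea concrete, here is a minimal, hypothetical PyTorch sketch of one fusion stage: encoder feature maps from several scales are resized to the decoder's resolution, concatenated, and merged with a 1×1 convolution. Channel counts and the fusion operator are illustrative assumptions, not the paper's actual Hybrid U-Net.

```python
# Hedged sketch of multi-scale skip fusion in a U-Net decoder, assuming
# PyTorch. Depth, channel counts, and fusion details are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSkipFusion(nn.Module):
    """Fuse encoder feature maps from several scales into one decoder stage."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # 1x1 conv merges the concatenated multi-scale features
        self.fuse = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, features, target_size):
        # Resize every encoder map to the decoder's spatial size, then concat.
        resized = [F.interpolate(f, size=target_size, mode="bilinear",
                                 align_corners=False) for f in features]
        return self.fuse(torch.cat(resized, dim=1))

# Toy usage: three encoder scales feeding one decoder stage.
feats = [torch.randn(1, c, s, s) for c, s in [(64, 128), (128, 64), (256, 32)]]
fusion = MultiScaleSkipFusion(in_channels=[64, 128, 256], out_channels=128)
out = fusion(feats, target_size=(128, 128))
print(out.shape)  # torch.Size([1, 128, 128, 128])
```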

    “It can’t really be answered in an information pack…”: A realist evaluation of a telephone housing options service for older people.

    Despite calls for better support to aid older people’s decision-making about housing options, two recent literature reviews highlight a paucity of research on the efficacy of such services. This paper reports a qualitative realist evaluation of the efficacy of a UK telephone service providing information to older people about specialist housing. The findings of 31 realist interviews with 16 older people are presented. Information provided to social tenants added little to what they already knew, since they tended to be familiar with and knowledgeable about the housing options available to them. Information seekers in mainstream housing (typically owner-occupiers), who were less familiar with housing options, remained uncertain about their housing issues and tended to want more substantive discussion and deliberation to become better informed. Information was considered too ‘light touch’, although the widely recognised shortage of accessible housing options and reports of non-transparent and unresponsive market practices were also key factors. This study underlines the widely acknowledged need to increase the supply of specialist housing and, in the current UK context, recommends that housing options support for older people be more substantive, particularly for those residing in mainstream housing.

    Molecular flexibility of citrus pectins by combined sedimentation and viscosity analysis

    The flexibility/rigidity of pectins plays an important part in their structure-function relationship and therefore in their commercial applications in the food and biomedical industries. Earlier studies based on sedimentation analysis in the ultracentrifuge have focused on molecular weight distributions and on qualitative and semi-quantitative descriptions of conformation, based on power-law and Wales-van Holde treatments, in terms of "extended" conformations [Harding, S. E., Berth, G., Ball, A., Mitchell, J. R., & García de la Torre, J. (1991). The molecular weight distribution and conformation of citrus pectins in solution studied by hydrodynamics. Carbohydrate Polymers, 16, 1-15; Morris, G. A., Foster, T. J., & Harding, S. E. (2000). The effect of degree of esterification on the hydrodynamic properties of citrus pectin. Food Hydrocolloids, 14, 227-235]. In the present study, four pectins of low degree of esterification (17-27%) and one of high degree of esterification (70%) were characterised in aqueous solution (0.1 M NaCl) in terms of intrinsic viscosity [η], sedimentation coefficient (s°20,w), and weight-average molar mass (Mw). Solution conformation/flexibility was estimated qualitatively using the conformation zoning method [Pavlov, G. M., Rowe, A. J., & Harding, S. E. (1997). Conformation zoning of large molecules using the analytical ultracentrifuge. Trends in Analytical Chemistry, 16, 401-405] and quantitatively (persistence length Lp) using the traditional Bohdanecky and Yamakawa-Fujii relations combined by minimisation of a target function. Sedimentation conformation zoning showed an extended-coil (Type C) conformation, and persistence lengths all fell within the range Lp = 10-13 nm (for a fixed mass per unit length).
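
    The combined-minimisation step can be sketched generically: choose the persistence length Lp that best reconciles predicted and measured values of both [η] and s°20,w. The two model functions below are crude placeholders standing in for the real Bohdanecky and Yamakawa-Fujii relations (which involve tabulated wormlike-chain coefficients), and every number is hypothetical; only the target-function idea comes from the abstract.

```python
# Hedged sketch of minimising a target function over the persistence
# length Lp, assuming SciPy. Placeholder models, hypothetical numbers.
import numpy as np
from scipy.optimize import minimize_scalar

eta_obs, s_obs, Mw = 450.0, 2.1, 1.5e5  # hypothetical [eta] (ml/g), s (S), g/mol

def eta_model(Lp, Mw):
    # Placeholder for a Bohdanecky-type viscosity relation
    return 0.05 * Lp * Mw**0.5

def s_model(Lp, Mw):
    # Placeholder for a Yamakawa-Fujii-type sedimentation relation
    return 8.0 * Mw**0.5 / (Lp * 100.0)

def target(Lp):
    # Sum of squared relative residuals from both hydrodynamic measurements
    return ((eta_model(Lp, Mw) - eta_obs) / eta_obs) ** 2 \
         + ((s_model(Lp, Mw) - s_obs) / s_obs) ** 2

res = minimize_scalar(target, bounds=(1.0, 100.0), method="bounded")
print(f"Lp minimising the target function: {res.x:.1f} nm")
```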

    Thermospheric Weather as Observed by Ground‐Based FPIs and Modeled by GITM

    The first long‐term comparison of day‐to‐day variability (i.e., weather) in the thermospheric winds between a first‐principles model and data is presented. The definition of weather adopted here is the difference between daily observations and long‐term averages at the same UT. A year‐long run of the Global Ionosphere Thermosphere Model (GITM) is evaluated against a nighttime neutral‐wind data set compiled from six Fabry‐Perot interferometers at middle and low latitudes. First, the temporal persistence of quiet‐time fluctuations above the background climate is evaluated, and the decorrelation time (the time lag at which the autocorrelation function drops to 1/e) is found to be in good agreement between the data (1.8 hr) and the model (1.9 hr). Next, comparisons between sites are made to determine the decorrelation distance (the distance at which the cross‐correlation drops to 1/e). Larger Fabry‐Perot interferometer networks are needed to determine the decorrelation distance conclusively, but the current data set suggests that it is ∼1,000 km. In the model the decorrelation distance is much larger, indicating that the model results contain too little spatial structure. The measured decorrelation time and distance are useful for tuning assimilative models and are notably shorter than the scales expected if tidal forcing were responsible for the variability, suggesting that some other source dominates the weather. Finally, the model‐data correlation is poor (−0.07 < ρ < 0.36), and the magnitude of the weather is underestimated in the model by 65%.

    Plain Language Summary: Much like in the lower atmosphere, weather in the upper atmosphere is harder to predict than climate. Physics‐based models are becoming sophisticated enough that they can in principle predict the weather, and we present the first long‐term evaluation of how well a particular model, the Global Ionosphere Thermosphere Model, performs. To evaluate the model, we compare it with a year of data from six ground‐based sites that measure the thermospheric wind. First, we calculate statistics of the weather, such as the decorrelation time, which characterizes how long weather fluctuations persist (1.8 hr in the data and 1.9 hr in the model). We also characterize the spatial decorrelation by comparing weather at different sites. The model predicts that the weather is much more widespread than the data indicate; sites that are 790 km apart have a measured correlation of 0.4, while the modeled correlation is 0.8. In terms of being able to actually predict a weather fluctuation on a particular day, the model performs poorly, with a correlation that is near zero at the low‐latitude sites but reaches an average of 0.19 at the midlatitude sites, which are closer to the source that most likely dominates the weather: heating in the auroral zone.

    Key Points: A long‐term data‐model comparison of day‐to‐day thermospheric variability finds that GITM represents the weather poorly (−0.07 < ρ < 0.36). The average measured decorrelation time of 1.8 hr agrees with the modeled time of 1.9 hr. The weather in GITM contains too little spatial structure compared with the measured ∼1,000‐km decorrelation distance.
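
    The decorrelation time used above is straightforward to state in code: the first lag at which the autocorrelation of the weather series (observation minus climatological average) falls below 1/e. A minimal sketch, assuming NumPy and a synthetic AR(1) series in place of real Fabry-Perot data:

```python
# Hedged sketch of the decorrelation-time definition from the abstract.
# The series, cadence, and correlation scale below are invented.
import numpy as np

def decorrelation_time(x, dt_hours):
    """Lag (hours) at which the autocorrelation first falls below 1/e."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]                      # normalise so acf[0] == 1
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dt_hours if below.size else np.nan

# Toy example: AR(1) noise with a built-in ~2 h correlation time, 10-min cadence.
rng = np.random.default_rng(1)
dt = 10 / 60                           # hours per sample
phi = np.exp(-dt / 2.0)                # AR(1) coefficient for a 2 h e-folding
w = np.zeros(5000)
for i in range(1, w.size):
    w[i] = phi * w[i - 1] + rng.normal()
print(f"decorrelation time = {decorrelation_time(w, dt):.1f} h")  # about 2.0 h
```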

    ATM Dependent Silencing Links Nucleolar Chromatin Reorganization to DNA Damage Recognition

    Resolution of DNA double-strand breaks (DSBs) is essential for the suppression of genome instability. DSB repair in transcriptionally active genomic regions represents a unique challenge that is associated with ataxia telangiectasia mutated (ATM) kinase-mediated transcriptional silencing. Despite emerging insights into the underlying mechanisms, how DSB silencing connects to DNA repair remains undefined. We observe that silencing within the rDNA depends on persistent DSBs. Non-homologous end-joining was the predominant mode of DSB repair allowing transcription to resume. ATM-dependent rDNA silencing in the presence of persistent DSBs led to the large-scale reorganization of nucleolar architecture, with movement of damaged chromatin to nucleolar cap regions. These findings identify ATM-dependent temporal and spatial control of DNA repair and provide insights into how communication between DSB signaling and ongoing transcription promotes genome integrity.

    Evidence for variable selective pressures at MC1R

    It is widely assumed that genes that influence variation in skin and hair pigmentation are under selection. To date, the melanocortin 1 receptor (MC1R) is the only gene identified that explains substantial phenotypic variance in human pigmentation. Here we investigate MC1R polymorphism in several populations for evidence of selection. We conclude that MC1R is under strong functional constraint in Africa, where any diversion from eumelanin production (black pigmentation) appears to be evolutionarily deleterious. Although many of the MC1R amino acid variants observed in non-African populations do affect MC1R function and contribute to high levels of MC1R diversity in Europeans, we found no evidence, in either the magnitude or the patterns of diversity, for its enhancement by selection; rather, our analyses show that levels of MC1R polymorphism simply reflect neutral expectations under relaxation of strong functional constraint outside Africa.
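
    One common way to compare observed diversity against neutral expectations, sketched below with invented toy haplotypes rather than MC1R data, is to contrast pairwise nucleotide diversity (π) with Watterson's θ: under neutrality the two estimate the same parameter, so a large gap between them (the quantity at the heart of Tajima's D) hints at selection. The abstract does not name its specific tests, so this is illustrative only.

```python
# Hedged sketch of a neutrality check: compare Watterson's theta with
# pairwise diversity pi on invented toy haplotypes (not MC1R data).
import itertools
import numpy as np

haps = np.array([list(s) for s in [
    "ACGTA", "ACGTA", "ACGAA", "TCGTA", "ACGTG",
]])

n, L = haps.shape
# Segregating sites: columns with more than one allele
segregating = sum(len(set(haps[:, j])) > 1 for j in range(L))

# Watterson's theta: S / a_n, with a_n the (n-1)th harmonic number
a_n = sum(1.0 / i for i in range(1, n))
theta_w = segregating / a_n

# Pairwise nucleotide diversity pi: mean number of differences per pair
pairs = itertools.combinations(range(n), 2)
pi = np.mean([np.sum(haps[i] != haps[j]) for i, j in pairs])

print(f"S = {segregating}, theta_W = {theta_w:.2f}, pi = {pi:.2f}")
```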

    Lipid, detergent, and Coomassie Blue G-250 affect the migration of small membrane proteins in blue native gels: Mitochondrial carriers migrate as monomers not dimers

    Background: Mitochondrial carriers were thought to be dimeric based on their migration in blue native gels.
    Results: The high molecular mass species observed in blue native gels are composed of protein monomers, detergent, lipid, and Coomassie stain.
    Conclusion: The mitochondrial carriers are monomeric, not dimeric.
    Significance: The apparent mass of small membrane proteins in blue native gels requires significant correction.