
    Optimizing automated preprocessing streams for brain morphometric comparisons across multiple primate species

    INTRODUCTION

MR techniques have delivered images of brains from a wide array of species, ranging from invertebrates to birds to elephants and whales. However, their potential to serve as a basis for comparative brain morphometric investigations has so far rarely been tapped (Christidis & Cox, 2006; Van Essen & Dierker, 2007), which also hampers a deeper understanding of the mechanisms behind structural alterations in neurodevelopmental disorders (Kochunov et al., 2010). One reason for this is the lack of computational tools suitable for morphometric comparisons across multiple species. In this work, we aim to characterize this gap, taking primates as an example.

METHODS

Using a legacy dataset comprising MR scans from eleven species of haplorhine primates acquired on the same scanner (Rilling & Insel, 1998), we tested different automated processing streams, focusing on denoising and brain segmentation. Newer multi-species datasets are not currently available, so our experiments with this decade-old dataset (which has a very low signal-to-noise ratio by contemporary standards) serve to highlight the lower boundary of what current automated processing pipelines can achieve. After manual orientation into Talairach space, an automated bias correction was performed with CARET (Van Essen et al., 2001) before the brains were extracted with FSL BET (Smith, 2002; Fig. 1) and then either left unsmoothed or denoised with an isotropic Gaussian kernel, FSL SUSAN (Smith, 1996), an anisotropic diffusion filter (Perona & Malik, 1990), or an optimized Rician non-local means filter (Gaser & Coupé, 2010) (Fig. 2 & 3). Segmentation of the brains (Fig. 2 & 4) was performed separately, either with FSL FAST (Zhang et al., 2001) without atlas priors or with an adaptive maximum a posteriori approach (Rajapakse et al., 1997). Finally, the white matter surface was extracted with CARET and inspected for anatomical and topological correctness.
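To make one such processing path concrete, the following is a minimal sketch using FSL command-line tools driven from Python. The file names, the BET fractional-intensity threshold, and the smoothing width are hypothetical placeholders, and the CARET-based bias correction and surface extraction steps are not reproduced here.

```python
import subprocess
from pathlib import Path

def run(cmd):
    """Run one FSL command-line tool and fail loudly if it errors."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

def preprocess(t1_image: str, out_dir: str, bet_frac: float = 0.5, sigma_mm: float = 0.5):
    """One hypothetical denoise-and-segment path: BET skull stripping,
    isotropic Gaussian smoothing, and FAST tissue segmentation without priors."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    brain = out / "brain.nii.gz"
    smoothed = out / "brain_smooth.nii.gz"

    # Brain extraction (Smith, 2002); -f sets the fractional intensity
    # threshold, which typically needs tuning away from the human default
    # for small primate brains.
    run(["bet", t1_image, str(brain), "-f", str(bet_frac)])

    # Simple isotropic Gaussian smoothing (sigma in mm) as the denoising step.
    run(["fslmaths", str(brain), "-s", str(sigma_mm), str(smoothed)])

    # Three-class tissue segmentation on a T1 image, no atlas priors.
    run(["fast", "-t", "1", "-n", "3", "-o", str(out / "seg"), str(smoothed)])

if __name__ == "__main__":
    preprocess("macaque_T1.nii.gz", "results/macaque")
```

As the Results note, deviating from the human-default parameters (for example the BET -f threshold) was often necessary for non-human brains.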

RESULTS

Figure 3 shows that noise reduction was generally necessary but that, at least for these noisy data, anisotropic filtering (SUSAN, diffusion filter, Rician filter) provided little improvement over simple isotropic filtering. While several segmentations worked well in individual species, our focus was on cross-species optimization of the processing pipeline, and none of the tested segmentations performed uniformly well in all eleven species. Performance could be improved by some of the denoising approaches and by deviating systematically from the default parameters recommended for processing human brains (cf. Fig. 4). Depending on the size of the brain and on the processing path, generating the white matter surface from the T1 image took from about two minutes (squirrel monkeys) to half an hour (humans) on a dual-core 2.4 GHz iMac. Nonetheless, the resulting surfaces always required topology correction and often considerable manual cleanup.


CONCLUSIONS

Automated processing pipelines for surface-based morphometry still require considerable adaptation to reach optimal performance across brains of multiple species, even within primates (cf. Fig. 5). However, most contemporary datasets have a better signal-to-noise ratio than the one used here, which allows for better segmentations and cortical surface reconstructions. Considering further that cross-scanner variability lies well below within-species differences (Stonnington et al., 2008), the prospects look good for comparative evolutionary analyses of cortical parameters, and of gyrification in particular. To succeed, however, computational efforts on comparative morphometry depend on high-quality imaging data from multiple species becoming more widely available.

ACKNOWLEDGMENTS

D.M., R.D., and C.G. are supported by the German BMBF grant 01EV0709.


REFERENCES

Christidis, P & Cox, RW (2006), A Step-by-Step Guide to Cortical Surface Modeling of the Nonhuman Primate Brain Using FreeSurfer, Proc Human Brain Mapping Annual Meeting, http://afni.nimh.nih.gov/sscc/posters/file.2006-06-01.4536526043 .
Gaser, C & Coupé, P (2010), Impact of Non-local Means filtering on Brain Tissue Segmentation, OHBM 2010, Abstract 1770.
Kochunov, P et al. (2010), Mapping primary gyrogenesis during fetal development in primate brains: high-resolution in utero structural MRI study of fetal brain development in pregnant baboons, Frontiers in Neuroscience, in press, DOI: 10.3389/fnins.2010.00020.
Perona, P & Malik, J (1990), Scale-space and edge detection using anisotropic diffusion, IEEE Trans Pattern Anal Machine Intell, vol. 12, no. 7, pp. 629-639.
Rajapakse, JC et al. (1997), Statistical approach to segmentation of single-channel cerebral MR images, IEEE Trans Med Imaging, vol. 16, no. 2, pp. 176-186.
Rilling, JK & Insel, TR (1998), Evolution of the cerebellum in primates: differences in relative volume among monkeys, apes and humans, Brain Behav Evol, vol. 52, pp. 308-314, DOI: 10.1159/000006575. Dataset available at http://www.fmridc.org/f/fmridc/77.html .
Smith, SM (1996), Flexible filter neighbourhood designation, Proc. 13th Int. Conf. on Pattern Recognition, vol. 1, pp. 206-212.
Smith, SM (2002), Fast robust automated brain extraction, Hum Brain Mapp, vol. 17, no. 3, pp. 143-155.
Stonnington, CM et al. (2008), Interpreting scan data acquired from multiple scanners: a study with Alzheimer's disease, Neuroimage, vol. 39, no. 3, pp. 1180-1185.
Van Essen, DC et al. (2001), An Integrated Software System for Surface-based Analyses of Cerebral Cortex, J Am Med Inform Assoc, vol. 8, no. 5, pp. 443-459.
Van Essen, DC & Dierker, DL (2007), Surface-based and probabilistic atlases of primate cerebral cortex, Neuron, vol. 56, no. 2, pp. 209-225.
Zhang, Y et al. (2001), Segmentation of brain MR images through a hidden Markov random field model and the expectation maximization algorithm, IEEE Trans Med Imaging, vol. 20, no. 1, pp. 45-57.

    The Cost of the Culturati: Studying the Neighborhood Stability Impact of Cultural District Designations

    The decision to declare a district for a specific cause is a critical policy decision; making an area an official office park or designated cultural site means it will attract specific types of residents and businesses and require specific amenities. This paper reviews the impact of designating a cultural district as a place-based policy, specifically by developing a measure of neighborhood stability and applying a stress test of neighborhood stability in cultural districts during the Great Recession. The model underpinning the neighborhood stability measure is an optimal stopping time model which frames neighborhood rents as a Brownian motion with drift. This structure imposes minimalist assumptions and yields two reduced-form parameters which describe individual preferences for how long to live in a neighborhood. The analysis is in the style of Alvarez (2015). The parameters are then used to test neighborhood stability, with the result that neighborhoods designated specifically as cultural districts are far less likely to experience negative stability (e.g., large amounts of residential out-migration and thus shorter residency spells), with a causal effect size four times larger than that of the recession itself. However, such neighborhoods are also more likely to experience an influx of newer, higher-income residents after designation, implying that the beneficiaries of the new stability may be those who priced out the original creators of the neighborhood's cultural capital.
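    For readers unfamiliar with this class of model, a generic sketch of the setup the abstract describes is given below; the notation (rent process, drift, volatility, stopping threshold) is illustrative and not taken from the paper itself.

```latex
% Illustrative optimal-stopping setup; notation not from the paper.
\[
  dR_t = \mu\,dt + \sigma\,dW_t, \qquad
  \tau^{*} = \inf\{\, t \ge 0 : R_t \ge \bar{R} \,\},
\]
\[
  V(R_0) = \sup_{\tau}\; \mathbb{E}\!\left[ \int_{0}^{\tau} e^{-\rho t}\, u(R_t)\, dt \right].
\]
```

    Here R_t is the neighborhood rent, W_t a standard Brownian motion and rho a discount rate; the threshold rent and the drift-to-volatility ratio play the role of reduced-form parameters summarizing how long residents prefer to stay. The paper's exact specification may differ.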

    Capuchin Search Particle Swarm Optimization (CS-PSO) based Optimized Approach to Improve the QoS Provisioning in Cloud Computing Environment

    This paper introduces methods for further enhancing resource allocation in cloud computing environments under QoS constraints. Because resource distribution directly affects the quality of service (QoS) of cloud deployments, QoS constraints such as response time, throughput, waiting time, and makespan are key factors to take into account. The approach uses the Capuchin Search Particle Swarm Optimization (CS-PSO) algorithm to streamline resource allocation while respecting these constraints. Throughput, response time, makespan, waiting time, and resource utilization are among the objectives the approach optimizes. The method partitions the resources using a K-medoids clustering scheme: tasks are grouped into clusters, and the resource allocation step is then optimized to obtain the best configuration. The experimental setup uses a Java implementation and the GWA-T-12 Bitbrains dataset for simulation, and the improved algorithm is applied to the extreme-value optimization problem of the multivariable objective function. The simulation findings show that the baseline Cloud Particle Swarm Optimization (CPSO) algorithm repeatedly failed to reach convergence within 500 generations, and the comparative analysis reveals that the developed model outperforms state-of-the-art approaches. Overall, the approach provides a robust and effective procedure for improving resource allocation in cloud computing environments and can be applied to a variety of related problems, such as virtual machine placement, job scheduling, and resource provisioning. As a result, the Capuchin Search Particle Swarm Optimization (CS-PSO) algorithm offers desirable optimization properties, including simple optimization mathematics, fast convergence, high efficiency, and good population diversity.
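    The abstract does not spell out the CS-PSO update rules, but the particle swarm component it builds on is standard. The sketch below shows a plain PSO minimizing a placeholder QoS-style cost function; all parameter values and the cost function itself are chosen purely for illustration and are not taken from the paper.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
    """Minimal particle swarm optimizer (not the paper's CS-PSO variant)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy stand-in for a QoS cost: squared distance to a hypothetical ideal allocation.
ideal = np.linspace(0.2, 0.8, 5)
best, best_cost = pso(lambda x: float(np.sum((x - ideal) ** 2)), dim=5)
print(best, best_cost)
```

    A CS-PSO variant would replace or augment the velocity update with capuchin-search moves; only the generic swarm loop is shown here.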

    Leveraging the private sector to enable the delivery of well-located affordable housing in Cape Town

    Affordable housing in Cape Town tends to be located far away from economic opportunities, social facilities and public transport infrastructure, which serves to reinforce inequality, burdening poor households and the City. This dissertation explores the current challenges in bringing well-located, affordable housing units to market in Cape Town; the opportunities for greater private sector participation; and the public interventions required in order to enable actors to overcome these challenges and capitalise on the opportunities. These issues were gradually refined from a global scale to a local area, beginning with a review of the relevant urban development and housing economics literature in order to form a theoretical framework, followed by an overview of the local housing market and national housing policy. Precedent studies, interviews and a workshop were then conducted with participants from the private and public sectors, NGOs and academia in order to explore the key challenges, opportunities and potential solutions in Cape Town. Finally, these challenges and opportunities were investigated and interventions proposed in a particular context, namely the Parow train station precinct within the Voortrekker Road Corridor (VRC) in Cape Town. While a comprehensive review of national housing policy and funding is required, the focus of this dissertation is on the many city-scale interventions which are possible within the short to medium term, which tackle inefficiencies in the market and regulatory system in order to leverage the power of the private sector towards the goal of well-located affordable housing. The findings for Cape Town indicate that the greatest challenges for developers are the limited availability of well-located land at affordable prices; lack of depreciated, higher-density buildings for redevelopment; excessive parking ratios; delays in the development process; and a lack of nuanced market demand information. Fortunately, there are many opportunities, including a capable and facilitative municipality in Cape Town; growing private sector interest in affordable housing; the power of small-scale landlords and innovative design; a shift from ownership to rental; and potential synergy between affordable housing, transit-oriented development (TOD) and urban regeneration (provided policy and public spending are aligned). Key recommendations for public intervention, applicable both city-wide and to the Parow Study Area, are: firstly, to urgently develop programmatic (national and city scale) and area-based (precinct scale) strategies which position affordable housing (including social housing) as a catalyst for urban regeneration and TOD, and align public investment in order to incrementally densify appropriate areas; secondly, to protect and package public land for affordable housing and other public benefit uses; and thirdly, to remove obstacles to private sector provision of affordable housing by both institutional and small-scale actors (for example, by reducing parking requirements and restrictive development parameters (potentially through affordable housing overlay zones), making market data available and fast-tracking approvals). An essential institutional intervention is the creation of an inter-departmental 'affordable housing task-team' within the municipality to champion and facilitate such interventions.

    An Investigation towards Effectiveness in Image Enhancement Process in MPSoC

    Image enhancement plays a fundamental role in vision-based applications. It involves processing the input image to improve its visual quality for various applications. The primary objective is to remove unwanted noise, clutter and blur, or to sharpen the image. Characteristics such as resolution and contrast are deliberately altered to obtain an enhanced image, for example in the biomedical field. The paper highlights different techniques proposed for the digital enhancement of images. After surveying methods that utilize Multiprocessor System-on-Chip (MPSoC) platforms, it concludes that these methodologies offer limited accuracy and hence none of them is capable of efficiently enhancing digital biomedical images.
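    As a point of reference for the kind of contrast adjustment the abstract refers to, here is a minimal sketch of global histogram equalization on an 8-bit grayscale image using NumPy. It is a generic illustration, not one of the MPSoC-based methods the paper surveys.

```python
import numpy as np

def histogram_equalize(img: np.ndarray) -> np.ndarray:
    """Spread the intensity histogram of an 8-bit grayscale image
    to improve global contrast."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative distribution.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Example on a synthetic low-contrast image.
rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
enhanced = histogram_equalize(low_contrast)
print(low_contrast.min(), low_contrast.max(), "->", enhanced.min(), enhanced.max())
```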

    Identifying Ultra-Cool Dwarfs at Low Galactic Latitudes: A Southern Candidate Catalogue

    We present an Ultra-Cool Dwarf (UCD) catalogue compiled from low southern Galactic latitudes and the mid-plane, from a cross-correlation of the 2MASS and SuperCOSMOS surveys. The catalogue contains 246 members identified from 5042 sq. deg. within 220 deg. <= l <= 360 deg. and 0 deg. < l <= 30 deg., for |b| <= 15 deg. Sixteen candidates are spectroscopically confirmed in the near-IR as UCDs with spectral types from M7.5V to L9. Our catalogue selection method is presented, enabling UCDs from ~M8V to the L-T transition to be selected down to a 2MASS limiting magnitude of Ks ~= 14.5 mag. This method does not require candidates to have optical detections for catalogue inclusion. An optimal set of optical/near-IR and reduced proper-motion selection criteria has been defined that includes: an Rf and Ivn photometric surface gravity test, a dual Rf-band variability check, and an additional photometric classification scheme to selectively limit contaminants. We identify four candidates as possible companions to nearby Hipparcos stars; observations are needed to identify these as potential benchmark UCD companions. We also identify twelve UCDs within a possible distance of 20 pc; three are previously unknown, of which two are estimated to lie within 10 pc, complementing the nearby volume-limited census of UCDs. An analysis of the catalogue spatial completeness provides estimates for distance completeness over three UCD MJ ranges, while Monte Carlo simulations provide an estimate of catalogue areal completeness at the 75 per cent level. We estimate a UCD space density of Rho(total) = (6.41+-3.01)x10^-3/pc^3 over the range 10.5 <= MJ ~< 14.9, similar to values measured at higher Galactic latitudes (|b| ~> 10 deg.) in the field population and obtained from more robust spectroscopically confirmed UCD samples. Comment: MNRAS accepted April 2012. Contains 30 figures and 11 tables. Tables 2 and 6 to be published in full and on-line only. The on-line tables can also be obtained by contacting the author.
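    The reduced proper-motion selection mentioned above relies on the standard quantity combining apparent magnitude and proper motion as a distance-free proxy for absolute magnitude. A sketch of the usual definition, written here for the J band with the proper motion in arcsec per year, is given below; the catalogue's actual cuts are those specified in the paper.

```latex
% Standard reduced proper motion in the J band (\mu in arcsec\,yr^{-1}):
\[
  H_J = J + 5\log_{10}\mu + 5
      = M_J + 5\log_{10} v_{\mathrm{tan}} - 3.379 ,
\]
% with v_tan in km\,s^{-1}; at a given colour, large H_J favours nearby,
% fast-moving dwarfs over distant giants.
```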

    SDR reader for passive backscatter sensors

    As the Internet of Things concept moves closer to reality, some technological gaps come to light and require addressing. Advancements in reader technology are needed in order to achieve higher degrees of capability and adaptability at lower costs. Conventional readers are currently priced in the range of thousands of euros. It is crucial to lower this entry barrier, because the lower it gets, the more attractive IoT solutions will become and the faster the global network of sensors will grow. Improvements in the performance and cost of single-board computers such as the Raspberry Pi, and of RF front-ends such as the RTL-SDR, have opened the way to a new class of ultra-low-cost readers. This dissertation explores a new possibility for the reception and demodulation of an RF signal transmitted from a passive sensor using these technologies. The final prototype's architecture was designed and implemented so that the RTL-SDR-based front-end receives, conditions and discretizes/quantizes the ASK-modulated RF signal, which is then sent via USB to the Raspberry Pi, where its information is processed through steps such as filtering, decimation, demodulation, validation and integration into the network. This process is first introduced from an architectural point of view and then described in more detail at the implementation level. Test results are presented for varying antenna distances and decimation ratios. The prototype was tested in laboratory and field environments and the results are promising. Also included are suggestions for future development, together with a user manual and a package of development tools.
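    As an illustration of the decimation and demodulation steps described above, the following is a minimal sketch of non-coherent ASK envelope detection on a block of complex IQ samples using NumPy and SciPy. The decimation factor, samples-per-bit value and thresholding rule are hypothetical placeholders; this is not the dissertation's actual implementation.

```python
import numpy as np
from scipy import signal

def demodulate_ask(iq: np.ndarray, decimation: int = 16, samples_per_bit: int = 16) -> np.ndarray:
    """Non-coherent ASK demodulation of complex IQ samples:
    envelope detection, low-pass decimation, then threshold and bit decisions."""
    # Envelope of the complex baseband signal carries the ASK keying.
    envelope = np.abs(iq)
    # Low-pass filter and decimate; samples_per_bit refers to the decimated rate.
    baseband = signal.decimate(envelope, decimation, ftype="fir")
    # Threshold halfway between the observed "off" and "on" levels.
    threshold = 0.5 * (baseband.min() + baseband.max())
    hard = baseband > threshold
    # Sample each bit period at its centre to make the bit decision.
    centres = np.arange(samples_per_bit // 2, hard.size, samples_per_bit)
    return hard[centres].astype(np.uint8)

# Synthetic test: a known bit pattern keyed onto a complex baseband tone.
rng = np.random.default_rng(1)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
keying = np.repeat(bits, 256).astype(float)          # 256 raw samples per bit
iq = keying * np.exp(2j * np.pi * 0.05 * np.arange(keying.size))
iq += 0.05 * (rng.standard_normal(iq.size) + 1j * rng.standard_normal(iq.size))
print(demodulate_ask(iq))   # expected to recover the bit pattern
```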

    Making Room for Quantitative Literacy in Historic Preservation: Local Historic District Designation and Property Values as a Case Study

    This thesis calls for a twofold shift in the training in and practice of historic preservation: first, for increased data literacy and use of data in the discipline, and second, for a higher degree of skepticism about the implications of data-driven findings. Even if the results of quantitative studies are less definitive in their findings than preservation advocates would like, these grey areas can serve the valuable purpose of forcing stakeholders to become more deeply engaged in why effects might be what they are, and how policy can intervene to achieve more desirable outcomes. Following a review of previous studies and their methodologies, this project looks to Philadelphia as a case study for the quantitative analysis of the association between local historic district designation and residential property values, exploring whether it is possible to develop a straightforward and meaningful methodology for assessing the economic impact of local historic district designation on residential property values. Transaction prices serve as the dependent variable in three separate models, each corresponding to a locally designated historic district and a similar but undesignated neighborhood. Limitations are explored in detail, and future directions for study are outlined in order to offer insight to others who might undertake similar work going forward.
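    The abstract does not specify the model form, but a common way to frame this kind of comparison is a hedonic regression of log transaction price on a designation indicator plus property controls. The sketch below, with hypothetical column names and synthetic data, only illustrates that structure and is not the thesis's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical transaction-level data for one designated district and
# its undesignated comparison neighborhood.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "price": rng.lognormal(mean=12.0, sigma=0.4, size=n),
    "designated": rng.integers(0, 2, size=n),   # 1 = inside the local historic district
    "sqft": rng.normal(1400, 300, size=n).clip(400),
    "age": rng.integers(10, 120, size=n),
    "sale_year": rng.integers(2000, 2015, size=n),
})

# Log-price hedonic model with sale-year fixed effects; the coefficient on
# `designated` is the (associational) premium or discount of interest.
model = smf.ols("np.log(price) ~ designated + sqft + age + C(sale_year)", data=df).fit()
print(model.params["designated"], model.bse["designated"])
```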