
    Suburban Land Constraints And Intraurban Income Distribution

    This dissertation examines the effect of space restrictions on the intraurban distribution of socioeconomic groups. It proposes that the centripetal income shifts evident in North American cities with suburban containment can be explained in part within the context of traditional neoclassical theory. A model is introduced to predict the residential location pattern in a metropolitan center when suburban land consumption constraints and a polycentric urban form are incorporated into the Alonso budget equation (Alonso, 1964). In this model, two things happen to higher socioeconomic groups when the size of new lots is fixed: the accessibility variable rises in relative importance, and demand is refocused on the existing supply of large lots. Geographically, the higher income groups shift inward.

    The model, based on the variables of residential space, accessibility and income, applies block-level data (for the most part) for Metropolitan Toronto. Site area, floor area or a composite area value measures residential space; distance from the core or an employment potential calculation quantifies relative accessibility; and household income or dwelling value represents income.

    Based on different combinations of the measures, iterations of the model predict theoretical incomes. These values and their resultant spatial patterns are compared to the actual configuration. The goodness of fit and the applicability of the model's components are assessed using weighted least squares multiple regression and cluster analysis.

    Empirical tests reveal that the representativeness of the predicted values is moderate when the metropolis is treated as an entity. However, analysis by socioeconomic sector indicates that the model's fit is much improved for the middle and upper income zones. When the predicted spatial patterns are compared to those of the actual values, the quality of representation improves again. This is especially the case for the site area and composite area-based iterations. Further, examination of levels of similar accessibility shows greater revaluations for those cases with greater space. These results are significant at different levels of aggregation, and support the fundamental premise of the model. Overall, the model performs reasonably well.
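
    For orientation only (the standard textbook form, not quoted from the dissertation), the Alonso (1964) budget identity that the model extends allocates income across a composite good, land and commuting, with suburban containment entering as a cap on new-lot size; the symbols below are the usual conventions and are assumed here rather than taken from the text.

        % Alonso budget identity at distance u from the employment centre:
        %   income = composite good + land rent + commuting cost
        y = p_z z + p(u)\,q + k(u)
        % Suburban land-consumption constraint on newly developed lots:
        q \le \bar{q}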

    Kinematically redundant arm formulations for coordinated multiple arm implementations

    Although control laws for kinematically redundant robotic arms were presented as early as 1969, redundant arms have only recently become recognized as viable solutions to the limitations inherent in kinematically sufficient arms. The advantages of run-time control optimization and arm reconfiguration are becoming increasingly attractive as the complexity and criticality of robotic systems continue to grow. A generalized control law for a spatial arm with 7 or more degrees of freedom (DOF), based on Whitney's resolved rate formulation, is given. Results from a simulation implementation utilizing this control law are presented. Furthermore, results from a two-arm simulation are presented to demonstrate the coordinated control of multiple arms using this formulation.
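
    As a rough sketch of the underlying technique (not the authors' code; the 7-DOF Jacobian used here is a random placeholder), resolved-rate control maps a commanded end-effector velocity to joint rates through the Jacobian pseudoinverse, with the null-space projection providing the run-time control optimization and reconfiguration freedom that redundancy allows:

        import numpy as np

        def resolved_rate_step(J, x_dot, z=None):
            """One resolved-rate step for a kinematically redundant arm.

            J     : 6 x n end-effector Jacobian (n >= 7 for a redundant spatial arm)
            x_dot : commanded 6-vector end-effector velocity (linear + angular)
            z     : optional n-vector secondary-objective gradient, projected into
                    the Jacobian null space so it does not disturb the end effector
            """
            J_pinv = np.linalg.pinv(J)                 # Moore-Penrose pseudoinverse
            q_dot = J_pinv @ x_dot                     # minimum-norm joint rates
            if z is not None:
                n = J.shape[1]
                q_dot += (np.eye(n) - J_pinv @ J) @ z  # null-space self-motion
            return q_dot

        # Toy example: a random placeholder Jacobian for a 7-DOF arm.
        J = np.random.default_rng(0).standard_normal((6, 7))
        x_dot = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.05])
        print(resolved_rate_step(J, x_dot))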

    High-Contrast 3.8 Micron Imaging Of The Brown Dwarf/Planet-Mass Companion to GJ 758

    We present L' band (3.8 μm) MMT/Clio high-contrast imaging data for the nearby star GJ 758, which was recently reported by Thalmann et al. (2009) to have one, possibly two, faint comoving companions (GJ 758B and "C", respectively). GJ 758B is detected in two distinct datasets. Additionally, we report a possible detection of the object identified by Thalmann et al. as "GJ 758C" in our more sensitive dataset, though it is likely a residual speckle. However, if it is the same object as that reported by Thalmann et al., it cannot be a companion in a bound orbit. GJ 758B has an H-L' color redder than nearly all known L to T8 dwarfs. Based on comparisons with the COND evolutionary models, GJ 758B has T_e ~ 560 K (+150 K / -90 K) and a mass ranging from ~10-20 M_J if it is ~1 Gyr old to ~25-40 M_J if it is 8.7 Gyr old. GJ 758B is likely in a highly eccentric orbit, e ~ 0.73 (+0.12 / -0.21), with a semimajor axis of ~44 AU (+32 AU / -14 AU). Though GJ 758B is sometimes discussed within the context of exoplanet direct imaging, its mass is likely greater than the deuterium-burning limit and its formation may resemble that of binary stars rather than that of jovian-mass planets.

    Comment: 14 pages, 3 figures. Accepted for publication in The Astrophysical Journal Letters.

    Automatic Dispensing Pill Caddy for the Elderly

    The report, Aging in the United States, finds that baby boomers who are at retirement age are in worse health compared with previous generations. More of them are living with chronic conditions such as high cholesterol, hypertension, diabetes, arthritis, and heart disease, all of which require medication. The report also anticipates that the number of people with dementia could nearly triple in the coming decades, resulting in senior adults requiring more assistance with daily activities. Our product intends to enhance the quality of life of the older adult population by providing a pill dispenser that creates convenience through alerts and notifications. This makes dosages easily accessible to those with cognitive and other impairments and helps these adults live a healthier lifestyle while minimizing the stress and time involved in taking their medication. The demographics for our customer base include adults aged 65+ who have difficulty with memory, individuals who live with multiple diseases or chronic conditions, and elderly individuals who live independently and require regular assistance. We will reach our most relevant market by selling our product individually and also by providing access to hospitals, insurance companies, and care providers. Our product will add value to our end user's life, is easily accessible for elderly customers, and can easily be updated as technology evolves.

    Rule 50 and Its Discontents: Athletes' Right to Protest

    This issue brief discusses the debate surrounding Rule 50 of the Olympic Charter and athletes' right to protest, emphasizing the current importance of the matter given the recently concluded Tokyo 2021 Games. First, it discusses those who argue for the rule, such as the president of the International Olympic Committee (IOC), the IOC itself, and athletes such as Feyisa Lilesa, Gwen Berry, and Race Imboden. Next, the brief turns to the cases against Rule 50 with an examination of scholarship on the matter as well as two case studies of Lilesa and of Berry/Imboden. These case studies examine three instances of protest over two different IOC-sanctioned events. The issue brief then pivots to an examination of the idea of athletes' protest from a communications perspective, with a look into nonverbal demonstration. Finally, the paper provides a possible explanation for the Olympics' long-standing commitment to Rule 50 through the intersection of Coakley's Great Sport Myth and the Myth of Sport's Autonomy.

    The megaprior heuristic for discovering protein sequence patterns

    Several computer algorithms for discovering patterns in groups of protein sequences are in use that are based on fitting the parameters of a statistical model to a group of related sequences. These include hidden Markov model (HMM) algorithms for multiple sequence alignment, and the MEME and Gibbs sampler algorithms for discovering motifs. These algorithms are sometimes prone to producing models that are incorrect because two or more patterns have been combined. The statistical model produced in this situation is a convex combination (weighted average) of two or more different models. This paper presents a solution to the problem of convex combinations in the form of a heuristic based on using extremely low variance Dirichlet mixture priors as part of the statistical model. This heuristic, which we call the megaprior heuristic, increases the strength (i.e., decreases the variance) of the prior in proportion to the size of the sequence dataset. This causes each column in the final model to strongly resemble the mean of a single component of the prior, regardless of the size of the dataset. We describe the cause of the convex combination problem, analyze it mathematically, motivate and describe the implementation of the megaprior heuristic, and show how it can effectively eliminate the problem of convex combinations in protein sequence pattern discovery.
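
    A minimal sketch of the idea as described above (my own reading, not the paper's implementation): with a Dirichlet-mixture prior whose total strength is scaled in proportion to the number of sequences, the posterior-mean estimate of each motif column is pulled strongly toward the mean of a single prior component. The component-selection step below is a simplified stand-in for the full mixture posterior.

        import numpy as np

        def column_posterior_mean(counts, prior_means, scale):
            """Posterior-mean residue frequencies for one motif column.

            counts      : observed residue counts for the column (length-20 array)
            prior_means : (k, 20) mean vectors of the Dirichlet mixture components
            scale       : total prior strength; the megaprior heuristic sets this
                          proportional to the dataset size, so the prior dominates
                          and the column collapses onto one component mean
            """
            counts = np.asarray(counts, dtype=float)
            freqs = counts / max(counts.sum(), 1.0)
            # Crude component choice by similarity to the observed frequencies.
            j = int(np.argmax(prior_means @ freqs))
            alpha = scale * prior_means[j]             # very low-variance Dirichlet
            return (counts + alpha) / (counts.sum() + alpha.sum())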

    Motif Enrichment Analysis: a unified framework and an evaluation on ChIP data

    A major goal of molecular biology is determining the mechanisms that control the transcription of genes. Motif Enrichment Analysis (MEA) seeks to determine which DNA-binding transcription factors control the transcription of a set of genes by detecting enrichment of known binding motifs in the genes' regulatory regions. Typically, the biologist specifies a set of genes believed to be co-regulated and a library of known DNA-binding models for transcription factors, and MEA determines which (if any) of the factors may be direct regulators of the genes. Since the number of factors with known DNA-binding models is rapidly increasing as a result of high-throughput technologies, MEA is becoming increasingly useful. In this paper, we explore ways to make MEA applicable in more settings, and evaluate the efficacy of a number of MEA approaches.

    We first define a mathematical framework for Motif Enrichment Analysis that relaxes the requirement that the biologist input a selected set of genes. Instead, the input consists of all regulatory regions, each labeled with the level of a biological signal. We then define and implement a number of motif enrichment analysis methods. Some of these methods require a user-specified signal threshold, some identify an optimum threshold in a data-driven way, and two of our methods are threshold-free. We evaluate these methods, along with two existing methods (Clover and PASTAA), using yeast ChIP-chip data. Our novel threshold-free method based on linear regression performs best in our evaluation, followed by the data-driven PASTAA algorithm. The Clover algorithm performs as well as PASTAA if the user-specified threshold is chosen optimally. Data-driven methods based on three statistical tests (Fisher exact test, rank-sum test, and multi-hypergeometric test) perform poorly, even when the threshold is chosen optimally. These methods (and Clover) perform even worse when unrestricted data-driven threshold determination is used.

    Our novel, threshold-free linear regression method works well on ChIP-chip data. Methods using data-driven threshold determination can perform poorly unless the range of thresholds is limited a priori. The limits implemented in PASTAA, however, appear to be well-chosen. Our novel algorithms, implemented in AME (Analysis of Motif Enrichment), are available at http://bioinformatics.org.au/ame/
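
    A minimal sketch of the threshold-free regression idea (assumed details, not the AME implementation): regress the per-region biological signal on each motif's per-region affinity score and test for a positive slope.

        import numpy as np
        from scipy import stats

        def regression_enrichment(signal, motif_scores):
            """Threshold-free motif enrichment via simple linear regression.

            signal       : per-region biological signal (e.g. ChIP intensity)
            motif_scores : per-region affinity scores for one motif
            Returns the slope and a one-sided p-value for positive association.
            """
            slope, intercept, r, p_two, stderr = stats.linregress(motif_scores, signal)
            p_one = p_two / 2 if slope > 0 else 1 - p_two / 2
            return slope, p_one

        # Toy example with synthetic data standing in for real regions.
        rng = np.random.default_rng(1)
        scores = rng.random(500)
        signal = 0.3 * scores + rng.normal(scale=0.1, size=500)
        print(regression_enrichment(signal, scores))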

    FIMO: scanning for occurrences of a given motif

    Summary: A motif is a short DNA or protein sequence that contributes to the biological function of the sequence in which it resides. Over the past several decades, many computational methods have been described for identifying, characterizing and searching with sequence motifs. Critical to nearly any motif-based sequence analysis pipeline is the ability to scan a sequence database for occurrences of a given motif described by a position-specific frequency matrix.
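
    As an illustration of the kind of scan described (a hedged sketch, not FIMO's actual code, which additionally converts scores to p- and q-values), a position-specific frequency matrix can be converted to log-odds scores against a background model and slid along the sequence:

        import numpy as np

        ALPHABET = "ACGT"

        def scan_sequence(seq, pfm, background=None):
            """Score every window of a DNA sequence against a motif.

            seq        : DNA string
            pfm        : (w, 4) position-specific frequency matrix (rows sum to 1)
            background : length-4 background frequencies (uniform by default)
            Returns a list of (start position, log-odds score) pairs.
            """
            if background is None:
                background = np.full(4, 0.25)
            lods = np.log2((pfm + 1e-9) / background)   # log-odds scoring matrix
            idx = {c: i for i, c in enumerate(ALPHABET)}
            w = pfm.shape[0]
            return [(s, sum(lods[i, idx[c]] for i, c in enumerate(seq[s:s + w])))
                    for s in range(len(seq) - w + 1)]

        # Toy 3-column motif that strongly prefers the word "TAT".
        pfm = np.array([[0.1, 0.1, 0.1, 0.7],
                        [0.7, 0.1, 0.1, 0.1],
                        [0.1, 0.1, 0.1, 0.7]])
        print(max(scan_sequence("ACGTATAGC", pfm), key=lambda hit: hit[1]))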

    MEME-ChIP: motif analysis of large DNA datasets

    Motivation: Advances in high-throughput sequencing have resulted in rapid growth in large, high-quality datasets, including those arising from transcription factor (TF) ChIP-seq experiments. While there are many existing tools for discovering TF binding site motifs in such datasets, most web-based tools cannot directly process such large datasets.

    Performance assessment of density and level-set topology optimisation methods for 3D heatsink design

    In this paper, the two most prevalent topology optimisation approaches, namely the density and level-set methods, are applied to a three-dimensional heatsink design problem. The relative performance of the two approaches is compared in terms of design quality, robustness and computational speed. The work is original in that, for the first time, it demonstrates the relative advantages and disadvantages of each method when applied to a practical engineering problem. It is additionally novel in that it presents the design of a convectively cooled heatsink by solving the full thermo-fluid equations for two different solid-fluid material sets. Further, the results are validated in a separate CFD study in which the optimised designs are compared against a standard pin-fin heatsink design. The results show that the density method demonstrates better performance in terms of robustness and computational speed, while the level-set method yields a better quality design.
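
    For reference (textbook relations, not quoted from the paper), the density method typically interpolates the material property between fluid and solid values with a SIMP-style power law, while the level-set method represents the solid region implicitly as the positive set of a level-set function; the symbols below follow the usual conventions and are assumptions on my part.

        % Density (SIMP-style) interpolation of conductivity between fluid and solid:
        k(\rho) = k_f + \rho^{p}\,(k_s - k_f), \qquad 0 \le \rho \le 1,\; p > 1
        % Level-set representation of the solid domain via an implicit function \phi:
        \Omega_s = \{\, x : \phi(x) > 0 \,\}, \qquad \partial\Omega_s = \{\, x : \phi(x) = 0 \,\}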