620 research outputs found
Systemic: A Testbed For Characterizing the Detection of Extrasolar Planets. I. The Systemic Console Package
We present the systemic Console, a new all-in-one, general-purpose software
package for the analysis and combined multiparameter fitting of Doppler radial
velocity (RV) and transit timing observations. We give an overview of the
computational algorithms implemented in the Console, and describe the tools
offered for streamlining the characterization of planetary systems. We
illustrate the capabilities of the package by analyzing an updated radial
velocity data set for the HD128311 planetary system. HD128311 harbors a pair of
planets that appear to be participating in a 2:1 mean motion resonance. We show
that the dynamical configuration cannot be fully determined from the current
data. We find that if a planetary system like HD128311 is found to undergo
transits, then self-consistent Newtonian fits to combined radial velocity data
and a small number of timing measurements of transit midpoints can provide an
immediate and vastly improved characterization of the planets' dynamical state.
Comment: 10 pages, 5 figures, accepted for publication in PASP. Additional material at http://www.ucolick.org/~smeschia/systemic.ph
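For reference, the Keplerian radial-velocity curve that packages like the Console fit to Doppler data has a standard closed form. A minimal plain-numpy sketch follows, with purely illustrative orbital parameters (not HD128311's fitted values):

```python
import numpy as np

def kepler_E(M, e, tol=1e-10):
    """Solve Kepler's equation E - e*sin(E) = M by fixed-point iteration."""
    E = M.copy()
    for _ in range(100):
        E_next = M + e * np.sin(E)
        if np.max(np.abs(E_next - E)) < tol:
            return E_next
        E = E_next
    return E

def rv_model(t, P, K, e, omega, tp):
    """Stellar radial velocity from one planet: K*(cos(nu + omega) + e*cos(omega))."""
    M = 2 * np.pi * (t - tp) / P                 # mean anomaly
    E = kepler_E(np.mod(M, 2 * np.pi), e)        # eccentric anomaly
    # true anomaly from eccentric anomaly
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))
    return K * (np.cos(nu + omega) + e * np.cos(omega))

# Illustrative single-planet parameters (period in days, K in m/s).
t = np.linspace(0, 1000, 5)
rv = rv_model(t, P=450.0, K=65.0, e=0.2, omega=1.0, tp=0.0)
```

A multi-planet fit of the kind the Console performs sums one such term per planet and optimizes the parameters against the observed velocities.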
Applicability of Drug Response Metrics for Cancer Studies using Biomaterials
Bioengineers have built models of the tumour microenvironment (TME) in which to study cell-cell interactions and mechanisms of cancer growth and metastasis, and to test new therapies. These models allow researchers to culture cells in conditions that include features of the in vivo TME implicated in regulating cancer progression, such as extracellular matrix (ECM) stiffness, integrin binding to the ECM, immune and stromal cells, growth factor and cytokine depots, and a three-dimensional geometry more representative of the in vivo TME than tissue culture polystyrene (TCPS). These biomaterials could be particularly useful for drug screening applications, making better predictions of efficacy and offering better translation to preclinical models and clinical trials. However, it can be challenging to compare drug response reports across different biomaterial platforms in the current literature. This is, in part, a result of inconsistent reporting and improper use of drug response metrics, and of vast differences in cell growth rates across a large variety of biomaterial designs. This study attempts to clarify the definitions of drug response measurements used in the field and presents examples in which these measurements can and cannot be applied. As best practice, we suggest measuring the growth rate of cells in the absence of drug and following our "decision tree" when reporting drug response metrics.
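As one concrete example of a growth-rate-corrected response metric of the kind advocated here, the GR value of Hafner et al. (2016) normalizes drug response by the untreated growth rate. A minimal sketch, with hypothetical cell counts:

```python
import math

def gr_value(x_treated, x_ctrl, x0):
    """Growth-rate-corrected drug response (GR value, Hafner et al. 2016).

    GR = 2**(log2(x_treated/x0) / log2(x_ctrl/x0)) - 1

    Ranges from -1 (complete cell killing) through 0 (full cytostasis)
    to 1 (no effect on the growth rate).
    """
    return 2.0 ** (math.log2(x_treated / x0) / math.log2(x_ctrl / x0)) - 1.0

# Hypothetical counts: 1000 cells seeded, untreated controls grow to 8000.
# A drug that holds treated wells at the seeding density is exactly cytostatic.
gr_cytostatic = gr_value(1000.0, 8000.0, 1000.0)   # -> 0.0
gr_killing = gr_value(500.0, 8000.0, 1000.0)       # negative: net cell death
```

Because the metric divides out the control growth rate, it stays comparable across biomaterial platforms on which the same cells divide at very different speeds, which is precisely the confound the abstract raises.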
Luminosity determination for the measurement of the proton-proton total cross section at 8 TeV in the Atlas experiment
The total hadronic cross section plays a fundamental role in the LHC physics programme. A calculation of this parameter, fundamental within the theory of the strong interactions, is not possible because the perturbative approach is inapplicable. Nevertheless, the cross section can be estimated, or at least bounded, through a number of relations such as the Optical Theorem. In this context, the ALFA detector (An Absolute Luminosity For ATLAS) exploits the Optical Theorem to determine the total cross section by measuring the rate of elastic events in the forward direction. Such an approach requires an accurate luminosity measurement under difficult experimental conditions, characterized by instantaneous luminosities up to 7 orders of magnitude lower than standard LHC conditions. The aim of this thesis is the determination of the integrated luminosity of two high-β* runs, using several event-counting algorithms of the BCM and LUCID detectors. Particular attention was devoted to background subtraction and to the study of systematic uncertainties. The integrated luminosities obtained are L = 498.55 ± 0.31 (stat) ± 16.23 (sys) μb^(-1) and L = 21.93 ± 0.07 (stat) ± 0.79 (sys) μb^(-1) for the two runs, respectively. These values will be provided to the physics community concerned with the measurement of the elastic and total proton-proton cross sections. In Run II of the LHC, the total proton-proton cross section will be estimated at a centre-of-mass energy of 13 TeV to better understand its energy dependence in such a regime. The tools used and the experience gained in this thesis will be fundamental for this purpose.
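Event-counting luminosity determinations of this kind follow a standard form: assuming Poisson pileup, the visible interaction rate per crossing is recovered from the fraction of crossings with at least one detected event, and luminosity follows from the visible cross section. A minimal sketch with illustrative numbers, not the thesis's calibration values:

```python
import math

F_REV = 11245.5  # LHC revolution frequency in Hz

def mu_vis_event_or(n_events, n_bunch_crossings):
    """Visible interactions per crossing from EventOR counting.

    With Poisson pileup the fraction of empty crossings is exp(-mu_vis),
    so mu_vis = -ln(1 - N_events / N_crossings).
    """
    return -math.log(1.0 - n_events / n_bunch_crossings)

def inst_luminosity(mu_vis, n_colliding_bunches, sigma_vis):
    """Instantaneous luminosity: L = mu_vis * n_b * f_rev / sigma_vis."""
    return mu_vis * n_colliding_bunches * F_REV / sigma_vis

# Hypothetical low-luminosity run: 1e6 crossings sampled, 5e4 with a hit,
# 3 colliding bunch pairs, sigma_vis in cm^2 (all values illustrative).
mu = mu_vis_event_or(5.0e4, 1.0e6)
L = inst_luminosity(mu, 3, 4.2e-27)
```

In practice the measured event counts must first be background-subtracted, and sigma_vis comes from a van der Meer-style calibration; both steps dominate the systematic uncertainty quoted in the abstract.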
Modeling of magnetic field driven simultaneous assembly
Magnetic Field Driven Simultaneous Assembly (MFDSA) is a method that offers a non-statistical, deterministic solution to the problem of assembly via batch processing, a hybrid of serial and parallel processing. The technique uses electromagnets together with soft and hard magnetic materials, which are applied to the devices and the recesses, respectively. The MFDSA approach offers the ability to check and correct errors in real time and is capable of scalable, versatile, and high-yield integration.
Devices, coated with a layer of soft magnetic material, are moved from initial to final positions along predetermined pathways through the action of an array of electromagnets. Various devices, of arbitrary geometries, with different physical and functional properties, are manipulated simultaneously toward specific desired locations and then dropped onto a template under the influence of gravity by weakening the local applied field. Locations on the template correspond to sites on a substrate that contain recesses. When a number of devices have been dropped onto the template, a substrate is pressed onto it and the soft magnetic layers on the devices adhere to the hard magnetic strips in the recesses, completing integration in a single step.
The objectives of this dissertation are the following: to present the MFDSA method; to compare and contrast it with other extant techniques employed by the semiconductor industry; to discuss key aspects of this solution with respect to the problem of assembly; and to model the calculations involved in determining both the device pathways and the field interactions required to implement the approach. The Fourier series technique will be used to describe the force of attraction between the device's soft magnetic layer and the recess's hard magnetic strips. Methodology from finite element analysis will be employed to calculate the force exerted on a device by an array of electromagnets. The Swarm Algorithm, which was developed in this work to calculate device pathways, will be presented as a stable, well-defined solution.
Other concepts, such as the magnetic retention factor and the collision cross-section area, will be presented and developed. The solution to the problem of assembly via the Swarm Algorithm will be compared and contrasted with other analogous problems found in the literature. The results of these models, including their software implementation, will be presented.
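As a toy illustration only, not the dissertation's finite-element or Fourier-series models, the core idea of steering a device along a pathway by energizing an electromagnet array can be sketched with a crude inverse-square attraction and overdamped motion:

```python
import numpy as np

def net_force(pos, magnet_xy, currents, k=1.0):
    """Toy net force on a soft-magnetic device: each energized electromagnet
    attracts the device along the line joining them, with strength falling
    off as 1/distance^2 (a crude stand-in for a real field model)."""
    f = np.zeros(2)
    for xy, current in zip(magnet_xy, currents):
        d = np.asarray(xy) - pos
        r = np.linalg.norm(d)
        f += k * current * d / r**3  # unit vector d/r, magnitude current/r^2
    return f

def step_device(pos, magnet_xy, currents, dt=0.1):
    """Advance the device one step along the force direction (overdamped)."""
    return pos + dt * net_force(pos, magnet_xy, currents)

# Hypothetical 1x2 electromagnet array: energizing only the right magnet
# pulls the device rightward along a predetermined pathway.
magnets = [(0.0, 0.0), (10.0, 0.0)]
p = np.array([4.0, 0.0])
for _ in range(5):
    p = step_device(p, magnets, currents=[0.0, 1.0])
```

Sequencing which magnets are energized over time is what turns this single-step picture into a full pathway, the planning problem the Swarm Algorithm addresses.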
Predicting the Price of a Stock
The goal of this project was to apply research and signal processing techniques towards the development of an accurate model for the prices of securities traded on American stock exchanges, with the intent of producing short-term forecasts. The final model utilized previous stock prices, exchange index values, and company research, with the purpose of providing investors a tool for making informed financial decisions. This model was tested using several trials of virtual investment portfolios, encompassing a wide range of stocks
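The abstract does not specify the model's form; one common signal-processing baseline for short-term price forecasts is an autoregressive model fit by least squares, sketched here on a synthetic series:

```python
import numpy as np

def fit_ar(prices, order=2):
    """Fit AR(order) coefficients by ordinary least squares:
    p[t] ~ c + a1*p[t-1] + ... + a_order*p[t-order]."""
    n = len(prices)
    X = np.array([[1.0] + [prices[t - k] for k in range(1, order + 1)]
                  for t in range(order, n)])
    y = prices[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast_one(prices, coef):
    """One-step-ahead forecast from the most recent lagged prices."""
    order = len(coef) - 1
    lags = [prices[-k] for k in range(1, order + 1)]
    return coef[0] + sum(c * lag for c, lag in zip(coef[1:], lags))

# Synthetic demo: on a noiseless linear trend the one-step forecast is exact.
prices = np.arange(10.0)
coef = fit_ar(prices, order=2)
next_price = forecast_one(prices, coef)
```

A real system of the kind described would add exogenous regressors (index values, research-derived signals) as extra columns of the design matrix and validate out of sample, as the virtual portfolios in the abstract do.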
Essays in applied microeconometrics with applications to risk-taking and savings decisions
This thesis presents three chapters that use and develop microeconometric methods for microdata analysis in economics. The first chapter studies how social interactions influence entrepreneurs' risk-taking decisions. We conduct two risk-taking experiments with young Ugandan entrepreneurs. Between the two experiments, the entrepreneurs participate in a networking activity where they build relationships and discuss with each other. We collect data on peer network formation and on participants' choices before and after the networking activity. We find that participants tend to make more (less) risky choices in the second experiment if the peers they discuss with make, on average, more (less) risky choices in the first experiment. This suggests that even short-term social interactions may affect risk-taking decisions. We also find that participants who make (in)consistent choices in the experiments tend to develop relationships with individuals who also make (in)consistent choices, even when controlling for observable variables such as education and gender, suggesting that peer networks are formed according to unobservable characteristics linked to cognitive ability.
The second chapter studies whether tax-preferred saving account policies in Canada are suited to all individuals, given their differing income paths and the differences in tax codes across provinces. The two main forms of tax-preferred saving accounts, TEE and EET, tax savings at the contribution and withdrawal years, respectively. Thus the relative returns of the two saving vehicles depend on the effective marginal tax rates in these two years, which in turn depend on earning dynamics. This chapter estimates a model of earning dynamics on a Canadian longitudinal administrative database containing millions of individuals, allowing for substantial heterogeneity in the evolution of income across income groups. The model is then used, together with a tax and credit calculator, to predict how the returns of EET and TEE vary across these groups. The results suggest that TEE accounts yield in general higher returns, especially for low-income groups. Comparing the optimal saving choices predicted by the model with the saving choices observed in the data suggests that EET accounts are over-chosen, especially in the province of Quebec. These results have important implications for "nudging" policies currently being implemented in Quebec, which force employers to automatically enrol their employees in savings accounts similar to EET. These could yield very low returns for low-income individuals, who are known to be the most sensitive to nudging. Finally, the third chapter is concerned with methodological problems that often arise in regression discontinuity designs (RDD). It considers the problem of rounding errors in the running variable of an RDD, which often make the treatment variable unobservable for some observations around the threshold. While researchers usually discard these observations, I show that they contain valuable information because the outcome's distribution splits in two as a function of the treatment effect.
Integrating this information into standard data-driven criteria helps choose the best model specification and avoids specification bias. This method is promising, especially for improving estimates of causal effects in very large databases, where the number of discarded observations can be very large, such as the LAD used in Chapter 2.
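The baseline the third chapter builds on, a sharp regression-discontinuity estimate from separate local linear fits on each side of the cutoff, can be sketched as follows (synthetic data, uniform kernel; the chapter's rounding-error correction is not reproduced here):

```python
import numpy as np

def rd_estimate(x, y, cutoff=0.0, bandwidth=1.0):
    """Sharp-RD treatment effect: difference of the two side-specific
    linear fits evaluated at the threshold (uniform kernel)."""
    left = (x < cutoff) & (x >= cutoff - bandwidth)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)

    def intercept_at_cutoff(mask):
        # Regress y on (1, x - cutoff); the intercept is the fit at the cutoff.
        X = np.column_stack([np.ones(mask.sum()), x[mask] - cutoff])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        return beta[0]

    return intercept_at_cutoff(right) - intercept_at_cutoff(left)

# Synthetic example: the outcome jumps by 2.0 at the threshold.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = 1.0 + 0.5 * x + 2.0 * (x >= 0) + rng.normal(0, 0.1, 2000)
tau = rd_estimate(x, y)
```

Observations whose rounded running variable straddles the cutoff cannot be assigned to either mask here, which is exactly the information loss the chapter's mixture-based criterion recovers.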
Prediction of drug-drug interaction potential using machine learning approaches
Drug discovery is a long, expensive, and complex, yet crucial, process for the benefit of society. Selecting potential drug candidates requires an understanding of how well a compound will perform at its task and, more importantly, how safely it will act in patients. A key safety insight is understanding a molecule's potential for drug-drug interactions. The metabolism of many drugs is mediated by members of the cytochrome P450 superfamily, notably the CYP3A4 enzyme. Inhibition of these enzymes can alter the bioavailability of other drugs, potentially increasing their levels to toxic amounts. Four models were developed to predict CYP3A4 inhibition: logistic regression, random forests, support vector machine, and neural network. Two novel convolutional approaches were explored for data featurization: SMILES string auto-extraction and 2D structure auto-extraction. The logistic regression model achieved an accuracy of 83.2%; the random forests model, 83.4%; the support vector machine model, 81.9%; and the neural network model, 82.3%. Additionally, the model built with SMILES string auto-extraction had an accuracy of 82.3%, and the model with 2D structure auto-extraction, 76.4%. The advantage of the novel featurization methods is their ability to learn relevant features directly from compound SMILES strings, eliminating manual feature engineering. The developed methodologies can be extended towards predicting any structure-activity relationship and fitted for other areas of drug discovery and development.
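As a minimal sketch of the classical-model branch of such a pipeline, here is a plain-numpy logistic regression trained on hypothetical binary fingerprint bits; the features and labels are purely illustrative stand-ins, not the CYP3A4 dataset:

```python
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain-numpy logistic regression via full-batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient of log-loss w.r.t. w
        b -= lr * np.mean(p - y)                 # gradient w.r.t. bias
    return w, b

def predict(X, w, b):
    """Binary class labels at the 0.5 probability threshold."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)

# Hypothetical data: 256-bit molecular fingerprints where bit 3 marks a
# structural motif associated with inhibition (illustrative labels only).
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(400, 256)).astype(float)
y = X[:, 3].astype(int)
w, b = train_logreg(X, y)
acc = np.mean(predict(X, w, b) == y)
```

A real screen would compute the fingerprints from SMILES strings with a cheminformatics library, evaluate on a held-out split, and compare against the random forest, SVM, and neural network baselines the abstract reports.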
FPGA implementation of a LSTM Neural Network
This work aims at a custom hardware implementation of a Long Short-Term Memory neural network. The Python model, as well as the Verilog description and the RTL synthesis, are complete. Only the benchmarking and the integration of a learning system remain to be done.
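For reference, the computation a hardware LSTM cell must reproduce at each time step is the standard set of gate equations; a minimal plain-numpy forward pass with hypothetical layer sizes:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. Gate blocks are stacked in the order
    [input, forget, cell candidate, output] within W, U, and b."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = 1.0 / (1.0 + np.exp(-z[0:n]))          # input gate
    f = 1.0 / (1.0 + np.exp(-z[n:2 * n]))      # forget gate
    g = np.tanh(z[2 * n:3 * n])                # candidate cell state
    o = 1.0 / (1.0 + np.exp(-z[3 * n:4 * n]))  # output gate
    c_new = f * c + i * g                      # cell state update
    h_new = o * np.tanh(c_new)                 # hidden state
    return h_new, c_new

# Tiny hypothetical configuration: 3 inputs, 2 hidden units, random weights.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 2
W = rng.standard_normal((4 * n_hid, n_in))
U = rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
```

An FPGA implementation maps these same equations onto fixed-point multiply-accumulate units and lookup-table approximations of the sigmoid and tanh nonlinearities, which is what a software model like this is typically benchmarked against.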
- …