
    International price discovery in the presence of market microstructure effects

    This paper addresses and resolves the problems caused by microstructure effects when measuring the relative importance of the home and U.S. markets in the price discovery process of internationally cross-listed stocks. To avoid large bounds on information shares, previous studies applying the Cholesky decomposition within the Hasbrouck (1995) framework had to rely on high-frequency data; this, however, entails a potential bias in the estimated information shares induced by microstructure effects. We propose a modified approach that relies on distributional assumptions and yields unique and unbiased information shares. Our results indicate that the role of the U.S. market in the price discovery process of Canadian interlisted stocks has been severely underestimated to date. Moreover, we find that market design, rather than stock-specific factors, determines information shares. Keywords: international cross-listings, market microstructure effects, price discovery.
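
    As background for the information-share bounds mentioned above, the sketch below computes the standard Cholesky-based Hasbrouck (1995) information shares from a long-run impact vector and an innovation covariance matrix; permuting the market ordering yields the upper and lower bounds. It is a minimal illustration with made-up numbers, not the modified distribution-based estimator proposed in the paper, and the names `psi` and `omega` are ours.

```python
import numpy as np

def hasbrouck_information_shares(psi, omega):
    """Standard Cholesky-based Hasbrouck (1995) information shares.

    psi   : (k,) long-run impact vector of the k markets' price innovations.
    omega : (k, k) covariance matrix of the reduced-form innovations.

    Returns the information share of each market under the given ordering;
    permuting the ordering gives the upper/lower bounds discussed above.
    """
    psi = np.asarray(psi, dtype=float)
    omega = np.asarray(omega, dtype=float)
    f = np.linalg.cholesky(omega)           # lower-triangular Cholesky factor
    contrib = (psi @ f) ** 2                # squared contribution of each market
    return contrib / (psi @ omega @ psi)    # normalise by total efficient-price variance

# Illustrative two-market example (home vs. U.S. listing); numbers are made up.
psi = np.array([0.6, 0.4])
omega = np.array([[1.0, 0.7],
                  [0.7, 1.2]])
print(hasbrouck_information_shares(psi, omega))                               # one ordering
print(hasbrouck_information_shares(psi[::-1], omega[::-1, ::-1])[::-1])       # reversed ordering
```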

    Assessing the causal effect of binary interventions from observational panel data with few treated units

    Researchers are often challenged with assessing the impact of an intervention on an outcome of interest in situations where the intervention is non-randomised, is applied to only one or a few units, and is binary, and where outcome measurements are available at multiple time points. In this paper, we review existing methods for causal inference in these situations. We detail the assumptions underlying each method, emphasise connections between the different approaches and provide guidelines for their practical implementation. Several open problems are identified, highlighting the need for future research.
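
    One widely used approach in exactly this setting (a binary intervention applied to a single unit, observed over many time points alongside untreated units) is the synthetic control method. The sketch below is a minimal illustration of that idea under simplifying assumptions, not a reproduction of any particular method reviewed in the paper; all data and names are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(y_treated_pre, Y_donors_pre):
    """Find non-negative donor weights summing to one that reproduce the treated
    unit's pre-intervention outcome path as closely as possible.

    y_treated_pre : (T0,) pre-intervention outcomes of the treated unit.
    Y_donors_pre  : (T0, J) pre-intervention outcomes of J untreated donor units.
    """
    J = Y_donors_pre.shape[1]
    loss = lambda w: np.sum((y_treated_pre - Y_donors_pre @ w) ** 2)
    res = minimize(loss, np.full(J, 1.0 / J),
                   bounds=[(0.0, 1.0)] * J,
                   constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
                   method="SLSQP")
    return res.x

# Illustrative use: the estimated effect is the post-intervention gap between the
# treated unit and its synthetic counterpart built from the donors.
rng = np.random.default_rng(0)
Y_donors = rng.normal(size=(20, 5)).cumsum(axis=0)                  # 20 periods, 5 donors
y_treated = Y_donors @ np.array([0.5, 0.3, 0.2, 0.0, 0.0]) + rng.normal(0, 0.1, 20)
y_treated[12:] += 2.0                                               # intervention after period 12
w = synthetic_control_weights(y_treated[:12], Y_donors[:12])
effect = y_treated[12:] - Y_donors[12:] @ w
print(w.round(2), effect.mean().round(2))
```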

    Functional Coefficient Models for Economic and Financial Data

    This paper gives a selective overview of functional coefficient models and their applications in economics and finance. Functional coefficient models are useful analytic tools for exploring complex dynamic structures and evolutions in functional data in various areas, particularly economics and finance. They are natural generalizations of classical parametric models: by allowing coefficients to be governed by other variables or to change over time, they retain good interpretability while capturing nonlinearity and heteroscedasticity. They can also be regarded as a dimensionality-reduction method for functional data exploration. Owing to these properties, functional coefficient models have seen many methodological and theoretical developments and have become popular in a wide range of applications.
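
    As a concrete illustration of the model class, the sketch below estimates a simple functional coefficient model y_t = a0(u_t) + a1(u_t) x_t + e_t by kernel-weighted least squares on a grid of points. It is a generic local-constant estimator on synthetic data, not a method from the paper, and the bandwidth choice is arbitrary.

```python
import numpy as np

def local_fc_estimate(u0, u, x, y, h):
    """Kernel-weighted least squares estimate of the coefficient functions a0(.)
    and a1(.) in  y_t = a0(u_t) + a1(u_t) * x_t + e_t  at the point u0
    (local-constant fit; the bandwidth h is assumed to be given)."""
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)      # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x])   # design matrix [1, x_t]
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta                                 # [a0(u0), a1(u0)]

# Illustrative data: the slope on x varies smoothly with the state variable u.
rng = np.random.default_rng(1)
n = 500
u = rng.uniform(0, 1, n)
x = rng.normal(size=n)
y = np.sin(2 * np.pi * u) * x + 0.5 + 0.1 * rng.normal(size=n)
grid = np.linspace(0.05, 0.95, 10)
coefs = np.array([local_fc_estimate(u0, u, x, y, h=0.1) for u0 in grid])
print(np.round(coefs[:, 1], 2))   # estimated a1(u) should roughly track sin(2*pi*u)
```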

    Statistical Modelling

    The book collects the proceedings of the 19th International Workshop on Statistical Modelling, held in Florence in July 2004. Statistical modelling is an important cornerstone of many scientific disciplines, and the workshop has provided a rich environment for cross-fertilization of ideas from different disciplines. The volume consists of four invited lectures, 48 contributed papers and 47 posters. The contributions are arranged in sessions: Statistical Modelling; Statistical Modelling in Genomics; Semi-parametric Regression Models; Generalized Linear Mixed Models; Correlated Data Modelling; Missing Data, Measurement of Error and Survival Analysis; Spatial Data Modelling and Time Series and Econometrics.

    Computational Multispectral Endoscopy

    Minimal Access Surgery (MAS) is increasingly regarded as the de facto approach in interventional medicine for many procedures, owing to reduced patient trauma and, consequently, reduced recovery times, complications and costs. However, viewing the surgical site through an endoscope and interacting with tissue remotely via tools introduces many challenges, such as lack of haptic feedback, a limited field of view, and variation in imaging hardware. It is therefore important to make the best use of the available imaging data to provide the clinician with rich information about the surgical site. Measuring tissue haemoglobin concentrations can give vital information, such as perfusion assessment after transplantation, visualisation of the health of the blood supply to an organ, and detection of ischaemia. In transplant and bypass procedures, measurements of tissue perfusion/total haemoglobin (THb) and oxygen saturation (SO2) are used as indicators of organ viability; these measurements are often acquired at multiple discrete points across the tissue with a specialist probe. To acquire measurements across the whole surface of an organ, one can use a specialist camera to perform multispectral imaging (MSI), which optically acquires sequential, spectrally band-limited images of the same scene. These data can be processed to provide maps of THb and SO2 variation across the tissue surface, which could be useful for intraoperative evaluation. When capturing MSI data, a trade-off often has to be made between spectral sensitivity and capture speed. The work in this thesis first explores post-processing blurry MSI data from long-exposure imaging devices. Such data are attractive because of the large number of spectral bands that can be captured; the long capture times, however, limit their real-time usefulness to clinicians. Recognising the importance of real-time data to clinicians, the main body of this thesis develops methods for estimating oxy- and deoxy-haemoglobin concentrations in tissue using only monocular and stereo RGB imaging data.
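
    The THb and SO2 maps described above are commonly obtained by unmixing per-band attenuation with a (modified) Beer-Lambert model. The sketch below shows that generic linear unmixing for a single pixel; the extinction coefficients and wavelengths are placeholder values and path-length and scattering corrections are ignored, so this is only an illustration of the idea, not the calibrated methods developed in the thesis.

```python
import numpy as np

# Placeholder molar extinction coefficients for HbO2 and Hb at a few illustrative
# wavelengths (arbitrary units) -- real values would come from published tables.
wavelengths_nm = np.array([500, 560, 577, 600])
eps_hbo2 = np.array([113.0, 270.0, 300.0, 32.0])
eps_hb   = np.array([112.0, 290.0, 250.0, 140.0])
E = np.column_stack([eps_hbo2, eps_hb])        # (bands, 2) mixing matrix

def unmix_haemoglobin(attenuation):
    """Least-squares unmixing of per-band attenuation A(lambda) into HbO2 and Hb
    concentrations, then THb and SO2 (simplified Beer-Lambert model)."""
    c, *_ = np.linalg.lstsq(E, attenuation, rcond=None)
    c = np.clip(c, 0.0, None)                  # concentrations cannot be negative
    thb = c.sum()
    so2 = c[0] / thb if thb > 0 else 0.0
    return thb, so2

# Illustrative pixel: attenuation synthesised from known concentrations.
true_c = np.array([0.7, 0.3])                  # [HbO2, Hb]
a = E @ true_c + 0.5 * np.random.default_rng(2).normal(size=4)
print(unmix_haemoglobin(a))                    # THb close to 1.0, SO2 close to 0.7
```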

    Overcoming Noise and Variations In Low-Precision Neural Networks

    This work explores the impact of various design and training choices on the resilience of a neural network subjected to noise and/or device variations. Simulations were performed under the expectation that the neural network would be implemented on analog hardware; in that setting there will be random noise within the circuit as well as variations in device characteristics between fabricated devices. The results show how noise can be added during the training process to reduce the impact of post-training noise. Architectural choices for the neural network also directly affect the performance variation between devices. The simulated neural networks were more robust to noise with a minimal architecture with fewer layers; if more neurons are needed for a better fit, networks with more neurons in shallow layers and fewer in deeper layers closer to the output tend to perform better. The work also demonstrates that activation functions with lower slopes do a better job of suppressing noise in the neural network. It is also shown that accuracy can be made more consistent by introducing sparsity into the neural network. To that end, different methods for generating sparse architectures for smaller neural networks are evaluated, and a new method is proposed that consistently outperforms the most common methods used in larger, deeper networks.
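
    The sketch below illustrates the noise-injection idea on a toy model: Gaussian noise is added to the weights on every training forward pass, with the intention that the learned weights tolerate the weight noise that analog hardware introduces at inference time. The model, noise scale and data are illustrative stand-ins, not the networks or hardware noise models simulated in this work.

```python
import numpy as np

def train_noise_aware(X, y, sigma, epochs=200, lr=0.1, seed=0):
    """Logistic-regression toy model trained with Gaussian weight noise injected
    on every forward pass (a stand-in for noise-aware training of a network
    destined for noisy analog hardware)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w_noisy = w + sigma * rng.normal(size=w.shape)   # simulated device noise
        p = 1.0 / (1.0 + np.exp(-X @ w_noisy))           # noisy forward pass
        grad = X.T @ (p - y) / len(y)                    # gradient through the noisy weights
        w -= lr * grad                                   # update the clean (programmed) weights
    return w

def noisy_accuracy(w, X, y, sigma, trials=100, seed=1):
    """Average accuracy when independent weight noise is drawn at inference time."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(trials):
        p = 1.0 / (1.0 + np.exp(-X @ (w + sigma * rng.normal(size=w.shape))))
        accs.append(np.mean((p > 0.5) == y))
    return float(np.mean(accs))

# Compare how the two training regimes fare under inference-time weight noise
# (a toy comparison only; data and noise scale are synthetic).
rng = np.random.default_rng(3)
X = rng.normal(size=(400, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)
for train_sigma in (0.0, 0.5):
    w = train_noise_aware(X, y, sigma=train_sigma)
    print(train_sigma, round(noisy_accuracy(w, X, y, sigma=0.5), 3))
```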

    Analysing datafied life

    Our lives are being increasingly quantified by data. To obtain information from quantitative data, we need to develop various analysis methods, which can be drawn from diverse fields such as computer science, information theory and statistics. This thesis investigates methods for analysing data generated for medical research, with the aim of using various kinds of data to quantify patients for personalized treatment. From the perspective of data type, the thesis proposes analysis methods for data from bioinformatics and medical imaging. We discuss the need to use data from the molecular level up to the pathway level and to incorporate medical imaging data. Different preprocessing methods need to be developed for different data types, while some post-processing steps, such as classification and network analysis, can be handled by a generalized approach. From the perspective of research questions, the thesis studies methods for answering five typical questions of increasing complexity: detecting associations, identifying groups, constructing classifiers, deriving connectivity and building dynamic models. Each research question is studied in a specific field; for example, detecting associations is investigated for fMRI signals. However, the proposed methods can be naturally extended to questions in other fields. This thesis demonstrates that applying a method traditionally used in one field to a new field can bring many new insights. Five main research contributions are made for the different research questions. First, to detect task-related active brain regions from fMRI signals, a new significance index, the CR-value, has been proposed; it originates from the idea of using sparse modelling in gene association studies. Secondly, in quantitative proteomics analysis, a clustering-based method has been developed to extract more information from large-scale datasets than traditional methods; clustering methods, usually used to find subgroups of samples or features, are here used to match similar identities across samples. Thirdly, a pipeline originally proposed in bioinformatics has been adapted to multivariate analysis of fMRI signals. Fourthly, the concept of elastic computing from computer science has been used to develop a new method for generating functional connectivity from fMRI data. Finally, sparse signal recovery methods from the domain of signal processing are suggested to solve the underdetermined problem of network model inference.
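
    As an illustration of the final contribution, the sketch below applies a standard sparse signal recovery algorithm (iterative soft-thresholding for the lasso) to an underdetermined linear system, the generic setting behind network model inference with more unknowns than measurements. It is not the thesis's specific formulation; problem sizes and the regularisation weight are arbitrary.

```python
import numpy as np

def ista(A, b, lam, n_iter=1000):
    """Iterative soft-thresholding (ISTA) for the lasso problem
    min_x 0.5*||A x - b||^2 + lam*||x||_1, a standard sparse-recovery approach
    for underdetermined systems (more unknowns than equations)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - b)) / L        # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

# Illustrative underdetermined problem: 40 measurements, 100 unknowns, 5 non-zeros.
rng = np.random.default_rng(4)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.normal(0, 3, 5)
b = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = ista(A, b, lam=0.5)
print(np.flatnonzero(np.abs(x_hat) > 0.5), np.flatnonzero(x_true))
```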

    Constraint-driven RF test stimulus generation and built-in test

    With the explosive growth in wireless applications, the last decade witnessed an ever-increasing test challenge for radio frequency (RF) circuits. While the design community has pushed the envelope far into the future by extending CMOS processes to high-frequency wireless devices, test methodology has not advanced at the same pace. Consequently, testing such devices has become a major bottleneck in high-volume production, further driven by the growing need for tighter quality control. RF devices undergo testing during the prototype phase and during high-volume manufacturing (HVM). The benchtop test equipment used throughout prototyping is very precise yet specialized for a subset of functionalities. HVM calls for a different test paradigm that emphasizes throughput and sufficiency, in which the projected performance parameters are measured one by one for each device by automated test equipment (ATE) and compared against defined limits called specifications. The set of tests required for each product differs greatly in terms of the equipment required and the time taken to test individual devices. Owing to signal-integrity, precision and repeatability requirements, the initial cost of RF ATE is prohibitively high. As more functionality and protocols are integrated into a single RF device, the number of specifications to be tested also increases, adding to the overall cost of testing in terms of both initial and recurring operating costs. In addition to the cost problem, RF testing poses another challenge when these components are integrated into package-level system solutions. In systems-on-packages (SOP), the test problems resulting from signal integrity, input/output (I/O) bandwidth, and limited controllability and observability have initiated a paradigm shift in high-speed analog testing, favoring alternative approaches such as built-in tests (BIT) where the test functionality is brought into the package. This scheme can use a low-cost external tester connected through a low-bandwidth link to perform demanding response evaluations, and can exploit the analog-to-digital converters and digital signal processors available in the package to facilitate testing. Although research on analog built-in test has demonstrated hardware solutions for single specifications, the paradigm shift calls for a more general approach in which a single methodology can be applied across different devices and multiple specifications can be verified through a single test hardware unit, minimizing the area overhead. Specification-based alternate test methodology provides a suitable and flexible platform for handling the challenges addressed above. In this thesis, a framework that integrates ATE and system constraints into test stimulus generation and test response extraction is presented for the efficient production testing of high-performance RF devices using specification-based alternate tests. The main components of the presented framework are as follows:
    Constraint-driven RF alternate test stimulus generation: An automated test stimulus generation algorithm is developed for RF devices that are evaluated by a specification-based alternate test solution. High-level models of the test signal path define constraints on the search space of the optimized test stimulus. These models are generated in enough detail that they inherently capture the limitations of the low-cost ATE and the I/O restrictions of the device under test (DUT), yet they are simple enough that the non-linear optimization problem can be solved empirically in a reasonable amount of time.
    Feature extractors for BIT: A methodology for the built-in testing of RF devices integrated into SOPs is developed using additional hardware components. These components correlate the high-bandwidth test response to low-bandwidth signatures while extracting the test-critical features of the DUT. Supervised learning is used to map these extracted features, which are otherwise too complicated to decipher by plain mathematical analysis, to the specifications under test.
    Defect-based alternate testing of RF circuits: A methodology for the efficient testing of RF devices with low-cost defect-based alternate tests is developed. The signature of the DUT is probabilistically compared with a class of defect-free device signatures to explore possible corners under acceptable levels of process parameter variation. Such a defect filter applies discrimination rules generated by a supervised classifier and eliminates the need for a library of possible catastrophic defects.
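
    The "Feature extractors for BIT" component above relies on supervised learning to map low-bandwidth signatures to the specifications under test. The sketch below shows one simple instance of such a mapping, a ridge-regularised linear regression trained on characterised devices and applied to new ones; the features, specification values and regulariser are synthetic placeholders, not the models used in the thesis.

```python
import numpy as np

def fit_spec_regressor(features, specs, ridge=1e-3):
    """Fit a ridge-regularised linear map from alternate-test signature features
    to a specification value (e.g. a gain or linearity figure).  Returns the
    weights, including an intercept term."""
    X = np.column_stack([np.ones(len(features)), features])
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ specs)

def predict_spec(weights, features):
    """Predict the specification for new devices from their signature features."""
    X = np.column_stack([np.ones(len(features)), features])
    return X @ weights

# Illustrative data: signatures from 200 "training" devices with known specs
# (in practice these would come from simulation or lab characterisation),
# used to predict the spec of new devices from their low-bandwidth signatures.
rng = np.random.default_rng(5)
train_feats = rng.normal(size=(200, 6))                  # extracted signature features
true_map = rng.normal(size=6)
train_specs = train_feats @ true_map + 0.05 * rng.normal(size=200)
w = fit_spec_regressor(train_feats, train_specs)
new_feats = rng.normal(size=(5, 6))
print(np.round(predict_spec(w, new_feats) - new_feats @ true_map, 3))  # small residuals
```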