
    Optical measurements of phase steps in segmented mirrors - fundamental precision limits

    Phase steps are an important type of wavefront aberration generated by large telescopes with segmented mirrors. In a closed-loop correction cycle these phase steps have to be measured with the highest possible precision using natural reference stars, that is, with a small number of photons. In this paper the classical Fisher information of statistics is used to calculate the Cramér-Rao bound, which sets the limit to the precision with which the height of the steps can be estimated in an unbiased fashion with a given number of photons and a given measuring device. Four types of measuring device are discussed: a Shack-Hartmann sensor with one small cylindrical lenslet covering a sub-aperture centred over a border, a modified Mach-Zehnder interferometer, a Foucault test, and a curvature sensor. The Cramér-Rao bound is calculated for all sensors under ideal conditions, that is, narrowband measurements without additional noise or disturbances apart from photon shot noise. This limit is compared with the ultimate quantum-statistical limit for the estimate of such a step, which is independent of the measuring device. For the Shack-Hartmann sensor, the effects on the Cramér-Rao bound of broadband measurements, finite sampling, and disturbances such as atmospheric seeing and detector readout noise are also investigated. The methods presented here can be used to compare the precision limits of various devices for measuring phase steps and to optimise the parameters of the devices. Under ideal conditions the Shack-Hartmann and Foucault devices nearly attain the ultimate quantum-statistical limit, whereas the Mach-Zehnder and curvature devices each require approximately twenty times as many photons to reach the same precision. Comment: 23 pages, 19 figures, to be submitted to Journal of Modern Optics
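    The Fisher-information calculation behind such a Cramér-Rao bound can be sketched numerically. The following Python illustration uses a hypothetical two-port photon-counting phase measurement (an assumed toy model, not one of the four sensors analysed in the paper) to show the 1/N scaling of the attainable variance:

```python
import numpy as np

def fisher_information(theta, n_photons):
    # Hypothetical two-port measurement of a phase step theta:
    # expected Poisson counts in the two output ports.
    lam = n_photons * np.array([(1 + np.cos(theta)) / 2,
                                (1 - np.cos(theta)) / 2])
    dlam = n_photons * np.array([-np.sin(theta) / 2,
                                 np.sin(theta) / 2])
    # Fisher information for independent Poisson counts: sum of (dlam)^2 / lam.
    return np.sum(dlam**2 / lam)

def cramer_rao_bound(theta, n_photons):
    # Lower bound on the variance of any unbiased estimator of theta.
    return 1.0 / fisher_information(theta, n_photons)

# The bound scales as 1/N: quadrupling the photon count halves the
# attainable standard deviation.
sigma_1k = np.sqrt(cramer_rao_bound(0.3, 1000))
sigma_4k = np.sqrt(cramer_rao_bound(0.3, 4000))
```

    For this toy model the total Fisher information works out to exactly N, so the bound on the standard deviation is 1/sqrt(N); real sensors fall short of such ideal scaling by a device-dependent factor, which is what the paper quantifies.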

    The degree of the dual of a homogeneous space


    The response of a neutral atom to a strong laser field probed by transient absorption near the ionisation threshold

    We present transient absorption spectra of an extreme ultraviolet attosecond pulse train in helium dressed by an 800 nm laser field with intensity ranging from 2×10^12 W/cm^2 to 2×10^14 W/cm^2. The energy range probed spans 16-42 eV, straddling the first ionisation energy of helium (24.59 eV). By changing the relative polarisation of the dressing field with respect to the attosecond pulse train polarisation, we observe a large change in the modulation of the absorption, reflecting the vectorial response to the dressing field. With parallel polarised dressing and probing fields, we observe significant modulations with periods of one half and one quarter of the dressing-field period. With perpendicularly polarised dressing and probing fields, the modulations of the harmonics above the ionisation threshold are significantly suppressed. A full-dimensionality solution of the single-atom time-dependent Schrödinger equation obtained using the recently developed ab-initio time-dependent B-spline ADC method reproduces some of our observations.

    Feynman Rules in the Type III Natural Flavour-Conserving Two-Higgs Doublet Model

    We consider a two-Higgs-doublet model with S_3 symmetry, which implies a relative phase of π/2 rather than 0 between the two vacuum expectation values. The corresponding Feynman rules are derived accordingly, and the transformation of the Higgs fields from the weak to the mass eigenstates includes not only an angle rotation but also a phase transformation. In this model, both doublets couple to the same type of fermions and the flavour-changing neutral currents are naturally suppressed. We also demonstrate that the Type III natural flavour-conserving model is valid at tree level even when an explicit S_3 symmetry-breaking perturbation is introduced to obtain a reasonable CKM matrix. In the special case β = α, as the ratio tan β = v_2/v_1 runs from 0 to ∞, the dominant Yukawa coupling changes from the first two generations to the third generation. In the Feynman rules, we also find that the charged Higgs currents are explicitly left-right asymmetric. The ratios between the left- and right-handed currents for the quarks in the same generations are estimated. Comment: 16 pages (figures not included), NCKU-HEP/93-1

    Unexpected drop of dynamical heterogeneities in colloidal suspensions approaching the jamming transition

    As the glass (in molecular fluids [Donth]) or the jamming (in colloids and grains [LiuNature1998]) transition is approached, the dynamics slow down dramatically with no marked structural changes. Dynamical heterogeneity (DH) plays a crucial role: structural relaxation occurs through correlated rearrangements of particle "blobs" of size ξ [WeeksScience2000, DauchotPRL2005, Glotzer, Ediger]. On approaching these transitions, ξ grows in glass-formers [Glotzer, Ediger], colloids [WeeksScience2000, BerthierScience2005], and driven granular materials [KeysNaturePhys2007] alike, strengthening the analogies between the glass and the jamming transitions. However, little is known yet about the behaviour of DH very close to dynamical arrest. Here, we measure in colloids the maximum of a "dynamical susceptibility", χ*, whose growth is usually associated with that of ξ [LacevicPRE]. χ* initially increases with volume fraction φ, as in [KeysNaturePhys2007], but strikingly drops very close to jamming. We show that this unexpected behaviour results from the competition between the growth of ξ and the reduced particle displacements associated with rearrangements in very dense suspensions, unveiling a richer-than-expected scenario. Comment: 1st version originally submitted to Nature Physics. See the Nature Physics website for the final, published version
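    The link between correlated rearrangements and a large dynamical susceptibility can be sketched with a standard χ4-style estimator (an assumption for illustration; the paper's actual estimator is not reproduced here). The self-overlap q counts particles that moved less than a probe length, and the susceptibility is N times its variance across realizations:

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamical_susceptibility(displacements, a=0.3):
    # displacements: (n_realizations, n_particles) array of particle
    # displacements over one lag time. The self-overlap q is the fraction
    # of particles that moved less than the probe length a; the
    # susceptibility is estimated as N * var(q) across realizations.
    n_particles = displacements.shape[1]
    q = (displacements < a).mean(axis=1)
    return n_particles * q.var()

# Synthetic comparison: spatially correlated rearrangements ("blobs" that
# move together) give a much larger susceptibility than independent
# particles with the same mean mobility.
independent = rng.uniform(0.0, 0.6, size=(2000, 100))
blob = rng.uniform(0.0, 0.6, size=(2000, 1))
correlated = np.repeat(blob, 100, axis=1)  # all 100 particles move as one blob

chi_independent = dynamical_susceptibility(independent)
chi_correlated = dynamical_susceptibility(correlated)
```

    With fully independent particles the variance of q is reduced by the factor 1/N, so χ stays of order one, while a single system-spanning blob keeps χ of order N; real suspensions sit between these extremes.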

    Bottleneck Identification in Business Processes Extracted from an ERP System by Comparing the Alpha++ and Heuristics Miner Algorithms

    Many companies today use information systems to support their business processes, yet in practice only a few of them evaluate the business processes running on those systems. Such an evaluation can be based on event-log data extracted from the ERP system and performed with process mining. Process mining digs into transaction data to reconstruct the actual business-process workflow. This workflow is represented as a Petri net, which is then analysed to identify bottlenecks. A bottleneck is an event in a transaction whose waiting time is longer than that of the other transactions in the business process. With PROM Tools, models of the ERP business process can be constructed. Two algorithms are used for this modelling, Alpha++ and Heuristics Miner, in order to compare the bottlenecks each of them reveals. The study finds that the choice of algorithm strongly influences the location of the bottleneck. That location is determined from the waiting time of tokens in the places (the positions between two transitions, i.e. two transactions) of the constructed model.
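    The waiting-time analysis behind bottleneck identification can be sketched without PROM Tools. The following Python sketch uses a hypothetical event log of (case id, activity, timestamp) triples, a simplified stand-in for the logs that process-mining tools consume; the activity names and timings are invented for illustration:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log: (case_id, activity, timestamp) triples.
log = [
    ("c1", "Create Order",  "2023-01-01 09:00"),
    ("c1", "Approve Order", "2023-01-01 09:05"),
    ("c1", "Ship Order",    "2023-01-01 17:00"),
    ("c2", "Create Order",  "2023-01-01 10:00"),
    ("c2", "Approve Order", "2023-01-01 10:04"),
    ("c2", "Ship Order",    "2023-01-01 19:30"),
]

def mean_waiting_times(log):
    # Group events by case, sort each trace by time, and accumulate the
    # waiting time between each directly-following pair of activities,
    # i.e. the time a token spends in the place between two transitions.
    cases = defaultdict(list)
    for case, act, ts in log:
        cases[case].append((datetime.fromisoformat(ts), act))
    waits = defaultdict(list)
    for events in cases.values():
        events.sort()
        for (t0, a0), (t1, a1) in zip(events, events[1:]):
            waits[(a0, a1)].append((t1 - t0).total_seconds())
    return {pair: sum(w) / len(w) for pair, w in waits.items()}

# The bottleneck is the activity pair with the longest mean waiting time.
times = mean_waiting_times(log)
bottleneck = max(times, key=times.get)
```

    Here the wait between "Approve Order" and "Ship Order" dominates, so that place is flagged as the bottleneck; different discovery algorithms produce different Petri nets and can therefore place the bottleneck differently, which is the comparison the paper makes.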

    Copernicus Global Land Cover Layers—Collection 2

    In May 2019, Collection 2 of the Copernicus Global Land Cover layers was released. Next to a global discrete land cover map at 100 m resolution, a set of cover-fraction layers is provided depicting the percentage cover of the main land cover types in a pixel. This additional continuous classification scheme represents areas of heterogeneous land cover better than the standard discrete classification scheme. Overall, 20 layers are provided which allow customization of land cover maps to specific user needs or applications (e.g., forest monitoring, crop monitoring, biodiversity and conservation, climate modeling). However, Collection 2 was not just a global up-scaling; it also includes major improvements in map quality, reaching around 80% or more overall accuracy. The processing system went into operational status, allowing annual updates on a global scale with an additionally implemented training and validation data collection system. In this paper, we provide an overview of the major changes in the production of the land cover maps that have led to this increased accuracy, including aligning the grid and coordinate system with the Sentinel-2 satellite system, improving the metric extraction, adding better auxiliary data, improving the biome delineations, and enhancing the expert rules. An independent validation exercise confirmed the improved classification results. In addition to the methodological improvements, this paper also provides an overview of where the different resources can be found, including access channels to the product layers as well as the detailed peer-reviewed product documentation.

    Adapting a Computational Multi Agent Model for Humpback Whale Song Research for use as a Tool for Algorithmic Composition

    Humpback whales (Megaptera novaeangliae) present one of the most complex displays of cultural transmission amongst non-humans. During breeding seasons, male humpback whales create long, hierarchical songs, which are shared amongst a population. Every male in a population conforms to the same song. Over the breeding season these songs slowly change, and the song at the end of the season is significantly different from the song heard at the start. The song of a population can also be replaced if a new song from a different population is introduced; this is known as song revolution. Our research focuses on building computational multi-agent models which seek to recreate these phenomena observed in the wild. Our research relies on methods inspired by computational multi-agent models for the evolution of music. This interdisciplinary approach has allowed us to adapt our model so that it may be used not only as a scientific tool, but also as a creative tool for algorithmic composition. This paper discusses the model in detail, and then demonstrates how it may be adapted for use as an algorithmic composition tool.
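    The copy-and-mutate dynamics that drive conformity and song change can be sketched with a toy multi-agent model (an illustrative assumption, not the paper's actual model): each agent holds a song, a sequence of discrete sound units, and on every step copies one unit from a randomly heard agent, occasionally mutating it instead.

```python
import random

random.seed(1)

UNITS = "abcde"   # hypothetical alphabet of song units
SONG_LEN = 5

def step(population, mutation_rate=0.01):
    new = []
    for song in population:
        model = random.choice(population)   # the song this agent hears
        i = random.randrange(SONG_LEN)
        if random.random() < mutation_rate:
            unit = random.choice(UNITS)     # novelty enters the population
        else:
            unit = model[i]                 # conformity: copy what is heard
        new.append(song[:i] + unit + song[i + 1:])
    return new

def agreement(population):
    # Fraction of agents singing the most common song.
    return max(population.count(s) for s in set(population)) / len(population)

population = ["".join(random.choice(UNITS) for _ in range(SONG_LEN))
              for _ in range(20)]
final = population
for _ in range(300):
    final = step(final)
```

    Under these dynamics the population typically drifts toward a shared song, while the low mutation rate keeps the song slowly changing, a crude analogue of the conformity and gradual evolution described above; seeding a few agents with a foreign song would sketch song revolution.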

    Sensor data management with probabilistic models

    The anticipated ‘sensing environments’ of the near future pose new requirements to the data management systems that mediate between sensor data supply and demand sides. We identify and investigate one of them: the need to deal with the inherent uncertainty in sensor data due to measurement noise, missing data, the semantic gap between the measured data and relevant information, and the integration of data from different sensors.

    Probabilistic models of sensor data can be used to deal with these uncertainties in the well-understood and fruitful framework of probability theory. In particular, the Bayesian network formalism proves useful for modeling sensor data in a flexible environment because of its comprehensiveness and modularity. We provide extensive technical argumentation for this claim. As a demonstration case, we define a discrete Bayesian network for location tracking using Bluetooth transceivers.

    In order to scale up sensor models, efficient probabilistic inference on the Bayesian network is crucial. However, we observe that the conventional inference methods do not scale well for our demonstration case. We propose several optimizations, making it possible to jointly scale up the number of locations and sensors in sublinear time, and to scale up the time resolution in linear time. Moreover, we define a theoretical framework in which these optimizations are derived by translating an inference query into relational algebra. This allows the query to be analyzed and optimized using insights and techniques from the database community; for example, using cost metrics based on cardinality rather than dimensionality.

    An orthogonal research question investigates the possibility of collecting transition statistics in a local, clustered fashion, in which transitions between states of different clusters cannot be directly observed. We show that this problem can be written as a constrained system of linear equations, for which we describe a specialized solution method.
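    A constrained system of linear equations of this kind can be illustrated with a generic equality-constrained least-squares solve via the KKT system; the paper's specialized solution method is not reproduced here, and the observation matrix, measurements, and constraint below are invented for illustration:

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    # Minimise ||A x - b||^2 subject to C x = d by solving the KKT system
    #   [2 A^T A  C^T] [x]   [2 A^T b]
    #   [C        0  ] [l] = [d      ]
    n = A.shape[1]
    m = C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]

# Hypothetical example: fit three transition probabilities to noisy local
# observations (rows of A, entries of b) while exactly satisfying the
# conservation constraint that the probabilities sum to 1.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.5, 0.3, 0.3, 0.7])
C = np.array([[1.0, 1.0, 1.0]])
d = np.array([1.0])
x = constrained_lstsq(A, b, C, d)
```

    The constraint is satisfied exactly while the noisy observations are matched in the least-squares sense, which is the general shape of the problem the thesis describes; the specialized solver exploits the cluster structure that this generic approach ignores.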

    The Relationship between Creatine Kinase-MB Levels and In-Hospital Mortality of Acute Myocardial Infarction Patients at Dr. Wahidin Sudirohusodo Hospital, Makassar

    An increase in CK-MB level is associated with myocardial infarction size and severity. The aim of this study is to evaluate the correlation between the admission CK-MB level of acute myocardial infarction patients and in-hospital mortality. Secondary data of 60 acute myocardial infarction patients hospitalized in the Intensive Cardiac Care Unit of Dr. Wahidin Sudirohusodo Hospital, Makassar, from June 2010 to July 2011 were taken. Admission CK-MB levels obtained between 3 hours and 1 week after onset were then analyzed. The mean admission CK-MB level in the in-hospital survived and non-survived acute myocardial infarction patients was 89.52±121.59 U/l and 202.88±192.75 U/l respectively (Mann-Whitney test, p=0.005). There was a significant mortality-rate difference among all CK-MB quartiles, with mortality rates of 13.3%, 6.7%, 40% and 46.7% in the 1st, 2nd, 3rd, and 4th quartile respectively (chi-square test, p=0.031), but the odds ratios of mortality between quartiles were not different. There was a significant difference in admission CK-MB levels between in-hospital survived and non-survived acute myocardial infarction patients. Keywords: CK-MB, myocardial infarction, mortality
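    The quartile analysis can be checked from the published rates alone. The sketch below reconstructs the 2×4 contingency table (our reconstruction assuming 15 patients per quartile of the 60, so the reported 13.3%, 6.7%, 40% and 46.7% correspond to 2, 1, 6 and 7 deaths; this is not the authors' raw data) and recomputes the Pearson chi-square statistic:

```python
import numpy as np

# Reconstructed deaths per CK-MB quartile (15 patients each).
deaths = np.array([2, 1, 6, 7])
survivors = 15 - deaths
observed = np.vstack([deaths, survivors])   # 2x4 contingency table

# Pearson chi-square: expected counts from row and column margins.
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row * col / observed.sum()
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
```

    The statistic comes out near 8.86 with 3 degrees of freedom, above the 5% critical value of 7.815, which is consistent with the reported p = 0.031.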