22 research outputs found

    Real-Time Flux Density Measurements of the 2011 Draconid Meteor Outburst

    During the 2011 outburst of the Draconid meteor shower, members of the Video Meteor Network of the International Meteor Organization provided, for the first time, fully automated flux density measurements in the optical domain. The data set revealed a primary maximum at 20:09 UT ± 5 min on 8 October 2011 (195.036° solar longitude) with an equivalent meteoroid flux density of (118 ± 10) × 10⁻³ km⁻² h⁻¹ at a meteor limiting magnitude of +6.5, which is thought to be caused by the 1900 dust trail. We also find that the outburst had a full width at half maximum of 80 min, a mean radiant position of α = 262.2°, δ = +56.2° (±1.3°) and a geocentric velocity of v = 17.4 km/s (±0.5 km/s). Finally, our data set appears to be consistent with a small sub-maximum at 19:34 UT ± 7 min, which had earlier been reported by radio observations and may be attributed to the 1907 dust trail. We plan to implement automated real-time flux density measurements for all known meteor showers on a regular basis soon.
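    A peak time and full width at half maximum like those quoted above can be read off a binned activity profile. The sketch below is a minimal illustration on synthetic data; the Gaussian profile and all numbers are hypothetical, not the network's measurements or pipeline:

```python
import numpy as np

def peak_and_fwhm(times_min, flux):
    """Peak time and full width at half maximum of a sampled activity profile.

    times_min : sample times in minutes; flux : flux density per bin.
    The half-maximum crossings are found by linear interpolation on the
    rising and falling flanks of the profile.
    """
    i = int(np.argmax(flux))
    peak_t, half = times_min[i], flux[i] / 2.0
    # rising flank (flux increasing) and falling flank (reversed so it increases)
    left = np.interp(half, flux[:i + 1], times_min[:i + 1])
    right = np.interp(half, flux[i:][::-1], times_min[i:][::-1])
    return peak_t, right - left

# synthetic Gaussian outburst: peak at t = 0, sigma chosen so FWHM is ~80 min
t = np.linspace(-120, 120, 241)
profile = 118 * np.exp(-0.5 * (t / (80 / 2.3548)) ** 2)
peak, fwhm = peak_and_fwhm(t, profile)
```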

    Design of a Power Supply System for an Electrical Engineering Plant

    Based on the initial data, a calculation method was chosen, the electrical loads of the plant and of the shop under consideration were calculated, and the electrical consumers were selected and verified according to their operating modes. The result of the study is a complete power supply design for an industrial enterprise, shown to be environmentally safe and economically feasible.
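    Load calculations of this kind are commonly done with the demand-factor method. A minimal sketch with purely illustrative numbers; the function and its parameters are assumptions, not values from the thesis:

```python
import math

def design_load(p_installed_kw, k_demand, cos_phi):
    """Demand-factor method: active, reactive and apparent design load of a shop.

    p_installed_kw : total installed power of the consumers (kW),
    k_demand : demand factor of the load group, cos_phi : power factor.
    All values here are illustrative, not taken from the thesis.
    """
    p = k_demand * p_installed_kw         # active design load, kW
    q = p * math.tan(math.acos(cos_phi))  # reactive design load, kvar
    s = math.hypot(p, q)                  # apparent design load, kVA
    return p, q, s

# hypothetical shop: 800 kW installed, demand factor 0.4, cos phi 0.8
p, q, s = design_load(800.0, 0.4, 0.8)
```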

    Analysis of the technical biases of meteor video cameras used in the CILBO system

    In this paper, we analyse the technical biases of two intensified video cameras, ICC7 and ICC9, of the double-station meteor camera system CILBO (Canary Island Long-Baseline Observatory). This is done to thoroughly understand the effects of the camera systems on the scientific data analysis. We expect a number of errors or biases that come from the system: instrumental errors, algorithmic errors and statistical errors. We analyse different observational properties, in particular the detected meteor magnitudes, apparent velocities, the estimated goodness-of-fit of the astrometric measurements with respect to a great circle, and the distortion of the camera. We find that, due to a loss of sensitivity towards the edges, the cameras detect only about 55% of the meteors they could detect if they had a constant sensitivity. This detection efficiency is a function of the apparent meteor velocity. We analyse the optical distortion of the system and the goodness-of-fit of individual meteor position measurements relative to a fitted great circle. The astrometric error is dominated by uncertainties in the measurement of the meteor, attributed to blooming, distortion of the meteor image and the development of a wake for some meteors. The distortion of the video images can be neglected. We compare the results of the two identical camera systems and find systematic differences: for example, the peak magnitude distribution for ICC9 is shifted by about 0.2–0.4 mag towards fainter magnitudes. This can be explained by the different pointing directions of the cameras. Both cameras monitor the same volume of the atmosphere roughly between the islands of Tenerife and La Palma, but one camera (ICC7) points towards the west and the other (ICC9) towards the east. In the morning hours in particular, the apex source is close to the field of view of ICC9, so these meteors appear slower, which increases the dwell time on a pixel and favours the detection of a meteor of a given magnitude.
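    The goodness-of-fit with respect to a great circle can be computed by fitting a plane through the origin to the measured unit direction vectors; the residuals are then the angular offsets of each measurement from that plane. A minimal sketch of the idea, not the CILBO pipeline's actual code:

```python
import numpy as np

def great_circle_residuals(ra_deg, dec_deg):
    """Fit a great circle to meteor positions and return angular residuals (deg).

    A great circle is the intersection of the unit sphere with a plane through
    the origin; its pole is the right singular vector of the position matrix
    with the smallest singular value.  Each residual is the signed angular
    offset of a measured point from that plane.
    """
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    x = np.column_stack((np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)))            # unit direction vectors
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    pole = vt[-1]                                 # smallest singular vector
    return np.degrees(np.arcsin(x @ pole))        # signed angular distances

# synthetic meteor trail along the celestial equator (a perfect great circle)
res = great_circle_residuals(np.linspace(10, 20, 6), np.zeros(6))
```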

    Digital Measurement and Analysis of All-Sky Meteor Photographs

    Within the European Fireball Network, about 100 photographs of bright fireballs are obtained each year. At present, however, only a fraction of this material can be fully analysed, because the manual measurement of the photographs on a measuring table is very labour-intensive. As part of this diploma thesis, the program "Fireball" for the automated measurement of fireball photographs was developed. The thesis begins with a comprehensive introduction to meteor observation. The digital image processing algorithms form its main focus; they comprise the digitization of the negatives, the automatic segmentation, identification and measurement of star trails (without additional information), and the manual measurement of the meteor trail in the digitized image. The interface to downstream analysis software is then discussed. The second focus is the design of a convenient graphical user interface, implemented in C using the Motif toolkit under UNIX. The thesis closes with an outlook on future developments, a glossary and a bibliography.
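    The segmentation step described above can be sketched as a simple threshold followed by connected-component labelling. This is only an illustration of the idea, not the algorithm implemented in the "Fireball" program:

```python
import numpy as np
from collections import deque

def segment(image, threshold):
    """Threshold an image and label 4-connected bright regions (star trails).

    Returns a label array (0 = background) and the number of regions found.
    Minimal flood-fill labelling; a real trail measurement would also fit each
    region's centroid and orientation.
    """
    mask = image > threshold
    labels = np.zeros(image.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # pixel already belongs to a region
        current += 1
        queue = deque([seed])
        labels[seed] = current
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels, current

# toy frame with two separate bright trails
img = np.zeros((8, 8))
img[1, 1:4] = 1.0
img[5:7, 6] = 1.0
labels, n = segment(img, 0.5)
```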

    Histogram based normalization in the acoustic feature space

    No full text
    We describe a technique called histogram normalization that aims at normalizing feature space distributions at different stages of the signal analysis front-end, namely the log-compressed filterbank vectors, the cepstrum coefficients, and the LDA-transformed acoustic vectors. The best results are obtained at the filterbank, and in most cases there is a minor additional gain when normalization is applied sequentially at several stages. We show that histogram normalization performs best if applied in both training and recognition, and that smoothing the target histogram obtained on the training data is also helpful. On the VerbMobil II corpus, a German large-vocabulary conversational speech recognition task, we achieve an overall reduction in word error rate of about 10% relative.
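    The core of histogram normalization is to map each feature value x to F_ref⁻¹(F_feat(x)), so that the feature distribution matches a reference (training) distribution. A minimal one-dimensional sketch using empirical quantiles; this illustrates the idea, not the paper's implementation (which works per filterbank channel inside the front-end):

```python
import numpy as np

def histogram_normalize(features, reference):
    """Map a 1-D feature stream onto a reference distribution by CDF matching.

    Each value x is replaced by F_ref^-1(F_feat(x)), approximated here with
    empirical quantiles on a fixed grid.  In an ASR front-end this would be
    applied separately to each filterbank channel.
    """
    q = np.linspace(0.0, 1.0, 101)          # quantile grid
    src = np.quantile(features, q)           # empirical inverse CDF of source
    ref = np.quantile(reference, q)          # empirical inverse CDF of target
    ranks = np.interp(features, src, q)      # rank of each value under F_feat
    return np.interp(ranks, q, ref)          # push ranks through F_ref^-1

rng = np.random.default_rng(0)
skewed = rng.exponential(size=2000)          # mismatched "test" features
target = rng.normal(size=2000)               # "training" distribution
out = histogram_normalize(skewed, target)
```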

    Feature Space Normalization In Adverse Acoustic Conditions

    No full text
    We study the effect of different feature space normalization techniques in adverse acoustic conditions. Recognition tests are reported for cepstral mean and variance normalization, histogram normalization, feature space rotation, and vocal tract length normalization on a German isolated-word recognition task with large acoustic mismatch: the training data was recorded in a clean office environment and the test data in cars. Speech recognition failed completely without normalization on the highway data set, whereas the word error rate could be reduced to 17% with an online setup and to 10% with an offline setup.
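    Cepstral mean and variance normalization, the simplest of the techniques listed, shifts each coefficient track to zero mean and unit variance over an utterance, removing a stationary channel offset and part of the gain mismatch. A minimal sketch, not the system's actual front-end code:

```python
import numpy as np

def cmvn(features):
    """Cepstral mean and variance normalization over one utterance.

    features : (frames, coeffs) array.  Each coefficient track is shifted to
    zero mean and scaled to unit variance; the epsilon guards against
    division by zero for constant tracks.
    """
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    return (features - mean) / np.maximum(std, 1e-8)

# toy utterance: 200 frames of 13 cepstral coefficients with an offset
utt = np.random.default_rng(1).normal(loc=3.0, scale=2.0, size=(200, 13))
norm = cmvn(utt)
```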

    Efficient Vocal Tract Normalization in Automatic Speech Recognition

    No full text
    In this paper we study the effect of vocal tract normalization (VTN) on the word error rate (WER) in speaker-independent large-vocabulary speech recognition. Evaluation test results are reported for the German VerbMobil II (VM II) and the English Wall Street Journal (WSJ) corpus. In particular, we analyse: the effect of the type of warping function (linear vs. non-linear) on the WER; different methods for estimating the warping factor in recognition; incremental warping factor estimation for single-pass online recognition; and the phoneme dependence of the warping factors. We find that a simple piecewise linear warping function performs better than non-linear frequency warping. In recognition, a two-pass approach performs as well as supervised VTN on the reference transcription, even if the WER of the first recognition pass is of the order of 20–30%. Fast warping factor estimation with text-independent models results in only a slight performance degradation but allows the …
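    A piecewise linear warping function of the kind found to work best can be sketched as follows. The breakpoint fraction and band edge below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def warp_frequency(f, alpha, f_max=8000.0, f_break=0.875):
    """Piecewise linear VTN frequency warping.

    Frequencies below the breakpoint are scaled by the warping factor alpha;
    above it, a second linear segment maps the warped breakpoint to f_max so
    the band edges stay fixed.  f_break is the breakpoint as a fraction of
    f_max (an illustrative choice).
    """
    f = np.asarray(f, dtype=float)
    fb = f_break * f_max
    low = alpha * f
    # second segment from (fb, alpha*fb) to (f_max, f_max): continuous at fb
    high = alpha * fb + (f_max - alpha * fb) * (f - fb) / (f_max - fb)
    return np.where(f <= fb, low, high)

# alpha > 1 stretches low frequencies while keeping the band edge at 8 kHz
warped = warp_frequency([1000.0, 7000.0, 8000.0], alpha=1.1)
```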

    Automatic Transcription Verification Of Broadcast News And Similar Speech Corpora

    In the last few years, the focus in ASR research has shifted from the recognition of clean read speech (e.g. WSJ) to the more challenging task of transcribing found speech such as broadcast news (the Hub-4 task) and telephone conversations (Switchboard). Available training corpora tend to become larger and more erroneous than before, as transcribing found speech is more difficult. In this paper we present a method to automatically detect faulty training transcriptions. Based on the Hub-4 task, we report on the efficiency of error detection with the proposed method and investigate the effect of both manually and automatically cleaned training corpora on the word error rate (WER) of the RWTH large-vocabulary continuous speech recognition (LVCSR) system. This work is a joint effort of RWTH Aachen University of Technology and Philips Research Laboratories Aachen, Germany.
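    Detection of faulty transcriptions can be sketched as comparing each given transcript against the recognizer's output and flagging utterances with high disagreement. The threshold and the toy data below are illustrative assumptions, not the paper's actual procedure or values:

```python
def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two token lists."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(prev[j] + 1,            # deletion
                         cur[j - 1] + 1,          # insertion
                         prev[j - 1] + (r != h))  # substitution / match
        prev = cur
    return prev[-1]

def flag_faulty(corpus, max_wer=0.5):
    """Flag utterances whose transcript disagrees strongly with the recognizer.

    corpus : list of (transcript, recognizer_output) string pairs.  A
    per-utterance WER above max_wer marks the transcript as suspect; the
    threshold is illustrative.
    """
    flagged = []
    for idx, (ref, hyp) in enumerate(corpus):
        ref_w, hyp_w = ref.split(), hyp.split()
        wer = edit_distance(ref_w, hyp_w) / max(len(ref_w), 1)
        if wer > max_wer:
            flagged.append(idx)
    return flagged

# toy corpus: the second transcript clearly does not match the audio
corpus = [("the news at nine", "the news at nine"),
          ("stock prices rose", "weather will be sunny")]
bad = flag_faulty(corpus)
```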