57 research outputs found

    Application of different building representation techniques in HEC-RAS 2-D for urban flood modeling using the Toce River experimental case

    This paper presents the impact of the choice of building representation technique and hydrodynamic model on urban flood simulations using HEC-RAS 2-D for the Toce River physical model. To this end, eight numerical models based on previous laboratory experiments were prepared to simulate unsteady urban flooding on each side of the building units. Two simplified building layouts (aligned and staggered) were examined, and models were prepared for two building representation techniques: Building Block (BB) and Building Resistance (BR). Water depth variations computed with the BR and BB techniques were compared with the laboratory measurements and with previous studies in the literature. The performance of the models was evaluated statistically using the Root Mean Square Error (RMSE) and the Pearson Product-Moment Correlation Coefficient (PPMCC). A sensitivity analysis was carried out to select an appropriate mesh resolution and model parameter values. The BR technique proved well suited to representing building units in numerical simulations when high Manning coefficients are used. This result underlines the practical value of the BR technique, which should help researchers use low-resolution Digital Elevation Models (DEMs) together with open-source programs. More broadly, the study aims to deepen the understanding of numerical modeling of urban flooding.
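    To make the evaluation criteria concrete, the following minimal sketch computes the two metrics named above from paired water-depth series; the gauge values are hypothetical stand-ins, not the Toce River measurements.

```python
import numpy as np

def rmse(observed, simulated):
    """Root Mean Square Error between measured and simulated water depths."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return np.sqrt(np.mean((simulated - observed) ** 2))

def ppmcc(observed, simulated):
    """Pearson Product-Moment Correlation Coefficient."""
    return np.corrcoef(observed, simulated)[0, 1]

# Hypothetical gauge time series (m): laboratory measurement vs. a BR-technique run
measured = [0.02, 0.05, 0.09, 0.11, 0.10, 0.08]
br_model = [0.03, 0.06, 0.08, 0.12, 0.09, 0.08]

print(f"RMSE  = {rmse(measured, br_model):.4f} m")   # lower is better
print(f"PPMCC = {ppmcc(measured, br_model):.4f}")    # closer to 1 is better
```

    Low RMSE together with PPMCC close to 1 indicates that a simulated depth series tracks the laboratory gauge well.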

    Probable rainfall in Gdańsk in view of climate change

    One manifestation of climate change is the growing number of precipitation events of high rain intensity, which affect the economic stability of cities. Gdańsk is an example of a city in which such events have occurred since the beginning of the twenty-first century. Due to the altitude differences within the city of Gdańsk (between –2 m and 180 m a.s.l.), extreme atmospheric precipitation almost immediately causes hydrological effects in the water network, which consists of several streams of montane character flowing eastwards from the plateau of the Kashubian Lakeland. Meteorological stations of the National Meteorological Service (IMGW-PIB) are located in the coastal zone (Port Północny/Northern Port, Świbno) and in the highest part of the city (the Rębiechowo airport). Because this coverage is insufficient, the city of Gdańsk has been expanding a local rain monitoring network since 2001 and now holds reliable 10-year observation sequences. The network is operated by the Gdańsk Water municipal company. Climate changes that alter the characteristics of rainfall episodes in Gdańsk naturally influence the estimated probability of their occurrence. According to the rainfall model developed by Bogdanowicz and Stachy at the turn of the 20th and 21st centuries, at least 4 rainfall events lasting over 8 hours in the last 17 years should be classified as 100-year rain events. One of them exceeded the parameters of a 300-year rain event, and we assess the rain of July 14, 2016, when as much as 170 mm of rainfall was recorded, as at least a 500-year event. During this period, events lasting only several minutes were also recorded that likewise exceeded the parameters of a 100-year rain event. The paper presents precipitation models for the region of Gdańsk. Based on the annual maximum daily rainfall recorded at the Rębiechowo meteorological station in the years 1974–2017, an analysis of changes in precipitation values corresponding to given probabilities of occurrence was conducted. An assessment was also made of the projected decrease in precipitation values in relation to hydro-technical constructions, road-engineering structures, and rainwater drainage systems, in view of changing legal regulations and the latest trends in rainwater management.
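    As an illustration of how return-period rainfall can be estimated from an annual-maximum series, the sketch below fits a Gumbel (EV1) distribution, a common choice for annual rainfall maxima; the depths are invented placeholders, not the Rębiechowo 1974–2017 record, and the Bogdanowicz–Stachy model itself has a different form.

```python
import numpy as np
from scipy import stats

# Placeholder annual maximum daily rainfall (mm) -- NOT the Rębiechowo record
annual_max = np.array([38.1, 52.4, 44.0, 61.3, 35.7, 70.2, 48.9, 55.5, 42.8, 66.0])

# Fit a Gumbel (EV1) distribution to the annual maxima
loc, scale = stats.gumbel_r.fit(annual_max)

# Depth with return period T years = quantile at non-exceedance probability 1 - 1/T
for T in (10, 100, 500):
    depth = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T:>3}-year daily rainfall = {depth:.1f} mm")
```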

    Object’s Optical Geometry Measurements Based on EDoF Approach

    Machine vision is becoming increasingly popular in manufacturing. Although vision techniques have many advantages, numerous problems remain. One drawback is that, when measuring or performing an inspection task, the image resolution must be as high as possible. When inspecting an object of complicated geometry with a specific lens and camera to achieve a given accuracy, the field of view or the depth of field may be insufficient for the task. Using cameras mounted on manipulators or on moving stages solves the problem, but it introduces various implementation issues. During the measurement process, when the camera or the inspected object is moving, images are acquired and must be processed separately. If the inspection task is simple, feature extraction might be sufficient; if the image processing is more complex, processing each image separately may be time-consuming. For example, when a feature is located on the border of an image, two or more images containing the feature must be combined to measure or properly assess it. For field-of-view limitations, methods of image stitching and combining are known [1,2]. When the depth of field is narrow, for example when using fixed telecentric lenses, the problem is more complex. Extended Depth of Field (EDoF) is an approach known from microscopy imaging. It allows stitching of images taken from a range of minimally spaced distances. Acquiring images of the same object with differently placed depths of field reveals elements otherwise hidden by the shallow depth of field. Methods of extracting information from a set of images taken with different depths of field are known in microscopy and widely used [3,4]. In contrast, EDoF is not used in non-microscopic inspections, because changing the focal distance to the inspected object resizes the object in the frame; the longer the focal length, the stronger this scaling effect. The authors propose a method of using EDoF in macro inspections with bi-telecentric lenses and a specially designed experimental machine setup that allows accurate focal-distance changes. A software method is also presented that reconstructs the EDoF image using the continuous wavelet transform (CWT). The results of the proposed method are additionally compared, for reference, with measurements performed with a Keyence LJ-V Series in-line profilometer.
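    The selection principle behind EDoF reconstruction can be sketched as follows; the paper uses a continuous wavelet transform as the focus measure, whereas this simplified stand-in uses the local response of a Laplacian filter.

```python
import numpy as np
from scipy import ndimage

def focus_measure(img):
    """Per-pixel sharpness: smoothed absolute Laplacian response
    (a simple stand-in for the paper's CWT-based focus measure)."""
    lap = np.abs(ndimage.laplace(img.astype(float)))
    return ndimage.uniform_filter(lap, size=9)

def edof_fuse(stack):
    """Fuse a focal stack (pre-registered grayscale images taken at
    different focal distances) by keeping, per pixel, the value from
    the sharpest frame."""
    imgs = np.stack([img.astype(float) for img in stack])        # (N, H, W)
    measures = np.stack([focus_measure(img) for img in stack])   # (N, H, W)
    best = np.argmax(measures, axis=0)                           # sharpest frame index
    return np.take_along_axis(imgs, best[None], axis=0)[0]       # (H, W)
```

    The sketch assumes the frames are already registered and equally scaled, which is exactly what the bi-telecentric optics and the precision focal stage are meant to guarantee.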

    Electrocardiographic algorithms to guide a management strategy of idiopathic outflow tract ventricular arrhythmias

    The current guidelines of the European Society of Cardiology outline electrocardiographic (ECG) differentiation of the site of origin (SoO) in patients with idiopathic ventricular arrhythmias (IVAs). The aim of this study was to compare 3 ECG algorithms for differentiating the SoO and to determine their diagnostic value for the management of outflow tract IVA. We analyzed 202 patients (mean age [SD]: 45 [16.7] years; 133 women [66%]) with IVAs with the inferior axis (130 premature ventricular contractions or ventricular tachycardias from the right ventricular outflow tract [RVOT]; 72 from the left ventricular outflow tract [LVOT]), who underwent successful radiofrequency catheter ablation (RFCA) using a 3-dimensional electroanatomical system. The ECGs before ablation were analyzed using custom-developed software. Automated measurements were performed for the 3 algorithms: 1) the novel transitional zone (TZ) index, 2) V2S/V3R, and 3) the V2 transition ratio. The results were compared with the SoO of acutely successful RFCA. The V2S/V3R algorithm predicted the left-sided SoO with a sensitivity and specificity close to 90%. The TZ index showed higher sensitivity (93%) with lower specificity (85%). In the subgroup with the transition zone in lead V3 (n = 44, 15 from the LVOT), the sensitivity and specificity of the V2 transition-ratio algorithm were 100% and 45%, respectively. The combined TZ index + V2S/V3R algorithm (the LVOT was considered only when both algorithms suggested an LVOT SoO) increased the specificity of LVOT SoO prediction to 98% with a sensitivity of 88%. The combined TZ index and V2S/V3R algorithm allowed accurate and simple identification of the SoO of IVA. A prospective study is needed to determine a strategy for skipping RVOT mapping in patients with LVOT arrhythmias indicated by the 2 combined algorithms.
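    A minimal sketch of the combined decision rule described above: classify the site of origin as LVOT only when both algorithms point left. The cutoff values below are illustrative placeholders, not the thresholds validated in the study.

```python
def predict_soo(tz_index, v2s_mv, v3r_mv, tz_cutoff=0.0, ratio_cutoff=1.5):
    """Return the predicted site of origin. LVOT is chosen only when BOTH
    the TZ index and the V2S/V3R ratio suggest a left-sided origin;
    cutoffs are hypothetical, not taken from the study."""
    tz_says_lvot = tz_index < tz_cutoff
    ratio_says_lvot = (v2s_mv / v3r_mv) <= ratio_cutoff
    return "LVOT" if (tz_says_lvot and ratio_says_lvot) else "RVOT"

print(predict_soo(tz_index=-0.5, v2s_mv=0.9, v3r_mv=0.8))  # -> LVOT
print(predict_soo(tz_index=1.0,  v2s_mv=0.9, v3r_mv=0.8))  # -> RVOT (algorithms disagree)
```

    Requiring agreement of both algorithms trades a few points of sensitivity (88% combined vs. 93% for the TZ index alone) for a large gain in specificity (98% vs. 85%), matching the figures reported above.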

    Weld Joints Inspection Using Multisource Data and Image Fusion

    The problem of inspecting weld joints is very complex, especially in critical parts of machines and vehicles. A welded joint is typically inspected visually, chemically, or using radiographic imaging. Flaw detection is a task for specialized personnel, who analyze all the data at each stage of the inspection process separately. The inspection is prone to human error and labor intensive. During visual control of the weld joint, geometrical measurements are performed: joint alignment, straightness, and deformation, as well as the weld's uniformity. Coloration may reveal the heat-affected zone and melted parts of the base material; unwanted cracks, pores, and other surface defects can also be spotted at this stage. X-ray inspection, in turn, can reveal hidden flaws: pores, cracks, lack of penetration, and slag inclusions. The authors' goal was to develop a multisource data system for easier flaw detection and, possibly, automation of the inspection process. The method uses three image sources: an X-ray system, a laser profilometer, and an imaging camera. The proposed approach consists of combining the spatial information in the data acquired from all sources, and a novel data-mixing scheme is proposed to benefit from all the information. The signal from the profilometer enables extraction of geometrical information and assessment of deformation and alignment errors. The radiogram provides information about hidden flaws. The color image provides information about the texture and color of the surface and also helps in combining the multiple sources.
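    A minimal sketch of the data-mixing idea, under the assumption that the three sources are already registered to a common pixel grid (the registration step itself is not reproduced here): each source becomes one channel of a fused array, so geometry, internal flaws, and surface appearance can be analyzed jointly.

```python
import numpy as np

def fuse_sources(height_map, xray, color):
    """Stack pre-registered, same-resolution weld images into one
    three-channel array: profilometer height (geometry), radiogram
    (hidden flaws), and camera intensity (surface texture/color)."""
    def norm(a):
        a = a.astype(float)
        return (a - a.min()) / (np.ptp(a) + 1e-9)   # scale each source to [0, 1]

    gray = color.mean(axis=-1) if color.ndim == 3 else color
    return np.dstack([norm(height_map), norm(xray), norm(gray)])  # (H, W, 3)
```

    A defect detector, classical or learned, can then operate on the fused array instead of three separate inspection passes.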

    Observational hints on the Big Bounce

    In this paper we study possible observational consequences of bouncing cosmology. We consider a model in which a phase of inflation is preceded by a cosmic bounce. While we assume here that the bounce is due to loop quantum gravity, most of the results presented can be applied to other bouncing cosmologies. We concentrate on the scenario where the scalar field, as a result of the contraction of the universe, is driven up from the bottom of the potential well. The field is amplified, and finally a phase of standard slow-roll inflation is realized. Such an evolution modifies the standard inflationary spectrum of perturbations by additional oscillations and damping on large scales. We extract the parameters of the model from observations of the cosmic microwave background radiation; in particular, the inflaton mass is found to be $m = (2.6 \pm 0.6) \cdot 10^{13}$ GeV. Our considerations are based on seven years of observations made by the WMAP satellite. We propose a new observational consistency check for the phase of slow-roll inflation and investigate the conditions that must be fulfilled for observations of Big Bounce effects to be possible. We translate them into requirements on the parameters of the model and then put observational constraints on the model. Based on assumptions usually made in loop quantum cosmology, the Barbero-Immirzi parameter was shown to be constrained from cosmological observations to $\gamma < 1100$. We compare the Big Bounce model with the standard Big Bang scenario and show that the present observational data are not informative enough to distinguish between these models.
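    For orientation, the kind of spectrum being constrained can be written schematically as below; the quadratic potential is assumed here because a single inflaton mass is quoted, and the modulation factor is only an illustrative ansatz, not the expression derived in the paper.

```latex
% Quadratic inflaton potential (an assumption, consistent with quoting one mass m):
%   V(\phi) = \tfrac{1}{2} m^2 \phi^2 , \qquad m = (2.6 \pm 0.6) \cdot 10^{13}\ \mathrm{GeV}
%
% Schematic primordial spectrum: slow-roll power law times a bounce correction
% that introduces oscillations and damping only on large scales (small k):
\begin{equation}
  \mathcal{P}_{\mathcal{R}}(k)
    = A_s \left(\frac{k}{k_*}\right)^{n_s - 1} \mathcal{C}(k),
  \qquad
  \mathcal{C}(k) \to 1 \ \ \text{for } k \gg k_b .
\end{equation}
```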

    Expression of genes KCNQ1 and HERG encoding potassium ion channels Ikr, Iks in long QT syndrome

    Background: Mutations in the KCNQ1 and HERG genes are responsible for specific types of congenital long QT syndrome (LQTS). Aim: To examine the expression of the KCNQ1 and HERG genes, which encode the potassium channels (slow and rapid, respectively) responsible for the occurrence of particular types of LQTS, and to show that beta-actin is a good endogenous control when determining the expression of the studied genes. Methods: The study enrolled six families whose members suffered from either LQT1 or LQT2, or were healthy. Gene expression was examined with quantitative real-time PCR (QRT-PCR) and quantified as the number of mRNA copies per 1 mg of total RNA isolated from whole blood. Data were exported from Excel to Statistica v. 7.1 for statistical analysis. Results and conclusions: On the basis of the KCNQ1 gene expression profile, the presence or absence of LQT1 could be confirmed: a statistically significant difference (p = 0.031) was found in the number of KCNQ1 gene copies between patients and healthy controls. On the basis of the HERG (KCNH2) gene expression profile, patients with LQT2 could not be unequivocally differentiated from healthy subjects (p = 0.37). Kardiol Pol 2011; 69, 5: 423–429.
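    The abstract reports p-values without naming the test; the sketch below shows one plausible way such a two-group comparison of copy numbers could be run, with invented counts in place of the measured data.

```python
from scipy.stats import mannwhitneyu

# Invented mRNA copy numbers (per unit total RNA) -- illustrative only
kcnq1_patients = [1200, 950, 1100, 1300, 870, 1250]
kcnq1_controls = [2100, 1900, 2300, 2050, 1750, 2200]

# Two-sided nonparametric comparison; the paper does not state which test
# produced p = 0.031, so Mann-Whitney U is just one reasonable choice for
# small samples.
stat, p = mannwhitneyu(kcnq1_patients, kcnq1_controls, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```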

    Mechanical thrombectomy in acute stroke – Five years of experience in Poland

    Objectives: Mechanical thrombectomy (MT) is not reimbursed by the Polish public health system. We present a description of 5 years of experience with MT in acute stroke in Comprehensive Stroke Centers (CSCs) in Poland. Methods: We retrospectively analyzed the results of a structured questionnaire from 23 of 25 identified CSCs, together with 22 data sets comprising 61 clinical, radiological, and outcome measures. Results: Most of the CSCs (74%) were founded at university hospitals and most (65.2%) work round the clock. In 78.3% of them, the working teams are composed of neurologists and neuroradiologists. All CSCs perform CT and CT angiography before MT. In total, 586 patients underwent MT, and data from 531 of them were analyzed. The mean time from stroke onset to groin puncture was 250 ± 99 min. 90.3% of the studied patients had MT within 6 h of stroke onset; 59.3% were treated with IV rt-PA prior to MT; 15.1% received IA rt-PA during MT; and 4.7% underwent emergent stenting of a large vessel. The M1 segment of the MCA was occluded in 47.8% of cases. The Solitaire device was used in 53% of cases. Successful recanalization (TICI 2b-TICI 3) was achieved in 64.6% of cases, and 53.4% of patients did not experience hemorrhagic transformation. Clinical improvement on discharge was noted in 53.7% of cases, futile recanalization in 30.7%, mRS of 0-2 in 31.4%, and mRS of 6 in 22% of cases. Conclusion: Our results can help harmonize standards for MT in Poland according to international guidelines.
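    Registry percentages like those above are more informative with confidence intervals attached; the sketch below back-calculates approximate event counts from the reported rates (n = 531) and computes Wilson intervals, purely as an illustration of how such results could be summarized.

```python
from statsmodels.stats.proportion import proportion_confint

n = 531  # patients with analyzed data (from the paper)
# Approximate event counts back-calculated from the reported percentages
outcomes = {
    "TICI 2b-3 recanalization": round(0.646 * n),
    "mRS 0-2 at discharge":     round(0.314 * n),
    "mRS 6 (death)":            round(0.220 * n),
}
for name, k in outcomes.items():
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{name}: {k}/{n} = {k/n:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```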

    AIC, BIC, Bayesian evidence against the interacting dark energy model

    Recent astronomical observations indicate that the Universe is in a phase of accelerated expansion. While many cosmological models try to explain this phenomenon, we focus on the interacting $\Lambda$CDM model, in which an interaction takes place between the dark energy and dark matter sectors. This model is compared to its simpler alternative, the $\Lambda$CDM model. To choose between these models, the likelihood ratio test was applied as well as model comparison methods employing Occam's principle: the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the Bayesian evidence. Both models were evaluated using current astronomical data: SNIa (Union2.1), $h(z)$, BAO, the Alcock-Paczynski test, and the CMB. The analysis based on the AIC indicated that there is less support for the interacting $\Lambda$CDM model than for the $\Lambda$CDM model, while the analysis based on the BIC indicated strong evidence against it in favor of the $\Lambda$CDM model. Given the weak or almost nonexistent support for the interacting $\Lambda$CDM model, and bearing in mind Occam's razor, we are inclined to reject this model.
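    The two information criteria reduce to simple formulas: AIC = 2k - 2 ln L and BIC = k ln(n) - 2 ln L, where L is the maximized likelihood, k the number of free parameters, and n the number of data points; a positive difference for the interacting model counts against it. The sketch below uses invented best-fit log-likelihoods, not the paper's values.

```python
import numpy as np

def aic(log_like, k):
    """Akaike information criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * log_like

def bic(log_like, k, n):
    """Bayesian information criterion: k ln(n) - 2 ln(L)."""
    return k * np.log(n) - 2 * log_like

# Invented best-fit log-likelihoods -- NOT values from the paper
lnL_lcdm, k_lcdm = -272.1, 4   # LambdaCDM
lnL_int,  k_int  = -271.6, 5   # interacting model: one extra parameter
n = 580                        # e.g. number of SNIa data points

print("dAIC =", aic(lnL_int, k_int) - aic(lnL_lcdm, k_lcdm))        # 1.0
print("dBIC =", bic(lnL_int, k_int, n) - bic(lnL_lcdm, k_lcdm, n))  # ~5.4
# BIC penalizes the extra parameter more strongly (Occam's razor), which is
# why it can yield "strong evidence" against the richer model even when the
# AIC difference is mild.
```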