
    Left and right ventricle assessment with Cardiac CT: validation study vs. Cardiac MR

    Objectives To compare Magnetic Resonance (MR) and Computed Tomography (CT) for the assessment of left (LV) and right (RV) ventricular functional parameters. Methods Seventy-nine patients underwent both cardiac CT and cardiac MR. Images were acquired using short axis (SAX) reconstructions for CT and a 2D cine b-SSFP (balanced steady state free precession) SAX sequence for MR, and were evaluated using dedicated software. Results CT and MR images showed good agreement: LV EF (Ejection Fraction) (52±14% for CT vs. 52±14% for MR; r = 0.73; p>0.05); RV EF (47±12% for CT vs. 47±12% for MR; r = 0.74; p>0.05); LV EDV (End Diastolic Volume) (74±21 ml/m² for CT vs. 76±25 ml/m² for MR; r = 0.59; p>0.05); RV EDV (84±25 ml/m² for CT vs. 80±23 ml/m² for MR; r = 0.58; p>0.05); LV ESV (End Systolic Volume) (37±19 ml/m² for CT vs. 38±23 ml/m² for MR; r = 0.76; p>0.05); RV ESV (46±21 ml/m² for CT vs. 43±18 ml/m² for MR; r = 0.70; p>0.05). Intra- and inter-observer variability were good, and the performance of CT was maintained across the different EF subgroups. Conclusions Cardiac CT provides accurate and reproducible LV and RV volume parameters compared with MR, and can be considered a reliable alternative for patients who are not suitable for MR. Key Points • Cardiac CT is able to assess left and right ventricular function. • Cardiac CT is as accurate as MR for LV and RV volume assessment. • Cardiac CT can provide accurate evaluation of the coronary arteries as well as LV and RV function.
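    The ejection fraction values above follow directly from the reported indexed volumes. As a quick sanity check, a minimal sketch of the standard calculation (not the dedicated analysis software used in the study) is:

```python
def ejection_fraction(edv: float, esv: float) -> float:
    """Ejection fraction (%) from end-diastolic and end-systolic volumes.

    Works for indexed (ml/m^2) or absolute (ml) volumes alike, since any
    body-surface-area normalisation cancels out in the ratio.
    """
    stroke_volume = edv - esv
    return 100.0 * stroke_volume / edv

# Mean CT values reported above: LV EDV 74 ml/m^2, LV ESV 37 ml/m^2
print(round(ejection_fraction(74, 37)))  # -> 50, consistent with the reported 52 +/- 14 %
```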

    An Experimental Investigation of Leak Rate Performance of a Subscale Candidate Elastomer Docking Space Seal

    A novel docking seal was developed for the main interface seal of NASA's Low Impact Docking System (LIDS). This interface seal was designed to maintain acceptable leak rates while being exposed to the harsh environmental conditions of outer space. In this experimental evaluation, a candidate docking seal assembly called Engineering Development Unit (EDU58) was characterized and evaluated against the Constellation Project leak rate requirement. The EDU58 candidate seal assembly was manufactured from silicone elastomer S0383-70 vacuum molded into a metal retainer ring. Four seal designs, each with a unique characteristic height, were considered. The leak rate performance was characterized through a mass point leak rate method, in which gas properties were monitored within an internal control volume. The leakage performance of the seals is described herein at representative docking temperatures of -50, +23, and +50 °C for all four seal designs. Leak performance was also characterized at 100, 74, and 48 percent of full closure. For all conditions considered, the candidate seal assemblies met the Constellation Project leak rate requirement.
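    For context, the mass point method infers leakage from the decay of gas mass in a sealed control volume. A minimal sketch of that idea, assuming ideal gas behaviour and a known volume (the specific instrumentation and data reduction used in the NASA test rig are not reproduced here), is:

```python
R_UNIVERSAL = 8.314462618  # J / (mol K)

def mass_point_leak_rate(p_pa, t_k, times_s, volume_m3, molar_mass_kg_per_mol):
    """Estimate leak rate (kg/s) from pressure/temperature histories of a sealed volume.

    Mass point method: the gas mass at each sample is m = p * V * M / (R * T)
    (ideal gas), and the leak rate is the slope of the mass-versus-time series
    (a simple least-squares fit here).
    """
    masses = [p * volume_m3 * molar_mass_kg_per_mol / (R_UNIVERSAL * t)
              for p, t in zip(p_pa, t_k)]
    n = len(times_s)
    t_mean = sum(times_s) / n
    m_mean = sum(masses) / n
    slope = (sum((t - t_mean) * (m - m_mean) for t, m in zip(times_s, masses))
             / sum((t - t_mean) ** 2 for t in times_s))
    return -slope  # positive value = mass leaving the control volume

# Hypothetical air-filled 1-litre control volume sampled once a minute
pressures = [101325.0, 101310.0, 101295.0]   # Pa
temperatures = [296.15, 296.15, 296.15]      # K
print(mass_point_leak_rate(pressures, temperatures, [0, 60, 120], 1e-3, 0.028964))
```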

    Operational Performance Evaluation of Post Office – Teaching Hospital Road, Ile – Ife, Nigeria

    Traffic demand is increasing significantly in our urban centres without a commensurate increase in the rate of road infrastructure development, resulting in traffic congestion. The need to periodically evaluate the operational performance of these roads, with a view to addressing this problem, therefore becomes imperative; hence this study. Speed and traffic flow data were collected during the morning and evening peaks for seven days on two segments of the selected urban road in Ile-Ife, using standard procedures. Traffic flow parameters such as travel speed, traffic volume and capacity were computed, and the operational performance was evaluated. The study showed that motorcycles were the predominant means of transport (50%), followed by buses (25%), cars (23%) and trucks (2%), while the average travel speed was 41 km/h. An average traffic capacity of 1306 pc/hr/ln was also obtained. The results revealed that the capacity and operating speed of the road fell short of the values of 1500 pc/hr/ln and 50–60 km/h, respectively, expected of an urban two-way two-lane highway. The road is therefore prone to congestion. Keywords: Traffic Demand, Road Infrastructure, Operational Performance.
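    Capacities quoted in pc/hr/ln require converting the mixed traffic stream into passenger car units. A minimal sketch of that conversion is shown below; the equivalency factors are illustrative assumptions, not the values used in the study.

```python
# Illustrative passenger-car-unit (PCU) factors; the study's exact
# equivalency factors are not stated here, so these are assumptions.
PCU_FACTORS = {"car": 1.0, "motorcycle": 0.5, "bus": 2.0, "truck": 2.5}

def flow_in_pcu_per_hour(hourly_counts: dict) -> float:
    """Convert a mixed hourly vehicle count on one lane into passenger car units per hour."""
    return sum(count * PCU_FACTORS[vehicle] for vehicle, count in hourly_counts.items())

# Hypothetical one-hour count on one lane of the study road
counts = {"motorcycle": 500, "bus": 250, "car": 230, "truck": 20}
print(f"{flow_in_pcu_per_hour(counts):.0f} pc/hr/ln vs. the 1500 pc/hr/ln benchmark")
```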

    On the use of word embedding for cross language plagiarism detection

    Cross-language plagiarism is the unacknowledged reuse of text across language pairs. It occurs when a passage of text is translated from a source language to a target language and no proper citation is provided. Although various methods have been developed for the detection of cross-language plagiarism, less attention has been paid to measuring and comparing their performance, especially when dealing with different types of paraphrasing through translation. In this paper, we investigate various approaches to cross-language plagiarism detection. Moreover, we present a novel approach to cross-language plagiarism detection using word embedding methods and explore its performance against other state-of-the-art plagiarism detection algorithms. In order to evaluate the methods, we have constructed an English-Persian bilingual plagiarism detection corpus (referred to as HAMTA-CL) comprising seven types of obfuscation. The results show that the word embedding approach outperforms the other approaches with respect to recall when encountering heavily paraphrased passages. On the other hand, the translation-based approach performs well when precision is the main consideration of the cross-language plagiarism detection system. Asghari, H.; Fatemi, O.; Mohtaj, S.; Faili, H.; Rosso, P. (2019). On the use of word embedding for cross language plagiarism detection. Intelligent Data Analysis. 23(3):661-680. https://doi.org/10.3233/IDA-183985
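    A minimal illustration of the general idea behind embedding-based cross-language similarity (not the authors' exact pipeline) is sketched below, assuming pre-aligned bilingual word vectors are available:

```python
import numpy as np

def sentence_vector(tokens, embeddings):
    """Average the word vectors of the tokens that are in the vocabulary."""
    vectors = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vectors, axis=0) if vectors else None

def cross_language_similarity(src_tokens, tgt_tokens, src_emb, tgt_emb):
    """Cosine similarity between two passages written in different languages.

    Assumes src_emb and tgt_emb map words into a shared (cross-lingually
    aligned) vector space, so a suspicious passage and a candidate source
    passage can be compared without translating either one.
    """
    v_src = sentence_vector(src_tokens, src_emb)
    v_tgt = sentence_vector(tgt_tokens, tgt_emb)
    if v_src is None or v_tgt is None:
        return 0.0
    return float(np.dot(v_src, v_tgt) / (np.linalg.norm(v_src) * np.linalg.norm(v_tgt)))
```

    Passage pairs whose similarity exceeds a tuned threshold would then be flagged as candidate plagiarism cases for closer alignment.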

    Long-Term Monitoring of Experimental Features, Subtask 2: Alexandria-Ashland Highway (KY 9) Pavement Performance Monitoring

    Construction on the AA Highway began in late 1985 and was completed in late 1990. Prior to construction, 30 different test sections had been designed into the highway for evaluation. The test sections cover 23 different characteristic qualities and vary in length: the segment lengths range from 1.28 to 9.13 miles, and each segment took one and a half to four years to complete. The sections were constructed with various pavement and shoulder designs, varied by parameters such as the type of subgrade stabilization, drainage type, surface class, surface aggregate, and more. The purpose of monitoring the performance of the AA Highway is to compare the different design types and determine the most feasible, long-lasting design available. Several factors affect the long-term performance of the pavement, including the volume of traffic, the classification of traffic, ESALs (equivalent single axle loads), and environmental factors; therefore, pavement performance cannot be attributed entirely to the design. The pavement performance was monitored periodically from construction through 1999. Falling weight deflectometer (FWD) measurements were made, distress surveys were conducted, and rideability data were collected from the Pavement Management Branch of the Division of Operations. Cracking of all types was the most prevalent form of distress in all the sections, and raveling was the second most prominent. Much of this distress was associated with crushed gravel surfaces; there was less cracking and raveling on sections that were paved with crushed limestone surface mixtures.
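    For readers unfamiliar with ESALs, traffic loading is commonly reduced to equivalent 18-kip standard axles using the classic fourth-power approximation. The sketch below illustrates that approximation only; it is not the report's traffic analysis procedure.

```python
STANDARD_AXLE_KIPS = 18.0  # 18,000 lb standard single axle

def approximate_esals(axle_loads_kips):
    """Rough ESAL estimate via the fourth-power approximation.

    Each axle contributes (load / 18 kip)^4 equivalent standard axle loads.
    Full AASHTO load equivalency factors also depend on pavement structure
    and terminal serviceability, which are ignored here.
    """
    return sum((load / STANDARD_AXLE_KIPS) ** 4 for load in axle_loads_kips)

# Hypothetical axle loads for a single truck passage
print(approximate_esals([18.0, 20.0, 12.0]))  # ~1.00 + 1.52 + 0.20 = 2.72
```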

    Development of an automatic heart area analysis method by applying deep learning to canine thoracic radiographs

    Master's thesis, Department of Public Health, Graduate School of Public Health, Seoul National University, February 2021. Advisor: 성주헌. Introduction: Measurement of canine heart size on lateral thoracic radiographs is crucial for detecting the heart enlargement caused by diverse cardiovascular diseases. The purpose of this study was 1) to develop deep learning (DL) models that segment the heart and the fourth thoracic vertebral (T4) body, 2) to develop a new radiographic measurement using the calculated two-dimensional heart area and the length of the T4 body, and 3) to evaluate the performance of the new measurement in detecting heart enlargement, using echocardiographic measurements as the gold standard. Methods: A total of 1,000 thoracic radiographic images of dogs were collected from the Seoul National University Veterinary Medicine Teaching Hospital between 2018-01-01 and 2020-08-31. Given ground truth masks, two Attention U-Nets, one for heart segmentation and one for T4 body segmentation, were trained with different hyperparameters. Of the 1,000 images, 800 were used for training, 100 for validation and 100 for testing. The performance of the DL models was assessed with the Dice score coefficient, precision and recall. A new calculation method was developed to estimate heart volume and adjust it by the T4 body length, named the vertebra-adjusted heart volume (VaHV). The correlation between the VaHV of the 100 test images and the reported VHS (vertebral heart score) was assessed. Using 188 images with concurrent echocardiographic examinations, the diagnostic performance of VaHV for detecting cardiomegaly was assessed with Student's t-test, receiver operating characteristic (ROC) curves and the area under the curve (AUC). Results: The two trained DL models showed very good agreement with the ground truth (Dice score coefficient 0.956 for heart segmentation, 0.844 for T4 body segmentation). The VaHV of the 100 test images showed a statistically significant correlation with VHS. VaHV showed better diagnostic performance than VHS in detecting left atrial enlargement and left ventricular enlargement. Conclusions: DL models can be used to segment the heart and vertebrae in veterinary radiographic images. The new radiographic measurement obtained from the DL models can potentially be used to assess and monitor cardiomegaly in dogs. [Korean abstract, translated] Many canine heart diseases, including mitral valve insufficiency, the most prevalent, are characterized by progressive cardiac enlargement, so measuring heart size on thoracic radiographs to diagnose cardiomegaly is essential for early detection and for planning timely treatment. The vertebral heart score (VHS) is widely used as a readily measurable index, but because it is a sum of one-dimensional lengths it can be limited in diagnosing cardiomegaly. The aim of this study was to build deep learning models that automatically extract the heart area and the vertebral body length from canine thoracic radiographs and to derive from them an index that estimates heart volume. The study used a total of 1,188 records collected from the Seoul National University Veterinary Medicine Teaching Hospital. One thousand images were used to train and evaluate the semantic segmentation models for the heart and the vertebral body, from which the new index, the vertebra-adjusted heart volume (VaHV), was calculated; an additional 188 images with radiographic and echocardiographic examinations performed less than one month apart were used to compare the VaHV computed by the trained models against the echocardiographic measurements (LA/Ao, LVIDDN) and to assess its ability to diagnose cardiomegaly. Improved Attention U-Nets with different hyperparameters were used to compensate for the area imbalance between the heart and the vertebral body, and both networks showed high agreement with the ground truth on the test set (Dice score coefficients 0.956 and 0.844). The VaHV computed from the network predictions showed a statistically significant correlation with the recorded VHS (r = 0.69) and a high predictive ability for echocardiographically defined enlargement (LA/Ao > 1.6, LVIDDN > 1.7), with an AUC of 0.818, exceeding that of the conventional VHS (AUC 0.805). By applying deep-learning-based semantic segmentation in veterinary radiology for the first time, this study shows that a wider range of neural network algorithms can be used with veterinary images, and that the two-dimensional heart area can complement existing length-based measures of heart size for diagnosing cardiomegaly.
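    The Dice score coefficient quoted above is a standard overlap measure between a predicted and a ground-truth binary mask. A minimal sketch (not the thesis code) is:

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice score coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))

# Toy 4x4 example: three of the four predicted pixels overlap the ground truth
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(dice_score(pred, gt), 3))  # 2*3 / (4+3) ~= 0.857
```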

    Emergency Department Management: Data Analytics for Improving Productivity and Patient Experience

    The onset of big data, typically defined by its volume, velocity, and variety, is transforming the healthcare industry. This research utilizes data on over 23 million emergency department (ED) visits between January 2010 and December 2017, treated by physicians and advanced practice providers from a large national emergency physician group. This group has provided ED services to health systems for several years, and each essay aims to address operational challenges faced by its management team. The first essay focuses on physician performance. We ask how to evaluate performance across multiple sites and work to understand the relationships between patient flow, patient complexity, and patient experience. Specifically, an evaluation system to assess physician performance across multiple facilities is proposed, the relationship between productivity and patient experience scores is explored, and the drivers of patient flow and complexity are simultaneously identified. The second essay explores the relationship between physician performance and malpractice claims, investigating whether physicians' practice patterns change after they are named in a malpractice lawsuit. Overall, the results of this analysis indicate that the likelihood of being named in a malpractice claim is largely a function of how long a physician has practiced. Furthermore, physician practice patterns remain consistent after a physician is sued, but patient experience scores increase among sued physicians after the lawsuit is filed. Such insights are beneficial for management as they address the issue of medical malpractice claims. The final essay takes a closer look at the relationship between advanced practice providers (APPs) and physicians: can EDs better utilize APPs to reduce waiting times and improve patient flow? A systematic data-driven approach incorporating descriptive, predictive, and prescriptive analyses is employed to provide recommendations for ED provider staffing practices.

    A thermodynamic analysis of forced convection through porous media using pore scale modeling

    The flow through porous media is analyzed from a thermodynamic perspective, with a particular focus on the entropy generation inside the porous medium, using a pore scale modeling approach. A single representative elementary volume was utilized to reduce the CPU time. Periodic boundary conditions were employed for the vertical boundaries by re-injecting the velocity and temperature profiles from the outlet to the inlet and iterating. The entropy generation was determined for both circular and square cross-sectional configurations, and the effects of different Reynolds numbers, covering the Darcy and Forchheimer regimes, were also taken into account. Three porosities were evaluated and discussed for each cross-sectional configuration, and the streamlines, isothermal lines and local entropy generation rate contours were determined and compared. The local entropy generation rate contours indicated that the highest entropy generation regions were close to the inlet for low Reynolds number flows and near the central cylinder for high Reynolds number flows. Increasing the Reynolds number from 100 to 200 revealed disturbances in the trend of the dimensionless volume-averaged entropy generation rate that may be due to a change in the flow regime. Evaluation of the Bejan number for both cross-sectional configurations demonstrated that the entropy generation is mainly caused by the heat transfer irreversibility. A performance evaluation criterion parameter was calculated for the different case studies; this parameter combines the first and second laws of thermodynamics to identify the conditions under which the least entropy generation and the highest Nusselt number are achieved simultaneously. According to the performance evaluation criterion, the square cross-section configuration with a porosity of 0.64 exhibits the best thermal performance for low Reynolds number flows. A comparison between the equal-porosity cases for the two cross-sectional configurations indicated that the square cross-section gave a higher performance evaluation criterion than the circular cross-section over a range of Reynolds numbers.
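    For context, conventional forms of the quantities discussed above are given below (standard definitions for convective heat transfer of a Newtonian fluid; the paper's exact nondimensionalisation is not reproduced here):

```latex
% Local volumetric entropy generation rate: a heat transfer (conduction)
% term plus a fluid friction (viscous dissipation) term.
\dot{S}'''_{\mathrm{gen}}
  = \frac{k}{T^{2}}\left(\nabla T\right)^{2} + \frac{\mu}{T}\,\Phi ,
\qquad
\mathrm{Be}
  = \frac{\dot{S}'''_{\mathrm{gen,\,heat}}}
         {\dot{S}'''_{\mathrm{gen,\,heat}} + \dot{S}'''_{\mathrm{gen,\,friction}}}
```

    Here k is the thermal conductivity, μ the dynamic viscosity and Φ the viscous dissipation function; a Bejan number approaching 1 indicates that heat transfer irreversibility dominates, which is the behaviour reported above.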