16 research outputs found

    Revisiting the anatomy of the cephalic vein, its origin, course and possible clinical correlations in relation to the anatomical snuffbox among Jordanians

    Background: The cephalic vein is one of the most prominent superficial veins of the upper limb, and its clinical value lies in venous access. Little is known about the variation in its formation in relation to the anatomical snuffbox, even though anatomical variants in the origin of the cephalic vein matter in clinical practice. This study was therefore designed to examine the variation in cephalic vein formation in relation to the anatomical snuffbox. Materials and methods: A cross-sectional study of 438 subjects (722 hands) was conducted among Jordanian students and staff of one of the major governmental medical colleges in Jordan, using an infrared illumination system. The obtained data were analysed by gender, sidedness, and handedness. Results: Four sites of formation of the cephalic vein in relation to the anatomical snuffbox were found. There was a significant relation between the sites of formation and both gender and sidedness (p < 0.0001 and p = 0.048, respectively). Conclusions: This study is the first to identify distinct sites of formation of the cephalic vein in relation to the anatomical snuffbox. Regardless of its site of formation, the cephalic vein ran through the anatomical snuffbox in 98% of the examined hands.
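
    As a hedged illustration of the statistical comparison reported above, the sketch below runs a chi-square test of independence on an invented gender-by-formation-site contingency table; the counts are placeholders, not the study's data.

```python
# Chi-square test of independence between gender and the site of
# cephalic vein formation. The counts below are hypothetical
# placeholders, not the study's data.
from scipy.stats import chi2_contingency

# Rows: gender (male, female); columns: the four formation sites.
observed = [
    [120, 60, 30, 10],
    [90, 80, 40, 12],
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```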

    Effects of hospital facilities on patient outcomes after cancer surgery: an international, prospective, observational study

    Background: Early death after cancer surgery is higher in low-income and middle-income countries (LMICs) than in high-income countries, yet the impact of facility characteristics on early postoperative outcomes is unknown. The aim of this study was to examine the association of hospital infrastructure, resource availability, and processes with early outcomes after cancer surgery worldwide. Methods: A multimethods analysis was performed as part of the GlobalSurg 3 study, a multicentre, international, prospective cohort study of patients who had surgery for breast, colorectal, or gastric cancer. The primary outcomes were 30-day mortality and 30-day major complication rates. Potentially beneficial hospital facilities were identified by variable selection against 30-day mortality. Adjusted outcomes were determined using generalised estimating equations to account for patient characteristics and country income group, with population stratification by hospital. Findings: Between April 1, 2018, and April 23, 2019, facility-level data were collected for 9685 patients across 238 hospitals in 66 countries (91 hospitals in 20 high-income countries; 57 hospitals in 19 upper-middle-income countries; and 90 hospitals in 27 low-income to lower-middle-income countries). The availability of five hospital facilities was inversely associated with mortality: ultrasound, CT scanner, critical care unit, opioid analgesia, and oncologist. After adjustment for case-mix and country income group, hospitals with three or fewer of these facilities (62 hospitals, 1294 patients) had higher mortality than those with four or five (adjusted odds ratio [OR] 3.85 [95% CI 2.58-5.75]; p<0.0001), with the excess mortality predominantly explained by a limited capacity to rescue following the development of major complications (63.0% vs 82.7%; OR 0.35 [0.23-0.53]; p<0.0001). Across LMICs, improvements in hospital facilities would prevent one to three deaths for every 100 patients undergoing surgery for cancer. Interpretation: Hospitals with higher levels of infrastructure and resources have better outcomes after cancer surgery, independent of country income. Without urgent strengthening of hospital infrastructure and resources, the reductions in cancer-associated mortality associated with improved access will not be realised.
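
    The modelling step named in the abstract, a generalised estimating equation with patients clustered by hospital, can be sketched as below; the variable names and synthetic data are assumptions for illustration, not the GlobalSurg 3 dataset.

```python
# Hedged sketch of a GEE relating 30-day mortality to hospital facility
# availability, clustering patients by hospital. All variable names and
# the synthetic data are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "hospital_id": rng.integers(0, 40, n),                 # clustering unit
    "facilities": rng.integers(0, 6, n),                   # 0-5 key facilities
    "income_group": rng.choice(["HIC", "UMIC", "LMIC"], n),
})
# Synthetic outcome: mortality risk falls as the facility count rises.
p_death = 1 / (1 + np.exp(2.0 + 0.4 * df["facilities"]))
df["death30"] = rng.binomial(1, p_death)

model = sm.GEE.from_formula(
    "death30 ~ facilities + C(income_group)",
    groups="hospital_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(np.exp(result.params))  # coefficients as odds ratios
```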

    Investigating the Adoption of Big Data Management in Healthcare in Jordan

    Software developers and data scientists use big data to discover useful knowledge and find better solutions that improve healthcare services and patient safety. Big data analytics (BDA) is attracting attention due to its role in decision-making across the healthcare field. This article therefore examines the adoption of big data analytics and management in healthcare organizations in Jordan. It also discusses the characteristics of health big data and the challenges and limitations of health big data analytics and management in Jordan. The article proposes a conceptual framework for utilizing health big data. The framework suggests merging the existing health information system with the National Health Information Exchange (HIE), which could help extract insights from massive datasets, increase data availability, and reduce wasted resources. When the framework is applied, the collected data are processed to develop knowledge and support decision-making, which helps improve healthcare quality for both the community and individuals by improving diagnosis, treatment, and other services.

    A Deep-Learning-Based Bug Priority Prediction Using RNN-LSTM Neural Networks

    Context: Predicting the priority of bug reports is an important activity in software maintenance. Bug priority refers to the order in which a bug or defect should be resolved. A huge number of bug reports are submitted every day, and manually filtering them and assigning a priority to each is a heavy process that requires time, resources, and expertise. Mistakes in manually assigned priorities often prevent developers from finishing their tasks, fixing bugs, and improving quality. Objective: Bugs are widespread, the number of bug reports submitted by users and team members keeps growing, and resources are limited, so a model is needed that detects the priority of bug reports and lets developers find the highest-priority ones. This paper presents a model that predicts and assigns a priority level (high or low) to each bug report. Method: The model considers a set of factors (indicators), such as component name, summary, assignee, and reporter, that can affect the priority level of a bug report. These factors are extracted as features from a dataset of bug reports taken from closed-source projects stored in the JIRA bug tracking system, and the features are used to train and test the framework. This work also presents a tool that helps developers assign a priority level to a bug report automatically, based on the LSTM model's prediction. Results: Our experiments applied a 5-layer deep-learning RNN-LSTM neural network and compared the results with Support Vector Machine (SVM) and K-nearest neighbors (KNN) classifiers for predicting the priority of bug reports. The performance of the proposed RNN-LSTM model was analysed on a JIRA dataset of more than 2000 bug reports. The proposed model was 90% accurate, compared with KNN (74%) and SVM (87%). On average, RNN-LSTM improves the F-measure by 3% compared with SVM and 15.2% compared with KNN. Conclusion: LSTM predicts and assigns bug priority more accurately and effectively than the other ML algorithms (KNN and SVM), and it significantly improves the average F-measure compared with the other classifiers. LSTM reported the best results on all performance measures (Accuracy = 0.908, AUC = 0.95, F-measure = 0.892).
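
    A minimal sketch of a binary bug-priority classifier along these lines, written with Keras; the layer sizes, vocabulary size, and tokenised-summary input are assumptions, not the paper's exact architecture or feature set.

```python
# Five-layer sequence model for high/low bug-priority prediction.
# Vocabulary size, sequence length, and layer widths are assumptions.
import tensorflow as tf

VOCAB_SIZE = 10_000   # assumed vocabulary for tokenised bug summaries
MAX_LEN = 100         # assumed padded sequence length

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # high vs. low priority
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.build(input_shape=(None, MAX_LEN))
model.summary()
```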

    An Evaluation Model for Social Development Environments

    Distributed software development is becoming common practice among developers. Improvements to development environments, their extensibility, and the emergence of social networking software are leading this shift. Together they turn the development process (both co-located and geographically distributed) into a practice that: 1) improves team productivity, and 2) encourages and supports social interaction among team members. These factors, along with the emergence of distributed development, the evolution of Integrated Development Environments (IDEs), and advances in social media, have drawn the attention of software development teams and made them consider how to better support the social nature of software developers and the social aspects of software development, including awareness of team members' activity and progress, their presence, and collaboration, communication, and coordination around shared artifacts. IDEs are the tools most commonly used by developers and programmers, and integrating the most-needed development tools inside the IDE makes it a Collaborative Development Environment

    Machine Learning Based Phishing Attacks Detection Using Multiple Datasets

    Nowadays, individuals and organizations are increasingly targeted by phishing attacks, so accurate phishing detection systems are required. Many phishing detection techniques have been proposed, and several phishing datasets have been collected. In this paper, three datasets are used to train and test machine learning classifiers. The datasets were archived by Phish-Tank and the UCI Machine Learning Repository. The Information Gain (IG) algorithm is used for feature reduction and selection. Six machine learning classifiers are evaluated, namely NaiveBayes, ANN, DecisionStump, KNN, J48, and RandomForest. The classifiers are trained and tested over the three datasets in two stages: the first stage uses all features included in each dataset, while the second stage uses the features selected by the IG algorithm. In the first stage, the RandomForest classifier showed the best performance over Dataset-1 and Dataset-2, while J48 showed the best performance over Dataset-3. After feature selection, RandomForest remained superior to the other five classifiers over Dataset-1 and Dataset-2, with accuracies of 98% and 93.66% respectively, while the ANN classifier showed the best performance over Dataset-3, with an accuracy of 88.92%. The classifiers' performance on Dataset-3 was affected by its small number of instances and features compared with the other two datasets.
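
    A hedged sketch of the two-stage evaluation described above, using scikit-learn: mutual information stands in for Information Gain, and synthetic data stands in for the Phish-Tank/UCI datasets.

```python
# Stage 1: train on all features. Stage 2: keep the top-k features
# ranked by mutual information (the sklearn analogue of Information
# Gain) and retrain. The synthetic dataset is a placeholder.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: all features.
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("all features:", accuracy_score(y_te, clf.predict(X_te)))

# Stage 2: top-10 features by mutual information.
sel = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)
clf2 = RandomForestClassifier(random_state=0).fit(sel.transform(X_tr), y_tr)
print("selected features:",
      accuracy_score(y_te, clf2.predict(sel.transform(X_te))))
```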

    Node Verification to Join the Cloud Environment Using Third Party Verification Server

    Currently, cloud computing faces different types of threats, whether from inside or outside its environment. These may crash the cloud, or at least leave it unable to serve client requests. In this paper, a new technique is proposed to make sure that a new node asking to join the cloud does not pose a threat to the cloud environment. The technique checks, before a node is allowed to join the cloud, whether it runs malware or software that could be used to launch an attack. In this way the cloud admits only clean nodes, eliminating the risk of the types of threats that infected nodes could cause.
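
    A purely illustrative sketch of this kind of admission check; the hash-based screening, the list contents, and the function names are assumptions, not the paper's implementation.

```python
# Before a node joins, a verification server compares the hashes of the
# node's running binaries against a known-malware list. All names and
# data here are hypothetical placeholders.
import hashlib

KNOWN_MALWARE_HASHES = {
    # placeholder entry; a real deployment would use a threat feed
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def fingerprint(binary: bytes) -> str:
    return hashlib.sha256(binary).hexdigest()

def node_is_clean(running_binaries: list[bytes]) -> bool:
    """Admit the node only if none of its binaries match known malware."""
    return all(fingerprint(b) not in KNOWN_MALWARE_HASHES
               for b in running_binaries)

if node_is_clean([b"legitimate-agent-v1"]):
    print("node admitted to the cloud")
else:
    print("node rejected")
```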

    Perception and practices of depth of anesthesia monitoring and intraoperative awareness event rate among Jordanian anesthesiologists: a cross-sectional study

    Background: Intraoperative awareness is the second most common complication of surgery, and it negatively affects patients and healthcare professionals. The limited previous studies show wide variation in the incidence of intraoperative awareness and in practices and attitudes toward depth of anesthesia (DoA) monitoring across healthcare systems and anesthesiologists. This study aimed to evaluate Jordanian anesthesiologists' practices and attitudes toward DoA monitoring and to estimate the rate of intraoperative awareness events among the participating anesthesiologists. Methods: A descriptive cross-sectional survey of Jordanian anesthesiologists working in public, private, and university hospitals was conducted using a questionnaire developed from previous studies. Practices and attitudes in using DoA monitors were evaluated, and anesthesiologists were asked for their best estimates of the number of anesthesia procedures and the frequency of intraoperative awareness events in the preceding year. Percentages and 95% confidence intervals (95% CI) were reported and compared between groups using chi-square tests. Results: A total of 107 anesthesiologists responded and completed the survey. About one-third of the respondents (34.6%; 95% CI 26.1–44.2) had never used a DoA monitor, and only 6.5% (95% CI 3.1–13.2) reported using one as a daily practice. Use of a DoA monitor was associated with experience and type of health sector. However, 81.3% (95% CI 66.5–83.5) believed that currently available DoA monitors are effective for DoA monitoring, and only 4.7% (95% CI 1.9–10.8) considered them invalid. Most respondents reported that the main purposes of using a DoA monitor were to prevent awareness (86.0%; 95% CI 77.9–91.4), guide the delivery of anesthetics (63.6%; 95% CI 53.9–72.2), and reduce recovery time (57%; 95% CI 47.4–66.1). The rate of intraoperative awareness events was estimated at 0.4% among participating anesthesiologists. Most Jordanian hospitals lacked a policy intended to prevent intraoperative awareness. Conclusions: Most anesthesiologists believed in the role of DoA monitors in preventing intraoperative awareness; however, their attitudes and knowledge are inadequate, and few use DoA monitors in routine practice. In Jordan, large efforts are needed to regulate the use of DoA monitoring and reduce the incidence of intraoperative awareness.
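
    A short sketch of how interval estimates like those above can be approximately reproduced; the count of 37 respondents is an assumption consistent with 34.6% of 107, and the Wilson method is one common choice (the paper's exact method is not stated).

```python
# 95% confidence interval for the share of respondents who had never
# used a DoA monitor. The count is an assumed value matching 34.6% of
# the 107 respondents reported in the abstract.
from statsmodels.stats.proportion import proportion_confint

count, nobs = 37, 107
low, high = proportion_confint(count, nobs, alpha=0.05, method="wilson")
print(f"{count / nobs:.1%} (95% CI {low:.1%} - {high:.1%})")
```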