6 research outputs found

    Sensing and Artificial Intelligent Maternal-Infant Health Care Systems: A Review

    Information and communication technology (ICT) now allows health institutions to reach disadvantaged groups in rural areas using sensing and artificial intelligence (AI) technologies. These technologies are especially important for maternal and infant health, which is vital for a healthy society. Over the last few years, researchers have investigated sensing and AI-based healthcare systems for maternal and infant health: sensors are used to measure health parameters, and machine learning techniques are applied to predict patients' health conditions and assist medical practitioners. Because these systems deal with large amounts of data, significant development is also noted in the underlying computing platforms, and the literature reports the potential of ICT-enabled systems to improve maternal and infant health. This article reviews wearable sensors and AI algorithms in existing systems designed to predict risk factors during and after pregnancy for both mothers and infants. It analyzes each approach, in chronological order, in terms of its features, outcomes, and novel aspects, discusses the datasets used, and outlines open challenges and future research directions.
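
    To make the surveyed pipeline concrete, the following is a minimal, hypothetical sketch of the kind of system the review covers: wearable-sensor readings feeding a machine learning classifier that flags high-risk cases. The feature set, synthetic data, and model choice are illustrative assumptions and do not come from any specific system reviewed in the article.

```python
# Minimal sketch of a sensor-based maternal risk classifier.
# Feature names and data are hypothetical; real systems reviewed in the
# article use clinically validated sensors and datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical wearable-sensor readings per sample: heart rate, SpO2,
# systolic blood pressure, body temperature, fetal movements per hour.
X = rng.normal(loc=[85, 97, 120, 36.8, 10],
               scale=[15, 2, 15, 0.5, 4],
               size=(500, 5))
# Hypothetical risk labels (1 = high risk): high systolic BP or low SpO2.
y = ((X[:, 2] > 135) | (X[:, 1] < 94)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```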

    Workflow optimization in distributed computing environment for stream-based data processing model / Saima Gulzar Ahmad

    With the advancement of science and technology, numerous complex scientific applications can be executed in heterogeneous computing environments; the bottleneck, however, is efficient scheduling. Such complex applications can be expressed as workflows, and geographically distributed heterogeneous resources can execute them in parallel, speeding up workflow execution. In data-intensive workflows, large volumes of data move across the execution nodes, which causes high communication overhead. Many techniques have been used to avoid such overheads; this thesis adopts a stream-based data processing model in which data is processed as a continuous stream of data items. Data-intensive workflow optimization is an active research area because numerous applications produce huge amounts of data that grow day by day. This thesis proposes data-intensive workflow optimization algorithms. The first algorithm consists of two phases: (a) workflow partitioning and (b) partition mapping. Partitions are formed so that minimal data moves between them; because each partition is mapped to a single execution node, heavy data processing happens locally on that node, which avoids high communication costs. In the mapping phase, each partition is assigned to the execution node that offers the minimum execution time, and the workflow is then executed. The second algorithm is a variation of the first in which data parallelism is introduced within each partition: the most compute-intensive task in each partition is identified and data parallelism is applied to it, reducing its execution time. Simulation results show that the proposed algorithms outperform state-of-the-art algorithms for a variety of workflows. The performance evaluation uses both synthesized workflows and workflows derived from real-world applications, including Montage and CyberShake; the synthesized workflows were generated with different sizes, shapes, and densities. The simulation results show a 60% reduction in latency and a 47% improvement in throughput; when data parallelism is introduced, performance improves further by 12% in latency and 17% in throughput compared to the PDWA algorithm. In the real-time stream processing framework, experiments were performed using STORM with a data-intensive use-case workflow (EURExpressII); they show that PDWA outperforms the alternatives in terms of workflow execution time across different input data sizes.
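
    The two-phase idea described above, partitioning so that heavy data transfers stay inside a partition and then mapping each partition to the node with the lowest estimated execution time, can be sketched as follows. This is an illustrative simplification, not the thesis's PDWA algorithm: the greedy merge threshold, the cost model, and the example workflow are assumptions.

```python
# Illustrative sketch of the two-phase idea: (a) partition a workflow DAG so
# that the heaviest data transfers stay inside a partition, then (b) map each
# partition to the execution node with the lowest estimated execution time.
# The merge threshold and cost model are simplifications (e.g. node load is
# ignored), not the thesis's actual PDWA algorithm.

# Workflow DAG: edges (u, v, data_size) and per-task compute cost.
edges = [("t1", "t2", 50), ("t1", "t3", 5), ("t2", "t4", 40), ("t3", "t4", 10)]
compute = {"t1": 10, "t2": 30, "t3": 8, "t4": 20}
# Node speed factors: execution time of a task = compute cost / speed.
nodes = {"n1": 1.0, "n2": 2.0}

# Phase (a): start with one partition per task, then merge the endpoints of
# the largest edges first, so heavy transfers become partition-internal.
partition_of = {t: {t} for t in compute}
for u, v, size in sorted(edges, key=lambda e: -e[2]):
    if partition_of[u] is not partition_of[v] and size >= 20:  # assumed threshold
        merged = partition_of[u] | partition_of[v]
        for t in merged:
            partition_of[t] = merged
partitions = {frozenset(p) for p in partition_of.values()}

# Phase (b): map each partition to the node that executes it fastest.
mapping = {}
for part in partitions:
    cost = {n: sum(compute[t] for t in part) / speed for n, speed in nodes.items()}
    mapping[part] = min(cost, key=cost.get)

for part, node in mapping.items():
    print(sorted(part), "->", node)
```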

    Predictive modelling and identification of key risk factors for stroke using machine learning

    Strokes are a leading global cause of mortality, underscoring the need for early detection and prevention strategies. However, addressing hidden risk factors and achieving accurate prediction becomes particularly challenging in the presence of imbalanced and missing data. This study applies three imputation techniques to deal with missing data and employs the Synthetic Minority Oversampling Technique (SMOTE) to tackle data imbalance. It starts with a baseline model and subsequently employs an extensive range of advanced models, thoroughly evaluating their performance using k-fold cross-validation on various imbalanced and balanced datasets. The findings reveal that age, Body Mass Index (BMI), average glucose level, heart disease, hypertension, and marital status are the most influential features for predicting strokes. Furthermore, a Dense Stacking Ensemble (DSE) model is built on the fine-tuned advanced models, with the best-performing model as the meta-classifier. The DSE model demonstrated over 96% accuracy across diverse datasets, with an AUC score of 83.94% on the imbalanced imputed dataset and 98.92% on the balanced one. This research underscores the strong performance of the DSE model compared with previous research on the same dataset and highlights its potential for early stroke detection to improve patient outcomes.
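
    A minimal sketch of the described pipeline, imputation followed by SMOTE balancing and a stacking ensemble, is shown below using common scikit-learn and imbalanced-learn components. The synthetic data, base learners, and the logistic-regression meta-classifier are stand-in assumptions, not the paper's exact DSE configuration.

```python
# Minimal sketch of the described pipeline: impute missing values, balance the
# classes with SMOTE, then train a stacking ensemble. Column choices, base
# learners, and the meta-classifier are assumptions, not the paper's exact
# DSE configuration.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix (e.g. age, BMI, glucose, ...) with missing values.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
X[rng.random(X.shape) < 0.05] = np.nan          # inject 5% missingness
y = (rng.random(1000) < 0.1).astype(int)        # imbalanced labels (~10% positives)

X_imputed = SimpleImputer(strategy="median").fit_transform(X)  # one of several imputers
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_imputed, y)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),  # stand-in meta-classifier
    cv=5,
)
print(cross_val_score(stack, X_bal, y_bal, cv=5, scoring="roc_auc").mean())
```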

    Cost optimization in cloud environment based on task deadline

    The popularity of cloud and fog services has increased the number of users exponentially. Cloud/fog infrastructure and services are crucial, especially for commercial users from diverse areas, and the variety of service requests with different deadlines makes the task of a service broker challenging. Fog and cloud users always look for a suitable compromise between cost and quality of service in terms of response time; cost optimization is therefore vital for cloud/fog service providers to capture the market. This paper proposes an algorithm, Cost Optimization in the cloud/fog environment based on Task Deadline (COTD), that optimizes cost without compromising response time. The algorithm treats the task deadline as a constraint and selects an appropriate data center for task processing; its low complexity makes it suitable for runtime decision making. The proposed algorithm is evaluated using the well-known simulation tool Cloud Analyst. Our comprehensive testbed simulations show that COTD outperforms the existing schemes, Service Proximity Based Routing and Performance-Optimized Routing, reducing cost by 35% on average while maintaining response time.
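
    The core selection rule can be sketched as follows: among the data centers whose estimated response time meets the task deadline, pick the cheapest, falling back to the fastest when none qualifies. The cost and latency figures and the fallback rule are illustrative assumptions, not the published COTD specification.

```python
# Illustrative sketch of deadline-aware data-center selection: among the
# centers whose estimated response time meets the task deadline, pick the
# cheapest; if none can meet it, fall back to the fastest. The figures and
# fallback rule are assumptions, not the published COTD specification.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    cost_per_task: float      # monetary cost (arbitrary units)
    est_response_time: float  # estimated response time in ms

def select_data_center(centers: list[DataCenter], deadline_ms: float) -> DataCenter:
    feasible = [c for c in centers if c.est_response_time <= deadline_ms]
    if feasible:
        return min(feasible, key=lambda c: c.cost_per_task)   # cheapest feasible center
    return min(centers, key=lambda c: c.est_response_time)    # best-effort fallback

centers = [
    DataCenter("fog-local", cost_per_task=0.8, est_response_time=20),
    DataCenter("cloud-region-a", cost_per_task=0.3, est_response_time=120),
    DataCenter("cloud-region-b", cost_per_task=0.5, est_response_time=60),
]
print(select_data_center(centers, deadline_ms=80).name)   # -> cloud-region-b
```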

    Energy-makespan optimization of workflow scheduling in fog-cloud computing

    Get PDF
    The rapid evolution of smart services and Internet of Things devices accessing cloud data centers can lead to network congestion and increased latency. Fog computing, focusing on ubiquitously connected heterogeneous devices, addresses the latency and privacy requirements of workflows executing at the network edge. However, allocating resources in this paradigm is challenging due to complex and strict Quality of Service constraints. Moreover, simultaneously optimizing conflicting objectives, e.g., energy consumption and workflow makespan, increases the complexity of the scheduling process. We investigate workflow scheduling in fog-cloud environments to provide an energy-efficient task schedule within acceptable application completion times. We introduce a scheduling algorithm, Energy Makespan Multi-Objective Optimization, that works in two phases. First, it models the problem as a multi-objective optimization problem and computes a tradeoff between the conflicting objectives while allocating fog and cloud resources, scheduling latency-sensitive tasks with lower computational requirements on fog resources and computationally complex tasks with less strict latency requirements on cloud resources. In the second phase, it adapts the Deadline-Aware stepwise Frequency Scaling approach to further reduce energy consumption by utilizing unused time slots between two already scheduled tasks on a single node. Our evaluation using synthesized and real-world applications shows that our approach reduces energy consumption by up to 50% compared to existing approaches, with minimal impact on completion times.
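
    A toy sketch of the placement intuition and the slack-filling step is given below: latency-sensitive, lightweight tasks go to fog nodes, compute-heavy, latency-tolerant tasks go to the cloud, and a task with idle slack before its successor is stretched by lowering its frequency. The thresholds, task attributes, and energy model are assumptions, not the paper's EM-MOO formulation.

```python
# Toy sketch of the placement intuition: latency-sensitive, lightweight tasks
# go to fog nodes; compute-heavy, latency-tolerant tasks go to the cloud. A
# slack-filling step then lowers the frequency of a task that finishes well
# before its successor starts. Thresholds and the energy model are
# assumptions, not the paper's EM-MOO formulation.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    compute_mi: float        # workload in million instructions
    latency_sensitive: bool

def place(task: Task, compute_threshold_mi: float = 500.0) -> str:
    """Return 'fog' or 'cloud' for a single task."""
    if task.latency_sensitive and task.compute_mi <= compute_threshold_mi:
        return "fog"
    return "cloud"

def scaled_frequency(exec_time: float, slack: float, f_max: float = 1.0) -> float:
    """Stretch a task into its idle slack by lowering frequency
    (toy DVFS model: execution time scales inversely with frequency)."""
    return max(f_max * exec_time / (exec_time + slack), 0.1)

tasks = [Task("sensor-filter", 120, True),
         Task("feature-extract", 900, False),
         Task("alert", 50, True)]
for t in tasks:
    print(t.name, "->", place(t))
print("scaled frequency:", round(scaled_frequency(exec_time=4.0, slack=2.0), 2))  # ~0.67
```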