21 research outputs found

    Multi-Agent based Intelligent Decision Support Systems for Cancer Classification

    Get PDF
    There is evidence that early detection of cancer can improve treatment and increase patient survival rates. This paper presents an efficient computer-aided diagnosis (CAD) system for cancer diagnosis from gene expression profiles of DNA microarray datasets. The proposed CAD system combines an Intelligent Decision Support System (IDSS) with a Multi-Agent (MA) system. The IDSS is the backbone of the entire CAD system and consists of two main phases: a feature selection/reduction phase and a classification phase. In the feature selection/reduction phase, eight diverse methods are developed; in the classification phase, three evolutionary machine learning algorithms are employed. The MA system, in turn, manages the operation of the entire CAD system. It first initializes 24 IDSS instances with the aid of mobile agents and then directs them to run concurrently on the input dataset. Finally, a master agent selects the best classification, as the final report, based on the highest classification accuracy returned by the 24 IDSSs. The proposed CAD system is implemented in Java and evaluated on three microarray datasets: Leukemia, Colon tumor, and Lung cancer. The system classifies different types of cancer accurately in a very short time, because the MA system invokes the 24 IDSSs concurrently, in a parallel-processing manner, before deciding on the best classification result.
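    The orchestration described above — eight feature-selection methods crossed with three evolutionary classifiers, yielding 24 concurrent IDSS pipelines whose most accurate report a master agent keeps — can be sketched in Python. (The paper's system is implemented in Java with mobile agents; the selector/classifier names and accuracy values here are placeholders, not the paper's actual methods.)

```python
import random
from concurrent.futures import ThreadPoolExecutor
from itertools import product

SELECTORS = [f"fs{i}" for i in range(1, 9)]    # 8 feature-selection methods (placeholder names)
CLASSIFIERS = ["ga_svm", "pso_knn", "de_mlp"]  # 3 evolutionary classifiers (placeholder names)

def run_pipeline(selector, classifier, dataset):
    # Stand-in for one IDSS: a real pipeline would reduce features with
    # `selector`, train `classifier`, and report held-out accuracy.
    rng = random.Random(f"{selector}-{classifier}-{dataset}")
    return {"selector": selector, "classifier": classifier,
            "accuracy": round(rng.uniform(0.80, 0.99), 3)}

def best_classification(dataset):
    # The "master agent": launch all 24 pipelines concurrently and keep
    # the report with the highest accuracy as the final answer.
    with ThreadPoolExecutor() as pool:
        reports = pool.map(lambda sc: run_pipeline(sc[0], sc[1], dataset),
                           product(SELECTORS, CLASSIFIERS))
        return max(reports, key=lambda r: r["accuracy"])

report = best_classification("leukemia")
```

    Because the 24 pipelines are independent, the selection step parallelizes trivially; only the final `max` over their reports is sequential.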

    Impact of opioid-free analgesia on pain severity and patient satisfaction after discharge from surgery: multispecialty, prospective cohort study in 25 countries

    Get PDF
    Background: Balancing opioid stewardship and the need for adequate analgesia following discharge after surgery is challenging. This study aimed to compare the outcomes for patients discharged with opioid versus opioid-free analgesia after common surgical procedures.
    Methods: This international, multicentre, prospective cohort study collected data from patients undergoing common acute and elective general surgical, urological, gynaecological, and orthopaedic procedures. The primary outcomes were patient-reported time in severe pain measured on a numerical analogue scale from 0 to 100% and patient-reported satisfaction with pain relief during the first week following discharge. Data were collected by in-hospital chart review and patient telephone interview 1 week after discharge.
    Results: The study recruited 4273 patients from 144 centres in 25 countries; 1311 patients (30.7%) were prescribed opioid analgesia at discharge. Patients reported being in severe pain for 10 (i.q.r. 1-30)% of the first week after discharge and rated satisfaction with analgesia as 90 (i.q.r. 80-100) of 100. After adjustment for confounders, opioid analgesia on discharge was independently associated with increased pain severity (risk ratio 1.52, 95% c.i. 1.31 to 1.76; P < 0.001) and re-presentation to healthcare providers owing to side-effects of medication (OR 2.38, 95% c.i. 1.36 to 4.17; P = 0.004), but not with satisfaction with analgesia (beta coefficient 0.92, 95% c.i. -1.52 to 3.36; P = 0.468), compared with opioid-free analgesia. Although opioid prescribing varied greatly between high-income and low- and middle-income countries, patient-reported outcomes did not.
    Conclusion: Opioid analgesia prescription on surgical discharge is associated with a higher risk of re-presentation owing to side-effects of medication and increased patient-reported pain, but not with changes in patient-reported satisfaction. Opioid-free discharge analgesia should be adopted routinely.

    Fuzzy based Tuning Congestion Window for Improving End-to-End Congestion Control Protocols

    No full text
    Transmission Control Protocol (TCP) is the transport-layer protocol most widely used on the internet today. TCP performance is strongly influenced by its congestion control algorithms, which limit the amount of transmitted traffic based on the estimated network capacity to avoid sending packets that may be dropped later. In other words, congestion control comprises the algorithms that prevent the sender from overloading the network. This paper presents a modified fuzzy-controller implementation that estimates the network capacity, which is reflected by the congestion window size. The fuzzy controller uses the round-trip time (RTT) as a network-traffic indicator, and the current window size and slow-start threshold (ssthresh) as indicators of currently occupied bandwidth. NS-2 is used as the simulation tool to compare the proposed fuzzy approach with the most widespread congestion control protocols, including TCP Tahoe, Reno, New Reno, and SACK. Simulation results show that the proposed mechanism improves performance in terms of throughput, packet drop, packet delay, and connection fairness.
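    As a rough illustration of the idea — not the paper's actual rule base or membership functions, which are not given in this abstract — a minimal fuzzy controller might map RTT inflation and window occupancy (cwnd relative to ssthresh) to a multiplicative congestion-window update:

```python
def tri(x, a, b, c):
    # Triangular membership function rising on [a, b], falling on [b, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_cwnd_update(rtt, base_rtt, cwnd, ssthresh):
    # Fuzzify the inputs: congestion level from RTT inflation over the
    # baseline, occupancy from how close cwnd is to ssthresh.
    congestion = min(max((rtt - base_rtt) / base_rtt, 0.0), 1.0)
    occupancy = min(cwnd / ssthresh, 1.0)

    low  = tri(congestion, -0.5, 0.0, 0.5)
    med  = tri(congestion,  0.0, 0.5, 1.0)
    high = tri(congestion,  0.5, 1.0, 1.5)

    # Hypothetical rule base: low congestion -> grow aggressively,
    # medium -> grow cautiously, high -> shrink; occupancy damps growth.
    grow_fast = low * (1.0 - occupancy)
    grow_slow = max(low * occupancy, med)
    shrink    = high

    # Weighted-average defuzzification of the window multiplier.
    weights = grow_fast + grow_slow + shrink
    factor = ((grow_fast * 1.5 + grow_slow * 1.1 + shrink * 0.5)
              / weights) if weights else 1.0
    return cwnd * factor
```

    With an unloaded path (RTT at baseline) the window grows; once the measured RTT is well above baseline, the window is cut, which is the qualitative behaviour the paper's controller is designed to produce.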

    Assignment of tasks on parallel and distributed computer systems

    No full text
    PARIS-EST Marne-la-Vallée - BU (774682101) / Sudoc, France

    A new online scheduling approach for enhancing QOS in cloud

    Get PDF
    Quality of Service (QoS) is one of the most important requirements of cloud users, so cloud providers continuously try to enhance cloud management tools to guarantee the required QoS and provide users with high-quality services. One of the most important management tools, which plays a vital role in enhancing QoS, is scheduling: the process of assigning users’ tasks to available Virtual Machines (VMs). This paper presents a new task scheduling approach, called Online Potential Finish Time (OPFT), to enhance the cloud data-center broker, which is responsible for the scheduling process, and to address the QoS issue. The main idea of the new approach is inspired by the flow of vehicles on highways: whenever the width of the road increases, the number of passing vehicles increases. We apply this idea to assign users’ tasks to the available VMs. The number of tasks allocated to a VM is proportional to the processing power of that VM: whenever the VM capacity increases, the number of tasks assigned to it increases. The proposed OPFT approach is evaluated using the CloudSim simulator with real tasks and a real cost model. The experimental results indicate that the proposed OPFT algorithm is more efficient than the FCFS, RR, Min-Min, and MCT algorithms in terms of schedule length, cost, balance degree, response time, and resource utilization.
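    The capacity-proportional idea can be sketched as a simple greedy dispatcher: each VM tracks its potential finish time (queued work divided by speed), and every incoming task goes to the VM that would finish it earliest, so faster VMs naturally absorb proportionally more tasks. (This is a simplified interpretation of the heuristic; the task lengths and VM speeds below are hypothetical, and the actual OPFT algorithm is defined in the paper itself.)

```python
def opft_style_assign(tasks, vm_mips):
    # tasks: task lengths in instructions; vm_mips: VM speeds.
    finish = [0.0] * len(vm_mips)   # potential finish time per VM
    placement = []                  # chosen VM index for each task
    for length in tasks:
        # Estimated finish time of this task on each VM.
        eft = [finish[i] + length / vm_mips[i] for i in range(len(vm_mips))]
        vm = eft.index(min(eft))    # earliest potential finish wins
        finish[vm] = eft[vm]
        placement.append(vm)
    return placement, finish

# A VM with twice the MIPS ends up with roughly twice the tasks.
placement, finish = opft_style_assign([100.0] * 30, [1000.0, 2000.0])
```

    Because assignment tracks each VM's accumulated load, the final potential finish times stay close together, which is the balance-degree behaviour the evaluation measures.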

    Intelligence Is beyond Learning: A Context-Aware Artificial Intelligent System for Video Understanding

    No full text
    Understanding video files is a challenging task. While current video understanding techniques rely on deep learning, the obtained results lack genuinely trustworthy meaning. Deep learning recognizes patterns from big data, leading to deep feature abstraction, not deep understanding. Deep learning tries to understand a multimedia production by analyzing its content, but we cannot understand the semantics of a multimedia file by analyzing its content alone. Events occurring in a scene earn their meanings from the context containing them: a screaming kid could be scared of a threat, surprised by a lovely gift, or just playing in the backyard. Artificial intelligence is a heterogeneous process that goes beyond learning. In this article, we discuss the heterogeneity of AI as a process that includes innate knowledge, approximations, and context awareness. We present a context-aware video understanding technique that makes the machine intelligent enough to understand the message behind the video stream. The main purpose is to understand the video stream by extracting meaningful concepts, emotions, temporal data, and spatial data from the video context. The diffusion of heterogeneous data patterns from the video context leads to accurate decision-making about the video message and outperforms systems that rely on deep learning alone. Objective and subjective comparisons confirm the accuracy of the concepts extracted by the proposed context-aware technique relative to current deep learning video understanding techniques. Both systems are compared in terms of retrieval time, computing time, data size consumption, and complexity. The comparisons show significantly more efficient resource usage by the proposed context-aware system, which makes it a suitable solution for real-time scenarios. Moreover, we discuss the pros and cons of deep learning architectures.