14 research outputs found

    Cross-comparison of climate change adaptation strategies across large river basins in Europe, Africa and Asia

    A cross-comparison of climate change adaptation strategies across regions was performed, considering six large river basins as case study areas. Three of the basins, namely the Elbe, Guadiana, and Rhine, are located in Europe; the Nile Equatorial Lakes (NEL) region and the Orange basin are in Africa; and the Amudarya basin is in Central Asia. The evaluation was based mainly on the opinions of policy makers and water management experts in the river basins. The adaptation strategies were evaluated with respect to the following issues: expected climate change, expected climate change impacts, drivers for the development of an adaptation strategy, barriers to adaptation, the state of implementation of a range of water management measures, and the status of adaptation strategy implementation. The analysis of responses and the cross-comparison were performed by rating the responses where possible. According to the expert opinions, there is an understanding in all six regions that climate change is happening. Different climate change impacts are expected in the basins, but decreasing annual water availability and increasing frequency and intensity of droughts (and, to a lesser extent, floods) are expected in all of them. According to the responses, the two most important drivers for the development of an adaptation strategy are climate-related disasters and national and international policies. The most important barriers to adaptation identified by respondents were spatial and temporal uncertainties in climate projections, lack of adequate financial resources, and lack of horizontal cooperation. The evaluated water resources management measures are at a relatively high level in the Elbe and Rhine basins, followed by the Orange and Guadiana. The level is lower in the Amudarya basin, and lowest in the NEL region, where many measures are only at the planning stage. Regarding the level of adaptation strategy implementation, it can be concluded that adaptation to climate change has started in all basins but is progressing rather slowly.
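
    The rating-based cross-comparison described above can be illustrated with a minimal sketch. The basin names come from the abstract, but the 1-5 rating scale, the individual scores, and the aggregation by mean are illustrative assumptions, not the study's actual data or method.

```python
# Illustrative sketch only: the study rated expert responses per basin,
# but the scale, scores, and aggregation below are assumptions.
from statistics import mean

# Hypothetical expert ratings of the implementation level of one water
# management measure (1 = planning stage only, 5 = fully implemented).
ratings = {
    "Elbe":     [4, 5, 4],
    "Rhine":    [5, 4, 4],
    "Orange":   [3, 4, 3],
    "Guadiana": [3, 3, 4],
    "Amudarya": [2, 3, 2],
    "NEL":      [1, 2, 1],
}

# Rank basins by mean rating to support the cross-comparison.
for basin, scores in sorted(ratings.items(), key=lambda kv: -mean(kv[1])):
    print(f"{basin:9s} mean rating: {mean(scores):.2f}")
```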

    Target group characteristics: are perceptual modality preferences relevant for instructional materials design?

    In instructional development, one is often advised to take individual perceptual preferences into account when designing audiovisual materials. The perceptual and learning style research literature, however, offers no clear evidence for modality preferences for either video or audio. The same holds for other interlocking symbolic modalities: verbal versus pictorial, and reading versus listening. Here, too, no such thing as an individual modality preference has been clearly demonstrated. Relatively strong support exists only for the visualizer/nonvisualizer dichotomy. In the research literature these various dichotomies are not always clearly distinguished. Audiovisual design must deal with learner characteristics such as perceptual preference in the same way it deals with other characteristics such as reading proficiency and prerequisite visual literacy: by building upon optimal prerequisite information and intuitive knowledge about the target group. There is not yet a legitimate theoretical basis for laborious typological differentiation within the target group.

    Report on the project 'Auditieve Courseware voor Blinden' (Auditory Courseware for the Blind)

    Interactive audio for computer assisted learning

    Starting from a short review of developments in computer assisted learning (CAL) and instructional communication, the opportunities for applying audio within CAL courseware are explored. The key concept of interactivity is brought into the discussion of interactive, possibly auditory, systems. A description is then given of the hardware and software of an audio-CAL system, drawing on an overview of modern audio technology. The conclusion indicates the research that could follow.

    Investigating the Impact of Image Quality on Endoscopic AI Model Performance

    Virtually all endoscopic AI models are developed with clean, high-quality imagery from expert centers; clinical data quality, however, is much more heterogeneous. Endoscopic image quality can be degraded by, for example, poor lighting, motion blur, and image compression. This disparity between training and validation data on the one hand and real-world clinical practice on the other can have a substantial impact on the performance of deep neural networks (DNNs), potentially resulting in clinically unreliable models. To address this issue and develop more reliable models for automated cancer detection, this study focuses on identifying the limitations of current DNNs. Specifically, we evaluate the performance of these models under clinically relevant and realistic image corruptions, as well as on a manually selected dataset that includes images with lower subjective quality. Our findings highlight the importance of understanding the impact of a decrease in image quality and the need to include robustness evaluation for DNNs used in endoscopy.
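
    The abstract does not include code, but the kind of robustness evaluation it describes can be sketched as follows. The corruption choices (Gaussian blur as a simplified stand-in for motion blur, a JPEG encode/decode round trip for compression), the `model` callable, and the dataset layout are all assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch of a corruption-robustness check: apply clinically
# plausible degradations to test images and compare model accuracy on
# clean vs. corrupted inputs. Model interface and data are hypothetical.
import io
from PIL import Image, ImageFilter

def gaussian_blur(img: Image.Image, radius: float = 2.0) -> Image.Image:
    """Simulate blur (motion/defocus, simplified as Gaussian blur)."""
    return img.filter(ImageFilter.GaussianBlur(radius))

def jpeg_compress(img: Image.Image, quality: int = 20) -> Image.Image:
    """Simulate lossy storage/transmission via a JPEG round trip."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

def accuracy(model, samples) -> float:
    """samples: list of (PIL image, label); model: image -> label."""
    hits = sum(model(img) == label for img, label in samples)
    return hits / len(samples)

# Usage (hypothetical `model` and `labelled_paths`):
# clean = [(Image.open(p), lbl) for p, lbl in labelled_paths]
# for name, corrupt in [("blur", gaussian_blur), ("jpeg", jpeg_compress)]:
#     corrupted = [(corrupt(img), lbl) for img, lbl in clean]
#     print(name, accuracy(model, clean) - accuracy(model, corrupted))
```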

    Estimating Surgical Urethral Length on Intraoperative Robot-Assisted Prostatectomy Images using Artificial Intelligence Anatomy Recognition

    Objective: To construct a convolutional neural network (CNN) model that can recognize and delineate anatomic structures on intraoperative video frames of robot-assisted radical prostatectomy (RARP) and to use these annotations to predict the surgical urethral length (SUL). Background: Urethral dissection during RARP affects patient urinary incontinence (UI) outcomes and requires extensive training. Large differences exist between the incontinence outcomes of different urologists and hospitals, and surgeon experience and education are critical to optimal outcomes; therefore, new approaches are warranted. SUL is associated with UI. Artificial intelligence (AI) surgical image segmentation using a CNN could automate SUL estimation and contribute toward future AI-assisted RARP and surgeon guidance. Methods: Eighty-eight intraoperative RARP videos recorded between June 2009 and September 2014 were collected from a single center. Two hundred sixty-four frames were annotated for the prostate, urethra, ligated plexus, and catheter. Thirty annotated images from different RARP videos were used as a test dataset. The Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (Hd95) were used to determine model performance, and SUL was calculated using the catheter as a reference. Results: The DSCs of the best-performing model were 0.735 and 0.755 for the catheter and urethra classes, respectively, with Hd95 values of 29.27 and 72.62, respectively. The model performed moderately on the ligated plexus and prostate. The predicted SUL showed a mean difference of 0.64 to 1.86 mm versus human annotators, but with substantial deviation (standard deviation 3.28-3.56). Conclusion: This study shows that an AI image segmentation model can predict vital structures during RARP urethral dissection with moderate to fair accuracy. The SUL estimates derived from it showed large deviations and outliers compared with human annotators, but with a small mean difference (<2 mm). This is a promising development for further research on AI-assisted RARP.
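
    The two reported metrics, DSC and Hd95, have standard definitions that can be sketched generically. The NumPy/SciPy formulation below is an illustration, not the authors' implementation; it measures distances over full pixel sets, whereas medical-imaging implementations often restrict Hd95 to boundary (surface) pixels.

```python
# Generic sketch of the two segmentation metrics reported in the study:
# Dice similarity coefficient (DSC) and 95th-percentile Hausdorff
# distance (Hd95) between predicted and ground-truth binary masks.
import numpy as np
from scipy.spatial.distance import cdist

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for 2D boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    """95th percentile of symmetric point-to-set distances (in pixels)."""
    a = np.argwhere(pred)   # (row, col) coordinates of mask pixels
    b = np.argwhere(gt)
    d = cdist(a, b)         # pairwise Euclidean distances
    directed = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return float(np.percentile(directed, 95))

# Example with toy masks:
pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), bool); gt[22:42, 18:38] = True
print(f"DSC = {dice(pred, gt):.3f}, Hd95 = {hd95(pred, gt):.2f} px")
```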