3 research outputs found

    Relevance-Based Compression of Cataract Surgery Videos

    In the last decade, the need for storing videos from cataract surgery has increased significantly. Hospitals continue to improve their imaging and recording devices (e.g., microscopes and cameras used in microscopic surgery, such as ophthalmology) to enhance their post-surgical processing efficiency. The video recordings enable many use cases after the actual surgery, for example teaching, documentation, and forensics. However, videos recorded from operations are typically stored in the internal archive without any domain-specific compression, leading to massive storage space consumption. In this work, we propose a relevance-based compression scheme for videos from cataract surgery, which is based on the content specifics of particular cataract surgery phases. We evaluate our compression scheme with three state-of-the-art video codecs, namely H.264/AVC, H.265/HEVC, and AV1, and ask medical experts to assess the visual quality of the encoded videos. Our results show significant storage savings, in particular up to 95.94% when using H.264/AVC, up to 98.71% when using H.265/HEVC, and up to 98.82% when using AV1. Comment: 11 pages, 5 figures, 3 tables
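    A minimal sketch of the general idea behind phase-aware re-encoding, driving ffmpeg from Python with the H.264/AVC encoder (libx264). The phase boundaries, CRF values, and file names below are illustrative assumptions, not the scheme or parameters used in the paper.

```python
# Sketch: re-encode each surgical phase with a quality level matched to its relevance.
# Phase boundaries and CRF values are made up for illustration only.
import subprocess

# (start_sec, duration_sec, crf): lower CRF keeps more quality for relevant phases.
phases = [
    (0, 120, 35),    # idle/preparation phase: aggressive compression
    (120, 180, 20),  # relevant phase (e.g. lens implantation): high quality
    (300, 120, 35),  # closing phase: aggressive compression
]

segments = []
for i, (start, duration, crf) in enumerate(phases):
    out = f"segment_{i}.mp4"
    subprocess.run([
        "ffmpeg", "-y",
        "-ss", str(start), "-t", str(duration),
        "-i", "cataract_surgery.mp4",
        "-c:v", "libx264", "-crf", str(crf),
        "-an", out,
    ], check=True)
    segments.append(out)

# Stitch the re-encoded segments back together with the concat demuxer.
with open("segments.txt", "w") as f:
    f.writelines(f"file '{name}'\n" for name in segments)

subprocess.run([
    "ffmpeg", "-y", "-f", "concat", "-safe", "0",
    "-i", "segments.txt", "-c", "copy", "relevance_compressed.mp4",
], check=True)
```

    The same idea carries over to the other codecs evaluated in the paper by swapping the encoder (e.g. libx265 for H.265/HEVC or an AV1 encoder) and its quality parameter.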

    Mutual information based feature subset selection in multivariate time series classification

    This paper deals with supervised classification of multivariate time series. In particular, the goal is to propose a filter method to select a subset of time series. Consequently, we adopt the framework proposed by Brown et al. [10]. The key point in this framework is the computation of the mutual information between the features, which allows us to measure the relevance of each feature subset. In our case, where the features are time series, we use an adaptation of existing nonparametric mutual information estimators based on k-nearest neighbors. Specifically, to bring these methods to the time series scenario, we rely on the dynamic time warping dissimilarity. Our experimental results show that our method is able to strongly reduce the number of time series while keeping or increasing the classification accuracy. Grant agreement nos. KK-2019/00095, IT1244-19, TIN2016-78365-R, PID2019-104966GB-I0
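    A minimal sketch of a k-nearest-neighbour mutual information estimate between a single time-series feature and a discrete class label, with dynamic time warping as the dissimilarity. The estimator variant, constants, and function names here are illustrative assumptions rather than the exact estimators adapted in the paper.

```python
# Sketch: kNN-based MI estimate I(X; Y) for a time-series feature X and class label Y,
# replacing the usual Euclidean metric with DTW. Constants/names are illustrative only.
import numpy as np
from scipy.special import digamma

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def mi_timeseries_label(series, labels, k=3):
    """Mixed discrete/continuous kNN MI estimate using a DTW dissimilarity matrix."""
    n = len(series)
    dist = np.array([[dtw(series[i], series[j]) for j in range(n)] for i in range(n)])
    mi = 0.0
    for i in range(n):
        same = [j for j in range(n) if labels[j] == labels[i] and j != i]
        # radius = DTW distance to the k-th nearest neighbour within the same class
        radius = np.sort(dist[i, same])[k - 1]
        m_i = np.sum(dist[i] <= radius) - 1  # neighbours within radius, any class, minus self
        mi += digamma(n) - digamma(len(same) + 1) + digamma(k) - digamma(m_i)
    return max(mi / n, 0.0)

# Toy usage: two classes of short, noisy univariate series.
rng = np.random.default_rng(0)
series = (
    [np.sin(np.linspace(0, 6, 40)) + 0.1 * rng.normal(size=40) for _ in range(10)]
    + [np.cos(np.linspace(0, 6, 40)) + 0.1 * rng.normal(size=40) for _ in range(10)]
)
labels = np.array([0] * 10 + [1] * 10)
print(mi_timeseries_label(series, labels, k=3))
```

    In a filter selection loop, such per-feature (or per-subset) MI scores would be used to rank the time series and keep only the most relevant ones.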

    On the application of artificial intelligence and human computation to the automation of agile software task effort estimation

    Software effort estimation (SEE), as part of the wider project planning and product road-mapping process, occurs throughout a software development life cycle. A variety of effort estimation methods have been proposed in the literature, including algorithmic methods, expert-based methods, and, more recently, methods based on techniques drawn from machine learning and natural language processing. In general, the consensus in the literature is that expert-based methods such as Planning Poker are more reliable than automated effort estimation. However, these methods are labour intensive and difficult to scale to large projects. To address this limitation, this thesis investigates the feasibility of using human computation techniques to coordinate crowds of inexpert workers to produce expert-comparable effort estimates for a given software development task. The research followed an empirical methodology and used four different methods: literature review, replication, a series of laboratory experiments, and ethnography. The literature review uncovered a lack of suitable datasets that include the attributes of descriptive text (corpus), actual cost, and expert estimates for a given software development task, so a new dataset was developed to meet these requirements. Next, effort estimation based on recent natural language processing advances was evaluated and compared with expert estimates. The results suggest that there was no significant improvement and that the automated approach was still outperformed by expert estimates. Therefore, the feasibility of scaling the Planning Poker estimation method by using human computation in a micro-task crowdsourcing environment was explored. A series of pilot experiments was conducted to find a suitable design for adapting Planning Poker to a crowd environment, resulting in a new estimation method called Crowd Planning Poker (CPP). The pilot experiments revealed that a significant proportion of the crowd submitted poor-quality assignments, so an approach to actively managing the quality of SEE work was proposed and evaluated before being integrated into the CPP method. A substantial overall evaluation was then conducted. The results demonstrated that crowd workers were able to discriminate between tasks of varying complexity and produce estimates comparable with those of experts, at substantially reduced cost compared with small teams of domain experts. The experiments further showed that crowd workers provide useful insights into how a task might be resolved. Therefore, as a final step, fine-grained details of crowd workers' behaviour, including actions taken and artifacts reviewed, were used in an ethnographic study to understand how effort estimation takes place in a crowd. Four persona archetypes were developed to describe crowd behaviours, and the results of the behaviour analysis were confirmed by surveying the crowd workers.
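    A minimal sketch of how crowd-sourced Planning Poker votes for a single task might be quality-filtered and aggregated. The card deck, the time-based quality filter, and the field names are illustrative assumptions, not the CPP method as specified in the thesis.

```python
# Sketch: filter low-effort crowd submissions, snap estimates to a Planning Poker deck,
# and return the median card. Deck values and thresholds are illustrative only.
DECK = [0.5, 1, 2, 3, 5, 8, 13, 20, 40]  # story-point cards

def snap_to_deck(value):
    """Snap a raw estimate to the nearest Planning Poker card."""
    return min(DECK, key=lambda card: abs(card - value))

def aggregate_estimates(submissions, min_seconds=60):
    """Drop rushed submissions, snap the rest to the deck, return the (upper) median card."""
    kept = [s["estimate"] for s in submissions if s["time_spent"] >= min_seconds]
    if not kept:
        return None
    cards = sorted(snap_to_deck(v) for v in kept)
    return cards[len(cards) // 2]

# Toy usage: five crowd workers, one of whom rushed the assignment.
submissions = [
    {"estimate": 3, "time_spent": 240},
    {"estimate": 5, "time_spent": 300},
    {"estimate": 40, "time_spent": 15},  # filtered out by the quality check
    {"estimate": 5, "time_spent": 180},
    {"estimate": 8, "time_spent": 210},
]
print(aggregate_estimates(submissions))  # -> 5
```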