
    Development and Evaluation of the Oracle Intelligent Tutoring System (OITS)

    This paper presents the design and development of an intelligent tutoring system for teaching Oracle. The Oracle Intelligent Tutoring System (OITS) examines a new methodology for supporting students in Oracle programming. The system presents the topic of Introduction to Oracle with automatically generated problems for the students to solve, and adapts dynamically at run time to each student's individual progress. An initial evaluation study investigated the effect of using the intelligent tutoring system on student performance.

    Comparing temporal behavior of fast objective video quality measures on a large-scale database

    In many application scenarios, video quality assessment is required to be fast and reasonably accurate. The characterisation of objective algorithms by subjective assessment is well established but limited by the small number of test samples. Verification using large-scale, objectively annotated databases provides a complementary solution. In this contribution, three simple but fast measures are compared regarding their agreement on a large-scale database. In contrast to subjective experiments, not only sequence-wise but also framewise agreement can be analysed. Insight is gained into the behaviour of the measures with respect to 5952 different coding configurations of High Efficiency Video Coding (HEVC). Consistency is analysed both within and across video sequences. The results show that the occurrence of discrepancies depends mostly on the configured coding structure and the source content. The detailed observations raise questions about the combined use of several video quality measures for encoder optimization.
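One illustrative notion of framewise agreement (this is a sketch under my own definition, not necessarily the paper's exact criterion): given per-frame scores of two coded versions under two quality measures, count the fraction of frames on which both measures prefer the same version. All scores below are toy values.

```python
# Hypothetical sketch: framewise agreement between two quality measures.
# m1_a/m1_b: per-frame scores of coded versions A and B under measure 1;
# m2_a/m2_b: the same frames under measure 2 (higher = better for both).

def framewise_agreement(m1_a, m1_b, m2_a, m2_b):
    """Fraction of frames on which measure 1 and measure 2 agree about
    which version (A or B) has the higher score."""
    frames = zip(m1_a, m1_b, m2_a, m2_b)
    agree = sum((a1 > b1) == (a2 > b2) for a1, b1, a2, b2 in frames)
    return agree / len(m1_a)

# Toy per-frame scores; the measures disagree on the third frame only.
m1_a = [38.0, 37.5, 39.1, 36.8]
m1_b = [37.2, 38.0, 38.5, 36.0]
m2_a = [0.95, 0.94, 0.97, 0.92]
m2_b = [0.93, 0.95, 0.98, 0.91]

print(framewise_agreement(m1_a, m1_b, m2_a, m2_b))  # 0.75
```

Sequence-wise agreement would instead compare a single pooled score per version, which is exactly the distinction the abstract draws.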

    Damage Assessment Of Reinforced Concrete Beam With Various Depths By Using Acoustic Emission Technique

    This study applied the Acoustic Emission (AE) technique to Structural Health Monitoring (SHM). Three categories of reinforced concrete (RC) beams with different depths were fabricated, four beams per category, for a total of 12 RC beams. All samples were tested in a four-point bending setup under monotonic and stepwise loading; in each category, one beam was subjected to monotonic loading and the remaining three to stepwise loading. All beams failed in flexure. AE data were collected with R6I sensors and the MICRO-SAMOS (μSAMOS) Digital AE system, and the AEwin software was used to analyse the data. Visual observation was also conducted for comparison with the AE results. The main objective of this research was to investigate the capability of AE to locate cracks in the concrete specimens, and to study the cumulative absolute energy for the different mechanical behaviours of those cracks. In addition, the Intensity Analysis (IA) method was used to quantify the damage level of the RC beams associated with flexural cracking. The results show that the AE technique can accurately locate micro-cracks invisible to the naked eye as well as visible macro-cracks; the difference between crack locations identified visually and the AE source locations lies between 25 mm and 55 mm. The onset of the first crack occurs when the cumulative absolute energy exceeds 1.0 x 10^6 attojoules (aJ), at which stage the damage is typically limited to minor surface defects. Moreover, once localized damage sets in, the absolute energy increases sharply, by about five to seven times.
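The crack-onset criterion described above can be sketched as a simple threshold test on the running sum of AE hit energies. This is an illustration of the reported 1.0 x 10^6 aJ threshold, not the authors' code; the hit energies below are hypothetical.

```python
# Illustrative sketch: detect first-crack onset as the AE hit at which
# cumulative absolute energy first exceeds 1.0e6 attojoules, the threshold
# reported in the study. Hit energies here are made-up example values.
import itertools

THRESHOLD_AJ = 1.0e6  # attojoules; reported onset of the first micro-crack

def crack_onset_index(hit_energies_aj):
    """Return the index of the AE hit at which cumulative absolute energy
    first exceeds the threshold, or None if it never does."""
    for i, total in enumerate(itertools.accumulate(hit_energies_aj)):
        if total > THRESHOLD_AJ:
            return i
    return None

hits = [2.0e5, 3.5e5, 1.0e5, 4.0e5, 6.0e5]  # hypothetical hit energies (aJ)
print(crack_onset_index(hits))  # 3: cumulative energy passes 1.0e6 at hit 4
```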

    DrugComb update: a more comprehensive drug sensitivity data repository and analysis portal

    Combinatorial therapies that target multiple pathways have shown great promise for treating complex diseases. DrugComb (https://drugcomb.org/) is a web-based portal for the deposition and analysis of drug combination screening datasets. Since its first release, DrugComb has received continuous updates to the coverage of its data resources, as well as to the functionality of the web server, to improve the analysis, visualization and interpretation of drug combination screens. Here, we report significant updates of DrugComb, including: (i) manual curation and harmonization of more comprehensive drug combination and monotherapy screening data, not only for cancers but also for other diseases such as malaria and COVID-19; (ii) enhanced algorithms for assessing the sensitivity and synergy of drug combinations; (iii) network modelling tools to visualize the mechanisms of action of drugs or drug combinations for a given cancer sample; and (iv) state-of-the-art machine learning models to predict drug combination sensitivity and synergy. These improvements come with a more user-friendly graphical interface and faster database infrastructure, which make DrugComb the most comprehensive web-based resource for the study of drug sensitivities across multiple diseases. Peer reviewed.
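One widely used synergy model of the kind DrugComb's scoring algorithms cover is Bliss independence; a minimal sketch (the inhibition values are hypothetical, and DrugComb also supports other models such as Loewe, HSA and ZIP):

```python
# Hedged sketch of the Bliss independence synergy score: the observed
# combination effect minus the effect expected if the two drugs acted
# independently. Inhibition values are hypothetical fractions in [0, 1].

def bliss_excess(inh_a, inh_b, inh_combo):
    """Observed combination inhibition minus the Bliss-expected value
    for independent action: E = a + b - a*b."""
    expected = inh_a + inh_b - inh_a * inh_b
    return inh_combo - expected

# Drug A alone inhibits 40%, drug B alone 30%, the combination 70%:
# expected under independence is 0.58, so the excess is positive (synergy).
print(round(bliss_excess(0.4, 0.3, 0.7), 3))
```

A positive excess suggests synergy, a negative one antagonism; DrugComb reports such scores per dose pair and aggregated over the dose-response matrix.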

    Spatio-temporal error concealment technique for high order multiple description coding schemes including subjective assessment

    Error resilience (ER) is an important tool in video coding for maximizing the Quality of Experience (QoE). The prediction process in video coding has become complex, which yields unsatisfying video quality when NAL-unit packets are lost on error-prone channels. Among the various ER techniques, multiple description coding (MDC) is one of the promising approaches to this problem. MDC is categorized into different types; in this paper, we focus on temporal MDC techniques and propose a new temporal MDC scheme. In the encoding process, the encoded descriptions contain primary frames and secondary frames (redundant representations). The secondary frames carry the MVs predicted from previous primary frames, such that the residual signal is set to zero and is not part of the rate-distortion optimization. In the decoding process, a weighted average error concealment (EC) strategy is proposed to conceal the lost frames. The proposed scheme is subjectively evaluated along with other schemes, and the results show that it performs significantly differently from most other temporal MDC schemes.
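The weighted-average concealment step can be sketched as follows. This is an illustration under my own assumed weighting (inverse temporal distance), not the paper's exact strategy:

```python
# Illustrative sketch of weighted-average error concealment: a lost frame is
# estimated as a pixel-wise weighted average of two candidate reconstructions
# (e.g. from the two descriptions), here weighted by the inverse temporal
# distance of their reference frames. The weighting rule is hypothetical.

def conceal(candidate1, candidate2, dist1, dist2):
    """Weighted average of two candidate frames (flat lists of pixel
    values), with weights inversely proportional to temporal distance."""
    w1 = (1 / dist1) / (1 / dist1 + 1 / dist2)
    w2 = 1 - w1
    return [w1 * p + w2 * q for p, q in zip(candidate1, candidate2)]

# The candidate from the nearer reference (distance 1) dominates the
# farther one (distance 3): weights are 0.75 and 0.25.
print(conceal([100, 120], [80, 100], 1, 3))
```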

    DSMK-means “Density-based Split-and-Merge K-means clustering Algorithm”

    Clustering is widely used to explore and understand large collections of data. The K-means clustering method is one of the most popular approaches due to its ease of use and simplicity of implementation. This paper introduces the Density-based Split-and-Merge K-means clustering Algorithm (DSMK-means), which is developed to address the stability problems of the standard K-means clustering algorithm and to improve clustering performance on datasets that contain clusters with different complex shapes, noise, or outliers. Based on an extensive set of experiments, this paper concludes that the developed DSMK-means algorithm achieves higher-accuracy results than competing algorithms, especially because it can process datasets containing clusters with different shapes and densities, or those with outliers and noise.
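The merge phase of a split-and-merge scheme can be sketched as follows. This illustrates the general idea only; the authors' exact density-based split and merge criteria are not reproduced here:

```python
# Hedged sketch of the "merge" phase of a split-and-merge clustering scheme:
# after over-clustering (more k-means centroids than true clusters), any
# sub-clusters whose centroids are closer than a threshold are merged
# transitively, so one elongated or complex-shaped cluster can be covered by
# several spherical pieces. Coordinates below are toy values.
import math

def merge_clusters(centroids, threshold):
    """Group 2-D centroids transitively: any pair closer than `threshold`
    ends up in the same final cluster. Returns a label per centroid."""
    n = len(centroids)
    labels = list(range(n))
    def find(i):
        while labels[i] != i:
            i = labels[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(centroids[i], centroids[j]) < threshold:
                labels[find(j)] = find(i)
    return [find(i) for i in range(n)]

# Three sub-centroids tracing one elongated shape, plus one far away:
# the first three merge into a single cluster, the last stays separate.
centroids = [(0, 0), (1.5, 0), (3, 0), (10, 0)]
print(merge_clusters(centroids, threshold=2.0))  # [0, 0, 0, 3]
```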

    Enhancing and Combining a Recent K-means Family of Algorithms for Better Results

    Clustering is widely used to explore and understand large collections of data. The K-means clustering method is one of the most popular approaches due to its ease of use and simplicity of implementation. In this thesis, the researcher introduces the Distance-based Initialization Method for the K-means clustering algorithm (DIMK-means), which is developed to carefully select a set of initial centroids that yields high-accuracy results, in contrast to the random selection of the standard K-means method, which yields low-accuracy results. This initialization method is as fast and as simple as the K-means algorithm itself, with almost the same low cost, which makes it attractive in practice. The researcher also introduces the Density-based Split-and-Merge K-means clustering Algorithm (DSMK-means), which is developed to address the stability problems of K-means clustering and to improve clustering performance on datasets that contain clusters with different complex shapes, noise, or outliers. Based on an extensive set of experiments, this research concludes that the developed algorithms are more capable of finding high-accuracy results than other algorithms, especially because they can process datasets containing clusters that have different shapes or densities, are not linearly separable, or contain outliers and noise. The experimental datasets were chosen from artificial examples and from real-world examples in the UCI Machine Learning Repository.

    Reproducible research framework for objective video quality measures using a large-scale database approach

    This work presents a framework to facilitate reproducibility of research in video quality evaluation. Its initial version is built around the JEG-Hybrid database of HEVC-coded video sequences. The framework is modular, organized as pipelined activities that range from the tools needed to generate the whole database from reference signals up to the analysis of the video quality measures already present in the database. Researchers can re-run, modify and extend any module, starting from any point in the pipeline, while always achieving perfect reproducibility of the results. The modular structure also makes it possible to work on subsets of the database, since some analyses might be too computationally intensive for the whole database; for this purpose, the framework includes a software module to compute interesting subsets, in terms of coding conditions, of the whole database. An example shows how the framework can be used to investigate how small differences in the definition of the widespread PSNR metric can yield very different results, discussed in more detail in our accompanying research paper Aldahdooh et al. (0000). This further underlines the importance of reproducibility for comparing different research work with high confidence. To the best of our knowledge, this framework is the first attempt to bring exact end-to-end reproducibility to video quality evaluation research. (C) 2017 The Authors. Published by Elsevier B.V.
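A concrete example of the kind of PSNR definition difference mentioned above (this is a generic illustration, not necessarily the paper's case): averaging per-frame PSNR values versus computing PSNR from the mean MSE of all frames gives different sequence-level scores. The frame MSEs below are toy values.

```python
# Sketch: two common sequence-level PSNR definitions disagree. With one
# badly coded frame, the mean of per-frame PSNRs is dominated by the good
# frames, while PSNR of the mean MSE is dominated by the bad frame.
import math

MAX_I = 255.0  # peak value for 8-bit video

def psnr(mse):
    return 10 * math.log10(MAX_I ** 2 / mse)

frame_mses = [10.0, 10.0, 1000.0]  # toy values: one badly coded frame

mean_of_psnrs = sum(psnr(m) for m in frame_mses) / len(frame_mses)
psnr_of_mean_mse = psnr(sum(frame_mses) / len(frame_mses))

# Roughly 31.5 dB versus 22.8 dB for the very same decoded frames.
print(round(mean_of_psnrs, 2), round(psnr_of_mean_mse, 2))
```

An eight-plus dB gap from the same data is exactly why pinning down the metric definition matters for reproducibility.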

    Daily Herald, January, 03, 1978

    Error concealment (EC) is one of the target applications of inpainting techniques. Some methods combine the estimated lost motion vectors (MVs) with exemplar-based inpainting to recover the lost regions. Because erroneous motion vectors may indicate a moving object as background and vice versa, these methods still show visual artifacts in the recovered regions. In this paper, the concept of a motion map that can be easily generated on the decoder side is introduced and combined with the exemplar-based inpainting technique. The proposed method introduces an adaptive search window size that trades off quality against complexity. Moreover, an optional blending technique is proposed to limit spatio-temporal artifacts. Experiments show that the proposed method improves visual quality by 5 dB on average relative to the state-of-the-art inpainting-based EC method.
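The decoder-side motion map can be sketched as a per-block moving/background classification. The threshold and block layout here are hypothetical, not the paper's exact construction:

```python
# Illustrative sketch of a decoder-side "motion map": each block is flagged
# as moving or background by the magnitude of its received motion vector,
# and the map can then steer inpainting source selection. The magnitude
# threshold is a hypothetical parameter.
import math

def motion_map(mvs, threshold=1.0):
    """mvs: list of (dx, dy) motion vectors, one per block.
    Returns True for blocks classified as moving."""
    return [math.hypot(dx, dy) > threshold for dx, dy in mvs]

# Static block, clearly moving block, and near-static block.
print(motion_map([(0.0, 0.0), (2.0, 1.0), (0.5, 0.5)]))  # [False, True, False]
```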