
    Neuromodeling in horticulture and viticulture

    The article considers the possibilities of using artificial intelligence in horticulture and viticulture. Artificial intelligence technologies are now actively used in agriculture, making it possible to estimate crop yields effectively, automate the harvesting and storage of agricultural produce, assess the condition of the soil and the composition and effective use of fertilizers, identify plant diseases, and control weeds using recognition methods. The use of artificial intelligence methods in horticulture and viticulture has its own specific features: firstly, robotic complexes for harvesting cherries, apricots, apples, peaches and grapes; and secondly, the identification of fruit diseases by means of photo recognition using neural networks' machine learning.

    Retrieval of experiments by efficient comparison of marginal likelihoods


    Soft computing-based methods for semantic service retrieval

    University of Technology Sydney, Faculty of Engineering and Information Technology.
    Nowadays, a large number of business services are advertised to customers via online channels. To access the published services, customers typically search for them using search engines. Consequently, to meet customers' needs, many researchers have focused on improving the performance of the retrieval process. In the recent past, semantic technologies have played an important role in service retrieval and service querying. A service retrieval system consists of two main processes: service annotation and service querying. Annotating services semantically enables machines to understand the purpose of services, while semantic service querying helps machines expand user queries by considering the meanings of query terms and retrieve services relevant to the queries. Because both processes deal with the semantics of services and queries, they can further assist in intelligent and precise service retrieval, selection and composition.
    In terms of semantic service annotation, a key issue is its manual nature. Manual service annotation not only requires a large amount of time; annotations are also updated infrequently and hence may fall out of date as service descriptions change. Although some researchers have studied semantic service annotation, they have focused only on Web services, not business service information. Moreover, their approaches are semi-automated, so service providers are still required to select appropriate service annotations. Similarly, the existing literature on semantic service querying has focused on processing Web pages or Web services, not business service information. In addition, because of the ubiquity, heterogeneity and ambiguity of services, soft computing methods offer an interesting solution for handling complex tasks in service retrieval. Based on the literature review, however, no soft-computing-based methods have been used for semantic service annotation or semantic service querying.
    In this research, intelligent soft-computing-driven methods are developed to improve the performance of a semantic retrieval system for business services. The research includes three main parts, namely, intelligent methods for semantically annotating services, querying service concepts, and retrieving services based on relevant concepts. Furthermore, a prototype of a service retrieval system is built to validate the developed methods. The research proposes three semantic-based methods, ECBR, Vector-based and Classification-based, for accomplishing each part. The experimental results show that the Classification-based method, which is based on soft-computing techniques, performs well in service annotation and outperforms both the ECBR and Vector-based methods in service querying and service retrieval.
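The abstract does not specify how the Vector-based querying method works internally; as a rough, generic illustration of vector-space service querying, a minimal term-frequency cosine-similarity retriever might look like the following (the service names and descriptions are invented for the example):

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, services: dict) -> list:
    # Rank service descriptions by similarity to the query terms.
    q = Counter(query.lower().split())
    scored = [(name, cosine(q, Counter(desc.lower().split())))
              for name, desc in services.items()]
    return sorted(scored, key=lambda x: -x[1])
```

A real system would add stemming, IDF weighting and, as in the thesis, semantic expansion of query terms; this sketch only shows the vector-space core.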

    Diffusion Model as Representation Learner

    Diffusion Probabilistic Models (DPMs) have recently demonstrated impressive results on various generative tasks. Despite this promise, the learned representations of pre-trained DPMs have not been fully understood. In this paper, we conduct an in-depth investigation of the representation power of DPMs and propose a novel knowledge transfer method that leverages the knowledge acquired by generative DPMs for recognition tasks. Our study begins by examining the feature space of DPMs, revealing that DPMs are inherently denoising autoencoders that balance representation learning with regularizing model capacity. Building on this observation, we introduce a novel knowledge transfer paradigm named RepFusion. Our paradigm extracts representations at different time steps from off-the-shelf DPMs and dynamically employs them as supervision for student networks, in which the optimal time step is determined through reinforcement learning. We evaluate our approach on several image classification, semantic segmentation and landmark detection benchmarks, and demonstrate that it outperforms state-of-the-art methods. Our results uncover the potential of DPMs as a powerful tool for representation learning and provide insights into the usefulness of generative models beyond sample generation. The code is available at \url{https://github.com/Adamdad/Repfusion}.
    Comment: Accepted by ICCV 202
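The core idea, extracting a teacher DPM's features at a chosen time step and using them as supervision for a student, can be sketched with toy stand-ins. Everything below (the noising schedule, the feature map, the loss) is invented for illustration and is not the RepFusion implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffuse(x, t, T=100):
    # Toy forward-diffusion step: blend the input toward Gaussian noise
    # as the time step t grows (a stand-in for a real DPM noise schedule).
    alpha = 1.0 - t / T
    return np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * rng.standard_normal(x.shape)

def features(x_t, W):
    # Stand-in for the intermediate activations of an off-the-shelf DPM
    # evaluated on the noised input.
    return np.tanh(x_t @ W)

def distill_loss(student_f, teacher_f):
    # The student network is trained to match the teacher's features (MSE);
    # in RepFusion the time step t is additionally chosen by RL.
    return float(np.mean((student_f - teacher_f) ** 2))
```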

    Enhanced collapsible linear blocks for arbitrary sized image super-resolution

    Image up-scaling and super-resolution (SR) techniques have been a hot research topic for many years because of their large impact in fields such as medical imaging and surveillance. Single image super-resolution (SISR) in particular has become very popular thanks to the fast development of deep convolutional neural networks (DCNNs) and its low requirements on the input, and it achieves outstanding performance. However, problems remain in the state-of-the-art works from two perspectives: first, they fail to exploit the hierarchical characteristics of the input, resulting in loss of information and artifacts in the final high-resolution (HR) image; second, they fail to handle arbitrary-sized images, as existing research focuses on fixed-size inputs. To address these challenges, this paper proposes residual dense networks (RDNs) and a multi-scale sub-pixel convolution network (MSSPCN), which are integrated into a Collapsible Linear Block Super Efficient Super-Resolution (SESR) network. The RDNs tackle the first challenge by carrying the hierarchical features end-to-end. An adaptive cropping strategy (ACS) is introduced before feature extraction to address the image-size challenge. The novelty of this work lies in extracting the hierarchical features and integrating RDNs with MSSPCNs. The proposed network can upscale an arbitrary-sized image (1080p) by ×2 (4K) and ×4 (8K). To secure ground truth for evaluation, this paper follows the opposite flow, generating the input LR images by down-sampling the given HR images (ground truth). The proposed algorithm is compared with eight state-of-the-art algorithms, both quantitatively and qualitatively, and the results are verified on six benchmark datasets. The extensive experiments show that the proposed architecture performs better than the other methods and upscales images satisfactorily.
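Sub-pixel convolution layers, like those in the MSSPCN, end with a depth-to-space rearrangement that turns extra channels into spatial resolution. A minimal sketch of that step is below; the channel ordering is the commonly used one and is assumed, not taken from the paper:

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    # Depth-to-space: (C*r*r, H, W) -> (C, H*r, W*r).
    # Each group of r*r channels fills an r-by-r block of output pixels,
    # which is how sub-pixel convolution produces the upscaled image.
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

For ×4 upscaling this is typically applied twice with r=2, or once with r=4, after the convolutional feature extractor.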

    How Fast Can We Play Tetris Greedily With Rectangular Pieces?

    Consider a variant of Tetris played on a board of width $w$ and infinite height, where the pieces are axis-aligned rectangles of arbitrary integer dimensions, the pieces can only be moved before letting them drop, and a row does not disappear once it is full. Suppose we want to follow a greedy strategy: let each rectangle fall where it will end up the lowest given the current state of the board. To do so, we want a data structure which can always suggest a greedy move. In other words, we want a data structure which maintains a set of $O(n)$ rectangles, supports queries which return where to drop the rectangle, and updates which insert a rectangle dropped at a certain position and return the height of the highest point in the updated set of rectangles. We show via a reduction to the Multiphase problem [Pătrașcu, 2010] that on a board of width $w=\Theta(n)$, if the OMv conjecture [Henzinger et al., 2015] is true, then both operations cannot be supported in time $O(n^{1/2-\epsilon})$ simultaneously. The reduction also implies polynomial bounds from the 3-SUM conjecture and the APSP conjecture. On the other hand, we show that there is a data structure supporting both operations in $O(n^{1/2}\log^{3/2} n)$ time on boards of width $n^{O(1)}$, matching the lower bound up to an $n^{o(1)}$ factor.
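The two operations the abstract asks for can be made concrete with a naive baseline that spends $O(w)$ time per operation by scanning column heights directly; the paper's data structure achieves roughly $O(\sqrt{n})$ per operation with much heavier machinery, so this sketch only fixes the interface, not the technique:

```python
class GreedyTetris:
    """Naive greedy-Tetris board: query returns where a piece of a given
    width lands lowest; update drops it there and reports the new max height."""

    def __init__(self, width: int):
        self.height = [0] * width  # current height of each column

    def query(self, piece_w: int) -> int:
        # Leftmost drop position minimizing the landing height, i.e. the
        # position whose width-piece_w window has the smallest max height.
        best_x, best_top = 0, float("inf")
        for x in range(len(self.height) - piece_w + 1):
            top = max(self.height[x:x + piece_w])
            if top < best_top:
                best_x, best_top = x, top
        return best_x

    def update(self, x: int, piece_w: int, piece_h: int) -> int:
        # Drop a piece_w-by-piece_h rectangle at column x; it rests on the
        # tallest column under it. Return the board's new maximum height.
        top = max(self.height[x:x + piece_w])
        for i in range(x, x + piece_w):
            self.height[i] = top + piece_h
        return max(self.height)
```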

    Less users more confidence: How AOIs don’t affect scanpath trend analysis

    User studies are typically difficult: recruiting enough users is often problematic, and each experiment takes a considerable amount of time to complete. Eye tracking is increasingly used in these studies, which often adds further time; therefore, the fewer users such studies require, the more practical they become in terms of economics and time expended. The possibility of achieving almost the same results with fewer users has already been raised. Specifically, prior work on scanpath trend analysis, which discovers the most commonly followed path on a particular web page in terms of its visual elements or areas of interest (AOIs), observed 75% similarity to the results of 65 users with only 27 users for searching tasks and 34 users for browsing tasks. Different approaches are available to segment or divide web pages into their visual elements or AOIs. In this paper, we investigate whether the possibility raised by the previous work is restricted to a particular page segmentation approach by replicating the experiments with two other segmentation approaches. The results are consistent, with ~5% difference for the searching tasks and ~10% difference for the browsing tasks.
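Scanpath analysis typically encodes each user's gaze as a string of AOI labels and compares the strings; string-edit distance is a common way to score how similar two scanpaths are. The sketch below shows that generic encoding-and-comparison step (the AOI labels are invented, and the paper's exact similarity measure may differ):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two scanpaths,
    # each encoded as a string of single-letter AOI labels.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def scanpath_similarity(a: str, b: str) -> float:
    # Normalised similarity in [0, 1]; 1.0 means identical scanpaths.
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))
```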