3 research outputs found
Research on Calligraphy Evaluation Technology Based on Deep Learning
Although computer-assisted instruction (CAI) is booming, related research in calligraphy education has made little progress. This work aims to let calligraphy beginners evaluate their writing anytime and anywhere. The author uses literature research and interviews to identify the common writing problems of beginners, then discusses these problems, designs solutions, studies algorithms, and verifies them experimentally. The system is built on the ResNet-50 model and delivered to beginners through a WeChat applet. The main research contents are as follows:
(1) To achieve good results in calligraphy judgment, this work applies the ResNet-50 model. First, the region of the handwritten calligraphy image used as network input is cropped into small blocks suited to the network. During training, the learning rate, the number of network layers, and the number of training samples are tuned for the best result. The results show that ResNet has practical and reference value in the field of calligraphy judgment. To address possible over-fitting, this work proposes collecting more data and improving the data-cleaning process to raise judgment accuracy.
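The block-cropping step described above can be sketched in plain numpy. The 224×224 block size and non-overlapping stride are assumptions here (224×224 is the conventional ResNet-50 input size), and `extract_patches` is a hypothetical helper, not the author's code:

```python
import numpy as np

def extract_patches(image, patch=224, stride=224):
    """Crop a grayscale calligraphy image into fixed-size blocks.

    Hypothetical pre-processing step: the 224x224 size matches the
    standard ResNet-50 input; the actual block size used in the work
    is an assumption.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            block = image[y:y + patch, x:x + patch].astype(np.float32)
            # Scale pixel values to [0, 1] before feeding the network.
            patches.append(block / 255.0)
    return patches

# A synthetic 448x448 "page" yields a 2x2 grid of blocks.
page = np.random.randint(0, 256, (448, 448), dtype=np.uint8)
blocks = extract_patches(page)
print(len(blocks))  # 4
```

Each block would then be fed to the network (replicated across three channels if the model expects RGB input).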
(2) Given the rise of WeChat applets, and in view of the cumbersome development process and limited functional modules of current WeChat applet learning platforms, this work uses cloud development to build a calligraphy learning platform as a WeChat applet. While simplifying development, the platform's functional modules meet the needs of teachers and beginners, giving it practical and commercial value. After development of the calligraphy learning applet is complete, it will be submitted for official
Environmentally robust multiple camera tracking
Significant growth in the use of surveillance cameras has arisen from both the availability of low-cost home security systems and post-September 11th security measures. With such a plethora of surveillance cameras available and already in use, accurately tracking a person or object from one field of view to another is a challenging task: recognising the same person at different spatial locations, under different lighting conditions, and at different scales and orientations. To address these challenges and provide a solution, a review of recent and past literature is provided.
The main theme of this research is investigating methods to improve the tracking of objects and people in dynamic environments and applying computational techniques to optimise such tracking systems. Image processing techniques are explored and refactored to suit currently available single-board computing power. Optimisation methods for computing speed are investigated, presenting the paradigm of parallel programming in the design of computationally intense algorithms. The research also addresses cross-platform software/server application design.
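The parallel-programming paradigm mentioned above can be sketched with Python's standard thread pool: per-frame image operations are independent, so frames can be mapped across workers. This is a generic illustration, not the thesis's implementation; for CPU-bound work on a single-board computer one would typically use process-based parallelism instead:

```python
from concurrent.futures import ThreadPoolExecutor

def grayscale_frame(frame):
    """Average the RGB channels of one frame (a list of (r, g, b) pixels)."""
    return [sum(px) // 3 for px in frame]

def process_video(frames, workers=4):
    # Map the per-frame filter over all frames in parallel; each frame
    # is independent, so this is an embarrassingly parallel workload.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(grayscale_frame, frames))

frames = [[(10, 20, 30), (90, 90, 90)] for _ in range(8)]
out = process_video(frames)
print(out[0])  # [20, 90]
```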
In controlled environments, current tracking systems perform well; however, this project explores methods to take multiple-camera tracking to a higher level where systems can, in real time, robustly cope with rapid changes in lighting, track objects between indoor and outdoor scenarios at any time of day and in any weather, handle severe image occlusion and rapid changes in the direction, orientation and velocity of the tracked object, and remain invariant to image clutter and noise. The outputs are thus twofold: track a human or object across multiple cameras, and ensure the algorithm is fast enough to run in real time on a modern processor.
This research explores algorithms to deliver colour illumination invariance, also known as colour constancy. Colour illumination invariance can be applied as a pre-processing step to all cameras in a multi-camera environment. The research also investigates experimental assessment of multi-camera performance, focusing mainly on robustness to environmental changes.
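One classic baseline for such a colour-constancy pre-processing step is the grey-world algorithm, sketched below in numpy. This is a standard illustration of the concept, not necessarily the algorithm adopted in this research:

```python
import numpy as np

def grey_world(image):
    """Grey-world colour constancy: scale each channel so its mean
    matches the global mean, discounting a coloured illuminant.

    `image` is an H x W x 3 float array with values in [0, 255].
    A classic baseline, not the specific method used in this work.
    """
    means = image.reshape(-1, 3).mean(axis=0)  # per-channel means
    gain = means.mean() / means                # per-channel gains
    return np.clip(image * gain, 0.0, 255.0)

# A reddish cast: the red channel's mean is twice the others'.
img = np.zeros((4, 4, 3))
img[..., 0], img[..., 1], img[..., 2] = 200.0, 100.0, 100.0
balanced = grey_world(img)
```

After correction the three channel means are equal, so the same surface seen under differently coloured lighting by two cameras maps to similar colours.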
There are three main objectives for a tracking algorithm used in the proposed system. Firstly, the tracking algorithm must accurately detect objects independently of scale change and rotation. Secondly, it must accurately detect objects across multiple cameras in different lighting conditions. Thirdly, it must attain a high level of colour constancy; this last objective can be implemented as a pre-processing step to the tracking algorithm. This research explores the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) algorithms. These algorithms are discussed in detail in the literature review, as are methods for providing colour illumination invariance.
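SIFT and SURF descriptors are conventionally matched between camera views with a nearest-neighbour search filtered by Lowe's ratio test, which can be sketched in numpy on toy 2-D descriptors (real SIFT descriptors are 128-D). The helper name and the 0.75 ratio are illustrative assumptions:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match descriptors between two views using Lowe's ratio test.

    Each row of desc_a / desc_b is one feature descriptor. A match is
    kept only when the best candidate in desc_b is clearly closer
    than the second-best, which rejects ambiguous correspondences.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Two descriptors that agree across views, plus one ambiguous one
# (equally close to two candidates), which the ratio test rejects.
a = np.array([[1.0, 0.0], [0.0, 1.0], [0.55, 0.45]])
b = np.array([[1.0, 0.1], [0.1, 1.0], [0.6, 0.4], [0.5, 0.5]])
matches = ratio_test_matches(a, b)
print(matches)  # [(0, 0), (1, 1)]
```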
Localisation and loop-closure detection based on visual saliency: algorithms and hardware architectures
In many robotics tasks, vision is considered the essential element through which perception of the environment, or interaction with other users, can be realised. However, potential artifacts in the captured images make recognising and interpreting the visual information extremely complicated. It is therefore very important to use robust, stable primitives with a high repeatability rate to achieve good performance. This thesis addresses the problems of localisation and loop-closure detection for a mobile robot using visual saliency. The accuracy and efficiency of the localisation and loop-closure detection applications are evaluated and compared with the results of approaches from the literature, both applied to different sequences of images acquired in outdoor environments. The main drawback of the models proposed for extracting salient regions is their computational complexity, which leads to significant processing time. To obtain real-time processing, this thesis also presents an implementation of the salient-region detector on the reconfigurable DreamCam platform.
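The loop-closure idea can be illustrated schematically: each frame is summarised by a global signature (for instance, a histogram over salient-region descriptors) and compared with earlier frames by cosine similarity. The signature, threshold and frame-gap values below are illustrative assumptions, not the detector evaluated in the thesis:

```python
import numpy as np

def detect_loop_closures(signatures, threshold=0.9, min_gap=5):
    """Schematic loop-closure check over per-frame global signatures.

    A closure (j, i) is reported when frame i's signature is highly
    similar to that of a much earlier frame j; recent frames are
    skipped because consecutive views are trivially similar.
    """
    closures = []
    for i, cur in enumerate(signatures):
        for j in range(i - min_gap):
            sim = cur @ signatures[j] / (
                np.linalg.norm(cur) * np.linalg.norm(signatures[j]))
            if sim > threshold:
                closures.append((j, i))
    return closures

rng = np.random.default_rng(0)
sigs = [rng.standard_normal(16) for _ in range(10)]
sigs.append(sigs[0] + 0.01 * rng.standard_normal(16))  # revisit frame 0
closures = detect_loop_closures(sigs)
print(closures)
```

A real system would use a much richer signature and verify candidates geometrically before accepting a closure.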