6 research outputs found

    A Methodology for Collecting and Analyzing Automobile Traffic Big Data in Educational Programs for Training Transport Engineers

    Intelligent transport systems are an integral part of the development and digitalization of modern megacities. Creating digital twins is a priority task for monitoring, analyzing, and predicting the behavior of a transport system, and this requires organized data collection. Various kinds of sensors generate a large volume of data (Big Data) that must be monitored and analyzed. All of this confronts government and business with the task of training highly qualified personnel who are competent not only in the road transport sector but also in working with Big Data.

    Application of Digital Technologies in Solar Power Engineering

    A key element in the effective implementation of the "Digital Economy" national program is increasing the efficiency and resilience of energy-sector enterprises through digital transformation measures. Implementing the results of these measures will increase labor productivity, reduce unit management costs, and lower the share of material costs. One successful example of introducing digitalization technologies into the energy sector is the use of methods based on the combined application of unmanned aerial vehicles and machine vision.

    Object Recognition Using Deep Convolutional Features Transformed by a Recursive Network Structure

    © 2017 IEEE. Deep neural networks (DNNs) trained on large data sets have been shown to capture high-quality features describing image data. Numerous studies have proposed ways to transfer DNN structures trained on large data sets to classification tasks represented by relatively small data sets. Due to the limitations of these proposals, it is not well understood how to effectively adapt a pre-trained model to a new task. Typically, the transfer process uses a combination of fine-tuning and training of adaptation layers; however, both are susceptible to data shortage and high computational complexity. This paper proposes an improvement to the well-known AlexNet feature extraction technique. The proposed approach applies a recursive neural network structure to features extracted by a deep convolutional neural network pre-trained on a large data set. Object recognition experiments conducted on the Washington RGBD image data set show that the proposed method combines structural simplicity with higher recognition accuracy at a low computational cost compared with other relevant methods. The new approach requires no training at the feature extraction phase and can be performed very efficiently, as the output features are compact and highly discriminative and can be used with a simple classifier in object recognition settings.
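    The core idea in the abstract — applying an untrained recursive structure to a grid of pre-trained convolutional features — can be sketched as follows. This is a minimal, hypothetical numpy illustration, not the paper's implementation: the grid size, feature dimension, and function names are assumptions, and a fixed random weight matrix stands in for the recursive layer.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def random_recursive_pool(feature_map, rng):
        """Collapse a K x K x d grid of conv features into a single d-dim
        vector by repeatedly merging 2x2 blocks of children with one fixed
        random weight matrix and a tanh nonlinearity. No training is
        needed, matching the 'no training at feature extraction' claim."""
        _, _, d = feature_map.shape
        # one shared random weight matrix: 4 concatenated children -> 1 parent
        W = rng.standard_normal((d, 4 * d)) / np.sqrt(4 * d)
        x = feature_map
        while x.shape[0] > 1:
            k = x.shape[0] // 2
            merged = np.empty((k, k, d))
            for i in range(k):
                for j in range(k):
                    block = x[2 * i:2 * i + 2, 2 * j:2 * j + 2].reshape(4 * d)
                    merged[i, j] = np.tanh(W @ block)
            x = merged
        return x[0, 0]

    # toy stand-in for a 4x4 grid of 8-dim CNN activations
    fmap = rng.standard_normal((4, 4, 8))
    vec = random_recursive_pool(fmap, rng)
    print(vec.shape)  # (8,)
    ```

    The resulting compact vector could then be fed to a simple classifier such as a linear SVM, in the spirit of the abstract.
    
    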

    Neural network based image representation for small scale object recognition

    Object recognition can be abstractly viewed as a two-stage process. The feature learning stage selects key information that represents the input image in a compact, robust, and discriminative manner in some feature space. The classification stage then learns the rules to differentiate object classes based on the representations of their images in that feature space. Consequently, if the first stage can produce a highly separable feature set, simple and cost-effective classifiers can be used, making the recognition system more applicable in practice. Features, or representations, used to be engineered manually, with assumptions about the data population keeping the complexity within a manageable range. As more practical problems are tackled, those assumptions are no longer valid, and neither are the representations built on them. More parameters and test cases have to be considered in these new challenges, which makes manual engineering too complicated. Machine learning approaches ease those difficulties by allowing computers to learn to identify the appropriate representation automatically. As the number of parameters increases with the diversity of the data, it is always beneficial to eliminate irrelevant information from the input to reduce the complexity of learning. Chapter 3 of the thesis reports a case study in which removing colour leads to an improvement in recognition accuracy. Deep learning appears to be a very strong representation learner, with new achievements arriving on a monthly basis. While the training phase of deep structures requires huge amounts of data, tremendous computation, and careful calibration, the inference phase is affordable and straightforward. Utilizing the knowledge in trained deep networks is therefore promising for efficient feature extraction in smaller systems. Many approaches have been proposed under the name of "transfer learning", aiming to take advantage of that "deep knowledge".
However, the results achieved so far still leave room for improvement. Chapter 4 presents a new method that utilizes a trained deep convolutional structure as a feature extractor and achieves state-of-the-art accuracy on the Washington RGBD dataset. Despite these good results, the potential of transfer learning is only barely exploited. On the one hand, dimensionality reduction can make the deep neural network representation even more computationally efficient and allow a wider range of use cases. Inspired by the structure of the network itself, a new random orthogonal projection method for dimensionality reduction is presented in the first half of Chapter 5. A t-SNE-mimicking neural network for low-dimensional embedding is also discussed in this part, with promising results. In another approach, feature encoding can be used to improve deep neural network features for classification applications. Thanks to their spatially organized structure, deep neural network features can be treated as local image descriptors, so traditional feature encoding approaches such as the Fisher vector can be applied to improve them. This method combines the advantages of both discriminative and generative learning to boost feature performance in difficult scenarios, such as when data is noisy or incomplete. The problem of high dimensionality in deep neural network features is alleviated by using a Fisher vector based on sparse coding, in which an infinite number of Gaussian mixture components is used to model the feature space. In the second half of Chapter 5, the regularized Fisher encoding is shown to be effective in improving classification results on difficult classes. In addition, low-cost incremental k-means learning is shown to be a potential dictionary learning approach that can replace the slow and computationally expensive sparse coding method.
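The random orthogonal projection mentioned for Chapter 5 can be sketched in a few lines of numpy: draw a Gaussian matrix, orthonormalize it with a QR decomposition, and project the features onto the orthonormal columns. This is a generic illustration of the technique under stated assumptions (a 4096-dimensional feature, a 128-dimensional target); the function name and dimensions are not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal_projection(X, out_dim, rng):
    """Project rows of X (n x d) to out_dim dimensions using a random
    orthonormal basis. Orthonormal columns (from QR of a Gaussian
    matrix) approximately preserve pairwise geometry without training."""
    d = X.shape[1]
    G = rng.standard_normal((d, out_dim))
    Q, _ = np.linalg.qr(G)  # reduced QR: Q has orthonormal columns, d x out_dim
    return X @ Q

# toy stand-in for five 4096-dim deep network feature vectors
X = rng.standard_normal((5, 4096))
Z = random_orthogonal_projection(X, 128, rng)
print(Z.shape)  # (5, 128)
```

Because the basis requires no training data, such a projection keeps the feature extraction pipeline entirely training-free, in line with the transfer learning theme of the thesis.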