
    Intelligent Transportation Related Complex Systems and Sensors

    Building around innovative services related to different modes of transport and traffic management, intelligent transport systems (ITS) are being widely adopted worldwide to improve the efficiency and safety of transportation. They enable users to be better informed and to make safer, more coordinated, and smarter decisions about the use of transport networks. Current ITSs are complex systems made up of several components/sub-systems characterized by time-dependent interactions among themselves. Examples of these transportation-related complex systems include road traffic sensors, autonomous/automated cars, smart cities, smart sensors, virtual sensors, traffic control systems, smart roads, logistics systems, smart mobility systems, and many others that are emerging from niche areas. The efficient operation of these complex systems requires: i) efficient solutions to the issues of the sensors/actuators used to capture and control the physical parameters of these systems, as well as to the quality of the data collected from them; ii) tackling complexities using simulations and analytical modelling techniques; and iii) applying optimization techniques to improve the performance of these systems. This collection includes twenty-four papers, which cover scientific concepts, frameworks, architectures and various other ideas on analytics, trends and applications of transportation-related data.

    Using deep learning to count albatrosses from space: Assessing results in light of ground truth uncertainty

    Many wildlife species inhabit inaccessible environments, limiting researchers' ability to conduct essential population surveys. Recently, very high resolution (sub-metre) satellite imagery has enabled remote monitoring of certain species directly from space; however, manual analysis of the imagery is time-consuming, expensive and subjective. State-of-the-art deep learning approaches can automate this process; however, image datasets are often small, and uncertainty in ground truth labels can affect supervised training schemes and the interpretation of errors. In this paper, we investigate these challenges by conducting both manual and automated counts of nesting Wandering Albatrosses on four separate islands, captured by the 31 cm resolution WorldView-3 sensor. We collect counts from six observers, and train a convolutional neural network (U-Net) using leave-one-island-out cross-validation and different combinations of ground truth labels. We show that (1) inter-observer variation in manual counts is significant and differs between the four islands, (2) the small dataset can limit the network's ability to generalise to unseen imagery, and (3) the choice of ground truth labels can have a significant impact on our assessment of network performance. Our final results show the network detects albatrosses as accurately as human observers for two of the islands, while on the other two, misclassifications are largely caused by the presence of noise, cloud cover and habitat that were not present in the training dataset. While the results show promise, we stress the importance of considering these factors in any study where data is limited and observer confidence is variable.
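
    The leave-one-island-out scheme described above is a form of grouped cross-validation: each island in turn serves as the held-out test set while the remaining islands provide the training data. Below is a minimal sketch of that splitting logic using scikit-learn's LeaveOneGroupOut, with a stand-in classifier in place of the U-Net; the array names, placeholder data, and model are illustrative assumptions rather than the authors' code.

```python
# Minimal sketch of leave-one-island-out cross-validation (illustrative only).
# X: per-sample features, y: labels, islands: group id of each sample.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression  # stand-in for the U-Net

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))            # placeholder image features
y = rng.integers(0, 2, size=200)          # placeholder nest / no-nest labels
islands = rng.integers(0, 4, size=200)    # which of the 4 islands each sample came from

logo = LeaveOneGroupOut()
for fold, (train_idx, test_idx) in enumerate(logo.split(X, y, groups=islands)):
    model = LogisticRegression(max_iter=1000)  # the real study trains a U-Net here
    model.fit(X[train_idx], y[train_idx])
    score = model.score(X[test_idx], y[test_idx])
    held_out = islands[test_idx][0]
    print(f"fold {fold}: held-out island {held_out}, accuracy {score:.2f}")
```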

    Geodesic Tracking of Retinal Vascular Trees with Optical and TV-Flow Enhancement in SE(2)

    Retinal images are often used to examine the vascular system in a non-invasive way. Studying the behavior of the vasculature on the retina allows for non-invasive diagnosis of several diseases, as these vessels and their behavior are representative of the behavior of vessels throughout the human body. For early diagnosis and analysis of diseases, it is important to compare and analyze the complex vasculature in retinal images automatically. In previous work, PDE-based geometric tracking and PDE-based enhancement in the homogeneous space of positions and orientations have been studied and have proven useful when dealing with complex structures (crossings of blood vessels in particular). In this article, we propose a single new, more effective Finsler function that integrates the strengths of these two PDE-based approaches and additionally accounts for a number of optical effects (dehazing and illumination in particular). The results improve considerably on both the previous left-invariant models and a recent data-driven model when applied to real, highly challenging clinical images. Moreover, we show clear advantages of each module in our new single Finsler geometrical method.
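
    For context, geodesic vessel tracking in the roto-translation group SE(2) lifts the image to a space of positions and orientations (x, y, θ) and traces vessels as geodesics of a data-driven metric. As an illustrative example of the kind of metric/Finsler function used in this line of work (an assumption for exposition, not the article's actual definition), a left-invariant Riemannian approximation can be written as:

```latex
% Illustrative metric on SE(2) for geodesic vessel tracking
% (an assumed, generic form; not the article's new Finsler function).
\mathcal{F}(\mathbf{p},\dot{\mathbf{p}})^{2}
  = C(\mathbf{p})^{2}\Big(
      \xi^{2}\,|\dot{p}^{1}|^{2}
    + \tfrac{\xi^{2}}{\varepsilon^{2}}\,|\dot{p}^{2}|^{2}
    + |\dot{p}^{3}|^{2}
  \Big)
```

    Here the dotted components are velocities in the left-invariant frame of SE(2) (motion along the current orientation, sideways motion, and in-plane rotation), ξ > 0 balances spatial against angular motion, ε ≪ 1 penalises sideways motion, and C(p) > 0 is a data-driven cost computed from the enhanced image; the article's contribution is a new choice of such a function that combines the two PDE-based approaches and the optical corrections described above.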

    Potassium deficiency diagnosis method of apple leaves based on MLR-LDA-SVM

    Introduction: At present, machine learning and image processing technology are widely used in plant disease diagnosis. Methods: To address the challenges of subjectivity, cost, and timeliness associated with traditional methods of diagnosing potassium deficiency in apple tree leaves, the study proposes a model that utilizes image processing and machine learning techniques to enhance the accuracy of detection during each growth period. Leaf images were collected at different growth stages and processed through denoising and segmentation. Color and shape features of the leaves were extracted, and a multiple regression analysis (MLR) model was used to screen for key features. Linear discriminant analysis was then employed to optimize the data and obtain the optimal shape and color feature factors of apple tree leaves during each growth period. Various machine-learning methods, including SVM, DT, and KNN, were used for the diagnosis of potassium deficiency. Results: The MLR-LDA-SVM model was found to be the optimal model based on comprehensive evaluation indicators. Field experiments were conducted to verify the accuracy of the diagnostic model, achieving high diagnostic accuracy during different growth periods. Discussion: The model can accurately diagnose whether potassium deficiency exists in apple tree leaves during each growth period, providing theoretical guidance for intelligent and precise water and fertilizer management in orchards.
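
    As a rough illustration of the screen-then-reduce-then-classify structure suggested by the MLR-LDA-SVM name, the sketch below chains a feature-screening step, an LDA projection, and an SVM in a single scikit-learn pipeline. The data, the univariate screening step (used here as a stand-in for regression-based feature screening), and all parameters are assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of a screen -> reduce -> classify pipeline in the spirit of
# MLR-LDA-SVM (illustrative only; not the authors' implementation).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif  # stand-in for MLR-based screening
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))       # placeholder colour/shape features per leaf image
y = rng.integers(0, 2, size=300)     # placeholder labels: potassium-deficient vs healthy

pipeline = Pipeline([
    ("screen", SelectKBest(f_classif, k=8)),                   # keep the most informative features
    ("reduce", LinearDiscriminantAnalysis(n_components=1)),    # LDA projection
    ("clf", SVC(kernel="rbf", C=1.0)),                         # final SVM classifier
])

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```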

    Edge Video Analytics: A Survey on Applications, Systems and Enabling Techniques

    Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), is beginning to attract widespread attention. Nevertheless, only a few loosely related surveys exist on this topic, and the basic concepts of EVA (e.g., definition, architectures) have not been fully elucidated due to the rapid development of this domain. To fill these gaps, we provide a comprehensive survey of recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA. Comment: 31 pages, 13 figures.

    Perspectives in visual imaging for marine biology and ecology: from acquisition to understanding

    Durden J, Schoening T, Althaus F, et al. Perspectives in Visual Imaging for Marine Biology and Ecology: From Acquisition to Understanding. In: Hughes RN, Hughes DJ, Smith IP, Dale AC, eds. Oceanography and Marine Biology: An Annual Review. 54. Boca Raton: CRC Press; 2016: 1-72

    Systematic Approaches for Telemedicine and Data Coordination for COVID-19 in Baja California, Mexico

    Conference proceedings info: ICICT 2023: The 6th International Conference on Information and Computer Technologies, Raleigh, HI, United States, March 24-26, 2023, pages 529-542. https://doi.org/10.1007/978-981-99-3236-
    We provide a model for systematic implementation of telemedicine within a large evaluation center for COVID-19 in the area of Baja California, Mexico. Our model is based on human-centric design factors and cross-disciplinary collaborations for scalable, data-driven enablement of smartphone, cellular, and video teleconsultation technologies to link hospitals, clinics, and emergency medical services for point-of-care assessments of COVID testing, and for subsequent treatment and quarantine decisions. A multidisciplinary team was rapidly created, in cooperation with different institutions, including: the Autonomous University of Baja California, the Ministry of Health, the Command, Communication and Computer Control Center of the Ministry of the State of Baja California (C4), Colleges of Medicine, and the College of Psychologists. Our objective is to provide information to the public, to evaluate COVID-19 in real time, and to track regional, municipal, and state-wide data in real time that informs supply chains and resource allocation in anticipation of a surge in COVID-19 cases.

    Automated UAV and Satellite Image Analysis For Wildlife Monitoring.

    Very high resolution satellites and unmanned aerial vehicles (UAVs) are revolutionising our ability to monitor wildlife, especially species in remote and inaccessible regions. However, given the rapid increase in data acquisition, computer-automated approaches are urgently needed to count wildlife in the resultant imagery. In this thesis, we investigate the application of convolutional neural networks (CNNs) to the task of detecting vulnerable seabird populations in satellite and UAV imagery. In our first application we train a U-Net CNN to detect wandering albatrosses in 31-cm resolution WorldView-3 satellite imagery. We compare results across four different island colonies using leave-one-island-out cross-validation, achieving a mean average precision (mAP) score of 0.669. By collecting new data on inter-observer variation in albatross counts, we show that our U-Net results fall within the range of human accuracy for two islands, with misclassifications at other sites being simple to filter manually. In our second application we detect Abbott’s boobies nesting in forest canopy, using UAV Structure from Motion (SfM) imagery. We focus on overcoming occlusion from branches by implementing a multi-view detection method. We first train a Faster R-CNN model to detect Abbott’s booby nest sites (mAP=0.518) and guano (mAP=0.472) in the 2D UAV images. We then project Faster R-CNN detections onto the 3D SfM model, cluster multi-view detections of the same objects using DBSCAN, and use cluster features to classify proposals into true and false positives (comparing logistic regression, support vector machine, and multilayer perceptron models). Our best-performing multi-view model successfully detects nest sites (mAP=0.604) and guano (mAP=0.574), and can be incorporated with expert review to greatly expedite analysis time. Both methods have immediate real-world application for future surveys of the target species, allowing for more frequent, expansive, and lower-cost monitoring, vital for safeguarding populations in the long term.
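
    To make the multi-view fusion step concrete, the sketch below clusters 3D-projected detections with DBSCAN and summarises each cluster by simple features (number of supporting views, mean confidence), which a downstream classifier could use to reject false positives. The coordinates, thresholds, and feature choices are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of fusing multi-view detections by clustering their 3D projections
# (illustrative only; values and thresholds are assumptions, not the thesis code).
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: one single-image detection already projected onto the SfM model,
# as (x, y, z, confidence). In practice these come from Faster R-CNN outputs.
detections = np.array([
    [10.0, 5.0, 2.0, 0.9],
    [10.1, 5.1, 2.0, 0.8],   # same nest seen from another view
    [10.0, 4.9, 2.1, 0.7],
    [42.0, 17.0, 3.5, 0.6],  # isolated detection, possibly a false positive
])

# Cluster detections that land close together on the 3D model (label -1 = noise).
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(detections[:, :3])

# Simple cluster features: number of supporting views and mean confidence.
for label in sorted(set(labels) - {-1}):
    members = detections[labels == label]
    print(f"cluster {label}: {len(members)} views, mean confidence {members[:, 3].mean():.2f}")
# The thesis instead feeds such cluster features to a learned classifier
# (logistic regression / SVM / MLP) to separate true nests from false positives.
```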

    Advances in Object and Activity Detection in Remote Sensing Imagery

    The recent revolution in deep learning has enabled considerable development in the fields of object and activity detection. Visual object detection tries to find objects of target classes with precise localisation in an image and to assign each object instance a corresponding class label. At the same time, activity recognition aims to determine the actions or activities of an agent or group of agents based on sensor or video observation data. Detecting, identifying, tracking, and understanding the behaviour of objects through images and videos taken by various cameras is a very important and challenging problem. Together, the recognition of objects and their activities in imaging data captured by remote sensing platforms is a highly dynamic and challenging research topic. During the last decade, there has been significant growth in the number of publications in the field of object and activity recognition. In particular, many researchers have proposed application domains in which objects and their specific behaviours are identified from airborne and spaceborne imagery. This Special Issue includes papers that explore novel and challenging topics for object and activity detection in remote sensing images and videos acquired by diverse platforms.

    Semantic location extraction from crowdsourced data

    Crowdsourced Data (CSD) has recently received increased attention in many application areas, including disaster management. Convenience of production and use, data currency and abundance are some of the key reasons for this high interest. Conversely, quality issues such as incompleteness, credibility and relevancy prevent the direct use of such data in critical applications like disaster management. Moreover, the availability of location information in CSD is problematic, as it remains very low on many crowdsourced platforms such as Twitter. This recorded location also mostly relates to the mobile device or user location and often does not represent the event location. In CSD, the event location is discussed descriptively in the comments, in addition to the recorded location (which is generated by the mobile device's GPS or the mobile communication network). This study attempts to semantically extract the location information in CSD with the help of an ontological gazetteer and other available resources. Tweets from the 2011 Queensland flood and Ushahidi Crowd Map data were semantically analysed to extract location information with the support of the Queensland Gazetteer, which was converted to an ontological gazetteer, and a global gazetteer. Preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification in CSD and the process of location information extraction.
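
    A minimal sketch of the gazetteer-matching idea is shown below: candidate n-grams from the message text are looked up in a (toy) gazetteer to recover descriptive event locations. The gazetteer entries, the matching rule, and the function names are assumptions for illustration; an ontological gazetteer would additionally encode place types and part-of relations (e.g., suburb within city within state).

```python
# Minimal sketch of gazetteer-based place-name extraction from crowdsourced text
# (illustrative only; entries, coordinates, and matching rule are assumptions).
import re

# Toy gazetteer: place name -> (latitude, longitude).
gazetteer = {
    "brisbane": (-27.47, 153.03),
    "toowoomba": (-27.56, 151.95),
    "lockyer valley": (-27.54, 152.32),
}

def extract_locations(text, max_ngram=2):
    """Return gazetteer entries mentioned in the text, matching up to bigrams."""
    tokens = re.findall(r"[a-z]+", text.lower())
    found = {}
    for n in range(max_ngram, 0, -1):          # try longer names before single words
        for i in range(len(tokens) - n + 1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in gazetteer:
                found[candidate] = gazetteer[candidate]
    return found

tweet = "Flooding reported near Toowoomba and the Lockyer Valley this morning"
print(extract_locations(tweet))
# -> {'lockyer valley': (-27.54, 152.32), 'toowoomba': (-27.56, 151.95)}
```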