
    A System for Monitoring Animals Based on Behavioral Information and Internal State Information

    Managing the risk of injury or illness is an important consideration when keeping pets. This risk can be minimized if pets are monitored regularly, but regular monitoring is difficult and time-consuming. Moreover, because only the external behavior of an animal can be observed directly, its internal condition cannot be assessed and its state can easily be misjudged. Some systems use heart-rate measurement to infer a state of tension, or use rest to assess the internal state; however, because heart rate also increases with exercise, such measurements should be combined with behavioral information. In the current study, we proposed a monitoring system for animals using video image analysis. The proposed system first extracts features related to behavioral information and the animal's internal state via Mask R-CNN from video images taken from the top of the cage. These features are used to detect typical daily activities and anomalous activities, and the method produces an alert when the hamster behaves in an unusual way. In our experiment, the daily behavior of a hamster was measured and analyzed using the proposed system. The results showed that the features of the hamster's behavior were successfully detected. When loud sounds were presented from outside the cage, the system was able to discriminate between the behavioral and internal changes of the hamster. In future research, we plan to improve the accuracy of the measurement of small movements and develop a more accurate system.
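The abstract's central point — that an elevated internal signal (heart rate) is only meaningful when combined with behavioral information — can be sketched as a simple decision rule. This is an illustrative assumption, not the paper's classifier: the feature names and thresholds are hypothetical.

```python
# Hypothetical sketch: combine a movement feature (behavioral information)
# with a heart-rate ratio (internal-state information) so that a high
# heart rate alone does not trigger an alert. Thresholds are illustrative.
def classify_state(movement_level, heart_rate_ratio,
                   move_thresh=0.3, hr_thresh=1.5):
    """Return 'normal', 'exercise', or 'tension'.

    movement_level: activity extracted from video (arbitrary units)
    heart_rate_ratio: current heart rate divided by resting heart rate
    """
    if heart_rate_ratio < hr_thresh:
        return "normal"
    # High heart rate together with high movement is likely exercise,
    # not an internal (tension) state.
    if movement_level >= move_thresh:
        return "exercise"
    # High heart rate while resting is the alert-worthy case.
    return "tension"
```

Only the combined rule distinguishes exercise from tension; either signal alone would misjudge the animal's state.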

    ASTER Cloud Coverage Assessment and Mission Operations Analysis Using Terra/MODIS Cloud Mask Products

    No full text
    Since the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument cannot detect clouds accurately in snow-covered or nighttime images due to its lack of spectral bands, Terra/MODIS cloud mask (MOD35) products have been used instead for cloud assessment of all ASTER images. In this study, we evaluated ASTER cloud mask images generated from MOD35 products and used them to analyze the mission operations of ASTER. In the evaluation, ASTER cloud mask images from different MOD35 versions (Collections 5, 6, and 6.1) showed a large discrepancy in low- and high-latitude areas, and the rate of ASTER scenes with a high uncertain-pixel rate (≥30%) was 2.2% in daytime and 12.0% in nighttime. In the visual evaluation with ASTER browse images, about 2% of cloud mask images showed problems such as mislabeling and artifacts. In the mission operations analysis, the cloud avoidance function implemented in the ASTER observation scheduler yielded a decrease in the mean cloud coverage (MCC) and an increase in the rate of clear scenes of 10% to 15% each. Although the 19-year time series of MCC in five areas showed weather-related fluctuations such as those associated with the El Niño Southern Oscillation (ENSO), it indicated a reduction of a few percent in MCC following the enhancement of the cloud avoidance function in April 2012. The global means of the number of clear ASTER scenes were 15.7 and 6.6 scenes in daytime and nighttime, respectively, and the corresponding success rates were 33.3% and 40.4%. These results are expected to contribute not only to the ASTER Project but also to other optical sensor projects.
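The per-scene statistics described above (cloud coverage and the ≥30% uncertain-pixel flag) reduce to simple pixel counting over a cloud mask. A minimal sketch, assuming a flat list of per-pixel labels rather than the actual MOD35 raster format:

```python
# Illustrative sketch (not the ASTER/MOD35 pipeline): given per-pixel
# labels 'clear', 'cloudy', or 'uncertain', compute the scene's cloud
# coverage and flag it if the uncertain-pixel rate is >= 30%.
def scene_stats(mask, uncertain_limit=0.30):
    n = len(mask)
    cloudy = sum(1 for p in mask if p == "cloudy")
    uncertain = sum(1 for p in mask if p == "uncertain")
    cloud_coverage = cloudy / n
    high_uncertainty = (uncertain / n) >= uncertain_limit
    return cloud_coverage, high_uncertainty
```

Scenes flagged as high-uncertainty would be the ones routed to visual evaluation in a workflow like the one the abstract describes.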

    Modeling Land Use Transformations and Flood Hazard on Ibaraki’s Coastal in 2030: A Scenario-Based Approach Amid Population Fluctuations

    No full text
    Coastal areas, influenced by human activity and natural factors, face major environmental shifts, including climate-induced flood risks. This highlights the importance of forecasting coastal land use for effective flood defense and ecological conservation. Japan's distinct demographic path necessitates flexible strategies for managing its urban development. This study examines the Ibaraki coastal region to analyze the impacts of land-use changes in 2030, predicting and evaluating future floods from intensified high tides and waves in scenario-based forecasts. The future roughness map is derived from projected land-use changes, and we utilize this information in DioVISTA 3.5.0 software to simulate flood scenarios. Finally, we analyzed the overlap between the simulated floods and each land-use category. The results indicate that, since 2020, built-up areas have increased by 52.37 sq. km (39%). In scenarios of constant or shrinking urban areas, grassland increased by 28.54 sq. km (42%), and urban land cover decreased by 7.47 sq. km (5.6%) over ten years. Our research examines two separate peaks in water levels associated with urban flooding. Using the 2030 land-use maps and a peak height of 4 m, which is the lower limit of the maximum run-up height due to storm surge expected in the study area, 4.71 sq. km of residential areas flooded in the urban growth scenario, compared to 4.01 sq. km in the stagnant scenario and 3.96 sq. km in the shrinkage scenario. With the upper limit of 7.2 m, which is the extreme case in most of the study area, these areas increased to 49.91 sq. km, 42.52 sq. km, and 42.31 sq. km, respectively. The simulation highlights future flood-prone urban areas for each scenario, guiding targeted flood prevention efforts.
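The final analysis step — overlapping the simulated flood extent with each land-use category to get flooded area per class — is in essence a per-class count of flooded cells scaled by cell area. A minimal sketch under assumed inputs (flat lists of per-cell depth and land-use class; the real analysis would use the DioVISTA raster outputs):

```python
from collections import defaultdict

# Illustrative overlap analysis: sum flooded area per land-use class.
# Cell size and class names are assumptions, not from the study.
def flooded_area_by_landuse(flood_depth, landuse, cell_km2=0.01, min_depth=0.0):
    """flood_depth: per-cell simulated water depth (m)
    landuse: per-cell land-use class label
    Returns {class: flooded area in sq. km}."""
    areas = defaultdict(float)
    for depth, cls in zip(flood_depth, landuse):
        if depth > min_depth:  # cell counts as flooded
            areas[cls] += cell_km2
    return dict(areas)
```

Running this per scenario (growth, stagnant, shrinkage) would yield the kind of per-category comparison the abstract reports.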

    Early Stage Forest Fire Detection from Himawari-8 AHI Images Using a Modified MOD14 Algorithm Combined with Machine Learning

    No full text
    The early detection and rapid extinguishing of forest fires are effective in reducing their spread. Based on the MODIS Thermal Anomaly (MOD14) algorithm, we propose an early stage fire detection method using low-spatial-resolution but high-temporal-resolution images observed by the Advanced Himawari Imager (AHI) onboard the geostationary meteorological satellite Himawari-8. In order not to miss early stage forest fire pixels with low temperatures, we omit the potential fire pixel detection step of the MOD14 algorithm and parameterize the four contextual conditions included in the MOD14 algorithm as features. The proposed method detects fire pixels in forest areas using a random forest classifier that takes these contextual parameters, nine AHI band values, the solar zenith angle, and five meteorological values as inputs. To evaluate the proposed method, we trained the random forest classifier using an early stage forest fire data set generated by a time-reversal approach with MOD14 products and time-series AHI images in Australia. The results demonstrate that the proposed method with all parameters can detect fire pixels with about 90% precision and recall, and that the contribution of the contextual parameters is particularly significant in the random forest classifier. The proposed method is applicable to other geostationary and polar-orbiting satellite sensors, and it is expected to serve as an effective method for forest fire detection.
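The key idea — turning a hard contextual threshold test into a continuous feature a classifier can weigh — can be sketched for one condition. MOD14's contextual tests compare a candidate pixel's brightness temperature against background statistics; a hedged, simplified version (variable names and the exact form are illustrative, not the paper's parameterization):

```python
import statistics

# Illustrative sketch: instead of applying a MOD14-style contextual test
# as a pass/fail threshold, express it as a continuous feature -- how many
# background standard deviations the candidate pixel's brightness
# temperature exceeds the background mean. A random forest can then learn
# its own decision boundary from such features.
def contextual_feature(t_pixel, t_background):
    """t_pixel: candidate brightness temperature (K)
    t_background: brightness temperatures of valid neighboring pixels (K)"""
    mean = statistics.mean(t_background)
    std = statistics.pstdev(t_background)
    if std == 0:
        return 0.0
    return (t_pixel - mean) / std
```

A weakly elevated pixel that a fixed threshold would reject still produces a nonzero feature value, which is what lets low-temperature early stage fires survive into the classifier stage.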

    Extraction of Coastal Levees Using U-Net Model with Visible and Topographic Images Observed by High-Resolution Satellite Sensors

    No full text
    Coastal levees play a role in protecting coastal areas from storm surges and high waves, and they provide important input information for inundation damage simulations. However, coastal levee data with the uniformity and accuracy sufficient for inundation simulations are not always well developed. Against this background, this study proposed a method to extract coastal levees by inputting high-spatial-resolution optical satellite image products (RGB images, digital surface models (DSMs), and slope images that can be generated from the DSMs), which have high data availability at the locations and times required for simulation, into a deep learning model. The model is based on U-Net, and post-processing for noise removal was introduced to further improve its accuracy. We also proposed a method to calculate levee height by assigning DSM values to the extracted levee pixels and applying a local maximum filter. The validation was conducted in the coastal area of Ibaraki Prefecture in Japan as a test area. The levee mask images for training were manually created by combining these data with satellite images and Google Street View, because the levee GIS data created by the Ibaraki Prefectural Government were incomplete in some parts. First, the deep learning models were compared and evaluated, and it was shown that U-Net was more accurate than Pix2Pix and BBS-Net in identifying levees. Next, three cases of input images were evaluated: (Case 1) RGB images only, (Case 2) RGB and DSM images, and (Case 3) RGB, DSM, and slope images. Case 3 was found to be the most accurate, with an average Matthews correlation coefficient of 0.674. The effectiveness of the noise removal post-processing was also demonstrated. In addition, an example of the calculation of levee heights was presented and evaluated for validity. In conclusion, this method was shown to be effective in extracting coastal levees. The evaluation of generalizability and use in actual inundation simulations are future tasks.
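The evaluation metric reported above, the Matthews correlation coefficient (MCC), is well suited to levee extraction because levee pixels are rare relative to background. A minimal sketch of MCC over flattened binary masks (the study's evaluation pipeline is not specified; this is just the standard definition):

```python
import math

# Standard Matthews correlation coefficient over a predicted vs.
# ground-truth binary mask (1 = levee pixel, 0 = background).
def mcc(pred, truth):
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # MCC is undefined when any marginal is zero; return 0 by convention.
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

Unlike pixel accuracy, MCC stays near zero for a model that predicts "no levee" everywhere, which is why a value like 0.674 is informative for this class-imbalanced task.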

    Analysis of Permafrost Distribution and Change in the Mid-East Qinghai–Tibetan Plateau during 2012–2021 Using the New TLZ Model

    No full text
    The monitoring of permafrost is important for assessing the effects of global environmental changes and for maintaining and managing social infrastructure, and remote sensing is increasingly being used for this wide-area monitoring. However, the accuracy of the conventional method in terms of the temperature factor and the soil factor needs to be improved. To address these two issues, in this study we propose a new model to evaluate permafrost with higher accuracy than the conventional methods. In this model, the land surface temperature (LST) is used as the upper temperature of the active layer of permafrost, and the temperature at the top of permafrost (TTOP) is used as the lower temperature. The TTOP value is then calculated by a modified equation using precipitation–evapotranspiration (PE) factors to account for the effect of soil moisture. This model, referred to as the TTOP-LST zero-curtain (TLZ) model, allows us to analyze subsurface temperatures for each layer of the active layer and to evaluate the presence or absence of the zero-curtain effect through a time-series analysis of stratified subsurface temperatures. The model was applied to the Qinghai–Tibetan Plateau, and permafrost was classified into seven classes based on aspects such as stability and seasonality. As a result, it was possible to map the recent deterioration of permafrost in this region, which is thought to be caused by global warming. A comparison with the mean annual ground temperature (MAGT) model using local subsurface temperature data showed that the average root mean square error (RMSE) of subsurface temperatures at different depths was 0.19 °C, indicating the validity of the TLZ model. A similar analysis based on the TLZ model is expected to enable detailed permafrost analysis in other areas.
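The zero-curtain effect mentioned above is the period during freeze-up when latent heat release holds a soil layer's temperature near 0 °C. Its detection in a temperature time series can be sketched as a run-length check; the tolerance and minimum duration here are assumptions for illustration, not the TLZ model's actual criterion:

```python
# Illustrative zero-curtain check (assumed criterion, not the paper's):
# a layer exhibits the zero-curtain effect if its temperature stays within
# +/- eps of 0 degrees C for at least min_days consecutive days.
def has_zero_curtain(daily_temps_c, eps=0.5, min_days=5):
    run = 0
    for t in daily_temps_c:
        if abs(t) <= eps:
            run += 1
            if run >= min_days:
                return True
        else:
            run = 0  # the near-zero plateau was interrupted
    return False
```

Applied per layer of the stratified subsurface temperatures, such a test would support the presence/absence evaluation the abstract describes.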

    Image-to-Image Subpixel Registration Based on Template Matching of Road Network Extracted by Deep Learning

    No full text
    The vast digital archives collected by optical remote sensing observations over a long period of time can be used to determine changes in the land surface, and this information can be very useful in a variety of applications. However, accurate change extraction requires highly accurate image-to-image registration, especially when the target is an urban area in high-resolution remote sensing images. In this paper, we propose a new method for automatic registration between images that can be applied to noisy images, such as old aerial photographs taken with analog film, for the case where changes in man-made objects such as buildings in urban areas are extracted from multitemporal high-resolution remote sensing images. The proposed method performs image-to-image registration by applying template matching to road masks extracted from the images using a two-step deep learning model. We applied the proposed method to multitemporal images, including images taken more than 36 years before the reference image. As a result, the proposed method achieved registration accuracy at the subpixel level, which was more accurate than the conventional area-based and feature-based methods, even for the image pairs with the most distant acquisition times. The proposed method is expected to be more robust to differences in the sensor characteristics, acquisition time, resolution, and color tone of two remote sensing images, as well as to temporal variations in vegetation and the effects of building shadows. These results were obtained with a road extraction model trained on images from a single area, single time period, and single platform, demonstrating the high versatility of the model. Furthermore, the performance is expected to be improved and stabilized by using images from different areas, time periods, and platforms for training.
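Subpixel accuracy from template matching is typically obtained by refining the integer correlation peak, for example with a parabola fit through the scores around the maximum. The paper's matching is 2-D over road masks; the sketch below shows the common 1-D refinement per axis as an assumed illustration, not the authors' exact procedure:

```python
# Illustrative subpixel peak refinement: fit a parabola through the
# matching score at the integer maximum and its two neighbors, and return
# the fractional position of the parabola's vertex.
def subpixel_peak(scores):
    """scores: matching scores at consecutive integer offsets."""
    i = max(range(len(scores)), key=scores.__getitem__)
    if i == 0 or i == len(scores) - 1:
        return float(i)  # peak at the border; no neighbors to fit
    a, b, c = scores[i - 1], scores[i], scores[i + 1]
    denom = a - 2 * b + c
    if denom == 0:
        return float(i)  # degenerate (flat) fit
    return i + 0.5 * (a - c) / denom
```

For a symmetric peak the refinement returns the integer position unchanged; an asymmetric peak shifts the estimate toward the higher-scoring neighbor, which is what pushes registration below one pixel.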

    Development of practical atmospheric correction algorithms for thermal infrared multispectral data over land

    University of Tokyo (東京大学)