D5.1 SHM digital twin requirements for residential, industrial buildings and bridges
This deliverable presents a report of the needs for structural control on buildings (initial imperfections, deflections at service, stability, rheology) and on bridges (vibrations, modal shapes, deflections, stresses) based on state-of-the-art image-based and sensor-based techniques. To this end, the deliverable identifies and describes strategies that encompass state-of-the-art instrumentation and control for infrastructures (SHM technologies). Sustainable Development Goals: 8 - Decent Work and Economic Growth; 9 - Industry, Innovation and Infrastructure. Preprint
A New Passive 3-D Automatic Target Recognition Architecture for Aerial Platforms
3-D automatic target recognition (ATR) has many advantages over its 2-D counterpart, but there are several constraints in the context of small low-cost unmanned aerial vehicles (UAVs). These limitations include the requirement for active rather than passive monitoring, high equipment costs, sensor packaging size, and processing burden. We therefore propose a new structure-from-motion (SfM) 3-D ATR architecture that exploits the UAV's onboard sensors, i.e., the visual-band camera, gyroscope, and accelerometer, and meets the requirements of a small UAV system. We tested the proposed 3-D SfM ATR using simulated UAV reconnaissance scenarios and found that it outperformed classic 3-D light detection and ranging (LIDAR) ATR, combining the advantages of 3-D LIDAR ATR and passive 2-D ATR. The main advantages of the proposed architecture are rapid processing, target pose invariance, small template size, passive scene sensing, and inexpensive equipment. We implemented the SfM module under two keypoint detection, description, and matching schemes, with the 3-D ATR module exploiting several current techniques. By comparing SfM 3-D ATR, 3-D LIDAR ATR, and 2-D ATR, we confirmed the superior performance of the new architecture.
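The template-matching step of a 3-D ATR pipeline can be illustrated with a toy sketch: score a reconstructed SfM point cloud against each stored 3-D target template and pick the best match. The nearest-neighbor scoring rule and all names below are illustrative assumptions, not the paper's implementation:

```python
import math

def nn_distance(p, cloud):
    """Distance from point p to its nearest neighbor in the given point cloud."""
    return min(math.dist(p, q) for q in cloud)

def match_score(reconstruction, template):
    """Mean nearest-neighbor distance from reconstruction to template; lower is better."""
    return sum(nn_distance(p, template) for p in reconstruction) / len(reconstruction)

def classify(reconstruction, templates):
    """Return the name of the template with the lowest match score."""
    return min(templates, key=lambda name: match_score(reconstruction, templates[name]))
```

A real system would first align the clouds (e.g. with an ICP-style registration) before scoring; this sketch assumes the reconstruction is already expressed in the template's frame.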
On Small Satellites for Oceanography: A Survey
The recent explosive growth of small satellite operations, driven primarily by academic and pedagogical needs, has demonstrated the viability of commercial-off-the-shelf technologies in space. These missions have also leveraged, and shown the need for, the development of compatible sensors aimed primarily at Earth observation tasks, including monitoring terrestrial domains, communications, and engineering tests. However, one domain in which these platforms have not yet made substantial inroads is the ocean sciences. Remote sensing has long been within the repertoire of tools oceanographers use to study dynamic large-scale physical phenomena, such as gyres and fronts, bio-geochemical process transport, primary productivity, and process studies in the coastal ocean. We argue that the time has come for micro- and nano-satellites (with mass below 100 kg and 2 to 3 year development times) designed, built, tested, and flown by academic departments for coordinated observations with robotic assets in situ. We do so primarily by surveying SmallSat missions oriented towards ocean observations in the recent past, and in doing so we update the current knowledge of what is feasible in the rapidly evolving field of platforms and sensors for this domain. We conclude by proposing a set of candidate ocean observing missions with an emphasis on radar-based observations, in particular Synthetic Aperture Radar.
Comment: 63 pages, 4 figures, 8 tables
Computational Imaging and Artificial Intelligence: The Next Revolution of Mobile Vision
Signal capture stands at the forefront of perceiving and understanding the environment, and thus imaging plays a pivotal role in mobile vision. Recent explosive progress in Artificial Intelligence (AI) has shown great potential for developing advanced mobile platforms with new imaging devices. Traditional imaging systems based on the "capture images first, process afterwards" mechanism cannot meet this unprecedented demand. By contrast, Computational Imaging (CI) systems are designed to capture high-dimensional data in an encoded manner to provide more information for mobile vision systems. Thanks to AI, CI can now be used in real systems by integrating deep learning algorithms into the mobile vision platform to achieve a closed loop of intelligent acquisition, processing, and decision making, thus leading to the next revolution of mobile vision. Starting from the history of mobile vision using digital cameras, this work first introduces the advances of CI in diverse applications and then conducts a comprehensive review of current research topics combining CI and AI. Motivated by the fact that most existing studies only loosely connect CI and AI (usually using AI to improve the performance of CI, with only limited works deeply connecting them), we propose a framework to deeply integrate CI and AI, using the example of self-driving vehicles with high-speed communication, edge computing, and traffic planning. Finally, we look ahead to the future of CI plus AI by investigating new materials, brain science, and new computing techniques, to shed light on new directions for mobile vision systems.
Automatic land cover classification with SAR imagery and Machine learning using Google Earth Engine
Land cover is the most critical information required for land management and planning, because human interference with the land can be detected directly through it. However, mapping land cover with optical remote sensing is difficult due to the acute shortage of cloud-free images. Google Earth Engine (GEE) is an efficient and effective tool for large-scale land cover analysis, providing access to large volumes of imagery, available within a few days of acquisition, in one consolidated system. This article demonstrates the use of Sentinel-1 datasets to create a land cover map of Pusad, Maharashtra using the GEE platform. Sentinel-1 provides Synthetic Aperture Radar (SAR) datasets that are temporally dense and of high spatial resolution, renowned for their cloud-penetration characteristics and year-round observations irrespective of the weather. VV- and VH-polarization Sentinel-1 time-series data were automatically classified using support vector machine (SVM) and random forest (RF) machine learning algorithms. Overall accuracies (OA) ranging from 82.3% to 90% were obtained, depending on the polarization and methodology used. The RF algorithm with the VV-polarization dataset outperformed SVM, achieving an OA of 90% and a Kappa coefficient of 0.86. The highest user accuracy was obtained for the water class with both classifiers.
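The reported overall accuracy and Kappa coefficient are both derived from the classification's confusion matrix; a minimal sketch of the two metrics (the example matrix in the test is hypothetical, not the study's data):

```python
def overall_accuracy(cm):
    """Fraction of correctly classified samples: sum of the diagonal over the total."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

def kappa(cm):
    """Cohen's Kappa: observed agreement corrected for chance agreement.

    cm[i][j] counts samples of reference class i assigned to class j.
    """
    n = len(cm)
    total = sum(sum(row) for row in cm)
    p_observed = sum(cm[i][i] for i in range(n)) / total
    # Chance agreement: product of matching row and column marginals.
    p_expected = sum(
        sum(cm[i]) * sum(cm[j][i] for j in range(n)) for i in range(n)
    ) / total ** 2
    return (p_observed - p_expected) / (1 - p_expected)
```

Kappa is lower than overall accuracy whenever chance agreement is non-negligible, which is why the study reports both (OA 0.90, Kappa 0.86).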
The future of Earth observation in hydrology
In just the past 5 years, the field of Earth observation has progressed beyond the offerings of conventional space-agency-based platforms to include a plethora of sensing opportunities afforded by CubeSats, unmanned aerial vehicles (UAVs), and smartphone technologies that are being embraced by both for-profit companies and individual researchers. Over the previous decades, space agency efforts have brought forth well-known and immensely useful satellites such as the Landsat series and the Gravity Recovery and Climate Experiment (GRACE) system, with costs typically of the order of 1 billion dollars per satellite and with concept-to-launch timelines of the order of 2 decades (for new missions). More recently, the proliferation of smartphones has helped to miniaturize sensors and energy requirements, facilitating advances in the use of CubeSats that can be launched by the dozens, while providing ultra-high (3-5 m) resolution sensing of the Earth on a daily basis. Start-up companies that did not exist a decade ago now operate more satellites in orbit than any space agency, and at costs that are a mere fraction of traditional satellite missions. With these advances come new space-borne measurements, such as real-time high-definition video for tracking air pollution, storm-cell development, flood propagation, and precipitation, or even for constructing digital surfaces using structure-from-motion techniques. Closer to the surface, measurements from small unmanned drones and tethered balloons have mapped snow depths and floods and estimated evaporation at sub-metre resolutions, pushing back spatio-temporal constraints and delivering new process insights.
At ground level, precipitation has been measured using signal attenuation between antennas mounted on cell phone towers, while the proliferation of mobile devices has enabled citizen scientists to catalogue photos of environmental conditions, estimate daily average temperatures from battery state, and sense other hydrologically important variables, such as channel depths, using commercially available wireless devices. Global internet access is being pursued via high-altitude balloons, solar planes, and hundreds of planned satellite launches, providing a means to exploit the "internet of things" as an entirely new measurement domain. Such global access will enable real-time collection of data from billions of smartphones and from remote research platforms. This future will produce petabytes of data that can only be accessed via cloud storage and will require new analytical approaches to interpret. The extent to which today's hydrologic models can usefully ingest such massive data volumes is unclear. Nor is it clear whether this deluge of data will be usefully exploited, whether because the measurements are superfluous, inconsistent, or not accurate enough, or simply because we lack the capacity to process and analyse them. What is apparent is that the tools and techniques afforded by this array of novel and game-changing sensing platforms present our community with a unique opportunity to develop new insights that advance fundamental aspects of the hydrological sciences. Accomplishing this will require more than just an application of the technology: in some cases, it will demand a radical rethink of how we utilize and exploit these new observing systems.
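The cell-tower technique mentioned above rests on the power-law relation between specific rain attenuation and rain rate, k = aR^b (standardized in ITU-R P.838); inverting it yields a path-averaged rain rate from a measured link attenuation. The coefficient values below are illustrative placeholders only, since a and b vary with link frequency and polarization:

```python
def rain_rate_from_attenuation(attenuation_db, link_km, a=0.12, b=1.05):
    """Estimate path-averaged rain rate R (mm/h) from total link attenuation.

    Inverts A = a * R**b * L, where A is the rain-induced attenuation in dB
    over a link of length L km, and a, b are frequency- and polarization-
    dependent power-law coefficients (ITU-R P.838); defaults are illustrative.
    """
    specific = attenuation_db / link_km   # specific attenuation k, in dB/km
    return (specific / a) ** (1.0 / b)
```

In practice the rain-induced attenuation must first be separated from baseline losses and wet-antenna effects; this sketch assumes that correction has already been applied.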
Application of advanced technology to space automation
Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for the future. The results of this study strongly support this statement and should provide further incentive for the immediate development of the specific automation technology defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining those benefits.
CGMP: cloud-assisted green multimedia processing
With continued advances in mobile computing and communications, novel multimedia services and applications, such as mobile social networks, mobile cloud medical treatment, and mobile cloud gaming, have attracted considerable attention and been developed for mobile users. However, because of the limited resources of mobile terminals, improving the energy efficiency of multimedia services is a great challenge. In this paper, we propose a cloud-assisted green multimedia processing architecture (CGMP) based on mobile cloud computing. Specifically, energy-intensive multimedia processing tasks can be offloaded to the cloud, and a face recognition algorithm with improved principal component analysis and a nearest-neighbor classifier is realized on the CGMP-based cloud platform. Experimental results show that the proposed scheme can effectively reduce the energy consumption of mobile devices.
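The offloading decision behind architectures like CGMP is often framed as a first-order energy comparison: offload when transmitting the input data costs the device less radio energy than computing locally would cost in CPU energy. The model and every parameter value below are illustrative assumptions, not CGMP's actual policy:

```python
def local_energy_j(cycles, cpu_power_w, cpu_hz):
    """Energy (J) to run the task on the device: power * execution time."""
    return cpu_power_w * cycles / cpu_hz

def offload_energy_j(upload_bits, radio_power_w, uplink_bps):
    """Energy (J) to ship the task input to the cloud: power * transmit time."""
    return radio_power_w * upload_bits / uplink_bps

def should_offload(cycles, upload_bits, cpu_power_w=2.0, cpu_hz=1.5e9,
                   radio_power_w=1.0, uplink_bps=5e6):
    """True when offloading costs the device less energy than computing locally."""
    return (offload_energy_j(upload_bits, radio_power_w, uplink_bps)
            < local_energy_j(cycles, cpu_power_w, cpu_hz))
```

The model captures the intuition in the abstract: compute-heavy tasks with compact inputs (such as recognition on a single face image) favor the cloud, while data-heavy tasks with little computation favor local processing.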