Deep Learning Methods for Remote Sensing
Remote sensing is a field in which important physical characteristics of an area are extracted from emitted or reflected radiation, generally captured by satellite cameras, sensors on board aerial vehicles, and similar platforms. The captured data help researchers develop solutions to sense and detect various phenomena such as forest fires, flooding, changes in urban areas, crop diseases, and soil moisture. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches and has led to results that were until recently unachievable in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.
A neural network based model for mass non-residential real estate price evaluation of Lisbon, Portugal
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Information Analysis and Management. Accurate estimation of real estate value has become very important for making correct purchase and sale transactions and for calculating taxes and mortgages for loans. Mass appraisal systems that use modern methodology based on artificial intelligence help significantly in dealing with these issues. The objectives of this article are: using artificial neural networks (ANNs), to build a mass appraisal model to evaluate the market price of non-residential real estate in Lisbon, Portugal; to evaluate the performance of ANNs and compare it with results generated by other models based on different methodologies; and to demonstrate the superiority of ANNs in issues connected with real estate appraisal.
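The kind of mass-appraisal model the dissertation describes can be illustrated with a minimal sketch: a single-hidden-layer neural network regressor trained by plain gradient descent. The data, features, target function, and network size below are all invented for illustration; they are not the dissertation's model or the Lisbon dataset.

```python
import math
import random

random.seed(0)

# Invented toy data: normalized (area, location_score) -> normalized price.
data = [((a / 10, s / 10), 0.6 * a / 10 + 0.3 * s / 10 + 0.1 * (a / 10) * (s / 10))
        for a in range(11) for s in range(11)]

H = 8                                        # hidden units (arbitrary choice)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.05

def forward(x):
    h = [math.tanh(ws[0] * x[0] + ws[1] * x[1] + b) for ws, b in zip(w1, b1)]
    return h, sum(w * hi for w, hi in zip(w2, h)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
for _ in range(300):                          # per-sample gradient descent
    for x, t in data:
        h, y = forward(x)
        err = y - t
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j][0] -= lr * grad_h * x[0]
            w1[j][1] -= lr * grad_h * x[1]
            b1[j] -= lr * grad_h
        b2 -= lr * err
loss_after = mse()
```

In a real mass-appraisal setting the benchmark comparison the abstract mentions would pit such a network against, for example, hedonic regression on the same held-out data.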
Ubiquitous and context-aware computing modelling: study of device integration in their environment
Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. In an almost imperceptible way, ubiquitous and context-aware computing are part of our everyday lives, as the world has developed in an interconnected way between humans and technological devices. This interconnectedness raises the need to integrate humans' interaction with the different devices they use in different social contexts and environments. The proposed research suggests the development of new scenario building based on a current ubiquitous computing model dedicated to environment context-awareness. We will also follow previous research on the formal-structure computation model based on social paradigm theory, dedicated to embedding devices into different context environments with social roles, developed by Santos (2012/2015). Furthermore, several socially relevant context scenarios are to be identified and studied. Once they are identified, we gather and document the requirements that devices should have, according to the model, in order to achieve a correct integration in their contextual environment.
March madness prediction using machine learning techniques
Project Work presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. March Madness is the final tournament of the college basketball championship, considered by many the biggest sporting event in the United States, moving vast sums of money every year in both betting and television. Beyond that, some 60 million Americans fill out a tournament bracket every year, and almost anything is more likely than correctly picking every game in the 68-team bracket.
After collecting and transforming data from Sports-Reference.com, the experimental part consists of preprocessing the data, evaluating the features to include in the models, and training them. In this study, based on tournament data from the last 20 years, machine learning algorithms such as the Decision Tree Classifier, K-Nearest Neighbors Classifier, Stochastic Gradient Descent Classifier, and others were applied, and the accuracy of their predictions was measured and compared against several benchmarks.
Although the most important variables seemed to be those related to seeds, shooting, and the number of participations in the tournament, it was not possible to determine exactly which ones should be used in the modelling, so all of them ended up being used.
Regarding the results, when training on the entire dataset, accuracy ranges from 65 to 70%, with Support Vector Classification yielding the best results. Compared with simply picking the highest seed, these results are slightly lower. On the other hand, when predicting the 2017 tournament, Support Vector Classification and the Multi-Layer Perceptron Classifier reach 85% and 79% accuracy, respectively. In this sense, they surpass the previous benchmark as well as the most respected websites and statistics in the field.
Given some existing constraints, it is quite possible that these results could be improved and deepened in other ways. Meanwhile, this project can be referenced and serve as a basis for future work.
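The benchmark comparison described above (trained classifier versus "pick the better seed") can be sketched in miniature. Everything here is invented for illustration: the games are synthetic, the upset-probability model is assumed, and a one-feature logistic regression stands in for the classifiers the project actually compared.

```python
import math
import random

random.seed(1)

# Assumed upset model: the probability that team A beats team B grows with
# the seed difference (a numerically lower seed means a stronger team).
def simulate_win(seed_a, seed_b):
    p_a = 1 / (1 + math.exp(-0.18 * (seed_b - seed_a)))
    return 1 if random.random() < p_a else 0

games = []
for _ in range(400):
    a, b = random.randint(1, 16), random.randint(1, 16)
    games.append((a, b, simulate_win(a, b)))

# Benchmark from the text: always pick the better (numerically lower) seed.
def pick_better_seed(a, b):
    return 1 if a < b else 0

# Logistic regression on the single feature "seed difference",
# fit by per-game gradient descent.
w, bias = 0.0, 0.0
for _ in range(300):
    for a, b, y in games:
        x = b - a
        p = 1 / (1 + math.exp(-(w * x + bias)))
        w -= 0.01 * (p - y) * x
        bias -= 0.01 * (p - y)

def accuracy(predict):
    return sum(predict(a, b) == y for a, b, y in games) / len(games)

acc_seed = accuracy(pick_better_seed)
acc_model = accuracy(lambda a, b: 1 if w * (b - a) + bias > 0 else 0)
```

As the abstract notes for the full dataset, a model trained only on seed-like information tends to land close to the seed benchmark; the project's gains came from richer features such as shooting statistics.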
Towards Wireless Virtualization for 5G Cellular Systems
Although it has been defined as one of the most promising key enabling technologies for the forthcoming fifth-generation cellular networks, wireless virtualization still has several challenges remaining to be addressed. Among these, resource allocation, which decides how to embed the different wireless virtual networks onto the underlying physical infrastructure, is the one receiving the most attention. This project aims to find the optimal resource allocation for each virtual network, in terms of channel resources, power levels, and radio access technologies, so that the data rate requested by each virtual network can be guaranteed and the global throughput efficiency can be maximized.
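The embedding problem described above can be illustrated with a toy greedy heuristic, not the optimization the project develops: channels are handed out one by one, favoring the most under-served virtual network. The networks, demands, and per-channel rates below are all invented numbers.

```python
# Hypothetical inputs: per-virtual-network rate demands (Mbps) and the
# rate (Mbps) each network could achieve on each physical channel.
demands = {"vnet1": 30, "vnet2": 20, "vnet3": 25}
channel_rate = {
    "ch1": {"vnet1": 18, "vnet2": 10, "vnet3": 12},
    "ch2": {"vnet1": 15, "vnet2": 14, "vnet3": 9},
    "ch3": {"vnet1": 9,  "vnet2": 11, "vnet3": 16},
    "ch4": {"vnet1": 12, "vnet2": 8,  "vnet3": 13},
    "ch5": {"vnet1": 10, "vnet2": 9,  "vnet3": 14},
}

allocation = {}
served = {v: 0.0 for v in demands}

# Greedy pass: best channels first; each goes to a network that still has
# unmet demand (preferring the one that gets the highest rate from it).
for ch, rates in sorted(channel_rate.items(),
                        key=lambda kv: -max(kv[1].values())):
    deficit = {v: demands[v] - served[v] for v in demands}
    target = max(demands, key=lambda v: (deficit[v] > 0, rates[v]))
    allocation[ch] = target
    served[target] += rates[target]

throughput = sum(served.values())
```

Unlike the project's formulation, this heuristic does not guarantee the requested rates; an exact approach would pose the embedding as an integer program over channels, power levels, and radio access technologies.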
The Most Prominent Ridge Line Method for Skeleton Measurement of Race Walking Athletes
Increasingly sophisticated technological development has had a positive impact on many areas but no tangible impact on quality improvement in sport. The performance of athletes, especially race walking athletes, can be improved through their biomechanics. Biomechanics is a branch of sport science that studies the mechanics of biological movement systems. One method for studying human biomechanics is human gait analysis. Gait is the individual movement pattern produced when a person walks. The skeletonization process in this study produces skeleton images of the race walking athletes. Based on the results obtained, some skeleton images are well formed, but there are also skeletons that do not depict the athlete's shape perfectly. This is caused by many factors, such as the choice of skeletoning algorithm or the silhouette-formation process. Skeletoning is performed starting from the capture process, background subtraction, filtering, dilation and erosion, and skeletonization using the most prominent ridge line algorithm. Keywords: gait cycle, gait analysis, skeleton, skeletoning, race walking athletes
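The final step of the pipeline above (silhouette to skeleton) can be sketched with the classic Zhang-Suen thinning algorithm, used here only as a stand-in for the paper's most prominent ridge line method, whose details are not reproduced. The input is a synthetic binary silhouette rather than a captured athlete frame.

```python
def thin(image):
    """Zhang-Suen thinning of a binary image given as lists of 0/1 rows."""
    img = [row[:] for row in image]
    h, w = len(img), len(img[0])

    def neighbours(y, x):  # P2..P9, clockwise from the north neighbour
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):               # the two Zhang-Suen sub-iterations
            to_zero = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    n = neighbours(y, x)
                    transitions = sum(n[i] == 0 and n[(i + 1) % 8] == 1
                                      for i in range(8))
                    if (2 <= sum(n) <= 6 and transitions == 1 and
                            n[0] * n[2] * n[4 if step == 0 else 6] == 0 and
                            n[2 if step == 0 else 0] * n[4] * n[6] == 0):
                        to_zero.append((y, x))
            for y, x in to_zero:
                img[y][x] = 0
                changed = True
    return img

# Synthetic silhouette: a filled rectangle with a one-pixel background border.
H, W = 12, 22
silhouette = [[1 if 1 <= y <= 10 and 1 <= x <= 20 else 0 for x in range(W)]
              for y in range(H)]
skeleton = thin(silhouette)
```

In the paper's pipeline this step would run after background subtraction, filtering, and dilation/erosion have produced a clean binary silhouette of the athlete.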
Optical Coherence Tomography guided Laser-Cochleostomy
Despite the high precision of lasers, it remains challenging to control laser-bone ablation without injuring the underlying critical structures. Providing axial resolution on the micrometre scale, optical coherence tomography (OCT) is a promising candidate for imaging microstructures beneath the bone surface and monitoring the ablation process. In this work, a bridge connecting these two technologies is established: closed-loop control of laser-bone ablation under OCT monitoring has been successfully realised.
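The closed-loop idea can be caricatured in a few lines: measure the remaining bone thickness, stop the laser before a safety margin is crossed. All numbers (thickness, per-pulse ablation depth, measurement noise) are invented for this toy simulation and bear no relation to the work's actual parameters.

```python
import random

random.seed(3)

bone_thickness_um = 500.0        # remaining bone above the critical structure
safety_margin_um = 50.0          # controller stops before this much remains
pulses = 0

while True:
    # Simulated OCT depth reading with Gaussian measurement noise.
    measured = bone_thickness_um + random.gauss(0, 5)
    if measured <= safety_margin_um:
        break                                        # controller halts laser
    bone_thickness_um -= random.gauss(20, 3)         # one ablation pulse
    pulses += 1
```

The real system closes this loop with volumetric OCT imaging rather than a single scalar depth, but the control structure (sense, compare against margin, gate the laser) is the same.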
Hyperparameters optimization on neural networks for bond trading
Project Work presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Risk Analysis and Management. Artificial neural networks have recently been spotlighted as de facto tools for classification. Their ability to deal with complex decision boundaries makes them potentially suitable for trading within financial markets, namely on bonds. Such a classifier offers high flexibility in its parameters alongside great modularity of its techniques, raising the need to optimize its hyperparameters efficiently. To determine the most efficient search method for optimizing most of the neural network's hyperparameters, we compared the results obtained by manual, evolutionary (genetic algorithm), and random search methods. The search methods compete on several metrics from which we aim to estimate the generalization capability, i.e. the capacity to predict correctly on unseen data. We found that the manual method presents better generalization results than the remaining automatic methods. Also, no benefit was found in the direction provided by the genetic search method when compared to purely random search. Such results demonstrate the importance of human oversight during the hyperparameter optimization and weight-training phases, capable of analyzing multiple metrics and data visualizations in parallel, a process critical to avoiding suboptimal solutions when navigating complex hyperparameter spaces.
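Of the three methods compared, random search is the simplest to sketch. In a real run, each trial would train a network and report validation loss; here an assumed smooth loss surface over two hypothetical hyperparameters (log learning rate and hidden-layer width) keeps the sketch self-contained.

```python
import random

random.seed(42)

# Assumed stand-in for "train a network, return validation loss":
# a bowl with its minimum near log_lr = -2.5, width = 64.
def val_loss(log_lr, width):
    return (log_lr + 2.5) ** 2 + 0.001 * (width - 64) ** 2 + 0.05

best_loss, best_cfg = float("inf"), None
for _ in range(200):                     # 200 independent random trials
    log_lr = random.uniform(-5.0, 0.0)
    width = random.randint(8, 256)
    loss = val_loss(log_lr, width)
    if loss < best_loss:
        best_loss, best_cfg = loss, (log_lr, width)
```

A genetic search would add selection and crossover on top of this sampling loop; the thesis's finding is that, on its metrics, that added direction did not beat the purely random trials above.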
Fish4Knowledge: Collecting and Analyzing Massive Coral Reef Fish Video Data
This book gives a start-to-finish overview of the whole Fish4Knowledge project in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90 thousand hours of video from ten camera locations, the project gives a three-year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system built a remote recording network, over 100 TB of storage, supercomputer processing, video target detection and …