207 research outputs found
Generación de resúmenes de videos basada en consultas utilizando aprendizaje de máquina y representaciones coordinadas
Video constitutes the primary substrate of humanity's information: consider the video data uploaded daily to platforms such as YouTube, roughly 300 hours of video per minute. Video analysis is therefore one of the most active areas in computer science and industry, spanning fields such as video classification, video retrieval and video summarization (VSUMM).
VSUMM is an active research field because it helps human users simplify the information processing required to watch and analyze sets of videos, for example by reducing the hours of recorded footage that security personnel must review. Moreover, many video analysis tasks and systems need to reduce their computational load through segmentation schemes, compression algorithms, and video summarization techniques.
Many approaches have been studied to solve VSUMM. However, it is not a problem with a single solution, owing to its subjective and interpretative nature: deciding which parts of the input video to preserve requires a subjective estimation of an importance score. This score can reflect how interesting certain video segments are, how well they represent the complete video, and how the segments relate to the task a human user is performing in a given situation. For example, producing a movie trailer is, in part, a VSUMM task, but one concerned with preserving promising and interesting parts of the movie rather than with allowing the movie's content to be reconstructed from them; that is, movie trailers contain interesting scenes but not representative ones. By contrast, in a surveillance situation, a summary of the closed-circuit camera footage needs to be both representative and interesting, and in some situations related to particular objects of interest, for example when a person or a car must be found.
As written natural language is the main human-machine communication interface, recent work has made progress toward including textual queries in the VSUMM process. Such queries guide the summarization process, in the sense that video segments related to the query are considered important.
In this thesis, we present a computational framework for summarizing an input video that allows the user to provide free-form sentences and keyword queries to guide the process according to user or task intention, while also considering general objectives such as representativeness and interestingness. Our framework relies on pre-trained deep visual and linguistic models, although we trained our own visual-linguistic coordination model. We expect this framework to be of interest in cases where VSUMM tasks require a high degree of specification of user/task intentions with minimal training stages and rapid deployment.
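The query-guided scoring idea described in this abstract can be sketched with a toy example. This is a minimal illustration, not the thesis's actual model: it assumes that segment and query embeddings already live in a shared (coordinated) space, and it mixes query relevance with a simple representativeness term; all names and weights are illustrative assumptions.

```python
import numpy as np

def summarize(segment_embs, query_emb, k=3, alpha=0.7):
    """Rank video segments by a weighted mix of query relevance (cosine
    similarity to the query embedding) and representativeness (mean
    similarity to all segments); return the indices of the top-k segments."""
    S = segment_embs / np.linalg.norm(segment_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    relevance = S @ q                            # query relevance per segment
    representativeness = (S @ S.T).mean(axis=1)  # closeness to the whole video
    score = alpha * relevance + (1 - alpha) * representativeness
    return np.argsort(score)[::-1][:k]

# Toy example: four orthogonal "segment" embeddings; query matches segment 2.
segs = np.eye(4)
query = segs[2].copy()
top = summarize(segs, query, k=2)
```

The weight `alpha` trades off task intention (relevance to the query) against the general objective of representativeness; here, the segment aligned with the query is ranked first.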
An implementation of novel genetic based clustering algorithm for color image segmentation
Color image segmentation is one of the most important applications in image processing. It can be applied to medical image segmentation for brain tumor and skin cancer detection, to color object detection in CCTV traffic video, and to face recognition, fingerprint recognition and similar tasks. Color image segmentation faces the problem of multidimensionality: a color image can be considered a five-dimensional problem, with three dimensions in color (RGB) and two in geometry (a luminosity layer and a chromaticity layer). In this paper, conversion to the L*a*b color space is used to remove one dimension, and the image is geometrically flattened into an array, reducing a further dimension. The a*b space is then clustered using a genetic algorithm that minimizes the overall within-cluster distance, starting from clusters placed randomly at the beginning of the segmentation process. The segmentation results of this method yield clear segments based on the different colors, and the method can be applied to any application.
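The clustering step described above can be sketched as a toy genetic search over cluster centers in the a*b plane. This is a hedged illustration of the general idea, not the paper's implementation; the population size, mutation scale and fitness function are assumptions, and synthetic blobs stand in for real chromaticity values.

```python
import numpy as np

def fitness(centers, points):
    """Negative total distance from each a*b point to its nearest center."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return -d.min(axis=1).sum()

def ga_cluster(points, k=2, pop=30, gens=150, seed=0):
    """Toy genetic search over sets of k cluster centers in a*b space:
    candidates start as randomly chosen data points, the fitter half
    survives each generation, and survivors spawn Gaussian-mutated children."""
    rng = np.random.default_rng(seed)
    population = points[rng.integers(0, len(points), size=(pop, k))]
    for _ in range(gens):
        scores = np.array([fitness(c, points) for c in population])
        elite = population[np.argsort(scores)[::-1][: pop // 2]]
        children = elite + rng.normal(scale=1.0, size=elite.shape)
        population = np.concatenate([elite, children])
    scores = np.array([fitness(c, points) for c in population])
    return population[scores.argmax()]

# Two well-separated chromaticity blobs standing in for a*b pixel values.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([-5, -5], 1.0, size=(50, 2)),
                 rng.normal([10, 5], 1.0, size=(50, 2))])
centers = ga_cluster(pts, k=2)
```

Because the fittest candidates always survive, the best solution never degrades; the mutation step lets centers drift toward the blob centroids, which is the "minimizing overall distance" behavior the abstract describes.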
Large-scale image collection cleansing, summarization and exploration
A perennially interesting topic in the research field of large-scale image collection organization is how to conduct the tasks of image cleansing, summarization and exploration effectively and efficiently. The primary objective of such an image organization system is to enhance the user exploration experience through redundancy removal and summarization operations on a large-scale image collection. An ideal system discovers and utilizes the visual correlation among the images, reduces the redundancy in the collection, organizes and visualizes its structure, and facilitates exploration and knowledge discovery.
In this dissertation, a novel system is developed for exploiting and navigating large-scale image collection. Our system consists of the following key components: (a) junk image filtering by incorporating bilingual search results; (b) near duplicate image detection by using a coarse-to-fine framework; (c) concept network generation and visualization; (d) image collection summarization via dictionary learning for sparse representation; and (e) a multimedia practice of graffiti image retrieval and exploration.
For junk image filtering, bilingual image search results, obtained for the same keyword-based query, are integrated to automatically identify the clusters of junk images and the clusters of relevant images. Within the relevant image clusters, the results are further refined by removing duplicates under a coarse-to-fine structure. Duplicate pairs are detected with both a global feature (a partition-based color histogram) and local features (CPAM and a SIFT bag-of-words model); the duplicates are then removed from the collection to facilitate further exploration and visual correlation analysis. After junk image filtering and duplicate removal, the visual concepts are organized and visualized by the proposed concept network. An automatic algorithm is developed to generate this visual concept network, which characterizes the visual correlation between pairs of image concepts. Multiple kernels are combined, and a kernel canonical correlation analysis algorithm is used to characterize the diverse visual similarity contexts between the image concepts. The FishEye visualization technique is implemented to facilitate navigation of image concepts through our image concept network. To better assist the exploration of large-scale data collections, we design an efficient summarization algorithm to extract representative exemplars. For this collection summarization task, a sparse dictionary (a small set of the most representative images) is learned to represent all the images in the given set; this sparse dictionary is treated as the summary of the image set. A simulated annealing algorithm is adopted to learn the sparse dictionary (image summary) by minimizing an explicit optimization function.
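The summarization step above, learning a small exemplar set by simulated annealing, can be sketched as follows. This is a simplified stand-in for the dissertation's dictionary-learning formulation: it selects k exemplar images minimizing the total distance of every image to its nearest exemplar, with an assumed geometric cooling schedule and toy 2-d "features".

```python
import math
import random
import numpy as np

def coverage_cost(X, idx):
    """Sum of each image's distance to its nearest selected exemplar."""
    d = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2)
    return d.min(axis=1).sum()

def anneal_summary(X, k=2, steps=2000, t0=5.0, seed=0):
    """Toy simulated annealing over k-image summaries: propose swapping one
    exemplar for a random image, accept uphill moves with probability
    exp(-delta / T) under geometric cooling."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(X)), k)
    cost = coverage_cost(X, idx)
    t = t0
    for _ in range(steps):
        cand = list(idx)
        cand[rng.randrange(k)] = rng.randrange(len(X))
        c = coverage_cost(X, cand)
        if c < cost or rng.random() < math.exp(-(c - cost) / t):
            idx, cost = cand, c
        t *= 0.995  # cool down: late proposals are accepted almost greedily
    return idx, cost

# Toy "image features": two clusters of 30 images each.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0, 0], 0.5, size=(30, 2)),
               rng.normal([10, 10], 0.5, size=(30, 2))])
summary, cost = anneal_summary(X)
```

With two visual clusters and k=2, the annealer settles on one exemplar per cluster, mirroring how a sparse dictionary summarizes a redundant collection.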
To handle large-scale image collections, we have evaluated both the accuracy of the proposed algorithms and their computational efficiency. For each of the above tasks, we have conducted experiments on multiple publicly available image collections, such as ImageNet, NUS-WIDE and LabelMe. We have observed very promising results compared to existing frameworks, and the computational performance is also satisfactory for large-scale image collection applications. The original intention in designing such a large-scale image collection exploration and organization system is to better serve information retrieval and knowledge discovery. To this end, we applied the proposed system to a graffiti retrieval and exploration application and received positive feedback.
Design of a Visual Analytics Tool for Public Bicycle Usage Pattern Analysis
Thesis (Ph.D.) -- Graduate School of Seoul National University: Department of Electrical and Computer Engineering, College of Engineering, August 2021. Seongjun Kim. With the development of sensors, various transportation-related data, such as the activities and movements of citizens, are being accumulated. Accordingly, urban planning researchers have made many attempts to obtain meaningful insights through data-driven analysis. To study domain problems, we collaborated closely with urban planning researchers. Their main concern was to identify the route choice behaviors of public bicycle riders, which is called route choice modeling (RCM). In the course of our collaboration, we identified two limitations in their RCM analysis process. First, there was no visual interface that could effectively support the whole RCM process: data exploration and modeling steps were not systematically interlocked and were quite fragmented, which impeded the researchers' cognitive flow. Second, there was no means of understanding the various origin-destination (OD) movement behaviors of different public bicycle riders. For this reason, domain researchers could not take bicycle riders' characteristics into account in conducting their studies.
In this dissertation, we present two analysis approaches to address the issues mentioned above. In the first study, we present RCMVis, a visual analytics system to support interactive RCM analysis. The system supports three interactive analysis stages: exploration, modeling, and reasoning. In the exploration stage, we help analysts interactively explore trip data from multiple OD pairs and choose a subset of the data they want to focus on. In the modeling stage, we integrate a k-medoids clustering method and a path-size logit model into our system to enable analysts to model route choice behaviors from trips, with support for feature selection, hyperparameter tuning, and model comparison. Finally, in the reasoning stage, we help analysts rationalize and refine the model by selectively inspecting the trips that strongly support the modeling result. The domain experts discovered unexpected insights from numerous modeling results, allowing them to explore the hyperparameter space more effectively and obtain better results.
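The path-size logit (PSL) model mentioned above can be illustrated with its standard formulation, in which each route's utility is corrected by a path-size factor that discounts routes overlapping with alternatives. A minimal sketch, assuming the common definition PS_i = Σ_{a∈Γ_i} (l_a / L_i)(1 / N_a) and P(i) ∝ exp(V_i + β_PS · ln PS_i); the link names and zero utilities below are toy values, not the system's data.

```python
import numpy as np

def path_size(routes, lengths):
    """Path-size factor PS_i = sum over links a in route i of
    (l_a / L_i) * (1 / N_a), where N_a counts routes using link a."""
    counts = {}
    for r in routes:
        for a in r:
            counts[a] = counts.get(a, 0) + 1
    ps = []
    for r in routes:
        L = sum(lengths[a] for a in r)
        ps.append(sum(lengths[a] / L / counts[a] for a in r))
    return np.array(ps)

def psl_probabilities(utilities, ps, beta_ps=1.0):
    """Path-size logit choice probabilities: P(i) ∝ exp(V_i + beta_ps*ln PS_i)."""
    v = utilities + beta_ps * np.log(ps)
    e = np.exp(v - v.max())  # subtract max for numerical stability
    return e / e.sum()

# Three routes over named links; routes 0 and 1 overlap on link "a".
lengths = {"a": 2.0, "b": 1.0, "c": 1.0, "d": 3.0}
routes = [["a", "b"], ["a", "c"], ["d"]]
ps = path_size(routes, lengths)
p = psl_probabilities(np.zeros(3), ps)
```

With equal utilities, the two overlapping routes each get probability 2/7 while the independent route gets 3/7: the path-size correction penalizes shared links, which plain multinomial logit would ignore.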
In the second study, we suggest a method to discover the various OD movement behaviors of different bicycle riders by exploring a latent feature space. To extract the riders' latent features, we train a Sequence-to-Sequence (Seq2Seq) model on the riders' trip records. After extracting the latent features, we represent them in two-dimensional space using a dimensionality-reduction technique. As a result, we found various OD movement behaviors by exploring their spatio-temporal characteristics using our carefully designed visualizations and interactions. In addition, we identified how the OD movement behaviors can affect the riders' route choice behaviors. We believe that the two suggested analysis approaches will help solve many problems in the urban planning domain.
CHAPTER 1. Introduction
1.1 Background and Motivation
1.2 Thesis Statement and Research Questions
1.2.1 Designing RCMVis: A Visual Analytics System for Route Choice Modeling
1.2.2 Discovering OD Movement Behaviors of Different Bicycle Riders Using Latent Feature Exploration
1.3 Dissertation Outline
CHAPTER 2. Related Work
2.1 Route Choice Modeling
2.2 Analysis of Movement Behaviors
2.3 Visual Analytics of Public Bicycle Sharing System
2.4 OD Visualization
2.5 Trajectory Visual Analytics
CHAPTER 3. RCMVis: A Visual Analytics System for Route Choice Modeling
3.1 Background
3.1.1 Domain Situation Analysis
3.1.2 Data Preprocessing and Abstraction
3.1.3 Task Analysis and Abstraction
3.2 Route Choice Model
3.2.1 Choice Set Generation
3.2.2 Model Estimation
3.2.3 Goodness of Fit
3.2.4 Estimation Contribution Score
3.3 The RCMVis Design
3.3.1 Exploration Stage
3.3.2 Modeling Stage
3.3.3 Reasoning Stage
3.4 System Implementation
3.5 Evaluation
3.5.1 Case Study
3.5.2 Domain Expert Interview
3.6 Discussion
3.7 Summary
CHAPTER 4. Discovering OD Movement Behaviors of Different Bicycle Riders Using Latent Feature Exploration
4.1 Learning Latent Feature Representations
4.1.1 Data Description
4.1.2 Feature Engineering
4.1.3 Model Selection and Implementation
4.2 Visualization
4.2.1 Rider View
4.2.2 OD Filter View
4.2.3 Temporal Matrix
4.2.4 Spatial Map
4.2.5 Station View
4.3 Implementation
4.4 Results
4.4.1 Major Patterns
4.4.2 Minor Patterns
4.4.3 Outliers
4.4.4 Route Choice Modeling
4.5 Discussion
4.6 Summary
CHAPTER 5. Conclusion
APPENDIX A. Data Preprocessing in RCMVis
A.1 Introduction
A.2 Road Network
A.3 Route Attribute
A.3.1 Route Distance
A.3.2 Number of Intersections
A.3.3 Number of Traffic Lights
A.3.4 Road Type Ratios
A.3.5 Bike Lane Ratio
A.3.6 Slopes
A.3.7 Path Size
Data-driven remote fault detection and diagnosis of HVAC terminal units using machine learning techniques
The modernising and retrofitting of older buildings has created a drive to install building management systems (BMS), which help building managers pave the way towards smarter energy use, improved maintenance and increased occupant comfort inside a building. A BMS is a computerised control system that controls and monitors a building's equipment and services, such as lighting, ventilation, power systems, and fire and security systems. Buildings are becoming more and more complex environments, and buildings now account for roughly 40% of global energy consumption. Still, there is no generalised solution or standardisation method available for maintaining and managing a building's energy consumption. This research therefore aims to find an intelligent solution for the building's electrical and mechanical units that consume the most power. Indeed, the remote control and monitoring of Heating, Ventilation and Air-Conditioning (HVAC) units, based on the information received through thousands of sensors and actuators, is a crucial task in a BMS, and automatically identifying faulty units is essential to optimise running costs and energy usage. Therefore, a comprehensive analysis of HVAC data and the development of computationally intelligent methods for automatic fault detection and diagnosis are presented here, covering the period from July 2015 to October 2015 in a real commercial building in London. This study mainly investigated one of the HVAC sub-units, namely the fan-coil unit's terminal unit (TU). It comprises three stages: data collection, pre-processing, and machine learning. For TU behaviour identification, unsupervised, supervised and semi-supervised learning algorithms, as well as their combinations, were employed to provide an automatic, intelligent solution for building services.
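As a hedged illustration of what the machine-learning stage might produce for TU data (this is not the thesis's actual pipeline), one simple rule flags a terminal unit whose measured room temperature persistently deviates from its setpoint; the tolerance and persistence parameters below are assumptions for the sketch.

```python
import numpy as np

def flag_faulty_tus(setpoint, measured, tol=2.0, min_frac=0.5):
    """Flag a terminal unit (TU) as faulty when its measured temperature
    deviates from the setpoint by more than `tol` degrees for at least
    `min_frac` of the observed samples (rows = TUs, cols = time samples)."""
    deviation = np.abs(measured - setpoint[:, None])
    frac_out = (deviation > tol).mean(axis=1)
    return frac_out > min_frac

# Toy data: 3 TUs, 8 hourly samples; TU 1 never reaches its setpoint.
setpoint = np.array([21.0, 21.0, 22.0])
measured = np.array([
    [21.2, 20.8, 21.1, 21.0, 20.9, 21.3, 21.1, 20.7],  # tracks setpoint
    [25.6, 25.9, 26.1, 25.8, 26.0, 25.7, 26.2, 25.9],  # stuck hot
    [22.1, 21.8, 22.3, 22.0, 21.9, 22.2, 22.1, 21.8],  # tracks setpoint
])
faulty = flag_faulty_tus(setpoint, measured)
```

A rule like this could serve as a labelling baseline against which clustered (unsupervised) or trained (supervised) TU behaviour models are compared.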
The accuracy of the employed algorithms has been measured in both the training and testing phases, the results have been compared with suitable alternative algorithms, and the findings have been validated through statistical measures. This research provides an intelligent solution for real-time prediction through the development of an effective automatic fault detection and diagnosis system, creating a smarter way to handle BMS data for energy optimisation.
Application of Common Sense Computing for the Development of a Novel Knowledge-Based Opinion Mining Engine
The ways people express their opinions and sentiments have radically changed in the past few years thanks to the advent of social networks, web communities, blogs, wikis and other online collaborative media. The distillation of knowledge from this huge amount of unstructured information can be a key factor for marketers who want to create an image or identity in the minds of their customers for their product, brand, or organisation. These online social data, however, remain hardly accessible to computers, as they are specifically meant for human consumption. The automatic analysis of online opinions, in fact, involves a deep understanding of natural language text by machines, from which we are still very far.
Hitherto, online information retrieval has been mainly based on algorithms relying on the textual representation of web pages. Such algorithms are very good at retrieving texts, splitting them into parts, checking the spelling and counting their words. But when it comes to interpreting sentences and extracting meaningful information, their capabilities are known to be very limited. Existing approaches to opinion mining and sentiment analysis, in particular, can be grouped into three main categories: keyword spotting, in which text is classified into categories based on the presence of fairly unambiguous affect words; lexical affinity, which assigns arbitrary words a probabilistic affinity for a particular emotion; and statistical methods, which calculate the valence of affective keywords and word co-occurrence frequencies on the basis of a large training corpus. Early works aimed to classify entire documents by their overall positive or negative polarity, or to predict the rating scores of reviews.
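The first category, keyword spotting, can be illustrated in a few lines; the tiny lexicons below are assumptions for the sketch, not a real affect lexicon.

```python
# Tiny illustrative lexicons; real affect lexicons are far larger.
POSITIVE = {"good", "great", "happy", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "sad", "hate", "awful"}

def keyword_spotting_polarity(text):
    """Classify text by counting affect-word hits, keyword-spotting style."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

Note the characteristic failure mode: "today was not good" is classified as positive, because negation changes the meaning but not the keyword count. This is exactly the shallowness that motivates the concept-level analysis proposed in the thesis.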
Such systems were mainly based on supervised approaches relying on manually labelled samples, such as movie or product reviews where the opinionist's overall positive or negative attitude was explicitly indicated. However, opinions and sentiments do not occur only at document level, nor are they limited to a single valence or target: contrary or complementary attitudes toward the same topic, or toward multiple topics, can be present across the span of a document. In more recent works, the granularity of text analysis has been taken down to segment and sentence level, e.g., by using the presence of opinion-bearing lexical items (single words or n-grams) to detect subjective sentences, or by exploiting association rule mining for a feature-based analysis of product reviews. These approaches, however, are still far from being able to infer the cognitive and affective information associated with natural language, as they mainly rely on knowledge bases that are still too limited to process text efficiently at sentence level.
In this thesis, common sense computing techniques are further developed and applied to bridge the semantic gap between word-level natural language data and the concept-level opinions conveyed by these. In particular, the ensemble application of graph mining and multi-dimensionality reduction techniques on two common sense knowledge bases was exploited to develop a novel intelligent engine for open-domain opinion mining and sentiment analysis. The proposed approach, termed sentic computing, performs a clause-level semantic analysis of text, which allows the inference of both the conceptual and emotional information associated with natural language opinions and, hence, a more efficient passage from (unstructured) textual information to (structured) machine-processable data.
The engine was tested on three different resources, namely a Twitter hashtag repository, a LiveJournal database and a PatientOpinion dataset, and its performance was compared both with results obtained using standard sentiment analysis techniques and with different state-of-the-art knowledge bases such as Princeton's WordNet, MIT's ConceptNet and Microsoft's Probase. Unlike most currently available opinion mining services, the developed engine does not base its analysis on a limited set of affect words and their co-occurrence frequencies, but rather on common sense concepts and the cognitive and affective valence conveyed by these. This allows the engine to be domain-independent and, hence, to be embedded in any opinion mining system for the development of intelligent applications in multiple fields such as the Social Web, HCI and e-health. Looking ahead, the combined novel use of different knowledge bases and of common sense reasoning techniques for opinion mining proposed in this work will eventually pave the way for the development of more bio-inspired approaches to the design of natural language processing systems capable of handling knowledge, retrieving it when necessary, making analogies and learning from experience.