Automated Detection and Counting of Pedestrians on an Urban Roadside
This thesis implements an automated system that counts pedestrians with 85% accuracy. Two approaches were considered and evaluated in terms of count accuracy, cost and ease of deployment. The first employs the Autoscope Solo Terra, a traffic camera widely used to monitor vehicular traffic. The Solo Terra supports an image-processing-based detector that counts the number of objects crossing user-defined areas in the captured image; because the count is updated from the amount of movement across the selected regions rather than from recognised pedestrians, a second approach was also considered. It uses histograms of oriented gradients (HOG), an advanced vision-based algorithm proposed by Dalal et al., which distinguishes a pedestrian from a non-pedestrian based on the omega shape formed by the head and shoulders of a human being. The implemented detection software processes video frames streamed from a low-cost digital camera. The frames are divided into sub-regions, which are scanned for an omega shape whenever movement is detected in those regions. It was found that the HOG-based approach degrades in performance due to occlusion under dense pedestrian traffic, whereas the Solo Terra approach appears more robust; however, both undercounts and overcounts were encountered with the Solo Terra. To combat the disadvantages of both approaches, they were integrated into a single system in which the count is incremented predominantly by the Solo Terra, while the HOG-based approach corrects the count under certain conditions. A preliminary prototype of the integrated system has been verified.
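The core of the HOG step described above is a histogram of gradient orientations, weighted by gradient magnitude, computed per image cell. The thesis does not publish its implementation, so the following is only a minimal NumPy sketch of one such cell histogram (the full Dalal-Triggs pipeline additionally uses 8×8 cells grouped into overlapping blocks, trilinear interpolation, and an SVM classifier, all omitted here):

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations (0-180 deg)
    for a single image cell, L2-normalised. A simplified sketch, not the full
    Dalal-Triggs descriptor."""
    gy, gx = np.gradient(cell.astype(float))          # per-pixel gradients
    mag = np.hypot(gx, gy)                            # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0      # fold into unsigned range
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                                  # vote weighted by magnitude
    return hist / (np.linalg.norm(hist) + 1e-6)       # L2 normalisation

# A cell containing a vertical edge: gradients are horizontal, so the
# histogram mass concentrates in the 0-degree orientation bin.
cell = np.tile([0, 0, 0, 0, 255, 255, 255, 255], (8, 1))
h = hog_cell_histogram(cell)
```

In a full detector, such histograms would be concatenated over a sliding window and fed to a trained classifier; scanning windows only where motion was detected, as the thesis does, keeps the cost manageable.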
Unusual event detection in real-world surveillance applications
Given the near-ubiquity of CCTV, there is significant ongoing research effort to apply image and video analysis methods together with machine learning techniques towards autonomous analysis of such data sources. However, traditional approaches to scene understanding remain dependent on training based on human annotations that need to be provided for every camera sensor. In this thesis, we propose an unusual event detection and classification approach which is applicable to real-world visual monitoring applications. The goal is to infer the usual behaviours in the scene and to judge the normality of the scene on the basis of the model created. The first requirement for the system is that it should not demand annotated data for training. Annotation is a laborious task, and it is not feasible in practice to annotate video data for each camera as an initial stage of event detection. Furthermore, even obtaining training examples for the unusual event class is challenging due to the rarity of such events in video data. The second requirement is online generation of results. In surveillance applications, it is essential to produce real-time results so that a security operator can respond swiftly and prevent the harmful consequences of unusual and antisocial events. Online learning also means the model can be continuously updated to accommodate natural changes in the environment. The third requirement is the ability to run the process indefinitely. These requirements are necessary for real-world surveillance applications, and approaches that conform to them need to be investigated. This thesis investigates unusual event detection methods that conform to these real-world requirements, through theoretical and experimental study of machine learning and computer vision algorithms.
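The three requirements (no annotations, online results, indefinite running) can be illustrated with a toy example. The thesis's actual models are not given here; the sketch below merely shows the pattern with a running Gaussian model of a scalar scene feature, updated in O(1) memory via Welford's algorithm, flagging any observation far from the learned norm:

```python
import math

class OnlineAnomalyDetector:
    """Running Gaussian model of one scalar feature; flags values far from the
    mean. Unsupervised (no annotations), online (constant-time update), and
    able to run indefinitely (fixed memory). An illustrative toy, not the
    thesis's method."""
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold  # z-score beyond which an event is 'unusual'

    def update(self, x):
        """Judge x against the current model, then fold it into the model."""
        unusual = False
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                unusual = True
        # Welford's incremental mean/variance update
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return unusual

det = OnlineAnomalyDetector()
flags = [det.update(v) for v in [10, 11, 9, 10, 12, 10, 11, 50]]
```

Because the model keeps updating after each observation, gradual environmental change (lighting, traffic patterns) is absorbed into the notion of "usual" rather than raising endless alarms.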
Validation Methods for Fault-Tolerant Avionics and Control Systems, Working Group Meeting 1
The proceedings of the first working group meeting on validation methods for fault-tolerant computer design are presented. The state of the art in fault-tolerant computer validation was examined in order to provide a framework for future discussions concerning research issues in the validation of fault-tolerant avionics and flight control systems. The development of positions concerning critical aspects of the validation process is described.
Model Order Reduction
The increasing complexity of models used to predict real-world systems leads to the need for algorithms that replace complex models with far simpler ones while preserving the accuracy of the predictions. This three-volume handbook covers methods as well as applications; this third volume focuses on applications in engineering, biomedical engineering, computational physics and computer science.
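The handbook covers many reduction methods; as one concrete illustration of the idea of replacing a complex model with a far simpler one, a projection-based reduction (proper orthogonal decomposition via the SVD) can be sketched in a few lines. The dimensions and data below are synthetic, chosen only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: 200 state variables sampled at 50 time steps,
# constructed so the dynamics actually live on a 3-dimensional subspace
# plus tiny noise (a best case for reduction).
basis = rng.standard_normal((200, 3))
coeffs = rng.standard_normal((3, 50))
snapshots = basis @ coeffs + 1e-6 * rng.standard_normal((200, 50))

# Proper orthogonal decomposition: keep the r leading left singular vectors.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3
Ur = U[:, :r]                  # reduced basis: 200-dim state -> r coordinates
reduced = Ur.T @ snapshots     # project the full states onto the basis
reconstructed = Ur @ reduced   # lift back to the full dimension

rel_error = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
```

Real applications project the governing equations themselves onto the reduced basis, not just the data, but the singular-value decay of the snapshot matrix is what decides how small the reduced model can be.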
Evaluation of face recognition algorithms under noise
One of the major applications of computer vision and image processing is face recognition, where a computerized algorithm automatically identifies a person's face from a large image dataset or even from a live video. This thesis addresses facial recognition, a topic that has been widely studied due to its importance in many applications in both civilian and military domains. The application of face recognition systems has expanded from security purposes to social networking sites, managing fraud, and improving user experience. Numerous algorithms have been designed to perform face recognition with good accuracy. The problem is challenging due to the dynamic nature of the human face and the different poses that it can take, and regardless of the algorithm, facial recognition accuracy can be heavily affected by the presence of noise. This thesis presents a comparison of traditional and deep learning face recognition algorithms in the presence of noise. For this purpose, Gaussian and salt-and-pepper noise are applied to face images drawn from the ORL dataset. Recognition is performed using each of the following eight algorithms: principal component analysis (PCA), two-dimensional PCA (2D-PCA), linear discriminant analysis (LDA), independent component analysis (ICA), discrete cosine transform (DCT), support vector machine (SVM), convolutional neural network (CNN) and AlexNet. The ORL dataset was used in the experiments to calculate the evaluation accuracy for each of the investigated algorithms. Each algorithm is evaluated in two experiments: in the first, only one image per person is used for training, whereas in the second, five images per person are used. The investigated traditional algorithms are implemented in MATLAB and the deep learning approaches in Python. The results show that the best traditional performance was obtained using the DCT algorithm with 92% dominant eigenvalues, at 95.25% accuracy, whereas for deep learning the best performance was a CNN with an accuracy of 97.95%, which makes it the best choice under noisy conditions.
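The two corruption models named in the abstract are standard and easy to reproduce. The thesis's exact noise parameters are not stated, so the `sigma` and `amount` values below are illustrative; the 112×92 image size is the only ORL-specific detail:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian_noise(img, sigma=25.0):
    """Additive zero-mean Gaussian noise, clipped back to the 8-bit range.
    sigma is an illustrative choice, not the thesis's setting."""
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_and_pepper(img, amount=0.05):
    """Flip a fraction `amount` of pixels to pure black or pure white."""
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0            # pepper
    noisy[mask > 1 - amount / 2] = 255      # salt
    return noisy

# Stand-in for an ORL face image (real ORL images are 112x92 greyscale).
face = rng.integers(20, 230, size=(112, 92), dtype=np.uint8)
g = add_gaussian_noise(face)
sp = add_salt_and_pepper(face, amount=0.05)
corrupted = np.mean((sp == 0) | (sp == 255))   # fraction of flipped pixels
```

Salt-and-pepper noise is the harsher of the two for subspace methods such as PCA, since a few saturated pixels can dominate the projection, which is one reason robustness comparisons like this thesis's are informative.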
Knowledge-Based Systems. Overview and Selected Examples
The Advanced Computer Applications (ACA) project builds on IIASA's traditional strength in the methodological foundations of operations research and applied systems analysis, and its rich experience in numerous application areas including the environment, technology and risk. The ACA group draws on this infrastructure and combines it with elements of AI and advanced information and computer technology to create expert systems that have practical applications.
By emphasizing a directly understandable problem representation, based on symbolic simulation and dynamic color graphics, and the user interface as a key element of interactive decision support systems, models of complex processes are made understandable and available to non-technical users.
Several completely externally-funded research and development projects in the field of model-based decision support and applied Artificial Intelligence (AI) are currently under way, e.g., "Expert Systems for Integrated Development: A Case Study of Shanxi Province, The People's Republic of China."
This paper gives an overview of some of the expert systems that have been considered, compared or assessed during the course of our research, and a brief introduction to some of our related in-house research topics.
A 3D Digital Approach to the Stylistic and Typo-Technological Study of Small Figurines from Ayia Irini, Cyprus
The thesis aims to develop a 3D digital approach to the stylistic and typo-technological study of coroplastic art, focusing on small figurines. The case study used to test the method is a sample of terracotta statuettes from an assemblage of approximately 2,000 statues and figurines found at the beginning of the 20th century in a rural open-air sanctuary at Ayia Irini (Cyprus) by the archaeologists of the Swedish Cyprus Expedition. The excavators identified continuity of worship at the sanctuary from the Late Cypriot III period (circa 1200 BC) to the end of the Cypro-Archaic II period (ca. 475 BC), and attributed the small figurines to the Cypro-Archaic I-II. Although the excavation was one of the first performed using the newly established stratigraphic method, the archaeologists studied the site and its material following a traditional, merely qualitative approach. Analysis of the published results identified a classification of the material with no clear-cut criteria, and the overlap between types highlights ambiguities in creating groups and classes. Similarly, stratigraphic arguments and differing opinions among archaeologists highlight the need for revision. Moreover, past legislation allowed the excavators to export half of the excavated antiquities, dispersing the assemblage. Today the assemblage is still partly exhibited at the Cyprus Museum in Nicosia and in four different museums in Sweden, a setting that prevents the assemblage from being studied, analysed and interpreted holistically. This research proposes a 3D chaîne opératoire methodology to study the collection's small terracotta figurines, aiming to understand the context's function and social role as reflected by the classification obtained with the 3D digital approach.
The integration proposed in this research of traditional archaeological studies and computer-assisted investigation based on quantitative criteria, identified and defined with 3D measurements and analytical investigations, is adopted as a solution to the biases of a solely qualitative approach. The 3D geometric analysis of the figurines focuses on the objects' shape and components, mode of manufacture, level of expertise, specialisation or skills of the craftsman, and production techniques. The analysis leads to the creation of classes of artefacts which allow archaeologists to formulate hypotheses on the production process, identify a common production (e.g., same hand, same workshop) and establish a relative chronological sequence. 3D reconstruction of the excavation area contributes to the virtual re-unification of the assemblage for its holistic study, the relative chronological dating of the figurines and the interpretation of their social and ritual purposes. The results obtained from the selected sample prove the efficacy of the proposed 3D approach, support the expansion of the analysis to the whole assemblage, and may initiate quantitative and systematic studies of Cypriot coroplastic production.