32 research outputs found

    Proceedings of the 1st Annual WASM: MECE HDR Conference

    1st Annual WA School of Mines: Minerals, Energy and Chemical Engineering HDR conference program and conference abstracts

    NASA Automated Rendezvous and Capture Review. Executive summary

    In support of the Cargo Transfer Vehicle (CTV) Definition Studies in FY-92, the Advanced Program Development division of the Office of Space Flight at NASA Headquarters conducted an evaluation and review of United States capabilities and the state of the art in Automated Rendezvous and Capture (AR&C). This review was held in Williamsburg, Virginia on 19-21 Nov. 1991 and included over 120 attendees from U.S. government organizations, industries, and universities. One hundred abstracts were submitted to the organizing committee for consideration, and forty-two were selected for presentation across five technical sessions. The papers addressed topics in the five categories below: (1) hardware systems and components; (2) software systems; (3) integrated systems; (4) operations; and (5) supporting infrastructure

    Learning in behavioural robotics

    The research described in this thesis examines how machine learning mechanisms can be used in an assembly robot system to improve the reliability of the system and reduce the development workload, without reducing the flexibility of the system. The justification for this is that for a robot to perform effectively it is frequently necessary to have gained experience of its performance under a particular configuration before that configuration can be altered to produce a performance improvement. Machine learning mechanisms can automate this activity of testing, evaluating and then changing. From studying how other researchers have developed working robot systems, the activities which require most effort and experimentation are: (1) the selection of optimal parameter settings; (2) the establishment of the action-sensor couplings which are necessary for the effective handling of uncertainty; and (3) choosing which way to achieve a goal. One way to implement the first two kinds of learning is to specify a model of the coupling, or of the interaction of parameters and results, and from that model derive an appropriate learning mechanism that will find a parametrisation of that model enabling good performance. From this starting point it has been possible to show how equal or better performance can be obtained by using learning mechanisms which are neither derived from nor require a model of the task being learned. Instead, by combining iteration and a task-specific profit function, it is possible to use a generic behavioural module based on a learning mechanism to achieve the task. Iteration and a task-specific profit function can also be used to learn which behavioural module, from a pool of equally competent modules, is the best to use at any one time to achieve a particular goal. Like the other two kinds of learning, this successfully automates an otherwise difficult test-and-evaluation process that would have to be performed by a developer. In doing so, it, like the other learning used here, shows that instead of being a peripheral issue to be introduced to a working system, learning, carried out in the right way, can be instrumental in the production of that working system
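
The third kind of learning described above, iteratively scoring each behavioural module in a pool with a task-specific profit function and keeping the best, can be sketched as follows. This is a minimal illustrative reconstruction, not the thesis's implementation; the module pool, the profit function and the trial count are all invented for the example.

```python
import random

def select_module(modules, profit, trials=20, seed=0):
    """Run each module `trials` times, score each outcome with the
    task-specific profit function, and return the best module's name
    together with the mean scores."""
    rng = random.Random(seed)
    scores = {}
    for name, run in modules.items():
        total = 0.0
        for _ in range(trials):
            outcome = run(rng)          # one execution of the behaviour
            total += profit(outcome)    # task-specific scoring
        scores[name] = total / trials
    return max(scores, key=scores.get), scores

# Two toy modules standing in for "equally competent" behaviours with
# different noise characteristics (invented for illustration).
modules = {
    "cautious": lambda rng: 0.8 + rng.uniform(-0.05, 0.05),
    "fast":     lambda rng: 0.7 + rng.uniform(-0.30, 0.30),
}
best, scores = select_module(modules, profit=lambda outcome: outcome)
```

The point of the sketch is that the selector needs no model of how either module works internally: iteration plus a profit function is enough to automate the test-and-evaluate cycle.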

    Vulnerability assessment in the use of biometrics in unsupervised environments

    International Mention in the doctoral degree. In the last few decades, we have witnessed a large-scale deployment of biometric systems in different life applications, replacing traditional recognition methods such as passwords and tokens. We have reached a point where we use biometric systems in our daily life. On a personal scale, authentication to our electronic devices (smartphones, tablets, laptops, etc.) utilizes biometric characteristics to provide access permission. Moreover, we access our bank accounts and perform various types of payments and transactions using the biometric sensors integrated into our devices. On the other hand, different organizations, companies, and institutions use biometric-based solutions for access control. On the national scale, police authorities and border control measures use biometric recognition devices for individual identification and verification purposes. Therefore, biometric systems are relied upon to provide secure recognition where only the genuine user can be recognized as being himself. Moreover, the biometric system should ensure that an individual cannot be identified as someone else. In the literature, there are a surprising number of experiments that show the possibility of stealing someone's biometric characteristics and using them to create an artificial biometric trait that an attacker can use to claim the identity of the genuine user. There have also been real cases of people who successfully fooled biometric recognition systems in airports and smartphones [1]–[3]. This urges the necessity to investigate the potential threats and propose countermeasures that ensure high levels of security and user convenience. Consequently, performing security evaluations is vital to identify: (1) the security flaws in biometric systems, (2) the possible threats that may target the defined flaws, and (3) measurements that describe the technical competence of the biometric system security.
    Identifying the system vulnerabilities leads to proposing adequate security solutions that assist in achieving higher integrity. This thesis aims to investigate the vulnerability of the fingerprint modality to presentation attacks in unsupervised environments, and then to implement mechanisms to detect those attacks and avoid the misuse of the system. To achieve these objectives, the thesis is carried out in the following three phases. In the first phase, the generic biometric system scheme is studied by analyzing the vulnerable points, with special attention to the vulnerability to presentation attacks. The study reviews the literature on presentation attacks and the corresponding solutions, i.e. presentation attack detection mechanisms, for six biometric modalities: fingerprint, face, iris, vascular, handwritten signature, and voice. Moreover, it provides a new taxonomy for presentation attack detection mechanisms. The proposed taxonomy helps to comprehend the issue of presentation attacks and how the literature has tried to address it, and represents a starting point for new investigations that propose novel presentation attack detection mechanisms. In the second phase, an evaluation methodology is developed from two sources: (1) the ISO/IEC 30107 standard, and (2) the Common Evaluation Methodology of the Common Criteria. The developed methodology characterizes two main aspects of a presentation attack detection mechanism: (1) the resistance of the mechanism to presentation attacks, and (2) the corresponding threat of the studied attack. The first part is conducted by showing the mechanism's technical capabilities and how it influences the security and ease of use of the biometric system. The second part is done by performing a vulnerability assessment considering all the factors that affect the attack potential. Finally, a data collection is carried out, including 7128 fingerprint videos of bona fide and attack presentations.
    The data is collected using two sensing technologies, two presentation scenarios, and considering seven attack species. The database is used to develop dynamic presentation attack detection mechanisms that exploit the fingerprint spatio-temporal features. In the final phase, a set of novel presentation attack detection mechanisms is developed exploiting the dynamic features caused by natural fingerprint phenomena such as perspiration and elasticity. The evaluation results show an efficient capability to detect attacks where, in some configurations, the mechanisms are capable of eliminating some attack species and mitigating the rest of the species while keeping the user convenience at a high level. Doctoral Programme in Electrical, Electronic and Automatic Engineering, Universidad Carlos III de Madrid. Chair: Cristina Conde Vila. Secretary: Mariano López García. Panel member: Farzin Derav
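
The evaluation methodology described above builds on ISO/IEC 30107, whose Part 3 reports presentation attack detection performance chiefly through APCER (the proportion of attack presentations of a given attack species wrongly classified as bona fide) and BPCER (the proportion of bona fide presentations wrongly rejected as attacks). A minimal sketch of those two metrics, with invented example data:

```python
def apcer_per_species(attack_results):
    """attack_results: {species: [True if classified as bona fide, ...]}.
    ISO/IEC 30107-3 reports APCER separately per attack species."""
    return {s: sum(r) / len(r) for s, r in attack_results.items()}

def bpcer(bona_fide_results):
    """bona_fide_results: [True if wrongly classified as attack, ...]."""
    return sum(bona_fide_results) / len(bona_fide_results)

# Invented example: one species slips through 2% of the time,
# another is fully eliminated by the detector.
attacks = {
    "gelatin": [False] * 98 + [True] * 2,
    "playdoh": [False] * 100,
}
rates = apcer_per_species(attacks)              # {'gelatin': 0.02, 'playdoh': 0.0}
rate_bpcer = bpcer([True] * 3 + [False] * 97)   # 0.03
```

Reporting APCER per species, rather than averaged, is what lets an evaluation say that some attack species are "eliminated" while others are only mitigated, as in the results above.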

    The 31st Aerospace Mechanisms Symposium

    The proceedings of the 31st Aerospace Mechanisms Symposium are reported. Topics covered include: robotics, deployment mechanisms, bearings, actuators, scanners, boom and antenna release, and test equipment. A major focus is the reporting of problems and solutions associated with the development and flight certification of new mechanisms

    Face Liveness Detection under Processed Image Attacks

    Face recognition is a mature and reliable technology for identifying people. Due to high-definition cameras and supporting devices, it is considered the fastest and the least intrusive biometric recognition modality. Nevertheless, effective spoofing attempts on face recognition systems were found to be possible. As a result, various anti-spoofing algorithms were developed to counteract these attacks; they are commonly referred to in the literature as liveness detection tests. In this research we highlight the effectiveness of some simple, direct spoofing attacks, and test one of the current robust liveness detection algorithms, i.e. the logistic-regression-based face liveness detection from a single image proposed by Tan et al. in 2010, against malicious attacks using processed imposter images. In particular, we study experimentally the effect of common image processing operations, such as sharpening and smoothing, as well as corruption with salt-and-pepper noise, on the face liveness detection algorithm, and we find that it is especially vulnerable to spoofing attempts using processed imposter images. We design and present a new facial database, the Durham Face Database, which is the first, to the best of our knowledge, to have client, imposter, as well as processed imposter images. Finally, we evaluate our claim on the effectiveness of the proposed imposter image attacks using transfer learning on Convolutional Neural Networks. We verify that such attacks are more difficult to detect even when using high-end, expensive machine learning techniques
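
The three image-processing operations the study applies to imposter images, sharpening, smoothing, and salt-and-pepper noise, can be sketched in plain NumPy as below. This is an illustrative reconstruction, not the paper's code; the kernel choices and the noise level are assumptions.

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive zero-padded 'same' convolution (the kernels used here are
    symmetric, so convolution and correlation coincide)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def smooth(img):
    # 3x3 box blur
    return convolve2d(img, np.full((3, 3), 1.0 / 9.0))

def sharpen(img):
    # Laplacian-style sharpening kernel, clipped to the 8-bit range
    k = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
    return np.clip(convolve2d(img, k), 0, 255)

def salt_and_pepper(img, amount=0.05, seed=0):
    # Replace a fraction `amount` of pixels with pure black or white
    rng = np.random.default_rng(seed)
    out = img.astype(float).copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0.0        # pepper
    out[mask > 1 - amount / 2] = 255.0  # salt
    return out
```

Applied to an imposter face image before presentation, each of these operations perturbs the image statistics that a single-image liveness detector relies on, which is the attack surface the paper investigates.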

    A comparative analysis of algorithms for satellite operations scheduling

    Scheduling is employed in everyday life, ranging from meetings to manufacturing and operations, among other activities. One instance of scheduling in a complex real-life setting is space mission operations scheduling, i.e. instructing a satellite to perform fitting tasks during predefined time periods, with a varied frequency, to achieve its mission goals. Mission operations scheduling is pivotal to the success of any space mission, choreographing every task carefully, accounting for technological and environmental limitations and constraints along with mission goals.

It remains standard practice to this day to generate operations schedules manually, i.e. to collect requirements from individual stakeholders, collate them into a timeline, compare against feasibility and available satellite resources, and find potential conflicts. Conflict resolution is done by hand, checked by a simulator and uplinked to the satellite weekly. This process is time consuming, bears risks and can be considered sub-optimal.

A pertinent question arises: can we automate the process of satellite mission operations scheduling? And if we can, what method should be used to generate the schedules? In an attempt to address this question, a comparison of algorithms was deemed suitable in order to explore their suitability for this particular application.

The problem of mission operations scheduling was initially studied through literature and numerous interviews with experts. A framework was developed to approximate a generic Low Earth Orbit satellite, its environment and its mission requirements. Optimisation algorithms were chosen from different categories, such as single-point stochastic without memory (Simulated Annealing, Random Search) and multi-point stochastic with memory (Genetic Algorithm, Ant Colony System, Differential Evolution), and were run both with and without Local Search.

The algorithmic set was initially tuned using a single 89-minute Low Earth Orbit of a scientific mission to Mars. It was then applied to scheduling operations during one high-altitude Low Earth Orbit (2.4 hrs) of an experimental mission, and subsequently to a realistic test case inspired by the European Space Agency PROBA-2 mission, comprising a 1-day schedule and then a 7-day schedule, equal to a Short Term Plan as defined by the European Space Agency.

The schedule fitness, corresponding to the Hamming distance between mission requirements and the generated schedule, is presented along with the execution time of each run. Algorithmic performance is discussed and put at the disposal of mission operations experts for consideration
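
The fitness measure used above, the Hamming distance between the requirements timeline and a candidate schedule, pairs naturally with the single-point stochastic methods compared in the thesis. The toy sketch below shows a binary on/off timeline optimised by Simulated Annealing; it is an illustration of the fitness idea, not the thesis framework, and the cooling schedule, step count and timeline are invented.

```python
import math
import random

def hamming(a, b):
    """Number of time slots where schedule and requirements disagree."""
    return sum(x != y for x, y in zip(a, b))

def simulated_annealing(requirements, steps=2000, t0=2.0, seed=1):
    """Minimise the Hamming distance to `requirements` by flipping one
    randomly chosen slot per step, with linearly cooled acceptance."""
    rng = random.Random(seed)
    n = len(requirements)
    current = [rng.randint(0, 1) for _ in range(n)]
    cost = hamming(current, requirements)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9      # linear cooling
        candidate = current[:]
        candidate[rng.randrange(n)] ^= 1         # flip one slot
        c = hamming(candidate, requirements)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if c <= cost or rng.random() < math.exp((cost - c) / t):
            current, cost = candidate, c
    return current, cost

requirements = [1, 0, 1, 1, 0, 0, 1, 0] * 4      # 32 toy time slots
best, cost = simulated_annealing(requirements)
```

A real scheduler would evaluate the cost change of a single flip incrementally rather than recomputing the full Hamming distance each step, but the full recomputation keeps the sketch short.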

    Visual dysfunction : a contributing factor in memory deficits, and therefore learning difficulties?

    This thesis is based on Educational Therapy (ET) practice, which has found eye muscle imbalance to be a key factor to be addressed in the management of learning difficulties (LD). This level of oculo-motor (o-m) function is a 'hidden' handicap, as individuals are unaware of the problem; it is not routinely tested; and it is not generally included in learning difficulties research. O-m function is omitted in standard paediatric optometry tests and in school vision screening. Eye exercises increase the range of binocular fields of vision by employing stereopsis glasses and red/green slides. Central vision loss was uncovered when students reported that words, seen by only the right eye, disappear or switch on and off. When the left eye was covered, right eye vision returned but was lost again with binocular vision, even though larger shapes on the screen remained complete. In effect, global vision was unaffected while right eye central (foveal) vision was suppressed. This is considered significant because students attending ET have learning difficulties with phonemic memory, spelling and reading deficits, which are predominantly left hemisphere processes. The aim of this three-part study, consisting of a School Survey, an ET Intervention study and Case studies, was to: a) determine whether o-m dysfunction was found in a girls' school population and/or was associated with LD; b) set up an Intervention study to explore the effects of vision training on the outcomes of a subsequent week-long word-skills programme in the ET practice (two case studies were also examined, those of matched senior school boys whose outcomes were significantly different); and c) examine more closely the common pattern of muscle imbalance in two case studies of current junior school students. This tested the therapy assumption that mal-adaptive sensory feedback was contributing to o-m dysfunction. This notion is based on the Luria (1973) Model of Levels of Neural Function, which provides the framework for ET practice, and the Developmental Model of LD that has evolved in application and explanation.

Part 1: School Survey. This exploratory, cross-sectional study included a randomised sample of 277 participants in a private girls' school. A 7-10 minute screening was provided by five optometrists, with an expanded protocol including o-m function. Also assessed were academic standards of reading comprehension and spelling, reasoning, visual perception, phonological skills, auditory, visual and phonemic memory, and arm dominance. Results showed visual dysfunction and mixed eye dominance in approximately equal numbers. Of the 47% of girls with visual dysfunction, not all had literacy problems; however, LD students had corresponding degrees of o-m dysfunction, memory deficit and mixed hand/arm dominance.

Part 2: Intervention study. The research question for the Intervention study was: does the difference in learning standards depend on which eye is disadvantaged in the case of weak binocularity? This question was answered by determining the outcomes for literacy levels once normal binocular o-m function and stable eye dominance were established. Twenty-four students (6 to 18 years) had Behavioural Optometry assessment prior to commencing therapy and were found to have o-m dysfunction, undetected by previous standard optometry tests. Eye exercise results showed 62.5% of the group had changed from left to right eye dominance. The dominance criterion was set by this group, indicated by the right eye holding fixation through the full range of fusional reserves (binocular overlap), together with an eye-tracking speed >20% faster in the right eye than in the left. Associated significant gains in literacy and phonemic memory were also achieved by the newly established 'right-eyed' group. In spite of undergoing identical treatment, the 'left-eyed' group retained limited foveal binocularity and made less progress in literacy outcomes.

Part 3: Two current Case Studies. Present ET practice benefited from insights gained from the 36% 'unsuccessful' participants of the previous study. Better therapy outcomes are achieved from an integrative motor-sensory approach, supported by Podiatry and Cranial Osteopathy. This detailed study involved two junior school boys who exemplified a common pattern of physical anomalies. For example, RW (8-year-old male) had 'minimal brain damage' and LD co-occurring with unstable feet and o-m control, postural muscle imbalance, poor balance, motor co-ordination and dyspraxia. After 18 two-hour therapy sessions over nine months, he is now reading well, his motor co-ordination, eye tracking and writing are within the 'low normal range', and he is interacting competently with his peers. Learning difficulties can be conceptualised as a profile of immaturities. The results of this three-part study have shown that once the 'hidden' handicap of right eye suppression is overcome with balanced binocular fields of vision, learning difficulties are ameliorated. This is affirmed by the positive gains achieved by these students, not only in literacy skills but also in 'outgrowing' immaturity in motor-sensory-perceptual development