Automating the processes involved in facial composite production
Bringing a criminal to justice is a labour-intensive process. In the current paper, we explored ways of reducing police time when constructing and identifying facial composites. For the former, we designed and evaluated a standalone version of the EvoFIT composite system, which was found to perform similarly to the full system that normally requires several hours of a police officer's time. For the latter, we built a small database of composites that could be used to search for matching identities. Pixel-intensity (texture) information was found to be valuable for composites produced from a traditional feature-based system, whereas feature-shape information was valuable for composites produced from the recognition-based EvoFIT. The results show promise for the automated construction and identification of facial composites.
Evaluation of facial composites utilizing the EvoFIT software program
Facial composites are traditionally created with the assistance of a sketch artist, and the resulting image is then circulated within the police force and to the public. However, with the advance of computer technologies and a better understanding of how facial composites are created, composite software systems have developed greatly.
EvoFIT, an abbreviation for Evolutionary Facial Imaging Technique, is a computer program that creates composites based on Darwinian principles. It allows a witness to select faces with global features that are in turn combined to create new faces bearing a greater likeness to the offender. The EvoFIT program aims to improve the low recognition rates of currently used facial-composite methods. The purpose of this study is to evaluate the production of two composites of the same person as a mechanism for improving performance. The use of a second composite, paired composites, and morphed composites is examined as a mechanism for boosting recognition.
Ten sets of composites representing ten different volunteers (targets) were created using EvoFIT. The first composite in each set was named correctly 8.3% of the time, the second composite 18.3%, the paired composites 20.0%, and the morphed composites 23.3%. The results support the theory that the use of a second composite, a pair of composites, or morphed composites increases the number of instances in which namers correctly identify the target. This research suggests that it is valuable for a witness to construct a second composite using EvoFIT or similar software.
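The evolve-by-selection loop underlying EvoFIT can be illustrated with a minimal sketch. This is not the actual EvoFIT code: the face representation, population sizes, and fitness used here are hypothetical stand-ins. Faces are modelled as coefficient vectors, the witness's choices act as selection, and the chosen faces are recombined and mutated to breed the next array of candidates.

```python
import random

def random_face(n=8):
    """A face as a vector of n shape/texture coefficients (illustrative only)."""
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def breed(parents, pop_size, mutation=0.1):
    """Recombine witness-selected faces and mutate them to form the next array."""
    children = []
    for _ in range(pop_size):
        a, b = random.sample(parents, 2)
        # For each coefficient, inherit from either parent, then perturb slightly.
        child = [random.choice(pair) + random.gauss(0.0, mutation)
                 for pair in zip(a, b)]
        children.append(child)
    return children

def evolve(select, pop_size=18, picks=6, generations=5):
    """select(faces) stands in for the witness ranking faces by likeness."""
    faces = [random_face() for _ in range(pop_size)]
    for _ in range(generations):
        chosen = select(faces)[:picks]
        faces = breed(chosen, pop_size)
    return select(faces)[0]  # final composite: the best face in the last array

# Example: a stand-in "witness" who prefers faces close to a hidden target.
target = [0.3] * 8
def witness(faces):
    return sorted(faces, key=lambda f: sum((x - t) ** 2 for x, t in zip(f, target)))

composite = evolve(witness)
```

In the real system the "fitness function" is the witness's memory of the offender's face, which is why repeated selection from arrays of whole faces, rather than selection of individual features, is the defining characteristic of the holistic approach.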
Configural and featural information in facial-composite images
Eyewitnesses are often invited to construct a facial composite, an image created of the person they saw commit a crime that is used by law enforcement to locate criminal suspects. In the current paper, the effectiveness of composite images was investigated from traditional feature systems (E-FIT and PRO-fit), where participants (face constructors) selected individual features to build the face, and a more recent holistic system (EvoFIT), where they ‘evolved' a composite by repeatedly selecting from arrays of complete faces. Further participants attempted to name these composites when seen as an unaltered image, or when blurred, rotated, linearly stretched or converted to a photographic negative. All of the manipulations tested reduced correct naming of the composites overall except (i) for a low level of blur, for which naming improved for holistic composites but reduced for feature composites, and (ii) for 100% linear stretch, for which a substantial naming advantage was observed. Results also indicated that both featural (facial elements) and configural (feature spacing) information was useful for recognition in both types of composite system, but highly detailed information was more accurate in the feature-based than in the holistic method. The naming advantage of linear stretch was replicated using a forensically more practical procedure, with observers viewing an unaltered composite sideways. The work is valuable to police practitioners and designers of facial-composite systems.
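The 100% linear-stretch manipulation amounts to doubling the image along one axis, which is why viewing an unaltered composite sideways achieves the same effect. A minimal sketch on a grey-level pixel grid (pure Python; the grid values are hypothetical, and no image library is assumed):

```python
def stretch_vertical(pixels, factor=2):
    """100% linear stretch = doubled height: repeat each pixel row `factor` times."""
    return [row[:] for row in pixels for _ in range(factor)]

# A tiny 2x2 grey-level image, stretched to 4x2.
image = [[10, 20],
         [30, 40]]
stretched = stretch_vertical(image)
# stretched == [[10, 20], [10, 20], [30, 40], [30, 40]]
```

Because the stretch preserves all featural content while altering only the aspect ratio, it leaves the information needed for recognition intact.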
Proceedings of QG2010: The Third Workshop on Question Generation
These are the peer-reviewed proceedings of "QG2010, The Third Workshop on Question Generation". The workshop included a special track for "QGSTEC2010: The First Question Generation Shared Task and Evaluation Challenge".
QG2010 was held as part of The Tenth International Conference on Intelligent Tutoring Systems (ITS2010).
Experimental indices: Situational assemblages of facial recognition
Facial recognition technologies are increasingly used outside of constricted, laboratory-like settings. While supporters of the technologies contend that they help in identifying threats by linking specific bodies to hard evidence, we argue that the indexical relations they exhibit are best described as experimental, pointing to specific situational constellations within which they were initially created. By revisiting key moments in the development of (semi-)automated facial recognition technologies from the late 1960s to the present, we identify varying situational assemblages of facial recognition that depend on different understandings of indexicality. These experimental indices rely on historical dynamics, including significant government interest in the development of facial recognition technology, expansion in the scale of experimental settings, and dissolution of the formerly strict boundaries between the social spheres of private image-sharing, commercial image distribution, and institutional image forensics for identification. In coupling experimental indices with the development of facial recognition technologies, we hope to show a way forward to comparing the histories of other evidential technical images too.
A 3D Pipeline for 2D Pixel Art Animation
This document presents a comprehensive report on a project aimed at developing an automated process for creating 2D animations from 3D models using Blender. The project's main goal is to improve upon existing techniques and reduce the need for artists to do clerical tasks in the animation production process. The project involves the design and development of a plugin for Blender, coded in Python, which was developed to be efficient and reduce the time-intensive tasks that usually characterise some stages in the animation process.
The plugin supports three specific styles of animation: pixel art, cel shading, and cel shading with outlines, and can be expanded to support a wider range of styles. The plugin is also open-source, allowing for greater collaboration and potential contributions from the community. Despite the challenges faced, the project was successful in achieving its goals, and the results show that the plugin can achieve results similar to those produced with comparable tools and traditional animation. Future work includes keeping the plugin up to date with the latest versions of Blender, publishing it on GitHub and Blender plugin markets, and adding new art styles.
Addressing Algorithmic Bias in AI-Driven Customer Management
Research on AI has gained momentum in recent years. Many scholars and practitioners increasingly highlight the dark sides of AI, particularly those related to algorithmic bias. This study elucidates situations in which AI-enabled analytics systems make biased decisions against customers based on gender, race, religion, age, nationality or socioeconomic status. Based on a systematic literature review, this research proposes two approaches (a priori and post-hoc) to overcoming such biases in customer management. As part of the a priori approach, the findings suggest scientific, application, stakeholder and assurance consistencies. With regard to the post-hoc approach, the findings recommend six steps: bias identification, review of extant findings, selection of the right variables, responsible and ethical model development, data analysis, and action on insights. Overall, this study contributes to the ethical and responsible use of AI applications.