
    Image and Video Forensics

    Nowadays, images and videos have become the main modalities of information exchanged in everyday life, and their pervasiveness has led the image forensics community to question their reliability, integrity, confidentiality, and security. Multimedia content is generated in many different ways through the use of consumer electronics and high-quality digital imaging devices, such as smartphones, digital cameras, tablets, and wearable and IoT devices. The ever-increasing convenience of image acquisition has facilitated the instant distribution and sharing of digital images on social platforms, generating a great amount of exchanged data. Moreover, the pervasiveness of powerful image editing tools has allowed the manipulation of digital images for malicious or criminal ends, up to the creation of synthesized images and videos with the use of deep learning techniques. In response to these threats, the multimedia forensics community has produced major research efforts on source identification and manipulation detection. In all cases where images and videos serve as critical evidence (e.g., forensic investigations, fake-news debunking, information warfare, and cyberattacks), forensic technologies that help determine the origin, authenticity, and integrity of multimedia content can become essential tools. This book aims to collect a diverse and complementary set of articles that demonstrate new developments and applications in image and video forensics to tackle new and serious challenges and to ensure media authenticity.

    A supervised method for finding discriminant variables in the analysis of complex problems: case studies in Android security and source printer attribution

    Advisors: Ricardo Dahab, Anderson de Rezende Rocha. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Solving a problem where many components interact and affect results simultaneously requires models that are not always treatable by traditional analytic methods. Although in many cases the result can be predicted with excellent accuracy through machine learning algorithms, interpreting the phenomenon requires understanding which variables are most relevant and how they contribute to the results. This dissertation presents a method in which the discriminant variables are identified through an iterative ranking process. In each iteration, a classifier is trained and validated, the variables that least contribute to the result are discarded, and the impact of this reduction on the classification metrics is evaluated. Classification uses the Random Forest algorithm, and the discarding decision uses its feature-importance property. The method was validated in two studies of complex systems of different natures, which gave rise to the articles presented here. The first article deals with the analysis of the relations between malware and the operating-system resources requested by it within an ecosystem of Android applications. To carry out this study, data structured according to an ontology defined in the article (OntoPermEco) were captured from 4,570 applications (2,150 malware, 2,420 benign). The complex model produced a graph of about 55,000 nodes and 120,000 edges, which was transformed using the Bag of Graphs technique into feature vectors with 8,950 elements for each application. Using only the data available in the application's manifest, this model achieved 88% accuracy and 91% precision in predicting whether an application's behavior is malicious, and the proposed method was able to identify 24 features relevant to classifying applications and identifying malware families, corresponding to only 70 nodes of the entire ecosystem graph. The second article addresses the identification of regions in a printed document that contain information relevant to attributing the laser printer that printed it. The discriminant-variable identification method was applied to feature vectors obtained by applying the Convolutional Texture Gradient Filter (CTGF) texture descriptor to images scanned at 600 DPI from a dataset of 1,200 documents printed on ten printers, achieving average accuracy and precision of 95.6% and 93.9%, respectively, in source-printer attribution. After the source printer was assigned to each document, eight of the ten printers allowed the identification of discriminant variables uniquely associated with each of them, making it possible to visualize in the document's image the regions of interest for expert analysis. The work in both articles accomplished the objective of reducing a complex system to an interpretable, streamlined model, demonstrating the effectiveness of the proposed method in the analysis of two problems in different areas (application security and digital forensics) with complex models and entirely different representation structures.
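    The iterative ranking-by-elimination loop described above can be sketched in a few lines; the following is an illustrative reconstruction on synthetic data (the dataset, cross-validation split, and drop rate are our assumptions, not the dissertation's actual pipeline).

```python
# Iterative discriminant-variable ranking: train a Random Forest,
# record accuracy, drop the least-important features, and repeat.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # only features 0 and 1 carry signal

features = list(range(X.shape[1]))
history = []
while len(features) > 2:
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(clf, X[:, features], y, cv=3).mean()
    history.append((len(features), acc))      # track the accuracy impact
    clf.fit(X[:, features], y)
    order = np.argsort(clf.feature_importances_)
    keep = order[len(order) // 2:]            # discard the weakest half
    features = [features[i] for i in keep]

print(history, sorted(features))
```

    With informative features dominating the importance ranking, the surviving subset shrinks toward the variables that actually drive the classification, while `history` records how each reduction affected accuracy.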

    Face Liveness Detection under Processed Image Attacks

    Face recognition is a mature and reliable technology for identifying people. Due to high-definition cameras and supporting devices, it is considered the fastest and the least intrusive biometric recognition modality. Nevertheless, effective spoofing attempts on face recognition systems were found to be possible. As a result, various anti-spoofing algorithms were developed to counteract these attacks; they are commonly referred to in the literature as liveness detection tests. In this research we highlight the effectiveness of some simple, direct spoofing attacks, and test one of the current robust liveness detection algorithms, namely the logistic-regression-based face liveness detection from a single image proposed by Tan et al. in 2010, against malicious attacks using processed imposter images. In particular, we study experimentally the effect of common image processing operations, such as sharpening and smoothing, as well as corruption with salt-and-pepper noise, on the face liveness detection algorithm, and we find that it is especially vulnerable to spoofing attempts using processed imposter images. We design and present a new facial database, the Durham Face Database, which is the first, to the best of our knowledge, to contain client, imposter, and processed imposter images. Finally, we evaluate our claim on the effectiveness of the proposed imposter-image attacks using transfer learning on Convolutional Neural Networks. We verify that such attacks are more difficult to detect even when using high-end, expensive machine learning techniques.
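    The three processing operations used to build the attack images (smoothing, sharpening via unsharp masking, and salt-and-pepper corruption) can be reproduced with plain array code. This is a generic sketch on a toy grayscale array; kernel sizes and noise rates are illustrative, not the paper's parameters.

```python
import numpy as np

def salt_and_pepper(img, amount=0.05, rng=None):
    # Corrupt a grayscale image (2-D array in [0, 255]) with salt & pepper noise.
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0           # pepper: force pixels to black
    out[mask > 1 - amount / 2] = 255     # salt: force pixels to white
    return out

def box_smooth(img, k=3):
    # Simple k x k box blur using edge padding (no SciPy dependency).
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp(img, k=3, strength=1.0):
    # Sharpening as unsharp masking: img + strength * (img - blurred).
    blurred = box_smooth(img, k)
    return np.clip(img + strength * (img - blurred), 0, 255)

img = (np.arange(64).reshape(8, 8) * 4).astype(float)
noisy = salt_and_pepper(img, amount=0.2)
sharp = unsharp(img)
```

    Applying such operations to imposter photographs before presenting them to the sensor is what makes the processed-imposter attacks studied here cheap to mount.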

    ANALYSIS OF CLIENT-SIDE ATTACKS THROUGH DRIVE-BY HONEYPOTS

    Client-side cyberattacks on Web browsers are becoming more common relative to server-side cyberattacks. This work tested the ability of the honeypot (decoy) client software Thug to detect malicious or compromised servers that secretly download malicious files to clients, and to classify what it downloaded. Prior to using Thug, we did TCP/IP fingerprinting to assess Thug’s ability to impersonate different Web browsers, and we created our own malicious Web server with some drive-by exploits to verify Thug’s functions; Thug correctly identified 85 out of 86 exploits from this server. We then tested Thug’s analysis of delivered exploits from two sets of real Web servers: one set was obtained from random Internet addresses of Web servers, and the other came from a commercial blacklist. The rates of malicious activity on 37,415 random websites and 83,667 blacklisted websites were 5.6% and 1.15%, respectively. Thug’s interaction with the blacklisted Web servers found 163 unique malware files. We demonstrated the usefulness and efficiency of client-side honeypots in analyzing harmful data presented by malicious websites. These honeypots can help government and industry defenders to proactively identify suspicious Web servers and protect users. OUSD(R&E). Outstanding Thesis. Lieutenant, United States Navy. Approved for public release; distribution is unlimited.

    My Text in Your Handwriting

    There are many scenarios where we wish to imitate a specific author’s pen-on-paper handwriting style. Rendering new text in someone’s handwriting is difficult because natural handwriting is highly variable, yet follows both intentional and involuntary structure that makes a person’s style self-consistent. The variability means that naive example-based texture synthesis can be conspicuously repetitive. We propose an algorithm that renders a desired input string in an author’s handwriting. An annotated sample of the author’s handwriting is required; the system is flexible enough that historical documents can usually be used with only a little extra effort. Experiments show that our glyph-centric approach, with learned parameters for spacing, line thickness, and pressure, produces novel images of handwriting that look hand-made to casual observers, even when printed on paper.
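    The glyph-centric idea, placing per-character glyph images along a line with sampled spacing so the output is not conspicuously repetitive, can be illustrated with a toy composer. The glyph bitmaps and spacing model below are random stand-ins, not the paper's learned parameters.

```python
import numpy as np

# Toy glyph-centric renderer: concatenate per-character glyph bitmaps
# with jittered inter-glyph gaps, so repeated characters are spaced
# slightly differently each time (a crude stand-in for learned spacing).
rng = np.random.default_rng(0)
glyphs = {c: rng.integers(0, 2, size=(12, 8)) for c in "abc"}  # fake 12x8 glyphs

def render(text, mean_gap=2, jitter=1):
    pieces = []
    for ch in text:
        pieces.append(glyphs[ch])
        gap = max(1, mean_gap + int(rng.integers(-jitter, jitter + 1)))
        pieces.append(np.zeros((12, gap), dtype=int))  # blank spacing column
    return np.hstack(pieces)

img = render("abcab")
```

    A real system would additionally vary line thickness and pressure per glyph instance, which is what makes the published results look hand-made.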

    Forensic Box for Quick Network-Based Security Assessments

    Network security assessments are seen as important, yet cumbersome and time-consuming, tasks, mostly due to the use of different and manually operated tools. These are often very specialized tools that need to be mastered and combined, and sometimes require setting up a testing environment. Nonetheless, in many cases it would be useful to obtain an audit in a swift and on-demand manner, even if with less detail. In such cases, these audits could be used as an initial step for a more detailed evaluation of the network security, as a complement to other audits, or to aid in preventing major data leaks and system failures due to common configuration, management, or implementation issues. This dissertation describes the work towards the design and development of a portable system for quick network security assessments and the research on the automation of many of the tasks (and associated tools) composing that process. An embodiment of such a system was built using a Raspberry Pi 2 and several well-known open-source tools, whose functions range from network discovery, service identification, Operating System (OS) fingerprinting, and network sniffing to vulnerability discovery, together with custom scripts and programs for connecting all the different parts that comprise the system. The tools are integrated in a seamless manner with the system, to allow deployment in wired or wireless network environments, where the device carries out a mostly automated and thorough analysis. The device is near plug-and-play and produces a structured report at the end of the assessment. Several simple functions, such as re-scanning the network or doing Address Resolution Protocol (ARP) poisoning on the network, are readily available through a small LCD display mounted on top of the device. It offers a web-based interface, also developed within the scope of this work, for finer configuration of the several tools and for viewing the report. Other specific outputs, such as PCAP files with collected traffic, are available for further analysis. The system was operated in controlled and real networks, so as to verify the quality of its assessments. The obtained results were compared with the results obtained through manually auditing the same networks. The achieved results showed that the device was able to detect many of the issues that the human auditor detected, but it showed some shortcomings with some specific vulnerabilities, mainly Structured Query Language (SQL) injections. The image of the OS with the pre-configured tools, automation scripts, and programs is available for download from [Ber16b]; it comprises one of the main outputs of this work.
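    The kind of automated network-discovery step such a box chains together can be sketched as a concurrent TCP port probe; this is a stand-in for the nmap-style scans the actual system wraps, with the host and port list purely illustrative.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def probe(host, port, timeout=0.5):
    # Attempt a TCP connection; an accepted connection means the port is open.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return port, True
    except OSError:
        return port, False

def scan(host, ports):
    # Probe ports concurrently and return the sorted list of open ones.
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = dict(pool.map(lambda p: probe(host, p), ports))
    return sorted(p for p, is_open in results.items() if is_open)
```

    In a full assessment pipeline, the open-port list would then feed service identification and vulnerability checks, with the results aggregated into the structured report.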

    Democracy Enhancing Technologies: Toward deployable and incoercible E2E elections

    End-to-end verifiable election systems (E2E systems) provide a provably correct tally while maintaining the secrecy of each voter's ballot, even if the voter is complicit in demonstrating how they voted. Providing voter incoercibility is one of the main challenges of designing E2E systems, particularly in the case of internet voting. A second challenge is building deployable, human-voteable E2E systems that conform to election laws and conventions. This dissertation examines deployability, coercion-resistance, and their intersection in election systems. In the course of this study, we introduce three new election systems (Scantegrity, Eperio, and Selections), report on two real-world elections using E2E systems (Punchscan and Scantegrity), and study incoercibility issues in one deployed system (Punchscan). In addition, we propose and study new practical primitives for random beacons, secret printing, and panic passwords. These are tools that can be used in an election to, respectively, generate publicly verifiable random numbers, distribute the printing of secrets between non-colluding printers, and covertly signal duress during authentication. While developed to solve specific problems in deployable and incoercible E2E systems, these techniques may be of independent interest.
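    A toy illustration of the random-beacon primitive mentioned above: combine contributed entropy with a hash so that any verifier can recompute the output and any single honest contribution randomizes it. This is a generic commit-and-combine sketch, not the dissertation's construction, and the contributor strings are invented.

```python
import hashlib

def beacon(contributions):
    # Hash each contribution, then hash the sorted digests together so the
    # output is order-independent and publicly recomputable by any verifier.
    h = hashlib.sha256()
    for c in sorted(contributions):
        h.update(hashlib.sha256(c.encode()).digest())
    return h.hexdigest()

v1 = beacon(["alice:42", "bob:7"])
v2 = beacon(["bob:7", "alice:42"])   # same set, different order
```

    In an election setting, such a value could seed publicly auditable randomness, e.g., for selecting ballots to audit; real beacons must additionally prevent the last contributor from grinding the output.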

    Analysis of intrinsic and extrinsic properties of biometric samples for presentation attack detection

    Advisors: Anderson de Rezende Rocha, Hélio Pedrini. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Recent advances in biometrics, information forensics, and security have improved the recognition effectiveness of biometric systems. However, an ever-growing challenge is the vulnerability of such systems to presentation attacks, in which impostor users create synthetic samples from the original biometric information of a legitimate user and show them to the acquisition sensor, seeking to authenticate themselves as legitimate users. Depending on the trait used by the biometric authentication, the attack types vary with the type of material used to build the synthetic samples. For instance, in facial biometric systems, an attempted attack is characterized by the type of material the impostor uses, such as a photograph, a digital video, or a 3D mask with the facial information of a target user. In iris-based biometrics, presentation attacks can be accomplished with printout photographs or with contact lenses containing the iris patterns of a target user, or even synthetic texture patterns. In fingerprint biometric systems, impostor users can deceive the authentication process using replicas of the fingerprint patterns built with synthetic materials such as latex, play-doh, and silicone, among others. This research aimed at developing presentation attack detection (PAD) solutions that detect attempted attacks considering the different attack types in each modality. The lines of investigation presented in this thesis aimed at devising and developing representations based on spatial, temporal, and spectral information from the noise signature, on intrinsic properties of the biometric data (e.g., albedo, reflectance, and depth maps), and on supervised feature-learning techniques, taking into account different testing scenarios, including cross-sensor, intra-dataset, and inter-dataset scenarios. The main findings and contributions presented in this thesis include: the creation of a large and publicly available benchmark containing approximately 17K videos of simulated presentation attacks and bona-fide presentations in a facial biometric system, whose collection was formally authorized by the Research Ethics Committee at Unicamp; the development of novel approaches to modeling and analyzing extrinsic properties of biometric samples related to artifacts added during the manufacturing of the synthetic samples and their capture by the acquisition sensor, whose results were superior to several approaches published in the literature that use traditional methods for image analysis (e.g., texture-based analysis); the investigation of an approach based on the analysis of intrinsic properties of faces, estimated from the information of shadows present on their surface; and the investigation of different approaches to automatically learning representations related to our problem, whose results were superior or competitive to state-of-the-art methods for the biometric modalities considered in this thesis. We also considered the design of efficient neural networks with shallow architectures capable of learning characteristics related to our problem from the small datasets available for developing and evaluating PAD solutions. Funding: CNPq (140069/2016-0), CAPES (142110/2017-5).
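    A noise-signature feature of the kind analyzed here can be sketched by subtracting a low-pass estimate of the image to isolate the high-frequency residual, then summarizing it with a spectral statistic. This is an illustration of the general idea on a toy array, not the thesis's actual descriptor.

```python
import numpy as np

def noise_residual(img):
    # 3x3 mean filter via edge padding; the residual is the high-frequency
    # content left after removing the low-pass estimate.
    pad = np.pad(img.astype(float), 1, mode="edge")
    smooth = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                 for dy in range(3) for dx in range(3)) / 9.0
    return img - smooth

def spectral_energy(residual):
    # Total magnitude of the residual's Fourier spectrum: a crude scalar
    # summary of the "noise signature".
    return float(np.abs(np.fft.fft2(residual)).sum())

img = np.ones((16, 16)) * 100.0                       # perfectly flat patch
flat_energy = spectral_energy(noise_residual(img))    # zero residual
noisy = img + np.random.default_rng(0).normal(0, 5, img.shape)
noisy_energy = spectral_energy(noise_residual(noisy))
```

    The intuition exploited by PAD methods is that recapture and synthetic-sample manufacturing alter this residual in characteristic ways relative to bona-fide acquisitions.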

    Augmented interaction for custom-fit products by means of interaction devices at low costs

    This Ph.D. thesis refers to a research project that aims at developing an innovative platform to design lower-limb prostheses (both for below- and above-knee amputation), centered on the virtual model of the amputee and based on a computer-aided and knowledge-guided approach. The attention has been put on the modeling tool for the socket, which is the most critical component of the whole prosthesis. The main aim has been to redesign and develop a new prosthetic CAD tool, named SMA2 (Socket Modelling Assistant 2), exploiting low-cost IT technologies (e.g., hand/finger tracking devices) and making the user's interaction as natural as possible and similar to hand-made manipulation. The research activities have been carried out in six phases, as described in the following. First, limits and criticalities of the already available modeling tool (namely SMA) were identified. To this end, the first version of SMA was tested with Ortopedia Panini and the orthopedic research group of Salford University in Manchester on real case studies. The main criticalities were related to: (i) automatic reconstruction of the residuum geometric model starting from medical images, (ii) performance of the virtual modeling tools used to generate the socket shape, and (iii) interaction mainly based on traditional devices (e.g., mouse and keyboard). The second phase led to the software reengineering of SMA according to the limits identified in the first phase. The software architecture was redesigned adopting an object-oriented paradigm, and its modularity permits removing or adding features in a very simple way. The new modeling system, i.e., SMA2, has been implemented entirely using open-source Software Development Kits (SDKs), e.g., the Visualization ToolKit (VTK), OpenCASCADE, and the Qt SDK, and is based on low-cost technology. It includes:
    • A new module to automatically reconstruct the 3D model of the residual limb from MRI images. In addition, a new procedure based on low-cost technology, such as the Microsoft Kinect v2 sensor, has been identified to acquire the 3D external shape of the residuum.
    • An open-source software library, named SimplyNURBS, for NURBS modeling, specifically used for the automatic reconstruction of the residuum 3D model from medical images. Even if SimplyNURBS was conceived for the prosthetic domain, it can be used to develop NURBS-based modeling tools for a range of applicative domains, from health care to clothing design.
    • A module for mesh editing to emulate the hand-made operations carried out by orthopedic technicians during the traditional socket manufacturing process. In addition, several virtual widgets have been implemented to provide virtual tools similar to the real ones used by the prosthetist, such as a tape measure and a pencil.
    • A Natural User Interface (NUI) to allow interaction with the residuum and socket models using hand-tracking and haptic devices.
    • A module to generate the geometric models for additive manufacturing of the socket.
    The third phase concerned the study and design of augmented interaction, with particular attention to the Natural User Interface (NUI) for the use of hand-tracking and haptic devices in SMA2. The NUI is based on the Leap Motion device. A set of gestures, mainly iconic and suitable for the considered domain, was identified taking into account ergonomic issues (e.g., arm posture) and ease of use. The modularity of SMA2 permits us to easily generate the software interface for each augmented-interaction device. To this end, a software module, named Tracking plug-in, has been developed to automatically generate the source code of software interfaces for managing the interaction with low-cost hand-tracking devices (e.g., Leap Motion and Intel Gesture Camera) and to replicate/emulate manual operations usually performed to design custom-fit products, such as medical devices and garments. Regarding haptic rendering, two different devices have been considered: the Novint Falcon and a haptic mouse developed in-house. In the fourth phase, additive manufacturing technologies were investigated, in particular FDM. 3D printing was exploited to permit the creation of trial sockets in the laboratory to evaluate the potential of SMA2. Furthermore, research activities were carried out to study new ways to design the socket. An innovative way to build the socket was developed based on multi-material 3D printing. Taking advantage of flexible materials and multi-material printing, new 3D printers permit creating objects with soft and hard parts. In this phase, issues concerning infill, materials, and comfort were addressed and solved by considering different compositions of materials to redesign the socket shape. In the fifth phase, the implemented solution, integrated within the whole prosthesis design platform, was tested with a transfemoral amputee. The following activities were performed:
    • 3D acquisition of the residuum using MRI and commercial 3D scanning systems (low-cost and professional).
    • Creation of the residual limb and socket geometry.
    • Multi-material 3D printing of the socket using FDM technology.
    • Gait analysis of the amputee wearing the socket using a markerless motion capture system.
    • Acquisition of the contact pressure between the residual limb and a trial socket by means of Tekscan's F-Socket system.
    The acquired data were combined in an ad hoc application, which permits simultaneously visualizing the pressure data on the 3D model of the residual lower limb and the animation of the gait analysis. This application made it possible to find correlations between the phases of the gait cycle and the pressure data. The achieved results were considered very interesting, and several tests have been planned in order to try the system in orthopedic laboratories on real cases. These results have been very useful to evaluate the quality of SMA2 as a future instrument that orthopedic technicians can exploit to create real sockets for patients. The solution has the potential to become a commercial product able to replace the classic procedure for socket design. The sixth phase concerned the evolution of SMA2 into a Mixed Reality environment, named Virtual Orthopedic LABoratory (VOLAB). The proposed solution is based on low-cost devices and open-source libraries (e.g., OpenCL and VTK). In particular, the hardware architecture consists of three Microsoft Kinect v2 sensors for human-body tracking, the Oculus Rift DK2 head-mounted display for 3D environment rendering, and the Leap Motion device for hand/finger tracking. The software development was based on the modular structure of SMA2, and dedicated modules were developed to guarantee communication among the devices. At present, two preliminary tests have been carried out: the first to verify the real-time performance of the virtual environment, and the second to verify the augmented interaction with hands using the SMA2 modeling tools. The achieved results are very promising but highlighted some limitations of this first version of VOLAB, and improvements are necessary. For example, the quality of the 3D real-world reconstruction, especially as far as the residual limb is concerned, could be improved by using two HD RGB cameras together with the Oculus Rift. To conclude, the obtained results have been evaluated as very interesting and encouraging by the technical staff of the orthopedic laboratory. SMA2 will make possible an important change in the process of designing the socket of a lower-limb prosthesis, from a traditional hand-made manufacturing process to a totally virtual, knowledge-guided process. The proposed solutions and the results reached so far can be exploited in other industrial sectors where the final product heavily depends on human-body morphology. In fact, preliminary software development has been carried out to create a virtual environment for clothing design starting from the basic modules exploited in SMA2.
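    The local mesh-editing operations that SMA2's virtual tools emulate (pushing socket vertices near a contact point, with influence falling off with distance) can be sketched on a bare vertex array. The function name, falloff, and parameters below are our illustrative assumptions, not the tool's actual API.

```python
import numpy as np

def sculpt(vertices, center, direction, radius=1.0, depth=0.2):
    # Displace each vertex along `direction` by an amount weighted by a
    # Gaussian falloff of its distance from the virtual tool's contact point.
    d = np.linalg.norm(vertices - center, axis=1)
    weight = np.exp(-(d / radius) ** 2)
    return vertices + depth * weight[:, None] * direction

# Two vertices: one at the contact point, one far away on the same surface.
verts = np.array([[0.0, 0.0, 1.0], [0.0, 2.0, 1.0]])
out = sculpt(verts, center=np.array([0.0, 0.0, 1.0]),
             direction=np.array([0.0, 0.0, -1.0]))
```

    The vertex at the contact point is pushed inward by the full depth, while distant vertices are nearly untouched, mimicking the localized rectifications a technician makes by hand on a plaster model.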