
    Fuzz, Penetration, and AI Testing for SoC Security Verification: Challenges and Solutions

    The ever-increasing usage and application of system-on-chips (SoCs) has driven tremendous modernization of these architectures. A modern SoC design, which integrates numerous complex and heterogeneous intellectual properties (IPs) together with their privacy-preserving declarations, contains a wide variety of highly sensitive assets. These assets must be protected from unauthorized access and against a diverse set of attacks. Attacks aimed at obtaining such assets can originate from different sources, including malicious IPs, malicious or vulnerable firmware/software, unreliable and insecure interconnects and communication protocols, and side-channel vulnerabilities exposed through power/performance profiles. Any unauthorized access to such highly sensitive assets may result in either a breach of company secrets for original equipment manufacturers (OEMs) or identity theft for the end user. In contrast to the enormous advances in functional testing and verification of SoC architectures, security verification is still in its infancy, with comparatively little effort from academia and industry, and a huge gap remains between the modernization of SoC architectures and their security verification approaches. Given the lack of automated SoC security verification in modern electronic design automation (EDA) tools, this paper provides a comprehensive overview of the requirements that must be realized as the foundations of the SoC security verification process. By reviewing these requirements, including the creation of a unified language for SoC security verification, the definition of security policies, and the formulation of the security verification problem, we make the case for employing self-refinement techniques, such as fuzz, penetration, and AI testing, for security verification. We evaluate the associated challenges and possible resolutions, and we outline potential approaches for realizing SoC security verification via these self-refinement techniques.
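    To make the fuzz-testing idea concrete, the sketch below is a minimal, purely illustrative security fuzzer: a toy register-lock model (the model, command names and the planted debug backdoor are invented here, not taken from the paper) is exercised with mutated transaction sequences, and any sequence that exposes the protected asset while the lock is engaged is reported as a security violation.

```python
import random

# Toy stand-in for an SoC security model: a secret register that must only be
# readable after the correct unlock sequence. Everything here is hypothetical,
# not the paper's benchmark or tool.
class ToySoCModel:
    UNLOCK_SEQUENCE = [0xA5, 0x5A, 0xC3]

    def __init__(self):
        self.unlock_progress = 0
        self.unlocked = False

    def step(self, cmd, value):
        """Apply one (command, value) transaction; return the asset if exposed."""
        if cmd == "write_unlock" and not self.unlocked:
            if value == self.UNLOCK_SEQUENCE[self.unlock_progress]:
                self.unlock_progress += 1
                self.unlocked = self.unlock_progress == len(self.UNLOCK_SEQUENCE)
            else:
                self.unlock_progress = 0      # wrong value resets the sequence
        elif cmd == "read_secret":
            return "SECRET" if self.unlocked else None
        elif cmd == "debug_read":
            return "SECRET"                   # planted flaw: debug port bypasses the lock
        return None

COMMANDS = ["write_unlock", "read_secret", "debug_read"]

def mutate(seq):
    """Randomly extend, tweak or duplicate part of a transaction sequence."""
    seq = list(seq)
    op = random.choice(["append", "tweak", "dup"])
    if op == "append" or not seq:
        seq.append((random.choice(COMMANDS), random.randrange(256)))
    elif op == "tweak":
        i = random.randrange(len(seq))
        seq[i] = (random.choice(COMMANDS), random.randrange(256))
    else:
        seq += seq[: random.randrange(1, len(seq) + 1)]
    return seq

def fuzz(iterations=20000):
    corpus = [[("read_secret", 0)]]           # seed corpus
    seen_states = set()
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        soc = ToySoCModel()
        for cmd, value in candidate:
            locked_before = not soc.unlocked
            if soc.step(cmd, value) == "SECRET" and locked_before:
                return candidate              # security property violated
        state = (soc.unlock_progress, soc.unlocked)
        if state not in seen_states:          # crude coverage feedback
            seen_states.add(state)
            corpus.append(candidate)
    return None

if __name__ == "__main__":
    violation = fuzz()
    print("violating sequence:", violation if violation else "none found")
```

    The coverage feedback here is deliberately crude (new model states enlarge the corpus); real hardware fuzzers would use richer coverage signals and a proper design-under-test interface.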

    Deep Learning in Mobile and Wireless Networking: A Survey

    The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, agile management of network resources to maximize user experience, and extraction of fine-grained real-time analytics. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper we bridge the gap between deep learning and mobile and wireless networking research by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and the state of the art in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.
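    As a generic illustration of one task family the survey covers (mobile traffic forecasting), the sketch below frames a synthetic per-cell traffic series as sliding-window supervised learning. The closed-form ridge regression is only a stand-in for the deep models the survey actually discusses, and none of the data or parameters come from the paper.

```python
import numpy as np

# Toy mobile-traffic forecasting setup: turn an hourly per-cell traffic series
# into (lagged window -> next value) samples. A real system would replace the
# linear predictor with an LSTM/CNN; this sketch uses synthetic data only.

rng = np.random.default_rng(0)
hours = np.arange(24 * 28)                              # four weeks, hourly
traffic = 50 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

def make_windows(series, lag=24):
    """Slice a 1-D series into lagged input windows and next-value targets."""
    X = np.stack([series[i : i + lag] for i in range(series.size - lag)])
    y = series[lag:]
    return X, y

X, y = make_windows(traffic)
split = int(0.8 * len(X))
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]

# Closed-form ridge regression as a simple baseline predictor.
lam = 1.0
A = X_train.T @ X_train + lam * np.eye(X_train.shape[1])
w = np.linalg.solve(A, X_train.T @ y_train)

pred = X_test @ w
print("test RMSE:", float(np.sqrt(np.mean((pred - y_test) ** 2))))
```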

    Models of ICT Innovation. A Focus on the Cinema Sector

    The report starts by looking at the competing and overlapping definitions of creative industries, media and content industries. Chapter 1 investigates the fate of R&D and innovation in the creative industries and in the broader Telecom, Media and Technology sectors. Chapter 2 summarizes past studies on innovation in distinct media and content industries (videogames, music recording and newspaper publishing) and draws some lessons from them. Chapter 3 delves more deeply into the specific case of cinema. This chapter investigates the film industry's complex and evolving relationship with technologies and technological inventions. Chapter 4 offers a short cross-comparison with R&D in the book publishing industry. Chapter 5 deals with policy issues triggered by the observed digital changes. Chapter 6 concludes with a brief assessment of EU strengths and weaknesses, and offers some recommendations.

    Spinoff 1993

    This publication is intended to foster the aim of the NASA Technology Transfer Program by heightening awareness of the NASA technology available for reapplication and its potential for public benefit. The publication is organized in three main sections. Section 1 outlines NASA's mainline effort, the major programs that generate new technology and therefore replenish and expand the bank of knowledge available for transfer. Section 2 contains a representative sampling of spinoff products that resulted from secondary application of technology originally developed to meet mainline goals. Section 3 describes the various mechanisms NASA employs to stimulate technology transfer and lists, in an appendix, contact sources for further information about the Technology Transfer Program.

    An architecture for secure data management in medical research and aided diagnosis

    Programa Oficial de Doutoramento en Tecnoloxías da Información e as Comunicacións. 5032V01
    [Abstract] The General Data Protection Regulation (GDPR) came into effect on 25 May 2018 and is considered the most important development in data privacy regulation in the last 20 years. Heavy fines are imposed for violating its rules, which is not something that healthcare centers can afford to ignore. The main goal of this thesis is to study and propose a secure integration layer for healthcare data curators, where connectivity between isolated systems (locations), unification of records in a patient-centric view, and data sharing with consent approval are the cornerstones of the proposed architecture. This proposal gives the data subject a central role, allowing them to control their identity, privacy profiles and access grants. It aims to minimize the fear of legal liability when sharing medical records by using anonymisation and making patients responsible for securing their own medical records, while preserving the quality of patient treatment. Our main hypothesis is: are the Distributed Ledger and Self-Sovereign Identity concepts a natural symbiosis for solving the GDPR challenges in the context of healthcare? Solutions are required so that clinicians and researchers can maintain their collaboration workflows without compromising regulations. The proposed architecture accomplishes these objectives in a decentralized environment by adopting isolated data privacy profiles.
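    A hypothetical sketch of the patient-controlled access grants described above is shown below: a privacy profile lists which roles may see which record categories, and every grant is consent-bound and time-limited. The thesis realises this on a distributed ledger with self-sovereign identities; this sketch only mimics the policy check in plain Python, and all names and categories are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessGrant:
    role: str                 # e.g. "cardiologist", "researcher"
    categories: set           # record categories the role may read
    expires: datetime
    anonymised_only: bool = False

@dataclass
class PrivacyProfile:
    patient_id: str
    grants: list = field(default_factory=list)

    def may_access(self, role, category, anonymised, now=None):
        """Return True if some unexpired grant covers this request."""
        now = now or datetime.now(timezone.utc)
        for g in self.grants:
            if g.role == role and category in g.categories and now < g.expires:
                if g.anonymised_only and not anonymised:
                    continue      # researchers only see de-identified data
                return True
        return False

profile = PrivacyProfile(
    patient_id="patient-123",
    grants=[
        AccessGrant("cardiologist", {"imaging", "labs"},
                    datetime(2030, 1, 1, tzinfo=timezone.utc)),
        AccessGrant("researcher", {"labs"},
                    datetime(2030, 1, 1, tzinfo=timezone.utc),
                    anonymised_only=True),
    ],
)

print(profile.may_access("cardiologist", "imaging", anonymised=False))  # True
print(profile.may_access("researcher", "labs", anonymised=False))       # False
print(profile.may_access("researcher", "labs", anonymised=True))        # True
```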

    Bayesian Prognostic Framework for High-Availability Clusters

    Critical services from domains as diverse as finance, manufacturing and healthcare are often delivered by complex enterprise applications (EAs). High-availability clusters (HACs) are software-managed IT infrastructures that enable these EAs to operate with minimum downtime. To that end, HACs monitor the health of EA layers (e.g., application servers and databases) and resources (i.e., components), and attempt to reinitialise or restart failed resources swiftly. When this is unsuccessful, HACs try to fail over (i.e., relocate) the resource group to which the failed resource belongs to another server. If the resource group failover is also unsuccessful, or when a system-wide critical failure occurs, HACs initiate a complete system failover. Despite the availability of multiple commercial and open-source HAC solutions, these HACs (i) disregard important sources of historical and runtime information, and (ii) have limited reasoning capabilities. Therefore, they may conservatively perform unnecessary resource group or system failovers, or delay justified failovers for longer than necessary. This thesis introduces the first HAC taxonomy, uses it to carry out an extensive survey of current HAC solutions, and develops a novel Bayesian prognostic (BP) framework that addresses the significant HAC limitations mentioned above and identified by the survey. The BP framework comprises four modules. The first module is a technique for modelling high availability using a combination of established and new HAC characteristics. The second is a suite of methods for obtaining and maintaining the information required by the other modules. The third is a HAC-independent Bayesian decision network (BDN) that predicts whether resource failures can be managed locally (i.e., without failovers). The fourth is a method for constructing a HAC-specific Bayesian network for the fast prediction of resource group and system failures. Used together, these modules significantly reduce the downtime of HAC-protected EAs. The experiments presented in this thesis show that the BP framework can deliver downtimes 5.5 to 7.9 times lower than those obtained with an established open-source HAC.
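    A hypothetical sketch of the Bayesian decision at the heart of such a framework is shown below: given evidence about a failed resource, estimate the probability that a local restart will succeed, then pick the action with the lowest expected downtime. All probabilities and costs are made-up numbers, not values from the thesis; the real framework learns them from historical and runtime data and uses a full Bayesian decision network rather than this naive-Bayes toy.

```python
def posterior_local_recovery(p_prior, likelihoods, evidence):
    """P(local restart succeeds | evidence) via naive Bayes over binary symptoms."""
    p_ok, p_fail = p_prior, 1.0 - p_prior
    for symptom, observed in evidence.items():
        l_ok, l_fail = likelihoods[symptom]   # P(symptom | ok), P(symptom | fail)
        p_ok *= l_ok if observed else (1.0 - l_ok)
        p_fail *= l_fail if observed else (1.0 - l_fail)
    return p_ok / (p_ok + p_fail)

# Assumed P(symptom | restart would succeed), P(symptom | restart would fail).
likelihoods = {
    "oom_killed":        (0.05, 0.60),
    "disk_errors":       (0.02, 0.40),
    "transient_timeout": (0.50, 0.30),
}
evidence = {"oom_killed": False, "disk_errors": False, "transient_timeout": True}

p_local = posterior_local_recovery(0.8, likelihoods, evidence)

# Expected downtime (seconds) of each action, using assumed costs.
restart_time, failover_time = 30, 180
expected = {
    "restart_locally": p_local * restart_time
                       + (1 - p_local) * (restart_time + failover_time),
    "failover_now":    failover_time,
}
print(f"P(local recovery) = {p_local:.2f}")
print("chosen action:", min(expected, key=expected.get))
```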

    Designing for adaptability in architecture

    The research is framed on the premise that designing buildings that can adapt, accommodating change more easily and cost-effectively, provides an effective means to a desired end: a more sustainable built environment. In this context, adaptability can be viewed as a means to decrease the amount of new construction (reduce), (re)activate underused or vacant building stock (reuse) and enhance disassembly/deconstruction of components (reuse, recycle), prolonging the useful life of buildings (reduce, reuse, recycle). The aim of the research is to gain a holistic overview of the concept of adaptability in the construction industry and provide an improved framework to design for, deploy and implement adaptability. An over-arching research question was posited to guide the inquiry: how can architects understand, communicate, design for and test the concept of adaptability in the context of the design process? The research followed Dubois and Gadde's (2002) systematic combining as an over-arching approach that continuously moves between the empirical world and theoretical models, allowing the co-evolution of data collection and theory from the beginning as part of a non-linear process with the objective of matching theory with reality. An initial framework was abducted from a preliminary collection of data, from which a set of mixed research methods was deployed to explore adaptability (interviews, building case studies, dependency structure matrices, practitioner surveys and a workshop). Emergent from the data is an expanded and revised theory on designing for adaptability consisting of concepts, models and propositions. The models illustrate many of the causal links between the physical design structure of the building (e.g. plan depth, storey height) and the soft contingencies of a messy design/construction/occupation process (e.g. procurement route, funding methods, stakeholder mindsets). In an effort to enhance building adaptability, the abducted propositions suggest a shift in the way the industry values buildings and conducts aspects of the design process, and in how designers approach designing for adaptability.
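    The dependency structure matrices used as one of the research methods can be illustrated with a toy example: the sketch below builds a small DSM over invented building components and ranks them by how many other components a change would propagate to. The components and dependencies are hypothetical, not taken from the thesis case studies.

```python
import numpy as np

# Toy dependency structure matrix (DSM): entry [i, j] = 1 means that changing
# component j forces rework on component i. Components and links are invented.
components = ["structure", "facade", "services", "space plan", "fit-out"]
dsm = np.array([
    # str  fac  svc  spc  fit   <- a change to this column affects the row
    [  0,   0,   0,   0,   0],  # structure
    [  1,   0,   0,   0,   0],  # facade
    [  1,   1,   0,   0,   0],  # services
    [  1,   0,   1,   0,   0],  # space plan
    [  0,   1,   1,   1,   0],  # fit-out
])

# Column sums: how many other components a change to each one propagates to.
impact = dsm.sum(axis=0)
for name, score in sorted(zip(components, impact), key=lambda x: -x[1]):
    print(f"{name:<11} affects {score} other component(s) when changed")
```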

    Computer-Generated Holography for Areal Additive Manufacture

    With a market of approximately $10B, additive manufacture (AM) is an exciting next-generation technology with the promise of significant environmental and societal impact. AM promises to help reduce emissions and waste during manufacture while improving sustainability. Although widely used in applications from hip implants to jet engines, AM remains the domain of experts due to the material and thermal challenges encountered. AM in metals is dominated by Laser Powder Bed Fusion (L-PBF). Powder is spread in layers tens of microns thick and selectively melted by scanning a small laser spot heat source over the bed. Traditional AM systems have limited ability to manage or compensate for the heat generated. The rapidly moving heat-source spot results in high thermal cycling and is a major influence on residual stress and distortion. Mechanical limitations in the galvo scanner mean that over- or under-heating is common and can lead to voids, boiling and spatter. The scale difference between the part size and the spot size means that predictive modelling is beyond the scope of even today's best computing clusters. These factors frequently make it impossible to ensure part quality without physical prototyping and destructive testing. This thesis sets out initial research into creating a radically new AM process that uses computer-generated holography (CGH) to produce complex light patterns in a single pulse. Projecting power to the whole layer at once means that the thermal properties of the powders before and after writing can be factored into the processed hologram and part design; it also significantly reduces thermal gradients and melt-pool instability. The fields of additive manufacture and computer-generated holography are introduced in Chapter 1. Chapters 2 and 3 then provide more detail on CGH and AM modelling respectively. The first deliverable, a reusable software package capable of generating holograms, is presented in Chapter 4, and the algorithms developed for the project are introduced in Section 4.3. The first project demonstrator, an AM machine capable of printing in resins using holographic projection, is discussed in Section 6.2. This demonstrator shows performance comparable to modern 3D printing machines and highlights the applicability of computer-generated holography to areal processes. Section 6.3 then discusses the ongoing development of a metal powder demonstrator; as this PhD forms the first stage of a larger project, only preliminary work on the powder demonstrator is discussed. Chapter 7 then draws conclusions and outlines the way forward for future research. Appendices A and B provide an in-depth discussion of algorithm performance, Appendices C and D discuss digressions into the implementation, Appendices E and F present a laser-induced damage threshold (LIDT) measurement system developed for this work, Appendices G and H provide more detail on the software developed, and Appendix I gives links to additional project resources.
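    As a minimal, self-contained sketch of the kind of hologram computation the thesis builds on, the code below runs the standard Gerchberg-Saxton algorithm to find a phase-only SLM pattern whose far-field intensity approximates a target image. It is a textbook baseline, not one of the thesis's own algorithms, and the target and parameters are invented for illustration.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50, rng=None):
    """Standard Gerchberg-Saxton loop for a phase-only hologram."""
    rng = rng or np.random.default_rng(0)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        # Replay plane: impose the target amplitude, keep the current phase.
        far_field = target_amplitude * np.exp(1j * phase)
        # Back-propagate to the SLM plane and keep only the phase
        # (a phase-only SLM cannot modulate amplitude).
        slm_phase = np.angle(np.fft.ifft2(np.fft.ifftshift(far_field)))
        # Forward-propagate the unit-amplitude, phase-only field.
        replay = np.fft.fftshift(np.fft.fft2(np.exp(1j * slm_phase)))
        phase = np.angle(replay)
    return slm_phase

# Target: a bright square in the centre of a 128x128 replay field.
target = np.zeros((128, 128))
target[48:80, 48:80] = 1.0

hologram_phase = gerchberg_saxton(target)
replay_intensity = np.abs(
    np.fft.fftshift(np.fft.fft2(np.exp(1j * hologram_phase)))) ** 2
print("mean intensity in target region vs overall:",
      float(replay_intensity[48:80, 48:80].mean() / replay_intensity.mean()))
```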

    Faculty Publications & Presentations, 2005-2006
