
    Multimodal Robotic Health in Future Factories Through IIoT, Data Analytics, and Virtual Commissioning

    The manufacturing sector is continuously reinventing itself by embracing opportunities offered by the industrial internet of things and big data, among other advances. Modern manufacturing platforms are defined by the quest for ever-increasing automation along all aspects of the production cycle. Furthermore, in the coming decades, research and industry are expected to develop a large variety of autonomous robots for a wide range of tasks and environments, enabling future factories. This continuing pressure towards automation dictates that emergent technologies be leveraged in a manner that suits this purpose. These challenges can be addressed through advanced methods such as (1) large-scale simulation, (2) system health monitoring sensors and (3) advanced computational technologies, used to establish a life-like digital manufacturing platform and to capture, represent, predict, and control the dynamics of a live manufacturing cell in a future factory. Autonomy is a desirable quality for robots in manufacturing, particularly when the robot needs to act in real-world environments together with other agents, and when the environment changes in unpredictable or uncertain ways. This dissertation research focuses on experimentally collecting sensor signals from force sensors, motor voltages, robot monitors and thermal cameras, and connecting them to such digital twin systems so that more accurate real-time plant descriptions can be collected and shared between stakeholders. Creating a future factory based on an Industrial Internet of Things (IIoT) platform, together with data-driven science and engineering solutions, will help accelerate Smart Manufacturing innovation. In addition, this study examines ways of sharing knowledge between robots, and between different subsystems of a single robot, and implements concepts for communicating knowledge that are machine-interpretable and reliable. The work will focus on applying the proposed methodology to more diverse manufacturing tasks and material flows, including collaborative assembly jobs, visual inspection, and continuous movement tasks. These tasks will require higher-dimensional information, such as analog plant signals and machine vision feedback, to be fed into and used to train the digital twin.
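As an illustration of how such multimodal readings could reach a digital twin, the following minimal Python sketch publishes health signals over MQTT, a protocol commonly used on IIoT platforms. The broker address, topic name, and sensor fields are hypothetical placeholders, not details taken from the dissertation.

    import json, time, random
    import paho.mqtt.client as mqtt

    BROKER = "iiot-broker.local"   # hypothetical broker address
    TOPIC = "cell01/robot/health"  # hypothetical topic naming scheme

    def read_sensors():
        # Placeholder readings; a real cell would sample force sensors,
        # motor voltages, and thermal cameras instead of random values.
        return {
            "timestamp": time.time(),
            "force_n": random.gauss(12.0, 0.5),
            "motor_voltage_v": random.gauss(48.0, 0.2),
            "joint_temp_c": random.gauss(41.0, 1.0),
        }

    # paho-mqtt 1.x style; version 2.x additionally requires a
    # CallbackAPIVersion argument to Client().
    client = mqtt.Client()
    client.connect(BROKER, 1883)
    client.loop_start()
    while True:
        client.publish(TOPIC, json.dumps(read_sensors()), qos=1)
        time.sleep(0.1)  # 10 Hz updates to the digital twin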

    MODELLING VIRTUAL ENVIRONMENT FOR ADVANCED NAVAL SIMULATION

    This thesis proposes a new virtual simulation environment designed as an element of an interoperable federation of simulators to support the investigation of complex scenarios within the Extended Maritime Framework (EMF). The Extended Maritime Framework is a six-space environment (Underwater, Water Surface, Ground, Air, Space, and Cyberspace) in which the parties involved in Joint Naval Operations act. The number of unmanned vehicles involved in the simulation raises the importance of communication modelling, and thus the relevance of Cyberspace. The research is applied to complex cases (one set in deep waters and one in coastal and littoral protection) as examples to validate this approach; these cases involve different kinds of traditional assets (e.g. satellites, helicopters, ships, submarines, underwater sensor infrastructure) interacting dynamically and collaborating with new autonomous systems (i.e. AUVs, gliders, USVs and UAVs). Virtual simulation is used to support the validation of new concepts and the investigation of collaborative engineering solutions by providing a virtual representation of the current situation; this approach supports the creation of a dynamic, interoperable, immersive framework for Man-in-the-Loop training and education as well as tactical decision making, introducing Man-on-the-Loop concepts. The research and development of Autonomous Underwater Vehicles requires continuous testing, so a time-effective approach can be a very useful tool. In this context, simulation can help in understanding the behaviour of unmanned vehicles and in finding problems before costly field experimentation. This research project proposes the creation of a virtual environment with the aim of visualizing and understanding a Joint Naval Scenario. The study focuses especially on the integration of autonomous systems with traditional assets; the proposed simulation deals in particular with collaborative operations involving different types of Autonomous Underwater Vehicles (AUVs), Unmanned Surface Vehicles (USVs) and Unmanned Aerial Vehicles (UAVs). The author develops an interoperable virtual simulation devoted to presenting the overall situation for supervision, considering also sensor capabilities, communications and mission effectiveness, which depend on the interaction of the different assets over a complex heterogeneous network. The aim of this research is to develop a flexible virtual simulation solution as a crucial element of an HLA federation able to address the complexity of the Extended Maritime Framework. Indeed, this new generation of interoperable marine simulation is a strategic asset for investigating the problems related to the operational use of autonomous systems and for finding new ways to employ them in different scenarios. The research deals with the creation of two scenarios, one related to military operations and the other to coastal and littoral protection, in which the virtual simulation presents the overall situation and allows users to navigate the virtual world, taking into account the complex physics affecting movement, perception, interaction and communication. This approach makes evident the capability to identify, through experimental analysis within the virtual world, new solutions in terms of the engineering and technological configuration of the different systems and vehicles, as well as new operational models and tactics to address the specific mission environment.
The case study is a maritime scenario with a representation of heterogeneous network frameworks involving multiple naval and aerial vehicles, including AUVs, USVs, gliders, helicopters, ships, submarines, satellites, buoys and sensors. For clarity, aerial communications are represented separately from underwater ones. A connection point for the latter is set on the keel line of surface vessels, representing communication via acoustic modem. To represent the limits of underwater communications, underwater signals are considerably slowed down to allow a more realistic comparison with aerial ones, and a maximum communication distance is set beyond which no communication can take place. To ensure interoperability, the HLA standard (IEEE 1516 Evolved) is adopted to federate other simulators, allowing extensibility to other case studies. Two different scenarios are modelled in 3D visualization: Open Water and Port Protection. The first aims to simulate interactions between traditional EMF assets, such as satellites, navy ships, submarines, NATO Research Vessels (NRVs) and helicopters, and new-generation unmanned assets such as AUVs, gliders, UAVs and USVs, together with the mutual advantage the subjects involved in the scenario can gain; in other words, the increase in persistence, interoperability and efficacy. The second scenario models the behaviour of unmanned assets, an AUV and a USV, patrolling a harbour to find possible threats; its aim is to develop an algorithm that steers the patrolling path toward an optimum, guaranteeing a high probability of success in the safest way while reducing human involvement in the scenario. End users of the simulation see a graphical 3D representation of the scenario in which the assets are represented. They can move through the scenario using a free camera in the Graphical User Interface (GUI), configured to let users move around the scene and observe the 3D sea scenario; in this way, players can move freely in the synthetic environment and choose the best perspective on the scene. The work is intended to provide a valid tool to evaluate the vulnerability of onshore and offshore critical infrastructures, which could include the use of new technologies to best provide security and to guard against both economic and environmental disasters.
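The communication constraints described above lend themselves to a simple link model. The Python sketch below is one hedged interpretation, with a hard range cutoff and medium-dependent propagation speed; the constants and function are illustrative, not taken from the thesis.

    import math

    SPEED_OF_SOUND_WATER = 1500.0  # m/s, typical acoustic propagation
    SPEED_OF_LIGHT = 3.0e8         # m/s, RF propagation in air

    def link_delay(distance_m, medium, max_range_m):
        """Return one-way propagation delay in seconds, or None if out of range.

        Assumed simplification: a hard maximum communication distance and a
        medium-dependent speed, mirroring the slowed-down underwater links
        described in the scenario.
        """
        if distance_m > max_range_m:
            return None  # beyond maximum communication distance
        speed = SPEED_OF_SOUND_WATER if medium == "underwater" else SPEED_OF_LIGHT
        return distance_m / speed

    # An AUV 3 km from a surface vessel's keel-line acoustic modem:
    print(link_delay(3000, "underwater", max_range_m=5000))  # ~2 s
    # A UAV 3 km from a ship's RF antenna:
    print(link_delay(3000, "aerial", max_range_m=50000))     # ~10 microseconds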

    Building a Simple Smart Factory

    This thesis describes (a) the search for, and findings on, smart factories and their enabling technologies, (b) a methodology to build or retrofit a smart factory, and (c) the building and operation of a simple smart factory using that methodology. A factory is an industrial site with large buildings and a collection of machines, operated by people to manufacture goods and services. Factories are made smart by incorporating sensing, processing, and autonomous responding capabilities. Developments in four main areas have contributed significantly to this incorporation of smartness into factories: (a) sensor capabilities, (b) communication capabilities, (c) the storing and processing of huge amounts of data, and (d) better utilization of technology in management and further development. There is a flurry of literature on each of these four topics and their combinations, and its findings can be summarized as follows. A sensor detects or measures a physical property and records, indicates, or otherwise responds to it; in real time, sensors can make a very large number of observations. The internet is a global computer network providing a variety of information and communication facilities, and the Internet of Things (IoT) is the interconnection, via the internet, of computing devices embedded in everyday objects, enabling them to send and receive data. Big data handling and the provision of data services are achieved through cloud computing. Given the available computing power, big data can be handled and analyzed under different classifications using several different analytics, and the results can be used to trigger autonomous responsive actions that make the factory smart. Having thus reviewed the literature, a seven-step methodology for building or retrofitting a smart factory was established. The seven steps are: (a) situation analysis, in which the condition of the current technology is studied, (b) breakdown prevention analysis, (c) sensor selection, (d) data transmission and storage selection, (e) data processing and analytics, (f) autonomous action network, and (g) integration with the plant units. Experience in a cement factory highlighted that wear in a journal bearing causes plant stoppages and thus warrants a smart system to monitor it and make decisions. This experience was used to develop a laboratory-scale smart factory monitoring the wear of half-journal bearings. To mimic a plant unit, a load-carrying shaft supported by two half-journal bearings was chosen; to mimic a factory with two plant units, two such shafts were used, giving four half-journal bearings to monitor. A USB Logitech C920 webcam operating at full-HD 1080p was used to take pictures at specified intervals, and these pictures were then analyzed to study the wear at those intervals. After this preliminary analysis, wear-versus-time data for all four bearings are available, and the 'making smart' activity begins. Autonomous activities are based on various analyses: the wear-time data are analyzed under different classifications, yielding characteristics such as remaining life, the wear coefficient specific to each bearing, weekly variation in wear, and the condition of adjacent bearings. These can then be used to send a message to the maintenance and supplies division alerting them to the need for an imminent replacement, or to alert them that other bearings are approaching maturity so that a major overhaul can be planned if needed.
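A minimal sketch of the remaining-life analytic described above, assuming a linear wear trend and hypothetical measurements (the actual image-analysis pipeline and thresholds in the thesis are not reproduced here):

    import numpy as np

    # Hypothetical wear measurements (mm) from the webcam image analysis,
    # sampled weekly for one half-journal bearing.
    weeks = np.array([0, 1, 2, 3, 4, 5], dtype=float)
    wear_mm = np.array([0.00, 0.04, 0.09, 0.13, 0.18, 0.22])

    WEAR_LIMIT_MM = 0.50  # assumed replacement threshold

    # Fit a linear wear trend: wear = rate * t + offset.
    rate, offset = np.polyfit(weeks, wear_mm, 1)
    remaining_weeks = (WEAR_LIMIT_MM - offset) / rate - weeks[-1]

    print(f"wear rate: {rate:.3f} mm/week")
    print(f"estimated remaining life: {remaining_weeks:.1f} weeks")
    if remaining_weeks < 2:
        # Autonomous action: alert maintenance and supplies division.
        print("ALERT: schedule bearing replacement")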

    MODELLING & SIMULATION HYBRID WARFARE Researches, Models and Tools for Hybrid Warfare and Population Simulation

    The Hybrid Warfare phenomenon, the subject of the current research, has been framed by the work of Professor Agostino Bruzzone (University of Genoa) and Professor Erdal Cayirci (University of Stavanger), who in June 2016 created a dedicated Exploratory Team to inquire into the subject; the team was endorsed by the NATO Modelling & Simulation Group (a panel of the NATO Science & Technology Organization) and established with the participation of the author. The author brought his personal contribution to ET43 by introducing meaningful insights drawn from “Fight by the Minutes: Time and the Art of War” (1994), written by Lieutenant Colonel (Rtd.) Robert Leonhard, US Army; in that work, Leonhard extensively developed the concept that time, rather than the geometry of the battlefield or firepower, is the critical factor to tackle in military operations and, by extension, in Hybrid Warfare. This critical reflection on time in a hybrid confrontation, in both its quantitative and qualitative dimensions, is addressed and studied in SIMCJOH, a software built around challenges that literally force players to “fight by the minutes”, echoing the core concept of the eponymous work. Hybrid Warfare, which by definition and purpose aims to keep the military commitment of both aggressor and defender at the lowest level, can profit enormously from employing a wide variety of non-military tools and turning them into weapons, as in the “weaponization of mass migrations” examined in the “Dies Irae” simulation architecture. Since migration is currently a very sensitive and divisive issue in the public opinion of many European countries, cynically leveraging a humanitarian emergency caused by an exogenous, induced migration could produce a high level of political and social destabilization, which in turn favours the concurrent actions carried out by other hybrid tools. Other kinds of disruption, however, are already available in the arsenal of Hybrid Warfare, such as cyber threats and information campaigns led by troll factories for the diffusion of fake or altered news. From this perspective the author examines how the TREX (Threat network simulation for REactive eXperience) simulator can offer insights into a hybrid scenario characterized by an intense level of social disruption brought about by cyber-attacks and the systematic faking of news. Furthermore, the rising discipline of “Strategic Engineering”, as envisaged by Professor Agostino Bruzzone, when matched with the operational requirements for countering Hybrid Threats, brings another innovative and powerful tool into the professional toolkit of military and civilian personnel in the Defence and Homeland Security sectors. Hybrid is not the new war. What is new is brought by globalization, paired with the transition to the information age and rising geopolitical tensions, which have put new emphasis on hybrid hostilities that manifest themselves in a contemporary way. Hybrid Warfare is a deliberate choice of an aggressor. While militarily weak nations can resort to it to re-balance the odds, militarily strong nations appreciate its inherent effectiveness coupled with the denial of direct responsibility, thus circumventing the rules of the International Community (IC).
To be successful, Hybrid Warfare should consist of a highly coordinated, sapient mix of diverse and dynamic combinations of regular forces, irregular forces (even criminal elements), cyber disruption and so on, all in order to achieve effects across the entire DIMEFIL/PMESII_PT spectrum. However, the owner of the strategy, i.e. the aggressor, by keeping the threshold of impunity as high as possible and decreasing the willingness of the defender, can keep his Hybrid Warfare at a diplomatically feasible level; the model of capacity, willingness and threshold proposed by Cayirci, Bruzzone and Gunneriusson (2016) therefore remains critical for comprehending Hybrid Warfare, and its dynamic nature is able to capture the evanescent, blurred line between Hybrid Warfare and Conventional Warfare. In such a contest, time is the critical factor: it is hard for the aggressor to foresee how long he can keep up such a strategy without risking either retaliation from the International Community or the depletion of resources across his own DIMEFIL/PMESII_PT spectrum. A similar argument applies to the defender: if he is unable to cope with Hybrid Threats (i.e. takes no action), time works against him; if he is able, he can start to develop counter-narratives and deploy physical countermeasures. However, this can lead, in the medium to long term, to an escalation, unforeseen by both attacker and defender, into a large conventional armed conflict. The performance of operations requiring more than kinetic effects drove the development of DIMEFIL/PMESII_PT models, and this in turn drove the development of Human Social Culture Behavior (HSCB) modelling, which should stand at the core of Hybrid Warfare modelling and simulation efforts. Multi-layer models are fundamental for evaluating strategies and supporting decisions: there are currently favourable conditions for implementing models of Hybrid Warfare, such as Dies Irae, SIMCJOH and TREX, in order to further develop tools and war-games for studying new tactics, carrying out collective training, and supporting decision making and planning analysis. The proposed approach is based on the idea of creating a mosaic of HLA-interoperable simulators that can be combined like tiles to cover an extensive part of Hybrid Warfare, giving users an interactive and intuitive environment based on the “Modelling interoperable Simulation and Serious Game” (MS2G) approach. From this point of view, the impressive capabilities achieved by IA-CGF in human behavior modeling to support population simulation, as well as their native HLA structure, suggest adopting them as the core engine in this application field. However, it is necessary to highlight that, when modelling DIMEFIL/PMESII_PT domains, the researcher must be aware of the bias introduced by the fact that Political and Social “science” in particular are accompanied by, and built around, value judgements. From this perspective, the models proposed by Cayirci, Bruzzone and Gunneriusson (2016) and by Balaban & Mileniczek (2018) are indeed a courageous attempt to import, into the domain of particularly poorly understood phenomena (social, political, and to a lesser degree economic; Hartley, 2016), the mathematical and statistical instruments and methodologies employed by the pure, hard sciences.
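As a purely illustrative aid, and emphatically not the Cayirci, Bruzzone and Gunneriusson model itself (whose formulation is not reproduced here), a toy discrete-time Python sketch of the time dynamic described above might look as follows: the aggressor expends resources each step while visible pressure accumulates, and the campaign ends either by resource depletion or by crossing the reaction threshold. All parameters are hypothetical.

    def hybrid_campaign(resources, threshold, aggression, burn_rate, max_steps=100):
        """Toy sketch: how long can an aggressor sustain hybrid actions?

        Ends when resources run out (aggressor exhausted) or cumulative
        pressure crosses the defender/IC reaction threshold (retaliation).
        """
        pressure = 0.0
        for t in range(max_steps):
            resources -= burn_rate  # depletion across the aggressor's spectrum
            pressure += aggression  # visible effects accumulate over time
            if resources <= 0:
                return t, "resources depleted"
            if pressure >= threshold:
                return t, "threshold crossed: retaliation risk"
        return max_steps, "still below threshold"

    print(hybrid_campaign(resources=10.0, threshold=8.0,
                          aggression=0.5, burn_rate=0.3))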
Nevertheless, merely using the instruments and methodology of the hard sciences is not enough to attain objectivity, and it is in this respect that representations of Hybrid Warfare mechanics could meet their limit: they use as input for the equations representing Hybrid Warfare not physical data observed during a scientific experiment, but observations of reality that implicitly and explicitly embody a value judgement, which can lead to biased output. Such value judgement is subjective, not objective as in the mathematical and physical sciences; when this is not well understood and managed by academics and researchers, it can introduce distortions, unacceptable for the purposes of science, which could also be used to enforce a mainstream narrative containing a so-called “truth” that belongs to the realm of politics rather than science. These observations about the subjectivity of the social sciences versus the objectivity of the pure sciences are nothing new, but they suggest the need to examine the problem from a new perspective, less philosophical and more oriented toward practical application. The suggestion the author makes here is that the Verification and Validation process, in particular the methodology used by Professor Bruzzone in the V&V of SIMCJOH (2016) and the one described in the Modelling & Simulation User Risk Methodology (MURM) developed by Pandolfini, Youngblood et al. (2018), could be applied to evaluate whether a bias exists and to what extent, or at least to make explicit the value judgements adopted in developing the DIMEFIL/PMESII_PT models. Such V&V research is, however, outside the scope of the present work, even though it is an offspring of it, and for this reason the author intends to pursue this particular subject in future inquiries. The theoretical discourse around Hybrid Warfare is then completed by addressing the need to establish a new discipline, Strategic Engineering, made necessary by the current political and economic environment, which allocates diminishing resources to Defence and Homeland Security (at least in Europe). However, Strategic Engineering can successfully address its challenges only when coupled with the understanding and management of the fourth dimension of military and hybrid operations: time. For the reasons above, and as elaborated by Leonhard and extensively discussed in the present work, addressing the concerns posed by the time dimension is necessary for success in any military or hybrid confrontation. The SIMCJOH project, examined from this perspective, proved that the simulator is able to address the fourth dimension of military and non-military confrontation. In operations, time is the most critical factor during execution, and this was successfully transferred into the simulator; as such, SIMCJOH can be viewed both as a training tool and as a dynamic generator of events for MEL/MIL execution during any exercise. In conclusion, the SIMCJOH project successfully faced these new and challenging aspects, and allowed new simulation models to be studied and developed in support of decision makers, Commanders and their Staffs.
Finally, the question posed by Leonhard about recognizing the importance of time management in military operations, and nowadays in hybrid conflict, has not yet been answered; however, the author believes that Modelling and Simulation tools and techniques can provide the safe “tank” in which innovative and advanced scientific solutions can be tested, exploiting the advantage of doing so in a synthetic environment.

    Resource-Independent Computer Aided Inspection


    A process model in platform independent and neutral formal representation for design engineering automation

    An engineering design process, as part of product development (PD), needs to satisfy ever-changing customer demands by striking a balance between time, cost and quality. In order to achieve faster lead times, improved quality and reduced PD costs for increased profits, automation methods have been developed with the help of virtual engineering. There are various methods of achieving Design Engineering Automation (DEA) with Computer-Aided (CAx) tools such as CAD/CAE/CAM, Product Lifecycle Management (PLM) and Knowledge Based Engineering (KBE). For example, Computer-Aided Design (CAD) tools enable Geometry Automation (GA), and PLM systems allow for the sharing and exchange of product knowledge throughout the PD lifecycle. Traditional automation methods are specific to individual products, are hard-coded, and are bound to proprietary tool formats; moreover, existing CAx tools and PLM systems offer bespoke islands of automation compared with KBE. KBE as a design method incorporates complete design intent by including re-usable geometric and non-geometric product knowledge, as well as engineering process knowledge, for DEA across processes such as mechanical design, analysis and manufacturing. It has been recognised, through an extensive literature review, that a research gap exists in the form of a generic and structured method of knowledge modelling, both informal and formal, of the mechanical design process with manufacturing knowledge (DFM/DFA) as part of model-based systems engineering (MBSE) for DEA with a KBE approach. There is a lack of a structured technique for knowledge modelling that provides a standardised way to use platform-independent and neutral formal standards for DEA with generative modelling of the mechanical product design process and DFM with preserved semantics. A neutral formal representation in a computer- or machine-understandable format enables open-standard usage. This thesis contributes to knowledge by addressing this gap in two steps:
    • In the first step, a coherent process model, GPM-DEA, is developed as part of MBSE which can be used for modelling mechanical design with manufacturing knowledge using a hybrid approach based on the strengths of existing modelling standards such as IDEF0, UML and SysML, with the addition of constructs as per the author's metamodel. The structured process model is highly granular, with complex interdependencies such as activity, object, function and rule associations, and captures the effect of the process model on the product at both the component and geometric attribute levels.
    • In the second step, a method is provided to map the schema of the process model to equivalent platform-independent and neutral formal standards using an OWL/SWRL ontology, with system development in the Protégé tool, enabling machine interpretability with semantic clarity for DEA with generative modelling by building queries and reasoning on a set of generic SWRL functions developed by the author.
    Model development was performed with the aid of literature analysis and pilot use cases. Experimental verification with test use cases confirmed the reasoning and querying capability on formal axioms in generating accurate results. Other key strengths are that the knowledge base is generic, scalable and extensible, and hence provides re-usability and wider design-space exploration.
The generative modelling capability allows the model to generate activities and objects based on the functional requirements of the mechanical design process with DFM/DFA, together with rules based on logic. With the help of an application programming interface, a platform-specific DEA system, such as a KBE tool or a CAD tool enabling GA, or a web page incorporating engineering knowledge for decision support, can consume the relevant part of the knowledge base.
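To make the OWL/SWRL mapping concrete, the sketch below shows, in Python with the owlready2 library, how a single illustrative DFM-style rule might be expressed and reasoned over. The ontology IRI, class names and the 3 mm drilling rule are hypothetical examples, not the thesis's actual GPM-DEA schema, and running the reasoner requires a local Java installation.

    from owlready2 import *

    onto = get_ontology("http://example.org/dea.owl")  # hypothetical IRI

    with onto:
        class Hole(Thing): pass
        class hasDiameter(Hole >> float, FunctionalProperty): pass
        class requiresDrilling(Hole >> bool, FunctionalProperty): pass

        # Illustrative SWRL-style DFM rule: any hole narrower than 3 mm
        # is flagged as requiring a drilling operation.
        rule = Imp()
        rule.set_as_rule(
            "Hole(?h), hasDiameter(?h, ?d), lessThan(?d, 3.0)"
            " -> requiresDrilling(?h, true)"
        )

    h = onto.Hole("pilot_hole")
    h.hasDiameter = 2.5
    sync_reasoner_pellet(infer_property_values=True,
                         infer_data_property_values=True)
    print(h.requiresDrilling)  # expected: True after reasoning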

    Assessment of Coronary Artery Disease by Computed Tomography

    MD (Res) thesis.
    BACKGROUND: Computed Tomography Coronary Angiography (CTCA) is a technique for imaging coronary arteries with increasing indications in clinical cardiology.
    AIMS: 1. Develop a heart rate (HR) lowering regime for CTCA and measure its association with image quality. 2. Examine the diagnostic accuracy of 64-slice CTCA (CTCA64) in patients with known coronary artery disease (CAD). 3. Examine the diagnostic accuracy of CTCA64 for the assessment of stent restenosis. 4. Demonstrate the utility of CTCA as an endpoint in the assessment of novel diagnostic biomarkers of CAD.
    METHODS: I developed a HR-reducing strategy using metoprolol and assessed its effectiveness for improving CTCA64 image quality. The diagnostic value of CTCA in patients with suspected angina was evaluated by comparison with invasive coronary angiography. The diagnostic value of CTCA for quantifying stent restenosis was evaluated by comparison with intravascular ultrasound. The utility of CTCA for evaluating the diagnostic value of B-type natriuretic peptide (BNP) and high-sensitivity cardiac troponin I (hs-TnI) was assessed by blood sampling in patients with suspected angina who subsequently underwent CTCA.
    RESULTS: 1. Of 121 patients undergoing CTCA, 75 required rate control. This was achieved (rate ≤60 bpm) in 83% using a systematic regimen of oral and IV metoprolol (n=71) or verapamil (n=4). I demonstrated a significant relation between HR reduction and graded image quality (p<0.001). 2. 80 patients underwent CTCA64 and invasive coronary angiography, with 724 coronary arterial segments available for analysis. The sensitivity and specificity of CTCA for significant luminal stenosis were 83.3% (95% CI 67.1-92.5%) and 96.7% (95% CI 95.1-97.9%), respectively, but the positive predictive value was only 63.5% (95% CI 50.4-75.3%). 3. 80 patients with 125 stented segments underwent CTCA64 and invasive coronary angiography; additional intravascular ultrasound (IVUS) examination of stented segments was performed in 48 patients. Using IVUS as the gold standard for stent restenosis, CTCA and invasive coronary angiography had comparable diagnostic specificities for binary stent restenosis: 82.7% (95% CI 69.7-91.84%) and 78.9% (95% CI 65.3-88.9%), respectively. Sensitivities were lower, particularly that of CTCA, which was only 11.8% (95% CI 1.5-36.4%) compared with 58.8% (95% CI 32.9-81.6%) for invasive coronary angiography. 4. In 93 patients with suspected angina, CTCA64 provided a useful endpoint for assessing the diagnostic value of novel circulating biomarkers. BNP levels were higher in the 13 patients shown to have significant (≥50% stenosis) coronary artery disease than in patients with unobstructed coronary arteries (18.08 pg/ml (IQR 22) vs 9.14 pg/ml (IQR 12.62), p=0.024) and increased significantly with exercise, particularly in the group with anatomic coronary artery disease (2.73 ± 5.69 pg/ml vs 1.27 ± 3.29 pg/ml, p=0.16). Conversely, I found no association between hs-TnI and the presence of CAD.
    CONCLUSION: The image quality of CTCA64 is enhanced by heart rate reduction below 60 bpm, which can be achieved safely by a regimen of oral and intravenous metoprolol. Although CTCA64 is a useful non-invasive method for the diagnosis of coronary artery disease, it has a low positive predictive value for identifying severe (≥50%) luminal stenosis, which limits its clinical value. Its value for the assessment of stent restenosis is even more limited, but it finds useful application as an endpoint for the diagnostic evaluation of novel biomarkers, allowing confirmation of an association between circulating BNP levels and stable coronary artery disease.
    Funding: Barts and London Charitable Trust; Siemens; The Hospital of St John and St Elizabeth.
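The diagnostic accuracy figures above follow from standard confusion-matrix arithmetic. The Python sketch below reproduces the calculation with a Wilson score interval; the per-segment counts are hypothetical values chosen only to illustrate the arithmetic, not the thesis's actual data.

    import math

    def wilson_ci(successes, n, z=1.96):
        """95% Wilson score interval for a proportion."""
        if n == 0:
            return (0.0, 0.0)
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return (centre - half, centre + half)

    # Hypothetical per-segment counts: CTCA vs invasive angiography.
    tp, fn, fp, tn = 35, 7, 20, 662

    sensitivity = tp / (tp + fn)  # true positives among diseased segments
    specificity = tn / (tn + fp)  # true negatives among healthy segments
    ppv = tp / (tp + fp)          # how often a positive call is correct

    print(f"sensitivity {sensitivity:.1%}, CI {wilson_ci(tp, tp + fn)}")
    print(f"specificity {specificity:.1%}, CI {wilson_ci(tn, tn + fp)}")
    print(f"PPV {ppv:.1%}, CI {wilson_ci(tp, tp + fp)}")

Note how a high specificity can still coexist with a modest PPV when disease prevalence per segment is low, which is the pattern the thesis reports.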

    Facilitating Reliable Autonomy with Human-Robot Interaction

    Autonomous robots are increasingly deployed to complex environments in which we cannot predict all possible failure cases a priori. Robustness to failures can be provided by humans enacting the roles of (1) developers, who can iteratively incorporate robustness into the robot system, (2) collocated bystanders, who can be approached for aid, and (3) remote teleoperators, who can be contacted for guidance. However, assisting the robot in any of these roles places demands on the human's time and effort. This dissertation develops modules to reduce the frequency and duration of failure interventions in order to increase the reliability of autonomous robots while also reducing the demands on humans. In pursuit of that goal, the dissertation makes the following contributions: (1) A development paradigm for autonomous robots that separates task specification from error recovery; the paradigm reduces the burden on developers while making the robot robust to failures. (2) A model for gauging the interruptibility of collocated humans; a human-subjects study shows that using the model can reduce the time expended by the robot during failure recovery. (3) A human-subjects experiment on the effects of decision support provided to remote operators during failures; the results show that humans need both diagnosis and action recommendations as decision support during an intervention. (4) An evaluation of model features and unstructured Machine Learning (ML) techniques for learning robust suggestion models from intervention data, in order to reduce developer effort; the results indicate that careful crafting of features can lead to improved performance, but that without such feature selection, current ML algorithms lack robustness in a domain where the robot's observations are heavily influenced by the user's actions.
    Ph.D. dissertation.
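A toy illustration of the feature-crafting point in contribution (4), using synthetic data and a deliberately simple interaction effect; none of this comes from the dissertation's actual intervention logs:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic "intervention log": 12 raw observation channels per episode.
    n = 400
    raw = rng.normal(size=(n, 12))
    # Outcome depends on an interaction between two channels, the kind of
    # structure a developer might know about but a linear model cannot see.
    labels = (raw[:, 0] * raw[:, 3] > 0).astype(int)

    # Hand-crafted feature encoding that known interaction.
    engineered = (raw[:, 0] * raw[:, 3]).reshape(-1, 1)

    for name, X in (("raw channels", raw), ("engineered feature", engineered)):
        acc = cross_val_score(LogisticRegression(), X, labels, cv=5).mean()
        print(f"{name}: mean CV accuracy {acc:.2f}")
    # Typical output: raw channels near chance (~0.5), engineered near 1.0,
    # mirroring the finding that careful feature crafting improves robustness.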