Adapting the ADVANCE group program for digitally-supported delivery to reduce intimate partner violence by men in substance use treatment: a feasibility study
Introduction: COVID-19 restrictions created barriers to “business as usual” in healthcare but also opened the door to innovation driven by necessity. This manuscript (1) describes how ADVANCE, an in-person group perpetrator program to reduce intimate partner violence (IPV) against female (ex)partners by men in substance use treatment, was adapted for digitally-supported delivery (ADVANCE-D), and (2) explores the feasibility and acceptability of delivering ADVANCE-D to men receiving substance use treatment.
Methods: First, the person-based approach and the mHealth development framework were used to iteratively adapt ADVANCE for digitally-supported delivery, covering conceptualization, formative research, and pre-testing. Then, a non-randomized feasibility study was conducted to assess male participants’ eligibility, recruitment, and attendance rates, as well as uptake of the support offered to their (ex)partners. Exploratory analyses of reductions in IPV perpetration (assessed using the Abusive Behavior Inventory; ABI) and victimization (assessed using the revised ABI; ABI-R) at the end of the program were performed. Longitudinal qualitative interviews with participants, their (ex)partners, and staff provided an understanding of the program’s implementation, acceptability, and outcomes.
Results: The adapted ADVANCE-D program includes one goal-setting session, seven online group sessions, 12 self-directed website sessions, and 12 coaching calls, with enhanced risk management and support for (ex)partners. Forty-five participants who had perpetrated IPV in the past 12 months were recruited, 40 of whom were offered ADVANCE-D, attending 11.4 (SD 9.1) sessions on average. Twenty-one (ex)partners were recruited, 13 of whom accepted specialist support. Reductions in some IPV perpetration and victimization outcome measures were reported by the 25 participants and 11 (ex)partners, respectively, who were interviewed pre- and post-program. Twenty-two participants, 11 (ex)partners, 12 facilitators, and seven integrated support service workers were interviewed at least once about their experiences of participation. Overall, the program content was well received. Some participants and facilitators believed digital sessions offered increased accessibility.
Conclusion: The digitally-supported delivery of ADVANCE-D was feasible and acceptable. Remote delivery has applicability post-pandemic, providing greater flexibility and access. Given the small sample size and study design, we cannot attribute the observed reductions in IPV to ADVANCE-D rather than to time, participant factors, or chance. More research is needed before conclusions can be drawn about the efficacy of ADVANCE-D.
Integrated Approaches to Digital-enabled Design for Manufacture and Assembly: A Modularity Perspective and Case Study of Huoshenshan Hospital in Wuhan, China
Countries are seeking to expand their healthcare capacity through advanced construction, modular innovation, digital technologies, and integrated design approaches such as Design for Manufacture and Assembly (DfMA). In China, there is a need for stronger implementation of digital technologies and DfMA, as well as a knowledge gap regarding how digital-enabled DfMA is implemented in practice. More critically, an integrated approach is needed that goes beyond DfMA guidelines and digital-enabled methods alone.
For this research, a mixed-methods approach was used. Questionnaires established the study context, namely healthcare construction in China. Huoshenshan Hospital then provided a case study of the first emergency hospital built to address the uncertainty of COVID-19. This extreme project, a 1,000-bed hospital built in 10 days, implemented DfMA in healthcare construction and provides an opportunity to examine the use of modularity. A workshop with a design institution provided basic facts and insight into past practice, and was followed by interviews with 18 designers from various design disciplines who were involved in the project. Finally, multiple archival materials were used as secondary data sources.
It was found that complexity hinders building systems integration, while reinforcement relationships between multiple dimensions of modularity (across the organisation, process, product, and supply chain dimensions) are the underlying mechanism that allows complexity to be reduced and building systems to be integrated. Promoting integrated approaches to DfMA relies on adjusting and coupling multi-dimensional modular reinforcement relationships (namely, relationships of modular alignment, modular complement, and modular incentive). Building systems integrators can therefore use these three approaches to increase the success of digital-enabled DfMA.
Edge-enhanced QoS aware compression learning for sustainable data stream analytics
Existing cloud systems involve large volumes of data streams being sent to a centralised data centre for monitoring, storage, and analytics. Migrating all the data to the cloud is often not feasible due to cost, privacy, and performance concerns, yet machine learning (ML) algorithms typically require significant computational resources and therefore cannot be directly deployed on resource-constrained edge devices for learning and analytics. Edge-enhanced compressive offloading is a sustainable alternative: data is compressed at the edge and offloaded to the cloud for further analysis, reducing bandwidth consumption and communication latency. The design and implementation of a learning method for discovering the compression techniques that offer the best Quality of Service (QoS) for an application are described. The approach uses a novel modularisation scheme that maps features to models and classifies them over a range of QoS features. An automated QoS-aware orchestrator selects the best autoencoder model in real time for compressive offloading in edge-enhanced clouds, based on changing QoS requirements, and has diagnostic capabilities to search for the parameters that give the best compression. A key novelty of this work is harnessing the capabilities of autoencoders for edge-enhanced compressive offloading based on portable encodings, latent space splitting, and fine-tuning of network weights. Because the system considers how combinations of features lead to different QoS models, it can process a large number of user requests in a given time. The proposed hyperparameter search strategy (over the neural architectural space) reduces the computational cost of searching the entire space by up to 89%. When deployed on an edge-enhanced cloud using an Azure IoT testbed, the approach saves up to 70% in data transfer costs and takes 32% less time for job completion. It also eliminates the additional computational cost of decompression, thereby reducing the processing cost by up to 30%.
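As an illustration of the selection step described above, the sketch below shows one way a QoS-aware orchestrator might choose among pre-profiled autoencoders: filter out candidates that violate the current QoS constraints, then pick the highest compression ratio among the survivors. The model entries, QoS keys, and thresholds are hypothetical stand-ins, not the paper's actual interface.

```python
def select_autoencoder(models, qos):
    """Toy QoS-aware orchestrator: keep only autoencoders that satisfy
    the latency and reconstruction-quality constraints, then pick the
    one offering the highest compression ratio."""
    feasible = [m for m in models
                if m["latency_ms"] <= qos["max_latency_ms"]
                and m["psnr_db"] >= qos["min_psnr_db"]]
    if not feasible:
        raise ValueError("no autoencoder satisfies the QoS constraints")
    return max(feasible, key=lambda m: m["compression_ratio"])

# Example: three candidate encoders with (hypothetical) profiled QoS figures.
candidates = [
    {"name": "ae_small", "latency_ms": 5,  "psnr_db": 28, "compression_ratio": 12},
    {"name": "ae_mid",   "latency_ms": 12, "psnr_db": 33, "compression_ratio": 8},
    {"name": "ae_large", "latency_ms": 30, "psnr_db": 38, "compression_ratio": 5},
]
print(select_autoencoder(candidates, {"max_latency_ms": 15, "min_psnr_db": 30})["name"])  # ae_mid
```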
Barry Turner: The under-acknowledged safety pioneer
Barry Turner’s 1978 Man-made Disasters and Charles Perrow’s 1984 Normal Accidents were seminal books, but a detailed comparison of them has yet to be undertaken. Such a comparison is important for establishing the content and priority of key ideas underpinning contemporary safety science. Turner’s research found socio-technical and systemic patterns which meant that major organisational disasters could be foreseen and were preventable. Perrow’s macro-structuralist industry focus was on technologically deterministic but unpredictable and unpreventable “system” accidents, particularly rare catastrophes. Andrew Hopkins and Nick Pidgeon have each suggested that some prominent writers who came after Turner may not have been aware of, or did not properly acknowledge, his work. Using a methodology involving systematic reading and historical, biographical, and thematic theory analysis, a detailed review of Turner’s and Perrow’s backgrounds and publications sheds new light on Turner’s priority and accomplishment, highlighting substantial similarities as well as clear differences. Normal Accidents did not cite Turner in 1984 or when republished with major additions in 1999. Turner became better known after the 1997 second edition of Man-made Disasters, but under-acknowledgment by Perrow and others continued. Ethical citation and potential reasons for under-acknowledgment are discussed, together with lessons applicable more broadly. It is concluded that Turner’s foundational importance for safety science should be better recognised.
Novel neural architectures & algorithms for efficient inference
In the last decade, the machine learning community embraced deep neural networks (DNNs) wholeheartedly with the advent of neural architectures such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), and transformers. These models have empowered many applications, such as ChatGPT and Imagen, and have achieved state-of-the-art (SOTA) performance on many vision, speech, and language modeling tasks. However, SOTA performance comes at a cost: large model size, compute-intensive training, increased inference latency, and higher working memory. This thesis aims to improve the resource efficiency of neural architectures, i.e., to significantly reduce the computational, storage, and energy consumption of a DNN without any significant loss in performance.
Towards this goal, we explore novel neural architectures as well as training algorithms that allow low-capacity models to achieve near SOTA performance. We divide this thesis into two dimensions: \textit{Efficient Low Complexity Models}, and \textit{Input Hardness Adaptive Models}.
Along the first dimension, i.e., \textit{Efficient Low Complexity Models}, we improve DNN performance by addressing instabilities in the existing architectures and training methods. We propose novel neural architectures inspired by ordinary differential equations (ODEs) to reinforce input signals and attend to salient feature regions. In addition, we show that carefully designed training schemes improve the performance of existing neural networks. We divide this exploration into two parts:
\textsc{(a) Efficient Low Complexity RNNs.} We improve RNN resource efficiency by addressing poor gradients, noise amplification, and issues with backpropagation-through-time (BPTT) training. First, we improve RNNs by solving ODEs that eliminate vanishing and exploding gradients during training. To do so, we present Incremental Recurrent Neural Networks (iRNNs), which keep track of increments on the equilibrium surface. Next, we propose Time Adaptive RNNs, which mitigate noise propagation in RNNs by modulating the time constants in the ODE-based transition function. We empirically demonstrate the superiority of ODE-based neural architectures over existing RNNs. Finally, we propose the Forward Propagation Through Time (FPTT) algorithm for training RNNs and show that FPTT yields significant gains compared to the conventional BPTT scheme.
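To make the ODE view concrete, here is a minimal sketch of an incremental, Euler-style recurrent update in which the hidden state takes small steps toward an equilibrium instead of being replaced wholesale, keeping the recurrent map close to the identity. The step size, number of inner steps, and tanh dynamics are illustrative assumptions, not the thesis' exact iRNN formulation.

```python
import numpy as np

def irnn_step(h, x, W, U, b, eta=0.1, k=3):
    """Incremental (ODE-inspired) recurrent update: take k Euler steps
    of dh/dt = tanh(W h + U x + b) - h, nudging h toward equilibrium
    rather than overwriting it. Illustrative only."""
    for _ in range(k):
        h = h + eta * (np.tanh(W @ h + U @ x + b) - h)
    return h

# Tiny usage example with random weights over a length-5 input sequence.
rng = np.random.default_rng(0)
d, n = 4, 3
h = np.zeros(d)
W, U, b = rng.normal(size=(d, d)), rng.normal(size=(d, n)), np.zeros(d)
for x in rng.normal(size=(5, n)):
    h = irnn_step(h, x, W, U, b)
print(h.shape)  # (4,)
```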
\textsc{(b) Efficient Low Complexity CNNs.} Next, we improve CNN architectures by reducing their resource usage. CNNs require greater depth to generate high-level features, resulting in computationally expensive models. We design a novel residual block, the Global layer, which constrains the input and output features by approximately solving partial differential equations (PDEs). It yields larger receptive fields than traditional convolutional blocks and thus results in shallower networks. Further, we reduce the model footprint by enforcing a novel inductive bias that formulates the output of a residual block as a spatial interpolation between high-compute anchor pixels and cheaper low-compute pixels. This results in spatially interpolated convolutional blocks (SI-CNNs) with better compute-performance trade-offs. Finally, we propose an algorithm that enforces various distributional constraints during training in order to achieve better generalization; we refer to this scheme as distributionally constrained learning (DCL).
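A minimal sketch of the spatial-interpolation idea follows, assuming an expensive path evaluated only at strided anchor pixels whose output is upsampled (nearest-neighbour, for brevity) and mixed with a cheap full-resolution path. heavy_fn, light_fn, and the 50/50 mixing are illustrative stand-ins rather than the thesis' actual SI-CNN block.

```python
import numpy as np

def si_block(x, heavy_fn, light_fn, stride=2):
    """Spatially interpolated residual block (sketch): compute costly
    features only at anchor pixels on a coarse grid, upsample them, and
    mix with a cheap full-resolution path as the residual."""
    anchors = heavy_fn(x[:, ::stride, ::stride])             # costly features at anchors
    up = np.repeat(np.repeat(anchors, stride, axis=1), stride, axis=2)
    up = up[:, : x.shape[1], : x.shape[2]]                   # crop back to input size
    return x + 0.5 * up + 0.5 * light_fn(x)                  # residual mixing

# Usage with toy shape-preserving stand-ins for convolutional sub-blocks.
x = np.random.randn(8, 32, 32)                               # (channels, H, W)
out = si_block(x, heavy_fn=np.tanh, light_fn=lambda t: 0.1 * t)
print(out.shape)  # (8, 32, 32)
```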
In the second dimension, i.e., \textit{Input Hardness Adaptive Models}, we introduce the notion of the hardness of an input relative to an architecture. Along the first dimension, a neural network allocates the same resources, such as compute, storage, and working memory, to all inputs; it implicitly assumes that all examples are equally hard for a model. Here we challenge this assumption, reasoning that some inputs are easier for a network to predict than others. Input hardness enables us to create selective classifiers in which a low-capacity network handles simple inputs while abstaining from predictions on complex ones. Next, we create hybrid models that route the hard inputs from the low-capacity abstaining network to a high-capacity expert model, and we design various architectures that adhere to this hybrid inference style. Further, input hardness enables us to selectively distill the knowledge of a high-capacity model into a low-capacity model by discarding hard inputs during the distillation procedure.
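One simple way to realise such routing is sketched below, under the assumption that input hardness is proxied by the small model's confidence; the threshold rule and the model interfaces are hypothetical, not the specific abstention mechanisms developed in the thesis.

```python
import numpy as np

def hybrid_predict(x, small_model, large_model, threshold=0.9):
    """Hardness-adaptive inference (sketch): the low-capacity model
    predicts when confident and abstains otherwise, deferring the hard
    input to the high-capacity expert."""
    probs = small_model(x)                          # assumed to return class probabilities
    if probs.max() >= threshold:                    # "easy" input: cheap path suffices
        return int(np.argmax(probs)), "small"
    return int(np.argmax(large_model(x))), "large"  # "hard" input: expert path

# Toy usage with stub models over three classes.
small = lambda x: np.array([0.05, 0.92, 0.03])      # confident, handled locally
large = lambda x: np.array([0.10, 0.20, 0.70])
print(hybrid_predict(None, small, large))           # (1, 'small')
```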
Finally, we conclude this thesis by sketching out several promising future research directions that emerge as extensions of the ideas explored in this work.
ESSA Identification of National (Sector) VET Qualification and Skills (Regulatory) Frameworks for Steel
Digital transformation and climate change represent the main drivers of innovation for European industry. In particular, green and digital technologies help to increase energy and resource efficiency and contribute to keeping materials in use for longer.
However, the right skills are needed to implement, operate, and exploit these technologies to best effect. The ESSA project has developed a sector-driven Blueprint following a bottom-up social innovation process to address skills needs, integrating all the relevant stakeholders (companies, training providers, research institutions, associations, social partners, policy makers, public administration, and civil society organisations). It has identified where there is a need for re-skilling, up-skilling, and talent recruitment, and has set out strategies for developing a highly skilled workforce, proactively addressing skills gaps, and engaging the workforce with new technological innovations. As part of the Blueprint, we offer policy recommendations to support these strategies and address the deep transformations the industry is currently experiencing.
We first present general policy recommendations. Second, we offer policy recommendations at three levels, to provide further contextualisation: European, national, and regional. Third, we present recommendations related to the specific support of small and medium-sized enterprises (SMEs).
Focused categorization power of ontologies: General framework and study on simple existential concept expressions
When reusing existing ontologies for publishing a dataset in RDF (or for developing a new ontology), preference may be given to those providing extensive subcategorization for important classes (denoted as focus classes). The subcategories may consist not only of named classes but also of compound class expressions. We define the notion of focused categorization power (FCP) of a given ontology, with respect to a focus class and a concept expression language, as the (estimated) weighted count of the categories that can be built from the ontology’s signature, conform to the language, and are subsumed by the focus class. For the sake of tractable initial experiments, we then formulate a restricted concept expression language based on existential restrictions and heuristically map it to syntactic patterns over ontology axioms (so-called FCE patterns). The characteristics of the chosen concept expression language and the associated FCE patterns are investigated using three different empirical sources derived from ontology collections: first, the frequency of concept expression patterns in class definitions; second, the occurrence of FCE patterns in the TBox of ontologies; and last, the ‘meaningfulness’ of class expressions generated from the TBox of ontologies (through the FCE patterns), as assessed by different groups of users, yielding a ‘quality ordering’ of the concept expression patterns. These complementary analyses are then compared and summarized. To allow for further experimentation, a web-based prototype was also implemented, covering the whole process of ontology reuse, from keyword-based ontology search through FCP computation to the selection of ontologies and their enrichment with new concepts built from compound expressions.
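Read literally, the FCP definition above admits a compact formalisation. The following is one plausible notation sketch (our rendering, not the paper's), where $\Sigma(O)$ is the ontology's signature, $\mathcal{L}(\Sigma(O))$ the set of class expressions the language can build over that signature, $F$ the focus class, and $w$ a weighting function over categories:

\[
\mathrm{FCP}(O, F, \mathcal{L}) \;=\; \sum_{\substack{C \,\in\, \mathcal{L}(\Sigma(O)) \\ O \,\models\, C \sqsubseteq F}} w(C)
\]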
- …