Technical Dimensions of Programming Systems
Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet, this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to improve on past work and advance the state of the art.
In computer science, much has been said and done to allow comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too.
We present a framework of technical dimensions which capture the underlying characteristics of programming systems and provide a means for conceptualizing and comparing them.
We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared and advanced independently. This makes it possible to talk about programming systems in a way that can be shared and constructively debated rather than relying solely on personal impressions.
Our framework is derived from a qualitative analysis of past programming systems. We outline two concrete ways of using it. First, we show how it can be used to analyze a recently developed novel programming system. Then, we use it to identify an interesting unexplored point in the design space of programming systems.
Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable and/or powerful, but such efforts are disconnected: they are informal, guided by the personal visions of their authors, and thus can be evaluated and compared only on the basis of individual experience of using them. By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants.
Review of Methodologies to Assess Bridge Safety During and After Floods
This report summarizes a review of technologies used to monitor bridge scour with an emphasis on techniques appropriate for testing during and immediately after design flood conditions. The goal of this study is to identify potential technologies and strategies for the Illinois Department of Transportation that may be used to enhance the reliability of bridge safety monitoring during floods from local to state levels. The research team conducted a literature review of technologies that have been explored by state departments of transportation (DOTs) and national agencies as well as state-of-the-art technologies that have not been extensively employed by DOTs. This review included informational interviews with representatives from DOTs and relevant industry organizations. Recommendations include considering (1) acquisition of tethered kneeboard or surf ski-mounted single-beam sonars for rapid deployment by local agencies, (2) acquisition of remote-controlled vessels mounted with single-beam and side-scan sonars for statewide deployment, (3) development of large-scale particle image velocimetry systems using remote-controlled drones for stream velocity and direction measurement during floods, (4) physical modeling to develop Illinois-specific hydrodynamic loading coefficients for bridges during flood conditions, and (5) development of holistic risk-based bridge assessment tools that incorporate structural, geotechnical, hydraulic, and scour measurements to provide rapid feedback for bridge closure decisions.
Educating Sub-Saharan Africa: Assessing Mobile Application Use in a Higher Learning Engineering Programme
In the institution where I teach, insufficient laboratory equipment for engineering education pushed students to learn via mobile phones or devices. Using mobile technologies to learn and practise is not the issue; the more important question lies in finding out where and how students use mobile tools for learning. Through the lens of Kearney et al.'s (2012) pedagogical model, using authenticity, personalisation, and collaboration as constructs, this case study adopts a mixed-method approach to investigate the mobile learning activities of students and find out their experiences of what works and what does not. Four questions arise from the over-arching research question, "How do students studying at a University in Nigeria perceive mobile learning in electrical and electronic engineering education?" The first three questions are answered from qualitative interview data analysed using thematic analysis. The fourth question investigates the students' collaborations on two mobile social networks using social network and message analysis. The study found how students' mobile learning relates to the real-world practice of engineering, explained ways of adapting to and overcoming the mobile tools' limitations, and described the nature of the collaborations that the students adopted, naturally, when learning in mobile social networks. It found that mobile engineering learning can feasibly be located in an offline mobile zone, and it demonstrated that the effectiveness of mobile learning in the mobile social environment can be investigated by examining users' interactions. The study shows how mobile learning personalisation that leads to impactful engineering learning can be achieved, shows how to manage most interface and technical challenges associated with mobile engineering learning, and provides a new guide for educators on where and how mobile learning can be harnessed. It also revealed how engineering education can be successfully implemented through mobile tools.
Optical coherence tomography methods using 2-D detector arrays
Optical coherence tomography (OCT) is a non-invasive, non-contact optical technique that allows cross-sectional imaging of biological tissues with high spatial resolution, high sensitivity and high dynamic range. Standard OCT uses a focused beam to illuminate a point on the target and detects the signal with a single photodetector; acquiring transverse information therefore requires transversal scanning of the illumination point. Alternatively, multiple OCT channels can be operated in parallel simultaneously, with the parallel OCT signals recorded by a two-dimensional (2D) detector array. This approach is known as parallel-detection OCT. In this thesis, methods, experiments and results using three parallel OCT techniques are presented: full-field (time-domain) OCT (FF-OCT), full-field swept-source OCT (FF-SS-OCT) and line-field Fourier-domain OCT (LF-FD-OCT). Several 2D digital cameras of different formats have been used and evaluated in the experiments of the different methods. With the LF-FD-OCT method, photography equipment such as flashtubes and commercial DSLR cameras has been adapted and tested for OCT imaging. The techniques used in FF-OCT and FF-SS-OCT are employed in a novel wavefront sensing technique that combines OCT methods with a Shack-Hartmann wavefront sensor (SH-WFS). This combined technique is demonstrated to be capable of measuring depth-resolved wavefront aberrations, which has the potential to extend the applications of the SH-WFS in wavefront-guided biomedical imaging techniques.
Graphical scaffolding for the learning of data wrangling APIs
In order for students across the sciences to avail themselves of modern data streams, they must first know how to wrangle data: how to reshape ill-organised, tabular data into another format, and how to do this programmatically, in languages such as Python and R. Despite the cross-departmental demand and the ubiquity of data wrangling in analytical workflows, research on how to optimise its instruction has been minimal. Although data wrangling as a programming domain presents distinctive challenges, characterised by on-the-fly syntax lookup and code example integration, it also presents opportunities. One such opportunity is that tabular data structures are easily visualised. To leverage the inherent visualisability of data wrangling, this dissertation evaluates three types of graphics that could be employed as scaffolding for novices: subgoal graphics, thumbnail graphics, and parameter graphics. Using a specially built e-learning platform, this dissertation documents a multi-institutional, randomised, and controlled experiment that investigates the pedagogical effects of these graphics. Our results indicate that the graphics are well received, that subgoal graphics boost the completion rate, and that thumbnail graphics improve navigability within a command menu. We also obtained several non-significant results, and indications that parameter graphics are counter-productive. We discuss these findings in the context of general scaffolding dilemmas and how they fit into a wider research programme on data wrangling instruction.
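The abstract does not include code, but the kind of reshape it describes, turning an ill-organised wide table into a tidy long one programmatically, can be sketched in Python with pandas. The table contents and column names below are invented for illustration:

```python
import pandas as pd

# Hypothetical ill-organised "wide" table: one row per city,
# one column per year of temperature measurements.
wide = pd.DataFrame({
    "city": ["Oslo", "Lima"],
    "2021": [3.1, 18.2],
    "2022": [2.9, 18.5],
})

# Reshape to "long" (tidy) form: one row per (city, year) observation.
long = wide.melt(id_vars="city", var_name="year", value_name="temp")
print(long)  # 4 rows, columns: city, year, temp
```

This one-call reshape is exactly the kind of step the dissertation's subgoal and parameter graphics aim to scaffold: the transformation is hard to read from the syntax alone but easy to see when the two tables are drawn.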
Industry 4.0: product digital twins for remanufacturing decision-making
Currently there is a desire to reduce natural resource consumption and expand circular business principles, whilst Industry 4.0 (I4.0) is regarded as the evolutionary and potentially disruptive movement of technology, automation, digitalisation, and data manipulation into the industrial sector. The remanufacturing industry is recognised as vital to the circular economy (CE) because it extends the in-use life of products, but its synergy with I4.0 has had little attention thus far. This thesis documents the first investigation into I4.0 in remanufacturing for a CE, contributing the design and demonstration of a model that optimises remanufacturing planning using data from different instances in a product's life cycle.
The initial aim of this work was to identify the I4.0 technology that would enhance stability in remanufacturing with a view to reducing resource consumption. As the project progressed, it narrowed to focus on the development of a product digital twin (DT) model to support data-driven decision making for operations planning. The model's architecture was derived using a bottom-up approach in which requirements were extracted from the identified complications in production planning and control that differentiate remanufacturing from manufacturing. Simultaneously, the benefits of enabling visibility of an asset's through-life health were obtained using a DT as the modus operandi. A product simulator and DT prototype were designed to use Internet of Things (IoT) components, a neural network for remaining-life estimation, and a search algorithm for operational planning optimisation. The DT was iteratively developed using case studies to validate and examine the real opportunities that exist in deploying a business model that harnesses, and commodifies, early-life product data for end-of-life processing optimisation. Findings suggest that, using intelligent programming networks and algorithms, a DT can enhance decision-making if it has visibility of the product and access to reliable remanufacturing process information; existing IoT components provide rudimentary "smart" capabilities, but their integration is complex, and the durability of the systems over extended product life cycles needs further exploration.
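The abstract names the ingredients of the planner (remaining-life estimates feeding a search algorithm over remanufacturing operations) without giving the algorithm itself. Purely as an illustrative sketch, with invented operation names, costs, and life values, a minimal exhaustive search of that shape might look like:

```python
from itertools import combinations

# Hypothetical operation table: operation -> (cost, life restored).
# In the thesis, remaining life would come from a neural-network
# estimate; here it is simply passed in as a number.
OPS = {
    "clean":   (10,  5),
    "re-coat": (40, 20),
    "replace": (90, 60),
}

def plan(remaining_life, required_life):
    """Return the cheapest (cost, operations) restoring the product
    to at least required_life, or None if no combination suffices."""
    best = None
    for r in range(len(OPS) + 1):
        for combo in combinations(OPS, r):
            cost = sum(OPS[op][0] for op in combo)
            life = remaining_life + sum(OPS[op][1] for op in combo)
            if life >= required_life and (best is None or cost < best[0]):
                best = (cost, combo)
    return best

print(plan(remaining_life=30, required_life=50))
```

An exhaustive search is only feasible for a handful of operations; the thesis's search algorithm would presumably scale better, but the decision structure (health estimate in, cheapest adequate operation plan out) is the same.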
Radionuclide and heavy metal sorption on to functionalised magnetic nanoparticles for environmental remediation
The presence of radionuclides and heavy metal ions in aqueous waste streams from industrial processes, especially in the nuclear waste industry, is a major concern. Many other processes inherently produce hazardous aqueous waste streams that require treatment before disposal. These wastes often contain many contaminants, ranging from harmful to very toxic, and contact between such contaminants and the environment, through groundwater or rivers, must be avoided. The ability to selectively sequester and remove contaminants from aqueous wastes with high loading capacities is of paramount importance for achieving full removal of the contaminants produced in many industries. Recently developed phosphate-functionalised superparamagnetic magnetite ((PO)x-Fe3O4) nanoparticles (NPs) have been shown to have ultra-high loading capacities and a high degree of selectivity towards uranium (U(VI)). The ability to manipulate these NPs with an external magnetic field gives these nanomaterials an advantage over many other conventional technologies in the field. These low-cost, non-toxic, and easily prepared magnetic NPs are highly biocompatible and have already been widely applied in the biotechnology and biomedical industries. The addition of specific functionalities allows fine-tuning of the selectivity towards certain elements, allowing full control over the selective removal of a wide range of contaminants. This study addresses the optimisation of the NP manufacturing process to allow the use of these NPs in a wider range of environments. Many of these waste streams are extreme environments, with highly acidic or highly basic conditions. Therefore, the feasibility of coating the Fe3O4 with silica (SiO2) was addressed, to provide an acid-resistant layer and a substrate for further functionalisation.
Both the silica coating and the applied surface functionality were found to be stable against dissolution or chemical change under acidic conditions from pH 1-4. Once acid resistance was established, the ability to extract a wide range of contaminant ions was investigated. Sorption experiments with a wide range of contaminant ions were conducted to determine the selectivity and loading capacities of both (PO)x-Fe3O4 and (PO)x-SiO2@Fe3O4 NPs under acidic (pH 3), neutral (pH 7), and basic (pH 11) conditions, providing a basis for the manufacture of a state-of-the-art, novel extraction tool for both heavy metals and radionuclides. Inductively Coupled Plasma - Optical Emission Spectrometry (ICP-OES), Transmission Electron Microscopy (TEM), and Scanning Transmission Electron Microscopy - Energy Dispersive X-Ray (STEM-EDX) analyses were used to fully characterise the NP complexes and supernatants and to determine the successful extraction and presence of the contaminant metal ions used in this study. The uptake kinetics and loading capacities for Cs(I), K(I), Na(I), Ca(II), Cd(II), Co(II), Cu(II), Mg(II), Mn(II), Mo(II), Ni(II), Pb(II), Sr(II), Al(III), Ce(III), Cr(III), Eu(III), Fe(III) and La(III) on to (PO)x-Fe3O4 and (PO)x-SiO2@Fe3O4 NPs were determined. Implications of the use of these NPs in the extraction of radionuclides and heavy metals are discussed in each case, along with the potential for developing a broad-spectrum adsorbent. In conclusion, this PhD has shown the potential of these novel, as-synthesised, phosphate-functionalised NP complexes for the extraction of a range of heavy metal and radionuclide contaminants from aqueous solutions under acidic, neutral, and basic conditions. The production of these cost-effective and selective nanomaterials, which exhibit rapid kinetics, has the potential to be an important asset to the water treatment industry. Overall, these NP complexes have been effective in fully removing a wide range of heavy metal contaminants and have therefore shown great promise as a broad-spectrum adsorbent tool, which ultimately will aid in the clean-up of many new and legacy waste environments.
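The abstract reports loading capacities without stating which isotherm model was used to describe them. As an illustration only, with invented parameter values, the Langmuir isotherm, a common model for reporting a sorbent's maximum loading capacity, relates equilibrium concentration to loading like this:

```python
import numpy as np

# Hypothetical equilibrium concentrations C_e (mg/L); illustrative
# values, not data from the thesis.
C_e = np.array([5.0, 10.0, 25.0, 50.0, 100.0])

# Assumed Langmuir parameters: maximum loading q_max (mg/g) and
# affinity constant K (L/mg). Both are invented for this sketch.
q_max, K = 120.0, 0.05

# Langmuir isotherm: q_e = q_max * K * C_e / (1 + K * C_e)
q_e = q_max * K * C_e / (1 + K * C_e)

# Loading rises with concentration and saturates towards q_max.
print(q_e.round(1))
```

Fitting q_max and K to measured (C_e, q_e) pairs is how a single "loading capacity" figure per ion, like those determined in the sorption experiments above, is typically extracted.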
Interactive Sonic Environments: Sonic artwork via gameplay experience
The purpose of this study is to investigate the use of video-game technology in the design and implementation of interactive, sonic-centric artworks, in order to contribute to the discourse and understanding of its effectiveness in electro-acoustic composition, highlighting the creative process. Key research questions include: How can the language of electro-acoustic music be placed in a new framework derived from video-game aesthetics and technology? What new creative processes need to be considered when using this medium? Moreover, what aspects of 'play' should be considered when designing the systems? The findings of this study assert that composers and sonic art practitioners need little or no coding knowledge to create exciting applications, and that the myriad options available to the composer when using video-game technology are limited only by imagination. Through a cyclic process of planning, building, testing and playing these applications, the project revealed advantages and unique sonic opportunities in comparison to other sonic art installations. A portfolio of selected original compositions, both fixed and open, is presented by the author to complement this study. The commentary serves to place the work in context with other practitioners in the field and to describe the compositional approaches that have been taken.