Dynamic Modelling and Experimental Evaluation of Nanoparticles Application in Surfactant Enhanced Oil Recovery
No abstract available.
Structural material from cellulose fibres: design-driven research case
Innovative wood-based biomaterials can be seen as a strategic asset for Finland, from both economic and environmental perspectives. Pioneering research programmes such as FinnCERES are positioning the country at the cutting edge of global forest material innovation, targeting novel material solutions with advanced properties and efficient manufacturing technologies. Simultaneously, the contribution of design to materials research is gradually increasing: design is transforming from a separate practice into a valuable component of a collaborative approach. Multiple recent projects have demonstrated the potential of design to accelerate scientific innovation in wood-based materials.
This thesis describes a design-driven case of applied research focused on wood-based fibre materials. It provides a detailed description of an experimental process leading to a structured approach that facilitates interdisciplinary material development. The study is practice-based and focuses on the development of foam-formed structures from cellulose fibres. Previous research on the new foam forming technology indicates a unique degree of precision in surface texture. Building upon this finding, an attempt was made to produce a structural material composed of millimetre-scale units through a combination of geometrical design and an understanding of cellulose fibre interactions.
The research was performed as an iterative process composed of five cycles. Each cycle consisted of a series of practical experiments focused on the development of material structures through prototyping. The experiments included the design of foam-formed structures, as well as qualitative assessment and mechanical testing of the prototypes. Observation and analysis of the experiments yielded an improved understanding of the materials and the manufacturing process, which was then transformed into new concepts during collaborative ideation. Interdisciplinary collaboration between material scientists and a designer brought together the combined expertise that is at the core of the iterative approach.
The outcome of the iterative exploration is a set of structural material prototypes with improved technical properties and appealing perceptual characteristics. The prototypes demonstrate the feasibility of foam forming as a means of producing cellulosic materials with increased compressive strength and reduced density. Correspondingly, the obtained visual and associative material properties provide a new perspective on fibre materials and an engaging experience for future users. The material has potential for further development into lightweight applications that could serve as alternatives to fossil-derived products. The detailed description of the process showcases the benefits of interdisciplinary methods in materials development and provides the background needed for future research.
A Challenge in Reweighting Data with Bilevel Optimization
In many scenarios, one uses a large training set to train a model with the goal of performing well on a smaller testing set with a different distribution. Learning a weight for each data point of the training set is an appealing solution, as it ideally allows one to automatically learn the importance of each training point for generalization on the testing set. This task is usually formalized as a bilevel optimization problem. Classical bilevel solvers are based on a warm-start strategy where both the parameters of the model and the data weights are learned at the same time. We show that this joint dynamic may lead to sub-optimal solutions, for which the final data weights are very sparse. This finding illustrates the difficulty of data reweighting and offers a clue as to why this method is rarely used in practice.
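For concreteness, the following is a minimal NumPy sketch of the warm-start (joint) dynamic described above, applied to weighted linear regression with a one-step unrolled hypergradient for the data weights. The toy data, learning rates, and the non-negativity clipping are illustrative assumptions, not the paper's experimental setup, and the resulting weights may or may not end up sparse in this toy case.

```python
# Sketch: data reweighting as bilevel optimization with a warm-start joint update.
# Inner variable: model parameters theta; outer variable: one weight per training point.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the training labels are noisier than the validation ones (illustrative).
n_train, n_val, d = 200, 50, 5
X_tr = rng.normal(size=(n_train, d))
X_val = rng.normal(size=(n_val, d))
theta_true = rng.normal(size=d)
y_tr = X_tr @ theta_true + rng.normal(scale=1.0, size=n_train)
y_val = X_val @ theta_true + rng.normal(scale=0.1, size=n_val)

theta = np.zeros(d)        # inner variable: model parameters
w = np.ones(n_train)       # outer variable: data weights
lr_theta, lr_w = 0.05, 0.5

for step in range(500):
    # Inner (warm-start) step: one gradient step on the weighted training loss.
    resid_tr = X_tr @ theta - y_tr
    grad_theta = X_tr.T @ (w * resid_tr) / n_train
    theta = theta - lr_theta * grad_theta

    # Outer step: hypergradient of the validation loss w.r.t. the data weights,
    # obtained by differentiating through the single inner step above.
    resid_val = X_val @ theta - y_val
    g_val = X_val.T @ resid_val / n_val                 # dL_val / dtheta
    # d(theta_new)/d(w_i) = -lr_theta * resid_tr[i] * X_tr[i] / n_train
    hypergrad = -lr_theta * resid_tr * (X_tr @ g_val) / n_train
    w = np.clip(w - lr_w * hypergrad, 0.0, None)        # keep weights non-negative

print(f"val MSE: {np.mean((X_val @ theta - y_val) ** 2):.3f}, "
      f"min/max data weight: {w.min():.3f}/{w.max():.3f}")
```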
Adaptive Tests for Ordered Categorical Data
Consider testing for independence against stochastic order in an ordered 2×J contingency table, under product multinomial sampling. In applications one may wish to exploit prior information concerning the direction of the treatment effect, yet ultimately end up with a testing procedure with good frequentist properties. As such, a reasonable objective may be to simultaneously maximize power at a specified alternative and ensure reasonable power for all other alternatives of interest. For this objective, none of the available testing approaches are completely satisfactory. A new class of admissible adaptive tests is derived. Each test in this class strictly preserves the Type I error rate and strikes a balance between good global power and nearly optimal (envelope) power to detect a specific alternative of most interest. Prior knowledge of the direction of the treatment effect, the level of confidence in this prior information, and possibly the marginal totals might be used to select a specific test from this class.
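As context for the 2×J setting only (this is not the adaptive class of tests derived in the paper), a directed linear score test that encodes the anticipated direction of the treatment effect through its category scores can be sketched as follows; the category scores and cell counts are illustrative.

```python
# Sketch: a one-sided linear score test for a 2 x J ordered table, standardised
# under the permutation (independence) null that conditions on the margins.
import numpy as np
from scipy.stats import norm

def directed_score_test(treated, control, scores):
    """One-sided z-test that the treated group tends toward higher categories."""
    treated, control, scores = map(np.asarray, (treated, control, scores))
    n1, n2 = treated.sum(), control.sum()
    total = treated + control
    n = n1 + n2
    mean_score = (scores * total).sum() / n
    var_score = (total * (scores - mean_score) ** 2).sum() / n
    t = (scores * treated).sum()          # observed score total in the treated group
    mu = n1 * mean_score                  # permutation mean of the statistic
    sigma2 = n1 * n2 * var_score / (n - 1)
    z = (t - mu) / np.sqrt(sigma2)
    return z, norm.sf(z)                  # one-sided p-value in the specified direction

# Example: J = 4 ordered response categories (worst -> best), equally spaced scores.
z, p = directed_score_test(treated=[3, 5, 10, 12], control=[8, 9, 8, 5],
                           scores=[0, 1, 2, 3])
print(round(z, 2), round(p, 4))
```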
Abundance, Diversity, and Depth Distribution of Planctomycetes in Acidic Northern Wetlands
Members of the bacterial phylum Planctomycetes inhabit various aquatic and terrestrial environments. In this study, fluorescence in situ hybridization (FISH) was applied to assess the abundance and depth distribution of these bacteria in nine different acidic wetlands of Northern Russia. Planctomycetes were most abundant in the oxic part of the wetland profiles. The respective cell numbers were in the range 1.1–6.7 × 10⁷ cells g⁻¹ of wet peat, comprising 2–14% of total bacterial cells and displaying a linear correlation with peat water pH. Most peatland sites showed a sharp decline of planctomycete abundance with depth, while in two particular sites this decline was followed by a second population maximum in an anoxic part of the bog profile. Oxic peat layers were dominated by representatives of the Isosphaera–Singulisphaera group, while anoxic peat was inhabited mostly by Zavarzinella- and Pirellula-like planctomycetes. Phylogenetically related bacteria of the candidate division OP3 were detected in both oxic and anoxic peat layers with cell densities of 0.6–4.6 × 10⁶ cells g⁻¹ of wet peat.
Selection of the initial design for the two-stage continual reassessment method
The continual reassessment method (CRM) was proposed in a Bayesian framework whereby the first patient is assigned to the prior guess of the maximum tolerated dose, which is usually not the lowest dose level. This assignment may raise safety concerns in practice because physicians usually prefer not to skip lower dose levels before escalating to higher ones. The two-stage CRM was proposed to address this concern: model-based dose escalation is preceded by a pre-specified escalating sequence starting from the lowest dose level. While a theoretical framework for building the two-stage CRM has been proposed, the selection of the initial dose escalating sequence, generally referred to as the initial design, remains arbitrary, done either by specifying cohorts of three patients or by trial and error through extensive simulations. Motivated by an ongoing oncology dose-finding study in which physicians stated their desire to start from the lowest dose even though the maximum tolerated dose was thought to be one of the higher dose levels, we propose a systematic approach for selecting the initial design of the two-stage CRM. The initial design obtained with the proposed algorithm yields better operating characteristics than a cohort-of-three initial design used with a calibrated CRM. The proposed algorithm simplifies the selection of the initial design for the two-stage CRM and makes it systematic. Moreover, initial designs to be used as references when planning a two-stage CRM are provided.
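The following toy simulation sketches the general two-stage CRM logic described above: a pre-specified escalation sequence from the lowest dose, switching to model-based escalation under an empiric power model after the first dose-limiting toxicity. The skeleton, prior, target, initial sequence, toxicity scenario, and the absence of dose-skipping restrictions are illustrative simplifications, not the calibrated design proposed in the paper.

```python
# Sketch: a two-stage CRM with an empiric power model p_i = skeleton_i ** exp(beta)
# and a grid approximation of the posterior over beta.
import numpy as np

target = 0.25
skeleton = np.array([0.05, 0.12, 0.25, 0.40, 0.55])   # prior DLT guesses per dose
true_tox = np.array([0.02, 0.08, 0.15, 0.30, 0.45])   # scenario used to simulate outcomes
initial_design = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]        # stage-1 dose sequence (indices)
n_patients = 24
rng = np.random.default_rng(1)

beta_grid = np.linspace(-3, 3, 601)
prior = np.exp(-beta_grid**2 / (2 * 1.34**2))          # N(0, 1.34^2) prior, unnormalised

def crm_dose(doses, dlts):
    """Dose whose posterior-mean DLT estimate is closest to the target."""
    p = skeleton[np.asarray(doses)][:, None] ** np.exp(beta_grid)[None, :]
    lik = np.prod(np.where(np.asarray(dlts)[:, None] == 1, p, 1 - p), axis=0)
    post = lik * prior
    post /= post.sum()
    p_hat = (skeleton[:, None] ** np.exp(beta_grid)[None, :]) @ post
    return int(np.argmin(np.abs(p_hat - target)))

doses, dlts = [], []
stage_two = False
for i in range(n_patients):
    if not stage_two:
        # Stage 1: follow the pre-specified sequence (stay at its last dose if exhausted).
        dose = initial_design[min(i, len(initial_design) - 1)]
    else:
        # Stage 2: model-based escalation (no skipping restriction in this sketch).
        dose = crm_dose(doses, dlts)
    dlt = int(rng.random() < true_tox[dose])
    doses.append(dose)
    dlts.append(dlt)
    if dlt == 1:
        stage_two = True       # switch to model-based escalation after the first DLT

print("recommended dose (0-indexed):", crm_dose(doses, dlts))
```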
Practical designs for Phase I combination studies in oncology
Phase I trials evaluating the safety of multi-drug combinations are becoming more common in oncology. Despite the emergence of novel methodology in the area, it is rare that innovative approaches are used in practice. In this article, we review three methods for Phase I combination studies that are easy to understand and straightforward to implement. We demonstrate the operating characteristics of the designs through illustration in a single trial, as well as through extensive simulation studies, with the aim of increasing the use of novel approaches in Phase I combination studies. Design specifications and software capabilities are also discussed.
Enrollment and Stopping Rules for Managing Toxicity Requiring Long Follow-Up in Phase II Oncology Trials
Monitoring of toxicity is often conducted in Phase II trials in oncology to avoid an excessive number of toxicities if the wrong dose is chosen for Phase II. Existing stopping rules for toxicity use information only from patients who have already completed follow-up. We describe a stopping rule that uses all available data to determine whether to stop for toxicity when follow-up for toxicity is long. We also propose an enrollment rule that prescribes the maximum number of patients that may be enrolled at any given point in the trial.
Key words: Delayed outcome, Phase II oncology trial, Pocock boundary, Stopping rule, Enrollment rule.
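As a generic illustration of monitoring toxicity before all patients have completed follow-up (this is not the stopping or enrollment rule proposed in the paper), one common device is to let patients still under observation contribute in proportion to the follow-up they have completed, as in the beta-binomial sketch below; the prior, the unacceptable rate p0, and the posterior cut-off are assumptions.

```python
# Sketch: a Bayesian toxicity monitoring rule with fractional credit for
# partially followed patients (a TITE-style weighting).
from scipy.stats import beta

def stop_for_toxicity(tox, followup_frac, p0=0.30, prior=(0.5, 0.5), cutoff=0.80):
    """Stop if the posterior probability that the toxicity rate exceeds p0 is high.

    tox           -- 1 if the patient had a toxicity, else 0
    followup_frac -- fraction of the full follow-up window completed (1.0 if done)
    """
    events = sum(tox)
    # Toxicities count fully; non-toxic patients count only for completed follow-up.
    exposure = sum(1.0 if t else f for t, f in zip(tox, followup_frac))
    a, b = prior[0] + events, prior[1] + (exposure - events)
    return beta.sf(p0, a, b) > cutoff     # P(rate > p0 | data) under the Beta posterior

# Example: 3 toxicities among 10 enrolled, several patients still early in follow-up.
tox = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
frac = [1.0, 1.0, 0.6, 1.0, 1.0, 0.8, 0.5, 0.3, 0.2, 0.1]
print(stop_for_toxicity(tox, frac))
```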
Dose finding when the target dose is on a plateau of a dose-response curve: comparison of fully sequential designs
Consider the problem of estimating a dose with a certain response rate. Many multistage dose-finding designs for this problem were originally developed for oncology studies, where the mean dose-response is strictly increasing in dose. In non-oncology Phase II dose-finding studies the dose-response curve often plateaus in the range of interest, and there are several doses with mean response equal to the target. In this case it is usually of interest to find the lowest of these doses, since higher doses might have higher adverse event rates. It is often desirable to compare the response rate at the estimated target dose with a placebo and/or active control. We investigate which of several known dose-finding methods developed for oncology Phase I trials is the most suitable when the dose-response curve plateaus. Some of the designs tend to spread the allocation among the doses on the plateau. Others, like the continual reassessment method and the t-statistic design, concentrate allocation at one of the doses, with the t-statistic design selecting the lowest dose on the plateau more frequently.
Comparison of Isotonic Designs for Dose-Finding
We compare several decision rules for allocating subjects to dosages that are based on sequential isotonic estimates of a monotone dose–toxicity curve. We conclude that the decision rule in which the next assignment is to the dose having probability of toxicity closest to the target does not work well. The best rule in our comparison is given by the cumulative cohort design. According to this design, the dose for the next subject is decreased, increased, or repeated depending on the distance between the estimated toxicity rate at the current dose and the target quantile.
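The cumulative cohort design's decision rule lends itself to a compact sketch: escalate, de-escalate, or repeat depending on how far the estimated toxicity rate at the current dose lies from the target. In the sketch below the target rate and the half-width of the "repeat" interval are illustrative choices rather than calibrated design parameters.

```python
# Sketch: the cumulative cohort design (CCD) decision rule for the next cohort.
def ccd_next_dose(current: int, n_tox: int, n_treated: int,
                  n_doses: int, target: float = 0.25, delta: float = 0.10) -> int:
    """Return the dose index for the next cohort under the CCD rule."""
    p_hat = n_tox / n_treated                 # estimated toxicity rate at the current dose
    if p_hat <= target - delta:               # well below target: escalate if possible
        return min(current + 1, n_doses - 1)
    if p_hat >= target + delta:               # well above target: de-escalate
        return max(current - 1, 0)
    return current                            # close to target: repeat the dose

# Example: 3 toxicities in 6 patients at dose index 2 of 5, target rate 0.25.
print(ccd_next_dose(current=2, n_tox=3, n_treated=6, n_doses=5))  # -> 1 (de-escalate)
```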