The Specific Acceleration Rate in Loop-structured Solar Flares -- Implications for Electron Acceleration Models
We analyze electron flux maps based on RHESSI hard X-ray imaging spectroscopy
data for a number of extended coronal loop flare events. For each event, we
determine the variation of the characteristic loop length with electron
energy, and we fit this observed behavior with models that incorporate an
extended acceleration region and an exterior "propagation" region, and which
may include collisional modification of the accelerated electron spectrum
inside the acceleration region. The models are characterized by two parameters:
the plasma density in, and the longitudinal extent of, the
acceleration region. Determination of the best-fit values of these parameters
permits inference of the volume that encompasses the acceleration region and of
the total number of particles within it. It is then straightforward to compute
values for the emission filling factor and for the {\it specific acceleration
rate} (electrons s$^{-1}$ per ambient electron above a chosen reference
energy). For the 24 events studied, the range of inferred filling factors is
consistent with a value of unity, and the inferred specific acceleration rates
above the chosen reference energy show a 1$\sigma$ spread of about half an
order of magnitude above and below their mean value.
We compare these values with the predictions of several models, including
acceleration by large-scale, weak (sub-Dreicer) fields, by strong
(super-Dreicer) electric fields in a reconnecting current sheet, and by
stochastic acceleration processes
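The specific acceleration rate defined above is simply the total electron acceleration rate normalized by the number of ambient electrons in the acceleration region. A minimal sketch of that normalization, with illustrative flare-like numbers that are assumptions, not values from the paper:

```python
# Illustrative sketch (all numbers below are assumed, not from the paper):
# the specific acceleration rate is the electron acceleration rate divided
# by the number of ambient electrons in the acceleration region.

def specific_acceleration_rate(n_dot, density_cm3, volume_cm3):
    """Electrons s^-1 per ambient electron above the reference energy.

    n_dot       -- total acceleration rate (electrons s^-1)
    density_cm3 -- ambient plasma density in the acceleration region (cm^-3)
    volume_cm3  -- volume of the acceleration region (cm^3)
    """
    n_ambient = density_cm3 * volume_cm3  # ambient electrons in the region
    return n_dot / n_ambient

# Example with assumed values:
eta = specific_acceleration_rate(n_dot=1e35, density_cm3=1e10, volume_cm3=1e27)
```

The two fitted model parameters (density and longitudinal extent of the acceleration region) enter through `density_cm3` and `volume_cm3`.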
A New Class of Multiple-rate Codes Based on Block Markov Superposition Transmission
The Hadamard transform~(HT) over the binary field provides a natural way to
implement multiple-rate codes~(referred to as {\em HT-coset codes}), where the
code length is fixed but the code dimension can be varied by adjusting the set
of frozen bits. The HT-coset codes, including Reed-Muller~(RM) codes and polar
codes as typical examples, can share a single pair of encoder and decoder with
low implementation complexity.
However, to guarantee that all codes with designated rates perform well,
HT-coset coding usually requires a sufficiently large code length, which in
turn causes difficulties in the determination of which bits are better for
being frozen. In this paper, we propose to transmit short HT-coset codes in the
so-called block Markov superposition transmission~(BMST) manner. At the
transmitter, signals are spatially coupled via superposition, resulting in long
codes. At the receiver, these coupled signals are recovered by a sliding-window
iterative soft successive cancellation decoding algorithm. Most importantly,
the performance at and below low bit-error rates~(BER) can be predicted by a
simple genie-aided lower bound. Both these bounds and simulation results show
that the BMST of short HT-coset codes performs well~(within one dB of the
corresponding Shannon limits) over a wide range of code rates
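The shared encoder for HT-coset codes can be sketched as an n-fold Kronecker power of a 2x2 base matrix over GF(2), with the rate set purely by the choice of non-frozen positions. The sketch below uses the common polar-code base matrix and index convention as an assumption; it is not necessarily the paper's exact construction:

```python
import numpy as np

# Hedged sketch of HT-coset encoding: apply the n-fold Kronecker power of
# [[1,0],[1,1]] over GF(2), placing information bits on an index set and
# freezing the remaining positions to 0. (Base matrix and indexing are the
# standard polar-code convention, assumed rather than taken from the paper.)

def ht_encode(info_bits, n, info_set):
    """Encode len(info_set) bits into a length-2**n HT-coset codeword."""
    u = np.zeros(2**n, dtype=np.uint8)
    u[list(info_set)] = info_bits          # frozen positions stay 0
    g = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    gn = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        gn = np.kron(gn, g)                # G = F^{(x)n}
    return (u @ gn) % 2                    # matrix product over GF(2)

# Varying the rate only changes info_set; the encoder itself is shared:
cw = ht_encode(np.array([1, 0, 1], dtype=np.uint8), n=3, info_set=[3, 5, 7])
```

Changing `info_set` from 3 indices to k indices yields a rate-k/8 code from the same transform, which is the multiple-rate property the abstract describes.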
Engineered antigen-presenting hydrogels: model platforms for studies of T cell mechanotransduction
T cells apply forces and eventually sense and respond to the mechanical properties of their surroundings, including those of antigen-presenting cells (APCs) when they form the immunological synapse (IS). The identification of the mechanosensitive receptors, and of the time scales at which they sense and actuate, is experimentally difficult at the natural cell-cell interface. Inspired by the tools used in cell-matrix mechanobiology, this thesis presents synthetic, hydrogel-based models of APCs to study T cell mechanotransduction, focusing on early T cell activation. Polyacrylamide (PAAm) hydrogels (1-50 kPa) were micropatterned with streptavidin and APC ligands (an antibody against the CD3 co-receptor (anti-CD3) and intercellular cell adhesion molecule-1 (ICAM-1)) at controlled ligand density and in geometries with defined dimensions. The anti-CD3-patterned hydrogels were used to study the interplay between hydrogel stiffness and CD3-mediated early T cell activation markers. In the last chapter, the regulatory role of ICAM-1, coupled to anti-CD3 and hydrogel stiffness, in early T cell activation was studied on hydrogels with patterned anti-CD3 microdots surrounded by a background of ICAM-1. The results contribute to the understanding of the factors involved in T cell mechanotransduction, providing useful information for the future design of immunomodulatory materials.
Robust Model-free Variable Screening, Double-parallel Monte Carlo and Average Bayesian Information Criterion
Big data analysis and high-dimensional data analysis are two popular and challenging topics in current statistical research. They bring us many opportunities as well as many challenges. For big data, traditional methods are generally not efficient enough, from both the time perspective and the space perspective. For high-dimensional data, most traditional methods cannot be implemented, let alone maintain their desirable properties, such as consistency.
In this dissertation, three new strategies are proposed to address these issues. HZSIS is a robust, model-free variable screening method that possesses the sure screening property in the ultrahigh-dimensional setting. It is based on the nonparanormal transformation and the Henze-Zirkler test. The numerical results indicate that, compared to existing methods, the proposed method is more robust to data generated from heavy-tailed distributions and/or complex models with interaction variables.
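The nonparanormal transformation that HZSIS builds on can be sketched as a rank-based Gaussianization of each variable. The details below, in particular the (rank - 0.5)/n plotting position, are a standard choice and an assumption, not necessarily the paper's exact formulation:

```python
import numpy as np
from statistics import NormalDist

# Sketch of a rank-based nonparanormal transformation (standard recipe,
# assumed rather than taken from the HZSIS paper): replace each observation
# by the standard-normal quantile of its empirical rank.

def nonparanormal(x):
    """Map a 1-D sample to standard-normal scores via its empirical ranks."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1              # ranks 1..n
    return np.array([NormalDist().inv_cdf((r - 0.5) / n) for r in ranks])

# Heavy-tailed input becomes approximately standard normal after mapping:
x = np.random.default_rng(1).exponential(size=1000)
z = nonparanormal(x)
```

Because the mapping is monotone, it preserves the ordering of the data while making marginal distributions Gaussian, which is what lets Gaussian-based tests such as Henze-Zirkler's be applied to non-Gaussian data.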
Double Parallel Monte Carlo is a simple, practical, and efficient MCMC algorithm for Bayesian analysis of big data. The proposed algorithm divides the big dataset into smaller subsets and provides a simple method to aggregate the subset posteriors to approximate the full-data posterior. To further speed up computation, it employs the population stochastic approximation Monte Carlo (Pop-SAMC) algorithm, a parallel MCMC algorithm, to simulate from each subset posterior. Since the proposed algorithm involves two levels of parallelism, data parallelism and simulation parallelism, it is coined "Double Parallel Monte Carlo". The validity of the proposed algorithm is justified both mathematically and numerically.
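The divide-and-aggregate idea can be sketched in a toy setting with Gaussian subset posteriors and known unit variance; the precision-weighted recombination used here is a standard aggregation rule, assumed for illustration rather than taken from the dissertation:

```python
import numpy as np

# Toy sketch of divide-and-aggregate (assumed Gaussian subset posteriors
# with known sigma = 1; the precision-weighted product below is a standard
# choice, not necessarily the dissertation's exact aggregation rule).

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)   # "big" dataset
subsets = np.array_split(data, 10)                   # data-parallel split

# Each worker summarizes its subset posterior for the mean under a flat
# prior: N(xbar_k, sigma^2 / n_k).
sub_post = [(s.mean(), 1.0 / len(s)) for s in subsets]

# Aggregate: a product of Gaussians is Gaussian with summed precisions.
prec = sum(1.0 / v for _, v in sub_post)
mean = sum(m / v for m, v in sub_post) / prec        # combined mean
full_var = 1.0 / prec                                # combined variance
```

With equal-sized subsets the combined mean coincides with the full-data posterior mean, illustrating why the aggregation step can be both cheap and accurate.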
The Average Bayesian Information Criterion (ABIC) and its high-dimensional variant, the Average Extended Bayesian Information Criterion (AEBIC), provide an innovative way to use posterior samples to conduct model selection. The consistency of this method is established for the high-dimensional generalized linear model under some sparsity and regularity conditions. The numerical results also indicate that, when the sample size is large enough, this method can accurately select the smallest true model with high probability
Cross-Processing Fish Co-Products with Plant Food Side Streams or Seaweeds Using the pH-Shift Method - a new sustainable route to functional food protein ingredients stable towards lipid oxidation
The seafood value chain is highly inefficient, as 50-60% of the fish weight ends up as co-products in the filleting operation. Despite their abundance in high-quality proteins, fish co-products mainly go into low-value products such as fodder. The pH-shift process, i.e., acid/alkaline solubilization followed by isoelectric precipitation, is an opportunity to instead recover these proteins in a food-grade manner while maintaining their functionality. A challenge when subjecting hemoglobin-rich fish raw materials to pH-shift processing is, however, oxidation of polyunsaturated fatty acids (PUFAs). This thesis investigated, for the first time, cross-processing of fish co-products with antioxidant-containing support materials ("helpers") to protect the fish protein isolates from lipid oxidation in a clean-label and sustainable manner. The helpers, including locally sourced plant food side streams (press cakes from lingonberry (LPC) and apple, barley spent grain, oat fiber residues), shrimp shells, and seaweeds, were also expected to introduce new characteristics to the protein isolates. All helpers except shrimp shells reduced lipid oxidation in herring/salmon co-products when added at 30% (dw/dw) at the start of the pH-shift process. LPC was the most effective: even at 2.5% addition it prevented volatile aldehyde formation during production of herring protein isolates, while at 10% addition the isolates were also stable towards oxidation for ≥8 days on ice. When the 10% LPC was instead added during protein precipitation, the oxidation lag phase was extended to 21 days. The oxidative stability of protein isolates correlated with their total phenolic content, and the very high antioxidant ability of LPC was mainly attributed to anthocyanins, e.g., ideain and procyanidin A1. LPC also improved the water solubility, emulsifying activity, and gel-forming capabilities of herring protein isolates, expanding their potential applications in food products.
The water solubility and emulsifying activity were also boosted by adding shrimp shells and Ulva, while the gel-forming ability was also enhanced by apple press cake. LPC-derived anthocyanins resulted in red isolates under acidic conditions and dark-colored isolates under neutral/alkaline conditions. Ulva resulted in green isolates due to the presence of chlorophyll. The color of protein isolates was also affected by oxidation of fish-derived pigments such as Hb and astaxanthin. The addition of helpers also influenced the composition of the protein isolates: LPC added at the start of the process reduced lipid content, while shrimp shells and LPC added during precipitation increased it. Seaweeds raised ash content by introducing minerals. Additionally, the organic acids of LPC reduced the need for HCl in acid-aided protein solubilization and in isoelectric precipitation of alkali-solubilized proteins; during the latter, adding 30% LPC decreased HCl usage by as much as 61%. Conversely, alkaline protein solubilization in the presence of LPC required more NaOH than the control, although this issue was naturally less pronounced at low LPC additions. Another challenge of introducing helpers was that they reduced total protein yield in the pH-shift process. This was, however, successfully mitigated by optimizing solubilization/precipitation pH, increasing water addition, and employing more powerful high-shear homogenization and ultrasound techniques. In summary, this thesis introduced a completely new concept of cross-processing fish co-products with antioxidant-containing food materials, significantly reducing lipid oxidation and enhancing protein isolate techno-functionalities. Herring co-products paired with 10% LPC were particularly promising. Beyond its technical advantages, cross-processing can add economic value to side streams of both fish and other food industries, while stimulating circularity and industrial symbiosis.
Altogether, these features reduce food chain losses and promote a more sustainable food system
Near-field scanning study for radio frequency interference estimation
This dissertation discusses novel techniques that use near-field scanning for radio frequency interference (RFI) estimation. As electronic products become more and more complicated, the radio frequency (RF) receiver in a system is likely to be interfered with by multiple noise sources simultaneously. A method is proposed to identify the interference from different noise sources separately, even when they are radiating at the same time. This method is very helpful for engineers to identify the contribution of the coupling from different sources and thus solve electromagnetic interference issues efficiently. In addition, equivalent dipole-moment models and a decomposition method based on reciprocity theory can be used together to estimate the coupling from a noise source to the victim antennas. This proposed method makes it convenient to estimate RFI issues in the early design stage and saves the time of RFI simulation and measurements. The finite element method and image theory can also be combined to predict the far fields of a radiation source located above a ground plane. This approach applies the finite element method (FEM) to obtain equivalent current sources from the tangential magnetic near fields. With the equivalent current sources, the far-field radiation can be calculated based on Huygens's principle and image theory. By using only the magnetic near fields on a simplified Huygens's surface, the proposed method significantly saves measurement time and cost while retaining good far-field prediction --Abstract, page iv
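The role of image theory in the far-field step can be illustrated with the simplest possible case: a vertical current element above a perfect ground plane, whose far field is the superposition of the element and its in-phase mirror image. This toy pattern factor is an illustration only; the dissertation's method works from FEM-derived equivalent currents, which are not reproduced here:

```python
import numpy as np

# Image-theory toy example (illustration only, not the dissertation's
# FEM-based method): a vertical current element at height h above a
# perfect ground plane has an in-phase image at -h, so the far-field
# pattern factor is AF(theta) = 2 * cos(k * h * cos(theta)).

def array_factor(theta, h, k):
    """Pattern factor for a vertical element at height h above PEC ground.

    theta -- angle from the vertical axis (radians)
    h     -- element height above the plane (same units as wavelength)
    k     -- wavenumber, 2*pi / wavelength
    """
    return 2.0 * np.cos(k * h * np.cos(theta))

k = 2 * np.pi                               # wavelength normalized to 1
theta = np.linspace(0.0, np.pi / 2, 91)
pattern = array_factor(theta, h=0.25, k=k)  # quarter-wave element height
```

For a quarter-wavelength height the two contributions add fully along the horizon (theta = pi/2) and cancel at zenith (theta = 0), the classic ground-plane doubling effect that the full equivalent-current method generalizes.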
Understanding Interfacial Reactions Initiating on Electrode Materials for Energy Storage Technologies
Since the first generation of lithium-ion batteries, featuring a lithium cobalt oxide cathode and a carbon anode, was commercialized in the 1990s, high-capacity materials with lower cost have been in demand to further increase battery energy density. Lithium metal and silicon anodes are promising high-capacity anode materials for next-generation lithium batteries. However, both materials react actively with electrolytes and suffer from dramatic volume change. Therefore, a reliable passivation layer at the electrolyte/electrode interface (i.e., the solid electrolyte interphase, or "SEI") is required to support the long-term cycling of both materials. Cetrimonium hydrofluoride (CTAHF2) has been proposed and synthesized as an electrolyte additive, which has the unique advantages of increasing electrolyte wettability and introducing more LiF content into the electrode surface layer. By incorporating it into a 4 M lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) in dimethoxyethane (DME) electrolyte, the cycling life has been increased for both lithium metal and silicon anodes. To understand the origin and evolution of SEI layers in energy storage systems, an integrated microscopic study has been applied to explore the interfacial reactions initiated on the surfaces of different electrode materials. Specifically, atomic force microscopy (AFM), combined with scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS), is employed to probe the properties of SEI layers formed on different electrode surfaces. A custom-designed electrochemical cell has been proposed to allow monitoring of SEI layers by in situ AFM in a "living" cell. Layered LiNi0.8Mn0.1Co0.1O2 (NMC811) is one of the most promising cathode materials for modern lithium-ion batteries with respect to its high reversible capacity.
Whereas the redox reactions of Ni2+/Ni3+ and Ni3+/Ni4+ contribute the majority of the reversible capacity, the highly reactive Ni-rich surface also encourages the growth of surface impurity species, which causes irreversible capacity loss and degradation of cycle life. In this work, cell failure induced by residual lithium compounds on NMC811 was investigated. An acid-base titration method is employed to quantify the carbonate species generated during ambient storage. Finally, a feasible coating method with ethylene carbonate as the coating material has been proposed, which helps to maintain the chemical and structural stability of the materials during ambient storage. In comparison to the non-treated samples after extended air storage, the coating-treated samples show effectively alleviated initial capacity loss and cycle-life degradation. The surface chemical and structural changes and their relevance to electrochemical performance are further discussed in this work
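The titration-based quantification rests on simple stoichiometry: Li2CO3 + 2 HCl -> 2 LiCl + H2O + CO2, so the carbonate content follows from the acid consumed. A back-of-envelope sketch with made-up illustration values, not data from this work:

```python
# Back-of-envelope sketch of quantifying surface carbonate by acid-base
# titration. All volumes and concentrations below are made-up illustration
# values, not measurements from this work. Stoichiometry:
#   Li2CO3 + 2 HCl -> 2 LiCl + H2O + CO2
# so moles of Li2CO3 equal half the moles of HCl consumed.

M_LI2CO3 = 73.89  # molar mass of Li2CO3, g/mol

def li2co3_wt_percent(v_hcl_ml, c_hcl_mol_l, sample_g):
    """Weight percent of Li2CO3 in a sample from the HCl titrant consumed."""
    mol_hcl = c_hcl_mol_l * v_hcl_ml / 1000.0   # mL -> L
    mol_li2co3 = mol_hcl / 2.0                  # 2 HCl per Li2CO3
    return 100.0 * mol_li2co3 * M_LI2CO3 / sample_g

# Hypothetical example: 2.0 mL of 0.1 M HCl for a 1.0 g NMC811 sample.
wt = li2co3_wt_percent(2.0, 0.1, 1.0)
```

A two-endpoint titration can separate carbonate from hydroxide species by the same bookkeeping, which is how residual LiOH and Li2CO3 are conventionally distinguished.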
Laser waterjet heat treatment on super-hard materials
The use of the hybrid laser waterjet heat (LWH) treatment process to develop super-hard materials is a new field of research that has the potential to produce materials with hardness characteristics comparable to or exceeding that of diamond. The research presented in this dissertation investigated the hardness improvement mechanism of boron nitride (BN) materials by optimizing LWH process parameters and studying the microstructural refinement of BN materials. The study of the LWH system parameters investigated the relationship between those parameters and the hardness change ratio of the BN materials. From this study, the optimal LWH system parameters (laser fluence, laser beam overlap percentage, boron nitride composition, laser pass number, and laser intensity) were identified in order to maximize BN material hardness. For the BN material microstructure refinement study, scanning electron microscopy (SEM), Raman spectroscopy, and finite element methods (FEM) were used to investigate the microstructure of pre- and post-LWH-treated BN materials. Analytical and experimental approaches were used throughout, and a variety of analysis techniques were applied. Results indicate that LWH treatment is a feasible approach to improve the hardness of select materials while providing a potential method to develop new super-hard materials for the tooling industry
Quality Adaptive Least Squares Trained Filters for Video Compression Artifacts Removal Using a No-reference Block Visibility Metric
Compression artifact removal is a challenging problem because videos can be compressed at different qualities. In this paper, a least squares approach that self-adapts to the visual quality of the input sequence is proposed. For compression artifacts, the visual quality of an image is measured by a no-reference block visibility metric. According to the blockiness visibility of an input image, an appropriate set of filter coefficients, trained beforehand, is selected for optimally removing coding artifacts and reconstructing object details. The performance of the proposed algorithm is evaluated on a variety of sequences compressed at different qualities, in comparison to several other deblocking techniques. The proposed method outperforms the others significantly, both objectively and subjectively
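The least-squares training step can be sketched as an ordinary normal-equations fit from degraded-pixel neighborhoods to clean target pixels, with one coefficient set trained per quality class. The 1-D signal, noise model, and 3-tap neighborhood below are synthetic stand-ins, not the paper's actual training data or filter support:

```python
import numpy as np

# Sketch of least-squares filter training (synthetic 1-D stand-in for the
# paper's 2-D training: the degradation here is additive noise, and the
# filter support is a 3-tap neighborhood; neither is the paper's setup).
# We solve min_w || A w - y ||^2, where each row of A is a degraded-pixel
# neighborhood and y holds the corresponding clean target pixels.

rng = np.random.default_rng(0)
clean = rng.normal(size=5000)                       # "ground truth" signal
degraded = clean + 0.1 * rng.normal(size=clean.size)  # stand-in artifacts

# Build 3-tap neighborhoods from the degraded signal.
A = np.stack([degraded[:-2], degraded[1:-1], degraded[2:]], axis=1)
y = clean[1:-1]

# Closed-form least-squares fit of the filter coefficients.
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# At run time, the quality metric would pick which trained w to apply.
restored = A @ w
```

In the paper's scheme the blockiness metric plays the role of the selector: it maps each input image to the pre-trained coefficient set for its quality class, so no reference image is needed at run time.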
- …