The art of microbe maintenance: value and applications in design
My thesis centers on designing microbial systems and objects for a sustainable future. I propose ideas for bringing microbes into the home so that people come to understand them as part of the environment. Through careful consideration of how my microbe-based material could change across national and social contexts, I create accessible, attractive, and friendly-looking design objects with microbes that address people's fear of microbial life. I strive to facilitate the intersection and interaction between people and technologies in ways that are ultimately harmonious for the well-being of both.
My ultimate goal for the thesis is not only to make this material useful, but also to find design processes that can contribute to the environment by returning the design to nature. Furthermore, I would like to implement technologies into this sustainable material so that I can suggest ways designers can use it for various purposes and for mass production.
Overcoming Overconfidence for Active Learning
It is not an exaggeration to say that recent progress in artificial intelligence depends on large-scale, high-quality data. At the same time, a prevalent issue exists everywhere: the budget for data labeling is constrained. Active learning is a prominent approach to this issue, in which data worth labeling are selected by a model and used to iteratively update it. However, because of the limited amount of data in each iteration, the model is vulnerable to bias and is therefore more likely to yield overconfident predictions. In this paper, we present two novel methods to address the overconfidence that arises in the active learning scenario. The first is an augmentation strategy named Cross-Mix-and-Mix (CMaM), which aims to calibrate the model by expanding the limited training distribution. The second is a selection strategy named Ranked Margin Sampling (RankedMS), which prevents choosing data that lead to overly confident predictions. Through various experiments and analyses, we demonstrate that our proposals facilitate efficient data selection by alleviating overconfidence, even though they are readily applicable.
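The abstract does not spell out the exact RankedMS criterion, so the following is only a minimal sketch of generic margin-based uncertainty sampling, the family of selection rules it belongs to; the model, data loader, and budget are assumed inputs.

```python
import torch
import torch.nn.functional as F

def margin_based_selection(model, unlabeled_loader, budget, device="cpu"):
    """Rank unlabeled samples by softmax margin (top-1 minus top-2 probability)
    and return the indices of the `budget` least-confident ones.
    Generic margin sampling, not the exact RankedMS rule from the paper."""
    model.eval()
    margins, indices = [], []
    with torch.no_grad():
        for batch_idx, (x, _) in enumerate(unlabeled_loader):
            probs = F.softmax(model(x.to(device)), dim=1)
            top2 = probs.topk(2, dim=1).values            # (batch, 2)
            margins.append(top2[:, 0] - top2[:, 1])       # small margin = uncertain
            indices.append(torch.arange(len(x)) + batch_idx * unlabeled_loader.batch_size)
    margins, indices = torch.cat(margins), torch.cat(indices)
    order = margins.argsort()                              # most uncertain first
    return indices[order[:budget]].tolist()
```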
What Makes Lyα Nebulae Glow? Mapping the Polarization of LABd05
"Lyα nebulae" are giant (~100 kpc), glowing gas clouds in the distant universe. The origin of their extended Lyα emission remains a mystery. Some models posit that the Lyα emission is produced when the cloud is photoionized by UV emission from embedded or nearby sources, while others suggest that the Lyα photons originate from an embedded galaxy or AGN and are then resonantly scattered by the cloud. At least in the latter scenario, the observed Lyα emission will be polarized. To test these possibilities, we are conducting imaging polarimetric observations of seven Lyα nebulae. Here we present our results for LABd05, a cloud at z = 2.656 with an obscured, embedded AGN to the northeast of the peak of Lyα emission. We detect significant polarization. The highest polarization fractions P are 10-20% at 20-40 kpc southeast of the Lyα peak, away from the AGN. The lowest P, including upper limits, are ≲5% and lie between the Lyα peak and the AGN. In other words, the polarization map is lopsided, with P increasing from the Lyα peak to the southeast. The measured polarization angles are oriented northeast, roughly perpendicular to the P gradient. This unique polarization pattern suggests that 1) the spatially offset AGN is photoionizing nearby gas, and 2) escaping Lyα photons are scattered by the nebula at larger radii and into our sightline, producing tangentially oriented, radially increasing polarization away from the photoionized region. Finally, we conclude that the interplay between the gas density and ionization profiles produces the observed central peak in the Lyα emission. This also implies that the structure of LABd05 is more complex than assumed by current theoretical spherical or cylindrical models. (Comment: 11 pages, 8 figures)
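The polarization fraction P and angle quoted above are standard linear-polarization quantities; as a hedged illustration (not the paper's own pipeline, and without the noise debiasing a real analysis would apply), they are conventionally derived from Stokes maps as follows.

```python
import numpy as np

def polarization_from_stokes(I, Q, U):
    """Linear polarization fraction P = sqrt(Q^2 + U^2) / I and
    angle theta = 0.5 * arctan2(U, Q), from Stokes I, Q, U maps.
    No debiasing of the positive noise bias in P is applied here."""
    P = np.sqrt(Q**2 + U**2) / I
    theta = 0.5 * np.arctan2(U, Q)   # radians
    return P, np.degrees(theta)
```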
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
In an ever-evolving world, the dynamic nature of knowledge presents challenges for language models trained on static data, leading to outdated encoded information. However, real-world scenarios require models not only to acquire new knowledge but also to overwrite outdated information with updated knowledge. To address this under-explored issue, we introduce EvolvingQA, a temporally evolving question-answering benchmark designed for training and evaluating LMs on an evolving Wikipedia database; construction of the benchmark is automated with a pipeline that uses large language models. The benchmark incorporates question answering as a downstream task to emulate real-world applications. Through EvolvingQA, we uncover that existing continual learning baselines have difficulty updating and forgetting outdated knowledge. Our findings suggest that the models fail to learn updated knowledge because of small weight gradients. Furthermore, we show that the models struggle most with questions requiring numerical or temporal answers about updated knowledge. Our work aims to model the dynamic nature of real-world information, offering a robust measure of the evolution-adaptability of language models. (Comment: 14 pages, 5 figures, 5 tables; accepted at the NeurIPS Syntheticdata4ML workshop, 2023)
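As a rough sketch of the kind of evolution-adaptability check described above, one can compare whether a continually trained model answers with the updated or the outdated fact for each question whose gold answer changed between snapshots; the record fields below are hypothetical, not the EvolvingQA schema.

```python
def score_update_behavior(generate, examples):
    """`generate(question) -> str`; each example is a dict with
    'question', 'old_answer', and 'new_answer' keys."""
    updated = outdated = other = 0
    for ex in examples:
        pred = generate(ex["question"]).strip().lower()
        if ex["new_answer"].lower() in pred:
            updated += 1               # model learned the new fact
        elif ex["old_answer"].lower() in pred:
            outdated += 1              # model retained stale knowledge
        else:
            other += 1
    total = max(len(examples), 1)
    return {"updated": updated / total,
            "outdated": outdated / total,
            "other": other / total}
```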
ReplaceNet: real-time replacement of a biological neural circuit with a hardware-assisted spiking neural network
Recent developments in artificial neural networks and their learning algorithms have enabled new research directions in computer vision, language modeling, and neuroscience. Among various neural network algorithms, spiking neural networks (SNNs) are well suited for understanding the behavior of biological neural circuits. In this work, we propose to guide the training of a sparse SNN so that it can replace a sub-region of a cultured hippocampal network using limited hardware resources. To verify our approach with a realistic experimental setup, we record spikes of cultured hippocampal neurons with a microelectrode array (in vitro). The main focus of this work is to cut unimportant synapses on the fly during SNN training so that the model can be realized on resource-constrained hardware, e.g., implantable devices. To do so, we adopt a simple STDP learning rule to easily select the important synapses that affect the quality of spike-timing learning. By combining the STDP rule with online supervised learning, we can precisely predict the spike pattern of the cultured network in real time. The reduction in model complexity, i.e., the reduced number of connections, significantly reduces the required hardware resources, which is crucial for developing an implantable chip for the treatment of neurological disorders. In addition to the new learning algorithm, we prototype sparse SNN hardware on a small FPGA with pipelined execution and parallel computing to verify the possibility of real-time replacement. As a result, we can replace a sub-region of the biological neural circuit within 22 μs using 2.5× fewer hardware resources, i.e., by allowing 80% sparsity in the SNN model, compared to the fully-connected SNN model. With energy-efficient algorithms and hardware, this work presents an essential step toward real-time neuroprosthetic computation.
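To illustrate the idea of using an STDP-derived importance score to decide which synapses to keep, here is a minimal sketch under generic assumptions (a pair-based STDP trace and magnitude-based pruning); it is not ReplaceNet's exact rule or hardware mapping.

```python
import numpy as np

def stdp_importance(pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP trace for one synapse: potentiate when the presynaptic
    spike precedes the postsynaptic spike, depress otherwise. Times in ms."""
    score = 0.0
    for t_post in post_spikes:
        for t_pre in pre_spikes:
            dt = t_post - t_pre
            if dt > 0:
                score += a_plus * np.exp(-dt / tau)
            elif dt < 0:
                score -= a_minus * np.exp(dt / tau)
    return score

def prune_by_importance(weights, importance, sparsity=0.8):
    """Zero out the `sparsity` fraction of synapses with the lowest |importance|."""
    flat = np.abs(importance).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
    mask = np.abs(importance) >= threshold
    return weights * mask, mask
```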
Primary transcriptome and translatome analysis determines transcriptional and translational regulatory elements encoded in the Streptomyces clavuligerus genome
Determining transcriptional and translational regulatory elements in GC-rich Streptomyces genomes is essential to elucidating the complex regulatory networks that govern secondary metabolite biosynthetic gene cluster (BGC) expression. However, information about such regulatory elements has been limited for Streptomyces genomes. To address this limitation, a high-quality genome sequence of the β-lactam antibiotic-producing Streptomyces clavuligerus ATCC 27064 is completed, containing 7163 newly annotated genes. This provides a fundamental reference genome for integrating multiple genome-scale data types, including dRNA-Seq, RNA-Seq, and ribosome profiling. Data integration results in the precise determination of 2659 transcription start sites, which reveal transcriptional and translational regulatory elements, including -10 and -35 promoter components specific to sigma (σ) factors, and the 5'-untranslated region as a determinant of translation-efficiency regulation. In particular, sequence analysis of a wide diversity of -35 components enables us to predict potential σ-factor regulons, along with various spacer lengths between the -10 and -35 elements. Finally, the primary transcriptome landscape of the β-lactam biosynthetic pathway is analyzed, suggesting temporal changes in metabolism for the synthesis of secondary metabolites driven by transcriptional regulation. This comprehensive genetic information provides a versatile genetic resource for rational engineering of secondary metabolite BGCs in Streptomyces.
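As a purely illustrative sketch of the kind of analysis that follows TSS mapping, the snippet below extracts the windows upstream of each TSS where -10 and -35 elements are expected, so they could be passed to a motif finder; the coordinate offsets and input conventions are assumptions, not the paper's pipeline.

```python
def upstream_windows(genome, tss_list, minus10=(4, 14), minus35=(28, 40)):
    """genome: DNA string; tss_list: iterable of (position, strand) with
    0-based TSS positions. Returns (minus10_seq, minus35_seq) pairs."""
    def revcomp(s):
        return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]
    windows = []
    for pos, strand in tss_list:
        if strand == "+":
            m10 = genome[max(pos - minus10[1], 0):max(pos - minus10[0], 0)]
            m35 = genome[max(pos - minus35[1], 0):max(pos - minus35[0], 0)]
        else:  # upstream lies at larger coordinates on the minus strand
            m10 = revcomp(genome[pos + minus10[0]:pos + minus10[1]])
            m35 = revcomp(genome[pos + minus35[0]:pos + minus35[1]])
        windows.append((m10, m35))
    return windows
```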
Translation and preliminary validation of a Korean version of the parental reflective functioning questionnaire
This study aimed to explore the factor structure, reliability, and validity of a Korean translation of the Parental Reflective Functioning Questionnaire (PRFQ). The PRFQ consists of three subscales: prementalizing modes, certainty about mental states, and interest and curiosity in mental states. A convenience sample of 163 Korean parents completed the K‐PRFQ. Exploratory factor analysis showed three factors that mapped onto the original PRFQ factors, but items from the original prementalizing modes subscale clustered into two additional factors. Data from a subsample (n = 67) showed that the certainty about mental states and interest and curiosity in mental states subscales correlated positively with more optimal self‐reported parenting. We discuss the validity of using the PRFQ in a collectivistic culture.
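For readers unfamiliar with the analysis named above, here is a minimal exploratory-factor-analysis sketch using the third-party factor_analyzer package; the item-level DataFrame and the choice of oblique rotation are assumptions, not the study's exact setup.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def run_efa(items: pd.DataFrame, n_factors: int = 3):
    """items: respondents x questionnaire-item responses."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    variance = fa.get_factor_variance()   # (variance, proportion, cumulative)
    return loadings, variance
```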
ODIN: Where Do Lyman-alpha Blobs Live? Contextualizing Blob Environments within the Large-Scale Structure
While many Lyman-alpha Blobs (LABs) are found in and around several well-known protoclusters at high redshift, how they trace the underlying large-scale structure is still poorly understood. In this work, we utilize 5,352 Lyman-alpha emitters (LAEs) and 129 LABs at z = 3.1 identified over a 9.5 square-degree area in early data from the ongoing One-hundred-deg² DECam Imaging in Narrowbands (ODIN) survey to investigate this question. Using LAEs as tracers of the underlying matter distribution, we identify overdense structures as galaxy groups, protoclusters, and filaments of the cosmic web. We find that LABs preferentially reside in regions of higher-than-average density and are located in closer proximity to overdense structures, which represent the sites of protoclusters and their substructures. Moreover, protoclusters hosting one or more LABs tend to have a higher descendant mass than those that do not. Blobs are also strongly associated with filaments of the cosmic web, with 70% of the population lying within a projected distance of 2.4 pMpc of a filament. We show that the proximity of LABs to protoclusters is naturally explained by their association with filaments, as large cosmic structures are where many filaments converge. The contiguous wide-field coverage of the ODIN survey allows us for the first time to firmly establish a connection between LABs as a population and their environment. (Comment: 24 pages, 17 figures; submitted to ApJ)
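The blob-to-filament distances quoted above amount to a nearest-neighbor measurement against a traced filament skeleton; as a hedged sketch (the skeleton-tracing method and coordinate conventions are assumptions, not the paper's), the measurement itself could look like this.

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_to_nearest_filament(lab_xy, filament_xy, kpc_per_unit=None):
    """Projected distance from each LAB to the nearest point on a filament
    skeleton; both inputs are (N, 2) arrays in the same projected coordinates."""
    tree = cKDTree(filament_xy)
    dist, _ = tree.query(lab_xy, k=1)
    if kpc_per_unit is not None:        # optional unit conversion
        dist = dist * kpc_per_unit
    return dist
```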
NICE 2023 Zero-shot Image Captioning Challenge
In this report, we introduce the NICE project (https://nice.lgresearch.ai/) and share the results and outcomes of the NICE 2023 challenge. This project is designed to challenge the computer vision community to develop robust image captioning models that advance the state of the art in both accuracy and fairness. Through the challenge, the image captioning models were tested on a new evaluation dataset that includes a large variety of visual concepts from many domains. No specific training data were provided for the challenge, so challenge entries were required to adapt to new types of image descriptions that had not been seen during training. This report includes information on the newly proposed NICE dataset, evaluation methods, challenge results, and technical details of the top-ranking entries. We expect that the outcomes of the challenge will contribute to the improvement of AI models on various vision-language tasks. (Comment: Tech report; project page: https://nice.lgresearch.ai)
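The abstract does not state the exact evaluation protocol; as one hedged example of how captioning entries are commonly scored, the snippet below computes CIDEr with the pycocoevalcap package (whether NICE used exactly this metric is an assumption).

```python
from pycocoevalcap.cider.cider import Cider

def cider_score(references, candidates):
    """references: {image_id: [reference captions]},
    candidates: {image_id: [single generated caption]}."""
    gts = {k: [c.lower() for c in v] for k, v in references.items()}
    res = {k: [candidates[k][0].lower()] for k in candidates}
    corpus_score, per_image = Cider().compute_score(gts, res)
    return corpus_score, per_image
```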