145 research outputs found
LSST: from Science Drivers to Reference Design and Anticipated Data Products
(Abridged) We describe here the most ambitious survey currently planned in
the optical, the Large Synoptic Survey Telescope (LSST). A vast array of
science will be enabled by a single wide-deep-fast sky survey, and LSST will
have unique survey capability in the faint time domain. The LSST design is
driven by four main science themes: probing dark energy and dark matter, taking
an inventory of the Solar System, exploring the transient optical sky, and
mapping the Milky Way. LSST will be a wide-field ground-based system sited at
Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m
effective) primary mirror, a 9.6 deg^2 field of view, and a 3.2 Gigapixel
camera. The standard observing sequence will consist of pairs of 15-second
exposures in a given field, with two such visits in each pointing in a given
night. With these repeats, the LSST system is capable of imaging about 10,000
square degrees of sky in a single filter in three nights. The typical 5σ
point-source depth in a single visit in r will be ~24.5 (AB). The
project is in the construction phase and will begin regular survey operations
by 2022. The survey area will be contained within 30,000 deg^2 with
δ < +34.5°, and will be imaged multiple times in six bands, ugrizy,
covering the wavelength range 320--1050 nm. About 90% of the observing time
will be devoted to a deep-wide-fast survey mode which will uniformly observe an
18,000 deg^2 region about 800 times (summed over all six bands) during the
anticipated 10 years of operations, and yield a coadded map to r ~ 27.5. The
remaining 10% of the observing time will be allocated to projects such as a
Very Deep and Fast time domain survey. The goal is to make LSST data products,
including a relational database of about 32 trillion observations of 40 billion
objects, available to the public and scientists around the world.
Comment: 57 pages, 32 color figures, version with high-resolution figures
available from https://www.lsst.org/overvie
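The quoted three-night coverage can be sanity-checked from the figures in the abstract. The per-visit overhead and usable night length below are illustrative assumptions, not LSST specifications:

```python
# Back-of-envelope check of LSST sky coverage, using numbers from the abstract.
# Assumed (not stated in the abstract): ~9 s of readout/slew overhead per visit
# and a ~10 h usable observing night.

FOV_DEG2 = 9.6           # field of view per pointing (from the abstract)
VISIT_S = 2 * 15 + 9     # two 15 s exposures plus assumed overhead
NIGHT_S = 10 * 3600      # assumed usable night length in seconds

visits_per_night = NIGHT_S // VISIT_S         # ~923 visits per night
fields_per_night = visits_per_night // 2      # two visits per field per night
area_per_night = fields_per_night * FOV_DEG2  # ~4400 deg^2 per night

area_three_nights = 3 * area_per_night
print(f"~{area_three_nights:.0f} deg^2 in three nights")
```

Under these assumptions the system covers well over the quoted 10,000 deg^2 in three nights, so the claim holds with margin.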
Modeling and Simulation in Engineering
This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the product design process across various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to allow real-time simulation) without altering the precision of the results.
Parallelized computational 3D video microscopy of freely moving organisms at multiple gigapixels per second
To study the behavior of freely moving model organisms such as zebrafish
(Danio rerio) and fruit flies (Drosophila) across multiple spatial scales, it
would be ideal to use a light microscope that can resolve 3D information over a
wide field of view (FOV) at high speed and high spatial resolution. However, it
is challenging to design an optical instrument to achieve all of these
properties simultaneously. Existing techniques for large-FOV microscopic
imaging and for 3D image measurement typically require many sequential image
snapshots, thus compromising speed and throughput. Here, we present 3D-RAPID, a
computational microscope based on a synchronized array of 54 cameras that can
capture high-speed 3D topographic videos over a 135-cm^2 area, achieving up to
230 frames per second at throughputs exceeding 5 gigapixels (GPs) per second.
3D-RAPID features a 3D reconstruction algorithm that, for each synchronized
temporal snapshot, simultaneously fuses all 54 images seamlessly into a
globally-consistent composite that includes a coregistered 3D height map. The
self-supervised 3D reconstruction algorithm itself trains a
spatiotemporally-compressed convolutional neural network (CNN) that maps raw
photometric images to 3D topography, using stereo overlap redundancy and
ray-propagation physics as the only supervision mechanism. As a result, our
end-to-end 3D reconstruction algorithm is robust to generalization errors and
scales to arbitrarily long videos from arbitrarily sized camera arrays. The
scalable hardware and software design of 3D-RAPID addresses a longstanding
problem in the field of behavioral imaging, enabling parallelized 3D
observation of large collections of freely moving organisms at high
spatiotemporal throughputs, which we demonstrate in ants (Pogonomyrmex
barbatus), fruit flies, and zebrafish larvae.
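The headline throughput implies a minimum aggregate pixel count per synchronized snapshot. The short check below derives it; the abstract does not state per-camera sensor resolution, so the per-camera figure is only the implied lower bound:

```python
# Pixel budget implied by the quoted 3D-RAPID throughput.
CAMERAS = 54        # synchronized camera array size (from the abstract)
FPS = 230           # peak frame rate (from the abstract)
THROUGHPUT = 5e9    # >5 gigapixels per second (from the abstract)

pixels_per_frame = THROUGHPUT / FPS           # aggregate pixels per snapshot
pixels_per_camera = pixels_per_frame / CAMERAS  # implied lower bound per camera

print(f"{pixels_per_frame / 1e6:.1f} MP per composite frame")
print(f"{pixels_per_camera / 1e6:.2f} MP per camera (lower bound)")
```

So each synchronized snapshot must carry at least ~22 MP in total, i.e. roughly 0.4 MP per camera at the peak 230 fps rate.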
The Boston University Photonics Center annual report 2016-2017
This repository item contains an annual report that summarizes activities of the Boston University Photonics Center in the 2016-2017 academic year. The report provides quantitative and descriptive information regarding photonics programs in education, interdisciplinary research, business innovation, and technology development. The Boston University Photonics Center (BUPC) is an interdisciplinary hub for education, research, scholarship, innovation, and technology development associated with practical uses of light. This has undoubtedly been the Photonics Center's best year since I became Director 10 years ago. In the following pages, you will see highlights of the Center's activities in the past year, including more than 100 notable scholarly publications in the leading journals in our field, and the attraction of more than 22 million dollars in new research grants/contracts. Last year I had the honor to lead an international search for the first recipient of the Moustakas Endowed Professorship in Optics and Photonics, in collaboration with ECE Department Chair Clem Karl. This professorship honors the Center's most impactful scholar and one of the Center's founding visionaries, Professor Theodore Moustakas. We are delighted to have awarded this professorship to Professor Ji-Xin Cheng, who joined our faculty this year. The past year also marked the launch of Boston University's Neurophotonics Center, which will be allied closely with the Photonics Center. Leading that Center will be a distinguished new faculty member, Professor David Boas. David and I are together leading a new Neurophotonics NSF Research Traineeship Program that will provide $3M to promote graduate traineeships in this emerging new field. We had a busy summer hosting NSF Sites for Research Experiences for Undergraduates, Research Experiences for Teachers, and the BU Student Satellite Program.
As a community, we emphasized the theme of "Optics of Cancer Imaging" at our annual symposium, hosted by Darren Roblyer. We entered a five-year second phase of NSF funding in our Industry/University Collaborative Research Center on Biophotonic Sensors and Systems, which has become the centerpiece of our translational biophotonics program. That I/UCRC continues to focus on advancing the health care and medical device industries.
A cryogenic liquid-mirror telescope on the moon to study the early universe
We have studied the feasibility and scientific potential of zenith observing
liquid mirror telescopes having 20 to 100 m diameters located on the moon. They
would carry out deep infrared surveys to study the distant universe and follow
up discoveries made with the 6 m James Webb Space Telescope (JWST), with more
detailed images and spectroscopic studies. They could detect objects 100 times
fainter than JWST, observing the first, high-redshift stars in the early
universe and their assembly into galaxies. We explored the scientific
opportunities, key technologies and optimum location of such telescopes. We
have demonstrated critical technologies. For example, the primary mirror would
necessitate a high-reflectivity liquid that does not evaporate in the lunar
vacuum and remains liquid at less than 100 K. We have made a crucial
demonstration by successfully coating an ionic liquid that has negligible vapor
pressure. We also successfully experimented with a liquid mirror spinning on a
superconducting bearing, as will be needed for the cryogenic, vacuum
environment of the telescope. We have investigated issues related to lunar
locations, concluding that locations within a few km of a pole are ideal for
deep sky cover and long integration times. We have located ridges and crater
rims within 0.5 degrees of the North Pole that are illuminated for at least
some sun angles during lunar winter, providing power and temperature control.
We also have identified potential problems, like lunar dust. Issues raised by
our preliminary study demand additional in-depth analyses. These issues must be
fully examined as part of a scientific debate we hope to start with the present
article.
Comment: 35 pages, 11 figures. To appear in Astrophysical Journal June 20 200
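The claim of detecting objects 100 times fainter than JWST translates directly into a depth gain via the standard astronomical magnitude relation, Δm = 2.5 log10(flux ratio):

```python
import math

# A flux ratio of 100 corresponds to a magnitude difference of
# 2.5 * log10(100) = 5 magnitudes of additional depth.
flux_ratio = 100.0
delta_mag = 2.5 * math.log10(flux_ratio)
print(delta_mag)  # 5.0
```

In other words, the proposed lunar liquid-mirror telescopes would reach about 5 magnitudes deeper than JWST.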
The Whole is Greater than the Sum of the Parts: Optimizing the Joint Science Return from LSST, Euclid and WFIRST
The focus of this report is on the opportunities enabled by the combination
of LSST, Euclid and WFIRST, the optical surveys that will be an essential part
of the next decade's astronomy. The sum of these surveys has the potential to
be significantly greater than the contributions of the individual parts. As is
detailed in this report, the combination of these surveys should give us
multi-wavelength high-resolution images of galaxies and broadband data covering
much of the stellar energy spectrum. These stellar and galactic data have the
potential of yielding new insights into topics ranging from the formation
history of the Milky Way to the mass of the neutrino. However, enabling the
astronomy community to fully exploit this multi-instrument data set is a
challenging technical task: for much of the science, we will need to combine
the photometry across multiple wavelengths with varying spectral and spatial
resolution. We identify some of the key science enabled by the combined surveys
and the key technical challenges in achieving the synergies.
Comment: Whitepaper developed at June 2014 U. Penn Workshop; 28 pages, 3
figures
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201