
    Low complexity hardware oriented H.264/AVC motion estimation algorithm and related low power and low cost architecture design

    Degree system: new; Report number: Kou 2999; Degree type: Doctor of Engineering; Date conferred: 2010/3/15; Waseda University degree number: Shin 525

    SPHR-SAR-Net: Superpixel High-resolution SAR Imaging Network Based on Nonlocal Total Variation

    High resolution is a key trend in the development of synthetic aperture radar (SAR), enabling the capture of fine details and accurate representation of backscattering properties. However, traditional high-resolution SAR imaging algorithms face several challenges. First, these algorithms tend to focus on local information, neglecting non-local information between different pixel patches. Second, speckle is more pronounced and harder to filter out in high-resolution SAR images. Third, high-resolution SAR imaging generally involves high time and computational complexity, making real-time imaging difficult to achieve. To address these issues, we propose a Superpixel High-Resolution SAR Imaging Network (SPHR-SAR-Net) for rapid despeckling in high-resolution SAR mode. Building on superpixel techniques, we first combine non-convex and non-local total variation as a compound regularization. This approach despeckles more effectively, better manages the relationships between pixels, and reduces the bias effects caused by convex constraints. We then solve the compound regularization model with the Alternating Direction Method of Multipliers (ADMM) algorithm and unfold it into a Deep Unfolded Network (DUN). The network's parameters are learned adaptively in a data-driven manner, and the learned network significantly increases imaging speed. The Deep Unfolded Network is also compatible with high-resolution imaging modes such as spotlight, staring spotlight, and sliding spotlight. In this paper, we demonstrate the superiority of SPHR-SAR-Net through experiments on both simulated and real SAR scenarios. The results indicate that SPHR-SAR-Net can rapidly perform high-resolution SAR imaging from raw echo data, producing accurate imaging results.
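    The unfolding idea described above can be illustrated with a minimal, hypothetical sketch. Here a plain l1 regularizer stands in for the paper's compound non-convex/non-local total variation term, and the fixed penalty and threshold constants play the role of the per-layer parameters a DUN would learn from data; none of this reproduces the actual SPHR-SAR-Net architecture.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unfolded_admm(A, y, n_layers=50, rho=1.0, lam=0.01):
    """K-layer unfolded ADMM for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    In a trained DUN, rho and lam would be learned per layer; here they
    are fixed, so this reduces to plain ADMM truncated at n_layers.
    """
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    # Pre-factor the x-update system: (A^T A + rho*I) x = A^T y + rho*(z - u)
    M = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(n_layers):
        x = np.linalg.solve(M, Aty + rho * (z - u))  # data-fidelity step
        z = soft_threshold(x + u, lam / rho)         # proximal (shrinkage) step
        u = u + x - z                                # dual update
    return z
```

    Each loop iteration becomes one "layer" of the network; replacing the fixed `rho` and `lam` with trainable per-layer values is what makes the scheme data-driven.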

    Kepler-432: a red giant interacting with one of its two long period giant planets

    We report the discovery of Kepler-432b, a giant planet ($M_b = 5.41^{+0.32}_{-0.18}\,M_{\rm Jup}$, $R_b = 1.145^{+0.036}_{-0.039}\,R_{\rm Jup}$) transiting an evolved star ($M_\star = 1.32^{+0.10}_{-0.07}\,M_\odot$, $R_\star = 4.06^{+0.12}_{-0.08}\,R_\odot$) with an orbital period of $P_b = 52.501129^{+0.000067}_{-0.000053}$ days. Radial velocities (RVs) reveal that Kepler-432b orbits its parent star with an eccentricity of $e = 0.5134^{+0.0098}_{-0.0089}$, which we also measure independently with asterodensity profiling (AP; $e = 0.507^{+0.039}_{-0.114}$), thereby confirming the validity of AP on this particular evolved star. The well-determined planetary properties and unusually large mass also make this planet an important benchmark for theoretical models of super-Jupiter formation. Long-term RV monitoring detected the presence of a non-transiting outer planet (Kepler-432c; $M_c \sin{i_c} = 2.43^{+0.22}_{-0.24}\,M_{\rm Jup}$, $P_c = 406.2^{+3.9}_{-2.5}$ days), and adaptive optics imaging revealed a nearby ($0\farcs87$), faint companion (Kepler-432B) that is a physically bound M dwarf. The host star exhibits high signal-to-noise asteroseismic oscillations, which enable precise measurements of the stellar mass, radius and age. Analysis of the rotational splitting of the oscillation modes additionally reveals the stellar spin axis to be nearly edge-on, which suggests that the stellar spin is likely well-aligned with the orbit of the transiting planet. Despite its long period, the obliquity of the 52.5-day orbit may have been shaped by star-planet interaction in a manner similar to hot Jupiter systems, and we present observational and theoretical evidence to support this scenario. Finally, as a short-period outlier among giant planets orbiting giant stars, study of Kepler-432b may help explain the distribution of massive planets orbiting giant stars interior to 1 AU. Comment: 22 pages, 19 figures, 5 tables. Accepted to ApJ on Jan 24, 2015 (submitted Nov 11, 2014). Updated with minor changes to match published version.

    Temporal column abundances of atmospheric nitrous oxide at the University of Tennessee, Knoxville

    This dissertation reports the detection of real-time concentration levels of nitrous oxide in the earth's atmosphere at the University of Tennessee, Knoxville, Tennessee, and describes the integration of a suntracker with the 5-meter Littrow spectrometric system at the University of Tennessee Complex Systems Laboratory. Atmospheric nitrous oxide (N2O) is an important trace gas in the earth's atmosphere. Not only does it have implications for stratospheric ozone depletion, it is also an important greenhouse gas. Since the main sources of N2O are agricultural activities, this study is motivated in part by the location of the University of Tennessee, which lies in a large agricultural area. Tropospheric abundances of N2O are mostly constant worldwide, with only slight local variations due to N2O sources. This study quantifies any local variations in column abundances of nitrous oxide using ground-based solar infrared spectroscopy.

    Variational and learning models for image and time series inverse problems

    Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both these model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes outperform many other existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation purposes and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme that employs a deep learning based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep learning based methods. We boost the performance of supervised strategies, such as trained convolutional and recurrent networks, and of unsupervised deep learning strategies, such as the Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
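    The Plug-and-Play scheme mentioned in the abstract can be sketched in a few lines: a gradient step on the data-fidelity term alternates with a denoiser applied as an implicit prior. Everything below is a hypothetical stand-in, not the thesis's method: a symmetric 1-D blur replaces the real forward operator, and a small moving-average filter replaces the trained deep denoiser.

```python
import numpy as np

def blur(x, kernel):
    """Symmetric 1-D convolution standing in for the forward operator A."""
    return np.convolve(x, kernel, mode="same")

def smooth_denoiser(x):
    """Toy denoiser (light moving average) replacing a learned network."""
    return np.convolve(x, [0.1, 0.8, 0.1], mode="same")

def pnp_gradient_descent(y, kernel, n_iter=100, step=1.0):
    """Plug-and-Play iteration: x <- D(x - step * A^T (A x - y)).

    For a symmetric kernel, A is self-adjoint, so applying A^T is
    again just `blur`. The denoiser D plays the role of the prior.
    """
    x = y.copy()
    for _ in range(n_iter):
        residual = blur(x, kernel) - y            # data-fidelity residual
        x = x - step * blur(residual, kernel)     # gradient step on 0.5*||Ax-y||^2
        x = smooth_denoiser(x)                    # denoiser as implicit regularizer
    return x
```

    Swapping `smooth_denoiser` for a pretrained network (and, as in the thesis, working on the gradient domain) is exactly the Plug-and-Play substitution; the convergence analysis then hinges on properties of the denoiser.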

    Data-constrained solar modeling with GX simulator

    To facilitate the study of solar flares and active regions, we have created a modeling framework, the freely distributed GX Simulator IDL package, that combines 3D magnetic and plasma structures with thermal and nonthermal models of the chromosphere, transition region, and corona. Its object-based modular architecture, which runs on Windows, Mac, and Unix/Linux platforms, offers the ability to either import 3D density and temperature distribution models, or to assign numerically defined coronal or chromospheric temperatures and densities, or their distributions, to each individual voxel. GX Simulator can apply parametric heating models involving average properties of the magnetic field lines crossing a given voxel, as well as compute and investigate the spatial and spectral properties of radio, (sub)millimeter, EUV, and X-ray emissions calculated from the model, and quantitatively compare them with observations. The package includes a fully automatic model production pipeline that, based on minimal user input, downloads the required SDO/HMI vector magnetic field data, performs potential or nonlinear force-free field extrapolations, populates the magnetic field skeleton with parameterized heated plasma coronal models that assume either steady-state or impulsive plasma heating, and generates non-LTE density and temperature distribution models of the chromosphere that are constrained by photospheric measurements. The standardized models produced by this pipeline may be further customized through specialized IDL scripts or a set of interactive tools provided by the graphical user interface. Here, we describe the GX Simulator framework and its applications.

    Dielectron Production in Heavy Ion Collisions at 158 GeV/c per Nucleon

    In this paper, the low-mass electron pair production in 158 AGeV/c Pb-Au collisions is investigated with the Cherenkov Ring Electron Spectrometer (CERES) at the Super Proton Synchrotron (SPS) accelerator at CERN. The main goal is to search for modifications of hadron properties in hot and dense nuclear matter. The presented re-analysis of the 1996 data set focuses on a detailed study of the combinatorial-background subtraction by means of the mixed-event technique. The results confirm previous findings of CERES. The dielectron production in the mass range 0.25 < m(ee) < 2 GeV/c^2 is enhanced by a factor of 3.0 ± 1.3 (stat.) ± 1.2 (syst.) over the expectation from neutral meson decays. The data are compared to transport model calculations and seem to favor the version including in-medium effects. Furthermore, the development of a new technology to manufacture ultralightweight mirrors for Ring Imaging Cherenkov (RICH) detectors is described. Replacing the RICH-2 glass mirror with a mirror almost transparent to electrons would considerably improve the performance of the upgraded CERES detector system, which includes a radial Time Projection Chamber (TPC). Comment: 152 pages, 142 figures, published in http://elib.tu-darmstadt.de/diss/00019

    Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2018

    This open access book features a selection of high-quality papers from the presentations at the International Conference on Spectral and High-Order Methods 2018, offering an overview of the depth and breadth of the activities within this important research area. The carefully reviewed papers provide a snapshot of the state of the art, while the extensive bibliography helps initiate new research directions.

    Approximate and timing-speculative hardware design for high-performance and energy-efficient video processing

    Since the end of 2-D transistor scaling appeared on the horizon, innovative circuit design paradigms have been on the rise to go beyond the well-established, ultraconservative exact computing. Many compute-intensive applications – such as video processing – exhibit an intrinsic error resilience and do not necessarily require perfect accuracy in their numerical operations. Approximate computing (AxC) is emerging as a design alternative that improves the performance and energy efficiency of many applications by trading their intrinsic error tolerance for algorithm and circuit efficiency. Exact computing also imposes worst-case timing on the conventional design of hardware accelerators to ensure reliability, leading to an efficiency loss. Conversely, the timing-speculative (TS) hardware design paradigm allows increasing the frequency or decreasing the voltage beyond the limits determined by static timing analysis (STA), thereby narrowing the pessimistic safety margins that conventional design methods implement to prevent hardware timing errors. Timing errors can be evaluated by accurate gate-level simulation, but a significant gap remains: how do these timing errors propagate from the underlying hardware all the way up to the behavior of the entire algorithm, where they may degrade the performance and quality of service of the application at stake? This thesis tackles this issue by developing and demonstrating a cross-layer framework capable of investigating both AxC techniques (i.e., approximate arithmetic operators, approximate synthesis, and gate-level pruning) and TS hardware design (i.e., voltage over-scaling, frequency over-clocking, temperature rise, and device aging). The cross-layer framework can simulate both timing errors and logic errors at the gate level by crossing them dynamically, linking the hardware results with the algorithm level, and vice versa, during the application's runtime. 
    Existing frameworks investigate AxC and TS techniques at the circuit level (i.e., at the output of the accelerator), agnostic to the ultimate impact at the application level (i.e., where the impact truly manifests), leading to suboptimal results. Unlike the state of the art, the proposed framework offers a holistic approach to assessing the trade-offs of AxC and TS techniques at the application level. It maximizes energy efficiency and performance by identifying the maximum approximation levels at the application level that still fulfill the required good-enough quality. This thesis evaluates the framework with an 8-way SAD (Sum of Absolute Differences) hardware accelerator operating inside an HEVC encoder as a case study. Application-level results show that the SAD based on approximate adders achieves savings of up to 45% in energy/operation with an increase of only 1.9% in BD-BR. On the other hand, VOS (Voltage Over-Scaling) applied to the SAD generates savings of up to 16.5% in energy/operation with an increase of around 6% in BD-BR. The framework also reveals that the boost of about 6.96% (at 50°) to 17.41% (at 75° with 10-year aging) in the maximum clock frequency achieved with TS hardware design is entirely lost to processing overhead, ranging from 8.06% to 46.96%, when an unreliable algorithm is chosen for the block matching algorithm (BMA). We also show that this overhead can be avoided by adopting a reliable BMA. This thesis also presents approximate DTT (Discrete Tchebichef Transform) hardware proposals that explore transform-matrix approximation, truncation, and pruning. The results show that the approximate DTT hardware proposal increases the maximum frequency by up to 64%, reduces circuit area by up to 43.6%, and saves up to 65.4% in power dissipation. The DTT proposal mapped to an FPGA shows an increase of up to 58.9% in maximum frequency and savings of about 28.7% and 32.2% in slices and dynamic power, respectively, compared with state-of-the-art designs.
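    The approximate-adder SAD idea can be illustrated with a minimal software sketch. The lower-part-OR adder (LOA) below is a generic, hypothetical stand-in for the approximate adders evaluated in the thesis: the k least-significant bits are OR-ed instead of added, which removes carry logic in hardware at the cost of a small, bounded underestimate of each sum.

```python
def loa_add(a, b, k=2):
    """Lower-part-OR approximate adder: exact on high bits, OR on the k LSBs."""
    mask = (1 << k) - 1
    low = (a & mask) | (b & mask)   # approximate low part: no carry generated
    high = (a >> k) + (b >> k)      # exact addition of the high parts
    return (high << k) | low

def sad_exact(cur, ref):
    """Exact Sum of Absolute Differences between two pixel blocks."""
    return sum(abs(c - r) for c, r in zip(cur, ref))

def sad_approx(cur, ref, k=2):
    """SAD accumulated with the approximate adder, as an AxC datapath would."""
    acc = 0
    for c, r in zip(cur, ref):
        acc = loa_add(acc, abs(c - r), k)
    return acc
```

    Because a + b = (a | b) + (a & b), each approximate add underestimates the exact sum by the low-bit AND, i.e. by at most 2^k - 1; accumulating an n-pixel block therefore bounds the total SAD error by n(2^k - 1), which is the kind of bounded, quality-tunable error that makes SAD a good AxC target.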