Automated detection of block falls in the north polar region of Mars
We developed a change detection method for the identification of ice block falls using NASA's HiRISE images of the north polar scarps on Mars. Our method is based on a Support Vector Machine (SVM), trained using Histograms of Oriented Gradients (HOG), and on blob detection. The SVM detects potential new blocks between a set of images; the blob detection then confirms the identification of a block inside the area indicated by the SVM and derives the shape of the block. The results from the automatic analysis were compared with block statistics from visual inspection. We tested our method on six areas of 1000x1000 pixels each, in which several hundred blocks were identified. For these test areas, the method produced a true positive rate of ~75% for blocks larger than 0.7 m (i.e., approximately 3 times the available ground pixel size) and a false discovery rate of ~8.5%. Using blob detection, we also recover the size of each block to within 3 pixels of its actual size.
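The two-stage pipeline described above can be sketched in a few lines: an SVM scores HOG descriptors of candidate windows, and blob detection then confirms and measures any block inside a window the SVM flags. This is a minimal illustration only; the 64-pixel window, the synthetic training patches, the detector thresholds, and the 0.25 m pixel scale are assumptions standing in for the authors' actual HiRISE data and parameters.

```python
# Minimal sketch of the two-stage pipeline, assuming a 64-pixel candidate
# window, synthetic training patches, and a 0.25 m ground pixel scale
# (none of these come from the paper; real HiRISE cutouts would replace them).
import numpy as np
from skimage.feature import hog, blob_log
from sklearn.svm import LinearSVC

PATCH = 64  # candidate window size in pixels (assumed)
rng = np.random.default_rng(0)

def hog_descriptor(patch):
    """HOG feature vector of one grayscale patch."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def synthetic_patch(with_block):
    """Stand-in for a HiRISE difference patch: noise plus an optional bright block."""
    patch = rng.normal(0.5, 0.05, (PATCH, PATCH))
    if with_block:
        r, c = rng.integers(16, 48, 2)
        patch[r - 4:r + 4, c - 4:c + 4] += 0.4
    return np.clip(patch, 0.0, 1.0)

# Stage 1: SVM trained on HOG descriptors of labelled patches (block / no block).
X = [hog_descriptor(synthetic_patch(lab)) for lab in [True] * 50 + [False] * 50]
y = [1] * 50 + [0] * 50
svm = LinearSVC(C=1.0).fit(X, y)

# Stage 2: blob detection confirms and sizes blocks inside windows the SVM flags.
def confirm_blocks(window, pixel_scale_m=0.25):
    """Return (row, col, diameter in metres) for blobs found in a flagged window."""
    blobs = blob_log(window, min_sigma=1, max_sigma=8, threshold=0.05)
    return [(r, c, 2.0 * s * np.sqrt(2) * pixel_scale_m) for r, c, s in blobs]

candidate = synthetic_patch(True)
if svm.predict([hog_descriptor(candidate)])[0] == 1:
    print(confirm_blocks(candidate))
```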
Explanatory debugging: Supporting end-user debugging of machine-learned programs
Many machine-learning algorithms learn rules of behavior from individual end users, such as task-oriented desktop organizers and handwriting recognizers. These rules form a “program” that tells the computer what to do when future inputs arrive. Little research has explored how an end user can debug these programs when they make mistakes. We present our progress toward enabling end users to debug these learned programs via a Natural Programming methodology. We began with a formative study exploring how users reason about and correct a text-classification program. From the results, we derived and prototyped a concept based on “explanatory debugging”, then empirically evaluated it. Our results contribute methods for exposing a learned program's logic to end users and for eliciting user corrections to improve the program's predictions
Is megestrol acetate safe and effective for malnourished nursing home residents?
Q: Is megestrol acetate safe and effective for malnourished nursing home residents? A: No. Megestrol acetate (MA) is neither safe nor effective for stimulating appetite in malnourished nursing home residents. It increases the risk of deep vein thrombosis (DVT) (strength of recommendation [SOR]: C, 2 retrospective chart reviews), but isn't associated with other new or worsening events or disorders (SOR: B, single randomized controlled trial [RCT]). Over a 25-week period, MA wasn't associated with increased mortality (SOR: B, single RCT). After 44 months, however, MA-treated patients showed decreased median survival (SOR: B, single case-control study). Consistent, meaningful weight gain was not observed with MA treatment (SOR: B, single case-control study, single RCT, 2 retrospective chart reviews, single prospective case series). Authors: Frances K. Wen, PhD; James Millar, MD, University of Oklahoma School of Community Medicine, Tulsa; Linda Oberst-Walsh, MD, University of Colorado School of Medicine, Denver; Joan Nashelsky, MLS, Family Physicians Inquiries Network, Iowa City.
Ubiquitination and proteasomal degradation of ATG12 regulates its proapoptotic activity
During macroautophagy, conjugation of ATG12 to ATG5 is essential for LC3 lipidation and autophagosome formation. Additionally, ATG12 has ATG5-independent functions in diverse processes including mitochondrial fusion and mitochondrial-dependent apoptosis. In this study, we investigated the regulation of free ATG12. In stark contrast to the stable ATG12–ATG5 conjugate, we find that free ATG12 is highly unstable and rapidly degraded in a proteasome-dependent manner. Surprisingly, ATG12, itself a ubiquitin-like protein, is directly ubiquitinated and this promotes its proteasomal degradation. As a functional consequence of its turnover, accumulation of free ATG12 contributes to proteasome inhibitor-mediated apoptosis, a finding that may be clinically important given the use of proteasome inhibitors as anticancer agents. Collectively, our results reveal a novel interconnection between autophagy, proteasome activity, and cell death mediated by the ubiquitin-like properties of ATG12
Phobos DTM and Coordinate Refinement for Phobos-Grunt Mission Support.
Images obtained by the High Resolution Stereo Camera (HRSC) during recent Phobos flybys were used to study the proposed new landing site area of the Russian Phobos-Grunt mission, scheduled for launch in 2011 [1]. From the stereo images (resolution of up to 4.4 m/pixel), a digital terrain model (DTM) with a lateral resolution of 100 m per pixel and a relative point accuracy of ±15 m was determined. Images and DTM were registered to the established Phobos control point network [7]. A map of the landing site area was produced, enabling mission planners and scientists to extract accurate body-fixed coordinates of features in the Phobos-Grunt landing site area.
End-user feature labeling: a locally-weighted regression approach
When intelligent interfaces, such as intelligent desktop assistants, email classifiers, and recommender systems, customize themselves to a particular end user, such customizations can decrease productivity and increase frustration due to inaccurate predictions - especially in early stages, when training data is limited. The end user can improve the learning algorithm by tediously labeling a substantial amount of additional training data, but this takes time and is too ad hoc to target a particular area of inaccuracy. To solve this problem, we propose a new learning algorithm based on locally weighted regression for feature labeling by end users, enabling them to point out which features are important for a class, rather than provide new training instances. In our user study, the first allowing ordinary end users to freely choose features to label directly from text documents, our algorithm was both more effective than others at leveraging end users' feature labels to improve the learning algorithm, and more robust to real users' noisy feature labels. These results strongly suggest that allowing users to freely choose features to label is a promising method for allowing end users to improve learning algorithms effectively
End-User Feature Labeling via Locally Weighted Logistic Regression
Applications that adapt to a particular end user often make inaccurate predictions during the early stages when training data is limited. Although an end user can improve the learning algorithm by labeling more training data, this process is time consuming and too ad hoc to target a particular area of inaccuracy. To solve this problem, we propose a new learning algorithm based on Locally Weighted Logistic Regression for feature labeling by end users, enabling them to point out which features are important for a class, rather than provide new training instances. In our user study, the first allowing ordinary end users to freely choose features to label directly from text documents, our algorithm was more effective than others at leveraging end users’ feature labels to improve the learning algorithm. Our results strongly suggest that allowing users to freely choose features to label is a promising method for allowing end users to improve learning algorithms effectively
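A minimal sketch of locally weighted logistic regression with user feature labels follows: training instances are weighted by a Gaussian kernel around the query document, and features the end user marks as important get a larger weight in the distance metric. The kernel choice, bandwidth, feature boost, and toy data are illustrative assumptions, not the exact formulation evaluated in these papers.

```python
# A minimal locally weighted logistic regression sketch with user feature labels.
# The Gaussian kernel, bandwidth, feature boost, and toy data are assumptions
# for illustration, not the exact formulation evaluated in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def lwlr_predict(X_train, y_train, x_query, labeled_features,
                 bandwidth=1.0, boost=5.0):
    """Fit a logistic model locally around x_query and predict its class.

    labeled_features: indices the end user marked as important; they get a
    larger weight in the distance metric, so documents that resemble the
    query in those features dominate the local fit."""
    feat_w = np.ones(X_train.shape[1])
    feat_w[list(labeled_features)] *= boost
    d2 = ((X_train - x_query) ** 2 * feat_w).sum(axis=1)
    sample_w = np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel instance weights
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train, sample_weight=sample_w)
    return clf.predict(x_query.reshape(1, -1))[0]

# Toy usage: bag-of-words counts for two classes; the user labels feature 0.
rng = np.random.default_rng(1)
X = rng.poisson(1.0, (40, 5)).astype(float)
y = (X[:, 0] > 1).astype(int)              # the class happens to follow feature 0
print(lwlr_predict(X, y, X[3], labeled_features=[0]))
```

Because the locality enters only through sample_weight, any scikit-learn classifier that accepts instance weights could be swapped in here.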
Integrating rich user feedback into intelligent user interfaces
The potential for machine learning systems to improve via a mutually beneficial exchange of information with users has yet to be explored in much detail. Previously, we found that users were willing to provide a generous amount of rich feedback to machine learning systems, and that some types of this rich feedback seem promising for assimilation by machine learning algorithms. Following up on those findings, we ran an experiment to assess the viability of incorporating real-time keyword-based feedback during initial training phases, when data is limited. We found that rich feedback improved accuracy, but an initial unstable period often caused large fluctuations in classifier behavior. Participants were able to give feedback by relying heavily on the system's communication in order to respond to these changes. The results show that, in order to benefit from the user's knowledge, machine learning systems must be able to absorb keyword-based rich feedback in a graceful manner and provide clear explanations of their predictions.
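One way a classifier could absorb keyword-based feedback gracefully, in the spirit of the conclusion above, is to turn each (keyword, class) hint into a lightly weighted pseudo-document. The sketch below uses that device with a multinomial naive Bayes model; the pseudo-document trick, the toy corpus, and the 0.5 weight are assumptions for illustration, not the mechanism studied in the experiment.

```python
# Hedged sketch: fold keyword feedback into a text classifier by adding
# lightly weighted pseudo-documents (assumed mechanism, toy data).
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["trade deficit widens", "goal scored in final minute",
        "market rally continues", "coach praises the squad"]
labels = ["finance", "sport", "finance", "sport"]
feedback = [("dividend", "finance")]            # user: this keyword signals "finance"

vec = CountVectorizer()
vec.fit(docs + [kw for kw, _ in feedback])      # vocabulary also covers feedback words

def fit_with_keyword_feedback(docs, labels, feedback, weight=0.5):
    """Each (keyword, class) pair becomes a pseudo-document with a small
    sample weight, so early keyword hints nudge the model without causing
    large swings in classifier behavior."""
    X = vstack([vec.transform(docs), vec.transform([kw for kw, _ in feedback])])
    y = list(labels) + [cls for _, cls in feedback]
    w = np.r_[np.ones(len(docs)), np.full(len(feedback), weight)]
    return MultinomialNB().fit(X, y, sample_weight=w)

clf = fit_with_keyword_feedback(docs, labels, feedback)
print(clf.predict(vec.transform(["dividend announcement expected"])))
```

Because the pseudo-documents carry a small weight, later real training data gradually outweighs the hints, which is one way to temper the fluctuations described above.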
Optimizing the distribution of tie points for the bundle adjustment of HRSC image mosaics
For a systematic mapping of the Martian surface, the Mars Express orbiter is equipped with a multi-line scanner: since the beginning of 2004, the High Resolution Stereo Camera (HRSC) has regularly acquired long image strips. By now, more than 4,000 strips covering nearly the whole planet are available. Thanks to its nine channels, each with a different viewing direction and partly with different optical filters, each strip provides 3D and color information and allows the generation of digital terrain models (DTMs) and orthophotos. To map larger regions, neighboring HRSC strips can be combined into DTM and orthophoto mosaics. The global mapping scheme Mars Chart 30 (MC-30) is used to define the extent of these mosaics. In order to avoid unreasonably large data volumes, each MC-30 tile is divided into two parts, combining about 90 strips each. To ensure a seamless fit of these strips, several radiometric and geometric corrections are applied in the photogrammetric process. A simultaneous bundle adjustment of all strips as a block is carried out to estimate their precise exterior orientation. Because the size, position, resolution, and image quality of the strips in these blocks are heterogeneous, the quality and distribution of the tie points also vary. In the absence of ground control points, heights of a global terrain model are used as reference information, and for this task a regular distribution of the tie points is preferable. In addition, their total number should be limited for computational reasons. In this paper, we present an algorithm that optimizes the distribution of tie points under these constraints. A large number of tie points used as input is reduced without affecting the geometric stability of the block, by preserving connections between strips. This stability is achieved by using a regular grid in object space and discarding, for each grid cell, points that are redundant for the block adjustment. The set of tie points filtered by the algorithm shows a more homogeneous distribution and is considerably smaller. Used for the block adjustment, it yields results of equal quality with significantly shorter computation time. In this work, we present experiments with MC-30 half-tile blocks, which confirm our approach for reaching a stable and faster bundle adjustment. The described method is used for the systematic processing of HRSC data. Grants: DLR/50 QM 1601, BMWi/50 QM 160.
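The grid-based thinning idea can be illustrated with a short sketch: tie points are binned into a regular object-space grid, and within each cell a point is kept only if it ties together a strip pair not yet covered there (plus a small per-cell minimum). The cell size, the per-cell minimum, the greedy ordering, and the made-up strip ids are assumptions for illustration, not the exact criteria of the HRSC processing chain.

```python
# Hedged sketch of grid-based tie-point thinning; parameters and the greedy
# keep rule are assumptions, not the paper's exact criteria.
from collections import defaultdict
from itertools import combinations

def thin_tie_points(points, cell_size=2000.0, min_per_cell=2):
    """points: dicts with 'xy' (object-space coordinates, metres) and 'strips'
    (ids of the image strips observing the point). Keeps, per grid cell, points
    that tie together a strip pair not yet covered in that cell, plus a small
    minimum so every cell retains some observations."""
    cells = defaultdict(list)
    for p in points:
        x, y = p["xy"]
        cells[(int(x // cell_size), int(y // cell_size))].append(p)

    kept = []
    for cell_points in cells.values():
        covered, n_kept = set(), 0
        # favour points seen by many strips: they stabilise the block the most
        for p in sorted(cell_points, key=lambda q: -len(q["strips"])):
            pairs = set(combinations(sorted(p["strips"]), 2))
            if pairs - covered or n_kept < min_per_cell:
                kept.append(p)
                covered |= pairs
                n_kept += 1
    return kept

# Toy usage with made-up strip ids: only one of the three points survives,
# because it alone already ties all strip pairs present in its cell.
pts = [
    {"xy": (1200.0, 500.0), "strips": ["h0037", "h1235"]},
    {"xy": (1300.0, 450.0), "strips": ["h0037", "h1235"]},
    {"xy": (1250.0, 480.0), "strips": ["h0037", "h1235", "h2650"]},
]
print(len(thin_tie_points(pts, min_per_cell=1)))   # -> 1
```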
Marco Polo: near Earth object sample return mission
Marco Polo is a joint European-Japanese mission to return a sample from a Near Earth Object. The Marco Polo proposal was submitted to ESA in July 2007 in the framework of the Cosmic Vision 2015-2025 programme, and in October 2007 it passed the first evaluation process. The primary objective of this mission is to visit a primitive NEO, belonging to a class that cannot be related to known meteorite types, to characterize it at multiple scales, and to bring samples back to Earth. Marco Polo will give us the first opportunity for detailed laboratory study of the most primitive materials that formed the planets. This will allow us to improve our knowledge of the processes that governed the origin and early evolution of the Solar System, and possibly of life on Earth.