A Computational Vision on Human Emotion
Late 20th century Artificial Intelligence research treated emotion and cognition as antithetical. Recent neurological studies, however, suggest that the two are closely related: emotion plays a critical role in decision making, and neurological deficits in emotion processing have been shown to impair decision making. These findings have sparked renewed interest in modeling emotion in artificially intelligent systems. The Dependable Computing and Networking Lab (DCNL) at ISU, led by Dr. Arun Somani, is researching human emotion modeling using Computer Vision. This study will generate novel ideas for adapting existing emotion-modeling frameworks from the research literature to the needs of the Human and Object Detection project in the DCNL group, and we believe it could also lead to new and innovative models of human emotion. Computational tools such as OpenCV and MATLAB will be used to test and validate the new models and adaptations, and machine learning methods will be used to evaluate their reliability and efficacy.
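As a purely illustrative starting point, and not the DCNL pipeline itself, the sketch below shows the kind of OpenCV-based experimentation the abstract alludes to: detect a face in an image and hand the cropped region to whatever emotion model is under evaluation. The cascade file ships with OpenCV; the emotion classifier is left as a stub, and the input file name is hypothetical.

# Hedged sketch: face detection as the front end of an emotion-recognition
# experiment. The emotion model itself is a placeholder stub.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def detect_faces(image_bgr):
    """Return bounding boxes (x, y, w, h) for faces in a BGR image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def classify_emotion(face_crop):
    """Stub for the emotion model being validated."""
    return "neutral"  # placeholder; a trained model would go here

if __name__ == "__main__":
    frame = cv2.imread("sample.jpg")  # hypothetical input image
    if frame is not None:
        for (x, y, w, h) in detect_faces(frame):
            print(f"face at ({x}, {y}) -> {classify_emotion(frame[y:y + h, x:x + w])}")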
Exascalable Communication for Modern Supercomputing
Supercomputing applications rely on strong scaling to achieve faster results on a larger number of processing units. At the strong-scaling limit, however, where communication is a relatively large portion of an application's runtime, today's state-of-the-art hybrid MPI+threads applications perform slower than their traditional MPI-everywhere counterparts. This slowdown stems primarily from the supercomputing community's outdated view of the network as a single device. The NICs of modern interconnects feature multiple network hardware contexts, yet these parallel interfaces into the network go unused in MPI+threads applications today because MPI libraries still use conservative approaches to maintain MPI's ordering constraints. The libraries do so because domain scientists rarely expose logically parallel communication in their multithreaded MPI code, even though the existing MPI standard gives them the opportunity to do so. Only when domain scientists and MPI developers take a step forward together can we eliminate the communication bottleneck in MPI+threads applications.

This dissertation eliminates the communication bottleneck by bridging the two ends of the HPC stack, MPI library developers and domain experts, who typically do not talk to each other directly. Through collaborations with system researchers and MPI library developers, we develop a fast MPI+threads library that achieves communication throughput that scales like MPI everywhere, making high-speed multithreaded communication a reality. Through collaborations with domain scientists, we use various designs to expose logically parallel communication to this fast MPI+threads library in exemplar applications targeted at the upcoming exascale systems. Our conversations with the end users, the domain experts, educate us on the usability of the various designs. Hence, in addition to comparing the performance of the designs, we discuss their strengths and limitations and provide a design recommendation for the supercomputing community. Through such collaborations on both ends of the HPC stack, we unlock the true potential of the MPI+threads programming model: prominent modern applications and computational frameworks, such as Uintah, WOMBAT, and Legion, now perform significantly faster (up to 2x) at the strong-scaling limit.
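As a minimal illustration, not taken from the dissertation, of what exposing logically parallel communication can look like at the application level, the sketch below gives each thread its own duplicated communicator so that the independent message streams are visible to the MPI library. It assumes mpi4py on top of an MPI build with MPI_THREAD_MULTIPLE support; whether the library actually drives the duplicated communicators through separate network hardware contexts is up to the implementation.

# Hedged sketch: per-thread duplicated communicators in an MPI+threads program.
# Distinct communicators carry no mutual ordering constraints, which is the
# information a fast MPI+threads library needs to use the NIC's parallel
# hardware contexts. Message size and thread count are arbitrary choices.
import threading
import numpy as np
from mpi4py import MPI  # mpi4py requests MPI_THREAD_MULTIPLE by default

NUM_THREADS = 4
MSG_SIZE = 1 << 10

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()
peer = (rank + 1) % size  # simple ring exchange between ranks

# One duplicated communicator per thread, created before the threads start.
comms = [world.Dup() for _ in range(NUM_THREADS)]

def exchange(tid):
    comm = comms[tid]
    sendbuf = np.full(MSG_SIZE, rank % 256, dtype=np.uint8)
    recvbuf = np.empty(MSG_SIZE, dtype=np.uint8)
    reqs = [comm.Isend(sendbuf, dest=peer, tag=tid),
            comm.Irecv(recvbuf, source=peer, tag=tid)]
    MPI.Request.Waitall(reqs)

threads = [threading.Thread(target=exchange, args=(t,)) for t in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
for c in comms:
    c.Free()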
Adaptive Parallelism in Browsers
Mozilla Research is developing Servo, a parallel web browser engine, to exploit the benefits of parallelism and concurrency in the web rendering pipeline. Parallelization results in improved performance for pinterest.com but not for google.com. This is because the workload of a browser is dependent on the web page it is rendering. In many cases, the overhead of creating, deleting, and coordinating parallel work outweighs any of its benefits. In this work, I model the relationship between web page primitives and a web browser's parallel performance and energy usage using both regression and classification learning algorithms. I propose a feature space that is representative of the parallelism available in a web page and characterize it using seven key features. After training the models to minimize custom-defined loss functions, such a model can be used to predict the degree of parallelism available in a web page and decide the optimal thread configuration to use to render that page. Such modeling is critical for improving the browser's performance and minimizing its energy usage. As a case study, I evaluate the models on Servo's styling stage. Experiments on a quad-core Intel Ivy Bridge (i7-3615QM) laptop show that we can improve performance and energy usage by up to 94.52% and 46.32%, respectively, on the 535 web pages considered in this study. Looking forward, we identify opportunities to tackle this problem with an online-learning approach to realize a practical and portable adaptive parallel browser on various performance- and energy-critical devices.
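The sketch below, with synthetic data and placeholder features standing in for the seven page-level features and 535 measured pages, illustrates the classification variant of this approach using scikit-learn; the thesis trains against custom-defined loss functions, which the plain accuracy score here does not reproduce.

# Hedged sketch: learn a mapping from per-page features to the thread
# configuration that renders the page fastest. Features and labels are
# synthesized here; in practice the labels come from measuring each page
# under every candidate thread configuration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pages = 535

# Placeholder stand-ins for page-level parallelism features
# (e.g., DOM size, tree depth, number of style rules, ...).
X = rng.random((n_pages, 7))

# Label: the thread count (1, 2, or 4) that minimizes styling time for a page.
y = rng.choice([1, 2, 4], size=n_pages)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# At render time, the predicted label picks the thread configuration
# before the parallel styling stage runs.
print("held-out accuracy:", model.score(X_test, y_test))
print("predicted threads for one page:", model.predict(X_test[:1])[0])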
Scalable Communication Endpoints for MPI+Threads Applications
Hybrid MPI+threads programming is gaining prominence as an alternative to the traditional "MPI everywhere" model to better handle the disproportionate increase in the number of cores compared with other on-node resources. Current implementations of these two models represent the two extreme cases of communication resource sharing in modern MPI implementations. In the MPI-everywhere model, each MPI process has a dedicated set of communication resources (also known as endpoints), which is ideal for performance but wasteful of resources. With MPI+threads, current MPI implementations share a single communication endpoint among all threads, which is ideal for resource usage but hurts performance.

In this paper, we explore the tradeoff space between performance and communication resource usage in MPI+threads environments. We first demonstrate the two extreme cases, one where all threads share a single communication endpoint and another where each thread gets its own dedicated communication endpoint (similar to the MPI-everywhere model), and showcase the inefficiencies of both. Next, we perform a thorough analysis of the different levels of resource sharing in the context of Mellanox InfiniBand. Using the lessons learned from this analysis, we design an improved resource-sharing model to produce scalable communication endpoints that achieve the same performance as dedicated communication resources per thread while using just a third of the resources.
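As an application-level analogue of the intermediate sharing levels explored here (the paper's actual mechanism lives inside the MPI library, multiplexing threads over the NIC's hardware contexts), the sketch below shares a small pool of duplicated communicators among groups of threads instead of dedicating one per thread; the pool size of roughly a third of the thread count is an assumption that only loosely mirrors the paper's result.

# Hedged sketch: threads grouped onto a pool of communicators, a middle point
# between one shared endpoint (maximal contention) and one endpoint per thread
# (maximal resource usage). Requires an MPI build with MPI_THREAD_MULTIPLE.
import threading
import numpy as np
from mpi4py import MPI

NUM_THREADS = 12
POOL_SIZE = max(1, NUM_THREADS // 3)  # roughly a third of the per-thread count

world = MPI.COMM_WORLD
peer = (world.Get_rank() + 1) % world.Get_size()
pool = [world.Dup() for _ in range(POOL_SIZE)]

def exchange(tid):
    comm = pool[tid % POOL_SIZE]  # threads in the same group share a communicator
    out = np.full(1024, tid % 256, dtype=np.uint8)
    inp = np.empty(1024, dtype=np.uint8)
    reqs = [comm.Isend(out, dest=peer, tag=tid),
            comm.Irecv(inp, source=peer, tag=tid)]
    MPI.Request.Waitall(reqs)

threads = [threading.Thread(target=exchange, args=(t,)) for t in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
for c in pool:
    c.Free()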