Thanks to GroupSense for funding this award!
The following two papers received the Best Paper Awards: (1) "Debugging OpenStack Problems using a State Graph Approach" by Yong Xiang, Hu Li, Sen Wang, Charley Peter Chen, and Wei Xu; and (2) "Time Capsule: Tracing Packet Latency across Different Layers in Virtualized Systems" by Kun Suo, Jia Rao, Luwei Cheng, and Francis C.M. Lau. Congratulations!
The registration desk will open at 8 AM on August 4 & 5.
Call this HOTLINE if you need any help during the 2-day workshop: 5626 4411 (voice only).
The Venue page has been updated.
A typhoon (Nida) is approaching Hong Kong, but according to official reports it should be completely clear of the area by Wednesday, and things will be back to normal then. So please rest assured that APSys 2016 will be held as scheduled. Expect a bit of rain, though, during the two days of our workshop.
The Accommodation page has been updated. Please book your hotel ASAP!
Speech: Parallel Programming Needs Data-centric Foundations
Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches, including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas.
In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs.
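The operator formulation described above views an algorithm as an operator repeatedly applied to "active" data elements drawn from a worklist; a runtime such as Galois can then apply the operator to independent active elements in parallel. As a rough illustration (a generic sequential sketch in Python, not the Galois API), here is breadth-first search expressed in this worklist style:

```python
from collections import deque

def bfs_levels(graph, source):
    """Worklist-style BFS: the 'operator' acts on an active node's
    neighborhood, possibly activating new elements.
    graph: dict mapping node -> list of neighbors."""
    level = {source: 0}
    worklist = deque([source])          # initially active elements
    while worklist:
        node = worklist.popleft()       # pick an active element
        for nbr in graph[node]:         # operator: act on its neighborhood
            if nbr not in level:
                level[nbr] = level[node] + 1
                worklist.append(nbr)    # newly activated elements
    return level

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_levels(graph, "a"))  # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```

In the amorphous data-parallelism view, active elements whose neighborhoods do not overlap can be processed simultaneously; the sequential loop here processes them one at a time only for clarity.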
Keshav Pingali is a Professor in the Department of Computer Science at the University of Texas at Austin, and he holds the W.A. "Tex" Moncrief Chair of Computing in the Institute for Computational Engineering and Sciences (ICES) at UT Austin. He was on the faculty of the Department of Computer Science at Cornell University from 1986 to 2006, where he held the India Chair of Computer Science.
Pingali is a Fellow of the IEEE, ACM, and AAAS. He was the co-Editor-in-Chief of the ACM Transactions on Programming Languages and Systems, and he currently serves on the editorial boards of the ACM Transactions on Parallel Computing, the International Journal of Parallel Programming, and Distributed Computing. He has also served on the NSF CISE Advisory Committee.