GPU Computing and Many Integrated Core Computing

For the next decade, Moore's Law will still deliver higher transistor densities, allowing billions of transistors to be integrated on a single chip. However, it has become clear that exploiting significant amounts of instruction-level parallelism through deeper pipelines and more aggressive wide-issue superscalar techniques, and devoting most of the transistor budget to large on-chip caches, has reached a dead end. In particular, scaling performance through higher clock frequencies is becoming more and more difficult because of heat dissipation problems and excessive energy consumption. The latter is not only a technical problem for mobile systems; it is also becoming a severe problem for computing centers, where high energy consumption translates into a significant share of the operating budget. At present, performance improvements can only be achieved by exploiting parallelism at all system levels. Manycore architectures such as Graphics Processing Units (GPUs) offer a better performance-per-watt ratio than single-core architectures of similar performance. Combining multicore and coprocessor technology promises extreme computing power for highly CPU-time-consuming applications such as image processing.

The Special Session on GPU Computing and Many Integrated Core Computing aims to provide a forum for scientific researchers and engineers working on hot topics related to GPU computing and hybrid computing, with special emphasis on applications, performance analysis, programming models, and mechanisms for mapping codes.

Important Dates:

Paper submission:   30th Oct 2018 (extended)
Acceptance notification:   30th Nov 2018 (extended)
Camera ready due:   19th Dec 2018 (extended)
Conference: 13th - 15th Feb 2019

News:

November 30, 2018: Notification deadline extended from November 27 to November 30, 2018

July 21, 2018: List of accepted special sessions published

July 21, 2018: Call for papers available

Topics:

  • GPU computing, multi-GPU processing, hybrid computing
  • Programming models, programming frameworks, CUDA, OpenCL, communication libraries
  • Mechanisms for mapping codes
  • Task allocation
  • Fault tolerance
  • Performance analysis
  • Many Integrated Core architecture, MIC
  • Intel coprocessor, Xeon Phi
  • Vectorization
  • Applications: image processing, signal processing, linear algebra, numerical simulation, optimisation
  • Domains: computer science, electronics, embedded systems, telecommunications, medical imaging, finance

Programme Co-chairs:

Didier El Baz, LAAS/CNRS, France, <elbaz[AT]laas[DOT]fr>

Programme Committee:

Vincent Boyer, PISIS-FIME-UANL, Mexico

David Defour, DALI-LIRMM, France

Fumihiko Ino, Osaka University, Japan

Volodymyr Kindratenko, University of Illinois at Urbana-Champaign, USA

Bastien Plazolles, LAAS-CNRS, France

Premysl Sucha, Czech Technical University, Czech Republic

Cornelis Vuik, Delft University of Technology, Netherlands