2022
DUBAI
6 - 8 SEPTEMBER
MAIN TOPICS - DUBAI 2022
The accuracy and performance of neural networks directly depend on well-known algorithms and data structures, and their advanced counterparts are widely used in machine learning.
This topic is devoted to approaches for accelerating neural network performance, the application of graph algorithms in artificial intelligence, and related problems.
The residue number system (RNS) can be used for efficient multiplication and addition, which opens various avenues for applying RNS in AI and HPC (particularly through matrix multiplication). However, there are challenges in applying RNS to floating-point types: efficient RNS scaling, comparison, and conversion still need to be implemented. The main target is to enable RNS for floating-point matrix multiplication.
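The efficiency claim above rests on a simple property: in RNS, addition and multiplication are carry-free, performed independently on each residue channel. A minimal sketch, with an illustrative set of pairwise-coprime moduli (not any particular hardware configuration):

```python
# Minimal Residue Number System (RNS) sketch: integers are represented by
# their residues modulo pairwise-coprime bases; add/mul act channel-wise.
from math import prod

MODULI = (251, 253, 255, 256)   # pairwise coprime; dynamic range M = prod(MODULI)
M = prod(MODULI)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # channel-wise addition, no carries between channels
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def rns_mul(a, b):
    # channel-wise multiplication, again fully independent per modulus
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    # Chinese Remainder Theorem reconstruction back to an ordinary integer
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # modular inverse (Python 3.8+)
    return x % M

a, b = 1234, 5678
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == a * b
assert from_rns(rns_add(to_rns(a), to_rns(b))) == a + b
```

The channel independence is what makes RNS attractive for the inner products of matrix multiplication; the operations the text calls out as hard (scaling, magnitude comparison, conversion) are exactly the ones that cannot be done channel-wise.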
Exascale supercomputers are already here, but developing Exascale-ready scalable applications remains a major challenge. The research focus is on efficient parallel runtimes and new parallel programming paradigms, as well as performance analysis and characterization of exascale applications.
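One paradigm in this space is task-based parallelism: work is expressed as independent tasks and a runtime schedules them across workers. A toy illustration (not tied to any specific runtime; names are ours), using Python's thread pool where a real exascale runtime would dispatch to processes or distributed nodes:

```python
# Toy task-based parallelism: split work into chunks, map them onto a pool
# of workers, then reduce the partial results.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # one chunk per worker; the pool schedules chunks onto workers
    n = max(1, len(data) // workers)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

assert parallel_sum_of_squares(list(range(1000))) == sum(x * x for x in range(1000))
```

The scalability questions the topic raises start exactly here: how the chunking, scheduling, and reduction behave when "workers" means millions of cores rather than four threads.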
This topic is devoted to the problems of analyzing large graphs. An overview of current approaches to graph analysis and open problems in this field will be presented: the linear-algebraic method, computation on distributed clusters, graph segmentation (partitioning) for sparse graphs of the social-network type, analysis of dynamic graphs, and others.
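The linear-algebraic method mentioned above recasts graph traversal as matrix algebra: one BFS level expansion is a matrix-vector product over the Boolean (OR, AND) semiring. A minimal dense sketch (illustrative only; real systems use sparse distributed formats):

```python
# BFS via the linear-algebraic view: the next frontier y satisfies
# y[j] = OR_i (frontier[i] AND adj[i][j]), i.e. y = A^T x over (OR, AND).

def bfs_levels(adj, source):
    """adj[i][j] == 1 iff there is an edge i -> j; returns BFS level per vertex."""
    n = len(adj)
    frontier = [1 if v == source else 0 for v in range(n)]
    visited = frontier[:]
    level = [None] * n
    level[source] = 0
    d = 0
    while any(frontier):
        d += 1
        nxt = [0] * n
        # Boolean matrix-vector product, masked by the visited set
        for i in range(n):
            if frontier[i]:
                for j in range(n):
                    if adj[i][j] and not visited[j]:
                        nxt[j] = 1
        for j in range(n):
            if nxt[j]:
                visited[j] = 1
                level[j] = d
        frontier = nxt
    return level

# path graph 0 -> 1 -> 2 -> 3
A = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
assert bfs_levels(A, 0) == [0, 1, 2, 3]
```

Expressing traversal this way is what lets graph analytics reuse distributed sparse linear algebra kernels, which connects directly to the distributed-cluster problems the overview covers.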
The IFDS framework (Interprocedural, Finite, Distributive, Subset problems) is a well-known compiler-optimization technique that yields sound, but not precise, static analysis of source code. The evolution of symbolic execution methods applied to static inference of program execution facts can improve the precision of program analysis.
This topic is devoted to an approach to program analysis that is both complete and precise and makes it possible to detect security and stability problems in real-world C and C++ projects.
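The precision gain from symbolic execution comes from tracking path conditions: each program path carries the constraints under which it executes, and a solver discards infeasible paths that a purely dataflow-based analysis would report. A toy illustration (not the analyzer described above) with a brute-force stand-in for the solver:

```python
# Path-condition feasibility, the core idea behind symbolic execution's
# precision. Consider the program:
#     if x > 10:
#         if x < 5:
#             bug()
# A path-insensitive analysis may warn that bug() is reachable; checking the
# path condition (x > 10) AND (x < 5) shows the path is infeasible.

def feasible(constraints, domain=range(-1000, 1001)):
    # brute-force "solver": does any x in the domain satisfy all constraints?
    # (a real analyzer would call an SMT solver here)
    return any(all(c(x) for c in constraints) for x in domain)

bug_path = [lambda x: x > 10, lambda x: x < 5]    # both branches taken
ok_path = [lambda x: x > 10, lambda x: x >= 5]    # second branch not taken

assert not feasible(bug_path)   # the path reaching bug() is infeasible
assert feasible(ok_path)        # e.g. x = 11 satisfies this path
```

Pruning such infeasible paths is what removes the false positives, at the cost of the path explosion and solver time that make scaling to real-world C and C++ projects hard.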
Seemingly simple concepts from linear algebra open the door to a wonderful world of high-performance computing (HPC). What does it take to make simple algorithms perform well on ARM-based architectures?
This topic is devoted to algorithmic challenges in dense and sparse linear solvers and eigensolvers. Examples of the challenges to handle include enabling exascale clusters with millions of cores, low-precision (down to 16-bit) calculations, and the use of neural networks to solve NP-complete problems.
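The low-precision challenge can be made concrete with a dot product, the building block of the solvers above. A minimal sketch emulating IEEE 754 half precision (16-bit) via Python's `struct` module: rounding only the inputs to 16 bits while accumulating in full precision (a common mixed-precision scheme) loses far less accuracy than rounding the accumulator at every step.

```python
# Emulate fp16 storage by round-tripping through struct's 16-bit 'e' format,
# then compare mixed-precision vs pure-fp16 accumulation of a dot product.
import struct

def to_fp16(x):
    return struct.unpack('e', struct.pack('e', x))[0]

def dot_mixed(a, b):
    # fp16 inputs, but a full-precision (fp64) accumulator
    return sum(to_fp16(x) * to_fp16(y) for x, y in zip(a, b))

def dot_fp16(a, b):
    # round products AND the running accumulator to fp16 at every step
    acc = 0.0
    for x, y in zip(a, b):
        acc = to_fp16(acc + to_fp16(to_fp16(x) * to_fp16(y)))
    return acc

a = [0.1] * 1000
b = [1.0] * 1000
# exact result is 100.0; the pure-fp16 accumulator drifts visibly once the
# running sum grows and its 16-bit spacing exceeds the addend's size
assert abs(dot_mixed(a, b) - 100.0) < 0.1
assert abs(dot_fp16(a, b) - 100.0) > abs(dot_mixed(a, b) - 100.0)
```

Managing exactly this trade-off, where to round and where to accumulate wide, is what makes low-precision solvers an algorithmic challenge rather than a drop-in substitution.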
TO BE CONTINUED...