Development and Investigation of quasi-Monte Carlo Algorithms for Extreme Parallel Computer Systems

Main Project Activities (Work Packages):

The activities are divided into two parts:

  1. Activities (A2, A3, and A4) related to the execution of the scientific tasks
  2. Activities (A1 and A5) related to project management and dissemination of the scientific results

No | Activity | Leader | Start | End
A1 | Administrative and technical management | Assoc. Prof. Todor Gurov, Ph.D. | M01 | M24
A2 | Scrambled quasi-random sequences and randomized quasi-MC methods for extreme-scale parallel computing systems | Prof. Aneta Karaivanova | M01 | M24
A3 | Randomized quasi-Monte Carlo algorithms for complex simulations with high societal and economic impact | Prof. Aneta Karaivanova; Assoc. Prof. Todor Gurov, Ph.D. | M01 | M24
A4 | Estimation of energy efficiency and scalability of the algorithms | Assoc. Prof. Todor Gurov, Ph.D. | M06 | M24
A5 | Dissemination of the results and training | Assoc. Prof. Sofiya Ivanovska, Ph.D. | M01 | M24
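As an illustration of the randomized quasi-Monte Carlo methods studied in A2 and A3, the sketch below applies a Cranley-Patterson rotation (a random shift modulo 1) to a 2-D Halton sequence to estimate a simple integral. The integrand and sample sizes are illustrative assumptions for this sketch, not part of the project plan.

```python
import random
import statistics

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def shifted_halton_estimate(n, shift):
    """QMC estimate of the integral of x*y over [0,1]^2 using a 2-D Halton
    sequence with a random shift mod 1 (Cranley-Patterson rotation).
    The exact value of the integral is 0.25."""
    total = 0.0
    for i in range(1, n + 1):
        x = (halton(i, 2) + shift[0]) % 1.0
        y = (halton(i, 3) + shift[1]) % 1.0
        total += x * y
    return total / n

random.seed(42)
# Several independent randomizations yield an unbiased estimate together with
# a sample-based error estimate -- something deterministic QMC cannot provide.
estimates = [shifted_halton_estimate(4096, (random.random(), random.random()))
             for _ in range(10)]
mean = statistics.mean(estimates)
err = statistics.stdev(estimates) / len(estimates) ** 0.5
print(f"estimate = {mean:.5f} +/- {err:.1e}")
```

Each randomization is independent of the others, which is what makes this class of methods attractive for large parallel systems: the shifts (or scrambles) can run on separate cores or accelerators with no communication until the final averaging.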

To realize the main purpose of the project and to execute the scientific tasks, we use high-performance computing systems with low-latency communications that include graphics cards (e.g., from NVIDIA) and/or coprocessors (based on Intel's MIC technology) to accelerate the calculations. The following HPC systems are used:

1. Avitohol HPC system at IICT-BAS

The high-performance computing system Avitohol entered the TOP500 list at 389th place in November 2015.


System Overview:
150 HP Cluster Platform SL250s Gen8 servers, each with 2 Intel Xeon E5-2650 v2 CPUs and 2 Intel Xeon Phi 7120P coprocessors
Site IICT-BAS/Avitohol
Manufacturer Hewlett-Packard
Cores 20700
Interconnection FDR InfiniBand
Theoretical Peak Performance 412.3 Tflop/s
RMAX Performance 264.2 TFlop/s
Memory 9600 GB
Operating System Red Hat Enterprise Linux for HPC
Compiler Intel Composer XE 2015
Lustre Storage System 96 TB

2. HPCG cluster at IICT-BAS

The high-performance grid cluster has been in operation since 2010. Its disk storage was expanded in 2014, and in 2014 and 2015 the cluster was further extended with graphics cards and coprocessors to accelerate calculations.

System Overview:
  • HP Cluster Platform Express 7000 enclosures with 36 blades BL 280c (Total 576 CPU cores), 24 GB RAM per blade;
  • 8 controlling nodes HP DL 380 G6 with dual Intel X5560 @ 2.8 GHz, 32 GB RAM (total 128 CPU cores);
  • 3 storage systems with total 132 TB storage;
  • FDR InfiniBand interconnection;
  • 2 HP ProLiant SL390s G7 4U servers with 16 NVIDIA Tesla M2090 graphics cards (total 8192 GPU cores with 10.64 Tflop/s);
  • HP SL270s Gen8 4U server with 8 Intel Xeon Phi 5110P coprocessors (total 480 cores, 1920 threads, with 8.088 Tflop/s).

3. MareNostrum at the Barcelona Supercomputing Center

Hardware features of the computer system MareNostrum at the Barcelona Supercomputing Center.

More information about DFNI-IO2/8:

Assoc. Prof. Todor Gurov, Ph.D

IICT-BAS, Acad. G. Bonchev, 25A

Sofia, Bulgaria

Tel: +359 2979 6639

Mail: gurov(at)bas(dot)bg