
Accelerating N-Body Simulations using GPUs: A Case Study

A problem statement for accelerating an n-body vortex simulation using GPUs, OpenMP, and MPI. The slides present an algorithm for calculating the new positions of vortices, finding the maximum magnitude of angular velocity, and assigning new positions and angular velocities to vortices that leave the box. They also discuss memory allocation and handling, future improvements, and references.

What you will learn

  • How is the vortex distribution calculation parallelized using the GPU?
  • How are the new positions of vortices calculated?
  • What are the dimensions of the cube in the simulation?
  • What are the future improvements suggested for this project?
  • What data does each vortex contain?

Typology: Slides

2017/2018



Strömungssimulation auf GPUs (Flow Simulation on GPUs)

Kami Reddy Koti Reddy -- 216100231

Problem statement

The task is to accelerate the computations using the GPU, OpenMP, and MPI options. The part of the code to be parallelized contains two nested loops (see the comments "this loop can be done parallel").
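For the OpenMP option, a minimal sketch of how two nested loops of this shape can be parallelized on the CPU (the loop bounds and the loop body here are illustrative, not taken from the original code):

    #include <omp.h>

    /* NV vortices; the body stands in for the original computation. */
    const int NV = 700;
    #pragma omp parallel for
    for (int ivorton = 0; ivorton < NV; ivorton++) {
        for (int induced = 0; induced < NV; induced++) {
            /* ... accumulate the velocity induced on vortex ivorton ... */
        }
    }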

Problem Description

● We have a cube of dimensions 1 x 1 x 1 units, filled with 700 vortices. Each vortex stores its position and its angular velocity in the x, y, and z directions.
● Applying the discretized form of the governing equations updates the position and angular velocity of each vortex for the first time step.
● If a vortex's new position lies outside the box, it is assigned a new position inside the box and a new angular velocity.
● The maximum magnitudes of the angular velocity, the speed, and the vortex radius are then found.
● The same procedure is repeated for N time steps (see the sketch after this list).
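A minimal host-side sketch of one such time step, assuming out-of-box vortices are re-seeded with uniform random values (update_vortices, out_of_box, reseed_position, reseed_omega, and track_maxima are illustrative names, not from the original code):

    for (int step = 0; step < N; step++) {
        update_vortices(V, O, dt);            /* apply the discretized equations */
        for (int i = 0; i < 700; i++) {
            if (out_of_box(&V[i * 3])) {      /* any coordinate outside the unit cube */
                reseed_position(&V[i * 3]);   /* new position inside the box */
                reseed_omega(&O[i * 3]);      /* new angular velocity */
            }
        }
        track_maxima(V, O, &max_omega, &max_speed, &max_radius);
    }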

Algorithm to the problem (Flowchart)

start → create a 1D array of floats → allocate memory on the device with cudaMalloc() → initialize the array with uniformly distributed random values → copy the initialized array from host to device with cudaMemcpy()
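In host code, the flowchart corresponds to steps along these lines; a minimal sketch, assuming 700 vortices stored as 2100 consecutive floats (x, y, z per vortex):

    #include <stdlib.h>
    #include <cuda_runtime.h>

    const int NV = 700;                     /* number of vortices */
    size_t bytes = 3 * NV * sizeof(float);  /* 2100 floats */

    float *hV = (float *)malloc(bytes);     /* 1D host array */
    for (int i = 0; i < 3 * NV; i++)
        hV[i] = rand() / (float)RAND_MAX;   /* uniformly distributed values */

    float *dV;
    cudaMalloc((void **)&dV, bytes);        /* allocate device memory */
    cudaMemcpy(dV, hV, bytes, cudaMemcpyHostToDevice);  /* copy host -> device */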

The CUDA kernel begins as follows:

    __global__ void NewVortexDistrub(float *V, float *O, float *VN, float *ON, float *S, int N)
    {
        __shared__ float Vc[3];      /* induced velocity, shared by the block */
        __shared__ float dVx[3];
        __shared__ float dVy[3];
        __shared__ float dVz[3];
        __shared__ float domdt[3];   /* rate of change of angular velocity */

        int tx = threadIdx.x;        /* one thread per inducing vortex */
        int bx = blockIdx.x;         /* one block per target vortex */

        float radiika, dssss_dr;
        float ssss;
        float t1, t2, t3;
        /* ... remainder of the kernel not shown in the preview ... */

Parallelization to the problem (Continued...)

[Diagram: memory placement per block. Global memory holds float *V (e.g. the position of vortex 1, read by Block 0); each individual block keeps float Vc[3] in shared memory; radiika lives in per-thread local memory.]

Parallel code (number of blocks: 700; number of threads in each block: 700):

    radiika = powf(V[(bx * 3) + 0] - V[(tx * 3) + 0], 2)
            + powf(V[(bx * 3) + 1] - V[(tx * 3) + 1], 2)
            + powf(V[(bx * 3) + 2] - V[(tx * 3) + 2], 2);
    dssss_dr = expf(-((3.1416f * 2.0f) / (S[tx] * S[tx])))
             * expf((-radiika) * ((3.1416f * 2.0f) / (S[tx] * S[tx])));

Sequential code:

    for (int ivorton = 0; ivorton < 2100; ivorton++) {
        for (int induced = 0; induced < 2100; induced++) {
            vxx = Vortex[ivorton * 3 + 0] - Vortex[induced * 3 + 0];
            vyy = Vortex[ivorton * 3 + 1] - Vortex[induced * 3 + 1];
            vzz = Vortex[ivorton * 3 + 2] - Vortex[induced * 3 + 2];
            radiika = vxx*vxx + vyy*vyy + vzz*vzz;
            /* ... */
        }
    }
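With 700 blocks of 700 threads each, as stated above, the kernel launch would look roughly like this (the device pointer names are illustrative):

    NewVortexDistrub<<<700, 700>>>(dV, dO, dVN, dON, dS, 700);
    cudaDeviceSynchronize();  /* wait for the kernel to finish */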

Parallelization to the problem (Continued...)

[Diagram: the array float *V (size 2100) in global memory is read by all blocks; each block bx compares its vortex against every thread index tx and computes its own radiika value, shown for Block 0 (bx = 0) and Block 1 (bx = 1).]

Parallel code:

    if (tx < 3) {
        VN[(bx * 3) + tx] = V[(bx * 3) + tx] + 0.01f * Vc[tx];
        domdt[tx] = dVx[0] * O[bx * 3 + 0]
                  + dVx[1] * O[bx * 3 + 1]
                  + dVx[2] * O[bx * 3 + 2];
        ON[bx * 3 + tx] = O[bx * 3 + tx] + domdt[tx] * 0.01f;
    }
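The guarded update is an explicit Euler step with a fixed time step of 0.01: the three threads with tx < 3 each handle one coordinate, so per block

    VN = V + 0.01 · Vc        (position)
    ON = O + 0.01 · domdt     (angular velocity)

where Vc is the shared-memory value that, from the update formula, plays the role of the induced velocity of the block's vortex.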

[Diagram: in Block 0 (bx = 0) and Block 1 (bx = 1), threads tx = 0, 1, 2 write the three components of VN and ON from V, O, and the block's shared Vc.]

Tesla M2090

● The Tesla M2090, launched in July 2011, is built on the 40 nm process and based on the GF110 graphics processor, with DirectX 11 support.
● Its compute capability is 2.0.
● It is connected via a PCIe 2.0 x16 interface.

Device Query
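The slide reports the output of a device query; a minimal sketch of reading the same properties through the CUDA runtime API:

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);   /* properties of device 0 */
        printf("Device: %s\n", prop.name);
        printf("Compute capability: %d.%d\n", prop.major, prop.minor);
        printf("Global memory: %zu MB\n", prop.totalGlobalMem >> 20);
        printf("Multiprocessors: %d\n", prop.multiProcessorCount);
        return 0;
    }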

Sequential vs. parallel code: percentage difference of 57%.

Note: the higher time on the GPU can be attributed to the small number of threads and blocks assigned for the data transfer between GPU and CPU. For a higher number of threads the time could have been lower, since the PCIe bus would be better utilized.
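For reference, kernel time in comparisons like this is commonly measured with CUDA events; a minimal sketch (not the measurement code behind the 57% figure):

    cudaEvent_t start, stop;
    float ms;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    NewVortexDistrub<<<700, 700>>>(dV, dO, dVN, dON, dS, 700);
    cudaEventRecord(stop);

    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);  /* elapsed time in milliseconds */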

Future Improvements

● A GPU with compute capability 3.5 or higher would yield better results, as it offers Dynamic Parallelism and Unified Memory programming.
● If the system allows cudaMalloc to allocate a large amount of device memory at once, that would help in handling large data.
● The code can be further improved by using cudaMallocPitch() and cudaMemcpy2D() for two-dimensional arrays (see the sketch after this list).
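A minimal sketch of the suggested pitched allocation and 2D copy (the 700 x 700 size and the host pointer h2d are illustrative):

    float *d2d;
    size_t pitch;  /* row stride in bytes, chosen by the runtime */
    cudaMallocPitch((void **)&d2d, &pitch, 700 * sizeof(float), 700);

    /* copy a 700 x 700 host array whose rows are 700 * sizeof(float) apart */
    cudaMemcpy2D(d2d, pitch, h2d, 700 * sizeof(float),
                 700 * sizeof(float), 700, cudaMemcpyHostToDevice);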