## Direct finite element simulation of turbulent flow for marine based renewable energy

##### Date

2015-12-31

##### Author

Wozniak M.

Paszynski M.

Pardo D.

Dalcin L.

Calo V.M.

##### Abstract

This paper derives theoretical estimates of the computational cost of an isogeometric multi-frontal direct solver executed on parallel distributed-memory machines. We show theoretically that for $C^{p-1}$ global continuity of the isogeometric solution, both the computational cost and the communication cost of a direct solver are of order $\mathcal{O}(\log(N)\,p^2)$ for the one-dimensional (1D) case, $\mathcal{O}(Np^2)$ for the two-dimensional (2D) case, and $\mathcal{O}(N^{4/3}p^2)$ for the three-dimensional (3D) case, where $N$ is the number of degrees of freedom and $p$ is the polynomial order of the B-spline basis functions. The theoretical estimates are verified by numerical experiments performed with three parallel multi-frontal direct solvers: MUMPS, PaStiX, and SuperLU, available through the PetIGA toolkit built on top of PETSc. Numerical results confirm these theoretical estimates in terms of both $p$ and $N$. For a given problem size, the strong efficiency rapidly decreases as the number of processors increases, falling to about $20\%$ on $256$ processors for a 3D example with $128^3$ unknowns and linear B-splines with $C^0$ global continuity, and to $15\%$ for a 3D example with $64^3$ unknowns and quartic B-splines with $C^3$ global continuity. At the same time, one cannot arbitrarily increase the problem size, since the memory required by higher-order continuity spaces is large, quickly consuming all available memory resources even in the parallel distributed-memory version. Numerical results also suggest that the use of distributed parallel machines is highly beneficial when solving for higher-order continuity spaces, although the number of processors that one can efficiently employ is somewhat limited.
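The asymptotic cost estimates quoted in the abstract can be made concrete with a small sketch. The helper below (a hypothetical illustration, not code from the paper) evaluates the leading-order cost term for each spatial dimension, up to an unspecified constant factor, and checks the scaling the estimates predict: in every dimension the cost grows quadratically in the B-spline order $p$, so doubling $p$ quadruples the estimated cost.

```python
import math

def solver_cost(N, p, dim):
    """Leading-order cost estimate for a multi-frontal direct solver
    on a C^{p-1} isogeometric discretization, up to a constant factor.
    N: number of degrees of freedom; p: B-spline polynomial order."""
    if dim == 1:
        return math.log(N) * p**2      # O(log(N) p^2)
    if dim == 2:
        return N * p**2                # O(N p^2)
    if dim == 3:
        return N**(4 / 3) * p**2       # O(N^{4/3} p^2)
    raise ValueError("dim must be 1, 2, or 3")

# Doubling p quadruples the estimated cost in every dimension:
for dim in (1, 2, 3):
    ratio = solver_cost(10**6, 4, dim) / solver_cost(10**6, 2, dim)
    assert abs(ratio - 4.0) < 1e-9
```

Comparing dimensions at fixed $p$ also shows why 3D problems are the hardest for direct solvers: the $N^{4/3}$ factor dominates the linear-in-$N$ cost of the 2D case as $N$ grows.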