
Enabling Multicore Multithread FE-DEM Computing within Elfen

Martin Dutko, 20 April 2021

Sockets, Cores, and Threads

We have a puzzle for you: Why is the Rockfield team talking so much about sockets, cores, and threads?  It’s been going on for quite some time – to the point where it has changed the direction of our business!

Sockets? Is Rockfield now providing electrical installation and maintenance services? 

Cores?  Are we producing very special apple cider? (We wish!)

Threads?  Have we entered a niche market in haberdashery?

Of course, you guessed it - we are not involved in any of the above! We are talking about enabling parallel computing using multiple cores and multi-threading within our combined Finite Element - Discrete Element Method (FE-DEM) software, Elfen.

And sockets are in there too, somewhere.

Our History with Parallel Computing

Rockfield has been involved in the development of parallel code for many years. 

Originally, it was the intellectual driving force of our founding father, Professor Roger Owen, and his eager PhD students. It was – and still is – cool research to do. 

We have been involved in the research and development of parallel computing specifically for FE-DEM since the 1980s. Remember “Transputers”?

No? We don’t blame you. The concept was great but ultimately not a commercial success.

For us, though, it was a kick-start both in terms of accumulating knowledge and recognizing many issues that had to be addressed to develop commercial parallel code.

In plain English, we scrapped that code and started again. But the learnings stayed with us.

We recognize the wide range of hardware being used by the design offices that deploy our software.

Our objective is to make our code run on a heterogeneous network, which might span several workstations, each with several multi-core processors. The operating system might be Windows or Linux.

Parallelisation is enabled via a high-performance, widely portable implementation of the Message Passing Interface (MPI) standard, such as MPICH or Open MPI.
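
To give a flavour of what MPI does, here is a minimal sketch of a program in which each process learns its rank, works on its own slice of a problem, and combines results with a global reduction. This is generic MPI usage (shown with the mpi4py bindings for brevity), not Elfen source code, and the element count is purely illustrative.

```python
# Minimal MPI sketch (mpi4py bindings): each process finds its rank and the
# total process count, handles its own slice, then joins a global reduction.
# Illustrative only -- Elfen's internals are written differently.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # which process am I?
size = comm.Get_size()      # how many processes in total?

n_elements = 1_000_000      # hypothetical global element count
chunk = n_elements // size
start = rank * chunk
end = n_elements if rank == size - 1 else start + chunk

local_work = float(end - start)                  # stand-in for per-domain work
total = comm.allreduce(local_work, op=MPI.SUM)   # global sum across all domains

if rank == 0:
    print(f"{size} processes, global result = {total}")
```

Launched with, for example, `mpiexec -n 4 python example.py`, the same script runs as four cooperating processes.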

Splitting a Simulation to Run in Parallel

Splitting a simulation so that it can run in parallel must be done in a clever way to yield significant benefits, and this became a major part of our development.

Using our knowledge of adaptive mesh refinement (something we will talk more about in future blog posts), we recognised the need for simple, reliable, fast, efficient, and – probably most important – easily expandable Domain Decomposition (DD) methods. 

It took a lot of work, but we have eventually delivered each of those attributes.

Our DD is based on the simple idea of splitting the simulation domain along one or more directions. This can be either a basic split in a single direction or a more complex split in up to three directions, using Cartesian or spherical coordinates.

Our meshes are optimised to minimise the error of the solution while maintaining efficiency by increasing or decreasing mesh density (element sizes) at particular locations within the model. The DD recognises this non-uniform spatial mesh distribution and optimises decomposition accordingly, to achieve maximum simulation efficiency.
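
As an illustration of the idea (not Elfen’s actual algorithm), a density-aware split in one direction can be sketched by placing the cuts at the quantiles of the element coordinates, so that each subdomain receives roughly the same number of elements rather than the same geometric length:

```python
# Sketch of a density-aware 1-D domain split: cut positions are chosen so each
# subdomain holds roughly the same number of elements, even when the mesh is
# locally refined. Illustrative only; Elfen's decomposition is more general
# (up to three directions, Cartesian or spherical coordinates).
import numpy as np

def split_1d(element_x, n_domains):
    """Return cut coordinates that balance element counts across domains."""
    quantiles = np.linspace(0.0, 1.0, n_domains + 1)[1:-1]
    return np.quantile(element_x, quantiles)

def assign_domain(element_x, cuts):
    """Map each element centroid to a domain index via the cut positions."""
    return np.searchsorted(cuts, element_x)

# Example: a mesh refined near x = 0 (many small elements there)
x = np.concatenate([np.random.normal(0.0, 0.5, 8000),    # refined region
                    np.random.uniform(-10, 10, 2000)])    # coarse background
cuts = split_1d(x, n_domains=4)
domains = assign_domain(x, cuts)
print("cuts:", cuts, "elements per domain:", np.bincount(domains))
```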

An important challenge we had to solve was how to deal with discrete contact boundaries that cross the boundaries between domains. For example, imagine a long listric fault in an underground formation, passing through many domains. Our DD solution incorporates both non-overlapping and overlapping boundaries, with the latter automatically handling large contact slips. A simple sketch of the overlapping idea follows.
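
The sketch below shows one way an overlapping (halo) layer can be expressed: each subdomain keeps, in addition to the elements it owns, any element lying within a halo distance of its cut planes, so contact pairs that straddle a cut are visible on both sides. This is an illustrative one-dimensional toy, not Elfen’s implementation, and the cut positions and halo width are made up.

```python
# Overlapping (halo) boundary layer sketch: a subdomain also keeps elements
# within a halo distance of its cut planes, so contact across the cut can be
# resolved locally. Illustrative only.
import numpy as np

def overlapping_members(element_x, cuts, domain, halo):
    """Indices of elements owned by `domain`, plus those in its halo region."""
    lo = -np.inf if domain == 0 else cuts[domain - 1] - halo
    hi = np.inf if domain == len(cuts) else cuts[domain] + halo
    return np.where((element_x >= lo) & (element_x < hi))[0]

# Example: four domains cut at x = -2, 0, 2, with a halo width of 0.1
x = np.random.uniform(-5, 5, 1000)
shared = overlapping_members(x, cuts=np.array([-2.0, 0.0, 2.0]), domain=1, halo=0.1)
```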

Positive Results

We are proud to report that, for most applications, we are hitting the best efficiency that is achievable on heterogeneous networks.

Importantly, our approach is eminently scalable. We can take advantage of as many domains and cores as are available.

The solver is typically used as a black box. Apart from the number of cores to be used, no other input information is required. 

However, you – our highly respected users – can also select the DD type, style of communication between domains, size of boundary layers, and several other control parameters, if you wish.
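
To give a feel for the level of control involved, a settings block might look something like the sketch below. The parameter names are hypothetical and chosen only to mirror the options described above; they are not Elfen’s actual input keywords.

```python
# Hypothetical parallel-run settings -- names are illustrative, not Elfen keywords.
parallel_settings = {
    "num_cores": 16,                 # the only value most users ever need to set
    "dd_type": "cartesian",          # or "spherical"; split in up to three directions
    "split_directions": ["x", "y"],  # e.g. a 4 x 4 decomposition
    "boundary": "overlapping",       # overlapping layers handle large contact slips
    "boundary_layer_size": 2,        # width of the overlap, in element layers
    "communication": "asynchronous", # style of message exchange between domains
}
```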

Lately, by the way, we have begun talking a lot about something else. Get ready, GPUs, we are coming for you!

 

Want to chat more about parallelizing your simulations?  Drop us a line at john.cain@rockfieldglobal.com
