

August 24, 2023 · CFD, ELEMENTS, HELYX

The Benefits Cloud Simulation Can Bring to CFD

On average, a typical employee uses around 36 cloud-based services every day [1]. This shift to using the cloud has led to a boom in the cloud computing market which is projected to increase by over 130%, growing from $446.4 billion in 2022 to over $1.03 trillion by 2026 [2].

This rapid growth is driven by improvements in the accessibility and affordability of cloud hosting, along with the many benefits cloud-based services bring. Users can instantly access computing resources from anywhere, at any time, with a pay-as-you-go philosophy that has revolutionised the world of simulation.

What Is Cloud-Based Computing?

Cloud computing offers IT resources and services (like computing power, databases and storage, networking, applications, etc.) on demand through a cloud service provider.

In cloud computing, virtual machines are compute units, or nodes, hosted in the cloud. Each node resembles a physical computing environment, with several CPU cores that manage data and perform calculations. Crucially, cloud computing is scalable: compute nodes can be linked together on demand to work in parallel on the same project. The workload can then be distributed amongst the nodes, improving overall performance.

Nodes are linked by a network to form clusters which work in parallel.
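
As a rough illustration of how a workload can be split across the cores of a cluster, the minimal mpi4py sketch below divides a loop over mesh cells among MPI ranks (one rank per core, spread across the nodes) and combines the partial results. It is a generic example using an assumed mesh size; it is not the actual domain-decomposition scheme used by HELYX or ELEMENTS.

```python
# Minimal sketch of distributing a cell loop across MPI ranks.
# Generic illustration only, not the decomposition scheme used by
# HELYX or ELEMENTS; the mesh size is an assumed example value.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # index of this process (one core)
size = comm.Get_size()   # total number of processes across all nodes

N_CELLS = 26_500_000     # example mesh size (26.5 million cells)

# Give each rank a contiguous slice of the cells.
chunk = N_CELLS // size
start = rank * chunk
end = N_CELLS if rank == size - 1 else start + chunk

# Placeholder for per-cell work; a real solver would update flow fields here.
local_cells = end - start

# Combine the per-rank counts on rank 0 to confirm the full mesh is covered.
total = comm.reduce(local_cells, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} ranks covered {total} of {N_CELLS} cells")
```

Launched with, for example, `mpirun -np 128 python split_cells.py` across two 64-core nodes, each rank would own roughly 207,000 cells of this example mesh.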

The Advantages Of Cloud Computing

One key difference, and advantage, of cloud computing over on-premise computing is that the cloud offers a flexible and scalable infrastructure where companies only pay for the resources they use. With on-premise computing, on the other hand, companies need to invest in and maintain physical hardware locally.

By spreading the workload across a scalable number of virtual machines (compute nodes), simulations are no longer limited by the number of cores available. ‘Unlike physical hardware, where you are restricted to a specific number of cores, cloud simulation allows you to scale your simulations and run on as many cores as you need,’ explains Cristian Hoerlle, CFD Engineer at ENGYS.

‘Models are evolving, and industry standards are continuing to get higher each year. Customers want more precision and accuracy in their simulations but in shorter runtimes. This requires much more computing resources that would be difficult and extremely expensive to achieve with physical hardware.’

It is not only the computing limitations that make on-premise computing unattractive, but also the expense. In-house clusters and supercomputers are extremely expensive to purchase, and keeping them operating at maximum performance requires further investment. The hardware needs to be continually updated, and the machines themselves need significant cooling, which escalates energy costs. To operate efficiently, all the cores within the cluster need to be utilised continuously, otherwise energy, and money, are wasted.

Licenses Designed For Cloud Simulation

ENGYS software is designed to facilitate running CFD simulations in the cloud. Unlike other proprietary CFD software, where users have to pay for additional cores or can only run one case at a time, HELYX and ELEMENTS give users the flexibility to run as many cases as they want, on as many cores as needed, without any licensing restriction and at a much lower cost.

‘The licenses of proprietary software usually limit the number of cores you can use, so if you need to run on more cores then you have to purchase new licenses,’ highlights Hoerlle. ‘Our software does not have this restriction, so customers can easily scale their simulations on the cloud and use thousands of cores simultaneously if they want to.’ 

Benchmark Testing Of HELYX And ELEMENTS On Azure Cloud

To analyse the performance of HELYX and ELEMENTS on the cloud, both packages were deployed on an Azure HBv3 virtual machine and tested in both single-node and multi-node configurations.

‘Running a simulation on all the cores available on a single node does not necessarily mean that the simulation will solve faster or more efficiently,’ says Hoerlle. ‘There are inherent limitations, such as memory bandwidth and other factors within the hardware, that affect performance. That’s why we wanted to evaluate the scalability of our software on a single node first,’ continues Hoerlle.

‘Whereas in a multi-node architecture, often the data communication and synchronisation between multiple processors can increase overheads and slow simulations down, so again it was important to also test on a multi-node configuration.’

HELYX Version 3.5.0 Results

HELYX is a CFD tool primarily used for engineering analysis and design optimisation. To analyse its parallel scalability on Azure, three different models representing the typical use cases of HELYX customers were used:

  1. A steady-state model of a city landscape with a mesh of 26.5 million cells, using a single-phase turbulent flow solver
  2. A steady-state model of a ventilator fan with moving blades. Two different mesh densities of 3.1 million cells and 11.8 million cells were compared, using a single-phase turbulent flow solver with a Moving Reference Frame (MRF) approach and Arbitrary Mesh Interface (AMI)
  3. A transient model of a ship moving in calm water, with mesh densities of 1.35 million cells and 11.1 million cells, using a two-phase Volume of Fluid (VOF) solver
The city landscape model uses 26.5 million cells

Each case was first run in a single-node configuration on a Standard HBv3-series virtual machine. Four tests were conducted, each with a different number of cores, and the results showed that, for this particular instance, the relative speed increase in solver runtime tailed off after 64 cores per node. Using more cores brought minimal benefit in runtime but a large increase in energy consumption, and therefore cost.

The relative speed increase in solver time showed minimal benefit above 64 cores per node
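
The relative speed increase reported in the chart above is simply the ratio of solver runtimes between a given core count and a reference run. The short Python sketch below shows how speed-up and parallel efficiency can be derived from such measurements; the runtime values in it are placeholders for illustration, not the actual Azure benchmark data.

```python
# Derive relative speed-up and parallel efficiency from solver runtimes.
# The runtimes below are hypothetical placeholders, not the measured
# HELYX results on the Azure HBv3 instance.
runtimes_s = {
    16: 1000.0,   # cores used : solver runtime in seconds
    32: 520.0,
    64: 280.0,
    96: 265.0,
    120: 255.0,
}

base_cores, base_time = min(runtimes_s.items())  # smallest core count as reference

for cores, runtime in sorted(runtimes_s.items()):
    speedup = base_time / runtime                  # relative speed increase vs. reference
    efficiency = speedup / (cores / base_cores)    # 100% would be ideal linear scaling
    print(f"{cores:4d} cores: speed-up {speedup:5.2f}x, efficiency {efficiency:5.1%}")
```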

Once the optimum number of 64 cores per node was established, the multi-node configuration tests were then conducted using nodes with 64 cores. ‘The results highlighted that the scalability is not the same for each of the three models which is expected,’ explains Hoerlle. ‘The runtime of the solver depends on many factors such as the physical models being used, the size of the computational mesh and the infrastructure of the code.’

The results highlight that a minimum core load (number of cells per core) is required to reach optimal scalability across multiple nodes, and that this minimum is strongly influenced by the numerical methods and physical models used in the simulation. If the core load drops below this minimum value, solver performance deteriorates due to excessive data communication across processor boundaries.
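
The core load can be estimated up front by dividing the mesh size by the total number of cores. The sketch below does this for the 26.5-million-cell city landscape mesh on 64-core nodes; the 100,000-cells-per-core threshold used here is only an assumed example value, not the minimum established by the benchmark.

```python
# Estimate the core load (cells per core) for several node counts and flag
# configurations that fall below an assumed minimum. The threshold is an
# illustrative value, not the one determined in the ENGYS benchmark.
MESH_CELLS = 26_500_000        # city landscape model
CORES_PER_NODE = 64            # optimum found in the single-node tests
MIN_CELLS_PER_CORE = 100_000   # assumed example threshold

for nodes in (1, 2, 4, 8, 16):
    cores = nodes * CORES_PER_NODE
    cells_per_core = MESH_CELLS / cores
    status = "ok" if cells_per_core >= MIN_CELLS_PER_CORE else "below minimum core load"
    print(f"{nodes:2d} nodes ({cores:4d} cores): {cells_per_core:10,.0f} cells/core -> {status}")
```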

Scalability results of the city landscape model in a multi-node configuration

Overall, the parallel scalability for the city landscape model was above optimal (i.e. super-linear). Studies conducted with the ventilator fan and ship models, on the other hand, indicated that parallel performance can be influenced by ancillary methods such as MRF/AMI or automatic mesh refinement. This was further demonstrated by comparing two different mesh densities for the same case, where the number of cells per core significantly influenced parallel performance.

ELEMENTS Version 3.5.0 Results

ELEMENTS CFD software has been specifically designed for optimising the airflow around vehicles, so two vehicle models were used to analyse the scalability of ELEMENTS on Azure:

  1. A DrivAer sedan model using a mid-size computational grid of 17 million cells
  2. A Generic Truck Utility model with a large computational grid of 116 million cells
The Generic Truck Utility (GTU) model has a computational mesh of 116 million cells

The DrivAer model was tested in a single-node configuration and, as in the HELYX study, the optimum number of cores was found to be 64, so 64-core nodes were used for the multi-node tests. The parallel scalability of the DrivAer case became suboptimal between 4 and 8 nodes, because the low number of cells per core (17 million cells spread over 512 cores is roughly 33,000 cells per core) reduced solver performance.

The solver scalability was above optimal for the GTU model in multi-node configuration. This highlights the scalability of ELEMENTS, particularly for meshes with a high cell count, which are common in the automotive industry

However, the larger-mesh GTU model showed good solver scalability across the whole range of nodes evaluated, with the number of cells per core never dropping below 100,000. ‘This is encouraging, as models of 150 to 200 million cells running on thousands of cores are quite common in the automotive industry,’ says Hoerlle. ‘These are relatively difficult to run on in-house clusters, so cloud simulation provides a much more flexible and efficient alternative.’

References

[1] J.F., 2023. 25 Amazing Cloud Adoption Statistics [2023]: Cloud Migration, Computing and more [Online]. Zippia.

[2] L.S., R.K., H.J., 2022. The Public Cloud Market Outlook, 2022 To 2026 [Online]. Forrester.
