High performance computing (HPC) systems are in a constant state of evolution. As businesses look to gain an edge in increasingly competitive markets, HPC experts are working to make it easier for clients to manipulate and analyse large amounts of data.
Evolving Scalability from AI and Analytics
During a virtual panel discussion called The New HPC for Financial Services, Addison Snell, founder and CEO of Intersect360 Research, gave a broad view of the shifts seen in the HPC space: “HPC has introduced new definitions of scalability,” he said. “The traditional HPC definitions of scalability – more capacity, more bandwidth, higher throughput, more FLOPS, more computation – are all still in effect. But as we incorporate more aspects of AI and analytics into enterprise HPC, we can consider all the other dimensions of scalability that come into play.”
As the world of heterogeneous computing evolves, new definitions of scalability are coming into effect. Snell noted that in a survey conducted a year earlier, 81% of organizations were already running AI or preparing to adopt it: “These people said they were working to implement it within the next year. That percentage is only going to go up.”
According to Snell, “finance is going to be one of the vertical markets that does take advantage of exascale levels of scalability faster than other commercial vertical markets”.
Exascale computing refers to systems capable of at least one exaflop – 10^18 floating-point operations per second – able to analyse huge volumes of data at speed while simulating the complex processes and relationships that exist in the real world.
One example is the Durham Intelligent NIC Environment (DINE). DINE is part of the DiRAC memory intensive service at Durham University: a 16-node cluster of Dell PowerEdge C6525 servers equipped with NVIDIA® BlueField® data processing units (DPUs). These smart network interface cards (smartNICs) enable the intelligent processing and routing of messages to improve the performance of massively parallel codes in preparation for future exascale systems.
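The pattern such smartNICs accelerate is the overlap of communication with computation: the NIC progresses messages in hardware while the CPU keeps working. A minimal sketch of that pattern – assuming the mpi4py library and exactly two MPI ranks, with buffer sizes chosen purely for illustration – might look like this:

```python
# Minimal sketch: overlapping computation with non-blocking MPI
# communication. Run with e.g.: mpirun -np 2 python overlap.py
# SmartNIC/DPU offload can progress transfers like these in hardware
# while the CPU keeps computing. Buffer sizes and tags are illustrative.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank  # simple two-rank exchange

send_buf = np.full(1_000_000, rank, dtype=np.float64)
recv_buf = np.empty(1_000_000, dtype=np.float64)

# Post non-blocking send and receive, then compute while data is in flight.
reqs = [comm.Isend(send_buf, dest=peer, tag=0),
        comm.Irecv(recv_buf, source=peer, tag=0)]

local_work = np.sin(send_buf).sum()  # work overlapped with the transfer

MPI.Request.Waitall(reqs)  # wait only once the local work is done
print(f"rank {rank}: local result {local_work:.3f}, received from rank {peer}")
```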
Practical HPC Challenges and Advice
These technological developments can be deployed to the considerable benefit of financial services firms. However, the experts on the virtual panel outlined some of the barriers that remain.
Andrew Paterson, product owner, HPC, at ING, noted that working with HPC and computational finance means running into some fundamental challenges: “These are so ingrained in certain choices made with respect to your ways of working, the value of your data, and your computation platform.”
He added that experts need to consider not only the way they work and what they are doing but also, more importantly in this context, the consequences of the decisions being made.
The choices made regarding a firm’s way of working, the value of its data and its computing platform will profoundly influence its approach to computational finance and HPC. “These choices have a mutually influencing relationship,” Paterson said. “Maintaining balance is crucial to your HPC work being efficient. If you balance your considerations, you are going to get very close to your results with a minimum of expenditure in time, budget or resources.”
And sometimes these choices can be supported by external partners.
Jonathan Gough, lead data scientist at Converge Technology, explained how Converge focuses on using data to change the way people do business: “Our goal when working with our clients is to take their data, bring it together, and make sure it’s ready to be turned into action,” he said. “Our work involves updating their setup, modernizing databases and building that data lake – or data puddle – into something that enables them to apply analytics and take it to scale, adding predictive models on top of that, together with machine learning. This turns that data into actions and can move those businesses into the 21st century.”
The How and Why of HPC
While financial firms each have different factors driving their HPC usage, the common thread across HPC applications is maximising the value of data. Snell noted: “The overarching question people ask themselves is: what is the wealth of data that we have across our organization and how can we do more with that data? This is HPC in a very broad sense of the definition. Companies doing machine learning and training algorithms are using HPC – they might not think of it as HPC, but it is driving the use of NVIDIA GPUs, NVIDIA InfiniBand and high performance server platforms like those offered by Dell.”
HPC exists to help firms get more insights from their data, whether that is through analytics, machine learning or traditional HPC approaches. According to Snell: “Within this decade, we are going through a transition point where these are not separate domains. We’re going to see them continue to be integrated at the level of HPC plus AI, plus financial services.”
As Gough reiterated, AI and machine learning take data and turn it into actionable decisions. This can involve natural language processing and understanding the intent behind that data, whether it’s embedded within the paragraphs of a 150-page document or the tone of voice of a caller using a helpline.
“You can also use predictive analysis, helping you understand what the future may hold or not,” he said.
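To make that concrete, here is a minimal sketch of an intent classifier of the kind Gough describes, assuming scikit-learn is available; the helpline snippets and intent labels are hypothetical stand-ins for real training data:

```python
# Minimal sketch: classifying the intent behind helpline text,
# assuming scikit-learn is installed. The snippets and intent
# labels below are hypothetical stand-ins for real training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I want to dispute a charge on my card",
    "How do I reset my online banking password?",
    "There is a payment on my statement I don't recognise",
    "I'm locked out of my account login",
]
intents = ["dispute", "access", "dispute", "access"]

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, intents)

print(model.predict(["someone charged my card twice"]))  # e.g. ['dispute']
```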
Kevin Levitt, global business development lead at NVIDIA, offered his perspective: “There’s the obvious way to do software, but over time, people find the optimizations that work best with the hardware they can get. In fact, NVIDIA actually has more software engineers than hardware engineers; we are more of a software company, and as much a systems company as we are a GPU company.
“You have to pull all that HPC computing into a clear picture,” Levitt added. “We focus on enabling the expertise of data scientists, coupled with all this data. These are the places where customers are building value.”
Dell is the number one provider of HPC solutions, including servers and storage services, according to Intersect360 data. Anas Bricha, the firm’s North America director for HPC & AI, said that looking at HPC for financial services is about an end-to-end solution: “Within our HPC & AI Innovation Lab, we have the Rattler supercomputer – a system with NVIDIA GPUs and NVLink – on the Top500 supercomputer list. We run customer workloads to make sure we achieve the HPL (High Performance LINPACK) benchmark results we’re looking for, meet performance goals and reduce latency.
“We want to make sure we provide the expertise to elevate the customer as well,” he added. “We are continuing to develop our strategy for now and the future and our aim is to make it simple for our customers and our partners.”
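The HPL rating Bricha mentions comes from timing the solution of a large dense linear system and converting a nominal operation count into a rate. A toy, single-node version of that arithmetic, assuming only NumPy – real HPL distributes an LU factorisation across the whole cluster, and the problem size here is purely illustrative – looks like this:

```python
# Toy sketch of the arithmetic behind an HPL-style rating, using
# only NumPy on one node. Real HPL distributes an LU factorisation
# across the cluster; here we just time a dense solve and convert
# the standard ~(2/3)n^3 flop count into GFLOP/s.
import time
import numpy as np

n = 2000                                 # problem size (illustrative)
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)                # LU-based dense solve
elapsed = time.perf_counter() - t0

flops = (2 / 3) * n**3                   # HPL's nominal operation count
print(f"n={n}: {elapsed:.3f} s, ~{flops / elapsed / 1e9:.1f} GFLOP/s")
print("residual:", np.linalg.norm(A @ x - b))
```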
Where to Next?
Taking stock, Gough observed that most financial institutions prefer to run HPC workloads on premises, but that there is “a growing trend to move to the cloud.”
He said: “The hybrid space is where people are finding the most comfort. This trend isn’t rocketing into outer space yet, because it takes time and effort to make these migrations happen.”
Discussing Dell’s specific plans, Bricha said: “We want to keep advancing our HPC and AI solutions to support very granular, typical use case scenarios. These include fraud detection, banking, insurance, risk management and other similar areas.
“We’re going to keep innovating, providing the best architecture not only for large customers, but for small and medium business owners as well.”
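As one concrete illustration of those use cases, a minimal fraud-detection sketch – assuming scikit-learn, with synthetic transaction features standing in for real card data – might flag outlying transactions with an isolation forest:

```python
# Minimal sketch: flagging anomalous transactions with an isolation
# forest, assuming scikit-learn. The transaction features below are
# synthetic stand-ins for real card data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Features per transaction: [amount, hour of day]
normal = np.column_stack([rng.normal(50, 15, 500),   # typical amounts
                          rng.normal(14, 3, 500)])   # daytime activity
suspicious = np.array([[5000.0, 3.0],                # large, at 3 a.m.
                       [2500.0, 4.0]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for an outlier and 1 for an inlier.
print(model.predict(suspicious))         # expected: [-1 -1]
print(model.predict(normal[:3]))         # mostly 1s
```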
Businesses are only just starting to explore the world of HPC and how they can benefit from it, and Dell and NVIDIA are aiming to further its democratisation. “We want to make HPC accessible and share our findings with the community. We are working to ensure customers are leveraging their investments to their full capacity,” Bricha concluded.
This article was produced in partnership with Dell Technologies and NVIDIA.