Technology innovation thriving but ‘software intelligence’ necessary

Technology innovation continues to develop at an astonishing pace. And for those hedge fund managers in active fundraising mode, the ability to demonstrate superior reporting, data management and performance attribution capabilities, supported by the latest technologies, is becoming a serious point of differentiation.

As a recent SEI article [1] suggested, not a day goes by when the impact of platforms like Amazon and Facebook isn’t felt by those in the asset management industry. The fact is, investor habits are rapidly changing, and the way people choose to invest is beginning to mirror the way they use technology to support every other aspect of their lives, from shopping online to receiving personally curated algorithmic recommendations on Netflix.

This age of digitisation is only at the cusp of what might be possible. As such, asset managers – both traditional and alternative – cannot afford to sit back and assume that the old-fashioned way of servicing investors will suffice.

Managers are not blind to the digitisation megatrend. They may well have a digital transformation strategy in place, but often there are limiting factors that prevent them from moving forward. Sometimes it is a lack of understanding of the technological options available; sometimes it is simply a lack of budget or of senior management buy-in.

“It’s a cultural shift as opposed to just swapping out technology,” says Ross Ellis, Vice President and Managing Director of the Knowledge Partnership in the Investment Manager Services division at SEI. “It’s understanding how you’ve done things in the past and having a strategic commitment to changing from manual workflows to digital ones. The asset management industry has always been a relationship-based, people-oriented industry, yet technology is moving things in the opposite direction. It’s about process, transparency and trusting the system; the irony is, it now frees up resources for managers to perform more of the higher-value, relationship-based work for clients.”

Integration means customisation

To achieve this freeing up of resources, managers need to partner with technologists who have deep integration built into their platform offering, whether that is a portfolio analytics provider, a risk analytics provider, a cybersecurity vendor, an EMS vendor or whatever the capability may be. The fact is, connecting things together only works if the plumbing is in place.

Axioma Risk, for example, enables hedge funds to look at risk from a front-office perspective when doing portfolio construction, and from a middle-office perspective to do risk reporting for investors and regulators, giving CIOs or CTOs an enterprise view of multi-asset class risk across their organisation. Most vendors offer what they think the view of risk should be, and sell that particular methodology to the marketplace. Axioma, on the other hand, provides the platform for people to look at their strategies in very specific ways.

“There are other firms that offer front-office tools, or middle-office tools, but they aren’t necessarily connected at the back-end,” says Jason Connelly, Managing Director, Business Strategy & Execution, Axioma. “Files end up getting sent back and forth and you lose a lot in that process. That’s where we are able to help clients enhance their portfolio construction. Our solutions are best-of-breed, and we are making them even easier to use.”

Without an agile system infrastructure, asset managers face continued operational pressures when trading new assets and instruments. And with so much regulatory compliance to contend with, the more firms are able to achieve a complete overview of instrument positions, risk levels, collateral and margining levels, etc, the more adept they can become at satisfying both regulator and investor expectations.

“I do think enablement is the key to this,” says Roger Woolman, Business Development Director, Asset Management & Alternatives at SS&C Advent. “The ability to react quickly when things change and knowing that you have a system that can accommodate new strategies, can be quite empowering from an investment perspective. If different market opportunities are being presented, you want to be able to act on them, to be agile.

“This quicker time to market can make an asset manager more competitive and become proactive rather than reactive.”

Multi-asset class risk is not itself a differentiator. “It’s the integration piece that differentiates us from our peers,” says Connelly. “How we bring front and middle office together, risk and return, how we seamlessly plug into our clients’ ecosystems using APIs and so on. We make sure clients are partners; this is not a client/vendor relationship. It’s far more collaborative.”

Axioma Risk helps risk managers gain a complete picture of their risk-return profile and translate it into actionable insight. It is, in many ways, a natural evolution of where the firm started: flexible front-office models, then middle-office models, and then connecting the two. “We don’t want to be viewed only as a risk and return analytics vendor; we do more than that and provide a platform for clients to integrate the entire workflow.

“We are building out our fixed income analytics toolset to complement our equities portfolio management tools. There are a number of things we are doing within fixed income right now,” adds Connelly.

Smart automation

Mike Canni is the COO at Opus Fund Services. One of the most important current technology developments, in his view, is smart automation, which involves combining a deep understanding of workflows with robust, granular data to allow highly efficient and accurate decisions to be made.

“New millennial managers have grown up in a technology-driven world which offers many core services for no charge. To achieve this, industries put a premium on the elimination of errors, more efficient and higher-quality production, and a better user experience. Continued innovation in this area will only be possible through the application of smart automation.

“We have identified numerous benefits that make smart automation an important focus for our business. It improves quality, accuracy and timeliness, reduces costs, and mitigates risks in the current cyber-centric business climate,” explains Canni.

Opus is applying smart automation to the investor onboarding process, something that has traditionally been both time-consuming and risky. An error may result in statements being sent to the wrong email address, or redemptions/distributions being paid out to the wrong account. “By creating highly automated workflows,” says Canni, “we attempt to mitigate these risks, so that no single point of failure exists. As an example, investor access to our portal is 100 per cent automated. There is no manual user setup or permissioning. Instead, a series of progressive algorithms automatically enable an investor’s account once the appropriate levels of review have taken place.”
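
To make the idea of “progressive algorithms” concrete, here is a minimal sketch in Python of a review-gated enablement workflow. The review stage names, fields and single gating rule are assumptions for illustration; Opus’s actual workflow and algorithms are proprietary and not described here.

```python
from dataclasses import dataclass, field

# Hypothetical review stages an investor record must clear before portal
# access switches on. Opus's real workflow and algorithms are proprietary;
# this only illustrates the "progressive gates" idea with no manual toggle.
REQUIRED_REVIEWS = ("aml_kyc", "subscription_docs", "compliance_signoff")

@dataclass
class InvestorRecord:
    investor_id: str
    completed_reviews: set = field(default_factory=set)
    portal_enabled: bool = False

def record_review(record: InvestorRecord, review: str) -> None:
    """Mark one review stage complete, then re-derive portal access."""
    if review not in REQUIRED_REVIEWS:
        raise ValueError(f"unknown review stage: {review}")
    record.completed_reviews.add(review)
    # Access is derived purely from review state; nobody sets it by hand.
    record.portal_enabled = all(s in record.completed_reviews for s in REQUIRED_REVIEWS)

investor = InvestorRecord("INV-001")
for stage in REQUIRED_REVIEWS:
    record_review(investor, stage)
print(investor.portal_enabled)  # True only once every gate has passed
```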

A few of the main technology features associated with smart automation include:

• Independent server-side logic checking and validating application-side logic, using proprietary algorithms (see the sketch after this list).

• Workflow optimisation, ensuring that the right people are doing the right things, at the right time.

• No single points of failure, substantially reducing errors associated with work being completed manually.
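
As a rough illustration of the first point above, the following sketch independently re-validates figures that a client application has already computed, so a front-end bug cannot slip a bad redemption request through. The payload fields, holdings structure and tolerance are assumptions for the example, not Opus’s actual checks.

```python
# Independently recompute on the server a figure the client application has
# already calculated, so a front-end bug cannot slip through unnoticed.
# The payload fields, holdings structure and tolerance are assumptions.

def validate_redemption(payload: dict, holdings: dict) -> list:
    errors = []
    units = payload.get("units_redeemed", 0)
    client_value = payload.get("redemption_value", 0.0)

    # Server-side recomputation of the value the client submitted.
    server_value = round(units * holdings["nav_per_unit"], 2)
    if abs(server_value - client_value) > 0.01:
        errors.append(f"value mismatch: client {client_value}, server {server_value}")
    if units > holdings["units_held"]:
        errors.append("redemption exceeds units held")
    return errors  # an empty list means the request passes both checks

print(validate_redemption(
    {"units_redeemed": 500, "redemption_value": 51_625.00},
    {"nav_per_unit": 103.25, "units_held": 2_000},
))  # [] -- the client's figure matches the server's own calculation
```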

From an R&D perspective, Canni confirms that Opus will shortly be releasing 100 per cent of the NAV delivery process to its clients, enabling fund managers to see exactly what stage their NAV preparation has reached. “We’re also working on a proprietary machine learning-based application to produce 95 per cent of the NAV automatically. This would drastically reduce the time and manpower required to produce NAVs for our clients,” adds Canni.
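
NAV itself is a straightforward calculation; the hard part is deciding which valuations can be released automatically and which need a human to look at them. The toy example below, which is not Opus’s methodology and uses an assumed 2 per cent tolerance, computes NAV per share and routes anything that moves too far from the prior day to manual review, mirroring the idea of automating most of the process and reviewing the exceptions.

```python
def nav_per_share(assets: float, liabilities: float, shares: float) -> float:
    """Basic NAV calculation: net assets divided by shares outstanding."""
    return (assets - liabilities) / shares

def release_or_review(today: float, prior: float, tolerance: float = 0.02) -> str:
    """Release the NAV automatically unless it has moved more than the
    assumed tolerance versus the prior day, in which case a person reviews it.
    Illustrative only; not Opus's model or thresholds."""
    move = abs(today - prior) / prior
    return "auto-release" if move <= tolerance else "manual-review"

nav = nav_per_share(assets=105_400_000, liabilities=2_150_000, shares=1_000_000)
print(nav)                                   # 103.25
print(release_or_review(nav, prior=102.80))  # within 2 per cent: auto-release
```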

Virtual Desktop Infrastructure

Technology advances are also reinforcing hedge funds’ ability to work remotely, as today’s workforce becomes increasingly mobile. Abacus Group, a leading technology service provider to the alternatives industry, has long been at the forefront of this transformation. Ten years ago, it was offering a private cloud-based model to reduce the reliance on data or applications having to be accessed from one location. With its new Virtual Desktop Infrastructure (VDI), it can provide a truly cutting-edge solution for today’s mobile hedge fund manager, who can access their desktop from any place, at any time, securely.

This is proving beneficial from an employee onboarding perspective. Flexible technology, such as that offered with VDI, moves in rhythm with the changing business needs and expansion/contraction cycles of fund managers.

“The old-fashioned model would have been that if a manager wanted to integrate, say, 20 people into a single network with a unified experience, they would have to spend USD500 per person per month to get a router/firewall, buy everyone a PC, have someone configure it, manage it on the network and so on,” comments Paul Ponzeka, Managing Director, Engineering at Abacus.

“The best case scenario is it would take two weeks to get everything set up, irrespective of the end experience. With VDI, each person is a single line item. We can set people up in minutes as opposed to weeks, and they get the same user experience. This leads to a significant cost saving from a CAPEX perspective because fund managers no longer have to spend money on routers, PCs and other hardware, or worry about the time to market for each individual.

“VDI therefore dramatically streamlines the onboarding process, as well as the offboarding process – you simply terminate an employee’s access to Abacus VDI when they leave the firm, meaning there’s no concern over data leakage.”

The shockwave effect of what the public cloud can do has, over the last couple of years, led to a massive spike in innovation, new ways of thinking and new possibilities. The cadence that the public cloud has set, most notably with Amazon Web Services and Microsoft Azure, has forced the rest of the industry to keep pace.

“I think we’re seeing that now in the technology arena,” says Ponzeka. “The public cloud has started an arms race in terms of technology features, release cycles and innovation, and that gets me excited from a technology development standpoint.”

Christopher Reeve is Director of Investment Solutions at Aspect Capital, one of Europe’s leading CTAs. When discussing whether the cloud has helped improve the productivity of the investment team, Reeve comments:

“I think it has. Our quantitative research team, for example, just wants to know that they can access computing power when required. In that sense, the cloud has been quite seamless and improved their productivity as a result.

“The other benefit is we don’t see running racks of hardware and maintaining them as our main selling point. That’s not what we are selling to investors. It makes total sense for us to outsource it and have it available when we need it. We used to have our own on-site server room, but that stopped a few years ago when we moved to off-site co-location centres. Now we are making the next logical step to reduce overheads by outsourcing some of our computing power to the cloud,” explains Reeve.

Bill Neuman is Senior Managing Director of Product and Engineering at Eze Soft. He says that using AWS, or a world-class data centre with multiple redundant power connections at each corner of the building, was unheard of just a few years ago.

“You can put high-quality product in that environment and create a whole cohesive system. There are so many open-source libraries available today, AI experimentation for making financial calculations easier and more accessible; it’s increasingly becoming a technologist’s world and part of our vision is to make those advancements work for our customers,” asserts Neuman. 

Regardless of the cloud infrastructure or the supposed quality of software tools and analytics, the bottom line for any hedge fund portfolio manager or trader is resilience and reliability: how well does the technology stack perform when markets get stressed? Does the technology vendor have the capability to maintain first-rate service levels within the infrastructure to support trading and investment activity, no matter how frothy markets become? That is the litmus test.

“Even if your trading needs are relatively small today or are relatively low-volume, you still need to know that your IT partner and the technology behind them is able to handle periods of high volatility in the market,” says Neuman. “It’s not just a volume capability, but also a reliability element. I’ve heard stories of some IT vendors over the last couple of years suffering network outages and systems shutting down during stressed markets. 

“We regularly test our technology as part of our disaster recovery plan to ensure we have a highly reliable system. It’s something that asset managers should be thinking about, as it’s a way to demonstrate to investors that they’ve done their own due diligence and take this reliability issue seriously.”

To best deploy new solutions, and avoid innovating for innovation’s sake, Abacus places great emphasis on partnering with its clients as much as possible.

“We have a dedicated account management team that meets regularly with our clients to go through not only what’s working and what’s not working on the platform, but to let them know what’s coming down the line and what new areas of technology we are looking at.

“A lot of that forward-looking work is driven by those meetings, with clients telling us what they need and us developing solutions accordingly,” explains Ponzeka.

System glitches and why ‘software intelligence’ is important

Reliability is something that financial institutions, especially UK retail banks, have struggled with. TSB and Tesco Bank have suffered major system failures, and just last week the London Stock Exchange failed to open for trading for one hour, causing havoc and frustration among traders. The outage was allegedly caused by human error during a software patch exercise on the platform. However, some consider it part of a wider, underlying systemic problem, caused by financial institutions adding layers of new technology to legacy IT systems to try to keep pace with trading innovation.

These IT glitches are often caused by poorly built software or software upgrades, which are insufficiently tested or checked before going live.

Lev Lesokhin is VP of Strategy at CAST, a pioneer in software intelligence whose technology financial organisations and global exchanges use to produce reliable and resilient software. Lesokhin argues it is time for the financial services sector to pause and rethink whether its heritage risk prevention strategies are capable of handling today’s sheer level of software architectural complexity.

“LSE has kept itself out of any negative light for quite some time but exchanges have gotten a lot more complex in recent years as they look to support new order types. There is a systemic nature to these glitches,” says Lesokhin.

The way the IT community has gotten around the fundamental lack of software robustness is to put in place specific protocols to follow when deploying systems. When Knight Capital lost USD440 million in 2012, a loss that ultimately forced its sale, the cause was a simple software glitch. An automated, high-speed algorithmic router used to send orders into the market “was intended to replace old, unused code referred to as ‘Power Peg’ – functionality that Knight hadn’t used in eight years,” according to Doug Seven [2].

This Power Peg is what is known as ‘dead code’: a piece of software that is not supposed to be accessed by the system.
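
A stripped-down illustration of how dead code bites, loosely modelled on published accounts of the Knight incident [2]: an obsolete routing path is left in the codebase, and the reuse of an old configuration flag silently sends live orders through it. This is a generic sketch, not Knight’s actual code or architecture.

```python
# A retired routing path left in the codebase ("dead code"), plus a reused
# configuration flag: a generic sketch of the failure mode described above,
# not Knight's actual code or architecture.

def power_peg_route(order):
    # Obsolete behaviour that was never removed and is meant to be unreachable.
    order = dict(order)
    order["qty"] *= 100
    return order

def smart_route(order):
    return dict(order)

def route_order(order, flags):
    # The old flag name has been reused for a new feature; any server still
    # carrying the dead path now executes it against live orders.
    if flags.get("power_peg"):
        return power_peg_route(order)
    return smart_route(order)

print(route_order({"symbol": "XYZ", "qty": 10}, flags={"power_peg": True}))
# {'symbol': 'XYZ', 'qty': 1000} -- the "dead" path just fired on a live order
```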

“On average, 20 per cent of the code in big legacy systems is dead code. The danger with that code is that it ends up in the wrong library and the application starts executing that dead code.

“There’s a fundamental lack of ‘software intelligence’ within financial services institutions, of knowing how these systems are structured from an end-to-end standpoint,” suggests Lesokhin. He says the UK’s financial industry has been plagued by these glitches.

“At LSE, it looks like it was more of a data corruption problem but I think it is part of a wider, systemic issue. I think the exchanges are less affected by this than high street banks, who have to adapt the functionality they bring to customers with the rise of challenger banks (which are more technologically agile and have no legacy IT issues). Many of them still have core legacy systems that they haven’t changed in 20 years and it is very hard to replace those systems.”

What ends up happening is that financial institutions layer-cake their IT, and that is where the complexity comes in. It is not that the legacy systems cannot keep pace; rather, the business environment banks face today requires them to make changes and put these extra layers of technology in place. As Lesokhin explains, “the traditional methods of controlling the robustness of those layers are not working anymore.

“You have to test and test, but the testing community can’t keep up. There are more paths through a bank’s IT system than there are stars in the known universe. The likes of Google and Microsoft do a lot of internal, system-level analysis to engineer robustness into their software. That’s something the banks haven’t figured out a way to do yet.”
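
The “more paths than stars” claim is easy to sanity-check with back-of-the-envelope arithmetic: even a modest number of independent branch points explodes combinatorially. The figures below are illustrative assumptions, not a measurement of any bank’s estate.

```python
# Back-of-the-envelope comparison: possible execution paths through a system
# with a modest number of independent binary branch points versus a rough
# upper-end estimate of stars in the observable universe. Both numbers are
# illustrative assumptions, not measurements of any bank's estate.
branch_points = 100
paths = 2 ** branch_points        # ~1.27e30 distinct paths
stars_estimate = 10 ** 24         # commonly cited order-of-magnitude estimate

print(f"{paths:.2e} paths vs {stars_estimate:.2e} stars")
print(paths > stars_estimate)     # True, by about six orders of magnitude
```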

Although most hedge fund managers might think: ‘Well, I don’t have these legacy IT problems so why do I care?’, there’s a wider issue at play. If their trading counterparts – be they investment banks or trading firms like the former Knight Capital – or exchange partners cannot be trusted, because of the software intelligence issue Lesokhin refers to, future problems will always be a potential threat.

To mitigate that threat, and avoid unnecessary glitches, the way software is managed needs to change. One important step is to introduce engineering discipline into the technology development process.

“Traditional IT management has been to outsource to a favourite vendor but if you have an engineering mindset you think more carefully about what these systems look like, how robust they are, etc. In the end it becomes a technology management-led issue.

“At Apple, the number of software bugs has increased since the passing of Steve Jobs. Jobs was completely focused on quality control. He would reject products constantly if they didn’t work properly. Everything just worked at Apple, there were rarely glitches.

“Fundamentally, you need to introduce an engineering approach and establish software intelligence from the top down, with management driving the robustness of your systems,” advises Lesokhin.

Over at Aspect Capital, Reeve says the firm looks at technology from both an enhancement and a resilience perspective; resilience with respect to things like outages on exchange or broker systems, as well as within its own internal systems.

“We don’t want to have any interruptions on the markets we trade. We do a lot of work on this. I sit on our operational risk committee where we try to anticipate potential operational risks as well as oversee testing on primary and back-up systems.

“We run all of our production systems out of two data centres located outside London, which replicate each other, but we are using cloud computing for activities such as strategy research. If we need to run something that is computationally intensive it makes a lot more sense to lease that computing power (on the cloud) as and when we need it.”

Alpha generation via alternative data

One could argue that a similarly disciplined approach needs to be taken with another fast-emerging technology trend: the use of alternative, unstructured data sets. Much is being made of the potential for these data sets, but hedge funds should think carefully about how to ingest them, which third-party data feeds to consume, and whether they can genuinely generate any meaningful trading signals.

Taking a slapdash attitude to using alternative data sets, just because it is ‘on trend’, is not going to benefit anyone over the long-term.

Still, it is an interesting development, made possible only by the sheer increase in computer processing power. As outlined by Deloitte in a recent white paper entitled “Collective intelligence investing: Alpha generation via alternative data brings new risks”, investment managers should keep the following points in mind when adopting alternative data:

• Build a well-rounded talent team. A combination of data scientists, engineers, economists, consumer experts, and finance professionals could help create a competitive edge from alternative data. Consider hiring multi-skilled professionals with both data science and security analysis expertise;

• Have an integrated insights team. An integrated team of data scientists, engineers, behavioural economists, and financial analysts collaborating with each other would be well positioned to derive new insights;

• Establish a fluid data architecture. The technology, storage, and computing requirements for alternative data are vastly different from those of traditional data sets. Having a system in place to handle multiple data feeds via API, along with scalable processing power, could be prerequisites for successfully managing alternative datasets (see the sketch after this list).
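
On the last point, here is a minimal sketch of what “handling multiple data feeds via API” might look like in practice: each source is wrapped behind a common interface so that new feeds can be added without touching downstream research code. The feed classes, field names and values are placeholders, not any vendor’s actual API.

```python
from abc import ABC, abstractmethod

class AltDataFeed(ABC):
    """Common interface each alternative data source is wrapped in, so
    downstream research code never depends on a single vendor's schema."""

    @abstractmethod
    def fetch(self, ticker: str) -> list:
        ...

class CrowdEstimatesFeed(AltDataFeed):
    def fetch(self, ticker: str) -> list:
        # Placeholder record: a real implementation would call the vendor's
        # API and normalise its response into this common shape.
        return [{"ticker": ticker, "source": "crowd_estimates", "eps_estimate": 1.42}]

class WebTrafficFeed(AltDataFeed):
    def fetch(self, ticker: str) -> list:
        return [{"ticker": ticker, "source": "web_traffic", "weekly_visits": 1_250_000}]

def collect(feeds: list, ticker: str) -> list:
    """Pull normalised records from every registered feed."""
    return [record for feed in feeds for record in feed.fetch(ticker)]

print(collect([CrowdEstimatesFeed(), WebTrafficFeed()], "XYZ"))
```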

There are many different online platforms springing up for fund managers to tap into for alternative data purposes. Broadly speaking, these can be categorised as open communities (Seeking Alpha, eToro), digital expert communication networks (SumZero), digital expert contribution networks (Harvest Exchange) and crowdsourcing platforms (Estimize, Quantopian).

“If you’re looking to incorporate Estimize, for example, it’s not very different from taking in an earnings estimate from one of the sell-side banks,” says Doug Dannemiller, Research Leader, Investment Management, Deloitte Center for Financial Services. “They might have arrived at that number differently for a particular stock price, but it works almost exactly the same way.

“Contrast that with doing it yourself. Instead of just managing a vendor risk, you then have to manage and understand the dynamics of the participants in that ecosystem; you have to roll up your sleeves and understand the reliability of the data going forward, and you might need to sanitise the information to make it acceptable for investment purposes.

“If you go one step further and set up your own crowd platform and try to draw signals out of it, then you take on additional risks from all perspectives: data integrity, provenance, material non-public information (MNPI), etc. And you run the risk of not being able to keep that crowd viable for as long as you need it to generate signals.”

The point about MNPI is critical as it exposes fund managers who use it to potentially harmful legal and regulatory risks. One only has to see what happened to Cambridge Analytica to appreciate this.

Asked whether the proliferation of alternative data could lead to the opening of Pandora’s box, with fund managers unwittingly using MNPI to generate alpha, Dannemiller responds:

“I think there are some ticking time bombs. Innovators (such as quant hedge funds) saw the opportunity for alpha and went headlong into it, although so far no investment firm that I’m aware of has blown up as a result of using alternative data.

“But it is a problem. Investment managers have to avoid MNPI at all costs. A firm like Estimize has algorithms and user input profiles that can scrub out and eliminate any signal generated by MNPI. The rules for MNPI as they translate into the cyber world don’t necessarily apply. Just because somebody with a server and some bot-programming capability can get data does not mean it is public. The rules on what’s public and non-public are difficult to navigate. They need to evolve.”

Conclusion

At Eze Soft, Neuman is a firm believer that technologists should try to reach for the stars but at the same time be pragmatic and take the small gains as they arise. He says that the team is currently experimenting with machine learning tools for portfolio construction.

“Garry Kasparov said a machine can beat a man, but a machine plus a man is better than either. Augmenting human intuition with AI and machine learning tools to provide the right set of data points [for investment decision-making] for the portfolio manager is probably going to be the first level of evolution in front-office trading.

“I also think this could apply to the back office. What if we could use technology to have real-time connections between the trading and accounting teams so that throughout the day, the back office gets a jump on end-of-day activities? There are interesting applications that could apply across the whole firm, front to back,” says Neuman.

Each year, Eze Soft runs an Innovation Challenge over a few days, designed to encourage its development teams to set their imaginations free and work on whatever they think could be the next game-changing technology tool.

“Often, a lot of valuable insights come out of that; voice trading over Amazon Alexa, connecting our platform to a cryptocurrency trading platform, developing ML-based matching tools and so on. It gives our developers a chance to explore the realms of what’s possible, and encourages them to always keep innovating,” concludes Neuman.


1. https://www.hedgeweek.com/features/sei/digitalising-investor-experience
2. https://dougseven.com/2014/04/17/knightmare-a-devops-cautionary-tale/

 
