In the first part of this article, I concluded that although Microsoft Azure has some momentum amongst investment management platforms, the wise move for investment management firms and software vendors is to adopt a multi-cloud strategy. This means retaining the flexibility to deploy in several cloud environments.
In this part, I want to share the experiences we have had at Ryedale striving for a cloud-native architecture that is nevertheless portable between clouds. How do we walk the walk of multi-cloud, rather than just talk the talk? And how do we reconcile multi-cloud with our core belief that our software should be optimised for the cloud?
An article of faith for us is that good architecture and good design are the first two pillars on which high-quality software is built. Encapsulation is the essence of object-oriented programming, and it is a powerful tool for abstracting code away from the specific deployment model the software is destined for.
For instance, we have touchpoints with our host platform through a database, storage and a topic-driven message bus. Each of these building-block services is encapsulated in a class written by us. Only the features of these services that we actually use appear on the public interface of those classes. We hide platform features we do not (yet) wish to use, and we hide the private implementation that connects with the host platform.
This means that re-plumbing a service is easy: it is just a matter of visiting the class that provides that interface to the rest of the application. Suppose we needed to move from a Topic on Azure Service Bus to something else, say AWS SNS. We would have only one class to re-implement and unit test. Because we have not allowed native platform references to proliferate around our code base, we have not set ourselves an impossible task.
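To make the idea concrete, here is a minimal Python sketch of the pattern described above. The names (`TopicPublisher`, `notify_trade_booked`) are illustrative assumptions, not Ryedale's actual code, and the in-memory implementation stands in for a real one that would wrap the Azure Service Bus or AWS SNS SDK behind the same interface.

```python
from abc import ABC, abstractmethod


class TopicPublisher(ABC):
    """The only message-bus surface the rest of the application sees."""

    @abstractmethod
    def publish(self, subject: str, body: str) -> None:
        ...


class InMemoryTopicPublisher(TopicPublisher):
    """Stand-in implementation; a production class would wrap the
    azure-servicebus or boto3 (SNS) client behind this same interface."""

    def __init__(self) -> None:
        self.messages: list[tuple[str, str]] = []

    def publish(self, subject: str, body: str) -> None:
        self.messages.append((subject, body))


def notify_trade_booked(bus: TopicPublisher, trade_id: str) -> None:
    # Application code depends only on the abstract interface,
    # never on a specific cloud SDK.
    bus.publish("trades", f"booked:{trade_id}")


bus = InMemoryTopicPublisher()
notify_trade_booked(bus, "T-1001")
print(bus.messages)  # [('trades', 'booked:T-1001')]
```

Moving cloud then means writing one new subclass of `TopicPublisher` and leaving every caller untouched.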
This is the same idea as hardware virtualization, which has found application everywhere from integrated-circuit design to entire data centres. We apply it within our application code. Take the idea one step further, bundle together all the touchpoints with the host into one package, and we have a hardware abstraction layer that gives us good portability between different hosts.
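The "one package" idea can be sketched as a single composition point that decides the deployment target. This is an assumption-laden illustration: the class names are hypothetical stand-ins for per-cloud implementations, not real SDK wrappers.

```python
from dataclasses import dataclass


# Hypothetical stand-ins for per-cloud implementations; real classes
# would wrap the Azure or AWS SDKs behind identical interfaces.
class AzureSqlDb:      cloud = "azure"
class AzureBlobStore:  cloud = "azure"
class AzureServiceBus: cloud = "azure"

class AwsRdsDb:        cloud = "aws"
class AwsS3Store:      cloud = "aws"
class AwsSnsBus:       cloud = "aws"


@dataclass(frozen=True)
class HostPlatform:
    """One bundle of every touchpoint with the host: the application
    asks for this object, never for a specific cloud SDK."""
    database: object
    storage: object
    bus: object


def build_platform(cloud: str) -> HostPlatform:
    # The single place in the code base where the host is decided.
    if cloud == "azure":
        return HostPlatform(AzureSqlDb(), AzureBlobStore(), AzureServiceBus())
    if cloud == "aws":
        return HostPlatform(AwsRdsDb(), AwsS3Store(), AwsSnsBus())
    raise ValueError(f"unsupported cloud: {cloud}")


print(build_platform("aws").bus.cloud)  # aws
```

Everything outside this factory is cloud-agnostic; porting the application becomes a matter of adding one more branch.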
When we started, although we hosted our services in the cloud, we used virtual machines. Our deployment model didn't look very different from a typical on-premises, client-server system, circa 2005. To use the jargon, we used infrastructure as a service (IaaS), and we were about to move to platform as a service (PaaS).
At first, we hadn't figured out all the things that were anchoring us to IaaS. Chief amongst them was our use of the file system, a.k.a. the C: drive. We had not, at that stage, thought to virtualize the act of reading or writing a file to storage. The file system seemed like an omnipresent companion, something it was safe to assume would always be there.
It must have been during routine monitoring of disk space on a VM that the thought occurred to us: it was crazy to spend time on this when we were sitting on many terabytes of cheap cloud storage, with only a few gigabytes allocated to our VM. The penny dropped.
Now we don't make any assumptions about the platform providing storage to our application. It is implemented behind an abstraction layer, and it could just as easily be from Google Cloud, AWS, Alibaba, Azure, or anything else.
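A minimal sketch of what such a storage abstraction might look like, under the assumption of a simple read/write interface (the names `FileStore` and `LocalFileStore` are illustrative, not Ryedale's actual classes). The local-directory implementation shown here would sit alongside ones wrapping the Azure Blob, S3 or GCS SDKs.

```python
from abc import ABC, abstractmethod
from pathlib import Path
import tempfile


class FileStore(ABC):
    """Storage touchpoint: the application reads and writes named blobs
    with no assumption about where, or on which cloud, they live."""

    @abstractmethod
    def write(self, name: str, data: bytes) -> None:
        ...

    @abstractmethod
    def read(self, name: str) -> bytes:
        ...


class LocalFileStore(FileStore):
    """Backed by a directory; cloud versions would wrap the Azure Blob,
    S3, or GCS client behind the same two methods."""

    def __init__(self, root: Path) -> None:
        self.root = root

    def write(self, name: str, data: bytes) -> None:
        (self.root / name).write_bytes(data)

    def read(self, name: str) -> bytes:
        return (self.root / name).read_bytes()


with tempfile.TemporaryDirectory() as tmp:
    store: FileStore = LocalFileStore(Path(tmp))
    store.write("report.csv", b"nav,1.02\n")
    content = store.read("report.csv")

print(content)  # b'nav,1.02\n'
```

The application code above the interface never learns whether it is talking to a local disk or a blob service.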
Our priority is to make our investment platform great for the investment professionals who use it. We don't play with technology for its own sake. Our day-to-day work centres on Azure, which we regard as our home cloud. Our managed-service business is firmly rooted in Azure because we have staff who are familiar with it, and we have built a suite of deployment and monitoring tools that give us fine control over many single-tenant instances of our platform.

Another part of our business is licensing our intellectual property to firms that want to develop their own investment system in-house. These assignments can be at the leading edge of what such clients have attempted before, and right at the forefront of their own cloud software development efforts. What we provide is a 'jump start' kit. It is here that portability, and knowledge across platforms, come into play.
Part of our R&D involves scanning the horizon to understand what is on the roadmap for Azure and for other cloud providers too. If something sounds like it might produce a real benefit for our clients, we'll investigate it. One of the questions we ask ourselves is 'does the technology have an equivalent in other environments?'
If we can't identify equivalents, the technology remains on the watch-list. If, however, other major cloud providers deliver something similar, we will do a small-scale investigation to understand the limitations and opportunities of the alternatives. When engaged by a client who may have a different home cloud to ours, we already have a clear picture of the portability issues. They will invariably be small, given the encapsulation approach we have adopted. We can focus on interfacing to in-house systems, which should rightly be the major concern of such technology integration projects. We find that the same attention to software design stands us in good stead here too.
Being cloud-native is important: it gives massive scale to IT operations that were traditionally constrained by the size of the machine they ran on. It is also difficult to achieve. To build a cloud-native architecture, we had to start again. We had to devote time and effort to learning how to exploit the power of the cloud correctly, and we needed to appreciate the differences between the various cloud platforms. Our experience is that the effort is rewarded many times over by a scalable, robust, portable and cost-effective IT platform.