Why we invested in Sunlight

By Ekaterina Almasque

When I was an investor for EMC and then Samsung from 2014 to 2019, there was already a clear focus on resolving major bottlenecks in the cloud. Data center architectures traditionally rely on storage and network infrastructure with high, unpredictable latencies and low performance. In general, these architectures are complex, slow, expensive and difficult to manage, and this cannot be easily resolved by continuing to disaggregate server resources using traditional virtualisation methods.

Since then, the world has evolved. Today, new workloads such as Artificial Intelligence (AI) and High-Performance Computing (HPC) are driving ever-accelerating growth in volumes of data, and overwhelmingly dominating cloud resource consumption. The HPC-as-a-Service market alone is projected to reach $10B by 2023, according to MarketsandMarkets — growth largely driven by Healthtech and Life Science applications, whose importance the current healthcare crisis has made plain.

Furthermore, trends like connected cars, autonomous driving, and Industry 4.0 will continue to push workloads like AI to the edge. One of the major challenges in deploying edge infrastructure is that it is not possible to take a “traditional cloud” technology stack and deploy it in an edge architecture; there are very different resource limitations, including processing power, form factor and bandwidth. At the ‘far edge’, for example, there are devices on the factory floor which need to work in harsh environmental conditions, in small enclosures and with low power. Existing approaches, some of which were developed even before the “cloud era”, are extremely inefficient or unusable in such circumstances.

Last but not least, for much of the last 15 years “cloud versus on-premise” has been a recurring theme. The discussion has now morphed into “data everywhere”, as we witness a tectonic shift to a hybrid cloud. Many traditional technologies were architected for the era of the single cloud and do not natively support hybrid, let alone edge, thus creating siloes. As a result, a one-size-fits-all enterprise strategy is not viable.

Hyperconverged infrastructure (HCI) was developed to bring simplicity and software control to the deployment of enterprise applications. However, many of the early players in that space were created before the advent of edge and hybrid requirements, and although they are racing to adapt, it is often hard to change an underlying architecture. According to Gigaom's latest report, choosing the right HCI infrastructure remains challenging: while HCI is becoming good at managing more applications, supporting both capacity-driven and latency-sensitive workloads is still a balancing act.

What if there were no need to sacrifice high capacity for low latency and vice versa, especially when data and computation are all in a hybrid cloud environment? This is where Sunlight comes into play. Sunlight's low-footprint, high-throughput HCI stack provides distributed storage with bare-metal-equivalent performance for large amounts of data, with all the benefits of virtualisation and hyperconvergence. It is based on a unique hypervisor technology, which is disrupting decades of homogenous, one-size-fits-all storage and compute in the datacentre, and its breakthrough can be captured in one word: "Efficiency."

This gives Sunlight a distinctive advantage in the emerging market of micro-datacentres, colocation and hybrid cloud. In addition, it is second to none when it comes to running workloads at the edge. Sunlight's ability to optimise resource usage is also vital to the profitability of software-as-a-service players, who face soaring infrastructure costs and strive to minimise the 'infrastructure costs as a percentage of revenues' metric.

At its core, the real breakthrough of Sunlight is in efficient handling of Input/Output, the very core of the cloud bottleneck. With targeted support for NVMe storage in several layers of the stack, Sunlight can achieve more consistent performance overall. The approach was developed by Julian Chesterfield, a co-founder of Sunlight and one of the top global minds in virtualisation, having previously been an architect of Xen, the original foundation of the Amazon cloud (XenSource was acquired by Citrix). Julian drew inspiration from his collaboration with ARM in a previous venture, where he researched ways to eliminate performance inefficiencies in workloads running on ARM-based servers. Luckily for Sunlight, the world now looks to be moving towards ARM (and other hardware accelerators such as GPUs), with both Apple and Amazon announcing their move to ARM processors.

A series of performance benchmarks compared the MariaDB database (an OpenOcean portfolio company) running on Sunlight in AWS versus natively on an AWS instance. The tests demonstrate that with 8 cores allocated to the instance, Sunlight can achieve 65% higher performance at 40% of the cost. Access latency is measured at 68% lower than the standard AWS instances. The resulting IOPS (input/output operations per second) in Sunlight's case are astonishing.
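Taken together, the quoted figures imply a sizeable price-performance gap. A rough back-of-the-envelope sketch, using only the percentages above (normalised values, not actual benchmark data):

```python
# Illustrative arithmetic from the quoted figures: 65% higher
# performance at 40% of the cost versus a native AWS instance.
# Values are normalised to the baseline; this is not a benchmark.

baseline_perf = 1.0   # native AWS instance performance (normalised)
baseline_cost = 1.0   # native AWS instance cost (normalised)

sunlight_perf = baseline_perf * 1.65   # 65% higher performance
sunlight_cost = baseline_cost * 0.40   # 40% of the cost

# Price-performance = throughput delivered per unit of spend.
baseline_pp = baseline_perf / baseline_cost
sunlight_pp = sunlight_perf / sunlight_cost

advantage = sunlight_pp / baseline_pp
print(f"Implied price-performance advantage: {advantage:.2f}x")
```

Under these assumptions, the implied price-performance advantage works out to roughly 4x.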

Although our first reaction when we met Sunlight was "why would we need another virtualisation player?", we are now convinced that Sunlight is the platform enabler we were looking for to finally make a cloud strategy viable for both data centres and enterprises. We already see benefits for Splunk and other analytics workloads, and we look forward to seeing more customers enjoying the benefits of Sunlight, both for delivering highly performant services and for maximising Return on Investment.
