Tag Archives: ultra low latency messaging
Printed words are good, but pictures and sound are better. Watch the video below for a quick summary of how Informatica Ultra Messaging can help your business:
- Increase application performance and throughput
- Reduce fixed and operational costs
- Increase capacity
- Reduce single points of failure
- Increase scalability, reliability, and availability
For more information, have a look at: Ultra Messaging, Better Value with Better Technology.
There are lots of ways to run a trading firm.
Some firms build their strategy around high-frequency or algorithmic trading, approaches that both depend on having the best technology and writing the fastest trading applications.
At the other end of the spectrum, some firms employ only human traders, using a traditional buy-and-hold strategy and expecting to hold a security for months or even years before selling it.
But in between these two ends of the spectrum, there exists a hybrid that uses electronic trading with a bit of buy-and-hold added in. Some call this blend “trade smarter, not harder”.
Instead of competing with other traders for the absolute lowest price, this strategy prioritizes making better decisions by running “pre-trade analytics” on historical and financial data.
Evaluating a price in the Equities or Foreign Exchange (“FX”) markets does not require much calculation, so one of the prime limiting factors on winning those trades has been the speed of data movement from one application to another, whether within the same host or across the network. But the world of Fixed Income, commonly known as “bonds”, is different. (more…)
Just as a carburetor restrictor plate keeps NASCAR race cars from reaching top speed by restricting maximum airflow, inefficient messaging middleware keeps IT organizations from processing vital business data as fast as possible.
In the past, latency was largely ignored in the IT world, except by network engineers and algorithmic trading experts. But today there is compelling evidence that latency is an important metric for every business that runs a website or deploys Rich Internet Applications (RIAs), because even small delays in presenting data show a clear pattern of pushing customers and readers away.
Interesting data, replicated by multiple sources (including Bing, Google, and Amazon), show that slow-loading pages can cause viewers to lose focus and click on something else, possibly never to return.
For instance, on search results, a delay of just 0.5 seconds chases away up to 20% of the traffic and revenue. As this O’Reilly Radar post puts it, “delays under half a second impact business metrics”.
Our first post in this series on Efficiency covered the high-level performance benefits of super-efficient messaging software, whether you measure for latency or throughput, since efficiency is the property of software that provides performance. “Ultra-low latency” is just another term for extremely fast, lean, efficient execution. For more, see the post: Ultra Messaging is Also High-Throughput, High-Availability, Lower-TCO Messaging.
Our next post covered the 24×7 availability, reliability, and lower TCO that flow from this efficiency: less hardware and fewer software processes touching data in transit between applications provide these benefits. For more, see the post Ultra Messaging: For 24×7 High Availability, Lower TCO, and Robust Reliability.
This post discusses how the same Ultra Messaging efficiency that provides performance, reliability, and lower TCO also provides great agility and near-linear scalability. And with today’s Big Data challenges, especially in the capital markets, efficiency is more prized than ever.
Our first post in this series (Ultra Messaging Is Also High-Throughput, High-Availability, Lower-TCO Messaging) covered, from a very high level, the performance benefits of highly-efficient messaging software by stressing that efficiency is the property of software that provides performance, whether you measure a single piece of data for “ultra-low latency”, or a large batch of data for throughput. Either way, ultimately, it’s all about extremely fast, lean, efficient execution. The way you choose to measure that performance is up to you, and depends on your needs.
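The point that latency and throughput are two views of the same underlying efficiency can be illustrated with a tiny timing sketch. The handler below is a stand-in, not Ultra Messaging itself, and the message size and count are arbitrary assumptions chosen only to make the arithmetic visible:

```python
# Sketch: per-message latency and bulk throughput measured from
# the same run. The "handler" is a trivial stand-in for whatever
# work the middleware and application do per message.
import time

def handle(msg: bytes) -> int:
    return len(msg)          # stand-in for application processing

N = 100_000                  # number of messages (arbitrary)
msg = b"x" * 256             # message payload (arbitrary size)

start = time.perf_counter()
for _ in range(N):
    handle(msg)
elapsed = time.perf_counter() - start

avg_latency_us = elapsed / N * 1e6   # average microseconds per message
throughput = N / elapsed             # messages per second
```

In this serial loop the two numbers are exact reciprocals (their product is always one million, the number of microseconds in a second). Real middleware pipelines messages, so high throughput does not automatically mean low per-message latency, which is why it matters which of the two you measure for.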
But extremely fast, lean, efficient execution has other benefits for the customer besides performance. For example, the same Ultra Messaging efficiency that provides very high performance also provides the foundation for many of the key features of enterprise-quality software, such as true 24×7 high availability, lower total cost of ownership (TCO), and robust reliability. In the earlier post, we just touched on these topics, but here we will discuss them in a bit more detail.
From fuel-efficient cars, to energy-efficient homes and office buildings, to saving time by shaving while reading Twitter on your smartphone and making sure your kids eat their breakfast, efficiency is on our minds more and more today. And for very good reason.
Efficiency is one of the most sought-after qualities of any product or service, because it means more for your money compared to the competition, and therefore, more overall buyer satisfaction. The product works, it works well, it works when you need it, it doesn’t quit unexpectedly or too soon, and you would buy it again in the future. (more…)
Many companies today must send streaming data across the globe, quickly, which often means using a shared resource: a WAN. Bandwidth on most WANs is restricted to something like 100 Mb/sec, or even 10 Mb/sec, often far slower than the high-speed LANs connected to either side of the WAN. This is especially true in the capital markets, where ultra low latency messaging is key.
A link speed mismatch like this can present a problem. Say you’re running a 10 Gb Ethernet LAN in New York and sending data to London and Singapore over a 100 Mb/sec WAN link. While your streaming market data may have an aggregate rate well below 100 Mb/sec, the spikes are many multiples of that aggregate rate. Those spikes are where your problems can begin. (more…)
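The back-of-the-envelope arithmetic behind that problem can be sketched in a few lines. This is an illustrative calculation only; the link speeds and burst size below are assumptions, not measurements from any particular deployment:

```python
# Sketch: how a traffic spike overflows a slower WAN link.
# A burst arrives at LAN speed but drains at WAN speed; the
# difference queues up in a buffer and turns into latency.

LAN_BPS = 10_000_000_000   # 10 Gb/sec LAN (illustrative)
WAN_BPS = 100_000_000      # 100 Mb/sec WAN link (illustrative)

def burst_backlog(burst_bits: float) -> tuple[float, float]:
    """Return (queued_bits, extra_delay_sec) after a burst arrives
    at LAN speed and drains at WAN speed."""
    arrive_time = burst_bits / LAN_BPS   # time for the burst to arrive
    drained = WAN_BPS * arrive_time      # bits the WAN sent meanwhile
    backlog = burst_bits - drained       # bits still queued
    return backlog, backlog / WAN_BPS    # added delay for the last bit

# A hypothetical 50 Mb market-data spike:
backlog, delay = burst_backlog(50_000_000)
```

For that 50 Mb burst, the WAN drains only about 0.5 Mb while the burst is still arriving, leaving roughly 49.5 Mb queued, which means close to half a second of added latency for the last message in the spike, even though the average data rate is far below the link’s capacity.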
Friday August 5, 2011 set new records for trading volume around the world. According to this FT.com story: “The amount of data generated by the day’s trading in US futures and equities alone saw over 130m trades on Friday, generating 950 gigabytes of data, according to Nanex, a market data provider.” In London, “some exchanges with older technology could not cope”. And so Big Data strikes again.
But market data volume has been exploding for months, even years. This is just one more chapter in a long story, illustrating the kinds of problems a business can encounter if it neglects its technical infrastructure in the face of data volume growth. (more…)