Experts agree that future CPU scaling will be out, not up. Clock rates seem to have plateaued around 3 GHz, and while five years ago people got excited about dual-core machines, today the major commodity CPU manufacturers deliver 4-, 6-, and 8-core processors, and system vendors link these CPUs together in multi-socket servers, as Cisco does in its UCS line.
Our Ultra Messaging customers – financial players like exchanges and investment banks – are always interested in gaining every possible performance advantage in the ultra-competitive world of high-frequency electronic trading. (more…)
At 29West, now a part of Informatica, we always strive to provide effective whitepapers and useful benchmark testing results for our Ultra Messaging® products, and lately we’ve been extra busy in this area.
Please see below, where we provide links and short summaries of four of our recent whitepapers and performance reports.
29West and Voltaire have joined together to provide a field-proven infrastructure solution that combines best-of-breed enterprise messaging middleware and trading transport infrastructure. The solution – LBM and VMA over Voltaire 10GigE – slashes end-to-end trading latency by analyzing and optimizing each layer of the infrastructure for minimum latency and maximum throughput, and it is very cost-effective, too. (more…)
29West, now part of Informatica, to Speak at High Performance for Linux Financial Markets – April 19, 2010
Bob Van Valzah, director of product marketing for Informatica’s 29West Ultra Messaging, will join the Cisco-sponsored panel of experts.
Session 2, 11:00 – 11:50
Ensuring Real-World Low-Latency – Performance When it Counts
The key to low latency that actually impacts your bottom line is ensuring that your applications can handle real-world situations – volume spikes, massive price movements – with no degradation in performance. The most intense trading is when you make (or lose) your real money, and you need your latency at its lowest or risk being run out of the market. New techniques for better understanding the capacity and performance characteristics of these transaction platforms will increasingly drive competitive advantage, and through this lens, winners will design, architect, and implement the next generation of capabilities for dominance. This session will explore the impact of low latency and its measurement, as well as ensuring performance when it counts.
Stop by Session 2 and say hello to Bob.
STAC Research has published a new STAC Report showing LBM average latency under 20 microseconds using 10-gigabit Ethernet with kernel bypass technology from Solarflare and a Cisco 4900M switch. This is 30 microseconds faster than we typically measure for 1-gigabit Ethernet using a standard Linux kernel stack. Our customers are always looking to save microseconds, and this report demonstrates how customers on GigE today can cut their latency by more than half as they upgrade to 10 GigE tomorrow.
In addition to low average latency, the report showed consistently low variance in latency from message to message (latency jitter) for rates up to 125,000 messages per second. The 99.9th percentile did not exceed 63 microseconds and the standard deviation of the latency was always 6 microseconds or less. This is impressively low jitter for our customers who care most about consistent operation from message to message.
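Statistics like these are straightforward to compute from per-message latency samples. A minimal sketch in Python using hypothetical sample data (a real harness would timestamp each message at send and at receive, as the STAC methodology describes in the full report):

```python
import statistics

# Hypothetical per-message latency samples in microseconds.
latencies_us = [18.2, 19.1, 17.8, 21.5, 19.9, 18.4, 63.0, 18.7, 19.3, 18.9]

avg = statistics.mean(latencies_us)       # average latency
stdev = statistics.stdev(latencies_us)    # latency jitter

# 99.9th percentile via the nearest-rank method.
ranked = sorted(latencies_us)
idx = min(len(ranked) - 1, int(round(0.999 * len(ranked))) - 1)
p999 = ranked[idx]

print(f"avg={avg:.1f}us stdev={stdev:.1f}us p99.9={p999:.1f}us")
```

With real data, the tail percentiles matter more than the average: a low mean can hide the occasional 100-microsecond outlier that costs a trade.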
Some applications require higher single-stream rates than this configuration could deliver. The report goes on to test the system configured for best efficiency, allowing higher throughput at the expense of some additional latency as rates rise. Single-stream rates of 860,000 messages per second were reached; latency varied with test conditions, but average latencies rarely exceeded 60 microseconds.
The report tested throughput as well as latency. Using small (64-byte) messages, 3 publishing applications on a single server generated almost 2.5 million messages per second. With larger (1024-byte) messages, 3 publishing applications on a single server generated 550,000 messages per second, driving the network to over 4.5 Gbps.
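As a quick sanity check on the wire numbers, 550,000 messages per second at 1,024 payload bytes each works out to just over 4.5 Gbps, counting payload only and ignoring UDP/IP/Ethernet framing overhead:

```python
# Back-of-envelope check of the reported throughput figure.
# Payload bytes only; protocol framing overhead is ignored.
msgs_per_sec = 550_000
msg_bytes = 1_024

gbps = msgs_per_sec * msg_bytes * 8 / 1e9
print(f"{gbps:.2f} Gbps")  # ~4.51 Gbps of payload
```

Counting per-packet headers would push the actual wire rate somewhat higher still.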
The full report contains a wealth of detail on the equipment and test methodologies used. Section 2.2 will be of particular interest to the latency-obsessed: it details system tuning, including MTU, System Management Interrupt (SMI) settings, core shielding, interrupt affinity, and the other settings used to achieve these results.
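Core shielding and interrupt steering are OS-level tuning, but the application side of core pinning can be sketched in a few lines. This is a hypothetical illustration on Linux using Python's `os.sched_setaffinity`, not the harness the report used:

```python
import os

# Pin this process to a single core so the scheduler cannot migrate it.
# Combined with core shielding (e.g. the isolcpus boot parameter) and
# steering interrupts away from that core, pinning reduces latency jitter.
available = os.sched_getaffinity(0)   # cores this process may currently use
target = min(available)               # pick one core (hypothetical choice)
os.sched_setaffinity(0, {target})

print(f"pinned to core {target}")
```

The same effect can be had externally with `taskset`, which keeps the pinning policy out of application code.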
Many Cisco, Solarflare, and 29West customers use UDP multicast for market data distribution, so these tests were all performed with UDP multicast even though there was only one receiver per source. Other benchmarks, as well as many customer deployments, have shown scalable and stable results with our reliable multicast protocols.
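The plumbing underneath is ordinary UDP multicast sockets. A minimal sketch of a one-source, one-receiver multicast exchange over the loopback interface, with a made-up group address, port, and payload (this illustrates the socket options only, not LBM's reliability layer):

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 30001  # hypothetical multicast group and port

# Receiver: bind the port and join the group on the loopback interface.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("127.0.0.1"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(2.0)

# Source: send one datagram out loopback and loop it back to local members.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("127.0.0.1"))
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
tx.sendto(b"hypothetical market data", (GROUP, PORT))

data, addr = rx.recvfrom(1500)
print(data)
rx.close()
tx.close()
```

Raw UDP multicast offers no delivery guarantees; recovering from loss at these message rates is exactly what reliable multicast protocols like LBM's add on top.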
These tests were run using Novell's SUSE Linux Enterprise Real Time (SLERT) 10 SP2, 64-bit, on IBM x3650 servers with dual quad-core Xeon (Clovertown) processors at 2.66 GHz. Many of our customers are showing interest in real-time operating systems as a way of reducing latency jitter, so we were eager for this opportunity to test with SLERT.
Please see the full report for more details.