Pivotal Receives Morgan Stanley's Exclusive 'CTO Award for Innovation' for 2014

June 5, 2014 Stacey Schneider

Last night, Pivotal received a very special award. At their annual CTO Summit event, Morgan Stanley singled out one technology company to receive its prestigious ‘CTO Award for Innovation.’ Pivotal CEO Paul Maritz accepted the award at Morgan Stanley’s 14th invitation-only US CTO Summit, held on June 4.

The CTO Summit is an exclusive event that unites industry leaders with the senior technologists at Morgan Stanley for solution briefings, strategic discussions and focused networking. Leading venture capitalists as well as key executives from both established and emerging technology companies are selected to attend this private event. While much of the event is not disclosed to the public, each year Morgan Stanley selects one technology vendor to receive its ‘CTO Award for Innovation,’ publicly endorsing the innovation of the solution and crediting the technology with making a significant impact on Morgan Stanley’s business.

This year, the award recognizes the role that Pivotal Greenplum Database plays within Morgan Stanley’s next-generation Big Data platforms for financial reporting and risk management, and how it is helping them achieve massive scale while lowering costs.

“Morgan Stanley has been a great partner, and we’re honored by this recognition,” said Paul Maritz, CEO, Pivotal. “Data is at the heart of how businesses function today. We are proud to partner with Morgan Stanley to help them manage and turn highly complex data into actionable business insights, at scale.”

Pivotal’s client roster includes several of the world’s leading financial services providers, who rely on its data, analytics and development technologies to support key business processes such as risk management, compliance, fraud prevention and customer engagement.

In a nutshell: Pivotal Greenplum Database, bridging Apache Hadoop® to real-time operations

Pivotal Greenplum Database, often shortened to GPDB, manages, stores and analyzes petabytes of data in large-scale analytic data warehouses that use massively parallel processing (MPP) to speed up data crunching. This concept of using lots and lots of processors to do lots and lots of little tasks in order to get to one big answer is similar to the revolutionary thinking that led to Apache Hadoop®. However, where Apache Hadoop® places any kind of data into its file system in an unstructured way, and traditional RDBMS databases are completely structured, GPDB sits somewhere in the middle. The massively parallel processing means users can experience 10x, 100x or even 1000x better performance over traditional RDBMS products, and its flexible column- and row-oriented storage approach means it is also faster than Apache Hadoop® for large-scale analytics.
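To make the MPP idea concrete, here is a minimal sketch of how a Greenplum table might be defined. The table and column names are hypothetical; the point is the DISTRIBUTED BY clause, which spreads rows across the database’s segment servers so each one scans and aggregates its own slice in parallel, and the storage options, which choose column-oriented, append-optimized storage for analytic workloads.

```sql
-- Hypothetical fact table, hash-distributed across all Greenplum segments.
CREATE TABLE trade_positions (
    trade_id   bigint,
    trade_date date,
    desk       text,
    notional   numeric
)
WITH (appendonly=true, orientation=column)  -- column-oriented storage suited to analytics
DISTRIBUTED BY (trade_id);                  -- rows are spread across segments by hash of trade_id

-- Each segment aggregates its own slice of the data in parallel;
-- the master combines the partial results into the final answer.
SELECT desk, trade_date, sum(notional) AS total_notional
FROM trade_positions
GROUP BY desk, trade_date;
```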

To illustrate, think of GPDB as the important bridge between the giant dumping ground that is Apache Hadoop® and the real-time, in-memory data operationalized for use in applications. Paraphrasing from yesterday’s post on Exploring Big Data Solutions: When to Use Apache Hadoop® vs MPP vs In-memory, you can think of the three big data strategies like this (a short sketch of the bridge follows the list):

  • In-memory is like your cash register: it’s what’s making money for your business right now. An in-memory data grid provides real-time data access to the applications that are critical to the revenue stream of the business.
  • MPP is where you maximize your revenue by watching the trends you have discovered for deviations and making adjustments. Its massively parallel processing style of data management makes it an excellent choice for analytics.
  • Apache Hadoop® is your research and development arm. As the landing spot for all of your data, paired with a SQL query engine, it lets you explore everything to identify new insights and opportunities that you can later operationalize with MPP or in-memory.
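One way GPDB can act as that bridge is through external tables that read directly from files sitting in Hadoop. The sketch below is illustrative only, assuming a gphdfs-style external table of the kind available in Greenplum of that era; the host, path, table and column names are all hypothetical.

```sql
-- Hypothetical sketch: expose raw files in HDFS as a Greenplum external table.
CREATE EXTERNAL TABLE raw_clickstream (
    event_time timestamp,
    user_id    bigint,
    url        text
)
LOCATION ('gphdfs://namenode:8020/data/clickstream/*.txt')
FORMAT 'TEXT' (DELIMITER '|');

-- Promote the interesting slice into a native MPP table for fast, repeated analytics.
CREATE TABLE clickstream_curated (
    event_time timestamp,
    user_id    bigint,
    url        text
)
WITH (appendonly=true)
DISTRIBUTED BY (user_id);

INSERT INTO clickstream_curated
SELECT event_time, user_id, url
FROM raw_clickstream
WHERE event_time >= date '2014-01-01';
```

The raw data stays in Hadoop for exploration, while the curated slice lives in GPDB where MPP query performance makes it practical to operationalize.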

Over the past year, Pivotal has been hard at work building out a comprehensive data workbench, recently released as the Pivotal Big Data Suite, that bakes better interoperability between these three styles of data management into a single, uncomplicated subscription model. Users are charged based on the amount of aggregate processing power they use, not the amount of data or the specific data solution they choose to run. They are free to store as much data as they like and to swap between Greenplum Database, in-memory solutions like Pivotal GemFire, Pivotal SQLFire, and Pivotal GemFire XD, and Apache Hadoop®-based solutions including Pivotal HAWQ and Pivotal HD.

Pivotal Greenplum Database is available as stand-alone or as part of Pivotal Big Data Suite. For more information, see the product page and download a trial today!

Editor’s Note: Apache, Apache Hadoop, Hadoop, and the yellow elephant logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
