Which Way Might the Apache Way Take Geode?

April 14, 2015 Roman Shaposhnik

Yesterday we announced that the core source code of the first crown jewel from Pivotal’s big data portfolio, Pivotal GemFire, is available to the public under the name Project Geode. Additionally, we proposed Project Geode for Apache Software Foundation (ASF) incubation.

In 2015, what matters to customers is not only the availability of source code or the choice of license, but how open, transparent, and community-driven your governance is. At the end of the day, we here at Pivotal do not want Project Geode to be a read-only open source project with a community dependent on corporate involvement. We want Project Geode to be as successful as Apache Hadoop® or Apache® Spark, and we know that this success is measured not in lines of code, but in how vibrant and viable a community can crystallize around it. In short, we want to signal to all current and future Geode hackers that the future is truly in their hands and that the “Apache Way” of running the project is what we really want for Geode.

Sitting here in Austin at ApacheCon, watching the first downloads of Project Geode being snatched up, I can’t help but have a feeling of déjà vu. It is the exact same feeling I had back in 2005 when the company I worked for (Sun Microsystems) announced that its crown jewel, the Solaris OS, was going to be open source. The scope is different, of course. Yet, in my mind, they share a very important similarity: both projects have tremendous potential to spark innovation beyond what the original product was meant to do. For instance, nobody on Sun’s Menlo Park campus back in 2005 would have dreamt that a technology descended from Solaris could run Linux-based Docker containers. Just the same, nobody is thinking about Geode as anything but GemFire’s engine [yet!], but given all the projects under the ASF umbrella that could potentially leverage this future in-memory data exchange layer based on GemFire, the possibilities are exciting.

I hope this is about to change.

Modern-day data management architectures require a robust in-memory data grid to handle a variety of use cases, ranging from enterprise-wide caching to real-time transactional applications at scale. In addition, as growth in memory size and network bandwidth continues to outpace that of disk, managing large pools of RAM at scale becomes increasingly important. It is essential to innovate at the same pace.
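To make the caching use case concrete, here is a minimal sketch of the cache-aside pattern that in-memory data grids like Geode generalize across a cluster of machines. This is illustrative only, not Geode’s API: a plain ConcurrentHashMap stands in for a distributed region, and the loader function is a hypothetical stand-in for a slower backing store such as a database.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Cache-aside sketch: serve reads from memory when possible, fall back to
// a loader on a miss, and remember the result for subsequent reads.
// A ConcurrentHashMap stands in for a distributed in-memory region.
public class CacheAside {
    private final Map<String, String> region = new ConcurrentHashMap<>();
    private final Function<String, String> loader; // hypothetical backing-store lookup

    public CacheAside(Function<String, String> loader) {
        this.loader = loader;
    }

    public String get(String key) {
        // Load once on a miss; every later get for this key is served from RAM.
        return region.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        CacheAside cache = new CacheAside(k -> "value-for-" + k);
        System.out.println(cache.get("42")); // first read: loaded, then cached
        System.out.println(cache.get("42")); // second read: served from memory
    }
}
```

A data grid extends this simple idea with partitioning, replication, and eviction across many nodes, which is what makes the pattern viable at enterprise scale.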

Project Geode has all the right ingredients to do for RAM what HDFS has done for direct-attached disks. The excitement (and funding) in this area of the big data ecosystem is palpable, and the Geode code base can revolutionize this space.

So here’s to Geode! And here’s to many, many exciting new projects that bits and pieces of Geode’s code base are going to power for years to come!

Support Project Geode

Editor’s Note:
Apache, Apache Hadoop, Hadoop, and Apache Spark are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
