How WellCare Accelerated Big Data Delivery To Improve Analytics

November 12, 2015 Adam Bloom

In this webinar replay, Pivotal and Attunity present an overview of big data solutions alongside WellCare, which has used both companies’ technologies to accomplish many goals. Most importantly, WellCare used Attunity Replicate and Pivotal Greenplum to reduce mission-critical query times from 30 days to seven—a 77% reduction across all query processes and a tremendous enabler of more agile business operations.

The webinar covers big data challenges and opportunities for healthcare providers, with an overview of how reporting speeds can be improved, why real-time data is being used, and where Apache Hadoop™ platforms and data lakes fit into existing enterprise data warehouse environments. As these industry-specific big data lessons are covered, the presenters weave in an explanation of how Attunity and Pivotal technologies are used together. The presentation ultimately leads up to a case study on WellCare, which worked with both companies to address its challenges.

WellCare, a Fortune 500 company, provides managed care for about 2.8 million members and has roughly 5,000 employees. Like most companies, their IT environment includes Oracle, MS SQL Server, and MySQL for both operational and analytical systems. These systems supported a wide variety of data sets—clinical member information, claims, demographic info, operational decision support, pricing, and much more.

When they began the journey with Pivotal Greenplum and Attunity Replicate, WellCare knew one of their top priorities was advancing their quality reporting systems. In this part of the business, the analytical pipeline included multiple sets of processing queries and took a month to run. The company was making decisions on very old information, which affected its ability to meet regulatory requirements. For several years, the IT team had watched server, processor, storage, and memory requirements continue to grow in this area and knew that traditional database and data warehousing approaches would not continue to scale.

So, the WellCare team set out on a journey to address this problem and accomplished several other goals in the process—improving analytical capabilities overall, eliminating various ETL systems, analyzing data at high volumes, simplifying and streamlining analytical work, and moving data faster for quicker and more efficient analysis. Not only did they accomplish these goals, they were also able to radically reduce their hardware footprint at the same time.

Read the transcript below for more information or watch the original webinar.


TRANSCRIPT

Moderator:
Hello, and welcome. Thank you for attending our customer spotlight webinar, “How WellCare Accelerated Big Data Delivery to Improve Analytics.” I’m happy to introduce our presenters for today, David Menninger from Pivotal, Kevin Petrie from Attunity, and James Clark from WellCare. Please feel free to send in your questions anytime using the Q&A console. We will do our best to answer them at the end or if we can’t get to them today, we’ll get back to you via email. Thanks again for being here, and now I’ll hand it off to David.

David Menninger:
Thank you. Today, we’d like to review with you some of the challenges in big data and the opportunities, not only for healthcare providers but in general, and help you understand how to take advantage of big data. In the case of healthcare analytics specifically, we will talk about how to increase some of your reporting speeds. With all the compliance reporting needed in healthcare, being able to prepare reports quickly and easily is important, and so is enabling real time data initiatives for big data—taking advantage of the data that’s streaming through your system and being able to react to that in real time. We will talk about incorporating Apache Hadoop™ into your enterprise data warehouse environment, how you might utilize Hadoop to achieve something referred to as a data lake, and how you might do that successfully. We will talk about, with the help of our friends from WellCare, how you might find a joint solution leveraging both the Attunity and Pivotal Big Data capabilities when you’re setting up these environments.

The first thing I’d like to do is introduce Pivotal and give you a sense of who we are. Not everyone I meet with—and I meet with hundreds of customers in my role—knows who Pivotal is. They might have heard of Pivotal as being a startup, and we want to dispel the notion that we’re a small startup. We were founded two years ago, put together from parts of EMC and parts of VMware. We now have over 1,800 employees in the organization and 1,000 enterprise customers. This is not three guys and a dog in a garage. We have very well established technologies—there are some revenue figures cited here to give you a sense of how big the organization is from a licensing perspective.

The problem that we’re trying to address as an organization—in fact, the name Pivotal itself—comes from this transformation that we’re in the midst of as an industry. Every business has to become a digital business these days to remain competitive. We all do our banking on our phone or over the web. In the case of healthcare, I go to a relatively small medical practice, a three-doctor practice. Even at that practice, I can get my medical records online and over the phone. This world is changing, and, for you to be competitive in your business, you have to be able to take advantage of that. It’s this pivot, this transformation, that the name Pivotal represents: pivoting to become a digital business. As a business, we saw the opportunity to help businesses across all different kinds of industries have a platform that would enable them to take advantage of becoming a digital business.

In this world of big data, as we talk more about big data, we’ll talk about how big data technologies typically require scale-out implementations—meaning many servers operating together as a cluster of computing power delivering those capabilities to your users. Now, companies like Google, Facebook, and Yahoo pioneered these types of approaches. They could afford to invest billions of dollars in building out server farms and experimenting with open source technologies, even creating some of those open source technologies. Most of our enterprise customers don’t really have that luxury to experiment and explore what they might do. We have created a platform that effectively gives you Google-like or Amazon-like capabilities in your enterprise so you can take advantage of being a digital business.

When we look at what is required in this type of platform, we end up focusing most of our discussion around the big data part of the equation. That’s an obvious part of today’s discussion. In other discussions, we could talk to you about the agile capabilities that you need to live in this world. Take a look at your [mobile] app store right now and see how many apps need to be updated. That will give you a sense of how quickly things need to change and how quickly you need to be able to react to changes in the market. As I mentioned, scale-out technologies are often delivered on public cloud platforms, but those could be private cloud platforms as well. So, you need a way to be able to deal with these elastic architectures—how do you provide services to your users in an elastic environment and do that in a way that is manageable and repeatable as an organization?

Those are the things that we see constituting the key pillars, if you will, of this digital transformation. As I said, today’s discussion is going to focus mostly on the big data portion of the equation.

The fundamental problem for most organizations in dealing with big data is that we have a lot of data as an industry that we just don’t utilize. You can see here the statistics on the right hand side.

We really only prepare 3% of our data for analysis. We really only analyze half a percent of that, and even less than that is being operationalized. By operationalized, we mean that analysis is not a one-time exercise. When you identify a way to analyze information and get useful value out of it, you need to be able to institutionalize that and make it a regular part of your standard operating procedure. It needs to be embedded into the business processes and the applications of your organization. Without that, you’re left with just a one-off analysis that rapidly becomes obsolete and out of date, because it isn’t kept current or made part of your standard operations. The fundamental problem is—how do we provide an architecture and analyses that can incorporate all this information and do it in a way that doesn’t leave data falling off the table and being discarded?

Pivotal has what we refer to as a Big Data Suite. You see various components of the Big Data Suite represented on this screen. Let’s concentrate on the three boxes across the top first. The process of collecting the data often requires a lot of manipulation and preparation of the data—what we’ve called data engineering here. In this data processing box, we’ve got several components based around Hadoop.

If you’re like many organizations I meet with—about 90% of them, a totally non-scientific number—you are not very far along in your Hadoop journey yet. There are certainly some exceptions; digital media companies and Wall Street firms are certainly further down the path with Hadoop. But if you are listening to this broadcast and you’re concerned that you haven’t yet embraced Hadoop, it’s appropriate to be concerned, but don’t feel like you’ve missed the opportunity. Most organizations are just beginning this journey down the Hadoop path.

Hadoop provides several things that are harder to do in more traditional databases. Hadoop is very good at dealing with unstructured data. In the case of healthcare data, think of images and doctors’ notes. Hadoop is also very good at dealing with data at very large volumes. Those are the types of things that Hadoop provides, and, as you work with Hadoop, it is a programming environment. Having tools like Spring XD and Apache Spark™ available for processing that data as you’re preparing it for analysis and as it’s streaming into the system—those are valuable components—but think of the left-hand box as primarily being represented by Hadoop.

We also have, in the middle box, parallel SQL-based advanced analytic capabilities. While Hadoop is very powerful, it’s also much more difficult to use than SQL. You probably have within your organization a number of people with deep SQL-based skills. We offer those SQL-based capabilities in two forms: the Pivotal Greenplum Database and Pivotal HAWQ. Pivotal HAWQ stands for Hadoop with query. The same Pivotal Greenplum SQL analytics are available running stand-alone in the Pivotal Greenplum Database or as part of Pivotal HAWQ running on top of Hadoop.

To give you a sense of the breadth and depth of the analytics that can be performed in Pivotal HAWQ and the Pivotal Greenplum Database, we are one of only two database vendors certified to run SAS software analytics in the database. That gives you a sense of the breadth of analytics we can perform there. Similarly, if you’re familiar with the Transaction Processing Performance Council, they have a decision support benchmark, TPC-DS. Pivotal HAWQ and the Pivotal Greenplum Database can run all 111 of those queries, which, you’ll find, are not supported in most other tools.

The last piece of the puzzle—on the right-hand side—is deploying those applications. As I said, it’s important to operationalize the analytics that you create. We have tools to create those applications at scale. The Pivotal GemFire component, represented in the upper left part of the apps that scale box, is an in-memory database. If you think about big data, part of the reason it’s big is that it’s occurring constantly. If you can react to that information as it’s happening, you can spot opportunities—in the healthcare world, you can even save lives by looking at data in real time. Then, there are several components—Redis and RabbitMQ, along with components like Spark and Spring XD—that you might use in building out custom applications or capabilities when you go to operationalize these types of analytics. The Pivotal Cloud Foundry® (PCF) icon represented in there is our platform for deploying the applications, so that may be a discussion for another day.

We offer all of these services across the bottom here. You see they’re all also offered as components running within the PCF part of our portfolio.

A couple of things to note about the big data platform—what makes it different from others. I mentioned the SQL leadership on Hadoop already. This is also a single license across all the different components. One of the things we’ve observed from our customers is that they don’t necessarily know which parts of the big data stack they’re going to need or use right away. They might start with SQL, then add Hadoop, then add in-memory capabilities, or they might start at one of the other points. This is a complete platform, all incorporated into a single licensing mechanism. You can pick and choose which pieces you want to utilize, and you’ve got the opportunity to deploy those different pieces either stand-alone or in a Pivotal HAWQ configuration with Hadoop and running on top of PCF. These are all open source components or are in the process of being open sourced, and I’ve already mentioned the enhanced data analysis capabilities.

Those are some of the differentiators of this platform and why organizations like WellCare have chosen to work with this technology. With that, I’m going to turn it over to Kevin Petrie and let him introduce you to the Attunity part of the solution. Kevin?

Kevin Petrie:
Great, thank you, Dave. Everyone, I’m very pleased to have the opportunity to speak with you today for a few minutes about what we see happening in healthcare, working with a number of healthcare providers and healthcare-related organizations. There are some fascinating things going on in this part of the big data industry, and we’re very pleased to share some of the things that we’ve learned.

To set the table briefly, we are a publicly traded organization. We trade on the NASDAQ under the ticker symbol ATTU. We have global operations. We have 2000 or more customers in 60 countries, and we’re very pleased to be serving over one half of the Fortune 100. We’re also pleased—and, it’s a responsibility going forward—to be recognized by various industry experts for our innovation. That’s something we fully intend to continue to push the envelope with. What we’re fundamentally seeking to do is help organizations manage big data more efficiently in order to free up resources to focus on data science.

If we turn the lens here to healthcare, there are three key challenges and three key opportunities that we have seen working with various clients. The first is that, as we all know, risk is moving, due to legislation and market forces, from the patient to the provider. That creates new pressure on healthcare providers of all types to be more accountable and more focused on patient outcomes and the quality of care. In tandem, there is a rising public expectation that it’s not acceptable to have patient care suffer in any fashion because the right data point doesn’t reach the right provider or the right doctor at the right point in time. There’s pressure here, and there’s an opportunity as well that we’ll talk about. The final challenge really gets down to platforms. Like any organization, like any enterprise, healthcare providers have more than one platform.

The electronic medical records movement is gaining stride. It is really starting to digitize records and put them into usable form. As David pointed out, the challenge remains to continue that and also to integrate that data across databases, across Hadoop, in the cloud, and so forth.

If we look at the opportunity side of the ledger, there are some pretty compelling things going on here. It’s been exciting to see the level of innovation—looking at smartphones, looking at home-based technology—to create a very rich data stream from the patient wherever they are, potentially in their home, potentially going about their daily lives, and feed it back to caregivers. The caregivers have an unprecedented opportunity to improve care, both when the patient is within the clinic walls, within the hospital walls, and when the patient is out living their lives.

A second great opportunity here is that there are methods of improving operations, which is critical for healthcare organizations. Some of the proven methods include basic logistics, if you will. We have a client that we’ve worked with to create—or they’re helping their clients create—the emergency room of the future, essentially by treating a hospital like a factory floor and putting RFID tags on various individuals, doctors, nurses, and so forth, in order to monitor the flow of equipment, monitor the flow of data, and optimize it with future design. That’s the type of approach that has worked in various industrial settings for a long time. It can work in healthcare.

The final point here—and this is something that we’re pleased to be contributing to as well—is that the methods of integrating data continue to improve.

Let’s go to the horse’s mouth here. This is some very interesting survey data from Aberdeen Group earlier this year of healthcare professionals. They see a few key points of pain. The first is that many critical decisions could benefit from more data driven support—no mystery there, no secret or surprise. Another is that a lack of operational visibility, for example, into how data is being used, does create some inefficiencies. Another key point here is that too many disparate data sources and data silos exist. Again, breaking down those silos is critical in healthcare as in other industries. Finally, the volume and the complexity of proliferating endpoints, proliferating data types, and proliferating platforms does create opportunities we talked about, but it can also create some management complexity.

The bottom line is that, to extract advantage from big data, you need to move it to gain that advantage. It’s really a move-it-or-lose-it value proposition. Hadoop and the other platforms are great. Dave talked about Hadoop earlier. Hadoop is usually not the starting point for data; it’s where data goes after it’s generated elsewhere. Among the endpoints that generate the actual data, you might have transactional systems, point-of-sale systems, social media streams, and smartphones—the data needs to move from there to a place where it can be analyzed, and the method of doing that needs to become increasingly efficient.

What we at Attunity propose is improving big data management on three specific dimensions and each of these feeds into the next. The first is profiling usage in order to optimize placement. We provide visibility software that can help healthcare organizations and other enterprises understand how data is being used within data warehouses so that they can profile that usage. They can, for example, identify hot data or cold data and thereby move it using our replicate software to the right location based on that information.
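To make the profiling idea concrete, here is a minimal, hypothetical sketch of the hot/cold classification described above. The table names, thresholds, and the suggestion of where each tier might live are illustrative assumptions, not the actual output or API of Attunity’s visibility software.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical usage records: (table name, time of access). In practice these
# would come from query-log profiling, not be hard-coded.
query_log = [
    ("claims", datetime(2015, 10, 1)),
    ("claims", datetime(2015, 10, 20)),
    ("member_demographics", datetime(2015, 3, 2)),
    ("pricing_history", datetime(2014, 12, 15)),
]

HOT_THRESHOLD = 2            # recent accesses needed to count a table as "hot"
WINDOW = timedelta(days=90)  # look-back window for "recent" activity
now = datetime(2015, 11, 1)

# Count how often each table was touched inside the window.
recent_hits = Counter(table for table, ts in query_log if now - ts <= WINDOW)

for table in sorted({t for t, _ in query_log}):
    if recent_hits[table] >= HOT_THRESHOLD:
        tier = "hot: keep in the MPP warehouse"
    else:
        tier = "cold: candidate to offload to Hadoop"
    print(f"{table}: {recent_hits[table]} recent accesses ({tier})")
```

The design point is simply that placement decisions follow measured usage: the same profiling output that flags a hot table can also flag rarely touched data worth moving to cheaper storage.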

That feeds into the next point—that we can integrate data more rapidly across multiple platforms. Once that data is in place, it needs to be prepared for analytics, and that’s where we provide software that can automate aspects of data warehousing.

We exist as a Switzerland, if you will. We support 35 endpoints in terms of sources and targets. That cross-platform capability—moving data from where it starts to where it resides to where it needs to be in order to support analytics—is our fundamental value proposition. We’re very pleased to be tight partners with Pivotal. We’ve worked together on many enterprise accounts in order to feed data from different starting points into Pivotal HD and Pivotal Greenplum, and thereby support application usage using just some of the pieces of software here.

We can feed data into the Pivotal Big Data Suite through two primary methods, using our replicate software. We can do full loads of data and we can do change data capture. What that means is that Pivotal HD and Pivotal HAWQ can more easily receive information because we automate the process by which the Pivotal target is reconciled with the source. Whatever it is, we remove the manual coding required to do that. We can load the data very easily and then send continuous updates to support more real time applications using change data capture. We’ll do this all through the Pivotal parallel file distribution program.
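To illustrate the general pattern Kevin describes, here is a minimal sketch of a full load followed by applying captured changes so a target stays reconciled with its source. The row format, the in-memory "target", and the apply logic are illustrative assumptions, not Attunity Replicate’s internals or the Pivotal file distribution mechanism.

```python
# A minimal sketch of the full-load-then-change-data-capture pattern.

def full_load(source_rows):
    """Initial bulk load: copy every source row into the target."""
    return {row["id"]: dict(row) for row in source_rows}

def apply_changes(target, change_events):
    """Apply captured change events so the target stays reconciled with the source."""
    for event in change_events:
        if event["op"] in ("insert", "update"):
            target[event["row"]["id"]] = dict(event["row"])
        elif event["op"] == "delete":
            target.pop(event["row"]["id"], None)
    return target

source = [{"id": 1, "status": "open"}, {"id": 2, "status": "open"}]
target = full_load(source)

# Later, continuous capture streams only the deltas instead of reloading everything.
changes = [
    {"op": "update", "row": {"id": 2, "status": "closed"}},
    {"op": "insert", "row": {"id": 3, "status": "open"}},
]
target = apply_changes(target, changes)
print(target)  # the target now mirrors the source without a second full load
```

The point of the pattern is that after the one-time bulk copy, only incremental changes cross the wire, which is what makes near-real-time targets practical.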

At this point, I’m going to hand over to James, and James can tell us a little bit about what he has been doing at WellCare.

James Clark:
Good afternoon, everyone. My name is James Clark. I’m an IT director for WellCare. I’d like to talk a little bit about WellCare itself. We are a leading provider of managed care services, really targeted for government sponsored programs like Medicare and Medicaid. We are a Fortune 500 company. Our membership is about 2.8 million members as of the end of 2013. Our corporate offices are in Tampa, FL. We have a little over 5000 employees.

We, as a company, were using some large and established technologies, Oracle, SQL Server, MySQL, but we really had a need to enable big data analytics across large projects involving clinical member information, claims, lots of demographic information, operational data for decision support, pricing controls, and those kinds of things. One particular area where we saw a huge amount of results was in our quality reporting system, where multiple sets of processing queries were taking up to 30 days to run. We’ve seen that across several verticals. Due to the large amounts of data sitting in the traditional Oracle and SQL Server systems, we saw server, processor, storage, and memory requirements continue to grow, and they have for several years. We are also in the initial stages of our Hadoop implementation. We’re well down the path with having implemented Pivotal Greenplum.

Next slide here.

Really, some of our goals were to improve our analytical capabilities—in particular, to reduce that 30-day processing of queries that I mentioned before. We also wanted to eliminate various ETL systems and ways of manipulating data to get it into data warehouses, and to have the capability to analyze that data at high volume. We needed a data solution that could move data from transactional systems into our Pivotal Greenplum Database quickly and efficiently with change data capture as well as full data set loads.

What we were able to do—implementing both Attunity Replicate and the Pivotal Greenplum Database—was to show a dramatic increase in our capability to get data into the systems and provide analytics on top of it. I think the average increase in reporting speed that we’ve seen across all of the processes that we’ve loaded into the system is about 73%. We’re able to do it in roughly—I think it’s about seven days of processing. Before, we were achieving that in 30 days of processing. As well, we wanted to reduce the complexity of the custom PL/SQL ETL jobs and other various components—things like being able to pull data into the various systems. We’re able to meet our regulatory requirements quickly, to analyze that data, to actually run multiple analyses of the data, validate our results, and cut out some third-party processing. We were able to radically reduce the hardware footprint of the systems that were previously in place. We are able to do the processing that we needed to do, specifically related to quality reporting, financial analysis, decision support, pricing, risk analysis, and all of those kinds of functions.

I will say in general, we’ve been very pleased with what we’ve been able to accomplish in a short period of time. In a little over a year of actual implementation, we have gotten, like I said, about 73% faster on our ability to produce reports, to do the analytics required to respond to our state and federal partners as well as to effectively close our books, meet our monthly financial reporting obligations, those kinds of things.

That I think concludes my selection of slides.

Moderator:
Thanks so much, James. Just a reminder, please feel free to send your questions in during this Q&A part using the Q&A console at the bottom of the screen. We’ll do our best to answer them.

James, I think we’ve got our first question for you. Can you expand on how WellCare was able to accelerate reporting schedules from 30 days down to eight days?

James Clark:
Sure. The previous technology that we had in place was kind of a hodgepodge of multiple systems that included manipulating data from various transactional systems running on Oracle, Microsoft SQL Server, and MySQL. The first step in that process was to homogenize that data, get everything loaded into our Oracle enterprise data warehouse, and then begin the process of doing multiple stages of analytics, data cleansing, codification, those kinds of things, to pull together a result set that would allow us to do our deep analytics on it.

The first step in that process, for us to be able to speed that up, was to use Attunity Replicate to do change data capture and load all of the various data sources that we have from those technologies—as I mentioned, Oracle, SQL Server, and MySQL—and bring those straight into Pivotal Greenplum. That was a huge step up from using things like custom ETL and custom PL/SQL to do a lot of manipulation before we were ever able to do analytics.

Once we get the data into Pivotal Greenplum, we’re able to quickly establish relationships and begin our analysis very close to real time—like we were talking about in the presentation. One of the big things for us was that it basically took us a month to see the data set before we were able to do the analytics on it. Now, we’re able to look at it on a daily basis with fresh data, see the way things are trending, and more quickly produce that result.

Moderator:
Great. Our next question, I think, is for Dave: how is the partnership with Attunity helping other Pivotal customers?

David Menninger:
I think many of our Pivotal customers go through the same exercise that James described and Kevin described in terms of the technology. Generally, when we’re first engaged together with Attunity, the customer is performing an initial load to the Pivotal technologies. They’re using other technologies, like James mentioned, and they want to evaluate or have made a decision to use the Pivotal technologies. The first step is to get the data into the Pivotal technology, so they’ll use the bulk transfer mechanisms to create those initial populations of the databases. Then, once the technology’s in place, they’re moving forward on a regular basis. I talked about operationalizing activities—one of the steps in operationalizing is to continue to load the changes from the source databases, capture those changes with the change data capture mechanism, and add that change data to the data that has already been loaded into the Pivotal technology. Generally, we see customers moving through that progression.

We also see scenarios where, once they’ve had a success in one part of the business, then perhaps they want to expand into other parts of the business. We’ll see the relationship with Attunity following us around the organization as we go to new groups within the customer organization. They’ll also adopt the Attunity technology to go through the same process in those groups.

Moderator:
Great, and for Kevin, we have a question, would you give us some other examples of healthcare companies that are using Attunity software to optimize and integrate data?

Kevin Petrie:
Sure, great question. We have a number of clients that use us in order to feed data into or out of the Epic system. We’ve got one healthcare provider that uses Epic and feeds data changes from the underlying Oracle database. What they’re specifically using our Replicate software to do is to move data from Clarity, which is part of the Epic system, to Teradata, and to do that to support reporting. That’s just one of several examples involving the Epic system—we’re fully compliant with the underlying database and can move data into and out of it.

Moderator:
Super. I think this question is mainly for David, but maybe also Kevin can chime in. This person says, “If we have Hortonworks Hadoop and Pivotal Greenplum, do we really need Pivotal HAWQ in place?”

David Menninger:
Well, the answer is maybe. Like I described, we provide a flexible license that allows you to utilize different components if and when you need them. Pivotal Greenplum certainly has the ability to access data that’s in Hadoop; it just does it a little less directly than Pivotal HAWQ. The reason you would probably choose to utilize Pivotal HAWQ is if the volume of data that you want to access is so large that you don’t want to spend the time to move it into Pivotal Greenplum. That would be one reason. Or, for performance, if you want to access the data directly and have the queries operating directly on top of the original source of the data—that might be another reason to do it.

It’s not a requirement. We do support Hortonworks as an underlying store for the Pivotal HAWQ capabilities. It’s an announcement we made earlier this year. We formed something called the Open Data Platform together with IBM and Hortonworks. Others are certainly welcome to join, but those are the folks who started this, and there are many Hadoop vendors who’ve joined so far. What that means is that we can inter-operate with those other Hadoop platforms. So, customers can run Pivotal HAWQ capabilities on top of Hortonworks. It’s entirely up to you. If you’re satisfied with the architecture, the way it works today, then I would say, no, you don’t need to add the Pivotal HAWQ, but you do have the option to.
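As one concrete illustration of the less direct route Dave mentions, the sketch below uses Python and psycopg2 to define a Greenplum external table over files that already live in HDFS, assuming the gphdfs protocol is configured on the cluster. The host names, credentials, file layout, and columns are all hypothetical.

```python
import psycopg2

# Connection details, the HDFS location, and the column list below are all
# hypothetical; adjust them for your own cluster.
conn = psycopg2.connect(host="gp-master.example.com", dbname="analytics",
                        user="gpadmin", password="changeme")
cur = conn.cursor()

# Define an external table over files that already sit in Hadoop, so Greenplum
# can query them in place rather than copying them in first (assumes the
# gphdfs protocol is configured on the cluster).
cur.execute("""
    CREATE EXTERNAL TABLE ext_claims (
        claim_id  bigint,
        member_id bigint,
        amount    numeric
    )
    LOCATION ('gphdfs://namenode.example.com:8020/data/claims/*.txt')
    FORMAT 'TEXT' (DELIMITER '|');
""")
conn.commit()

# The external table can now be queried, or joined with regular Greenplum tables.
cur.execute("SELECT count(*) FROM ext_claims;")
print(cur.fetchone()[0])

cur.close()
conn.close()
```

With HAWQ, by contrast, the SQL engine runs directly on the Hadoop cluster, so this extra external-table indirection is not needed for data that stays in HDFS.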

Moderator:
Great. This is a question for Attunity. This person asks, “What RFID tools or technology are you using to track the movement of equipment in hospitals?”

Kevin Petrie:
Sure, I was speaking on behalf of a client, and they would be better equipped to provide details on that. But I think the key point is that, by aggregating data about the movement and the transactions, if you will, of various pieces of equipment and individuals, they were able to get a great picture of the operational flow of their system over a 30-day period. We can follow up as to which specific implementation of RFID was used in that case.

Moderator:
Okay. This question just came in: “How do you compare the solution with Impala?”

David Menninger:
Sure, I’ll take that, Catherine. This is Dave.

I specifically highlighted TPC-DS because I think it’s a good representation of how our solution differs from Impala. First of all, Impala is Cloudera’s SQL capability built on top of Hadoop. The Hadoop industry has recognized in general that having SQL capabilities on top of Hadoop is a good thing, and all the major Hadoop providers are adding SQL capabilities. We’ve chosen to do it by taking an existing SQL implementation, the Pivotal Greenplum Database, and offering it on top of Hadoop.

What that meant was that we already had a very rich set of SQL capabilities. Most of the other approaches are starting from the bottom up—building parallel SQL processing from the ground up on top of their Hadoop distribution—like the Impala implementation. We recently performed some analysis and learned that, among the different projects, the Pivotal HAWQ implementation is the most complete. That TPC-DS benchmark, as I said, has 111 queries, and we can perform all 111. Impala, as of our last testing, could only do 31 of those queries. Now, I’m sure they’ll be knocking more and more of those off, but it’s a long, slow process to reach that level of query completeness.

We both agree that SQL on Hadoop is important. The differentiation right now is that, for some period of time, we’ll have a significant lead in SQL capabilities and in the performance of our SQL on top of Hadoop.

Moderator:
Great. This question comes in for Kevin. Is your technology managed by triggers or is it log based?

Kevin Petrie:
Great question. We are log based. We don’t require any software to be installed on the source or the target. We have a very low footprint, and we are based on logs.
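As a rough illustration of the log-based approach (as opposed to triggers), here is a hypothetical sketch that tails a transaction-log-like file and turns each appended record into a change event, remembering its position between polls. The JSON log format is only a stand-in for a real database redo or binary log, not how Attunity Replicate reads logs.

```python
import json
import tempfile

def tail_transaction_log(path, last_position=0):
    """Read records appended to the log after last_position (log-based capture)."""
    events = []
    with open(path) as log:
        log.seek(last_position)
        while True:
            line = log.readline()
            if not line:
                break
            events.append(json.loads(line))
        new_position = log.tell()
    return events, new_position

# Simulate a transaction log with two committed changes. A real replicator
# would read the database's own redo/binary log; this JSON file is only an
# illustrative stand-in.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write(json.dumps({"op": "insert", "table": "claims", "row": {"id": 1}}) + "\n")
    f.write(json.dumps({"op": "update", "table": "claims", "row": {"id": 1, "status": "paid"}}) + "\n")
    log_path = f.name

events, position = tail_transaction_log(log_path)
print(events)    # the captured change events
print(position)  # checkpoint so the next poll resumes here, not from the start

# Because only the log is read, no triggers or agent code run inside the
# source database's transactions.
```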

Moderator:
I think this might be a follow up as well, how do you schedule Attunity? Does it run all the time? Is that the same question? I wasn’t sure.

Kevin Petrie:
It does run on a continuous basis.

Moderator:
Okay.

Kevin Petrie:
I mean, it certainly can run continuously if you are using the change data capture technology, which can send continuous, or near-continuous, data streams based on whatever increments work best for the business.

Moderator:
Great. Then one more for Kevin. Does Attunity also support SQL Server and DB2 as sources?

Kevin Petrie:
We do. We support SQL Server and DB2 as both sources and targets. As I mentioned before—actually, I think that number’s over 35 now—we support a wide range of sources and targets. That includes all the major data warehouse and database platforms. On the cloud side, we support AWS and Azure, which we announced this month. We support MongoDB in the NoSQL camp, and then for Hadoop, all the major Hadoop distributions—most notably Hortonworks, and, by extension, Pivotal HD. We exist as a Switzerland between all those major platforms.

Moderator:
Okay, and then we have a follow-up to the previous question, which is: can Attunity run on a scheduler? I don’t know if this was already answered or if this is something new.

Kevin Petrie:
I don’t have a specific answer to that question. We can follow up on that.

Moderator:
Okay, sounds good. I think we are just about out of questions. With that, we’ll bring this webinar to a close. We’ll be sending out links to the recording and a SlideShare doc of this session within the next few days. There is also more information on the Pivotal, Attunity, and WellCare websites if you’re interested. Thank you again for joining us, and have a great day.
