During COVID last year, some health systems that rushed to the cloud found themselves with a suboptimal setup, according to Marc Mangus, principal specialist solution architect for global healthcare with Red Hat. In this episode of healthsystemCIO’s Partner Perspective Series, Anthony Guerra, founder and editor-in-chief, asks Mangus what health system CIOs can do after extending their organizations into the cloud without taking the time to do the proper analysis and planning. Mangus talks about why on-prem is sometimes the right choice, how to right-size a cloud footprint, and how to remove the internal cultural barriers that can stand in the way of optimal solutions. He advises ditching the view that IT should operate in silos and replacing it with a focus on creating value streams across the enterprise.
The chickens are coming home to roost now after the mad rush to the cloud, and they’re looking back and saying, “How should we have done this with harmonization and analysis to move only the right things to the cloud?”
The move to containers is as big a deal as the move to virtual machines was 15 or 20 years ago.
To put it simply, you as a company need to take responsibility for your own infrastructure. Your first thought shouldn’t be, “What is my EHR vendor doing in this area?”
Guerra: Marc, thanks for joining me today.
Mangus: It’s my pleasure, Anthony.
Guerra: Why don’t we start off with you telling me a little bit about your organization and your role.
Mangus: Great, so many people are familiar with Red Hat, through our Linux product, Red Hat Enterprise Linux. But we’re much more than that. We pioneered the commercial open-source industry 30 years ago, and our model is a little bit different than most high-tech companies in that we do all of our development in what we call the upstream, or the community environment, and all of our software developers are fully engaged in the upstream. What we do is provide 24 by 7 by 365 global support contracts for our customers on the open-source products that are produced. You have the best of both worlds. You get all of the innovation that happens out in the open-source world, but you get it fully supported in an enterprise environment. My role at the company is part of our global health team where we engage specifically with healthcare companies to try and figure out the best ways to not only deploy our products, but the best ways to help them realize their visions of where healthcare is going over the next three to five years.
Guerra: We’re going to talk a lot today about the cloud and what healthcare organizations are doing. The scenario is that of a rush to the cloud, needing to deal with COVID, needing to stand up different applications, perhaps not having the capacity on prem to do that, maybe jumping into cloud environments quicker than they might have planned and perhaps some suboptimal setups from that. Is that what you saw on your end, and can you just go into that a little more? Maybe lay out some of the scenarios that you think are out there right now, and why they’re suboptimal.
Mangus: Sure. What we saw with COVID early last year was a complete oversubscription of whatever resources were available for seeing patients. The hospitals were full. The primary care offices were full. Nobody could deal with the rush. So, they did what they had to do: they took telemedicine, which had been kind of limping along in some nascent state for a few years, and got it fully enterprise-grade and running so they could start handling the rush of all these patients. Unfortunately, that meant they didn’t necessarily harmonize the effort with their other IT projects or initiatives, especially the move to the cloud. What happened was, they got a lot of resources up and running in the cloud, but they didn’t do the up-front analysis and due diligence to properly set expectations, particularly budget-wise. In other words, the cloud ended up not saving them money; it ended up costing them money. So now there’s this retrospective view just starting among our customers, looking back at what happened and asking, “What should our strategy be, and how do we get there? How do we move all this stuff we had to stand up, because we didn’t have a choice, into a more optimal infrastructure that is balanced between public cloud and private data centers? Because not everything can go into the cloud, especially in healthcare, for various reasons; compliance, not least, determines where different workloads can run and where the data lives.” There’s a lot of that kind of analysis that needs to be done, and they’re diving into it now because they have to.
Guerra: When you talk about up-front analysis, you’re talking about a number of things: cost being one of the major ones, and also the placement of different workloads. And you think all of that may have been done suboptimally in different scenarios, so we don’t understand the cost structure we’re in for, perhaps we didn’t plan on it, and what do we do now? And perhaps we’ve put workloads in places they shouldn’t be, and how do we get it all harmonized and right-sized. Is that what you’re saying?
Mangus: Right. Let me give you an example. One example is treating the cloud like a new server. And what I mean by that is I take something that is running maybe on a virtual machine inside my data center, and I just move it to the cloud with no analysis. I see that happening every day, companies doing that. And I don’t know what they’re expecting, quite honestly, but that isn’t going to save anybody any money. You may need to do it for practical reasons or as a way to get a foothold in the cloud for certain workloads, but typically you want to do an analysis about what’s running on that server, why it’s running in that particular configuration, and is it something that can be broken apart? Can we take it and move it into a container environment, which is much less resource-intensive, and much easier, frankly, to move around, between cloud and on prem, and that’s just one example.
The opposite of this is analysis paralysis, where they spend years analyzing the thousands of applications that may be in their portfolio and nothing ever gets done. There’s a balance between those two extremes, where you move some workloads to the cloud right away, the ones you can containerize immediately (the low-hanging fruit), then move the rest over time, and some you might not ever move.
An example of those you might never move: applications certified by the FDA or some other regulating body. It frankly takes a lot of time to get applications certified, and life sciences and pharma have to deal with that a lot, so moving those applications to the cloud is a huge effort; those might be the last ones. The chickens are coming home to roost now after the mad rush to the cloud, and they’re looking back and saying, “How should we have done this, with harmonization and analysis, to move only the right things to the cloud?”
Guerra: What do you think causes that moment of realization? Is it that somebody gets a bill, or maybe there’s a breach?
Mangus: It could be as simple as a bill you weren’t expecting and unfortunately our wonderful cloud partners are famous for sending surprise bills. It’s not all their fault. When you go to the cloud you need to think about, “What is my management platform?” and I don’t just mean IT infrastructure management, I mean financial management. I need to be proactive about what I’m spending in the cloud, especially going from a completely different mindset of paying for a subscription versus paying for utilization as it’s done — consumption-based licensing — that’s a huge mindset change. And a lot of companies don’t prepare for that and so they end up with this huge bill that may be out of cycle with their budget cycle, which is the worst-case scenario. Suddenly you get tens of thousands of dollars of charges that you hadn’t budgeted for. And I’ve seen that happen, not just in healthcare. It’s not unique to healthcare, it’s anybody. You have to look at all of that and the centralized management possibilities are out there. Red Hat has them and other companies have them. But the point is, you need to think about how you’re spreading your workloads out all over the place and you need to make sure that you are managing those workloads in a proactive and in a centralized way so that you know where you’re spending your money and you know where your CPU cycles are happening. And quite frankly, so you know where your data is. That’s the really crucial piece. If you don’t know where your data is or where the data flows are happening inside your workflows, you’re opening yourself up; you’re ripe for a breach.
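The proactive financial management Mangus describes can be sketched in a few lines. This is a hypothetical illustration, not any real cloud billing API: it totals consumption records against a monthly budget and flags the overrun before the surprise bill arrives.

```python
# Hypothetical consumption-tracking sketch; the record format, budget,
# and 80% alert threshold are illustrative, not tied to any real cloud.
def flag_overruns(daily_charges, monthly_budget, alert_ratio=0.8):
    """Return spend-to-date and whether it crossed the alert threshold."""
    spend = sum(daily_charges)
    return spend, spend >= monthly_budget * alert_ratio

# A team budgeting $10,000/month sees the alert fire mid-cycle,
# instead of discovering the overrun when the invoice lands:
spend, alert = flag_overruns([900, 1100, 2400, 3800], 10_000)
# spend is 8200; alert is True, because 80% of budget is crossed
```

The point is not the arithmetic but the cadence: checking consumption daily, inside your own management tooling, rather than reconciling once a quarter against a bill that is already out of cycle with your budget.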
Guerra: Do you think for these larger health systems that they could be so disorganized that they get to the point where they don’t remember where to put all their data because they’re being so experimental and trying everything out. They might have hundreds of applications and 40 or 50 hospitals, and they lose track of what’s where. Is that what you’re talking about?
Mangus: It can happen. I mean, I’ve seen companies try to keep track of their workloads on spreadsheets, and I’d want to walk into any environment like that, turn right around, and walk back out again. Why would you do something like that? It happens. People do what they need to do in the moment, without thinking about the future. It’s kind of the nature of the beast.
Guerra: Most health systems have to deal with that shadow, gray IT, whatever you call it, things not being handled through that central flow. I could see things really getting crazy if you have that, where you have departments putting things into the cloud, here, there, and the CIOs not really being kept informed; then they have to go find it, that type of thing.
Mangus: Absolutely. Look, the promise of the cloud is to give your organization the agility to bring resources online when and where they’re needed. But you can only realize that promise if you have enough discipline in your organization to properly plan and execute not only the migration to the cloud, but the right-sizing of your workloads; in other words, putting them where they need to be, and that’s not always the cloud, sometimes it’s on prem, right? So, the disciplines of DevOps and site reliability engineering (SRE) aren’t just pretty words or fun concepts. Implementing those types of disciplines is crucial to properly taking advantage of the possibilities you have within this new infrastructure.
Guerra: Do you get the sense that “on prem” is almost a dirty word? And so the rush to the cloud is just headlong. I’ve heard CIOs say for a few years now, “I don’t want to be in the data center business. I want everything in the cloud.” Is it going too far and perhaps not well thought out, meaning on prem is OK and in fact better in some scenarios?
Mangus: It is. It shouldn’t be a dirty word. You should have the power and the flexibility, because of your choices in process and tools, to decide almost on a workload-by-workload basis where the best place to run is. In my opinion, you’re probably always going to have some things running on prem, whether it’s management nodes or certified applications, or, you know, some of the larger organizations have millions of dollars invested in legacy architecture, like mainframes, that they can’t move to the cloud without significant work to modernize certain pieces. They could do it if they really put their mind to it, but they can’t just lift and shift it into the cloud. That’s not going to happen. It’s got to be part of the planning. And then as you grow, and you look at global organizations, it becomes even more pronounced, because they have not only the challenge of planning centrally and executing locally, but they also have to think, “I have to be multi-cloud, by the nature of my organization, because one cloud can’t handle all of my needs globally.” You throw that into the mix, and it’s a very challenging environment. To me it all starts with discipline. You don’t start planning for how you’re going to do it after you do it, right?
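The workload-by-workload decision Mangus describes can be thought of as a rule table. The sketch below is purely illustrative; the attribute names and rules are hypothetical, not any Red Hat tool or real policy engine, but they capture the kind of criteria he mentions (certification, legacy architecture, where the data must live):

```python
# Toy placement rules; attributes and rules are invented for illustration.
def place_workload(workload):
    """Decide cloud vs. on-prem for one workload, per simple rules:
    certified or legacy-bound workloads stay on prem, as does anything
    with restricted data residency; everything else is a cloud candidate."""
    if workload.get("fda_certified") or workload.get("legacy_mainframe"):
        return "on-prem"
    if workload.get("data_residency") == "restricted":
        return "on-prem"
    return "cloud"

print(place_workload({"name": "telehealth-portal"}))                        # cloud
print(place_workload({"name": "infusion-pump-sw", "fda_certified": True}))  # on-prem
```

In practice the rule set would be far richer (cost, latency, integration dependencies), but the discipline is the same: make the criteria explicit and apply them per workload, rather than deciding after the migration.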
Guerra: Ideally, you wouldn’t do that. Is the technology getting away from the average CIO? Are they able to correctly place workloads? Do you think they’re up to that? Or do they need to go outside to get some help?
Mangus: You’ve bumped up against one of the biggest problems — not just in healthcare — but it’s very common in healthcare. Data and processing are in siloes. And they’re functionally oriented siloes. So, you might have one silo for clinical operations, one silo for pharmacy operations. (I’m just making this up.) Another silo for archived data. They set these siloes up to kind of carve up the pie for all the things they have to do. The problem is, these siloes become institutionalized and they become self-serving, and in most organizations, they don’t really like to talk to each other. They’re very insular and internally focused.
The result of that is, the CIO is sitting over all of this and saying, “I don’t know what’s going on.” The typical response is to initiate a project to rationalize their application portfolio. I hear that a lot. That means analysis. Maybe for thousands of applications you go in and analyze what is run time, what is the data it needs to connect to, how is it secured, what teams are running it, etc., etc. And that’s something, in my view, if you have to do it, you only want to do it once. You’d better be incorporating that effort – which is substantial – into your overall plan for where you’re going in three to five years. And to what the outcome needs to look like, which is, to me, “I want to be able to decide on a workload-by-workload basis where I am going to put this. And I need to have all the infrastructure and processes behind it to make that happen.”
Guerra: You mentioned you only want to do it once. How do you keep that inventory known?
Mangus: Good question. What I meant by that is, the manual toil of doing that analysis, on spreadsheets or however you do it, would be done once, and from there on out it would live inside some kind of management tool that keeps track of it for you. In truth, what you want is a single pane of glass, a term you hear a lot: one place where you can go in and say, “What’s running in my infrastructure?” Then I can drill into different zones or use different criteria to sort through those workloads and get them in a format I want to see them in. No more spreadsheets.
Guerra: Let’s go back to what you said some people find attractive about the cloud. For one, its agility: there’s unlimited potential for computing power, so I can buy whatever I need and never top out. The other thing is you only pay for what you use. I think your point from before is, yes, that sounds great, but you might end up using a lot and paying more than you thought you would. Talk about that a little more.
Mangus: Let me answer that in terms of the ideal situation. The cloud starts to pervade the organization, and departments that have requests for resources start to understand the possibilities of what they could be doing with all this agility and power, so they’re going to start flooding your IT department with requests for standing up new networks and new infrastructure, and that’s great. But in a siloed environment, that kind of thing can be a nightmare, because you don’t necessarily have the cooperation between the different departments within IT to respond to those requests in a way that doesn’t alienate the business, and typically it’s the business that’s asking. What you want to do, in my view, is first address the cultural issues that are barriers to the optimal solutions.
Instead of looking at everything in siloes, start to look at them as a value stream. What part of the business is requesting it; what are they trying to do and what’s the likelihood of the overall value of the implementation of their request? Then align the resources behind that value spend, across the departments. So, you might have some data people, you might have some data scientists working on AI, you might have some security people. You create a cross-functional team that’s aligned behind that request, not only to do the proper due diligence to determine what the value is, up front – which is a really good idea to set your expectations – but you’re also kind of prepopulating the team that’s going to execute that request across all the different functional areas that are going to be involved.
I’ve seen some of the more integrative and progressive companies starting to do it this way, and not only does it speed things up in terms of execution and time-to-value, it really smooths out the ongoing communication. This is all happening in an environment where you’ve already implemented something like DevOps, and on the ops side you’ve got site reliability engineering going on as a discipline. Then you’ve got this value stream thing happening on top of that. Now you’ve really got a way to transform your organization from the old siloed, I-don’t-know-where-anything-is structure to this new value-based structure, which is really what you’re trying to get to.
Everybody talks about digital transformation, and sometimes it’s just noise. How do you actually achieve it? To me, you do it by changing the way you realize value, and then you pull everything through from that. The central question is: how do I derive value from my investments, whether in buying new things or in making changes to my organization? How do I invest in something if I don’t know what the value is going to be? So you go back to the value. It’s the value to the business, and that should set the stage for everything else.
Guerra: Right. And I think one of your points is that you can’t let this turn into a free-for-all rush to the cloud, without governance.
Mangus: Rather than putting your hand up and saying, “No,” a better approach is to say, “Ok, I’m going to spin up this team for you, and we’re going to figure out what the value to the business is with you, and then we’re going to determine the right team size and structure and the resources that are required. We’re going to align behind you in IT if it meets the smell test for value.”
Guerra: You mentioned containerization. You want to address this topic a bit more?
Mangus: Sure. For those folks who don’t know what a container is, think of it as an evolution from virtualizing workloads. A few years ago, virtual machines were where it was at, because by virtualizing you disconnected the direct hardware requirement from your application workload. But a virtual machine carries a copy of a full-blown operating system along with your application. It’s portable, to some degree, but it’s resource-intensive, because every copy has to have the operating system. Containers break that mold and shrink the footprint by an order of magnitude, because there’s only a sliver of an operating system inside a container, just enough functionality to get basic things done. Because there’s so much less overhead, your application gets most of the resources inside the container. The challenge in going from virtual machine to container is this: think about an application written maybe 15 or 20 years ago that follows a monolithic architecture, where everything the application needs to do is in one executable piece of code. That’s how we used to write applications all the time. Then things evolved, and now we talk about microservices, where everything is broken up into services that talk to each other. Each piece of functionality in the code is separated, and it talks to every other piece over a service connection. That’s an ideal architecture for a container, because containers are used to running in clusters, in an environment together, and there are standardized ways for them to talk to each other.
You can spread out the different microservices into the containers and put them where they run best. You can start to profile how many resources a particular microservice takes, and you can put it into the right container so that it’s grouped with other pieces of like code for microservices that take similar amounts of resources. Of course, you have a management program that helps you profile all of this and run all of this together. The net result is you have a much more flexible and agile infrastructure. The containers are very portable because the run time on them is minimal in terms of complexity. It lets you decide which ones would run best on prem. You can be arbitrary about it from a technical point of view. From a business point of view, you can say, “I want a cost-optimized environment to run this workload.” You’ve got the ultimate flexibility because clouds are all based on Linux, and they’re all based on containers now. All of the services that the cloud is offering you are containerized, as well.
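The grouping Mangus describes, profiling each microservice’s resource needs and co-locating services with similar footprints, is essentially a bin-packing problem. Here is a minimal first-fit sketch; the service names and CPU figures are made up for illustration, and real schedulers (Kubernetes being the obvious example) use far more sophisticated logic:

```python
# Illustrative first-fit bin packing: group profiled microservices onto
# hosts of fixed capacity. Names and CPU numbers are hypothetical.
def first_fit(services, capacity):
    """Place each (name, cpu) pair into the first group with room,
    opening a new group when nothing fits."""
    groups = []
    for name, cpu in services:
        for g in groups:
            if g["used"] + cpu <= capacity:
                g["members"].append(name)
                g["used"] += cpu
                break
        else:  # no existing group had room
            groups.append({"members": [name], "used": cpu})
    return groups

profiled = [("auth", 2), ("billing", 3), ("search", 4), ("audit", 1)]
groups = first_fit(profiled, capacity=5)
# two full groups: auth+billing (5 CPU) and search+audit (5 CPU)
```

The profiling data is the hard-won part; once you have it, the placement itself is a well-understood optimization that a management platform can automate.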
Guerra: So, you definitely recommend that our readers look into this and understand this?
Mangus: Yes. The move to containers is as big a deal as the move to virtual machines was 15 or 20 years ago. It’s not the end; we’ll have something after containers. A lot of architectures out on the bleeding edge may be the next thing, but right now containers are where it’s at.
Guerra: In healthcare, there are maybe three very prominent EHR vendors and, for a health system, that’s their core. Everything else sort of hangs off of that. Health systems often complain that they are limited to whatever their EHR will let them do. Do you want to talk a little bit about how open source frees health systems up a bit to do the things they want to do?
Mangus: To put it simply, you as a company need to take responsibility for your own infrastructure. Your first thought shouldn’t be, “What is my EHR vendor doing in this area?” You need to be responsible for your own data, and open source delivers on that promise. You can set it up in a lab environment to test it out yourself, and there are some amazing tools out there. Take data integration as an example. Data integration has become central because interoperability is such a big deal now, with these CMS mandates for organizations to provide data to patients and data to providers, for example. They’re going to start being penalized if they don’t. Unfortunately, a lot of organizations move at the pace of regulation; they’re not innovating anything. They’re doing things because they have to. So now that they have to, a lot of them are going to the EHR vendor and saying, “Help; what do I do?” And the vendor is going to provide a very expensive solution, one that is completely beholden to its product platform and architecture.
But if you’re going to make the effort anyway, wouldn’t it be better to explore other options that may give you more flexibility and more power into the future, where it’s just a matter of connecting to their database? Maybe you want a repository to create FHIR data on request. This is something we do for customers every day. The old mindset creates a governance problem: they think they have to copy an entire repository, put it in a new format, and then keep the two formats in sync. That creates security and governance headaches and much more resource utilization. That’s the old way of thinking.
Wouldn’t it be better to have an event-driven architecture, based on microservices, running in containers, that would take that incoming request for information and say, “Oh, I know where that is. It’s already been defined.” So I go in and get that data, I translate that data into FHIR on the fly, and I deliver it to whoever asked for it with a complete audit trail, all secured. The data gets to the requester in the format they wanted, but there’s no persistence. The FHIR data, in computer terms, is ephemeral. It doesn’t get saved to a repository, so I don’t have a governance problem. That’s a much better approach, just to take one example of a more open architecture – a more open way of thinking about that kind of problem.
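The on-the-fly translation Mangus describes might look something like this sketch: an internal record is mapped to a FHIR-shaped Patient resource at request time, with nothing persisted. The internal field names here are invented, and the output is a heavily abridged subset of the real FHIR Patient resource, so treat this as a shape of the idea rather than a working interface:

```python
# Hypothetical on-request translation to a simplified FHIR Patient.
# Internal field names are invented; real FHIR Patient has many more
# elements (identifiers, gender, telecom, etc.) than shown here.
def to_fhir_patient(internal):
    """Build a (simplified) FHIR Patient resource from an internal
    record. The result is ephemeral: returned to the requester and
    never written to a second repository, so there is no sync problem."""
    return {
        "resourceType": "Patient",
        "id": internal["patient_id"],
        "name": [{"family": internal["last_name"],
                  "given": [internal["first_name"]]}],
        "birthDate": internal["dob"],
    }

record = {"patient_id": "p-001", "last_name": "Doe",
          "first_name": "Jane", "dob": "1980-04-12"}
resource = to_fhir_patient(record)  # built per request, not stored
```

Because the FHIR view is computed per request, there is one system of record and one audit point, which is exactly the governance advantage Mangus is pointing at.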
Guerra: It almost sounds like you have a message for health system CIOs that says, “Stand up for yourselves. Get your freedom. Break free!” It sounds like you look at some things in healthcare and you just shake your head. Is that correct?
Mangus: Almost every day.
Guerra: What’s your final inspirational message to our CIO listeners?
Mangus: I think the message is, you have a huge opportunity today — with the technology and the way the world is — to leap ahead with innovation. Even though I mentioned technology, it’s really not about the technology. It’s about changing the culture of your department and aligning yourself with the value players in your company in this new way that I described, where you create a culture of innovation that allows you to build a platform of innovation. And the platform necessarily aligns with the value culture.
What that will do is completely change and realign all of your infrastructure and your architecture in a new way. Stop worrying about your EHR vendor. They have to listen to you. You’re the one writing the checks. The EHR vendors are going to go kicking and screaming into the future and whether they like it or not, they’re going to have to adopt this microservices architecture and they’re going to have to innovate to keep up with you, if you take this path. It’s really up to you and it’s a really exciting time. Not only because of the technology, but I think a lot of healthcare organizations are waking up to the fact that they can get a huge amount of value by realigning this way and having IT come to the table with the lines of business and talk about value.
Guerra: Well, I’m inspired. You sold me. Thank you, Marc. That was a tremendous discussion and I really appreciate your time today. I think people are going to enjoy this, so thank you.
Mangus: Thank you, Anthony. It was my pleasure.