Published February 2023
It’s impossible to balance cyber risk with medical necessity without spending time “in the foxhole” with the clinicians to learn how and why they use the technology, according to Jack Kufahl, chief information security officer (CISO) at Michigan Medicine, the medical center affiliated with the University of Michigan. In this interview with Anthony Guerra, healthsystemCIO founder and editor-in-chief, Kufahl talks about the complexities of managing risk within an academic research institution and the big question of how to be open enough for research yet secure enough to prevent breaches. When doctors ask to install a new app, if you merely evaluate it in a yes/no way, that won’t be enough to determine what to do, Kufahl says. “You have to peel it back a little bit, make sure you understand it in context.” Sometimes risk is tolerable, but ultimately, “all risk tolerance is temporary,” he says.
Bold Statements
“ … an academic medical center, we’re a little bit more like an airport, our job is data movement, we have to get the data that we create, that we ingest, that we curate, and get it to where it can do the most good.”
“ … we’re tilling up that soil, and we’re finding more every day. So it is very often that if people aren’t coming to security proactively, we’re flipping over enough rocks that we tend to run across it somewhere in the process.”
“And the nature of information security is we’re going to be risk averse. It’s unlikely we’re going to underdo it. If we don’t have context, we’re most likely, I would say like 95%, to way overdo it.”
Guerra: Jack, thanks for joining me.
Kufahl: Thanks so much for having me.
Guerra: All right, you want to start off by telling me a little bit about your organization and your role?
Kufahl: Oh, sure. Michigan Medicine is about a 1,000-bed hospital. But what’s interesting about it, in my opinion, is that it’s an academic medical center. So it has a substantial nation-leading research facility and learning program attached to it. I’m the CISO of all three of those missions. And that means I have a very interesting job, because I have to keep the flexibility of the research environment, but the sustainability and the survivability of the hospitals and health centers. So it’s a constant balancing act.
Guerra: A lot of government regulations out there require you not to have breaches, and a lot of bad things happen if you do. And of course, there are other regulations that require you to be open and share data, and send it here, there and everywhere, which is similar to what you’re saying about having a research function. Talk a little bit more about dealing with the balance between openness and security.
Kufahl: What’s interesting for our facility, and I think our institution, and what’s probably true for a lot of academic medical centers throughout the nation and Canada, is our doctors are also our faculty. So at 8 a.m., they may be doing a procedure; at 8 p.m., they could be in their wet lab, doing discovery. So it is interesting, because we all have to adopt different roles and personas on behalf of the institution to figure out how to protect it, but also not just protect what its cybersecurity posture is or risk posture, but what it is more existentially — a free and open academic environment.
Another metaphor that I use when describing academic medicine is, whereas a lot of hospitals or maybe a lot of critical infrastructures, or even smaller companies, they look at their data and in order to protect it, they take more of a castle, drawbridge and moat, higher walls, more bars, hide the data, don’t let anybody in approach. Whereas an academic medical center, we’re a little bit more like an airport, our job is data movement, we have to get the data that we create, that we ingest, that we curate, and get it to where it can do the most good. A lot of times that’s within the University of Michigan, a lot of times it’s outside the physical borders of University of Michigan. As a CISO at an academic medical center, you become very attuned very quickly to data in motion and ethical and appropriate use of data, because that factors greatly into what is and is not reasonable when it comes to a security control decision.
Guerra: Yes, it’s really interesting. When we talk about it, every organization is going to have a different risk level that it’s willing to accept, right? I mean, you’re going to decide what level of risk are you comfortable with. It’s the CISO’s job, from what I understand, to express the level of risk the organization currently has to the decisionmakers. That’s your job to explain that, but not to make the decision as to what that acceptance level will be, is that correct?
Kufahl: Very often, but there are a couple of questions I use, because I would imagine, like most organizations, we have large degrees of decentralization; decentralization of authority, of decision-making. There certainly is a hierarchy. There certainly is a structure. But it’s so broad, such a diverse organization that you have to be able to find where decisions are being made and hold a mirror up to the institution and say, “Here’s how I see it from my point of view. Does that work with what you perceive as where we need to be?”
So usually, I approach a leader who is compelled to make a risk decision with three questions. First: are we informed about what this risk is? Do we have a good understanding of where it is in the sector? What is the definition and scope and concept of that risk, and how does it apply to us? Because every risk that’s in the sector isn’t necessarily a risk that we have. We have to understand a risk and apply it. The second question is: is what we’re trying to modify reasonable? Is it reasonable for an academic medical center? Is it reasonable for a hospital, or for a dermatology clinic or a classroom or whatever the context is? And that’s where that piece of discussion happens. And then the last one is, because we live in a world of exceptions and we simply can’t plan for every permutation of outcome, how are we going to handle exceptions?
So those three questions really help me get to the root of ‘is this risk acceptable?’ Because that question is really difficult; there’s no true motivation for a leader not to push risk as far away from themselves as possible. But you need to have an informed discussion. So just asking, “Is this okay?” in a yes-or-no, binary way doesn’t actually tell you a lot about your risk appetite. You have to peel it back a little bit, make sure you understand it in context. The change could be technical, could be behavioral. And then what do we do with those exceptions, those things we didn’t account for? That has helped us move decisions along a little bit more expeditiously than prior tactics.
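One way to picture that three-question framing is as a simple intake record that has to be filled in before anyone is asked “is this okay?” The sketch below is illustrative only; the field names and the readiness check are assumptions, not Michigan Medicine’s actual intake form.

```python
from dataclasses import dataclass, field

@dataclass
class RiskDecisionIntake:
    """Hypothetical intake record mirroring the three questions above."""
    risk_name: str
    # 1. Are we informed? What is the risk, and does it actually apply to us?
    description: str
    applies_to_us: bool
    # 2. Is what we're trying to modify reasonable for this context
    #    (hospital, dermatology clinic, classroom, wet lab, ...)?
    context: str
    proposed_change: str
    reasonable_for_context: bool
    # 3. How will we handle the exceptions we didn't plan for?
    exception_process: str
    open_questions: list[str] = field(default_factory=list)

    def ready_for_decision(self) -> bool:
        # A yes/no answer is only meaningful once all three questions are answered.
        return bool(self.description and self.proposed_change
                    and self.exception_process and not self.open_questions)
```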
Guerra: I mean, let’s take a scenario or two, and let’s play with a typical one, and you tell me: It might be a department leader, a clinician, someone saying, “We want to bring in a new application. I learned about this new application, which is so cool. And it does all this awesome stuff. Eventually, Jack’s going to get involved, because we want to bring in a new application, and he needs to take a look at it.” Is that how it might work? And then take me from there through the scenario of how that plays out. And ultimately, that conversation that I think may include those three questions that you mentioned, and then a conversation with them about your findings — and the possible risk implications of actually bringing this app on.
Kufahl: Sure. Very common scenario from all three missions that we work with: there’s a new capability that a group or an individual would like to have activated or bring into the fold or even just investigate. And how do we get that done? How do we get the thing done in an increasingly restrictive environment that is increasingly at risk? You know, 10 years ago this was no big deal. Bring in an application, find a server, or you know, something under your desk, try it out, get it moving. And if it works, and if it solves the problem and it didn’t cause any ruffles, no problem. Now, the overburden of all the security, of all the process, is exposing a lot more.
So we’re tilling up that soil, and we’re finding more every day. So it is very often that if people aren’t coming to security proactively, we’re flipping over enough rocks that we tend to run across it somewhere in the process. So the first thing is to assume that we’re not catching it early, right, at the point of ideation: “Hey, I’ve been thinking about doing X, Y and Z, a couple different applications. How do we make that secure?” Instead, we’re almost always interacting, in this case with the clinician, with some sense of urgency. They’re trying to get something up and running. They may already feel they’re behind, or there’s a burning platform or burning reason to get this application up and running. And, “Oh, good. Here comes security.”
So the first question is not really, “What’s the security of this application?” but, “What are you trying to accomplish?” Understand what is hoped to be accomplished, regardless of the application, and have a shared outlook and a shared empathy with that clinician trying to be successful. Because it is not the easiest thing for an individual to be successful in an academic medical center. There are all sorts of things that present themselves and that are perceived as barriers. So just jumping in that foxhole with them is a good way to go. “What are you trying to accomplish?” Because I don’t know. I probably don’t know you and I probably don’t know your science. I probably don’t know the care you’re trying to deliver, don’t know this application. Don’t know any of the parameters. Start there. Have the conversation. After that, a lot of assumptions are dispelled on both sides of the equation.
Back to the data. What’s the type of data that this application is going to be interacting with? That’s the key question. Is it high-risk data or low-risk data, regulated data versus de-identified data? A lot of times, there’s an assumption that they have to use fully loaded PHI. But a lot of times, you can reduce that risk. You can work with data stewards, you can de-identify, you can work through those data-type questions. When you start getting into the application, there are processes to vet the application, and to make sure that the paperwork for the application (the contract, the procurement) has the right terms and conditions.
We do third party risk management assessments at a couple of different scales. If it’s a small job, we do a small investigation, if it’s a big high-risk thing, we do a more rigorous investigation. So again, that conversation is about right sizing our risk tools, our risk investigation, to the size of the risk. So, a quantitative risk analysis, as opposed to a one-size-fits-all risk analysis. A lot of times in sharing that same foxhole, you end up also carrying water or helping to get the IT thing done as well. You try to have a separation of duty between security and IT but the truth is that it’s very overlapped. And just the nature of our work means we know a lot of people in IT, we know how to get the thing done.
So a lot of times we help get the thing done in our processes, because IT itself can be very complex, with many different priorities and many different teams. And it’s sometimes challenging for that smaller voice to get the thing done. After it’s up and running, we go into keeping-an-eye-on-it mode. And we have some third-party risk management systems that help with that. But generally speaking, once we get through the risk analysis and we understand what we’re dealing with, we can slot it into one of about four categories of risk and treat it accordingly. The most difficult thing is when we get that wrong and apply the wrong risk rigor to the wrong risk type: we either underdo it or, typically, we really overdo it. Then you’re frustrating the physician; you’re frustrating, probably, the third party; you’re wasting IT and information security time; you’re damaging your brand as a trusted service provider within the organization. So really getting that risk analysis as close to right as early as possible seems to be the real trick to engagement and getting a new application up and running.
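The “right-sizing” Kufahl describes can be pictured as a lookup from an assessed risk category to a proportionate level of review. The four tier names and the review steps below are illustrative assumptions, not the institution’s actual categories.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = 1        # e.g., de-identified data, no clinical workflow dependency
    MODERATE = 2   # limited regulated data, established vendor
    HIGH = 3       # PHI at scale or integration with clinical systems
    CRITICAL = 4   # direct impact on availability of care

# Hypothetical mapping of tier to depth of third-party review.
REVIEW_PLAYBOOK = {
    RiskTier.LOW:      ["short questionnaire", "contract terms check"],
    RiskTier.MODERATE: ["standard questionnaire", "contract terms check",
                        "data-flow review"],
    RiskTier.HIGH:     ["full questionnaire", "architecture review",
                        "data-flow review", "penetration-test evidence"],
    RiskTier.CRITICAL: ["full questionnaire", "architecture review",
                        "vendor security interview",
                        "compensating-control plan", "executive sign-off"],
}

def plan_review(tier: RiskTier) -> list[str]:
    """Return review steps proportionate to the assessed tier, so a small
    job gets a small investigation and a high-risk one gets more rigor."""
    return REVIEW_PLAYBOOK[tier]
```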
Guerra: So the risk analysis is going to determine the vetting level. And you said getting that wrong is a big mistake: you could way overdo it and slow the whole thing down unnecessarily. So when it’s in your lap, and you’re doing that risk analysis, what are some of the reasons that it can go wrong and come out with the wrong result?
Kufahl: Sure. The first is, and it’s a little pithy, but security professionals generally come in two flavors. One is more of the auditor mentality. So in our sector, there are a lot of compliance auditors or a lot of security auditors. So find a framework, find a checklist, run the checklist, now you’ve got an audit. The other half are engineers. And engineers, they’re MacGyvers. They can make anything work. What isn’t necessarily in there is more of a relationship-management role, or somebody who can talk to humans, or can understand the persona of the physician, the researcher, the nurse, the dietitian, whatever the role may be. And right there, you lose a lot. So the best way to be effective with your security team is, at every possible opportunity, to get them to understand more about those personas and more about those missions, not through a ticketing interface, but by sitting down and talking with these folks at every possible chance. There’s some orientation-type work. But that’s one of those key areas where if the security practitioner at the front line doing that risk analysis doesn’t know the context of a basic science researcher versus a clinical researcher (for example) and some of the dynamics that are involved there, you’ve already got one hand tied behind your back.
The next one that is really a marker is the quality of the third party. So tell me a little bit about this application and where it’s coming from. Is it the proverbial two people in a garage coding? A massive mega-corp? If it’s a massive mega-corp, is this their bread and butter, or is it an acquisition that they’ve done? So the nature of that application tells us a little bit about the confidence we have in that third party and how willing they are to engage with us. And we see a full spectrum of vendors that want to partner with us from a security point of view and understand that it’s a significant part of their role, whether they’re contractually obliged or reputationally obliged. And we also see the complete opposite, which is, once you’ve given them the licensing fee, good luck, buddy. So those two factors really get at the nature of it.
What’s the context of the work? In our case, we have at least those three missions, but many, many nuances among them. And even in just a conventional hospital, how a cardiac surgeon or a cardiac surgical department acts is going to be categorically different from the personas you fall into with, say, dermatology, a dermatology outpatient clinic and clinician. So even within those missions, there are phenotypes of groups you’re working with, and if you don’t understand even 5 or 6% of that context, you’re probably heading into a wasteful situation. And the nature of information security is we’re going to be risk averse. It’s unlikely we’re going to underdo it. If we don’t have context, we’re most likely, I would say like 95%, to way overdo it.
Guerra: You talk about turning over rocks, right? So ideally, you’re not turning over rocks, it’s coming to you. But we know that’s not always the case. You want to do everything you can to make sure you’re notified in a more formal fashion; that there’s a process. So optimally, you’re taking that relationship angle. You want to have the right folks on the ground, doing those assessments so you get those right. You don’t want everyone being assigned the highest level of risk and review. So if you’re seeing a lot of assessments get that highest level, does that set off alarms in your head?
Kufahl: That’s the page we’re trying to turn. We want to move into that proactive quantitative analysis philosophy, but also, because we work in a lifecycle, once we assess something — and all things being equal, let’s say everything’s healthy — we’re going to assess it again after a period of time for the higher-risk items. Hopefully, at that point, we’re learning.
So a great example: almost all healthcare institutions of any size that have the resources use a pretty robust cybersecurity risk assessment standard from NIST, called 800-53. It just went to revision five not too long ago. And that puppy is pretty close to 400 questions — right around that marker. And all those questions are important to an auditor. And all those questions are important in the detail and nuance. There isn’t necessarily time for every risk to be asked 400 questions, most of which are highly technical, that nobody really knows the answer to. It’s not just asking the question, it’s finding the person that might have the answer to the question. And a lot of times, you’re asking questions of engineers, software engineers, storage engineers, technical engineers, quite literally some of the smartest people you’ve ever run across. Which is what I tell my folks a lot during onboarding: most rooms are going to be full of geniuses.
The University of Michigan attracts and grows incredibly smart researchers. That’s why they’re here. So your job is to go in and tell them they’re clicking on their email wrong. Getting to those answers is a really protracted involvement. So we have some incredibly talented people. One person in particular I’d give a shout-out to is Jessica Kelly, on our cybersecurity risk management team, who was able to digest those questions down to the most meaningful 30 or so and cross-map them in a legitimate way to the 400 NIST questions. So we’re asking the basic questions, and then digging deeper as we need to. We didn’t always do that. So that’s an example of an optimization that I think a lot of companies and institutions are shifting to. Because the job isn’t just to secure those high-risk items or to retune and switch it; the job is also to adjust your means of analyzing.
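That cross-mapping approach can be pictured as a small index from condensed screening questions back to the fuller control set, so a “dig deeper” path always exists. The questions and the specific NIST SP 800-53 control IDs below are illustrative examples, not the actual 30-question instrument.

```python
# Hypothetical screening questions, each mapped to the NIST SP 800-53
# controls it stands in for.
SCREENING_MAP = {
    "Q01: How is access to the application granted and revoked?":
        ["AC-2", "AC-6", "IA-2"],
    "Q02: Is data encrypted in transit and at rest?":
        ["SC-8", "SC-28"],
    "Q03: How are vulnerabilities identified and patched?":
        ["RA-5", "SI-2"],
    "Q04: What is logged, and who reviews the logs?":
        ["AU-2", "AU-6"],
    "Q05: What are the vendor's incident-response commitments?":
        ["IR-4", "IR-6"],
}

def controls_to_expand(flagged_questions: list[str]) -> set[str]:
    """Given screening questions that raised concerns, return the
    underlying control IDs that warrant a deeper, full-detail review."""
    expanded: set[str] = set()
    for question in flagged_questions:
        expanded.update(SCREENING_MAP.get(question, []))
    return expanded
```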
So when it comes around again in its cycle, you are giving it a chance to be more accurately processed, and then sitting down and doing continual process improvement and analysis about the highest and best use of your time. Because it’s not just ‘protect that application’; it’s that we need to have visibility on a very broad risk landscape. So if we do one thing really, really well, but we miss 50 things, we’re not doing our job at all. We have to be able to have tools that scale across that landscape and tell us where to deep dive and tell us where to back off. And that makes us nervous because, inevitably, we’re going to miss things. Notably, we’re going to apply the wrong risk assessment to the wrong risk type. The more variation we have in our tool set, the more we’ll see. And we have an operating principle that a known risk is better than an unknown risk, but it doesn’t necessarily mean we can treat all risks. So we have to continually improve our processes to account for that philosophy and those principles.
Guerra: Would you say that, in general, you’re looking to reduce risk without stopping the business? And the red flags have got to be very rare. So you’re saying things like, ‘Maybe we have to slow it down a little bit. We are willing to reduce risk in the following ways.’ But you do occasionally, I would imagine, have to stop something; rarely, maybe, but you do.
Kufahl: Yes. And stopping something is just about the most outrageous thing you can do at an academic medical center, because it’s so hard to get things going. I usually say the hardest thing at Michigan Medicine is building momentum — except stopping something that’s already got momentum. So, once something’s in motion, there’s a compelling interest to keep it in motion, because there’s a respect for how hard it is, at least in IT, to get a thing done. And so there are enumerated authorities at our institution for what I can’t stop and what I can stop. But that is a pretty big red button.
So our primary goal — I don’t necessarily look at it as keep the business running because that feels like it does a disservice to the healthcare sector as a part of critical infrastructure. I look at it more as, how do I preserve availability of healthcare services. The last thing I want to do is make a healthcare service not available. I’m not a physician. I don’t know how important that ultrasound machine that’s connected to the wall in the ED is. I don’t know if that’s the only one. I don’t know if that’s used for immediate diagnostics for every pre-cardiac patient that comes in there. Or if it’s convenience.
Years ago — it became very popular — a couple of really snazzy device manufacturers created pocket ultrasounds that basically plug into your mobile phone. You do a very quick, not high fidelity, but very quick diagnostic image and it’s really cool technology, really great hardware. Real good, attractive price point, very easy to learn to use. ED docs, in particular, loved it. However, the applications were a little shy on the security controls, we’ll say. But, again, learning the context of how those things are used, understanding how the data flows, and knowing if you’re going to affect that situation, there was high value in having those reasonably secured at that point even though I was originally perceiving them as risky devices.
I had the ability to say, “Absolutely not. We won’t allow those to connect,” but going back to that first step, understanding the context, understanding the use, and understanding the benefit is key. Because then it was obvious even to me, a non-physician, that these were quality-of-care tools; they were a quality-of-life improvement for the docs. Because if they don’t have these, my understanding was, a lot of times they have to wait for the ultrasound to become available, which slows down diagnostics, which slows down patient care, which adds costs, when you might just need a lower fidelity scan to confirm your initial diagnosis, your initial triage.
So the value was clinically very, very high, and we learned this by working with our CMIOs. We have a great structure of associate CMIOs who work in our office of clinical informatics. So we have a CMIO, and then we have – I don’t know the exact number – maybe about 15 or so associate CMIOs who are in the specialties. And we have one for emergency medicine to go to and say, “Help me understand.” And they do, and that’s one of those key elements that has been very successful for us.
So if I didn’t have those resources, if I didn’t know those partnerships, and if I didn’t have that context, I feel I would be more apt to disrupt clinical care, the research environment or educational effectiveness than I am right now. So that would be my advice to other CISOs. Like Mr. Rogers says, look for the helpers; there are lots of helpers out there who want to explain why something is important, or why something might be of urgency. And if you approach it just from a ‘got to fill out my form’ way — don’t get me wrong, there’s a healthy portion of ‘you’ve got to fill out my form’ — but if you don’t have that way of approaching things a little bit more empathetically, a little bit more dynamically, you don’t really get to be a healthcare CISO, you’re just a CISO.
So to be a healthcare CISO, you’ve got to learn part of that context and the part of the mission that helps maintain availability. The more you know, the more you can tune, the more you can advocate. And very, very rarely are we in the position of really terminating a connection or outlawing an application. It does occur but for the vast variety of tools and devices and uses and third parties and collaborations, it’s surprisingly rare.
Guerra: Help me understand the dynamic. You make a great point. I understand what you’re saying about how, as a CISO in healthcare, probably any business you’re in, it’s very important to understand the context, and it helps you understand the importance, the criticality of the device, or whatever it is, and things like that. But explain to me a little more the dynamic of who makes the final call, the CISO or the business. How does that work?
Kufahl: Perspective is really important. And if you were to chop up my world into two buckets, there are threats, things that are actively going on at our institution, against our institution, within our institution, that are putting our institution at risk. Active packets, active malicious actors, active insider threats, those are threats. With those — and those are certainly a minority of what we deal with — there is a reasonable expectation that I have to first address those threats and then deal with whatever that recovery may be. So in a threatening situation, it is triage and respond to the threat appropriately to preserve the availability of the institution or the reputation of the institution or the digital systems of the institution. So there, I would say the CISO’s quite empowered. And in fact, in our organization, because we do run a reasonably large security shop, there are key roles inside my organization that don’t even need to wait for me. We have established playbooks that we vet, that we practice, that we continually manage to help scope and limit wherever that threat may be.
Bucket number two: the other bucket in my world of two buckets, and a large part of the organization, is vulnerabilities, which is a really squishy, ambiguous term — your digital vulnerabilities and financial vulnerabilities and technical vulnerabilities, short-term and long-term sorts of things. So there, we work with the CEO, who is also the dean. At our institution, our academy and our corporation are joined in a single role. So we have the benefit of having a penultimate leader for an academic medical center. But I can’t take every risk to that office, every vulnerability to that office. So early on, we made the assumption and we worked with him to make a risk delineation. We can’t just say, “These are all Michigan Medicine risks, deal with them.” There has to be some logical grouping of these risks. And we broke it down into seven major risk types. We call it our risk delineation framework.
And typically, those seven major risk types have logical proxies for the dean to empower to be those penultimate decision makers. For example, we have a risk type that is renowned research productivity; we have a chief scientific officer, and he can delegate to a cadre, the chief scientific officers and his chief of staff and the associate dean of research IT, so that at some point in our vetting of how important this risk is, they can be the penultimate decision makers.
Underneath that, there’s a series of levels where the risk comes in. And we use a framework called the four Ts. Every risk we ideally want to treat: how do you fix this risk? We talked about disruption; do we terminate that risk? Do we unplug it? Do we not allow the packets to flow? Do we cancel a contract? Do we transfer the risk, which in risk management terms is where you actually preserve a functionality but transfer the risk to a separate entity? That rarely happens. It’s happened twice in my last five or six years, where we’re actually saying, “Well, that risk is actually somebody else’s, but we still benefit from the functionality.”
And then the last one is really what you’re asking about here: risk toleration. So those four Ts: treat, transfer, terminate and tolerate. We have a fairly robust process around risk coming in. Let’s say it’s Windows 7; lots of stuff at an academic medical center runs on Windows 7. You try to get rid of it, and you get down to this thimbleful of Windows 7 machines that you’re going to have to make a decision around: maintain that functionality, have compensating controls, or remove it. So if this toleration comes in, let’s say it’s not for something easy to shut off; let’s say it’s for a radiology system, because radiology is complex, and there are all sorts of purposes and costs and market drivers there.
My team helps assess the technical cybersecurity risk and passes it up to a peer group of other technical people, not security practitioners but people from the IT community, to take a look at it and say whether it is reasonable or not. I have my proxy, my cybersecurity risk manager, who helps look at it through more of a leadership lens. And then it comes to our oversight committee for consideration, if it is a novel risk, meaning a risk that we haven’t adjudicated before. We have this concept of precedent: if we’ve accepted that risk for a similar purpose, for a similar reason, it doesn’t need to go to oversight; I can sign off on that. But if it’s a novel risk, it goes forward for discussion. In the past five years, a lot of it was practicing and building up that bank of precedent.
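That routing, with the precedent check deciding whether a toleration request stops at CISO sign-off or goes to the oversight committee, might look roughly like the sketch below; the matching criteria and names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TolerationRequest:
    system: str      # e.g., "radiology workstation"
    risk_type: str   # one of the institution's delineated risk types
    purpose: str     # what the risk is being tolerated for
    reason: str      # the technical or financial gap driving it

# Hypothetical bank of previously adjudicated decisions (the precedent bank).
PRECEDENT_BANK: set[tuple[str, str, str]] = set()

def route(request: TolerationRequest) -> str:
    """Precedented risks can stop at CISO sign-off; novel risks go forward."""
    key = (request.risk_type, request.purpose, request.reason)
    if key in PRECEDENT_BANK:
        return "CISO sign-off under existing precedent"
    return "Forward to oversight committee as a novel risk"

def record_precedent(request: TolerationRequest) -> None:
    """Record a precedent once the oversight committee accepts a novel risk,
    so a similar request next time does not have to go back to committee."""
    PRECEDENT_BANK.add((request.risk_type, request.purpose, request.reason))
```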
And now a lot of the risks that are coming forward are really associated not with technical gaps, like the company doesn’t make a Windows 10 version in that example, but with financial gaps, and they’re really starting to expose that we can’t afford to fix everything. But we’re able to have that engaged conversation and represent the security point of view with the enriched data from the business, to be able to say, “This is actually something we need to make room for.” The classic example I use is an MRI machine. MRI machines are $500,000 to $1,000,000; the PC that runs them is $500. The $500 PC is not the funding issue. The MRI is the funding issue, and then it has a path to toleration because you’ve got to have MRIs, right?
It wouldn’t be reasonable to say no MRI. Better to say, where is the capital funding for this project? How do you factor in the cybersecurity concerns along with all the other concerns? Now that’s a CISO leaning more into being an institutional leader than a security wonk, and learning how, in that case, imaging capital works inside an inpatient facility. And that’s interesting. It does take you further away from the nuts and bolts of security. But I think you’ll get more risk reduction the more you engage and understand how the funding works.
So that’s a little bit about how that process might work. And then one of the key precedents for risk toleration is that all risk toleration is temporary. So it goes on a risk registry. And just because we agreed, say in 2021, that that was an acceptable risk, things change by 2023, so let’s make sure we are refreshing and curating, so a decision made with a cohort based off data from 2021 doesn’t become the rule. So we’re constantly refreshing. And we typically only accept risks for about six months to a year. There are examples where we don’t do that, but that’s our opening stakes.
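The “all risk toleration is temporary” rule lends itself to a registry entry with an explicit treatment drawn from the four Ts and an expiry that forces a refresh. A minimal sketch; the field names, the roughly six-month default window and the example entry are illustrative assumptions, not the actual registry.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class Treatment(Enum):
    TREAT = "treat"          # fix the underlying gap
    TRANSFER = "transfer"    # another entity carries the risk
    TERMINATE = "terminate"  # unplug it, cancel the contract
    TOLERATE = "tolerate"    # accept it, temporarily and on the record

@dataclass
class RiskRegistryEntry:
    risk_name: str
    treatment: Treatment
    decided_on: date
    # Tolerations default to roughly a six-month review window.
    review_after: timedelta = timedelta(days=180)

    def needs_refresh(self, today: date) -> bool:
        """A tolerated risk must be re-curated once its window lapses, so a
        decision made on 2021 data never quietly becomes the permanent rule."""
        return (self.treatment is Treatment.TOLERATE
                and today >= self.decided_on + self.review_after)

# Example: a legacy Windows 7 workstation tolerated in early 2021 is overdue
# for review by early 2023.
entry = RiskRegistryEntry("legacy radiology workstation",
                          Treatment.TOLERATE, date(2021, 3, 1))
print(entry.needs_refresh(date(2023, 2, 1)))  # True
```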
Guerra: Jack, I would love to keep you on the phone for another two hours. But we’re quickly running out of time here. I’m going to ask you just a final open-ended question: what’s your best piece of advice for a CISO at a comparably sized medical center with a research arm, doing all the things you have to do?
Kufahl: My best piece of advice — and I’m almost a broken record about it — is don’t feel like you have to fill your teams out with what I call tier 19 security engineers. Academic medical centers are great employers, but they’re paying typically around 50% of the marketplace. So if you are approaching this with, “I’ve got to have the most proven certified, experienced people to protect the institution,” I think you’re setting yourself up for failure.
Invest in people early in their career, invest in people who have an interest in cybersecurity, and supplement them with training and certification and continuing education. But most importantly, exposure. If you’re only exposing your people (who are either young in their career or looking at cybersecurity as a second career) to your processes, they’re not going to develop nearly as fast as if you’re embedding them in the workgroups, getting them involved with the big projects, sending them into the wet labs to understand the difference between an FPLC and an HPLC, having them talk to CMOs.
If you don’t expose those people, they’re not going to grow. But most importantly, that’s where you get most of the value. There aren’t too many situations where you need a tier 19 engineer. What you really need are people who are going to learn the context and grow their career with you. There is a talent shortfall in cybersecurity. A lot of that I think is approach. So if you really invest in people entering in and growing them and having a solid coaching period, mentoring, continued education program, I think the problem really does a lot to solve itself. And that’s a big piece of advice that I offer.
Guerra: Jack, that was a wonderful interview. I really appreciate it. Thanks so much for your time today.
Kufahl: I had a great time.