Principled Device and Vendor Selection

Originally published in EP Lab Digest May 2017

Those of us who choose and implant medical devices shoulder responsibilities that extend well beyond typical patient care. Device implant decisions carry lifelong impact for our patients, but they also have major implications for our staff, hospitals and the vendors with whom we partner. Doctors need to use a principled approach to get these decisions right. Given the stakes involved, there are few areas in doctor/industry relations that carry more meaningful impact than pacemaker and ICD selection.

Early in my career as an electrophysiologist implanting cardiac rhythm devices, I developed a set of principles for vendor and model selection, and these have served me well in almost 20 years of practice. I’ve also learned that it is critical for us to fight to retain control of these implant decisions, and to make them in a disciplined and ethical fashion. We, the front line caregivers, are uniquely qualified to serve our patients’ interests. By responsibly taking charge of vendor selection, we can maximize benefit and value to both our patients and the healthcare system.

Device selection must carry an inviolable first principle — the patient gets the appropriate device for his or her indications and specific circumstances. All negotiation occurs in the gray areas in which there are alternatives. Pacemakers and ICDs should never be treated as interchangeable commodities supplied to us by the directives of hospital purchasing departments. Doctors, for their part, should not make implant decisions based on personal whims, friendship-based loyalty, or self-serving deals. For these complex devices, doctors should fight for multi-vendor arrangements with MD control over the decision. Healthy competition elevates service and promotes innovation. Using multiple vendors also amortizes the risk of product recalls across the practice, and gives our patients a broad array of options. How many vendors should serve us? That’s a question worthy of discussion, but I’d argue “one” is the wrong answer.

At the point of product decision, a doctor should consider four principles before choosing the device or vendor: product, technical support, price, and value-added service. The device decision hinges upon a balance of these factors.

Product:

It is intuitively obvious and principled that MDs should implant the best product for their patient’s needs, but that need not always be the top tier model from the doctor’s “favorite” vendor. Given the complexity of pacers and ICDs, model selection can prove to be a challenging decision fraught with tradeoffs. Each patient has unique needs. One patient’s priority may be a particular diagnostic tool; another may most benefit from maximal battery longevity or ease of remote follow-up. Reliability is important to all, but difficult to predict. The “perfect” device today may be next year’s recall. By rotating among vendors, the doctor reduces overall risk to their patient population. Doctors need to work hard to educate themselves on all of these tradeoffs and choose responsibly. Vendors need to know that if they create great products, this will influence their share of the business.

Technical Support:

The requirement for lifelong technical support distinguishes cardiac rhythm devices from most other medical devices. Healthy MD/vendor relations are critical for optimal patient care, even in hospitals without direct in-hospital rep support. Many industry models exist for device technical support in the office and hospital. Each of these models has merits, but all must be handled within ethical and legal bounds. Other expensive medical devices such as coronary stents or valves do not have this lifelong support need, and hospital purchasing departments need to appreciate this critical distinction as they consider the value of their purchases. If a doctor makes it clear to the vendor that they must earn the business each day, the doctor will be rewarded with exemplary service. Doctors who treat their reps as “box openers” will have their negative expectations reinforced with substandard support. In our system, we treat industry reps as partners, and consider them part of the hospital implant and support team. The quality of their work becomes a factor in device implant decisions. The vendors’ rational response is to compete on these terms, and our patients realize the benefit.

Price:

Simply stated, we doctors need to seek the best value for our patients. Because device price decisions typically do not impact the MD or patient directly in the US health system, it has historically been tempting for doctors to ignore cost. Doctors do this at their own peril. If a vendor recognizes this blind spot, the MD may be leveraged into an arrangement unfavorable to the hospital. This will not be sustainable. Hospitals will protect their financial interests and seize control from price-insensitive physicians. To maintain choice, MDs need to insert themselves directly into hospital pricing discussions, and also make it clear to the vendors that price is important to them. We need to have the discipline to walk away from unfair arrangements favoring the vendor. On the other hand, doctors who are complicit with aggressive purchasing departments and consultants in a “we win/you lose” race to the bottom will eventually find they get what they pay for in loss of product, support, and/or service. They will also find those unhappy vendors less receptive at the next contract negotiation.

Value-added Service:

Value-added service that deserves vendor recognition benefits our patients, not the doctor’s wallet or ego. The key here is for all to understand this distinction. When an industry rep goes the extra mile to support the healthcare mission, these actions should be noted. Examples might include patient counseling, office staff education, and MD access to experts. Indirect benefits such as research and consulting opportunities might benefit the overall mission, and could be considered in proportional terms, strictly adhering to ethical and legal standards. Doctors need to guard against being “bought” by industry. Nothing a vendor does to earn business should be construed as an overt or potential quid pro quo.

Key to making all of these vendor relations work is transparency and consistent follow-through. Vendors need to know the rules, and doctors need to enforce them. If a vendor slips in their service responsibility, they need to understand this may affect their share of the business. If a vendor has an exciting new product, the alternative vendor may be able to compete on price or support. The doctor needs to respond to these levers in a predictable fashion.

As doctors, our primary mission is to provide high quality healthcare to every patient we see. Responsible device selection must serve this mission. Vendor selection must be a transparent and principled process. Once the array of device options available to a given patient has been determined, the doctor can apply the four principles discussed above to finalize the decision. If the vendors understand the rules of the playing field, natural competition on product, service, price, and support will create the optimal environment for doctors to deliver care.

 

 


Why Healthcare Documentation is So Bad

[Figure: Care Venn diagram]

Health care documentation is done for three reasons.

1) Health care delivery (that’s the obvious one)

2) Regulatory compliance (checking all the boxes our government and payers think are important)

3) Malpractice avoidance (no one wants to get sued)

These three categories actually apply to every task we do in healthcare, but let’s confine this discussion to documentation.

Note in the accompanying figure that our three basic healthcare work requirements fit logically into a Venn diagram. Much of what we do serves only one or two of the three driving purposes. In an ideal world, we work in the center of the diagram where all three converge. Unfortunately, that “sweet spot” is pretty small, especially when it comes to documentation.

If all clinicians needed to do with our documentation was practice medicine (#1 above, blue in the attached Venn diagram), our notes would be more logical and much less bloated. Laundry lists of irrelevant and inaccurate diagnoses would not populate into every note. Copy and paste would occur a lot less often, and likely could be limited to appropriate uses such as carrying over past medical history (which should always be copied and pasted after verification, to reduce errors). Only relevant physical exam findings would be reported, so these would not be lost in a sea of normals. Useful information that is not valued externally, such as personal touches (e.g., patients’ wedding anniversaries, achievements of their children), would have its own optimized workflow.

Regulatory compliance and malpractice protection, the #2 and #3 health care documentation purposes above, are responsible for the large majority of the drivel that shows up in our notes. Believe me, we doctors would all love to confine our work to health care delivery, but external forces box us into this uncomfortable place, and this creates junk documentation.

Trying to serve all of these missions at once results in the mess we have today. Healthcare IT expert Fred Trotter says that working with EHR is “like having a conversation with a habitual liar who has a speech impediment.”

[Figure: Care Venn diagram with EHR overlay]

As I’ve diagrammed here, EHR serves all three basic functions, but not to equal degrees. EHR is designed for and sold to hospital administrators. Their first priority is business related, i.e., making sure the system runs efficiently and within the law. They work in the peach (Regulatory Compliance) circle. After the Federal Government stepped in with EHR incentives, Meaningful Use created a set of requirements for the EHR companies that are about 90% peach-colored as well.

After satisfying the needs of administrators and the government, EHR vendors allot their remaining resources to serving working clinicians seeing patients, as well as the patients themselves. This results in the smaller segment of EHR devoted to care delivery, represented in blue.

Malpractice protection, the green circle, is a critical area of alignment for both the administrators and clinicians. EHR systems provide some degree of protection via completeness and automation, but also introduce new risks.

Since working clinicians don’t make purchasing decisions, what is an EHR vendor’s motivation to optimize the systems for care delivery? Note, also, that the enormous cost of each system, coupled with a lack of easy data portability, effectively locks a healthcare system into its EHR. Nowadays, most physicians are employees of their hospitals and lack sufficient leverage to effect an expensive change, even if such a clinician-friendly EHR system were available.

EHR activities fundamentally serve the task of Regulatory Compliance (the peach circle) as their primary mission. This satisfies both the hospital administrators and the government. Because all parties have limited resources, the contribution to the Health Care Delivery circle suffers. Both hospitals and clinicians are interested in Malpractice Protection, so the green circle is served out of mutual self-interest, although EHR workflow only tangentially addresses this need.

Clinicians need mechanisms to streamline documentation so they can spend time with patients instead of in front of computer screens. Ironically, many of the efficiencies built into EHR to give clinicians more time with their patients have become targets of disapproval for our regulators and critics. I find it frustrating when I hear pundits and government officials rail against copy/paste and templates (such as normal physical exam findings). Most of these critics have no perspective on running a busy clinic or inpatient service. It would be impossible to do our jobs without some degree of automation. Do you think the legal profession would consider eliminating templates and copy/paste? Do you think contracts and wills are written freehand each time? Ridiculous.

Good clinicians need to fight external forces to protect their ability to care for their patients. That means we need to devote the large bulk of our time and thoughts to working in the blue circle of healthcare delivery. That’s where our mission is served. The other two circles? We should click/copy/paste/dictate/template only what is necessary to prevent us from being sued, sanctioned, denied payment, or accused of poor quality. If we can do that efficiently, we can get back to taking care of our patients. One casualty of this appropriate triage is ugly documentation.

Folks need to stop confusing healthcare documentation with health care delivery. Those who grade and pay us give far too much weight to the former. Those actually taking care of patients know where to set their priorities.

Edward J. Schloss MD
@EJSMD

Adapted from my comments on EMR & EHR Forum post EHRs Don’t Make Errors, People Do.

Addendum 8/16/15: In response to a comment from Michael Katz MD @MGKatz036 discussing the role of EHR in upcoding and other greed and fraud issues, I issued a lengthy reply. Because it extends my arguments above, I’ll include it as an addendum to the original post for better accessibility.

Mike, I’ve heard the charge that EHR causes fraud and increased cost many times. So much to say, but I’ll try to be brief.

– E&M billing is already stacked against us. Leaving one irrelevant bullet point off the ROS list or physical exam can cause dramatic devaluation of an encounter. This is a playground for RAC audits, and doctors live under that threat continuously. The system is illogical, and the need for attention to silly details draws us away from our primary mission. The EHR levels the playing field here. As I’ve said before, the dishonest MDs knew how to upcode dishonestly before EHR. Automation and reminders from EHR demystify the rules and let honest doctors be fairly compensated. Do you have all the E&M rules memorized? Could your billing withstand an audit without the help of an EHR? If you were like many doctors, you’d just code level 3 and not take any chances. Ethical billing consultants (yes, they exist) in the pre-EHR days found many doctors to be systematically undercoding because they didn’t want to play the E&M game, or didn’t know how. If office charges go up after EHR, that may be entirely appropriate.

– How much free care is delivered now BECAUSE we have an EHR? I just spent 30 minutes on the phone doing two very complex patient evaluations involving EHR and remote pacemaker review. I did this at no charge. Without an EHR, this quality of care would be impossible until the paper chart arrived. That means it wouldn’t get done until tomorrow at the earliest, or I’d have to make do without the chart. In many cases we can prevent office and ER visits with this data access. The patient and the system benefit; the MD makes less money. It’s the right thing to do, so we do it.

– EHR charting is so painful that I often avoid creating a billable encounter altogether just so I don’t have to go through the misery of clicking through the documentation. The care gets delivered; I just don’t get paid. There are lots of ways to do this, all ethical and appropriate. None violate our contracts. In the paper days, many of these encounters would have been billed.

I decided not to include a circle on the Venn diagram for unethical or inappropriate behavior. I have absolutely no doubt this behavior exists and is a big problem. Nothing I say here denies that fact. It has been reported well and extensively by others.

Thanks,

Jay
@EJSMD

ProPublica’s Surgeon Scorecard: Call for Peer Review

An Open Letter to Healthcare Outcomes Researchers, Journalists and Data Scientists

Thank you for taking the time to read this letter. I’d like to ask you to review some important new information.

Last week, ProPublica published a major story and online database they’ve termed Surgeon Scorecard. They have promoted it as a tool for individuals to learn more about their surgeons before an operation. After looking at the Surgeon Scorecard data and methodology carefully, I’m left with serious reservations about its quality and applicability. I am requesting your help with an expert peer review.

In the project, ProPublica evaluated eight common elective surgical procedures using previously unreleased data from Medicare. Their source of information was administrative data from billing submissions. Individual surgeons were rated based on readmissions and mortality. No chart-level clinical data was analyzed for the dataset. Each surgeon was assigned a visual ranking based on performance: a grade of low (green), medium (yellow), or high (red) “adjusted rate of complications,” with confidence intervals superimposed. My own work as a cardiac electrophysiologist (i.e., heart rhythm, pacemakers, and defibrillators) is not represented in this data. If you haven’t seen the database, take a look – enter a hospital or doctor you know and note the results.

Prior to the release of the database, ProPublica promoted the project with a video.

It’s worth a watch, as it may reflect the tone and purpose of their mission. There have been some negative reactions to this piece, and a lead reporter for the project has acknowledged this criticism.

This was a big undertaking, as you’ll see when you review it. These physician scorecards could have major impact on the medical community, particularly if ProPublica expands their investigations beyond the current narrow scope. For a journalist-generated project, there is some pretty heavy science involved, particularly when it comes to the methodology of the database. The background was published in a separate white paper with appendices. They indicate that they consulted with experts, many unnamed on background, to analyze and format their data.

Since its release, there has been vigorous debate about the methodology of the project, particularly on Twitter. If you search the streams of the reporters @marshall_allen and @olgapierce and the hashtag #SurgeonScorecard, you’ll find many of the arguments and their responses. Vocal critics include @JohnTuckerPhD, @skepticalscalpel, @justinmclachlan, and @daviesbj. Numerous blog posts have outlined these criticisms, and I’ll link to several that are worth reading:

ProPublica’s #SurgeonScorecard Should be Retracted from former journalist Justin McLachlan

ProPublica’s Surgeon Score Card: Clickbait? Or Serious Data? from urologist Benjamin Davies MD

The Problem with ProPublica’s Surgeon Scorecards from transplant surgeon Ewen Harrison

The Surgeon Scorecard is Here! (It’s Just Not Meaningful) from cardiologist Rocky Bilhartz MD MBA

After Transparency: Morbidity Hunter MD joins Cherry Picker MD from radiologist Saurabh Jha MD

The Surgeon Scorecard: Much Ado About Literally Nothing from general surgeon Jeffrey Parks MD

A few high-impact tweets have also addressed the statistical methods.

I realize there’s a lot here to digest. Let me take a moment to summarize some major points.

– Responsible doctors agree that increasing transparency is appropriate. One of the major MD blog critics above actually wrote a book on healthcare transparency. We do not object to responsible, accurate reporting of physician performance. We recognize that it is very difficult to assess the quality of a doctor and this needs to be fixed. I have promoted my own idea of direct physician supervision. The folks criticizing this project value patient safety, and are not afraid to criticize doctors when appropriate. We are all seeking the same goals.

– Surgeon Scorecard looks at elective, low risk inpatient procedures and uses purely administrative data to score the surgeons. Only mortality and readmissions are measured. No patient level chart data is reviewed. Actual peri-operative complications and procedural success are not systematically measured. Many clinicians, including myself, have noted inaccuracies in administrative data (which is compiled without MD oversight). I think most clinicians would agree that without direct review of clinical data, it is difficult to accurately judge another doctor’s performance. To their credit, the reporters openly acknowledge these limitations.

– ProPublica applied a clinical risk adjustment to the data. However, this co-morbidity “Health Score” did not independently predict outcomes (Item 2.5 on page 11 of their method paper). Their model did not show an increase in deaths or readmissions in the patients determined to be sickest pre-op. This makes me wonder about the validity of their risk adjustment. If pre-op risk is not accurately assessed, the doctors who take on the most difficult cases will be unfairly penalized. Dr. Jha’s parable of Cherry Picker MD vs. Morbidity Hunter MD (linked above) speaks directly to this issue. OB/Gynecologist Dr. Jen Gunter also covers this concern well on her blog. If doctors are reluctant to take on difficult cases for fear of scorecards, needy patients could go undertreated.

– Individual surgeon data is presented with visual red/yellow/green rankings and confidence intervals. In ProPublica’s words, “A high adjusted complication rate indicates that a surgeon’s patients suffered harm more often than his or her peers.” Neither this explanatory document nor the scorecard app discusses the importance of confidence intervals in data reporting (the question is only addressed in a separate FAQ document). A surgeon may have his “dot” in the red, but have confidence intervals that suggest he may actually be a high performer. I and others wonder whether consumers will be able to interpret this complex data without a more up-front discussion by the reporters. There is no visual indication of non-significance for surgeons whose CIs cross into low or medium risk. In a Twitter exchange, journalist Reed Miller likened this to reporting a baseball batting average leaderboard without a minimum number of at-bats. Scientist John Alan Tucker PhD covers this limitation well in his tweets.

– Procedure numbers for many of the surgeons are low, making the risk analysis difficult to interpret. Still, these doctors are “graded.” In at least one case, a doctor with zero complications was ranked in the yellow zone (as criticized by cardiology outcomes researcher Mintu Turakhia MD in his tweet cited earlier). A simple sketch after this list illustrates just how uninformative low case counts are.

– Many of the outcomes tracked are entirely out of the surgeon’s control, and may better reflect non-surgeon factors such as patient post-op adherence and emergency department staff actions.

– The statistical methods are complex, and there was no independent peer review. ProPublica acknowledges the work of doctors and scientists, many unnamed, in the review of their methodology, but editorial control was entirely in ProPublica’s hands.

– There is no prospective validation that these scorecards predict surgeon performance.

– There does not appear to be a mechanism for physician verification of his or her individual report.

– ProPublica’s promotional video is difficult to describe as anything less than sensational and fear-mongering. It is far out of place with the otherwise professional tone of this project. If you haven’t watched it yet, please do so now and tell me I’m wrong.
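To make the low-volume point concrete, here is a minimal sketch of the statistical problem. This is my own illustration using a plain exact binomial (Clopper-Pearson) interval, not ProPublica’s actual hierarchical mixed model, but the lesson carries over: at low case counts, the confidence interval around a surgeon’s complication rate spans every zone on the scorecard.

```python
# Illustrative only: ProPublica used a hierarchical mixed model, not this
# simple exact binomial interval. The point stands either way: with few
# cases, the confidence interval spans the green, yellow, and red zones.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for k events in n cases."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# A surgeon with zero complications in 20 cases:
lo, hi = clopper_pearson(0, 20)
print(f"0/20 complications -> 95% CI: {lo:.1%} to {hi:.1%}")   # 0.0% to ~16.8%

# The same spotless record over 400 cases:
lo, hi = clopper_pearson(0, 400)
print(f"0/400 complications -> 95% CI: {lo:.1%} to {hi:.1%}")  # 0.0% to ~0.9%
```

A perfect record over 20 cases is statistically compatible with a complication rate well into the “red” range; only volume narrows the interval. That is exactly Reed Miller’s batting-average point: without a minimum number of at-bats, a position on the leaderboard is mostly noise.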

To their credit, ProPublica has bravely taken on a critically important mission that was certain to ruffle some feathers. They have done an enormous amount of work to create this database, and their presentation is beautiful. I have been a vocal fan of ProPublica’s work. I have also been both a quoted and background source for their reporters (although not on this project).

Some have argued that it was important to get this data out for public review, despite its limitations. I respectfully disagree. I subscribe to the belief that bad data is worse than no data. Certainly the scientific literature is replete with examples that bear this out.

So is Surgeon Scorecard bad data? Strong words, but I say yes. This analysis was a great idea, but it fails to deliver on its goals. The data and methodology both have significant flaws. I say that from the perspective of a working clinician and clinical researcher with over 20 years’ experience, but I’d like to see a higher level of review. This project is as much science as it is journalism. Surgeon Scorecard should be peer reviewed and critically discussed as would any scientific outcomes study. As I suggested to ProPublica, we need to kick the tires.

This is why I’m calling on experts in healthcare outcomes, data science and journalism to review Surgeon Scorecard on methodological grounds to determine its validity, interpretability and appropriate application. This needs to be evaluated thoroughly, and at the highest level of expertise.  I hope you will be willing to take a close look and let us know what you think. ProPublica has invited expert commentary by email at scorecard@ProPublica.org. Please submit your comments there, and leave me a copy in the comments section of this post.

Thank you,

Jay

Edward J. Schloss MD
Medical Director, Cardiac Electrophysiology
The Christ Hospital
Cincinnati, OH
@EJSMD

EHR Review Folders – Saving Trees, Improving Care

I’m pretty sure we generate as many or more paper documents on EHR as we did in the paper chart days.

On Twitter I’ve shared in many lively discussions about the struggles we have caring for individual patients on EHR systems that aren’t optimized for that purpose. Many better and more prolific writers have done a wonderful job outlining the frustration we front line clinicians face on a daily basis. Still it seems our voices have a hard time being heard.

I really don’t think EHRs have to be so difficult. Simple changes could radically improve the ease of care delivery if the folks designing and implementing these systems prioritized the needs of the end users. Unfortunately, the bulk of development work these days seems to be aimed at satisfying government Meaningful Use requirements and optimizing systems for charge capture and quality metrics. Clearly, EHR vendors have their hands full serving their two primary masters – the US government and hospital administrators – and the needs of those seeing actual patients are lost in the shuffle.

A couple of years ago, I hosted a couple of developers from Epic EHR for a day in my office seeing patients. We talked about a lot, but in the end I said I’d put one wish at the top of my list. I called this EHR Review Folders. Here’s a discussion of the concept, adapted from an email that outlines this simple request:

Thank you for your interest in the idea of “review folders.” This is an old idea of mine, and I still think a good one. I discussed this with our Epic site visitors, so I’ll include them in my email. Let me try to describe my idea so we can try to get this promoted and (I hope) implemented.

When we physicians see a patient in a new encounter, or as a return after a period of time, there is a subset of the medical record that is highly relevant to us. Most of this is predictable. We need any recent office notes from the referring MD, and we need the most recent diagnostic tests. We might wish to have the history and physical and discharge summary from any recent hospitalizations. Every doctor has his or her own needs, but the basics are the same for all.

In my office, our medical assistant does a chart prep prior to each scheduled visit. She will comb through the EHR to find these relevant records, print them, and collate them into a packet that is then placed on my desk. Since we may see 20-30 patients a day, this packet gets pretty thick, and the assembly of the packet is pretty labor-intensive.

It would be great if we could do away with this old process. Unfortunately, the current EHR is not organized enough for us to quickly find relevant records on the fly. Getting what we need can be a bit of a crap shoot because the relevant information is mixed with irrelevant information. Only after we click on a record is it apparent whether we have what we need. Even after finding all of the “good stuff,” there is no way to quickly go back to a record, as it resides “hidden” with the other records. In a busy office day, it is extremely challenging to click around the EHR to find all of this material. Hard-to-find records, such as outside MD letters and other scanned documents, are very easy to miss. Most of us get frustrated, and may do an incomplete review on this basis.

My idea of a review folder would be to have a tab in the EHR in which all of the information relevant to the encounter could be collected during or prior to the encounter. This tab would live in the patient’s EHR for as long as it is needed, and be visible to anyone who needs it when they log in to the record. I would envision having my MA sort records prior to the office visit by dragging and dropping the relevant records into this folder. As I do my own prep, I will add and subtract records as well. Some of this could be automated by having the folder collect specific types of records by date parameters or type.

A great analogy for what I envision is an Apple iTunes playlist. In iTunes, I can collect songs into one folder that I might label something like “today’s run.” I might drag and drop songs individually, or I might set up a “smart playlist” in which I specify parameters like “songs added after December 12, 2012” or “songs by the artist Ratatat.” That list sorts what I need into one easy-to-find folder.
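If it helps make this concrete, here is a minimal sketch of the filtering logic I have in mind. The record types and field names are hypothetical, purely my own illustration and not anything from Epic’s actual system:

```python
# A sketch of the "smart review folder" idea, in the spirit of an iTunes
# smart playlist. All record kinds and fields here are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class ChartRecord:
    kind: str      # e.g., "office_note", "echo", "discharge_summary"
    author: str
    created: date

def smart_folder(records, rules):
    """Collect every record matching ANY rule, newest first."""
    hits = [r for r in records if any(rule(r) for rule in rules)]
    return sorted(hits, key=lambda r: r.created, reverse=True)

# Analogous to "songs added after December 12, 2012" or "songs by Ratatat":
rules = [
    lambda r: r.created > date(2012, 12, 12),   # recent records
    lambda r: r.kind == "discharge_summary",    # always include these
]

chart = [
    ChartRecord("office_note", "Dr. Referring", date(2013, 1, 5)),
    ChartRecord("echo", "Echo Lab", date(2011, 6, 1)),
    ChartRecord("discharge_summary", "Hospitalist", date(2012, 3, 2)),
]
for record in smart_folder(chart, rules):
    print(record.kind, record.created)
```

The MA’s manual drag-and-drop would simply add individual records to the same folder that the rules populate automatically.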

Uses for review folders could extend beyond what I’ve described. Recently, an interdepartmental complication review meeting was run for the first time in an EHR-only format. In front of a group of doctors and QA personnel, I struggled to find the relevant records in order to present a case. Had these records been electronically sorted prior to the meeting and available on all parties’ Epic desktops, the meeting would have gone much better.

I find this idea conceptually logical, but I’m not sure I’ve done an adequate job describing it. I feel strongly that we could enhance patient care and save a lot of time and money if we could get this done. I’d be more than happy to discuss further.

I’d love to hear others’ thoughts about EHR data organization. We go through an enormous amount of paper in my practice, purely to allow clinicians to review relevant data in an easy-to-access format. If some form of organized, intuitive digital data review were implemented, I could easily envision doing away with most or all of this printing. Going to a two-screen solution, with review data on a tablet and data entry on a bigger screen and keyboard, is a really attractive option to me. Simple programming changes in our system could get us to this point. Does anyone have this sort of clinician data organization implemented in their EHR (Epic or other)? Would you find it useful? Would you help make it happen?

Jay

ICD System High Voltage Component Failure HRS 2014

What an honor it was to speak on ICD high voltage component failure at this year’s Heart Rhythm Society Scientific Sessions. For those interested, I’ve included a link to a few slides outlining potential high voltage failure mechanisms in Riata ICD leads. Please feel free to review and comment. I would request attribution for anything you elect to share.

Riata HV Shorts

Edward J. Schloss MD FACC FHRS

 

 

How Sure Can We Be About Optisure?

Edward J. Schloss, MD

On March 24, St. Jude Medical announced the global launch of the Optisure family of ICD leads. It’s been a while since a new ICD lead was launched, and I’m probably not the only one who was caught by surprise. I’d like to explore why this approval is important for the ICD community. First, a brief history of ICD leads from St. Jude.

FROM RIATA TO DURATA
St. Jude Medical developed its own line of ICD leads after it purchased the former ICD vendor Ventritex in 1996. The first-generation Riata lead, approved in 2001, was succeeded by the Riata ST line in 2006. These leads were distinguished, in part, by their thin diameter, permitting implantation through a 7 Fr introducer sheath. In that era, implanting physicians’ interest in a thin lead was very strong. Even the high-profile failure of the 7 Fr Medtronic Fidelis ICD lead didn’t seem to dampen that enthusiasm.

Both of St. Jude’s Riata lead families later developed problems. Reports of subacute perforation soon after implant in the Riata ST line arose in the late 2000s. A year or two later, the internal core structure of both the Riata and Riata ST leads was discovered to break down in 25% and 10%, respectively, of these leads, as evident on fluoroscopic evaluation — a process called externalization. This problem, along with noted increased electrical failures of this lead, prompted an FDA class I recall of both product lines in December 2011, in addition to intense scrutiny and discussion in the lay press, investor press, blogosphere, and academic literature.

By the time the Riata and Riata ST leads were recalled, St. Jude had already obtained approval for and marketed their successors: Riata ST Optim and, later, the Durata lead. Both of these leads shared design similarities with the Riata ST lead, but additional modifications were intended to prevent the failures that the predecessor lines had exhibited. To mitigate the perforation risk specifically, changes in the Durata lead were intended to minimize tip pressure on the myocardium. And both new leads had a new insulator wrapping around the silicone core from Riata ST. This Optim insulation, shown to be more resistant to abrasion, has apparently been successful at preventing the fluoroscopic externalization that had occurred with the earlier leads.

The failure of the Riata leads has been shown to be time-dependent, so the device community has expressed some concern about Durata’s future performance. In addition, FDA has continued to apply pressure, with a January 2013 warning letter about this lead, specifically noting problems detected during a California plant inspection. Early active registry studies of Durata have been highly favorable, but a limited number of Durata problems have been discussed in case reports. Noted ICD critic Dr. Robert Hauser has also reported on a series of Durata failures from the FDA MAUDE database.

INSIDE THE DURATA
The Durata and Riata ST may share some failure mechanisms. In particular, the Swerdlow case report revealed inside–out abrasion under the distal shocking coil, resulting in a short between that coil and the ring-electrode cable, and consequent oversensing. Swerdlow and the Hauser MAUDE study have suggested that a similar form of insulation failure at the proximal shocking electrode could result in failure to defibrillate. (Because Durata and Riata ST have essentially the same internal design and materials at the level of the shocking coils, it is possible that this failure mechanism will occur with the newer leads.)

Moreover, Swerdlow found evidence of disruption of the Optim layer, which he hypothesized was due to Optim degradation, possibly related to hydrolysis of the polymer and cyclical stresses during the 4 years of lead service. The long-term biostability of Optim is critical, because without the Optim layer, the Durata leads are quite similar to Riata ST.

St. Jude has staunchly defended Durata, citing the favorable active registry data and additional testing in a large bibliography on its website. The company’s independent engineering analysis concluded that Swerdlow’s lead was damaged externally as a result of the extraction tools, not Optim degradation (counter to Swerdlow’s assertion).

THE BASICS ABOUT OPTISURE
St. Jude released Optisure this week, its first new ICD lead line since Durata. The product literature describes Optisure as “providing an additional system enhancement for addressing lead complications and improving system reliability.” The company says the slightly thicker 8 Fr lead is “for physicians who prefer a larger lead diameter.”

According to St. Jude, Optisure is built on the basic design of Durata with these additional modifications:
• 8 Fr lead body
• additional Optim insulation at the proximal end of the lead
• new layer of Optim insulation under the SVC shocking coil

FDA filings show Optisure was submitted for approval as a PMA (pre-market approval) supplement on 10/24/12 and approved for release on 02/21/14. The filing links back to the original PMA for the Ventritex TVL lead issued in 1996. It does not appear that a human clinical trial was performed, as is common in PMA supplement approvals.

MY ANALYSIS OF OPTISURE
I’m happy that ICD companies continue to pursue process improvement. If we ever reach the point when we think we have a lead that is “good enough,” that will be really unfortunate. I’ve continued to have some concerns about Durata. ICD lead failures in the Riata lines have not become evident until 4 years of use, and we are only recently accumulating large numbers of Durata leads that have been implanted that long. Fortunately, Optisure’s design attempts to directly address two of the feared possible failure mechanisms of the Durata lead.

First, the increased Optim thickness in the proximal lead is likely to diminish the can/lead abrasion in the pocket, and perhaps in areas of cyclical stress. I find it really ironic and satisfying to read St. Jude promoting Optisure “for physicians who prefer a larger lead diameter.” Back in 2010, when I criticized thin ICD leads in an HRS debate, I had a hard time getting people to agree with me. Now, going thicker is a marketing strategy. Times really have changed.

Second, the Optim layer under the proximal shocking coil should help to prevent internal shorts that could cause lead failure. This type of short, if it involves the distal high voltage cable, is especially worrisome, as it may manifest only at the time of clinical or induced ventricular fibrillation. I fear that proximal coil HV shorting may be responsible for many of the Riata and Durata lead failures and deaths documented in MAUDE database entries, such as those published by Hauser (as well as this more recent report). Having a layer of Optim between the silicone core and the SVC shocking coil should help to prevent this shorting, just as it has prevented externalization. Unfortunately, this mitigation will not change the likelihood of shorting under the RV coil (as in Swerdlow’s case) but should help overall lead reliability. St. Jude seems to feel the same way, citing Optisure’s design as an “enhancement for addressing lead complications and improving system reliability.”

WHAT’S NEXT FOR ICDs?
Getting a pacemaker or ICD lead designed, approved, and built is an enormous undertaking. The process has only become more difficult because of increasing regulatory barriers. The formerly common process of PMA supplement approval has come under greater scrutiny. ICD and LV leads that formerly might have been approved under PMA supplement now require large U.S. trials. The trials’ costs, coupled with the fear of another Fidelis or Riata debacle, appear to have stifled lead innovation. Given the development of two new leadless pacemakers (now being implanted in Europe) and the U.S.-approved subcutaneous ICD, we may be at the beginning of the end of the era of transvenous cardiac leads.

I have to agree with Zheng and Redberg that the PMA supplement process for medical device approval is problematic. The fact that leads from Riata to Optisure were approved on the basis of a dissimilar lead developed by a different company nearly 20 years ago should be ample evidence of this argument. Should Riata leads have gone through a clinical trial? Answering yes may seem logical. The unfortunate reality, however, is that no pre-market clinical trial would have picked up this lead’s late and novel failure mechanism. Even today, I would argue that careful industry engineering and close post-market scrutiny (including FDA-mandated registries) are doing far more to help our ICD patients than any pre-market trial ever could.

Nevertheless, it is critical to improve existing products, especially ICD leads. Most of us agree these are the “weak link in the chain.” I fear that a more highly regulated environment is having the paradoxically adverse effect of forcing us to settle with what we already have. That’s why I tweeted on March 24 that the quick approval of Optisure “both surprises and pleases me.” I wonder if this lead would even have been developed if it had been forced through a long, expensive clinical trial. Would that outcome have been a good thing?

Lessons Learned Part II

One thing I’ve missed since finishing my formal fellowship training has been the ready opportunity to bounce ideas off people in the lab and the fellows’ room. At Cleveland Clinic in the mid-1990s, I had the good fortune to work with a lot of smart EP attendings and fellows. During cases and in the fellows’ room, we’d learn a lot from each other about things that aren’t in journal articles and books:

  • What’s the best way to fashion a pacemaker pocket?
  • How do you do pre-op procedural counseling?
  • What’s the best way to manage lead recalls?
  • How do you access the coronary sinus for LV lead placement?
  • What’s your take on single coil ICD leads?
  • What works for you for getting vascular access?
  • How do you decide which device vendors to work with?
  • How do you save money in the EP lab without compromising care?
  • What is the most important attribute in an ICD lead?

Online and social media have been a great new sounding board for these types of interactions. Last winter, when EP Lab Digest invited me to submit an article, I brainstormed a list of tips that I titled Lessons Learned in 18 Years of Device Implant and Followup. The list probably serves better as a starting point for conversation than as a list of answers. This month, Lessons Learned Part II goes up in the October EP Lab Digest. Take a look and let me know what you think.

EJS

October 1, 2013