Shocking: Black children are 80% less likely to get opioids for appendicitis


For the video version of this post, click here. I usually don't get concerned about the results of observational studies. I can often convince myself that there is some confounder or issue of study design that makes the results less dramatic than they seem. This week, I saw an observational study that genuinely worried me.

It's long been noted that black patients get treated differently in the ER. Specifically, black patients get less opioid pain medication and less aggressive interventions. Much of this relationship has been explained in the past by something called spectrum bias. The idea is that if a population of individuals uses the ER for less severe illnesses, the treatment they receive will naturally be less intense - no bias, no racism, just statistics.

That's what makes this study, appearing in JAMA Pediatrics, so interesting. Researchers from Children's National Hospital in Washington, DC analyzed a nationwide sample of children treated for appendicitis in the nation's ERs. The elegance here is that the pain associated with appendicitis should be similar across populations. Indeed, triage pain scores were no different between black and non-black kids in this study.

Here's the bottom line: there were around 1 million appendicitis cases in children between 2003 and 2010. While both black and non-black children got analgesia around 55% of the time, non-black children got opioid analgesia 43% of the time, while black children got opioid analgesia just 20% of the time. This relationship persisted after adjustment for pain score, insurance status, age and sex. In fact, after considering those factors, black children were about 80% less likely to receive opioid analgesia than non-black children.
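
To see where a number like "80% less likely" comes from, here's a rough back-of-the-envelope odds calculation in Python, using the percentages quoted above. This is just a sketch; the study's actual estimate comes from a multivariable model on patient-level data, not from this arithmetic:

```python
# Unadjusted odds ratio implied by the raw percentages above (illustrative only).
p_nonblack = 0.43                      # opioid analgesia rate, non-black children
p_black = 0.20                         # opioid analgesia rate, black children

odds_nonblack = p_nonblack / (1 - p_nonblack)   # ~0.75
odds_black = p_black / (1 - p_black)            # 0.25

print(f"Unadjusted odds ratio: {odds_black / odds_nonblack:.2f}")  # ~0.33

# The adjusted odds ratio (accounting for pain score, insurance, age, and sex)
# was lower still -- around 0.2 -- which is where "80% less likely" comes from.
```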

 

If true, this is simply unacceptable. It speaks to either an overt or an implicit bias in the medical system, whereby children in pain are treated not according to their symptoms, but according to the color of their skin. But this wasn't a randomized trial - how could it be? Should we believe the results?

There are a couple of signals in the data worth noting. First, the data presented show significant variability. For instance, according to the study, the rate of opioid analgesia use was 25% in 2004 but 55% in 2005.

Such year-to-year fluctuation is hard to believe. If the data quality isn't up to snuff, then the results we see here shouldn't be completely trusted.

We should also acknowledge the fact that when physicians are unsure of a diagnosis of appendicitis in a child, they will often withhold opioids to see how the case develops. This practice could systematically differ in hospitals that cater to different populations, and drive some of the relationship seen here.

Despite these caveats, these numbers are dramatic. We're not talking about a couple of percentage points that can be explained away by subtle issues of study design or data acquisition. I join my voice with the authors, who state that dedicated studies of disparities in this area are urgently needed. I hope you do too.

"Well, son, you'll either be a violent criminal or a triathlete. Only time will tell".


For the video version of this post, click here. Predicting future crime is a cool idea, one that has seen play from dystopian novels to Hollywood movies, but the results never seem to work out that well. That's the case in a study appearing in JAMA Psychiatry that we'll be discussing in the next 150 seconds. This study examines the relationship between resting heart rate and future crime. It is 100 times larger than all prior studies of this phenomenon combined. And what it says is, yes, your heart rate is associated with your risk of future violence. But, I'm going to argue, it doesn't matter. Here are the details.

Swedish researchers looked at a cohort of around 750,000 Swedish men reporting for mandatory military conscription evaluation at the age of 18. Using Sweden's robust national reporting system for health and crime, they were able to follow these individuals for as many as 35 years. They collected data on violent crime, non-violent crime, being a victim of violence, and even unintentional injury.

What they found was that those with lower heart rates were more likely to experience all of these outcomes. If your heart rate was less than 60 beats per minute, your risk of committing a violent crime was 25% higher than that of someone whose heart rate was above 83. That's before adjusting for things like cardiorespiratory fitness and socioeconomic factors - and accounting for those confounders actually increased the estimate, to about a 50% higher likelihood of violent crime.

The authors offer two explanations. One, that a low resting heart rate is a marker of chronically low physiological arousal - those with a low resting heart rate might engage in risky behaviors to bring themselves up to a more normal level. Or two, that it's a marker of fearlessness - in the stressful situation of a conscription exam, a low resting heart rate suggests you're not easily frightened. If that's the case, maybe you don't fear the consequences of your actions as much in the future. These theories can't be teased out in the context of the study, but they are certainly intriguing.

Where things go a bit off the rails, though, is in the introduction, discussion, and accompanying editorial, where the prognostic value of resting heart rate is seriously considered. The authors imply that, perhaps, we should be paying special attention to those with a low resting heart rate. Aside from the fact that this rankles my libertarian sensibilities, I don't believe it is at all supported by the data.

The authors don't give us enough data to assess how good a test a low resting heart rate is, but I made some rough estimates, and here's what I found.

If we had a million 18-year-olds, then according to this study roughly 58,000 would commit a violent crime, and 200,000 would have a resting heart rate less than 60. If we targeted that group, we'd capture roughly 10,100 future criminals and 190,000 future innocents. We'd be right about 5% of the time. Interestingly, if we just picked a random 200,000 people and labeled them as potential criminals, we'd be right 5.8% of the time. A test that works better when you don't do it is not a very good test. So no, we should not identify these adolescents as being at risk, as the authors suggest, or consider resting heart rate as a mitigating factor in criminal trials, as the editorialists suggest. Doing that would, well, really get my heart rate up.
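
If you want to check that arithmetic yourself, here's the quick version, using the same rough estimates as above rather than exact figures from the paper:

```python
# Positive predictive value of "resting heart rate < 60" as a violence test,
# using the rough estimates from the post.
population = 1_000_000
future_violent = 58_000        # ~5.8% of the cohort
low_rhr = 200_000              # ~20% flagged by the "test"
true_positives = 10_100        # flagged individuals who later commit violent crime

ppv = true_positives / low_rhr           # ~5%: how often the test is right
base_rate = future_violent / population  # 5.8%: how often random labeling is right

print(f"PPV of low resting heart rate: {ppv:.2%}")
print(f"Random labeling of 200,000:    {base_rate:.2%}")
# The test does slightly worse than guessing.
```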

"Association" is a biomedical weasel-word. Does low Vitamin D cause MS?


For the video version of this post, click here. I’ll admit I’m a bit of a vitamin D skeptic. Studies demonstrating that the wonder-vitamin can improve cognition, decrease the risk of colon cancer, and prevent heart disease are often observational in nature.  These associations are always confounded by sunlight exposure and diet – two factors which themselves are strongly associated with a variety of health outcomes. It's no surprise that randomized trials of vitamin D supplementation have been less than impressive.

So a study appearing in PLOS Medicine, linking lower vitamin D levels to the development of multiple sclerosis, caught my eye.

It has long been noted that MS is more common at latitudes farther from the equator, where people get less sun.  It has also been shown that people with MS have lower vitamin D levels than people without MS.  In this analysis, the issue of confounding of vitamin D levels is addressed via Mendelian randomization.  Here’s a thirty-second primer:

You want to know if some biomarker (like vitamin D level) is causally linked to a disease (like MS). Vitamin D level is determined by a slew of things that you have no control over, like sun exposure, but there may be genetic polymorphisms that predispose you to lower-than-average vitamin D levels for your entire life.  At birth, you may be "randomized" to one of these genes. If low vitamin D causes MS, then surely people who inherited those low-vitamin-D genes would have a higher risk of MS.

The caveat is that those genes have to be linked to MS only through a Vitamin D pathway – you want to avoid what's called "pleiotropy". And those genes need to not be near any other genes on the chromosome that can cause MS. Finally, the genes need to be randomly spread through the population of interest – if they are preferentially carried by a certain ethnic group you might be finding a marker of increased risk in that group due to cultural, environmental, or other biologic factors.
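
To make that logic a bit more concrete, here's a minimal sketch of the Wald ratio, a common estimator in Mendelian randomization. The effect sizes below are hypothetical, purely for illustration, and I'm not claiming this is the exact method the authors used:

```python
# Wald ratio: the causal effect of an exposure (vitamin D) on an outcome (MS)
# implied by a single genetic instrument. All effect sizes below are made up.

beta_snp_vitd = -0.10   # hypothetical: each risk allele lowers vitamin D by 0.1 SD
beta_snp_ms = 0.05      # hypothetical: log-odds of MS rises 0.05 per risk allele

beta_vitd_on_ms = beta_snp_ms / beta_snp_vitd
print(f"Implied log-odds of MS per 1-SD increase in vitamin D: {beta_vitd_on_ms:.2f}")
# A negative value is consistent with higher vitamin D lowering MS risk. With
# several SNPs, the individual ratios are typically combined (e.g., by
# inverse-variance weighting).
```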

Point is – Mendelian randomization is hard.  Did the authors make the grade?

First, they identified four single-nucleotide polymorphisms (SNPs) associated with low vitamin D levels in a database of over 30,000 individuals. These variants were all in genes related to vitamin D synthesis or metabolism.  They then looked for these SNPs in a huge genetic study of MS comprising 14,000 cases and 24,000 controls.

The big finding is that, yes, people with MS were more likely to carry at least one of those low-vitamin-D gene variants. And while the genes weren't obviously associated with MS risk factors outside of vitamin D, they were involved in things like steroid synthesis, so the potential for unknown off-target effects is pretty high.

Best we can tell, the implication is that increasing your vitamin D level significantly – by, say, 10–50 nmol/L – may decrease your risk of MS by up to 50%.  Clearly, these numbers will have to be borne out in clinical trials.  But considering that genetically low vitamin D is a lifetime exposure, I hold out little hope that a short-term vitamin D supplementation trial will show a positive effect.  There are ongoing efforts to start a vitamin D trial among individuals with a first MS flare – but that is clearly not the population studied here.

In the end, this study ends up being long on methods, and short on actionable results.

Is there anything coffee can't do?


For the video version of this post, click here.

Coffee. It’s hard not to be biased when it comes to the ubiquitous drink. Many of us, myself included, depend on the stuff to start our day, continue our day, and give us something to do when we should otherwise be working. Studies linking coffee to better health get a lot of press. A few months ago, a big splash was made when a study linked coffee consumption to lower risk of melanoma (though they failed to account for sun exposure). Now, we have coffee staving off colon cancer.

The paper, appearing in the Journal of Clinical Oncology, examined roughly 1000 individuals with stage 3 colon cancer, who had been through at least the first round of surgery and chemotherapy. Each of them filled out a detailed food-frequency questionnaire within a couple months after the initial treatment, and they were followed prospectively for cancer recurrence or death.

The majority of the cohort reported drinking 1-3 cups of coffee per day. A small number, 6%, reported taking more than 4 cups per day. Heavy coffee drinkers were more likely to be male, white, and smokers, and had a higher level of physical activity.

After around 7 years of follow-up, 35% of the patients had experienced cancer recurrence or died. Among those who drank 4 or more cups of caffeinated coffee per day, the overall risk of recurrence or death was reduced by about 50% after adjustment for confounders.

Let that sink in a minute. 50%. Has one of the most potent anti-cancer agents been literally sitting under our nose all these years? Well, as much as I’m a java fan, I might need to cool this off a bit.

First off, these patients were part of a clinical trial evaluating the role of adding irinotecan to standard adjuvant chemotherapy for colon cancer. Clinical trials recruit very specific patients - these results may not hold for your typical colon cancer survivor. 

Another issue: Food frequency questionnaires generate a ton of data - you can't possibly control for everything people eat. The authors adjusted their results for total caloric consumption, but it is possible that foods that correlate with coffee intake are the actual drivers of the relationship here. Put simply, it's just as likely that this is a biscotti effect as a coffee effect.

Finally, the big issue: What do we mean when we say coffee? Is an espresso the same as a venti caramel macchiato? Does it matter where the beans come from? How they are roasted? How much sugar you add to it? This is the central problem of dietary research, and one that can only be overcome by randomized trials.

So let’s do it. There seems to be enough data now to justify actually trying this under controlled settings. My prediction is that we won’t see a 50% reduction in recurrence of colon cancer, but we may see something. After all, coffee is a drug. A wonderful, tasty, necessary drug that goes great with pie.

 

Soul food is bad for the heart


For the video version of this post, click here. What diet do you subscribe to?  If you answered “I have no idea what you mean,” then you can join the rest of the 80% of Americans who don't follow a specified diet regimen. Sure, lots of us try to avoid fat, or sugar, or meat, but when it comes to defining the health benefits of a particular dietary pattern, it’s hard to label people.

Now an article appearing in Circulation from researchers at the University of Alabama at Birmingham suggests that a “Southern” style diet may significantly increase the risk of heart disease.

Before we get our pitchforks and charge over to Paula Deen’s house, let’s take a minute to look at how this study was done. The data come from the “REGARDS” study, which was a large cohort study primarily designed to look at stroke risk factors.  About half of the REGARDS cohort, 15,000 people, were eligible for this analysis and provided dietary data in the form of a food frequency questionnaire.

What’s cool about this study is that they derived the dietary patterns without any preconceived notions.  Using a technique called factor analysis, they let the data speak for itself, finding which foods tend to “hang together” in the diets of individuals.  Five major patterns emerged:  the Southern diet (the focus of the study) was characterized by fried foods, eggs, organ meats, and sugar-sweetened beverages. Other dietary patterns included a “plant-based” pattern, a “convenience” food pattern, a “sweets” pattern and, my favorite, an “alcohol and salad” pattern.

What I like about this study is that it doesn’t force individuals into a specific category.  The analysis allows your dietary pattern to be part Southern, part plant-based, for example. So we’re not in the situation of trying to label each person with one, and only one, diet.

After follow-up of around 6 years, greater “adherence” to the Southern diet increased the risk of incident coronary heart disease by around 35%. To prevent one heart attack per year, you’d need to convert roughly 1200 fried organ-meat gourmands to a healthier option, but that assumes there were no confounders at play.  Clearly there were, as the Southern dietary pattern was associated with male sex, black race, lower income, and diabetes.  The authors adjusted for these factors, but it’s likely that other socioeconomic factors, including access to health care, may play a significant role here.
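
If you're wondering how a "number needed to convert" of roughly 1200 falls out of a 35% relative risk increase, here's the general arithmetic. The baseline event rate below is an assumed placeholder chosen to land in that ballpark, not a figure reported in the paper:

```python
# Number needed to "convert" per year = 1 / absolute risk increase.
baseline_chd_rate = 0.0024     # assumed: ~2.4 CHD events per 1,000 person-years
relative_increase = 0.35       # ~35% higher risk with high Southern-diet adherence

absolute_risk_increase = baseline_chd_rate * relative_increase   # ~0.84 per 1,000/yr
nnt_per_year = 1 / absolute_risk_increase
print(f"People to convert for one fewer heart attack per year: ~{nnt_per_year:,.0f}")  # ~1,190
```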

In unfortunate news, the “Alcohol and Salads” dietary pattern, which describes my eating habits pretty darn well, had no relationship to heart disease in either direction.  This may hurt sales of my upcoming diet book “Alcohol and Salads: 20 blurry days to a better you”.

The analysis doesn’t allow for too much subtlety - describing anyone’s dietary habits using five patterns is limiting.  In addition, the lack of signal in the “sweets” dietary pattern runs counter to a lot of prior research linking high processed-carbohydrate consumption to heart disease. That said, I, for one, am going to forgo that second helping of chicken-fried steak tonight.

Solanezumab is not a breakthrough for Alzheimer's patients.


For the video version of this post, click here. Is there a breakthrough drug that, unlike currently available medications for Alzheimer’s disease, targets the disease itself and not just the symptoms?  That’s what various news outlets are reporting after the publication of a study in the journal Alzheimer’s & Dementia evaluating the anti-amyloid antibody solanezumab.

But, in my opinion, this is flat out wrong.  In fact, the incredibly dense article, which reads somewhat like a love letter to the FDA, could be a lesson in how to try to get your failed drug approved.  Here’s the story:

Eli Lilly ran two phase 3 trials of Solanezumab in patients with mild to moderate Alzheimer’s disease.  This is pretty standard practice, as the FDA requires two independent randomized trials to grant a new drug approval.  Expedition 1 failed to meet either of its two primary endpoints – performance on the Alzheimer’s Disease Assessment Scale Cognitive Subscale or the Alzheimer’s Disease Cooperative Study Activities of Daily Living Inventory.

Of course, the study looked at tens of outcomes, and, in Expedition 1, it appeared that there was a significant improvement in a different cognitive scale.  Expedition 2 was still going on, so they changed the primary outcome of Expedition 2 to be this new scale.  Clever.  But, swing and a miss – in Expedition 2 this outcome was not significantly different between the groups.
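
A quick bit of arithmetic shows why "tens of outcomes" matters. Assume, purely for illustration, 20 independent outcomes and a drug with no true effect at all:

```python
# Probability of at least one spuriously "significant" (p < 0.05) outcome when
# testing many outcomes with no true effect. Assumes independence, which is
# generous -- correlated outcomes change the exact number but not the lesson.
n_outcomes = 20
alpha = 0.05

p_at_least_one = 1 - (1 - alpha) ** n_outcomes
print(f"Chance of >=1 false-positive outcome: {p_at_least_one:.0%}")  # ~64%
```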

But by combining the results of Expedition 1 and 2, and (this is starting to seem desperate, frankly) limiting the analysis to just those with mild Alzheimer’s at baseline, they were able to finally demonstrate a statistically significant difference.  This is not surprising, because the new endpoint was chosen based on the outcome of Expedition 1.

Now hopelessly out of the realm of quality trials, the company (and this paper is almost entirely authored by Lilly employees) performed what’s called a “delayed-start” analysis.  After Expedition 1 and 2, participants randomized to placebo could stay on and switch to solanezumab.  So, the argument goes, you have an early-start group and a new “delayed-start” group.  The idea is that if the delayed-start group catches up to the performance of the early-start group, the drug is merely masking symptoms.  If, instead, they fail to catch up, then the drug is fundamentally affecting the disease process itself.

The delayed-start group didn’t catch up, at least according to the incredibly broad definition of “catch up” used in this study.  The authors’ conclusion? Our drug targets the disease itself.  Cue press release and breathless excitement.

Listen, I really wish this were true.  But the likelihood is that this drug just doesn’t work - not in a way that will matter to patients at least.  You can tweak the statistics, you can market it however you want, but the data is the data. The major lesson we learn from this paper is how modern clinical trial design allows for many ways out when your primary hypothesis isn’t supported. We’ve got to stay skeptical. And if staying skeptical doesn’t work – try some other outcome – maybe the FDA will approve that.

Have we been giving kids juvenile idiopathic arthritis inadvertently?


For the video version of this post, click here. If you’ve ever taken care of a kid with juvenile idiopathic arthritis, it sticks with you.  This disease, which is occasionally referred to as juvenile rheumatoid arthritis, isn’t fatal, but it can rob children of the ability to be active, play, and grow - the real essence of childhood. And to date, we still don’t know what causes it.  It’s clearly auto-immune, but there isn’t even a serologic test for the disease.

An article appearing in Pediatrics, from Daniel Horton and - full disclosure - several of my former Penn colleagues, links antibiotic exposure in childhood to the subsequent development of JIA.

The researchers used the huge Health Improvement Network dataset, which captures much of the primary care activities that go on in the United Kingdom.  Examining over 140,000 children, they found 152 cases of JIA, and matched them based on age and gender to 10 controls each.  The bottom line?  88% of cases had been exposed to antibiotics.  75% of controls had been exposed.
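
Those two percentages translate into roughly 2.4 times the odds of prior antibiotic exposure among cases. Here's the quick, unadjusted version of that calculation (illustrative only; the paper's estimates come from a matched, multivariable analysis):

```python
# Unadjusted odds ratio implied by the exposure proportions above.
exposed_cases = 0.88       # antibiotic exposure among children with JIA
exposed_controls = 0.75    # antibiotic exposure among matched controls

odds_cases = exposed_cases / (1 - exposed_cases)            # ~7.3
odds_controls = exposed_controls / (1 - exposed_controls)   # 3.0

print(f"Unadjusted odds ratio: {odds_cases / odds_controls:.1f}")  # ~2.4
```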

Of course, you get antibiotics for infections. And cases were more likely to have infections as well: 93% versus 85%.  A question and an observation, then.  Question: is it the infection that causes the JIA and the antibiotics are just a bystander? Observation: man, we give a lot of antibiotics to kids.

The authors did a tremendous job putting the blame on antibiotics here.  With multivariable adjustment, infection fell away as a risk factor while antibiotics persisted. There was a dose-response finding as well - more antibiotic courses were associated with higher risk. There was also a temporal component - the risk of JIA was higher if antibiotics were given within a year of the index date than at earlier time points. Finally, they tried to rule out reverse causation by dating the diagnosis of JIA to when the first symptom (like limp or joint pain) appeared.

Bottom line? I believe that this finding is real.  Whether the offered explanation - that antibiotics affect the intestinal microflora and alter immunity - is true remains to be seen, of course.

But I want to use this study to illustrate one issue that plagues almost every study of risk factors - and that is the failure to give us a sense of the attributable risk.  In other words - how many of the cases of JIA can be explained by antibiotic use, compared to other possible risk factors (like the things that went into the multivariable adjustment)? Can every case of JIA be traced back to some antibiotic exposure? Of course not - but how much of the variation is explained by this variable?  That’s the information we need to actually counsel our patients.
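
For what it's worth, here's the kind of calculation I'm asking for - a population attributable fraction. The relative risk and exposure prevalence below are hypothetical placeholders, there only to show the formula; they are not estimates from this paper:

```python
# Population attributable fraction (PAF): the share of JIA cases that would be
# eliminated if the exposure (and its effect) were removed. Inputs are made up.
relative_risk = 2.0          # assumed risk of JIA in antibiotic-exposed vs unexposed
prevalence_exposed = 0.75    # assumed proportion of children exposed to antibiotics

excess = prevalence_exposed * (relative_risk - 1)
paf = excess / (1 + excess)
print(f"Population attributable fraction: {paf:.0%}")  # ~43% under these assumptions
```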

One thing this study does is open the door to future research.  Given the microflora argument, that research will likely involve a lot of stool collections.  And that, my friends, is why I’m occasionally glad that I’m a nephrologist.

Combining these two common medications might kill you.


For the video version of this post, click here. With depression a major health problem, and with broadening indications for antidepressants, these medications are seeing rapidly increasing use.  Non-steroidal anti-inflammatory drugs (NSAIDs), ranging from indomethacin, to ibuprofen, to the COX-2 inhibitors, are also very frequently used. Simple logic tells us there would be significant overlap in the Venn diagram of these two classes, and it turns out that individuals with depression are more likely to experience chronic pain, and thus more likely to take NSAIDs. The combination of these two classes may be problematic.

The problem is that the most commonly used class of antidepressants, the SSRIs, may increase bleeding risk by reducing serotonin uptake by platelets. And NSAIDs can mess with platelet function in multiple ways.  Now a study by Korean researchers, appearing in the BMJ, suggests that concomitant use of these two agents can significantly increase the risk of intracranial hemorrhage.

This was a large, population-based study using the Korean national medical database. The researchers looked for everyone with an incident antidepressant prescription from 2009 to 2013.  Of these roughly 5 million individuals, just about half had received an NSAID prescription within the 30 days following the antidepressant. That proportion seems crazy to me, but apparently you can’t get NSAIDs over the counter in Korea, so the demand for prescriptions must be high.

Rather than comparing the NSAID group to the no-NSAID group directly, the researchers used propensity scores to match NSAID users with non-users who had similar characteristics - this left roughly 2 million people in each group for analysis.
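
For readers who haven't run into propensity scores before, here's a minimal sketch of what that matching step looks like in practice. It's a generic illustration (the variable names and model are mine), not the authors' actual implementation:

```python
# Toy propensity-score matching: model each person's probability of being an
# NSAID user from their covariates, then pair each user with the non-user whose
# predicted probability is closest.
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, is_nsaid_user):
    """X: (n, p) covariate matrix; is_nsaid_user: boolean array of length n.
    Returns (user_index, matched_nonuser_index) pairs."""
    ps = LogisticRegression(max_iter=1000).fit(X, is_nsaid_user).predict_proba(X)[:, 1]
    users = np.where(is_nsaid_user)[0]
    nonusers = np.where(~is_nsaid_user)[0]
    pairs = []
    for i in users:
        # Nearest-neighbor match on the propensity score (with replacement, for brevity).
        j = nonusers[np.argmin(np.abs(ps[nonusers] - ps[i]))]
        pairs.append((i, j))
    return pairs
```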

The results? Well, the intracerebral hemorrhage rate was about 60% higher in the group taking NSAIDs.  Now, the overall risk was low - you’d need to prevent around 250 people from taking NSAIDs to prevent one intracerebral hemorrhage, but there are a few issues with the study that make me question the results.

First - an issue of interpretation.  The authors state that caution should be used when combining NSAIDs with antidepressants.  But they don’t have data on people who take neither drug, nor do they have data on people taking NSAIDs alone. So the picture is incomplete.  Maybe NSAIDs increase the risk of intracerebral hemorrhage and antidepressants have nothing to do with it.

We also saw no difference based on the type of antidepressant prescribed. SSRIs, due to their mechanism of action, should increase bleeding risk more than tricyclics, for example, but we don’t see that here.

Finally, the follow-up period was only 30 days, and the median length of follow-up was only 14 days. While a signal seen in such a short period of time may be compelling, it doesn’t help us advise our patients who are using antidepressants for a longer period of time.  This is a potential interaction worth following up on in more robust studies, but until we know more, I suggest we continue to do our best to relieve all types of pain.

 

Bitter news: citrus fruits linked to higher rates of melanoma.

For the video version of this article, click here.

Browsing through article titles this week, my eye caught one from the Journal of Clinical Oncology with the words “melanoma” and “citrus fruit.”  Not worth looking at, I thought; clearly a study linking citrus intake with melanoma is hopelessly confounded.  People who eat citrus fruits are probably healthier in general, have better access to care, and so on - clearly they’ll have lower melanoma rates.

But then I read the abstract, and, go figure - people consuming more citrus fruits had higher rates of melanoma. This was worth a deeper dive.

The background here is that all citrus fruits contain compounds called “psoralens”. Psoralens are photo-reactive chemicals and, in pharmacologic doses, are used to sensitize the skin to UVA radiation in the treatment of psoriasis, for example. When exposed to UV light, these compounds can intercalate into DNA, causing mutations, so there might be some biologic plausibility here.

That said, the dose you take as part of PUVA therapy is equivalent to what you’d find in about 10,000 liters of grapefruit juice…

But let’s push on. Here are the details:

Harvard researchers used two large, prospective databases: the Nurses’ Health Study and the Health Professionals Follow-up Study.  Combined, these studies comprised about 170,000 people, but after applying various exclusion criteria (including excluding anyone who wasn’t white), the researchers had around 105,000 individuals to study. After a whopping 25 years of follow-up, there were about 2000 cases of melanoma. They correlated answers from a food frequency questionnaire with subsequent melanoma incidence. Bottom line? A dose-response relationship, with the highest category of citrus eaters (>1.6 times per day) having a roughly 50% increase in the rate of melanoma.

Surprisingly, the annual UV exposure and number of sunburns weren’t different among the citrus consumption groups - so we’re not seeing a “Florida effect” here.

The authors strengthened their findings by looking at the association of other fruits and fruit juices to melanoma (none found), and looking at the association between citrus fruits and other types of cancer (none found), but there were still a couple of odd findings.

When they broke down the citrus fruits, they found that grapefruit, but not grapefruit juice, was associated with melanoma. Conversely, orange juice, but not oranges, was associated with melanoma. They argue that orange intake in its non-juice form was so rare that they didn’t have power to detect a link, but we don’t get numbers to support that, and frankly, I’d be surprised if grapefruits are eaten more often than oranges in this country.

Now, if you want to throw out a study, you can always argue (as many have) that food frequency questionnaires are terrible instruments. But even taking the study at face value, let me give you some data the authors don’t include directly.  The incidence of melanoma in this study was 7 cases per 10,000 individuals per year. To prevent one of those cases, you’d need to convince around 2500 people who eat a lot of citrus to stop, whereas you’d only need to convince 140 people to wear sunscreen. My advice?  Keep your grapefruit juice, and wear a hat.

If drugs for erectile dysfunction cause cancer, would you want to know?

If this is one of those "ignorance is bliss" situations, read no further...

With that in mind, I present to you a study linking erectile dysfunction drugs to malignant melanoma.

For the video version of this post, click here.

The background here is that ED drugs work by inhibiting phosphodiesterase-5, and the down-regulation of that enzyme also occurs in some biochemical pathways that lead to melanoma, so we can put a check mark next to biologic plausibility. Human evidence of the link, prior to this week, involves a cohort study in the US which suggested that men taking sildenafil had a nearly two-fold increase in melanoma risk (but of the melanoma cases, only 14 had been taking sildenafil).

This week, appearing in the Journal of the American Medical Association, we get the results of a Swedish study that examined thousands of cases of melanoma in an effort to put this issue to bed.

The researchers used a preexisting cohort of around 600,000 Swedish men.  In that group, there were roughly 4000 cases of melanoma, which they matched (based on year of birth) to 20,000 controls.

In unadjusted analyses, the PDE-5 inhibitors were associated with about a 30% increase in melanoma. This persisted after adjusting for a smattering of confounders such as income and comorbidity scores, but the authors state that they believed their adjustments were incomplete. If the association were causal, it would mean an additional 7 cases of melanoma out of every 100,000 men taking ED drugs.

But despite the association, two major findings make the link hard to believe. First, the relationship between ED drugs and melanoma was only seen in those who had a one-time prescription for the drug. If the drugs were causal, we’d expect an increase in risk among those who got more prescriptions.  In addition, the researchers found a link between ED drug use and basal-cell carcinoma, a malignancy that doesn’t have a known PDE5 link.  This all suggests that men who take ED drugs might also engage in other behaviors that increase the risk of melanoma - like taking vacations in sunny places.

Just to make it clear that we’re not totally out of the woods here, I should note that this PDE5 pathway appears to be relevant only in the roughly 50% of melanoma cases that have a BRAF mutation - it’s conceivable that if the researchers could stratify by BRAF status, they might have found a link. For now, though, we can rest easy - the data linking ED drugs and melanoma is simply not that firm.