#23 Dust mite avoidance for asthma

Dr. Daniel Aronov

Does dust mite avoidance improve asthma? 

Asthmatics are commonly advised to do things to reduce dust mites in their environment: vacuum regularly, wash the curtains, avoid soft toys, wash bed linen regularly, etc.

This is because there is a strong association between asthma and house dust mite allergy: Around 65% of asthmatics are also allergic to house dust mites (on skin prick testing) and it seems that higher exposures to house dust mite allergens are associated with worsening asthma. But is this relationship causal? Will reducing house dust mite allergens in the environment actually improve asthma?

This episode looks at the evidence around reducing exposure to house dust mites and its impact on asthma management.

Bottom Line:

Using dust mite impermeable bed linen reduces asthma-related hospitalisations for 1 in 8 children per year (provided these children have already had an exacerbation that led them to go to hospital, and that they have a positive skin prick reaction to dust mites). This was shown in a randomised controlled trial (2017) of 286 children. There was, however, no difference in the number of children who required a course of oral steroids and no meaningful difference in asthma control scores. Furthermore, other studies, albeit of generally poor quality, have failed to show a benefit with dust mite impermeable bedding.


#22 Subclinical hypothyroidism in pregnancy

Dr. Daniel Aronov

A pregnant woman has come to you for her first antenatal appointment. She’s perfectly healthy with no signs or symptoms of thyroid disease. You arrange the gamut of blood tests: full blood exam, blood group, HIV, etc., but do you also check her Thyroid Stimulating Hormone levels (TSH) to screen for thyroid problems? And if she ends up having subclinical hypothyroidism, do you treat it? This week we look at the evidence, the guidelines and the history to try and answer this question.

The story of treating subclinical hypothyroidism in pregnancy, unfortunately, follows a very common formula in medicine:

Here’s how it goes:

  • Step 1: Data from observational studies suggest a strong association between an abnormal blood test and disease.
  • Step 2: Guidelines jump on this data very quickly and make strong recommendations to fix the blood test if it’s abnormal to try and reduce said disease.
  • Step 3: This practice quickly becomes the standard of care.
  • Step 4: Only after this has become standard of care is a good quality randomised controlled trial finally conducted to actually test the recommendations by the guidelines – to see if fixing the abnormality on the blood test actually improves the disease.
  • Step 5: The randomised controlled trial/s shows that fixing the abnormal blood test has absolutely no impact on the disease to which it is associated.

The worst part about this formula is step 6: that it often takes years or decades for practice to change, even once there is good evidence that what we were doing was wrong.

Let’s follow the history of treating subclinical hypothyroidism and see how it fits this formula to a tee!

But first let’s define our terms: Subclinical hypothyroidism is when the TSH is elevated but the thyroid hormone (T4) is within the normal range. So there must be some low thyroid hormone process going on, because the pituitary is pumping out extra TSH to compensate, but the system is keeping the thyroid hormone level itself within the normal range. There’s another term called hypothyroxinaemia – this is the reverse – where the T4 is low but the TSH is within the normal range. Then you can get a low T4 plus a high TSH – and this would just be called straight up hypothyroidism. But I would argue that you could even split this into two: symptomatic hypothyroidism and non-symptomatic hypothyroidism. Non-symptomatic hypothyroidism would be where the patient has a low T4 and a high TSH but feels completely fine, with no symptoms or signs of hypothyroidism whatsoever.

So let’s go through our little formula:

Step 1: Data from observational studies suggest a strong association between an abnormal blood test and disease.

We have known for over a hundred years that hypothyroidism is associated with adverse pregnancy and neonatal outcomes, like mental retardation. That’s when it was discovered that overt hypothyroidism – where patients were iodine deficient and had symptomatic hypothyroid disease – was associated with impaired brain function in the baby. But what if the women are totally asymptomatic but have a low thyroid hormone or a high TSH? Well, in 1999 two studies came out showing that even in these women, there is an association with poor brain development in the child.

One study measured the TSH in 25,216 pregnant women and found that those with TSH levels in the top 0.3% ended up having children with lower IQ scores.

The second study followed 220 pregnant women and found that babies born to mothers with lower T4 during pregnancy did worse on psychomotor development scores.

So there seems to be a pretty consistent association. The problem is, there was absolutely no evidence that fixing the thyroid hormone levels leads to better outcomes. But that didn’t stop Step 2 of our association formula from going full steam ahead.

Step 2: Guideline committees make strong recommendations to fix the abnormal blood test to try and reduce the disease.

Well, it certainly hasn’t been universally accepted across guidelines, but some big guidelines have definitely taken this information and said every pregnant woman should be tested with a thyroid function test, and any high TSH or low T4 should be corrected, even if she is completely asymptomatic. In 2005, for example, the American Association of Clinical Endocrinologists, the American Thyroid Association, and the Endocrine Society came out with a joint statement that every pregnant woman should be screened for subclinical hypothyroidism and treated. One review found that, based on the treatment thresholds some of these guidelines recommended, 15% of all pregnant women would need to take thyroid hormone.


Step 3: This practice becomes standard of care.

Certainly in my experience, I have noticed that pregnant women get treated for their subclinical hypothyroidism, especially if they end up under the care of an endocrinologist. I’m sure it’s not universal but I think it is very common.

Step 4: Randomised controlled trials are finally done to test whether fixing said blood test abnormalities actually reverses or improves the disease to which they are associated.

Let’s start with the CATS study or the Controlled Antenatal Thyroid Screening study, which was published in NEJM in 2012.

The study took 21,846 pregnant women and checked their TSH and T4 levels within the first 16 weeks of pregnancy. They were then randomised into two groups:

The first group was the screening group: in this group the treating team were given the results of the TSH and T4 tests straight away, and if the TSH was high or the T4 was low or both, they were put on thyroxine.

The second group was the control group: they took the blood sample but froze it straight away, stored it at -40 degrees Celsius, and only after delivery thawed it out and measured the TSH and T4.

390 women in the screening group tested positive for hypothyroidism, either with a high TSH or low thyroid hormone or both, and so were treated, while 404 women in the control group tested positive for hypothyroidism but were not treated because the results were only available after delivery.

The primary outcome was cognitive function at age 3 of the babies born to these mothers.

And….there was no difference between the groups. The IQ score was the same whether you had abnormal thyroid hormone levels and you got treatment or whether you had abnormal thyroid hormone levels and you didn’t get treatment. There was also no difference in preterm birth, birth weight, etc.

In March of 2017, the second randomised controlled trial looking into this came out. It was also published in the New England Journal of Medicine, with Brian Casey as the lead author.

It was actually 2 trials: one tested whether treating subclinical hypothyroidism during pregnancy improves cognitive function in the baby, while the second tested whether treating hypothyroxinaemia in pregnancy does.

They screened 97,228 pregnant women with thyroid function testing before 20 weeks gestation. 3057 of them ended up having subclinical hypothyroidism (TSH > 4 mU/L) and of these, 677 fit all the inclusion criteria, gave consent and underwent randomisation to either get thyroid hormone therapy or not. That was study 1 – the subclinical hypothyroidism study. Study 2 was the hypothyroxinaemia study. 2805 of the 97,228 women ended up having hypothyroxinaemia, where the T4 level was low but the TSH was normal. 526 of those underwent randomisation.

They were all randomised to either levothyroxine or placebo. Every month they got a blood test and the levothyroxine dose was adjusted if needed, in order to keep the TSH level (study 1) or T4 level (study 2) in the normal range. They did sham dose adjustments in the placebo group as well: for every real dose adjustment in the levothyroxine group, they did a sham dose adjustment for someone in the placebo group. It’s great. The goal was to keep the TSH between 0.1 mU/L and 2.5 mU/L for study 1. For study 2, the goal was to keep the T4 between 0.86 and 1.9 ng per decilitre (11 and 24.5 pmol/L).

The primary outcome was a full scale IQ test at age 5 in the baby that eventuated from this pregnancy. Secondary outcomes were other cognitive, motor and language scores at 12 and 24 months and a bunch of other developmental tests like behaviour and social competence were done at different stages of follow up. They also looked at a bunch of pregnancy and neonatal outcomes as well.

So what did they find? Let’s start with the pregnancy and neonatal outcomes: there was absolutely no difference. In preterm birth, preeclampsia, placental abruption, Apgar scores, admission to NICU – nothing! No difference in either of the studies. So if you treat a low T4 or a high TSH with thyroid hormone, there is absolutely no impact on pregnancy or neonatal outcomes.

Now let’s move onto the neurodevelopment and behavioural outcomes. They were able to follow these kids up to the age of 5 in 96% of the cases. There was no difference in IQ scores and no difference in any of the other developmental scores either. Nothing. And again, this was in both trials. So whether you had a high TSH or a low T4, doesn’t matter…fixing it does not help anything to do with pregnancy.

Bottom Line

Treating either subclinical hypothyroidism (high TSH, normal T4) or hypothyroxinaemia (low T4, normal TSH) with levothyroxine in pregnancy does not have any impact on pregnancy outcomes (such as preterm birth, preeclampsia, gestational diabetes or placental abruption), does not have any impact on neonatal outcomes (such as Apgar scores, admission to NICU, stillbirth, miscarriage, or neonatal death) and does not have any impact on the child’s neurodevelopment or behaviour.



#21 Steroids for acute urticaria

By Dr. Daniel Aronov

A patient presents with acute urticaria (hives) and, after your comprehensive assessment, you decide to give them an antihistamine. But do you also give them a corticosteroid? Maybe some prednisolone to speed up the recovery from their hives? It’s a pretty common practice: in a study of one emergency department in Italy, 93% of those presenting with acute urticaria were treated with steroids on top of their antihistamine. But does it actually add any benefit on top of antihistamines? In this episode we explore the evidence.

Approximately 10% of the population will develop acute urticaria at some point, and most of the time we have no idea what causes it. Idiopathic urticaria, where no trigger is identified, is actually the most common kind (60% of cases). When a trigger is identified (the other 40% of cases), most are due to drugs, then insect bites, then foods.

There have only been 2 randomised controlled trials to answer today’s question. One done in the United States in 1995 and the other done in France in 2017.

Pollack, 1995

This was conducted out of one Emergency Department in Phoenix, Arizona in America. Anyone who came into this Emergency Department over a 7 month period with a generalised itchy, urticarial rash was enrolled in the study. The rash had to be present for less than 24 hours and if they had any signs of a more serious allergic reaction, like angioedema or stridor, then they were excluded. They were also excluded if it was only a local allergic reaction or if they had already used an antihistamine or steroid in the previous 5 days.

They were all given a 50mg intramuscular shot of diphenhydramine in the emergency room and then randomised into two groups: the first group were sent home with an antihistamine (hydroxyzine) and prednisolone (20mg twice per day for 4 days). The second group were sent home with an antihistamine plus a placebo to take twice per day for 4 days.

They recruited 43 patients altogether, 19 in the placebo group and 24 in the prednisolone group. The primary outcome was the average change in Itch Score (a 0-10 itchiness rating) at day 2 and day 5.


The average itch score when they presented to the ED was somewhere between 7.5 and 8. On day 2, the average itch score was 4.4 in the placebo group and 1.3 in the steroid group (a 3 point reduction on the 10 point itch scale from adding prednisolone to the antihistamine). At the 5 day mark, the itch score was 1.6 in the placebo group and 0 in the steroid group.

Barniol, 2017

This study recruited 100 participants presenting with acute urticaria to one of 2 emergency departments. Again, they were excluded if they had angioedema or anaphylaxis. They had to have had the rash for less than 24 hours and couldn’t have used steroids or antihistamines within the previous 5 days.

They were randomised to either antihistamine plus steroid or antihistamine alone. The antihistamine they used this time was levocetirizine 5mg daily for 5 days, and for the steroid they used prednisolone 40mg once daily for 4 days.


The primary outcome was how many people had an Itch Score of 0 out of 10 on day 2. They also checked itch scores at 5 days, 15 days and 21 days.


At 2 days, 79% of those in the placebo group had absolutely no itch (an Itch Score of 0 out of 10), but in the prednisolone group, only 62% had an itch score of 0. This was the only statistically significant result. All the other results – Itch Scores at 5, 15 and 21 days and complete resolution of the rash – were not statistically different between the two groups.

Bottom Line

There have been 2 randomised controlled studies assessing the benefit of steroids on top of antihistamines for the treatment of acute urticaria. One study from 1995, with 43 patients, found that steroids improved itch scores by 3 points on a 10 point itch scale by day 2. The second study, with 100 patients, was done in 2017 and showed that steroids DID NOT improve recovery from acute urticaria when added to antihistamines.

So what should we do?

  • Option 1: The better study showed that steroids did not improve outcomes, so stop using them for acute urticaria.
  • Option 2: Even if the first study’s finding is real and steroids speed up recovery from acute urticaria by a little bit, everyone eventually got better anyway, so it’s not worth the potential side effects of systemic steroids.
  • Option 3: There is some evidence steroids improve recovery, so we should use them.

#20 Evidence Based Pearls for Respiratory Tract Infections

Dr. Daniel Aronov

This episode is a live broadcast from a lecture given at the Royal Australian College of General Practitioners conference. It is a collection of my favourite evidence-based clinical pearls for the most common presentation in primary care: respiratory tract infections. We’ll cover antibiotics for otitis media, sore throat and bronchitis, steroids for sore throat, Tamiflu, treatments for cough and a few other random things in between.
To watch this talk with the slides head on over to my YouTube channel (and subscribe while you’re there ;-p): www.youtube.com/drdanMD


  • Ebell, M., et al., How Long Does a Cough Last? Comparing Patients’ Expectations With Data From a Systematic Review of the Literature. Annals of Family Medicine  Feb 2013
  • Thompson M, Vodicka TA, Blair PS, et al, for the TARGET Programme Team. Duration of symptoms of respiratory tract infections in children: systematic review. BMJ 2013;347:f2027
  • Smith SM, Fahey T, Smucny J, Becker LA. Antibiotics for acute bronchitis. Cochrane Database of Systematic Reviews 2014, Issue 3. Art. No.: CD000245. DOI: 10.1002/14651858.CD000245.
  • Paul, et al., Effect of honey, dextromethorphan (Robitussin), and no treatment on nocturnal cough and sleep quality for coughing children and their parents. Arch Pediatr Adolesc Med 2007
  • Shadkam, et al., A Comparison of the Effect of Honey, Dextromethorphan, and Diphenhydramine on Nightly Cough and Sleep Quality in Children and Their Parents. Journal of Alternative Complementary Medicine, 2010.
  • Cohen., et al. Effect of honey on nocturnal cough and sleep quality: a double blind, randomized, placebo-controlled study. Pediatrics 2012
  • Paul, et al. Vapor Rub, Petrolatum, and No Treatment for children with nocturnal cough and cold symptoms. Pediatrics Nov 2010
  • Centor, et al. The diagnosis of strep throat in adults in the emergency room. Medical Decision Making, 1981
  • Spinks A, Glasziou PP, Del Mar CB. Antibiotics for sore throat. Cochrane Database of Systematic Reviews 2013, Issue 11. Art. No.: CD000023. DOI: 10.1002/14651858.CD000023.pub4.
  • Hayward G, Thompson MJ, Perera R, Glasziou PP, Del Mar CB, Heneghan CJ. Corticosteroids as standalone or add-on treatment for sore throat. Cochrane Database of Systematic Reviews 2012, Issue 10. Art. No.: CD008268. DOI: 10.1002/14651858.CD008268.pub2.
  • https://www.theguardian.com/business/2014/apr/10/tamiflu-saga-drug-trials-big-pharma
  • Vergison, et al., Otitis media and its consequences: beyond the earache. Lancet 2010
  • Venekamp RP, Sanders SL, Glasziou PP, Del Mar CB, Rovers MM. Antibiotics for acute otitis media in children. Cochrane Database of Systematic Reviews 2015, Issue 6. Art. No.: CD000219. DOI: 10.1002/14651858.CD000219.pub4.

#19 Does high cholesterol CAUSE cardiovascular disease?

This episode takes a deep dive into the evidence for and against the lipid hypothesis. The lipid hypothesis states that abnormal blood cholesterol levels cause cardiovascular disease. But is this true? Does high LDL (“bad cholesterol”) and/or low HDL (“good cholesterol”) actually CAUSE cardiovascular disease or is it just an association? This episode was recorded live from a General Practitioner conference. To view the presentation with the slideshow visit: www.youtube.com/DrDanMD

#18 Which antihypertensive is best? The ALLHAT trial

The ALLHAT trial is by far the most important clinical trial ever done in the management of hypertension. It answers the question: which class of antihypertensive medication is best for reducing cardiovascular disease? And it is the definitive source for the answer. They randomised a whopping 42,000 patients to one of four antihypertensive medications: an ACE inhibitor, a calcium channel blocker, a thiazide or an alpha-blocker. The ACE inhibitor they used was lisinopril, the calcium channel blocker was amlodipine, the thiazide was chlorthalidone and the alpha-blocker was doxazosin. They followed them for between 4 and 8 years and were interested in how many in each group developed cardiovascular disease. Perhaps one class of antihypertensive is better than the others?

The trial was started in the mid-90s, when the thiazide chlorthalidone had been around for ages and the case for lowering blood pressure was well established. Lots of new agents were coming to market, each vastly more expensive than chlorthalidone. So the authors were interested in whether the more expensive drugs of the time – ACE inhibitors, calcium channel blockers or alpha-blockers – were actually any better than good old cheap chlorthalidone.


The 42,000 participants were recruited from 623 centres in the US, Canada and Puerto Rico between 1994 and 1998. There were three main inclusion criteria:

  1. They had to be 55 years old or older
  2. They had to have hypertension, defined as either a systolic BP greater than 140 or a diastolic BP greater than 90.
  3. They had to have one additional risk factor for cardiovascular disease on top of hypertension: either a previous heart attack, type 2 diabetes, current smoking, left ventricular hypertrophy on ECG or echo, or elevated cholesterol.

They only excluded those who had symptomatic heart failure. They did not exclude patients who were already taking antihypertensives – in fact, 90% of them were – but they stopped them all when the study started. On the day of randomisation they stopped all of their antihypertensives, then the next day they started their study drug. That way they were testing each particular drug in the purest way possible.

They concealed allocation and randomised in a 1.7:1:1:1 ratio so that more participants were in the thiazide group. Around 15,000 were randomised to chlorthalidone and around 9000 to each of doxazosin, lisinopril and amlodipine. They stopped the trial in 2002, so those recruited in 1994 were followed up for 8 years while those recruited in 1998 were followed up for 4 years; on average, the follow up was 5 years. They followed them up every 3 months in the first year, then every 4 months in the following years. They would increase the dose of the study drug to get the BP to a target of below 140/90. If they couldn’t do that with the maximal dose, they would add in either atenolol, clonidine or reserpine – this was up to the doctor.

The primary outcome was heart attack – either fatal heart attacks or non-fatal heart attacks. Secondary outcomes included: stroke, all-cause mortality, cancer, GI bleed, end-stage renal failure and a composite of all the bad cardiovascular outcomes.


Baseline characteristics

  • The average age was 67
  • Half of them were women
  • The average blood pressure was 146/84
  • 22% were smoking
  • 51% had established cardiovascular disease
  • 36% had type 2 diabetes
  • The average BMI was 29.2
  • Around 30% were black, 70% white

Alpha-Blocker (Doxazosin)

They stopped the doxazosin (Cardura) arm of the trial early because an interim analysis showed that it was inferior. They published this interim analysis in a separate article, in JAMA in the year 2000, before the results of the main trial came out. After an average of 3.3 years of follow up in this interim analysis, they found that compared to the thiazide chlorthalidone:

  • Those getting doxazosin had a 25% relative increase in adverse cardiovascular outcomes – from 21.67% in the chlorthalidone group to 25.45% in the doxazosin group (NNH 27). Which means that for every 27 patients treated with doxazosin instead of chlorthalidone, 1 extra will develop a cardiovascular event.
  • Congestive heart failure doubled: it went from 4.45% in the chlorthalidone group to 8.13% in the doxazosin group, making a number needed to harm of 27.
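
A quick aside on the arithmetic: a number needed to harm (NNH) is just the reciprocal of the absolute risk increase. Here is a minimal sketch using the event rates quoted above (the helper function is my own illustration, not something from the trial):

```python
def nnh(control_rate, treated_rate):
    """Number needed to harm = 1 / absolute risk increase
    (the reciprocal of the difference in event rates)."""
    return 1.0 / (treated_rate - control_rate)

# Combined cardiovascular outcomes: 21.67% -> 25.45%
print(round(nnh(0.2167, 0.2545), 1))  # about 26.5, reported as NNH 27
# Congestive heart failure: 4.45% -> 8.13%
print(round(nnh(0.0445, 0.0813), 1))  # about 27.2, reported as NNH 27
```

Rounded to whole patients, both work out at roughly 27, which is the figure quoted in the article.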

Now, this was the first ever decent study to compare an alpha-blocker to another class of antihypertensive, and while it should have spelt the end of doxazosin for the treatment of hypertension…it didn’t! Pfizer, who brought doxazosin to market under the brand name “Cardura”, was also one of the sponsors of the ALLHAT trial, so they became aware of these results before the trial was published. So what do you do when one of your drugs is found to be harming people? Do you pull it from the market? Or perhaps put a little warning on the box? No. You create a marketing and damage control campaign. In fact, sales of doxazosin (Cardura) were completely unscathed after this trial was published, with virtually no change in the $800 million of sales per year for this drug.

Internal documents leaked from Pfizer show the techniques they used. They got an external research agency to study doctors’ awareness of this preliminary report from the ALLHAT trial. When the agency found that “knowledge of the trial’s preliminary results is minimal for all specialities,” they took great steps to make sure, as best they could, that word did not get out. Firstly, Pfizer deliberately did not issue a statement about the ALLHAT results, because it “would likely draw more media attention to the situation.” Secondly, they taught their drug reps to provide information about the ALLHAT trial “only when asked.” And how’s this for a bit of genius: at the American College of Cardiology conference in California in the year 2000, Dr. Furberg, the lead researcher of the ALLHAT trial, was set to give a presentation on the ALLHAT results. So what did Pfizer do? They brought in the top big shots in cardiology to do a tour at the same time as the presentation, thereby keeping doctors who attended the conference from attending the talk on the ALLHAT trial. In fact, a leaked document shows the two Pfizer employees who came up with this idea being praised as “quite brilliant” by their management.

The blood pressure reductions with chlorthalidone and doxazosin were pretty much the same – around 2mmHg apart. So I think this is an important lesson: not all blood pressure reduction is equal. A 10mmHg reduction in blood pressure with one class of drug will impact cardiovascular disease differently than a 10mmHg reduction with another drug. And this is the same for everything – cholesterol, HbA1c – and it’s why we need to ask for clinical trials to show us the impact on hard outcomes rather than these surrogate markers.

Calcium Channel Blockers (Amlodipine)

The primary outcome – fatal or non-fatal heart attacks – was the same in all three groups. It occurred in 11.5% of the participants regardless of which antihypertensive they got.

But it’s in the secondary outcomes where some differences lie, and they seem to favour chlorthalidone:

  • There was a 40% relative increase in heart failure with amlodipine compared to chlorthalidone: it went from 7.7% to 10.2% (NNH 40).
  • There was an increase in coronary revascularization with amlodipine as well: it went from 9.2% to 10% (though the P-value was 0.06).
  • Peripheral vascular disease decreased by 0.4% with amlodipine – from 4.1% to 3.7% (again the P-value was 0.06 – just short of that arbitrary cut off for statistical significance).
  • All the other outcomes: stroke, end-stage renal disease, cancer and all-cause mortality were the same between the two. And it didn’t matter if they were male or female, older or younger, black or white or diabetic or non-diabetic, the results were consistent across all those groups.

ACE Inhibitor – Lisinopril

Here again, chlorthalidone came out on top.

  • There was a 15% relative increase in stroke with the ACE inhibitor compared to the thiazide: it went from 5.6% to 6.3% (NNH 143).
  • The combined cardiovascular outcome was also worse for the ACE inhibitor: it went from 30.9% to 33.3% (NNH 41).
  • Heart failure was also worse with lisinopril (NNH 100).
  • More people developed angina with lisinopril (NNH 67), as well as more coronary revascularization.
  • End-stage renal failure was not different between the groups – it was 2% for those taking lisinopril versus 1.8% in those taking chlorthalidone. 
  • There was also no difference in all-cause mortality, peripheral arterial disease or cancer.

They did a lot of subgroup analyses and did show that within certain age ranges and within certain races, lisinopril was not worse than chlorthalidone:

  • Age: When they looked at those under the age of 65 there was no difference in the combined cardiovascular disease outcome between lisinopril and chlorthalidone. It was only in those over the age of 65, where lisinopril was worse.
  • Race – lisinopril was worse in black people. There was a 40% increase in stroke in black people taking lisinopril, but no increase in stroke rates in white people taking lisinopril. Similarly, with the combined cardiovascular outcome, it increased by 20% in black people taking lisinopril but only increased by 6% in white people taking lisinopril.
  • Diabetics status: The outcomes were consistent with the overall findings
  • Gender: The outcomes were consistent with the overall findings

Blood Pressure Targets

The blood pressures achieved in the three groups were actually statistically different. Whether they were clinically or meaningfully different is another story. At 5 years, the systolic BP was 134 for chlorthalidone, 134.7 for amlodipine and 135.9 for lisinopril – roughly a 2mmHg difference between chlorthalidone and lisinopril. Could that be the sole reason for the worse outcomes with lisinopril? I highly doubt it, but it could be. And even if it is, it still puts thiazides ahead in my books, because great…they’re better at reducing blood pressure than the others.

Side Effects

At the end of 5 years, 20% of those in each of the chlorthalidone and amlodipine groups were no longer actually taking those drugs, while 27% were no longer taking lisinopril. The most common reasons were adverse effects or the patient refusing to take it after a certain point. So it seems more people were stopping the lisinopril. They did collect some limited data on other adverse events. Angioedema occurred 4 times more often in the lisinopril group than in the other groups, but it was still pretty rare, occurring in about 1 in every 250 people taking lisinopril. They measured potassium levels and found that hypokalaemia occurred more commonly with chlorthalidone. At the start of the study, before any of them were given their study drug, around 3% had a potassium less than 3.5. In those randomised to chlorthalidone, 8.5% developed a potassium less than 3.5, while only 1% did with lisinopril and 1.9% with amlodipine.

Statin Arm Of the Trial

For everyone who had elevated cholesterol in this study, of which there were 20,000, they randomised them to either get a statin or a placebo. I guess they thought: “well, we’re spending billions of dollars recruiting and following these people up, we might as well check if statins work as well”. That arm of the study will not be discussed here.

Bottom Line:

The ALLHAT study is the most definitive study comparing different classes of antihypertensives in terms of their ability to reduce cardiovascular disease. They found that the alpha-blocker doxazosin (Cardura) was significantly inferior, causing double the rate of congestive cardiac failure and a 25% relative increase in cardiovascular disease compared to chlorthalidone. The calcium channel blocker amlodipine was equivalent to chlorthalidone in terms of heart attacks, strokes and cardiovascular disease, but in terms of heart failure chlorthalidone was superior: treatment with amlodipine resulted in a 40% relative increase in congestive cardiac failure (NNH 40). Chlorthalidone also seemed superior to the ACE inhibitor lisinopril: treatment with lisinopril resulted in a 15% relative increase in stroke compared to chlorthalidone, as well as an increase in cardiovascular disease (NNH 41) and heart failure (NNH 100). These increases in adverse cardiovascular outcomes with the ACE inhibitor were much less pronounced in a subgroup analysis of only white people.


So how did the guidelines respond to this piece of evidence?

American guidelines  (JNC 8):

“In the general nonblack population, including those with diabetes, initial anti-hypertensive treatment should include a thiazide diuretic, calcium channel blocker, angiotensin-converting enzyme (ACE) inhibitor, or angiotensin receptor blocker (ARB). In the general black population, including those with diabetes, initial treatment should include a thiazide diuretic or calcium channel blocker.”

Australian Heart Foundation Hypertension guidelines:

“In patients with uncomplicated hypertension ACE inhibitors or ARBs, calcium channel blockers, and thiazide diuretics are all suitable first-line antihypertensive drugs, either as monotherapy or in some combinations unless contraindicated”.

UK NICE guidelines

For under 55s they recommend starting with an ACE inhibitor or an ARB. For over 55s, or for black people of any age, they recommend starting with a calcium channel blocker; if a calcium channel blocker is not tolerated, then a thiazide diuretic. They actually recommend using chlorthalidone or indapamide as the thiazide of choice rather than hydrochlorothiazide. NICE also recommend to: “Prescribe non-proprietary drugs where these are appropriate and minimise cost”.

And while we’re on the topic of cost, here is a quote from the conclusion of the ALLHAT trial article:

“One of the stated objectives of ALLHAT was to answer the question, “Are newer types of antihypertensive agents, which are currently more costly, as good or better than diuretics in reducing CHD incidence and progression?”18 Consideration of drug cost could have a major impact on the nation’s health care expenditures. Based on previous data that showed that diuretic use declined from 56% to 27% of antihypertensive prescriptions between 1982 and 1992, the health care system would have saved $3.1 billion in estimated cost of antihypertensive drugs had the pattern of prescriptions for treatment of hypertension remained at the 1982 level”

#17 Honey for cough

By Dr. Daniel Aronov

On average, children get about 8 upper respiratory tract infections per year, most of which involve a cough. A cough can ruin the child’s sleep and the parents’ sleep, and it can also be very distressing for the parents. A survey found that one of parents’ common fears about their child’s cough is that the child may die from asphyxiation. It’s no wonder, then, that we spend a fortune on cough medications. In Australia alone, we spend $67 million per year on over-the-counter cough medications for kids. Yet almost all guidelines and drug regulators warn against using them because they don’t work and they may be harmful. The Australian Therapeutic Guidelines, the Royal Children’s Hospital guidelines, the American Academy of Pediatrics guidelines, the FDA and the TGA, to name a few, all recommend against using cough medicines in kids under 6 years of age. So is there anything else we can use to combat cough in kids? Some cultures have an age-old tradition of giving honey to treat coughs, and believe it or not, honey as a treatment for cough has been tackled in the scientific literature 3 times! This week, we look at the evidence.

By far, most of the over-the-counter cough medications have dextromethorphan as their active ingredient: Robitussin, Dimetapp (cough, cold and flu), Vicks, Codral (Cold&Flu and cough), Bisolvon, Mucinex, and others.
Occasionally, like in Benadryl, Diphenhydramine is the antitussive ingredient.

RCT 1 – (Paul, 2007)

The first of our three randomised controlled trials compared honey to dextromethorphan to usual care (doing nothing). It was done in Pennsylvania in the United States and was published in 2007 in the Archives of Pediatrics & Adolescent Medicine.
They recruited anyone between the ages of 2 and 18 who presented to a single pediatric clinic in Pennsylvania with a cough that was due to an upper respiratory tract infection, having been present for less than 7 days.
All three of these RCTs were just one-day studies. Participants did a cough survey on the day they presented to their doctor, then that night they got the honey or the placebo, then the next day they would repeat the cough survey to quantify the difference in cough between the two nights. So as parents came to their paediatrician, wanting them to fix their child’s cough, they were asked if they wanted to participate in the study. If they consented, they immediately had to fill out a questionnaire about the child’s cough from the night before (when they didn’t have any treatment for their cough). All three randomised controlled trials used the same questionnaire, which had five questions, each with 7 possible tick-box answers. The questions were:
  1. How severe was your child’s cough last night?
  2. How frequent was your child’s cough last night?
  3. How bothersome was your child’s cough last night?
  4. How much did your child’s cough affect the child’s ability to sleep?
  5. How much did the child’s cough affect the parent’s ability to sleep?

The possible answers were:

  • not at all
  • not much
  • a little
  • somewhat
  • a lot
  • very much
  • extremely

Only parents who gave a score of at least 3 (“somewhat”) for at least 2 of these 5 questions were then able to be included in the study. They managed to recruit 130 kids, but only 105 completed the study, for whatever reason. They were then randomised into three groups: the first group got an artificially honey-flavoured dextromethorphan preparation. The second group got honey (buckwheat honey, specifically). And the third group got nothing. Good on them for making an artificial honey-flavoured dextromethorphan! So the no-treatment arm was not blinded, but the honey and dextromethorphan groups were. And the investigators were blinded to all three interventions. The honey or dextromethorphan came in an unlabelled syringe, and parents were told to give it to their child 30 minutes before bed that night. The investigators then called them the next day and asked them to answer the exact same cough questionnaire that they did the day before.

The average age of the children was 5, the oldest person in the study was 17, and they had been sick for an average of 4.5 days before presenting to the clinic. The primary outcome they were assessing was the difference in cough frequency from the night they didn’t take anything to the next night, when they had either the honey, dextromethorphan or nothing. The other outcomes were the other 4 questions in the survey: cough severity, cough bothersomeness, child’s sleep and parental sleep. They determined that a clinically meaningful change in score would be 1 point. So dropping from “very much” to “a lot”, or from “somewhat” to “a little”.
At the start of the study, the average cough score for each of the 5 questions in the survey was about 4 out of 6, which means, on average, they were ticking the “a lot” box. So my child’s cough affected my sleep “a lot” last night. How severe was your child’s cough last night? “A lot”. And so on and so forth.
So what did they find?
Well, for all five of the outcomes, the greatest reduction in score was seen with honey, then dextromethorphan, then doing nothing. It’s important to note that the doing-nothing group did improve, because, as we know, these things just get better with time.
No need to go into the exact results of all five of these outcomes because they were all pretty much the same (and I don’t want to put you to sleep). But let’s look at the primary outcome, “cough frequency”, as an example: “how frequent was your child’s cough last night?”. The night before, when they didn’t get any treatment, the average score for this question was 4, representing “a lot”. Giving them no treatment on the night of the experiment took their score down by 0.92. So it took the average response to this question down from “a lot” to “somewhat” just by doing nothing. Giving honey took the score down by 1.89 points, so from “a lot” to “a little”. While dextromethorphan took it down by 1.39 points – so to somewhere in between “somewhat” and “a little”. And that was the pattern for all their outcomes: doing nothing led to some improvement, giving honey led to greater improvement, and giving dextromethorphan disguised as honey was somewhere in between the two. But despite the fact that this pattern was easily visible for all five questions, only one of them reached statistical significance when comparing honey to doing nothing, and that was cough frequency. The difference in score was 0.97…so just shy of that minimally clinically important difference of 1.
So that’s the first study. Yes, some benefits, but it’s questionable whether those benefits were meaningful. And this study was funded by the National Honey Board!

RCT 2 – (Shadkam, 2010)

This one was done in Iran in 2010 and was published in the Journal of Alternative and Complementary Medicine. Here, they compared honey versus dextromethorphan versus diphenhydramine versus usual care.
They recruited 141 children between the ages of 2 and 5 and randomised them into 4 groups. The first group got 2.5mL of honey, the second group got 2.5mL of dextromethorphan syrup, the third group got 2.5mL of diphenhydramine and the fourth group got usual care.
This was again an overnight study – on the day they came into the paediatric clinic, they were given that same cough survey, then that night they had the intervention, and this time they came back into the clinic the next day, where they did the questionnaire again.
Importantly, the methods used in this study were really poor. There was no blinding, which, granted, is difficult to do with honey, but at least in the other study they made an effort with a honey-flavoured dextromethorphan preparation. And while blinding patients is important, what’s even more important is blinding the researchers. There was no blinding of the researchers. There was no allocation concealment, which always makes you wonder whether there was actually any randomisation. But perhaps the weirdest part of this study was, and I quote: “Any ambiguous question for the mother, if any, was answered by a paediatrician” …what the? So the investigators were answering questions if the mothers didn’t know the answer? This makes it even more alarming when you consider that the investigators were not blinded. If they had a personal belief that honey was better, they could very easily influence the results, even if only subconsciously.
The starting score for each of these questions was around 4 (the same as the other trial) and the results were pretty much identical to the other study. In the usual care group, the score went down to an average of about 2.5 across the outcomes from the questionnaire. In the honey group the score went down to about 1.5, and for dextromethorphan and diphenhydramine, the score went down to 2. This time, when comparing honey to usual care, the improvement was statistically significant for all of the outcomes, and they were all also clinically meaningful, with at least a 1-point difference between the two groups for each of the outcomes.
So that’s two down, and what do we have so far? One study which showed a benefit for honey, but only reaching statistical significance for 1 of 5 outcomes and of questionable clinical significance, and a second study which shows a statistically and clinically meaningful difference, but at a high risk of bias. Let’s put all our eggs in the third study basket.

RCT 3 – (Cohen, 2012)

This one was done in Israel and compared eucalyptus honey to citrus honey to labiatae honey to placebo. Which is awesome, because as someone who does use honey in general practice, I always get asked: “which honey is best?”
This was the best designed out of the three trials. They recruited children between the ages of 1 and 5, from 6 paediatric clinics who were presenting with a nocturnal cough due to an URTI.
They used the exact same cough questionnaire as the other two studies. They managed to recruit 270 kids and blinded both the patient and their family as well as the investigators. They also concealed allocation.
They randomised them to 4 groups – 3 of them were honey but different types of honey (eucalyptus, citrus and labiatae honey) and the fourth group was placebo. For placebo, they used Silan date extract – which they report looks and tastes similar to honey. They were all packaged in little 10 gram packets and you couldn’t tell the difference between them.
Like the other two studies this study was only an overnight study – so they took the survey on the day of presentation, gave them the honey or placebo that night (30 minutes before bed), then redid the survey the next day – like the first study, an investigator called them and they did the second day questionnaire over the phone.
These kids were just over 2 on average and were sick for about 3 days when they presented. Again, they used the five separate questions of the cough survey as an individual outcome. The primary outcome was cough frequency and the other 4 were secondary outcomes.
So what did they find?
Well, firstly, there was no difference between the 3 types of honey for any of the 5 outcomes. But honey was superior to the placebo date syrup for all 5 of the outcomes, and this was statistically significant. The reductions were very similar to the other two trials.
On average, across the five outcomes, the date extract gave a 1-point reduction in the cough score, while the 3 different honeys gave about a 2-point reduction.

Bottom Line

Honey reduces the frequency and severity of cough associated with upper respiratory tract infections in children. It also improves the child’s sleep and the parents’ sleep. When compared head to head, it is superior to the vast majority of cough medications, which contain either dextromethorphan or diphenhydramine as their active ingredient.

Adverse effects

Cough medications have had a lot of serious adverse effects reported in the literature. Firstly, 15% of all childhood overdoses in America are from cough medications. One-third of the time it’s because the incorrect dose was given to the child by the parent, but two-thirds of the time it’s because the child found it, went “hmmmm…this tastes nice”…and drank the whole thing. But even with standard doses, dystonias have been reported, as have anaphylaxis, hallucinations and mania. And it’s not only these studies which show they don’t work; a separate Cochrane review has also found no benefit with dextromethorphan.
Well, now we have honey! Kids love it, it carries minimal risk and it’s more effective than the medications out there. But do not give honey to babies under 1 year old because of the risk of botulism.


#16 The best way to quit smoking according to science

By Dr. Daniel Aronov
14.5% of Australian adults smoke cigarettes – this is down from 22.4% at the turn of the century. The rates are similar in the US but much higher across Europe – with an average closer to 30%.
There’s pretty much nothing we can do for a smoking patient that would improve their health as much as getting them to quit smoking would. So how do we do it? Firstly, are you more likely to quit successfully if you stop “cold turkey”, or is it better to stop gradually? Secondly, can medications help? And by how much?
This week we look at a randomised controlled trial that compared gradual smoking cessation to abrupt cessation. We’ll then examine the evidence for each of the different pharmacological treatments used to help people quit.

Etymology of “Cold Turkey”

Given that it’s almost universally used to depict the sudden stopping of an addiction, where in the world did this term come from?
According to the Online Etymology Dictionary, it came from the fact that cold turkey is a dish that doesn’t require much preparation. So quitting “cold turkey” is quitting suddenly, without any preparation. But then dictionary.com reports its origin comes from a common phrase used in America in the 1950s: “to talk turkey”, which means to speak bluntly about something. …if only there was a peer-reviewed journal of etymology.

Benefits of quitting smoking

I think we are all sold on the benefits of quitting smoking, but it’s worth mentioning one particular trial just to remind us. It was published in Chest in 2007, and the brilliance of this trial was that they assessed the outcomes of a smoking cessation intervention, rather than of actually quitting or not. Normally studies will tell us, say, the cardiovascular disease reduction in a group of people who quit smoking versus those who continued smoking. But this study assessed the cardiovascular outcomes of whether you gave an intervention to help people stop smoking or whether you didn’t, regardless of how many actually quit in each group.
They recruited 209 smokers who had just been admitted to hospital for either heart failure or a heart attack. Everyone got about half an hour of counselling about the harms of smoking and how to quit and were given a lot of written information prior to discharge from hospital. They were then randomised into two groups: An intensive treatment group who got a further 12 weeks of counselling plus pharmacological therapy to help them quit like nicotine replacement therapy or bupropion. The second group was a usual care group who didn’t get further intervention outside of that 30 minute counselling session in hospital.
At the end of 2 years, the mortality rate was 12% in the usual care group, but in the intensive smoking cessation group it was only 2.8%. So roughly a 10% absolute reduction in all-cause mortality after 2 years, just by providing an intervention to help patients quit. This is unheard of for any other intervention. Let’s compare it to aspirin, for example, because no doctor in the world would not be firm about taking aspirin after a heart attack (unless the patient couldn’t take it for whatever reason). Yet giving aspirin for 2 years after a heart attack leads to a 1.4% reduction in mortality. So this 10% is huge.
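That comparison is easy to check with basic NNT arithmetic (numbers as quoted above; note that 12% vs 2.8% is strictly a 9.2% absolute reduction, which gets rounded to 10% in the text):

```python
# NNT to prevent one death over 2 years = 1 / absolute risk reduction (ARR).
cessation_arr = 0.12 - 0.028  # 12% vs 2.8% mortality with the intensive programme
aspirin_arr = 0.014           # 1.4% mortality reduction with aspirin (as quoted)

print(round(1 / cessation_arr))  # 11 patients per life saved
print(round(1 / aspirin_arr))    # 71 patients per life saved
```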
What’s encouraging to me, is that the rates of smoking cessation at 2 years were not fantastic. 9% in the usual care group and 33% in the intensive treatment group. So it’s nice to know that even if the majority of patients are not quitting despite your constant nagging, overall, it’s still providing a very impressive benefit.

Cold Turkey Versus Gradual Quitting

A good quality randomised controlled trial has been conducted to see whether gradual smoking cessation is more or less effective than cold turkey. It was published in the Annals of Internal Medicine in May 2016.
They took 697 adult smokers who were smoking at least 15 cigarettes per day and were addicted. They made sure they were addicted by conducting a survey called the Fagerström Test for Nicotine Dependence (FTND), which asks things like: how soon after waking do you have your first cigarette? Do you find it hard to refrain from smoking in places it’s forbidden, like a library or church? Do you smoke even if you’re so sick that you stay in bed the whole day? It’s a score out of 10, where a score above 5 is considered moderately dependent.
They were randomised into 2 groups. All of the participants in both groups were told to set a quit date for 2 weeks’ time. The cold turkey group were told to smoke as normal and not reduce the number of cigarettes they smoked until that quit date. The gradual cessation group were told to aim to halve the number of cigarettes they smoked in the first week, halve them again in the second week, and then quit after the second week. The gradual cessation group were given nicotine replacement therapy during those 2 weeks as they were reducing their cigarettes. They could get either nicotine gum, nasal spray, mouth spray, lozenges, inhalers or sublingual tablets.
Both groups received counselling and support during the 2 weeks before the quit date.
After the quit date, both groups got a daily 21mg nicotine patch plus a short-acting nicotine replacement therapy that they could choose (inhaler, gum, mouth spray or lozenge).
They followed them up for 6 months and the primary outcome was how many were abstinent from cigarettes after 4 weeks. They confirmed abstinence by using a tool called the Russell Standard, which incorporates exhaled carbon monoxide concentrations to confirm abstinence. They also did this at 8 weeks and 6 months.


The average age of the patients was 49, half of them were men, and they were smoking an average of 20 cigarettes per day. The average FTND addiction score was 6 indicating that they were moderately addicted. 94% were white.
At 4 weeks, abstinence was achieved by 39% of those in the gradual cessation group, but in the cold turkey group this increased to 49%. So a 10% absolute increase in abstinence, making a number needed to treat of 10 in favour of cold turkey quitting.
By 6 months, as you’d expect with these things, the rates of abstinence were much lower – but they were still better for the cold turkey quitters: 15.5% in the gradual cessation group versus 22% in the cold turkey group, making a number needed to treat of around 16.
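Those NNTs fall straight out of the absolute difference in abstinence rates. A quick sketch (rates as reported above; the 6-month figure comes out at 15 or 16 depending on rounding):

```python
# NNT = 1 / absolute risk difference between the two groups.

def nnt(rate_intervention: float, rate_control: float) -> float:
    """People needed to treat for one extra success."""
    return 1 / (rate_intervention - rate_control)

# Abstinence rates: cold turkey vs gradual cessation
print(round(nnt(0.49, 0.39)))    # 10 at 4 weeks (49% vs 39%)
print(round(nnt(0.22, 0.155)))   # 15 at 6 months (22% vs 15.5%)
```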
They asked all the patients at the start of the study whether they would prefer gradual or sudden smoking cessation. Interestingly, even those who preferred gradual cessation were still more likely to quit if they stopped abruptly than if they stopped gradually.

Pharmacological treatment of smoking addiction 

There are 4 agents that have good evidence for improving smoking cessation rates in smokers who are willing to quit: Nicotine Replacement Therapy (NRT), nortriptyline, bupropion and varenicline (Champix or Chantix).

Nicotine Replacement Therapy

There’s been a Cochrane review on this. In fact, there’s a Cochrane review for each of these 4 medications.
Firstly, what would you predict the actual quit rate is for someone who is motivated to quit (i.e. they’re in the “action” phase of the Prochaska-DiClemente cycle of change)?
It’s around 10%! And this was consistent across the placebo arms of all of these Cochrane reviews. It just shows how addictive cigarettes are. As the quip attributed to Mark Twain goes: “Giving up smoking is easy, I’ve done it hundreds of times”.
The Cochrane review was able to find 117 RCTs, making up over 50,000 patients, assessing the benefits of nicotine replacement therapy.
In the no-intervention group, 10% were able to achieve abstinence, and this went up to 16% in the NRT group. And it didn’t seem to matter which form of NRT was used: gum, patches, inhalers, sprays or lozenges.
They had some really great analyses in the review, here are the highlights:
  • In studies that compared short duration of nicotine replacement therapy versus longer duration – there was no difference in abstinence rates. The authors of the Cochrane review recommend 8 weeks.
  • Using a combination of a long-acting NRT, like a patch, together with a short-acting NRT, like gum, spray or an inhaler (to control sudden cravings), seems to be better than either one alone. Abstinence was about 15% with just a long-acting or short-acting NRT, versus 20% when using a combination of both.
  • There was a slight benefit to starting the nicotine replacement therapy before the actual quit date rather than starting after.
  • The harms of NRT were mainly local reactions: skin irritation from the patch, hiccups and sore throat with the mouth spray, and bad taste with the gum. They couldn’t find any increase in cardiovascular disease, but they did find an increase in palpitations or chest pain, which occurred in 1.4% of the placebo group versus 2.6% of the NRT group.


Bupropion

The Cochrane review that looked into the efficacy of bupropion for smoking cessation found 44 RCTs with 13,700 patients.
The abstinence rate in the placebo group was 11.5%, and this went up to 18.7% with bupropion. So very similar to nicotine replacement therapy. In fact, in the handful of trials that directly compared bupropion to nicotine replacement therapy, there was no difference in effectiveness. The main side effects of bupropion are insomnia, which occurred in 25% compared to 15% with placebo, dry mouth and nausea. More alarmingly, it does increase seizures, but this is very rare, in the order of 1 in 1000; obviously, though, you wouldn’t want to give it to anyone with epilepsy.


Nortriptyline

This Cochrane review found 11 studies making up around 2,500 patients, and showed very similar results to the others: 10% abstinence with placebo versus 20% with nortriptyline. In the trials that compared it to nicotine replacement therapy, it was perhaps slightly better, but not statistically significantly so.
In terms of side effects, they are mainly the anticholinergic ones: sedation, dry mouth, constipation, difficulty urinating and blurred vision. The main fear with the tricyclic antidepressants is that they’re often fatal in overdose, so you’d need to be confident the patient doesn’t have any risk of suicidality.

Varenicline (Champix)

This Cochrane review found 27 RCTs making up around 12,000 patients. Those who attempted to quit smoking with placebo had an 11% chance of remaining abstinent in these trials, but if they quit using varenicline, this went up to 25%.
There have also been studies comparing varenicline to some of the other smoking cessation medications:
  • 8 trials compared varenicline to nicotine replacement therapy: abstinence was achieved in 19% with NRT versus 23.7% with varenicline. So the chance of successfully quitting was about 4.7% higher with varenicline than with nicotine replacement therapy.
  • 5 trials compared varenicline to bupropion, finding varenicline to be superior by about 6.5% – roughly 17% versus 24%.
In terms of adverse effects, the major concern has been neuropsychiatric harms: things like depression, suicide, strange behaviours and anger. In fact, in 2009 the FDA put a black box warning on Champix with regard to these adverse events. But in 2015 a large meta-analysis of 39 RCTs, specifically looking at these side effects, was published in the BMJ and found no difference in depression, suicidal ideation, attempted suicide, aggressive behaviours or irritability. Since then, there has also been a large, good-quality trial with 8,000 patients, of whom 4,000 had a psychiatric disorder, where the main aim was to detect any neuropsychiatric effects. This was called the EAGLES study, and it also couldn’t find any increase in psychiatric issues with varenicline. The harms it could find were: nausea, occurring in 25% of people; abnormal dreams, occurring in about 12%; and more fatigue and insomnia as well.

Bottom Line

Cold turkey seems to be the better approach to smoking cessation, achieving a 10% absolute increase in abstinence rates at 4 weeks over gradual cessation. The chance of achieving abstinence for a motivated person without any help is about 10%, and this can be roughly doubled with nicotine replacement therapy, bupropion, nortriptyline or varenicline (Champix). Varenicline seems superior to the other agents, achieving a 4–6% improvement in abstinence over NRT and bupropion.
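To put the four agents side by side, here’s a small sketch computing the absolute improvement and the implied NNT for each, using the placebo and treatment abstinence rates quoted from the Cochrane reviews above (the NNTs are my arithmetic, not figures reported in the reviews):

```python
# (placebo, treatment) abstinence rates from the Cochrane reviews above.
agents = {
    "NRT":           (0.10, 0.16),
    "bupropion":     (0.115, 0.187),
    "nortriptyline": (0.10, 0.20),
    "varenicline":   (0.11, 0.25),
}

for name, (placebo, drug) in agents.items():
    arr = drug - placebo        # absolute improvement in abstinence
    print(f"{name}: +{arr:.1%}, NNT {round(1 / arr)}")
```

On these numbers, varenicline comes out with the lowest NNT (about 7), consistent with the bottom line above.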


#15 Preventing allergies – the LEAP study

There’s no doubting that allergies are on the rise. We know it because when we were in school it was pretty rare, but for kids in school nowadays it’s all too common. The United States, which has been collecting data on the rates of peanut allergy over time, found that in 1997, 0.4% of people reported peanut allergy, and this had tripled by 2008 to 1.4%. Currently, it’s around 2%. Medicine has done a complete 360 in the way that it thinks about allergies, and it’s all thanks to the LEAP trial. (…or is it a 180?)


Could it be that expert guidelines have contributed to this massive rise in allergies?

Almost all guidelines, up until recently, have been recommending that we avoid giving babies any sort of allergic foods. The theory was that if babies don’t come into contact with their allergen early in life, they will be less likely to develop an allergy. The United Kingdom Department of Health commissioned a working group on allergies who issued the following recommendations in 1998: If the mother or father or any siblings of the baby have any sort of atopic disease (hay fever, asthma, eczema or allergies), they should avoid eating peanuts during pregnancy, avoid eating peanuts while breastfeeding and avoid giving any peanut products to the child until they are at least 3 years old!

Meanwhile, in the US, the American Academy of Paediatrics, in the year 2000, issued the same recommendations. Here’s a quote from the guidelines: “Mothers should eliminate peanuts and tree nuts (eg, almonds, walnuts, etc) and consider eliminating eggs, cow’s milk, fish, and perhaps other foods from their diets while nursing. Solid foods should not be introduced into the diet of high-risk infants until 6 months of age, with dairy products delayed until 1 year, eggs until 2 years, and peanuts, nuts, and fish until 3 years of age”

Amazing that there was universal agreement on this despite no evidence to back up these recommendations.

Unfortunately, making recommendations without any scientific evidence to back them up is all too common for guidelines. A group of researchers showed that only 6% of recommendations made by endocrinology guidelines were based on randomised controlled trial evidence, and Pierluigi Tricoci and colleagues (JAMA 2009) showed that for cardiology guidelines this was 11%. Now, that in itself is not the issue; there are a lot of things in our practice that don’t have randomised controlled trial data to guide our decisions. Fine. But here’s the problem: the cardiology study also found that 50% of recommendations were based on opinion only. So no evidence to back them up WHATSOEVER! Again, this is not necessarily a bad thing. What is completely unacceptable is that these recommendations are written with the exact same authority as the ones based on high-quality evidence. The same tone, the same language, the same style. And then they become absolute truths. It’s even more frightening when you consider that 50–80% of members of guideline committees have financial conflicts of interest (Neuman 2011). Guidelines need to be more humble when making recommendations based purely on expert opinion – they should change their wording to something like: “there is no evidence for this recommendation, but the committee felt that this was the best approach to manage this situation”. Then patients and doctors could exercise their judgment when applying these recommendations to the very nuanced clinical scenarios they face.

Back to peanuts. What if the recommendations to exclude dietary allergens early in life was actually harmful? What if it contributed to the huge rise in allergies we’ve been facing?

Early Evidence

In 2008, a team of researchers, led by George Du Toit, published an interesting observational study that got everyone thinking.

They surveyed 5,600 parents of Jewish kids in Israel to determine their rates of peanut allergy, and they also surveyed 5,100 parents of Jewish kids in the UK so that they could compare the difference in peanut allergy rates. The thinking was that, as they share a common heritage, any difference in peanut allergy rates would likely be due to environmental influences rather than genetic factors. They also surveyed both of these populations on how they weaned, when they introduced peanuts and when they introduced other solids to their child’s diet. They found that peanut allergies were more than 10 times as common in Jewish kids in the UK as in Jewish kids in Israel: the prevalence was 1.85% in the UK compared to only 0.17% in Israel. So what’s the difference between these two populations that led to such a massive discrepancy in the rates of peanut allergy?

Well, interestingly, while parents in the UK were not giving their children any peanut products, presumably to comply with the guidelines, in Israel parents were giving their babies heaps of peanut products, and most babies had been introduced to peanuts by 7 months of age. It turns out that Israel has a peanut snack called Bamba – it’s like a dissolving cheese puff, similar to Cheeze Doodles, but made from peanuts – and it’s marketed in Israel to babies as well as adults.

Interestingly, the rates of egg and milk allergy were pretty similar between the two groups, which reflects the fact that both populations introduced those foods into the diet at similar times.

The researchers who did this study got thinking: perhaps we got it wrong – perhaps the early introduction of allergenic foods protects kids from developing allergies to those foods?

LEAP trial

This led those same researchers to conduct the LEAP study (Learning Early About Peanut Allergy).


This was a very well designed randomised controlled trial. It was “open-label”, meaning it was not blinded, which is reasonable given that it would be very difficult to blind parents who are giving their children regular peanut products. The peanut product they used was the Bamba snack, and if babies couldn’t tolerate that they could use smooth peanut butter. The study did not receive any money from the manufacturers of either product. They recruited 640 babies between the ages of 4 and 11 months and randomised them either to receive regular peanuts (Bamba) – 6 grams of peanut protein every week, divided into 3 meals, until the age of 5 – or to avoid peanut products entirely until the age of 5. Now the authors didn’t call it five years old…they called it 60 months old. I just translated it for you. I’m good at that because I’m always translating my wife: “how old’s your boy?” “29 months” my wife would say. And as I stand there, watching them looking like they’re trying to solve a calculus equation, I swoop in with: “2 and a half” and watch the sigh of relief come over their face. These babies were all at very high risk of developing an allergy – they had to have either an already established egg allergy or severe eczema.

When it comes to allergy prevention there’s “primary prevention” and “secondary prevention”. Primary prevention is when the child has no evidence of an IgE-mediated reaction to the allergen – in other words, a completely negative skin prick test or RAST test. Secondary prevention is when a child does have an IgE-mediated reaction to the allergen – a positive skin prick test or RAST test – but you want to prevent them developing a clinical allergy (i.e. a rash, tissue swelling, angioedema or anaphylaxis when actually eating the allergen).

The researchers were keen to find out if the early introduction of peanuts could prevent the development of an allergy in both primary and secondary prevention. So for each of the 640 babies in the study, they first did a peanut allergy skin prick test. They then divided the babies into two groups: those that had absolutely no reaction to the skin prick test and those that did. A reaction had to be a wheal greater than 1mm in diameter but less than 4mm; anyone with a wheal greater than 4mm in diameter was excluded.

Anyone who was randomised to the peanut group first had to have a peanut challenge – this is where they gave the babies peanut products under strict clinical monitoring, with resuscitation equipment on standby. If they had an allergic reaction to the peanut challenge they were told to avoid peanuts; if they didn’t, they could continue with the study. Interestingly, 87% of those who had a positive skin prick test did not react to peanuts when given the food challenge.

At the end of the study, all 640 babies got a peanut challenge after they turned 5. The researchers were keen to find out how many babies in each group developed a proper peanut allergy.


The average age of the baby was just over 7 and a half months.  98 of the 640 kids did have a positive skin prick test and the rest did not. The families in the peanut group were very good at giving their babies peanut products – with an average of 7.7 grams of peanut protein per week across the group.

So what did they find?

Of those who were told to avoid peanuts, 17.2% had a confirmed peanut allergy by 5 years of age. Of those who were given peanuts from an early age, only 3.2% developed a peanut allergy. So there was a more than 5-fold increase in peanut allergies in those who avoided them – or, to put it another way, an 82% relative reduction in peanut allergies with early and regular exposure to peanuts.

These were the results of the intention-to-treat analysis. They also did a per-protocol analysis, where they excluded anyone who didn’t follow the protocol – for example because they reacted to the initial food challenge and so couldn’t receive peanuts despite being randomised to the peanut group. As you’d expect with this sort of analysis, the results were even more impressive: a similar proportion of those who avoided peanuts developed an allergy – 17.3% – but far fewer in the peanut exposure group – 0.3% – making it a 57-fold decrease in the rate of peanut allergy with early exposure, or a relative risk reduction of 98%.
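These relative risk figures follow directly from the raw event rates. A minimal sketch of the arithmetic in Python, using the rates as reported above (the function name is just illustrative):

```python
def relative_risk_reduction(control_risk, treatment_risk):
    """RRR = 1 - (risk in treatment group / risk in control group)."""
    return 1 - treatment_risk / control_risk

# Intention-to-treat: 17.2% allergy with avoidance vs 3.2% with early exposure
itt_rrr = relative_risk_reduction(0.172, 0.032)   # ~0.81, i.e. the ~82% reduction quoted
itt_fold = 0.172 / 0.032                          # ~5.4, the "more than 5-fold" difference

# Per-protocol: 17.3% with avoidance vs 0.3% with early exposure
pp_rrr = relative_risk_reduction(0.173, 0.003)    # ~0.98, i.e. the 98% reduction
pp_fold = 0.173 / 0.003                           # ~57-fold difference

print(itt_rrr, itt_fold, pp_rrr, pp_fold)
```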

They also separated the results based on whether there was a positive skin prick test initially or not. And the results were very similar.

Other Research

The authors of the LEAP study did a follow on study after this, which they called the LEAP-on study. They followed these children up for another 12 months (or 1 year for all those engaging in calculus) to see whether the benefits persisted and they did.

There have been other studies as well, mostly of peanut and egg allergy, all showing early exposure to be beneficial. One particular study, also published in the NEJM in 2016, randomised 1,300 babies to start allergenic foods either at 3 months or at 6 months of age. The foods included milk, peanuts, eggs, fish and wheat. Here they found that starting at 3 months was better than starting at 6 months: 2.4% developed any allergy in the 3-month exposure group versus 7.3% in the 6-month exposure group.
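In absolute terms, 7.3% versus 2.4% is a risk difference of about 4.9 percentage points, which corresponds to roughly one allergy prevented for every 21 babies given early introduction. A quick sketch of that arithmetic (the helper names are just illustrative):

```python
import math

def absolute_risk_reduction(control_risk, treatment_risk):
    """Absolute risk reduction: the plain difference in event rates."""
    return control_risk - treatment_risk

def number_needed_to_treat(control_risk, treatment_risk):
    """NNT = 1 / ARR, rounded up to a whole person."""
    return math.ceil(1 / absolute_risk_reduction(control_risk, treatment_risk))

arr = absolute_risk_reduction(0.073, 0.024)   # ~0.049, i.e. ~4.9 percentage points
nnt = number_needed_to_treat(0.073, 0.024)    # 21 babies per allergy prevented
```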

Bottom Line

Among kids at high risk of atopic disease, introducing peanuts into the diet before 11 months of age and giving them regularly results in an 86% reduction in the development of peanut allergy compared with avoiding peanuts altogether. The benefit holds true even in children who have a positive reaction to peanut on skin prick testing. Guidelines should be more upfront when making recommendations that have no basis in evidence – being upfront may have prevented some of the massive increase in peanut allergies that we have witnessed.


#14 Evidence for low fat diet and decreasing saturated fat – PURE Study

Almost all guidelines recommend that we reduce dietary fat and restrict saturated fat. The PURE study has called these recommendations into question, and in this episode we explore the evidence.

It’s very hard to find a guideline that does not recommend reducing total fat and saturated fat intake:

  • The Australian Heart foundation guidelines
  • The heart association guidelines
  • The Australian dietary guidelines
  • The RACGP and Diabetes Australia diabetes guidelines
  • The American Heart Association Guidelines
  • The NICE cardiovascular disease guidelines
  • The World Health Organisation healthy diet guidelines
  • The World Heart Federation guidelines

…and the list goes on and on. It seems that every single evidence-based guideline has made a statement about limiting fat intake and avoiding saturated fats – but are these statements evidence based? A paper from the PURE study, published in the Lancet, has resurfaced this controversy, and in this episode we will explore the evidence.

Ecological Evidence

Saturated fats are found in eggs, animal meats and dairy products like milk, cream, cheese and butter. The recommendation to avoid saturated fats is so well known and so widely adopted that you’d think there was pretty solid evidence to back it. Well, you might be surprised – the recommendation to decrease fat and avoid saturated fat actually came from pretty weak evidence from ecological studies. This is where you look at different populations, see what they eat and count their heart attacks; if a population is having more heart attacks, you see if you can blame something in their diet. The most famous of these studies was the Seven Countries Study by Ancel Keys. Here, Keys showed that the countries that ate the most saturated fat had the most heart attacks. Fascinatingly, he actually collected data from 21 countries but only reported on 7. When a researcher called Jacob Yerushalmy analysed the data from all 21 countries, the association was no longer there. But it was too late, so to speak – the cat was out of the bag. Fats were deemed bad. And saturated fats…worse.

For a more in-depth discussion of the history of fat, diet and the guidelines, the book “Good Calories, Bad Calories” by Gary Taubes goes into a lot more detail. If you can handle his passive-aggressive, salesman-type tone, give it a read.

Surrogate Marker Studies

The second piece of evidence, used a few decades ago to launch the saturated fat recommendations, is that eating foods high in saturated fat increases cholesterol. (Actually, while it increases LDL it also increases HDL, and some studies suggest it produces a more favourable HDL:LDL ratio.) Does that matter, though? The reason we care about cholesterol is its link to cardiovascular disease – it is a surrogate marker for cardiovascular disease. But there are plenty of drugs that lower cholesterol yet have no impact on cardiovascular disease, so the fact that something lowers cholesterol doesn’t always mean it is good for us. Not only that, but interventions like the Mediterranean diet reduce cardiovascular disease without having any impact on cholesterol levels. So it’s best we use evidence that assesses the impact of low-fat and low-saturated-fat diets directly on cardiovascular disease, rather than on surrogates like cholesterol.

Cohort Studies

Since the Seven Countries Study, studies that have tried to back up the claim that saturated fats are bad have failed to come through with the goods.

The next level of evidence, up from ecological studies, are cohort studies – this is where you take a population, ask them how much fat they eat (among other things), and then follow them up to see how many got cardiovascular disease and whether it was associated with their diet.

A recent review of all of the studies that used this approach was published in the BMJ in 2015 (reference below). They found that there was NO association between saturated fats and death, heart attacks, strokes or diabetes.

Randomised Controlled Trials

The highest level of evidence is the randomised controlled trial, where you actually randomise a group of people to either reduce their saturated fat intake or keep it the same. By far the biggest of these was the Women’s Health Initiative – if you look at systematic reviews like the Cochrane review, this trial contributed about 60% of the power of the review. It is a complicated trial that needs its own episode, but in one of its arms, 50,000 women were randomised to two groups: the first simply continued their usual diet, while the second got a very intensive program of dieticians and education to reduce their total fat and saturated fat intake. They were followed up for 8 years, and there was absolutely no difference in cardiovascular disease whether they continued their usual diet or reduced their fat and saturated fat.

The PURE Study

The PURE study is the latest to stab this saturated fat theory in the back. It’s a huge prospective cohort study – and probably the highest quality one we have to date.

They recruited 156,424 people and got them to complete surveys at the start of the study and then every 3 years after that.  They were interested in things like smoking, physical activity, medications, socioeconomic things like education and income and a full medical history.

They recruited people from 18 countries getting a good mix of  third world and first world countries. This is what sets it apart from other studies.

They recruited from three high-income countries (like Canada and Sweden), 11 middle-income countries (like Brazil, China, Poland and South Africa) and four low-income countries (like India and Pakistan), based on the World Bank classification.

They also tried to get the most accurate possible picture of what each person was eating. They did this by having participants fill out food questionnaires about their diet. Every country had its own questionnaire specific to that country, but the questionnaires were standardised across countries. The problem with this method is that there is good evidence that food questionnaires are often not an entirely accurate depiction of what someone is actually eating. Imagine if I asked you what sorts of foods you eat – how different would your answer be in a good mood compared to a bad mood? Your diet might not be different, but feeling guilty and self-loathing, you might inflate all the bad things you eat; if you’d just come out of the gym feeling amazing, you might inflate all the fruits and vegetables. And then there’s the problem of recall – who can remember how many times they added salt to their food? And how are you to know how much butter or oil was in that eggs Benedict you ordered at the restaurant? This is one of the big limitations of any diet study, be it a cohort study or even a randomised controlled trial. To verify the accuracy of the answers, they got a sample of about 50-250 participants from each country to keep a 24-hour food diary every now and then. That way they could compare actual food intake to what was written on the food questionnaire – and try to adjust for the difference.

The PURE study is still ongoing. They are following this population continuously and publishing studies about them left, right and centre. The specific analysis of the PURE data we’re looking at today was titled: “Associations of fats and carbohydrate intake with cardiovascular disease and mortality in 18 countries from five continents (PURE): a prospective cohort study”. It was published in the Lancet in August 2017.

Of the 156,424 people in the PURE cohort, only 135,335 could be included in this study, because the rest were missing required data. They also excluded people with a history of cardiovascular disease. The average follow-up was 7 and a half years.

The aim of the study was to see if they could make any associations between diet and cardiovascular disease. They followed each patient up every single year to see if they had a heart attack or a stroke or any other cardiovascular outcomes.

They calculated the proportion of carbohydrates, fats and protein in everyone’s diet. They did this by converting the foods they ate into nutrients. They then split up the population into quintiles for each of these macronutrients. So for example, with saturated fat intake, they split the population into 5 groups based on the proportion of saturated fat they were eating. The top 20% – who ate the most saturated fats –  were put in the highest quintile and the 20% who ate the least amount of saturated fat were put in the lowest quintile and so on and so forth. They did this for all the macronutrients. And then for each quintile, they checked to see how many died, how many got strokes, heart attacks and so on.
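The quintile split described above is easy to picture in code. Here is a minimal sketch in plain Python – the toy numbers are made up for illustration, not drawn from the PURE dataset:

```python
def assign_quintiles(values):
    """Rank participants by a value (e.g. % of energy from saturated fat)
    and split them into five equal-sized groups (quintiles)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    quintile = [0] * n
    for rank, i in enumerate(order):
        quintile[i] = min(rank * 5 // n, 4) + 1  # 1 = lowest intake, 5 = highest
    return quintile

# toy data: % of total energy from saturated fat for ten participants
sat_fat = [2.8, 4.9, 7.1, 9.5, 13.2, 3.0, 5.0, 7.0, 9.0, 13.0]
print(assign_quintiles(sat_fat))  # → [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
```

Each participant ends up labelled with their quintile; outcomes like death, stroke and heart attack can then be tallied per quintile, which is exactly the comparison the PURE analysis reports.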


Saturated Fats

Put that skinny flat white down and take a seat, because you could be in for a bit of a shock. The recommendation is that we get less than 6-10% of our total energy intake from saturated fat (it differs slightly depending on the guideline). The lowest quintile of saturated fat consumers in this study were getting 2.8% of their total energy intake from saturated fat – so they were well and truly within all the guidelines. And they were the most likely group to die! They were also the most likely group to have a stroke! And while it wasn’t statistically significant, they seemed to be at the greatest risk of all the other cardiovascular outcomes too.

There was no difference between any of the other quintiles; only this lowest quintile of saturated fat consumers was at higher risk. So it didn’t matter whether you were getting 13.2%, 9.5%, 7.1% or 4.9% of your total energy from saturated fat – there was no difference in death or cardiovascular disease. But if you were eating a really low amount – 2.8% – your risk significantly increased. All-cause mortality was 7.2% in the lowest saturated fat eaters and fell to around 4.5% as the amount of saturated fat in the diet increased.

Total Fats

Again, the lower your total fat intake, the more likely you were to die or have a stroke. The highest quintile of fat eaters were getting 35.3% of their energy from fat, and they were the least likely to die. Death and stroke were more common when total fat made up 10% of the diet than when it made up 35.3%.

Carbohydrates

This is where things get interesting. When I was in primary school there was a national program to teach kids about the food pyramid. On the bottom of this food pyramid – the foods we should be eating the most – were all carbohydrate foods – breads, cereals and pasta. Well, what does the PURE study have to say about that? It showed that as your intake of carbohydrates increased, your risk of death and cardiovascular disease also increased. And if you look at the graph it really looks like it’s in a linear fashion. So in the lowest quintile of carbohydrates eaters (those who obtained 46.4% of their total calories from carbs) – 4.1% died during the follow-up period. But among those who ate the most carbohydrates (77.2% of their total calories) – 7.2% of them died.

Other Macronutrients

Interestingly, animal protein intake was associated with lower rates of mortality.


  • Why do we insist on making recommendations when we don’t have sound evidence to support them – especially when they involve a radical change in lifestyle? To use a different example: until recently, most dietary guidelines strongly recommended that parents avoid giving their babies allergenic foods. The American Academy of Pediatrics told mothers not to eat peanuts during pregnancy or while breastfeeding, and not to give them to their child until the age of 3. Then the LEAP trial came along in 2015 and showed that, actually, the earlier you give peanut products to babies, the less likely they are to develop allergies. It is clear that these recommendations had actually caused a lot of food allergies. The difference is that the paediatric nutrition guidelines were quick to adjust their recommendations after the LEAP trial came out, whereas the total fat and saturated fat recommendations don’t show any signs of slowing down, despite an increasing body of evidence that we were probably wrong.
  • This isn’t perfect evidence. Nothing beats a randomised controlled trial for telling us whether something is causative, but it’s almost impossible to do a good-quality randomised controlled trial on diet. Firstly, blinding is impossible. Secondly, if I told you to stop eating saturated fats, or to start eating much more saturated fat, would you be able to do that? For 5 years? It’s very tricky. This is one area of medicine where you could argue that cohort studies are probably better at giving us answers than randomised controlled trials, just because it’s so hard to control the bias in RCTs. But even so, the randomised controlled trials we do have find no benefit from a diet low in saturated fat. And it would be very unusual for cohort studies to miss an association if saturated fat were as bad as the guidelines suggest. When you look at cohort studies of smoking, rates of lung cancer were 8-10 times higher in smokers – a very robust association. But here, cohort studies just don’t find an association.
  • One might argue that people who are health conscious exercise more, smoke less and choose their diet carefully, skewing the results. But if anything, this would skew the results towards low-fat diets looking better: health-conscious people eat less fat and less saturated fat, because that’s what society has been telling them to do, so if the people in the low-fat group are also exercising more, the low-fat results should look even more impressive than they really are.
  • Perhaps people who can’t afford fatty meats are more likely to die not because they eat less fat, but because they don’t have money to pay for healthcare, etc. It’s an interesting potential bias, so the researchers did additional analyses adjusting for socioeconomic status – and found the same results.

Bottom Line

This is a direct quote from the conclusion of the study:

Global dietary guidelines should be reconsidered in light of the consistency of findings from the present study, with the conclusions from meta-analyses of other observational studies and the results of recent randomised controlled trials.

References and Links