Biblio of the month: November 2018
After this short back-to-school month, it is time to bring you the new biblio of the month!
This one is packed: the rest of the studies presented at ESICM, with their wealth of negative trials, but also some very interesting anesthesia studies, including machine learning papers for the initiated.
We have not forgotten studies on cardiac arrest and ventilation, nor infectious diseases, with a review on inhaled antibiotics.
In short, it is Christmas before Christmas: there will be something for everyone!
Don't forget to follow us and take part in our exclusive contest with Arnette: here!
And come with us to Toulouse next week to specialize in the perioperative management of transplant patients: information here!
Yes indeed, you are spoiled!
A false belief about PPIs in the ICU?
Krag et al., NEJM, 2018
DOI: 10.1056/NEJMoa1714919
Background
Prophylaxis for gastrointestinal stress ulceration is frequently given to patients in the intensive care unit (ICU), but its risks and benefits are unclear.
Methods
In this European, multicenter, parallel-group, blinded trial, we randomly assigned adults who had been admitted to the ICU for an acute condition (i.e., an unplanned admission) and who were at risk for gastrointestinal bleeding to receive 40 mg of intravenous pantoprazole (a proton-pump inhibitor) or placebo daily during the ICU stay. The primary outcome was death by 90 days after randomization.
Results
A total of 3298 patients were enrolled; 1645 were randomly assigned to the pantoprazole group and 1653 to the placebo group. Data on the primary outcome were available for 3282 patients (99.5%). At 90 days, 510 patients (31.1%) in the pantoprazole group and 499 (30.4%) in the placebo group had died (relative risk, 1.02; 95% confidence interval [CI], 0.91 to 1.13; P=0.76). During the ICU stay, at least one clinically important event (a composite of clinically important gastrointestinal bleeding, pneumonia, Clostridium difficile infection, or myocardial ischemia) had occurred in 21.9% of patients assigned to pantoprazole and 22.6% of those assigned to placebo (relative risk, 0.96; 95% CI, 0.83 to 1.11). In the pantoprazole group, 2.5% of patients had clinically important gastrointestinal bleeding, as compared with 4.2% in the placebo group. The number of patients with infections or serious adverse reactions and the percentage of days alive without life support within 90 days were similar in the two groups.
Conclusions
Among adult patients in the ICU who were at risk for gastrointestinal bleeding, mortality at 90 days and the number of clinically important events were similar in those assigned to pantoprazole and those assigned to placebo.
Nutrition of mechanically ventilated patients: hypercaloric or normocaloric?
The TARGET Investigators. NEJM, 2018
Background
The effect of delivering nutrition at different calorie levels during critical illness is uncertain, and patients typically receive less than the recommended amount.
Methods
We conducted a multicenter, double-blind, randomized trial, involving adults undergoing mechanical ventilation in 46 Australian and New Zealand intensive care units (ICUs), to evaluate energy-dense (1.5 kcal per milliliter) as compared with routine (1.0 kcal per milliliter) enteral nutrition at a dose of 1 ml per kilogram of ideal body weight per hour, commencing at or within 12 hours of the initiation of nutrition support and continuing for up to 28 days while the patient was in the ICU. The primary outcome was all-cause mortality within 90 days.
Results
There were 3957 patients included in the modified intention-to-treat analysis (1971 in the 1.5-kcal group and 1986 in the 1.0-kcal group). The volume of enteral nutrition delivered during the trial was similar in the two groups; however, patients in the 1.5-kcal group received a mean (±SD) of 1863±478 kcal per day as compared with 1262±313 kcal per day in the 1.0-kcal group (mean difference, 601 kcal per day; 95% confidence interval [CI], 576 to 626). By day 90, a total of 523 of 1948 patients (26.8%) in the 1.5-kcal group and 505 of 1966 patients (25.7%) in the 1.0-kcal group had died (relative risk, 1.05; 95% CI, 0.94 to 1.16; P=0.41). The results were similar in seven predefined subgroups. Higher calorie delivery did not affect survival time, receipt of organ support, number of days alive and out of the ICU and hospital or free of organ support, or the incidence of infective complications or adverse events.
Conclusions
In patients undergoing mechanical ventilation, the rate of survival at 90 days associated with the use of an energy-dense formulation for enteral delivery of nutrition was not higher than that with routine enteral nutrition.
Haloperidol or ziprasidone for ICU delirium?
DOI: 10.1056/NEJMoa1808217
Background
There are conflicting data on the effects of antipsychotic medications on delirium in patients in the intensive care unit (ICU).
Methods
In a randomized, double-blind, placebo-controlled trial, we assigned patients with acute respiratory failure or shock and hypoactive or hyperactive delirium to receive intravenous boluses of haloperidol (maximum dose, 20 mg daily), ziprasidone (maximum dose, 40 mg daily), or placebo. The volume and dose of a trial drug or placebo was halved or doubled at 12-hour intervals on the basis of the presence or absence of delirium, as detected with the use of the Confusion Assessment Method for the ICU, and of side effects of the intervention. The primary end point was the number of days alive without delirium or coma during the 14-day intervention period. Secondary end points included 30-day and 90-day survival, time to freedom from mechanical ventilation, and time to ICU and hospital discharge. Safety end points included extrapyramidal symptoms and excessive sedation.
Results
Written informed consent was obtained from 1183 patients or their authorized representatives. Delirium developed in 566 patients (48%), of whom 89% had hypoactive delirium and 11% had hyperactive delirium. Of the 566 patients, 184 were randomly assigned to receive placebo, 192 to receive haloperidol, and 190 to receive ziprasidone. The median duration of exposure to a trial drug or placebo was 4 days (interquartile range, 3 to 7). The median number of days alive without delirium or coma was 8.5 (95% confidence interval [CI], 5.6 to 9.9) in the placebo group, 7.9 (95% CI, 4.4 to 9.6) in the haloperidol group, and 8.7 (95% CI, 5.9 to 10.0) in the ziprasidone group (P=0.26 for overall effect across trial groups). The use of haloperidol or ziprasidone, as compared with placebo, had no significant effect on the primary end point (odds ratios, 0.88 [95% CI, 0.64 to 1.21] and 1.04 [95% CI, 0.73 to 1.48], respectively). There were no significant between-group differences with respect to the secondary end points or the frequency of extrapyramidal symptoms.
Conclusions
The use of haloperidol or ziprasidone, as compared with placebo, in patients with acute respiratory failure or shock and hypoactive or hyperactive delirium in the ICU did not significantly alter the duration of delirium.
The POLAR study: prophylactic hypothermia for patients with traumatic brain injury?
Importance After severe traumatic brain injury, induction of prophylactic hypothermia has been suggested to be neuroprotective and improve long-term neurologic outcomes.
Objective To determine the effectiveness of early prophylactic hypothermia compared with normothermic management of patients after severe traumatic brain injury.
Design, Setting, and Participants The Prophylactic Hypothermia Trial to Lessen Traumatic Brain Injury–Randomized Clinical Trial (POLAR-RCT) was a multicenter randomized trial in 6 countries that recruited 511 patients both out-of-hospital and in emergency departments after severe traumatic brain injury. The first patient was enrolled on December 5, 2010, and the last on November 10, 2017. The final date of follow-up was May 15, 2018.
Interventions There were 266 patients randomized to the prophylactic hypothermia group and 245 to normothermic management. Prophylactic hypothermia targeted the early induction of hypothermia (33°C-35°C) for at least 72 hours and up to 7 days if intracranial pressures were elevated, followed by gradual rewarming. Normothermia targeted 37°C, using surface-cooling wraps when required. Temperature was managed in both groups for 7 days. All other care was at the discretion of the treating physician.
Main Outcomes and Measures The primary outcome was favorable neurologic outcomes or independent living (Glasgow Outcome Scale–Extended score, 5-8 [scale range, 1-8]) obtained by blinded assessors 6 months after injury.
Results Among 511 patients who were randomized, 500 provided ongoing consent (mean age, 34.5 years [SD, 13.4]; 402 men [80.2%]) and 466 completed the primary outcome evaluation. Hypothermia was initiated rapidly after injury (median, 1.8 hours [IQR, 1.0-2.7 hours]) and rewarming occurred slowly (median, 22.5 hours [IQR, 16-27 hours]). Favorable outcomes (Glasgow Outcome Scale–Extended score, 5-8) at 6 months occurred in 117 patients (48.8%) in the hypothermia group and 111 (49.1%) in the normothermia group (risk difference, 0.4% [95% CI, –9.4% to 8.7%]; relative risk with hypothermia, 0.99 [95% CI, 0.82-1.19]; P = .94). In the hypothermia and normothermia groups, the rates of pneumonia were 55.0% vs 51.3%, respectively, and rates of increased intracranial bleeding were 18.1% vs 15.4%, respectively.
Conclusions and Relevance Among patients with severe traumatic brain injury, early prophylactic hypothermia compared with normothermia did not improve neurologic outcomes at 6 months. These findings do not support the use of early prophylactic hypothermia for patients with severe traumatic brain injury.
Low versus intermediate tidal volume in ICU patients without ARDS
PReVENT Investigators, JAMA 2018
Importance It remains uncertain whether invasive ventilation should use low tidal volumes in critically ill patients without acute respiratory distress syndrome (ARDS).
Objective To determine whether a low tidal volume ventilation strategy is more effective than an intermediate tidal volume strategy.
Design, Setting, and Participants A randomized clinical trial, conducted from September 1, 2014, through August 20, 2017, including patients without ARDS expected to not be extubated within 24 hours after start of ventilation from 6 intensive care units in the Netherlands.
Interventions Invasive ventilation using low tidal volumes (n = 477) or intermediate tidal volumes (n = 484).
Main Outcomes and Measures The primary outcome was the number of ventilator-free days and alive at day 28. Secondary outcomes included length of ICU and hospital stay; ICU, hospital, and 28- and 90-day mortality; and development of ARDS, pneumonia, severe atelectasis, or pneumothorax.
Results In total, 961 patients (65% male), with a median age of 68 years (interquartile range [IQR], 59-76), were enrolled. At day 28, 475 patients in the low tidal volume group had a median of 21 ventilator-free days (IQR, 0-26), and 480 patients in the intermediate tidal volume group had a median of 21 ventilator-free days (IQR, 0-26) (mean difference, –0.27 [95% CI, –1.74 to 1.19] P = .71). There was no significant difference in ICU (median, 6 vs 6 days; 0.39 [–1.09 to 1.89] ; P = .58) and hospital (median, 14 vs 15 days; –0.60 [–3.52 to 2.31]; P = .68) length of stay or 28-day (34.9% vs 32.1%; hazard ratio [HR], 1.12 [0.90 to 1.40]; P = .30) and 90-day (39.1% vs 37.8%; HR, 1.07 [0.87 to 1.31]; P = .54) mortality. There was no significant difference in the percentage of patients developing the following adverse events: ARDS (3.8% vs 5.0%; risk ratio [RR], 0.86 [0.59 to 1.24]; P = .38), pneumonia (4.2% vs 3.7%; RR, 1.07 [0.78 to 1.47]; P = .67), severe atelectasis (11.4% vs 11.2%; RR, 1.00 [0.81 to 1.23]; P = .94), and pneumothorax (1.8% vs 1.3%; RR, 1.16 [0.73 to 1.84]; P = .55).
Conclusions and Relevance In patients in the ICU without ARDS who were expected not to be extubated within 24 hours of randomization, a low tidal volume strategy did not result in a greater number of ventilator-free days than an intermediate tidal volume strategy.
The HIGH study: Optiflow for immunocompromised patients?
Importance High-flow nasal oxygen therapy is increasingly used for acute hypoxemic respiratory failure (AHRF).
Objective To determine whether high-flow oxygen therapy decreases mortality among immunocompromised patients with AHRF compared with standard oxygen therapy.
Design, Setting, and Participants The HIGH randomized clinical trial enrolled 776 adult immunocompromised patients with AHRF (Pao2 <60 mm Hg or Spo2 <90% on room air, or tachypnea >30/min or labored breathing or respiratory distress, and need for oxygen ≥6 L/min) at 32 intensive care units (ICUs) in France between May 19, 2016, and December 31, 2017.
Interventions Patients were randomized 1:1 to continuous high-flow oxygen therapy (n = 388) or to standard oxygen therapy (n = 388).
Main Outcomes and Measures The primary outcome was day-28 mortality. Secondary outcomes included intubation and mechanical ventilation by day 28, Pao2:Fio2 ratio over the 3 days after intubation, respiratory rate, ICU and hospital lengths of stay, ICU-acquired infections, and patient comfort and dyspnea.
Results Of 778 randomized patients (median age, 64 [IQR, 54-71] years; 259 [33.3%] women), 776 (99.7%) completed the trial. At randomization, median respiratory rate was 33/min (IQR, 28-39) vs 32 (IQR, 27-38) and Pao2:Fio2 was 136 (IQR, 96-187) vs 128 (IQR, 92-164) in the intervention and control groups, respectively. Median SOFA score was 6 (IQR, 4-8) in both groups. Mortality on day 28 was not significantly different between groups (35.6% vs 36.1%; difference, −0.5% [95% CI, −7.3% to +6.3%]; hazard ratio, 0.98 [95% CI, 0.77 to 1.24]; P = .94). Intubation rate was not significantly different between groups (38.7% vs 43.8%; difference, −5.1% [95% CI, −12.3% to +2.0%]). Compared with controls, patients randomized to high-flow oxygen therapy had a higher Pao2:Fio2 (150 vs 119; difference, 19.5 [95% CI, 4.4 to 34.6]) and lower respiratory rate after 6 hours (25/min vs 26/min; difference, −1.8/min [95% CI, −3.2 to −0.2]). No significant difference was observed in ICU length of stay (8 vs 6 days; difference, 0.6 [95% CI, −1.0 to +2.2]), ICU-acquired infections (10.0% vs 10.6%; difference, −0.6% [95% CI, −4.6 to +4.1]), hospital length of stay (24 vs 27 days; difference, −2 days [95% CI, −7.3 to +3.3]), or patient comfort and dyspnea scores.
Conclusions and Relevance Among critically ill immunocompromised patients with acute respiratory failure, high-flow oxygen therapy did not significantly decrease day-28 mortality compared with standard oxygen therapy.
Effect of a recombinant alkaline phosphatase on creatinine clearance in sepsis: the STOP-AKI study
Pickkers et al., JAMA, 2018
https://jamanetwork.com/journals/jama/article-abstract/2710776
Simulation to prevent work-related stress: does it work for ICU nursing staff?
Importance Nurses working in an intensive care unit (ICU) are exposed to occupational stressors that can increase the risk of stress reactions, long-term absenteeism, and turnover.
Objective To evaluate the effects of a program including simulation in reducing work-related stress and work-related outcomes among ICU nurses.
Design, Setting, and Participants Multicenter randomized clinical trial performed at 8 adult ICUs in France from February 8, 2016, through April 29, 2017. A total of 198 ICU nurses were included and followed up for 1 year until April 30, 2018.
Interventions The ICU nurses who had at least 6 months of ICU experience were randomized to the intervention group (n = 101) or to the control group (n = 97). The nurses randomized to the intervention group received a 5-day course involving a nursing theory recap and situational role-play using simulated scenarios (based on technical dexterity, clinical approach, decision making, aptitude to teamwork, and task prioritization), which were followed by debriefing sessions on attitude and discussion of practices.
Main Outcomes and Measures The primary outcome was the prevalence of job strain assessed by combining a psychological demand score greater than 21 (score range, 9 [best] to 36 [worst]) with a decision latitude score less than 72 (score range, 24 [worst] to 96 [best]) using the Job Content Questionnaire and evaluated at 6 months. There were 7 secondary outcomes including absenteeism and turnover.
Results Among 198 ICU nurses who were randomized (95 aged ≤30 years [48%] and 115 women [58%]), 182 (92%) completed the trial for the primary outcome. The trial was stopped for efficacy at the scheduled interim analysis after enrollment of 198 participants. The prevalence of job strain at 6 months was lower in the intervention group than in the control group (13% vs 67%, respectively; between-group difference, 54% [95% CI, 40%-64%]; P < .001). Absenteeism during the 6-month follow-up period was 1% in the intervention group compared with 8% in the control group (between-group difference, 7% [95% CI, 1%-15%]; P = .03). Four nurses (4%) from the intervention group left the ICU during the 6-month follow-up period compared with 12 nurses (12%) from the control group (between-group difference, 8% [95% CI, 0%-17%]; P = .04).
Conclusions and Relevance Among ICU nurses, an intervention that included education, role-play, and debriefing resulted in a lower prevalence of job strain at 6 months compared with nurses who did not undergo this program. Further research is needed to understand which components of the program may have contributed to this result and to evaluate whether this program is cost-effective.
Bayesian analysis of early ECMO for severe ARDS: a mortality benefit?
Digestive decontamination against bloodstream infections with multidrug-resistant gram-negative bacilli?
Importance The effects of chlorhexidine (CHX) mouthwash, selective oropharyngeal decontamination (SOD), and selective digestive tract decontamination (SDD) on patient outcomes in ICUs with moderate to high levels of antibiotic resistance are unknown.
Objective To determine associations between CHX 2%, SOD, and SDD and the occurrence of ICU-acquired bloodstream infections with multidrug-resistant gram-negative bacteria (MDRGNB) and 28-day mortality in ICUs with moderate to high levels of antibiotic resistance.
Design, Setting, and Participants Randomized trial conducted from December 1, 2013, to May 31, 2017, in 13 European ICUs where at least 5% of bloodstream infections are caused by extended-spectrum β-lactamase–producing Enterobacteriaceae. Patients with anticipated mechanical ventilation of more than 24 hours were eligible. The final date of follow-up was September 20, 2017.
Interventions Standard care was daily CHX 2% body washings and a hand hygiene improvement program. Following a baseline period from 6 to 14 months, each ICU was assigned in random order to 3 separate 6-month intervention periods with either CHX 2% mouthwash, SOD (mouthpaste with colistin, tobramycin, and nystatin), or SDD (the same mouthpaste and gastrointestinal suspension with the same antibiotics), all applied 4 times daily.
Main Outcomes and Measures The occurrence of ICU-acquired bloodstream infection with MDRGNB (primary outcome) and 28-day mortality (secondary outcome) during each intervention period compared with the baseline period.
Results A total of 8665 patients (median age, 64.1 years; 5561 men [64.2%]) were included in the study (2251, 2108, 2224, and 2082 in the baseline, CHX, SOD, and SDD periods, respectively). ICU-acquired bloodstream infection with MDRGNB occurred among 144 patients (154 episodes) in 2.1%, 1.8%, 1.5%, and 1.2% of included patients during the baseline, CHX, SOD, and SDD periods, respectively. Absolute risk reductions were 0.3% (95% CI, −0.6% to 1.1%), 0.6% (95% CI, −0.2% to 1.4%), and 0.8% (95% CI, 0.1% to 1.6%) for CHX, SOD, and SDD, respectively, compared with baseline. Adjusted hazard ratios were 1.13 (95% CI, 0.68-1.88), 0.89 (95% CI, 0.55-1.45), and 0.70 (95% CI, 0.43-1.14) during the CHX, SOD, and SDD periods, respectively, vs baseline. Crude mortality risks on day 28 were 31.9%, 32.9%, 32.4%, and 34.1% during the baseline, CHX, SOD, and SDD periods, respectively. Adjusted odds ratios for 28-day mortality were 1.07 (95% CI, 0.86-1.32), 1.05 (95% CI, 0.85-1.29), and 1.03 (95% CI, 0.80-1.32) for CHX, SOD, and SDD, respectively, vs baseline.
Conclusions and Relevance Among patients receiving mechanical ventilation in ICUs with moderate to high antibiotic resistance prevalence, use of CHX mouthwash, SOD, or SDD was not associated with reductions in ICU-acquired bloodstream infections caused by MDRGNB compared with standard care.
Effect of targeted polymyxin B hemoperfusion in septic shock?
Dellinger et al. The EUPHRATES Randomized Clinical Trial. JAMA. 2018;320(14):1455-1463
Importance Polymyxin B hemoperfusion reduces blood endotoxin levels in sepsis. Endotoxin activity can be measured in blood with a rapid assay. Treating patients with septic shock and elevated endotoxin activity using polymyxin B hemoperfusion may improve clinical outcomes.
Objective To test whether adding polymyxin B hemoperfusion to conventional medical therapy improves survival compared with conventional therapy alone among patients with septic shock and high endotoxin activity.
Design, Setting, and Participants Multicenter, randomized clinical trial involving 450 adult critically ill patients with septic shock and an endotoxin activity assay level of 0.60 or higher enrolled between September 2010 and June 2016 at 55 tertiary hospitals in North America. Last follow-up was June 2017.
Interventions Two polymyxin B hemoperfusion treatments (90-120 minutes) plus standard therapy completed within 24 hours of enrollment (n = 224 patients) or sham hemoperfusion plus standard therapy (n = 226 patients).
Main Outcomes and Measures The primary outcome was mortality at 28 days among all patients randomized (all participants) and among patients randomized with a multiple organ dysfunction score (MODS) of more than 9.
Results Among 450 eligible enrolled patients (mean age, 59.8 years; 177 [39.3%] women; mean APACHE II score, 29.4 [range, 0-71, with higher scores indicating greater severity]), 449 (99.8%) completed the study. Polymyxin B hemoperfusion was not associated with a significant difference in mortality at 28 days among all participants (treatment group, 84 of 223 [37.7%] vs sham group 78 of 226 [34.5%]; risk difference [RD], 3.2%; 95% CI, −5.7% to 12.0%; relative risk [RR], 1.09; 95% CI, 0.85-1.39; P = .49) or in the population with a MODS of more than 9 (treatment group, 65 of 146 [44.5%] vs sham, 65 of 148 [43.9%]; RD, 0.6%; 95% CI, −10.8% to 11.9%; RR, 1.01; 95% CI, 0.78-1.31; P = .92). Overall, 264 serious adverse events were reported (65.1% treatment group vs 57.3% sham group). The most frequent serious adverse events were worsening of sepsis (10.8% treatment group vs 9.1% sham group) and worsening of septic shock (6.6% treatment group vs 7.7% sham group).
Conclusions and Relevance Among patients with septic shock and high endotoxin activity, polymyxin B hemoperfusion treatment plus conventional medical therapy compared with sham treatment plus conventional medical therapy did not reduce mortality at 28 days.
Less postoperative renal failure with inhaled NO in cardiac surgery?
The role of APRV ventilation in pediatric ARDS
The deleterious role of deventilation episodes in ARDS in rats
Individualized intraoperative PEEP settings for less atelectasis
Background: Intraoperative lung-protective ventilation has been recommended to reduce postoperative pulmonary complications after abdominal surgery. Although the protective role of a more physiologic tidal volume has been established, the added protection afforded by positive end-expiratory pressure (PEEP) remains uncertain. The authors hypothesized that a low fixed PEEP might not fit all patients and that an individually titrated PEEP during anesthesia might improve lung function during and after surgery.
Methods: Forty patients were studied in the operating room (20 laparoscopic and 20 open-abdominal). They underwent elective abdominal surgery and were randomized to institutional PEEP (4 cm H2O) or electrical impedance tomography–guided PEEP (applied after recruitment maneuvers and targeted at minimizing lung collapse and hyperdistension, simultaneously). Patients were extubated without changing selected PEEP or fractional inspired oxygen tension while under anesthesia and submitted to chest computed tomography after extubation. Our primary goal was to individually identify the electrical impedance tomography–guided PEEP value producing the best compromise of lung collapse and hyperdistention.
Results: Electrical impedance tomography–guided PEEP varied markedly across individuals (median, 12 cm H2O; range, 6 to 16 cm H2O; 95% CI, 10–14). Compared with PEEP of 4 cm H2O, patients randomized to the electrical impedance tomography–guided strategy had less postoperative atelectasis (6.2 ± 4.1 vs. 10.8 ± 7.1% of lung tissue mass; P = 0.017) and lower intraoperative driving pressures (mean values during surgery of 8.0 ± 1.7 vs. 11.6 ± 3.8 cm H2O; P < 0.001). The electrical impedance tomography–guided PEEP arm had higher intraoperative oxygenation (435 ± 62 vs. 266 ± 76 mmHg for laparoscopic group; P < 0.001), while presenting equivalent hemodynamics (mean arterial pressure during surgery of 80 ± 14 vs. 78 ± 15 mmHg; P = 0.821).
Conclusions: PEEP requirements vary widely among patients receiving protective tidal volumes during anesthesia for abdominal surgery. Individualized PEEP settings could reduce postoperative atelectasis (measured by computed tomography) while improving intraoperative oxygenation and driving pressures, causing minimum side effects.
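For the curious, here is a purely illustrative sketch of the selection logic described above: during a decremental PEEP trial, electrical impedance tomography yields an estimated percentage of collapse and of hyperdistension at each PEEP step, and the chosen PEEP is the best compromise between the two. The numbers and the simple min-max rule below are assumptions for illustration, not the authors' algorithm or any vendor's EIT software.

```python
# Toy PEEP-selection rule: pick the PEEP step at which the worse of the two EIT-estimated
# percentages (collapse vs. hyperdistension) is lowest. All numbers are made up.
peep_steps = [16, 14, 12, 10, 8, 6, 4]        # decremental PEEP trial (cmH2O)
collapse = [1, 2, 4, 8, 14, 22, 32]           # % collapsed lung (rises as PEEP falls)
hyperdistension = [18, 12, 7, 4, 2, 1, 0]     # % hyperdistended lung (falls as PEEP falls)

best_peep = min(peep_steps,
                key=lambda p: max(collapse[peep_steps.index(p)],
                                  hyperdistension[peep_steps.index(p)]))
print("Selected PEEP:", best_peep, "cmH2O")   # -> 12 cmH2O in this toy example
```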
When should beta-blockers be resumed postoperatively?
http://anesthesiology.pubs.asahq.org/article.aspx?articleid=2707255
doi:10.1097/ALN.0000000000002457
Background: Beta (β) blockers reduce the risk of postoperative atrial fibrillation and should be restarted after surgery, but it remains unclear when best to resume β blockers postoperatively. The authors thus evaluated the relationship between timing of resumption of β blockers and atrial fibrillation in patients recovering from noncardiothoracic and nonvascular surgery.
Methods: The authors evaluated 8,201 adult β-blocker users with no previous history of atrial fibrillation who stayed at least two nights after noncardiothoracic and nonvascular surgery as a retrospective observational cohort. After propensity score matching on baseline and intraoperative variables, 1,924 patients who did resume β blockers by the end of postoperative day 1 were compared with 973 patients who had not resumed by that time on postoperative atrial fibrillation using logistic regression. A secondary matched analysis compared 3,198 patients who resumed β blockers on the day of surgery with 3,198 who resumed thereafter.
Results: Of propensity score–matched patients who resumed β blockers by end of postoperative day 1, 4.9% (94 of 1,924) developed atrial fibrillation, compared with 7.0% (68 of 973) of those who resumed thereafter (adjusted odds ratio, 0.69; 95% CI, 0.50–0.95; P = 0.026). Patients who resumed β blockers on day of surgery had an atrial fibrillation incidence of 4.9% versus 5.8% for those who started thereafter (odds ratio, 0.84; 95% CI, 0.67–1.04; P = 0.104).
Conclusions: Resuming β blockers in chronic users by the end of the first postoperative day may be associated with lower odds of in-hospital atrial fibrillation. However, there seems to be little advantage to restarting on the day of surgery itself.
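As a side note for readers less familiar with the method, here is a minimal, hypothetical sketch of 1:1 propensity-score matching followed by a logistic outcome model, in the spirit of the analysis above. It assumes pandas and scikit-learn, invents the column names (`resumed_by_pod1`, `postop_afib`, a couple of covariates), matches with replacement for simplicity, and is in no way the authors' code.

```python
# Hypothetical sketch of propensity-score matching + logistic outcome model on simulated data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "surgery_hours": rng.gamma(2.0, 1.5, n),
})
df["resumed_by_pod1"] = (rng.random(n) < 0.6).astype(int)   # exposure (invented)
df["postop_afib"] = (rng.random(n) < 0.05 + 0.002 * (df["age"] - 65).clip(lower=0)).astype(int)

covariates = ["age", "surgery_hours"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["resumed_by_pod1"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]      # estimated propensity score

treated = df[df["resumed_by_pod1"] == 1]
control = df[df["resumed_by_pod1"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])    # 1:1 matching, with replacement
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

# Outcome model on the matched cohort: odds ratio for postoperative atrial fibrillation
out = LogisticRegression(max_iter=1000).fit(matched[["resumed_by_pod1"]], matched["postop_afib"])
print("Odds ratio:", round(float(np.exp(out.coef_[0][0])), 2))
```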
A score to estimate neurologic risk after craniotomy?
Background: Craniotomy for brain tumor displays significant morbidity and mortality, and no score is available to discriminate high-risk patients. Our objective was to validate a prediction score for postoperative neurosurgical complications in this setting.
Methods: The score was created in a learning cohort drawn from a prospective, dedicated database of 1,094 patients undergoing elective brain tumor craniotomy in one center from 2008 to 2012. It was then validated in a prospective, multicenter, independent cohort of 830 patients from 2013 to 2015 in six university hospitals in France. The primary outcome variable was postoperative neurologic complications requiring in–intensive care unit management (intracranial hypertension, intracranial bleeding, status epilepticus, respiratory failure, impaired consciousness, unexpected motor deficit). The least absolute shrinkage and selection operator (LASSO) method was used for potential risk factor selection with logistic regression.
Results: Severe complications occurred in 125 (11.4%) and 90 (10.8%) patients in the learning and validation cohorts, respectively. The independent risk factors for severe complications were related to the patient (Glasgow Coma Score before surgery at or below 14, history of brain tumor surgery), tumor characteristics (greatest diameter, cerebral midline shift at least 3 mm), and perioperative management (transfusion of blood products, maximum and minimal systolic arterial pressure, duration of surgery). The positive predictive value of the score at or below 3% was 12.1%, and the negative predictive value was 100% in the learning cohort. In–intensive care unit mortality was observed in eight (0.7%) and six (0.7%) patients in the learning and validation cohorts, respectively.
Conclusions: The validation of prediction scores is the first step toward on-demand intensive care unit admission. Further research is needed to improve the score’s performance before routine use.
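Since the Methods name LASSO-penalised logistic regression for risk-factor selection, here is a minimal sketch of that technique on simulated data, assuming scikit-learn; the predictors are placeholders, not the study variables, and the 3% threshold is only borrowed from the abstract to illustrate how a negative predictive value is read off a model.

```python
# Minimal sketch of L1-penalised (LASSO) logistic regression for risk-factor selection.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: candidate perioperative predictors (placeholders), y: severe postoperative complication (0/1)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=1000) > 1.5).astype(int)

lasso_logit = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
lasso_logit.fit(X, y)
coefs = lasso_logit.named_steps["logisticregression"].coef_.ravel()
print("Retained predictor indices:", np.flatnonzero(coefs != 0))  # predictors kept by the L1 penalty

# Negative predictive value among patients predicted to be low risk (e.g. predicted risk <= 3%)
risk = lasso_logit.predict_proba(X)[:, 1]
low_risk = risk <= 0.03
npv = 1 - y[low_risk].mean() if low_risk.any() else float("nan")
print("NPV among low-risk patients:", round(npv, 3))
```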
Artificial intelligence special:
– Prediction of arterial hypotension
http://anesthesiology.pubs.asahq.org/article.aspx?articleid=2685008
Background: With appropriate algorithms, computers can learn to detect patterns and associations in large data sets. The authors’ goal was to apply machine learning to arterial pressure waveforms and create an algorithm to predict hypotension. The algorithm detects early alteration in waveforms that can herald the weakening of cardiovascular compensatory mechanisms affecting preload, afterload, and contractility.
Methods: The algorithm was developed with two different data sources: (1) a retrospective cohort, used for training, consisting of 1,334 patients’ records with 545,959 min of arterial waveform recording and 25,461 episodes of hypotension; and (2) a prospective, local hospital cohort used for external validation, consisting of 204 patients’ records with 33,236 min of arterial waveform recording and 1,923 episodes of hypotension. The algorithm relates a large set of features calculated from the high-fidelity arterial pressure waveform to the prediction of an upcoming hypotensive event (mean arterial pressure < 65 mmHg). Receiver-operating characteristic curve analysis evaluated the algorithm’s success in predicting hypotension, defined as mean arterial pressure less than 65 mmHg.
Results: Using 3,022 individual features per cardiac cycle, the algorithm predicted arterial hypotension with a sensitivity and specificity of 88% (85 to 90%) and 87% (85 to 90%) 15 min before a hypotensive event (area under the curve, 0.95 [0.94 to 0.95]); 89% (87 to 91%) and 90% (87 to 92%) 10 min before (area under the curve, 0.95 [0.95 to 0.96]); 92% (90 to 94%) and 92% (90 to 94%) 5 min before (area under the curve, 0.97 [0.97 to 0.98]).
Conclusions: The results demonstrate that a machine-learning algorithm can be trained, with large data sets of high-fidelity arterial waveforms, to predict hypotension in surgical patients’ records.
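To make the pipeline concrete for non-initiates, below is a toy sketch of the general idea (summary features computed from arterial-pressure windows, fed to a classifier evaluated by ROC analysis), assuming scikit-learn and entirely synthetic data. It is not the proprietary algorithm studied here, which reportedly uses 3,022 features per cardiac cycle.

```python
# Illustrative only: toy "waveform features -> hypotension classifier" pipeline on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def window_features(pressure):
    """Crude features from one arterial-pressure analysis window (mmHg samples)."""
    return [
        pressure.mean(),                      # proxy for mean arterial pressure
        pressure.max() - pressure.min(),      # pulse-pressure-like amplitude
        np.std(np.diff(pressure)),            # beat-to-beat slope variability
    ]

# Synthetic stand-in data: one row per window, label = upcoming hypotensive event (invented)
rng = np.random.default_rng(1)
baselines = rng.normal(85, 12, 5000)
windows = baselines[:, None] + rng.normal(0, 8, size=(5000, 200))
X = np.array([window_features(w) for w in windows])
y = (baselines + rng.normal(0, 3, 5000) < 72).astype(int)     # toy label tied to the MAP proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 2))
```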
– Prediction of postoperative mortality
http://anesthesiology.pubs.asahq.org/article.aspx?articleid=2679105
Background: The authors tested the hypothesis that deep neural networks trained on intraoperative features can predict postoperative in-hospital mortality.
Methods: The data used to train and validate the algorithm consists of 59,985 patients with 87 features extracted at the end of surgery. Feed-forward networks with a logistic output were trained using stochastic gradient descent with momentum. The deep neural networks were trained on 80% of the data, with 20% reserved for testing. The authors assessed improvement of the deep neural network by adding American Society of Anesthesiologists (ASA) Physical Status Classification and robustness of the deep neural network to a reduced feature set. The networks were then compared to ASA Physical Status, logistic regression, and other published clinical scores including the Surgical Apgar, Preoperative Score to Predict Postoperative Mortality, Risk Quantification Index, and the Risk Stratification Index.
Results: In-hospital mortality in the training and test sets were 0.81% and 0.73%. The deep neural network with a reduced feature set and ASA Physical Status classification had the highest area under the receiver operating characteristics curve, 0.91 (95% CI, 0.88 to 0.93). The highest logistic regression area under the curve was found with a reduced feature set and ASA Physical Status (0.90, 95% CI, 0.87 to 0.93). The Risk Stratification Index had the highest area under the receiver operating characteristics curve, at 0.97 (95% CI, 0.94 to 0.99).
Conclusions: Deep neural networks can predict in-hospital mortality based on automatically extractable intraoperative data, but are not (yet) superior to existing methods.
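For illustration only, here is a compact sketch of the training recipe described in the Methods (feed-forward network, logistic output, stochastic gradient descent with momentum, 80/20 split), assuming scikit-learn and synthetic data; the layer sizes and the 87 features are placeholders, not the authors' architecture or feature set.

```python
# Hedged sketch: feed-forward network trained with SGD + momentum on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(20000, 87))                 # 87 intraoperative features (placeholders)
y = (rng.random(20000) < 0.008).astype(int)      # ~0.8% in-hospital mortality, as in the abstract

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)

net = MLPClassifier(hidden_layer_sizes=(64, 32),  # feed-forward layers (sizes are arbitrary)
                    solver="sgd", momentum=0.9,   # SGD with momentum, per the Methods
                    learning_rate_init=0.01, max_iter=200, random_state=0)
net.fit(scaler.transform(X_tr), y_tr)             # logistic output for the binary target
print("Test AUC:", roc_auc_score(y_te, net.predict_proba(scaler.transform(X_te))[:, 1]))
```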
– Prediction of post-induction arterial hypotension
http://anesthesiology.pubs.asahq.org/article.aspx?articleid=2696388
Background: Hypotension is a risk factor for adverse perioperative outcomes. Machine-learning methods allow large amounts of data for development of robust predictive analytics. The authors hypothesized that machine-learning methods can provide prediction for the risk of postinduction hypotension.
Methods: Data was extracted from the electronic health record of a single quaternary care center from November 2015 to May 2016 for patients over age 12 that underwent general anesthesia, without procedure exclusions. Multiple supervised machine-learning classification techniques were attempted, with postinduction hypotension (mean arterial pressure less than 55 mmHg within 10 min of induction by any measurement) as primary outcome, and preoperative medications, medical comorbidities, induction medications, and intraoperative vital signs as features. Discrimination was assessed using cross-validated area under the receiver operating characteristic curve. The best performing model was tuned and final performance assessed using split-set validation.
Results: Out of 13,323 cases, 1,185 (8.9%) experienced postinduction hypotension. Area under the receiver operating characteristic curve using logistic regression was 0.71 (95% CI, 0.70 to 0.72), support vector machines was 0.63 (95% CI, 0.58 to 0.60), naive Bayes was 0.69 (95% CI, 0.67 to 0.69), k-nearest neighbor was 0.64 (95% CI, 0.63 to 0.65), linear discriminant analysis was 0.72 (95% CI, 0.71 to 0.73), random forest was 0.74 (95% CI, 0.73 to 0.75), neural nets 0.71 (95% CI, 0.69 to 0.71), and gradient boosting machine 0.76 (95% CI, 0.75 to 0.77). Test set area for the gradient boosting machine was 0.74 (95% CI, 0.72 to 0.77).
Conclusions: The success of this technique in predicting postinduction hypotension demonstrates feasibility of machine-learning models for predictive analytics in the field of anesthesiology, with performance dependent on model selection and appropriate tuning.
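As a rough illustration of the winning approach (a gradient boosting machine assessed by cross-validated AUC, then a held-out split), here is a sketch using scikit-learn on simulated data; the 30 features stand in for the preoperative medications, comorbidities, induction drugs and vital signs used in the study.

```python
# Minimal sketch: gradient boosting with cross-validated AUC and split-set validation.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(13000, 30))            # placeholder features
y = (rng.random(13000) < 0.09).astype(int)  # ~9% postinduction hypotension, as reported

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

gbm = GradientBoostingClassifier(n_estimators=150, learning_rate=0.05, max_depth=3)
cv_auc = cross_val_score(gbm, X_tr, y_tr, cv=5, scoring="roc_auc")
print("Cross-validated AUC:", cv_auc.mean().round(3))

gbm.fit(X_tr, y_tr)                          # final fit, then held-out (split-set) validation
print("Test-set AUC:", round(roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1]), 3))
```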
Anesthesia team composition: is a physician in the operating room unnecessary?
http://anesthesiology.pubs.asahq.org/article.aspx?articleid=2682840
Background: In the United States, anesthesia care can be provided by an anesthesia care team consisting of nonphysician providers (nurse anesthetists and anesthesiologist assistants) working under the supervision of a physician anesthesiologist. Nurse anesthetists may practice nationwide, whereas anesthesiologist assistants are restricted to 16 states. To inform policies concerning the expanded use of anesthesiologist assistants, the authors examined whether the specific anesthesia care team composition (physician anesthesiologist plus nurse anesthetist or anesthesiologist assistant) was associated with differences in perioperative outcomes.
Methods: A retrospective analysis was performed of national claims data for 443,098 publicly insured elderly (ages 65 to 89 yr) patients who underwent inpatient surgery between January 1, 2004, and December 31, 2011. The differences in inpatient mortality, spending, and length of stay between cases where an anesthesiologist supervised an anesthesiologist assistant compared to cases where an anesthesiologist supervised a nurse anesthetist were estimated. The approach used a quasirandomization technique known as instrumental variables to reduce confounding.
Results: The adjusted mortality for care teams with anesthesiologist assistants was 1.6% (95% CI, 1.4 to 1.8) versus 1.7% for care teams with nurse anesthetists (95% CI, 1.7 to 1.7; difference −0.08; 95% CI, −0.3 to 0.1; P = 0.47). Compared to care teams with nurse anesthetists, care teams with anesthesiologist assistants were associated with non–statistically significant decreases in length of stay (−0.009 days; 95% CI, −0.1 to 0.1; P = 0.89) and medical spending (−$56; 95% CI, −334 to 223; P = 0.70).
Conclusions: The specific composition of the anesthesia care team was not associated with any significant differences in mortality, length of stay, or inpatient spending.
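For readers unfamiliar with instrumental variables, the sketch below illustrates the core two-stage least-squares idea on simulated data (an instrument that shifts the probability of treatment but is unrelated to the unmeasured confounder); it is a didactic toy with invented variables, not the authors' econometric specification, and it ignores standard errors.

```python
# Deliberately simplified two-stage least squares (2SLS) on simulated data; true effect is zero.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 10000
z = rng.integers(0, 2, n)               # instrument (e.g. facility-level provider mix; simulated)
confounder = rng.normal(size=n)         # unmeasured severity
treat = ((0.8 * z + 0.3 * confounder + rng.normal(size=n)) > 0.5).astype(float)  # team type
y = 0.5 * confounder + rng.normal(size=n)   # outcome: no true treatment effect

# Stage 1: predict treatment from the instrument; Stage 2: regress outcome on that prediction.
stage1 = LinearRegression().fit(z.reshape(-1, 1), treat)
treat_hat = stage1.predict(z.reshape(-1, 1))
stage2 = LinearRegression().fit(treat_hat.reshape(-1, 1), y)

print("Naive estimate:", LinearRegression().fit(treat.reshape(-1, 1), y).coef_[0])  # biased upward
print("IV (2SLS) estimate:", stage2.coef_[0])                                       # close to 0
```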
Regional anesthesia: a failure for liposomal bupivacaine?
http://anesthesiology.pubs.asahq.org/article.aspx?articleid=2682370
Background: Although some trials suggest benefits of liposomal bupivacaine, data on real-world use and effectiveness is lacking. This study analyzed the impact of liposomal bupivacaine use (regardless of administration route) on inpatient opioid prescription, resource utilization, and opioid-related complications among patients undergoing total knee arthroplasties with a peripheral nerve block. It was hypothesized that liposomal bupivacaine has limited clinical influence on the studied outcomes.
Methods: The study included data on 88,830 total knee arthroplasties performed with a peripheral nerve block (Premier Healthcare Database 2013 to 2016). Multilevel multivariable regressions measured associations between use of liposomal bupivacaine and (1) inpatient opioid prescription (extracted from billing) and (2) length of stay, cost of hospitalization, as well as opioid-related complications. To reflect the difference between statistical and clinical significance, a relative change of −15% in outcomes was assumed to be clinically important.
Results: Overall, liposomal bupivacaine was used in 21.2% (n = 18,817) of patients that underwent a total knee arthroplasty with a peripheral nerve block. Liposomal bupivacaine use was not associated with a clinically meaningful reduction in inpatient opioid prescription (group median, 253 mg of oral morphine equivalents, adjusted effect −9.3% CI −11.1%, −7.5%; P < 0.0001) and length of stay (group median, 3 days, adjusted effect −8.8% CI −10.1%, −7.5%; P < 0.0001) with no effect on cost of hospitalization. Most importantly, liposomal bupivacaine use was not associated with decreased odds for opioid-related complications.
Conclusions: Liposomal bupivacaine was not associated with a clinically relevant improvement in inpatient opioid prescription, resource utilization, or opioid-related complications in patients who received modern pain management including a peripheral nerve block.
Prevention of postoperative depressive syndromes with ketamine?
Lower propofol doses with IV lidocaine during general anesthesia for colonoscopy
Risk factors for agitation in the recovery room
Respiratory mechanics during general anesthesia
Is 20% albumin safe?
Purpose
We set out to assess the resuscitation fluid requirements and physiological and clinical responses of intensive care unit (ICU) patients resuscitated with 20% albumin versus 4–5% albumin.
Methods
We performed a randomised controlled trial in 321 adult patients requiring fluid resuscitation within 48 h of admission to three ICUs in Australia and the UK.
Results
The cumulative volume of resuscitation fluid at 48 h (primary outcome) was lower in the 20% albumin group than in the 4–5% albumin group [median difference − 600 ml, 95% confidence interval (CI) − 800 to − 400; P < 0.001]. The 20% albumin group had lower cumulative fluid balance at 48 h (mean difference − 576 ml, 95% CI − 1033 to − 119; P = 0.01). Peak albumin levels were higher but sodium and chloride levels lower in the 20% albumin group. Median (interquartile range) duration of mechanical ventilation was 12.0 h (7.6, 33.1) in the 20% albumin group and 15.3 h (7.7, 58.1) in the 4–5% albumin group (P = 0.13); the proportion of patients commenced on renal replacement therapy after randomization was 3.3% and 4.2% (P = 0.67), respectively, and the proportion discharged alive from ICU was 97.4% and 91.1% (P = 0.02).
Conclusions
Resuscitation with 20% albumin decreased resuscitation fluid requirements, minimized positive early fluid balance and was not associated with any evidence of harm compared with 4–5% albumin. These findings support the safety of further exploration of resuscitation with 20% albumin in larger randomised trials.
Optiflow in children: is 2 L/kg/min enough?
et al., ICM, 2018
Purpose
High-flow nasal cannula (HFNC) therapy is increasingly proposed as first-line respiratory support for infants with acute viral bronchiolitis (AVB). Most teams use 2 L/kg/min, but no study compared different flow rates in this setting. We hypothesized that 3 L/kg/min would be more efficient for the initial management of these patients.
Methods
A randomized controlled trial was performed in 16 pediatric intensive care units (PICUs) to compare these two flow rates in infants up to 6 months old with moderate to severe AVB and treated with HFNC. The primary endpoint was the percentage of failure within 48 h of randomization, using prespecified criteria of worsening respiratory distress and discomfort.
Results
From November 2016 to March 2017, 142 infants were allocated to the 2-L/kg/min (2L) flow rate and 144 to the 3-L/kg/min (3L) flow rate. Failure rate was comparable between groups: 38.7% (2L) vs. 38.9% (3L; p = 0.98). Worsening respiratory distress was the most common cause of failure in both groups: 49% (2L) vs. 39% (3L; p = 0.45). In the 3L group, discomfort was more frequent (43% vs. 16%, p = 0.002) and PICU stays were longer (6.4 vs. 5.3 days, p = 0.048). The intubation rates [2.8% (2L) vs. 6.9% (3L), p = 0.17] and durations of invasive [0.2 (2L) vs. 0.5 (3L) days, p = 0.10] and noninvasive [1.4 (2L) vs. 1.6 (3L) days, p = 0.97] ventilation were comparable. No patient had air leak syndrome or died.
Conclusion
In young infants with AVB supported with HFNC, 3 L/kg/min did not reduce the risk of failure compared with 2 L/kg/min.
A study of different hypothermia targets in out-of-hospital cardiac arrest
Lopez-de-Sa et al., the FROST-I trial. Intensive Care Med 2018
Purpose
To obtain initial data on the effect of different levels of targeted temperature management (TTM) in out-of-hospital cardiac arrest (OHCA).
Methods
We designed a multicentre pilot trial with 1:1:1 randomization to either 32 °C (n = 52), 33 °C (n = 49) or 34 °C (n = 49), via endovascular cooling devices during a 24-h period in comatose survivors of witnessed OHCA and initial shockable rhythm. The primary endpoint was the percentage of subjects surviving with good neurologic outcome defined by a modified Rankin Scale (mRS) score of ≤ 3, blindly assessed at 90 days.
Results
At baseline, different proportions of patients who had received defibrillation administered by a bystander were assigned to groups of 32 °C (13.5%), 33 °C (34.7%) and 34 °C (28.6%; p = 0.03). The percentage of patients with an mRS ≤ 3 at 90 days (primary endpoint) was 65.3, 65.9 and 65.9% in patients assigned to 32, 33 and 34 °C, respectively, non-significant (NS). The multivariate Cox proportional hazards model identified two variables significantly related to the primary outcome: male gender and defibrillation by a bystander. Among the 43 patients who died before 90 days, 28 died following withdrawal of life-sustaining therapy, as follows: 7/16 (43.8%), 10/13 (76.9%) and 11/14 (78.6%) of patients assigned to 32, 33 and 34 °C, respectively (trend test p = 0.04). All levels of cooling were well tolerated.
Conclusions
There were no statistically significant differences in neurological outcomes among the different levels of TTM. However, future research should explore the efficacy of TTM at 32 °C.
Targeting different PaO2 and PaCO2 goals after cardiac arrest?
et al., ICM, 2018
https://link.springer.com/article/10.1007/s00134-018-5453-9
Purpose
We assessed the effects of targeting low-normal or high-normal arterial carbon dioxide tension (PaCO2) and normoxia or moderate hyperoxia after out-of-hospital cardiac arrest (OHCA) on markers of cerebral and cardiac injury.
Methods
Using a 2³ factorial design, we randomly assigned 123 patients resuscitated from OHCA to low-normal (4.5–4.7 kPa) or high-normal (5.8–6.0 kPa) PaCO2 and to normoxia (arterial oxygen tension [PaO2] 10–15 kPa) or moderate hyperoxia (PaO2 20–25 kPa) and to low-normal or high-normal mean arterial pressure during the first 36 h in the intensive care unit. Here we report the results of the low-normal vs. high-normal PaCO2 and normoxia vs. moderate hyperoxia comparisons. The primary endpoint was the serum concentration of neuron-specific enolase (NSE) 48 h after cardiac arrest. Secondary endpoints included S100B protein and cardiac troponin concentrations, continuous electroencephalography (EEG) and near-infrared spectroscopy (NIRS) results and neurologic outcome at 6 months.
Results
In total 120 patients were included in the analyses. There was a clear separation in PaCO2 (p < 0.001) and PaO2 (p < 0.001) between the groups. The median (interquartile range) NSE concentration at 48 h was 18.8 µg/l (13.9–28.3 µg/l) in the low-normal PaCO2 group and 22.5 µg/l (14.2–34.9 µg/l) in the high-normal PaCO2 group, p = 0.400; and 22.3 µg/l (14.8–27.8 µg/l) in the normoxia group and 20.6 µg/l (14.2–34.9 µg/l) in the moderate hyperoxia group, p = 0.594. High-normal PaCO2 and moderate hyperoxia increased NIRS values. There were no differences in other secondary outcomes.
Conclusions
Both high-normal PaCO2 and moderate hyperoxia increased NIRS values, but the NSE concentration was unaffected.
Cold dialysate for better hemodynamic tolerance of prolonged intermittent hemodialysis
https://insights.ovid.com/crossref?an=00003246-900000000-96097
DOI: 10.1097/CCM.0000000000003508, PMID: 30394919
Objectives:
Acute kidney injury requiring renal replacement therapy is associated with high morbidity and mortality. Complications of renal replacement therapy include hemodynamic instability with ensuing shortened treatments, inadequate ultrafiltration, and delay in renal recovery. Studies have shown that lowering dialysate temperature in patients with end-stage renal disease is associated with a decrease in the frequency of intradialytic hypotension. However, data regarding mitigation of hypotension by lowering dialysate temperature in patients with acute kidney injury are scarce. We conducted a prospective, randomized, cross-over pilot study to evaluate the effect of lower dialysate temperature on hemodynamic status of critically ill patients with acute kidney injury during prolonged intermittent renal replacement therapy.
Design:
Single-center prospective, randomized, cross-over study.
Setting:
ICUs and a step down unit in a tertiary referral center.
Patients:
Acute kidney injury patients undergoing prolonged intermittent renal replacement therapy.
Interventions:
Participants were randomized to start prolonged intermittent renal replacement therapy with dialysate temperature of 35°C or dialysate temperature of 37°C.
Measurements and Main Results:
The primary endpoint was the number of hypotensive events, as defined by any of the following: decrease in systolic blood pressure greater than or equal to 20 mm Hg, decrease in mean arterial pressure greater than or equal to 10 mm Hg, decrease in ultrafiltration, or increase in vasopressor requirements. The number of events was analyzed by Poisson regression and other outcomes with repeated-measures analysis of variance. Twenty-one patients underwent a total of 78 prolonged intermittent renal replacement therapy sessions, 39 in each arm. The number of hypotensive events was twice as high during treatments with dialysate temperature of 37°C, compared with treatments with the cooler dialysate (1.49 ± 1.12 vs 0.72 ± 0.69; incidence rate ratio, 2.06; p ≤ 0.0001). Treatment sessions with cooler dialysate were more likely to reach prescribed ultrafiltration targets.
Conclusions:
Patients with acute kidney injury undergoing prolonged intermittent renal replacement therapy with cooler dialysate experienced significantly less hypotension during treatment. Prevention of hemodynamic instability during renal replacement therapy helped to achieve ultrafiltration goals and may help to prevent volume overload in critically ill patients.
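Because the Methods analyse the count of hypotensive events per session with Poisson regression, here is a minimal sketch of that model, assuming statsmodels and a made-up data frame; the real analysis also accounted for the cross-over, repeated-measures design, which this toy ignores.

```python
# Sketch: Poisson regression of hypotensive-event counts on dialysate temperature (toy data).
import math

import pandas as pd
import statsmodels.formula.api as smf

# One row per PIRRT session: events = number of hypotensive events, cool = 1 if dialysate 35°C
sessions = pd.DataFrame({
    "events": [0, 1, 0, 2, 1, 0, 3, 2, 1, 2],
    "cool":   [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
})

model = smf.poisson("events ~ cool", data=sessions).fit()
irr = math.exp(model.params["cool"])   # exp(coefficient) = incidence rate ratio, 35°C vs 37°C
print("Incidence rate ratio (35°C vs 37°C):", round(irr, 2))
```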
State of the art of statistical methods used in critical care clinical trials
Better outcomes with early enteral nutrition in severely burned patients
Prevention of post-traumatic stress disorder after an ICU stay
Value of repeated qSOFA measurements in patients with suspected sepsis
A review on the management of polytrauma patients
A cost-effectiveness study of PCT in the ICU
Michelle M. A. Kip et al., CC, 2018
https://ccforum.biomedcentral.com/articles/10.1186/s13054-018-2234-3
Background
Procalcitonin (PCT) testing can help in safely reducing antibiotic treatment duration in intensive care patients with sepsis. However, the cost-effectiveness of such PCT guidance is not yet known.
Methods
A trial-based analysis was performed to estimate the cost-effectiveness of PCT guidance compared with standard of care (without PCT guidance). Patient-level data were used from the SAPS trial in which 1546 patients were randomised. This trial was performed in the Netherlands, which is a country with, on average, low antibiotic use and a short duration of hospital stay. As quality of life among sepsis survivors was not measured during the SAPS, this was derived from a Dutch follow-up study. Outcome measures were (1) incremental direct hospital cost and (2) incremental cost per quality-adjusted life year (QALY) gained from a healthcare perspective over a one-year time horizon. Uncertainty in outcomes was assessed with bootstrapping.
Results
Mean in-hospital costs were €46,081/patient in the PCT group compared with €46,146/patient with standard of care (i.e. − €65 (95% CI − €6314 to €6107); − 0.1%). The duration of the first course of antibiotic treatment was lower in the PCT group with 6.9 vs. 8.2 days (i.e. − 1.2 days (95% CI − 1.9 to − 0.4), − 14.8%). This was accompanied by lower in-hospital mortality of 21.8% vs. 29.8% (absolute decrease 7.9% (95% CI − 13.9% to − 1.8%), relative decrease 26.6%), resulting in an increase in mean QALYs/patient from 0.47 to 0.52 (i.e. + 0.05 (95% CI 0.00 to 0.10); + 10.1%). However, owing to high costs among sepsis survivors, healthcare costs over a one-year time horizon were €73,665/patient in the PCT group compared with €70,961/patient with standard of care (i.e. + €2704 (95% CI − €4495 to €10,005), + 3.8%), resulting in an incremental cost-effectiveness ratio of €57,402/QALY gained. Within this time frame, the probability of PCT guidance being cost-effective was 64% at a willingness-to-pay threshold of €80,000/QALY.
Conclusions
Although the impact of PCT guidance on total healthcare-related costs during the initial hospitalisation episode is likely negligible, the lower in-hospital mortality may lead to a non-significant increase in costs over a one-year time horizon. However, since uncertainty remains, it is recommended to investigate the long-term cost-effectiveness of PCT guidance, from a societal perspective, in different countries and settings.
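For readers who want to see the arithmetic, here is a back-of-the-envelope sketch of an incremental cost-effectiveness ratio (ICER) and of the bootstrap used to express uncertainty, on simulated patient-level costs and QALYs chosen only to be of the same order of magnitude as the figures above; it is not the SAPS trial analysis.

```python
# Toy ICER and bootstrap probability of cost-effectiveness on simulated data.
import numpy as np

rng = np.random.default_rng(2024)
n = 750
cost_pct, cost_soc = rng.gamma(2.0, 36800, n), rng.gamma(2.0, 35500, n)   # 1-year costs (EUR)
qaly_pct, qaly_soc = rng.beta(2, 2, n), rng.beta(2, 2, n) * 0.9           # 1-year QALYs

def icer(c1, c0, q1, q0):
    """Incremental cost-effectiveness ratio: difference in mean cost / difference in mean QALYs."""
    return (c1.mean() - c0.mean()) / (q1.mean() - q0.mean())

print("ICER (EUR/QALY):", round(icer(cost_pct, cost_soc, qaly_pct, qaly_soc)))

# Bootstrap: probability that PCT guidance is cost-effective at a EUR 80,000/QALY threshold
threshold, wins = 80_000, 0
for _ in range(2000):
    i, j = rng.integers(0, n, n), rng.integers(0, n, n)
    d_cost = cost_pct[i].mean() - cost_soc[j].mean()
    d_qaly = qaly_pct[i].mean() - qaly_soc[j].mean()
    wins += (threshold * d_qaly - d_cost) > 0        # net monetary benefit > 0
print("P(cost-effective at 80,000 EUR/QALY):", wins / 2000)
```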
Iron deficiency in the ICU assessed by hepcidin measurement
Sigismond Lasocki et al., CC, 2018
https://ccforum.biomedcentral.com/articles/10.1186/s13054-018-2253-0
Background
Iron deficiency is difficult to diagnose in critically ill patients, but may be frequent and may impair recovery. Measurement of hepcidin could help in the diagnosis of iron deficiency. We aim to assess if iron deficiency diagnosed using hepcidin is associated with poorer outcome one year after an intensive care unit stay.
Methods
We used the prospective FROG-ICU, multicentre (n = 28 ICUs), observational cohort study of critically ill survivors followed up one year after intensive care unit discharge. Iron deficiency was defined as hepcidin < 20 ng/l, ferritin < 100 ng/l or soluble transferrin receptor (sTfR)/log(ferritin) > 0.8, measured in blood drawn at intensive care unit discharge. Main outcomes were one-year all-cause mortality and poor quality of life (defined as a Short Form 36 (SF-36) score below the median).
Results
Among the 2087 patients in the FROG-ICU cohort, 1570 were discharged alive and 1161 had a blood sample available at intensive care unit discharge and were included in the analysis. Using hepcidin, 429 (37%) patients had iron deficiency, compared to 72 (6%) using ferritin alone and 151 (13%) using the sTfR/log(ferritin) ratio. Iron deficiency diagnosed according to low hepcidin was an independent predictor of one-year mortality (OR 1.51 (1.10–2.08)) as was high sTfR/log ferritin ratio (OR = 1.95 (1.27–3.00)), but low ferritin was not. Severe ID, defined as hepcidin < 10 ng/l, was also an independent predictor of poor one-year physical recovery (1.58 (1.01–2.49)).
Conclusions
Iron deficiency, diagnosed using hepcidin, is very frequent at intensive care unit discharge and is associated with increased one-year mortality and poorer physical recovery. Whether iron treatment may improve these outcomes remains to be investigated.
A review on aerosolized antibiotics
https://ccforum.biomedcentral.com/articles/10.1186/s13054-018-2106-x
Feng Xu et al. Critical Care 2018, 22:301
A review on the predictive performance of qSOFA and SIRS
https://www.clinicalmicrobiologyandinfection.com/article/S1198-743X(18)30294-5/fulltext
Objective
To identify sensitivity, specificity and predictive accuracy of quick sequential organ failure assessment (qSOFA) score and systemic inflammatory response syndrome (SIRS) criteria to predict in-hospital mortality in hospitalized patients with suspected infection.
Methods
This meta-analysis followed the Meta-analysis of Observational Studies in Epidemiology (MOOSE) group consensus statement for conducting and reporting the results of systematic review. PubMed and EMBASE were searched for the observational studies which reported predictive utility of qSOFA score for predicting mortality in patients with suspected or proven infection with the following search words: ‘qSOFA’, ‘q-SOFA’, ‘quick-SOFA’, ‘Quick Sequential Organ Failure Assessment’, ‘quick SOFA’. Sensitivity, specificity, area under receiver operating characteristic (ROC) curves with 95% confidence interval (CI) of qSOFA and SIRS criteria for predicting in-hospital mortality was collected for each study and a 2 × 2 table was created for each study.
Results
Data of 406 802 patients from 45 observational studies were included in this meta-analysis. Pooled sensitivity (95% CI) and specificity (95% CI) of qSOFA ≥2 for predicting mortality in patients who were not in an intensive care unit (ICU) was 0.48 (0.41–0.55) and 0.83 (0.78–0.87), respectively. Pooled sensitivity (95% CI) of qSOFA ≥2 for predicting mortality in patients (both ICU and non-ICU settings) with suspected infection was 0.56 (0.47–0.65) and pooled specificity (95% CI) was 0.78 (0.71–0.83).
Conclusion
qSOFA has been found to be a poorly sensitive predictive marker for in-hospital mortality in hospitalized patients with suspected infection. It is reasonable to recommend developing another scoring system with higher sensitivity to identify high-risk patients with infection.
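As a reminder, the qSOFA score itself is trivially simple (one point each for respiratory rate ≥ 22/min, systolic blood pressure ≤ 100 mmHg and altered mentation), and pooled sensitivity and specificity are derived from per-study 2 × 2 tables; the sketch below illustrates both with invented numbers.

```python
# qSOFA calculation and sensitivity/specificity from a 2x2 table (illustrative numbers only).
def qsofa(respiratory_rate, systolic_bp, gcs):
    """qSOFA: one point each for RR >= 22/min, SBP <= 100 mmHg, altered mentation (GCS < 15)."""
    return int(respiratory_rate >= 22) + int(systolic_bp <= 100) + int(gcs < 15)

print(qsofa(respiratory_rate=24, systolic_bp=95, gcs=15))   # -> 2, i.e. "qSOFA-positive"

def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table (test = qSOFA >= 2, outcome = death)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical single-study 2x2 table
sensitivity, specificity = sens_spec(tp=48, fp=170, fn=52, tn=830)
print(round(sensitivity, 2), round(specificity, 2))  # ~0.48 and ~0.83, the order reported above
```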
What dose of magnesium for rate control of AF?
Bouida W et al., Acad Emerg Med. 2018 Jul 19.
doi: 10.1111/acem.13522
https://onlinelibrary.wiley.com/doi/full/10.1111/acem.13522
OBJECTIVES: We aim to determine the benefit of two different doses of magnesium sulfate (MgSO4) compared to placebo in rate control of rapid atrial fibrillation (AF) managed in the emergency department (ED).
METHODS: We undertook a randomized, controlled, double-blind clinical trial in three university hospital EDs between August 2009 and December 2014. Patients > 18 years with rapid AF (>120 beats/min) were enrolled and randomized to 9 g of intravenous MgSO4 (high-dose group, n = 153), 4.5 g of intravenous MgSO4 (low-dose group, n = 148), or serum saline infusion (placebo group, n = 149), given in addition to atrioventricular (AV) nodal blocking agents. The primary outcome was the reduction of baseline ventricular rate (VR) to 90 beats/min or less or reduction of VR by 20% or greater from baseline (therapeutic response). Secondary outcome included resolution time (defined as the elapsed time from start of treatment to therapeutic response), sinus rhythm conversion rate, and adverse events within the first 24 hours.
RESULTS: At 4 hours, the therapeutic response rate was higher in the low- and high-dose MgSO4 groups compared to the placebo group; the absolute differences were, respectively, +20.5% (risk ratio [RR] = 2.31, 95% confidence interval [CI] = 1.45-3.69) and +15.8% (RR = 1.89, 95% CI = 1.20-2.99). At 24 hours, compared to the placebo group, the therapeutic response difference was +14.1% (RR = 9.74, 95% CI = 2.87-17.05) with low-dose MgSO4 and +10.3% (RR = 3.22, 95% CI = 1.45-7.17) with high-dose MgSO4. The lowest resolution time was observed in the low-dose MgSO4 group (5.2 ± 2 hours) compared to 6.1 ± 1.9 hours in the high-dose MgSO4 group and 8.4 ± 2.5 hours in the placebo group. The rhythm control rate at 24 hours was significantly higher in the low-dose MgSO4 group (22.9%) compared to the high-dose MgSO4 group (13.0%, p = 0.03) and the placebo group (10.7%). Adverse effects were minor and significantly more frequent with high-dose MgSO4.
CONCLUSIONS: Intravenous MgSO4 appears to have a synergistic effect when combined with other AV nodal blockers resulting in improved rate control. Similar efficacy was observed with 4.5 and 9 g of MgSO4 but a dose of 9 g was associated with more side effects.