Literature review of the month: February 2017
Is winter still lingering in your hearts…?
Fortunately, this month's literature review has arrived!
This month's theme, as you will have gathered, is the HEART! We have interesting studies in cardiac surgery, with difficulties demonstrating the efficacy of intraoperative tranexamic acid (Exacyl); pacemakers and MRI; studies on the physiology of cardiopulmonary bypass and on hemodynamics (our "Under Pressure" evening is coming up on March 9!), including applied heart–lung interaction; and a wave of studies on ventricular assist devices and their complications (which you will need to know).
There are plenty of other studies as well: nutrition in the ICU and fasting in anesthesia (complementary topics, you might say ;-)), dexmedetomidine and corticosteroids in anesthesia, and of course some sepsis, because we love it! Above all, don't forget to take part in our survey on organ donation (click on the image below), which will close soon; it will mainly help us shape our next flagship event (already full of surprises) dedicated to organ donation (planned date: July 1, 2017, save the date!):
AJAR is donating its organs. What about you?
Pacemakers: no longer a contraindication to MRI?
http://www.nejm.org/doi/full/10.1056/NEJMoa1603265
Russo et al., NEJM, 2017
Background
The presence of a cardiovascular implantable electronic device has long been a contraindication for the performance of magnetic resonance imaging (MRI). We established a prospective registry to determine the risks associated with MRI at a magnetic field strength of 1.5 tesla for patients who had a pacemaker or implantable cardioverter–defibrillator (ICD) that was “non–MRI-conditional” (i.e., not approved by the Food and Drug Administration for MRI scanning).
Methods
Patients in the registry were referred for clinically indicated nonthoracic MRI at a field strength of 1.5 tesla. Devices were interrogated before and after MRI with the use of a standardized protocol and were appropriately reprogrammed before the scanning. The primary end points were death, generator or lead failure, induced arrhythmia, loss of capture, or electrical reset during the scanning. The secondary end points were changes in device settings.
Results
MRI was performed in 1000 cases in which patients had a pacemaker and in 500 cases in which patients had an ICD. No deaths, lead failures, losses of capture, or ventricular arrhythmias occurred during MRI. One ICD generator could not be interrogated after MRI and required immediate replacement; the device had not been appropriately programmed per protocol before the MRI. We observed six cases of self-terminating atrial fibrillation or flutter and six cases of partial electrical reset. Changes in lead impedance, pacing threshold, battery voltage, and P-wave and R-wave amplitude exceeded prespecified thresholds in a small number of cases. Repeat MRI was not associated with an increase in adverse events.
Conclusions
In this study, device or lead failure did not occur in any patient with a non–MRI-conditional pacemaker or ICD who underwent clinically indicated nonthoracic MRI at 1.5 tesla, was appropriately screened, and had the device reprogrammed in accordance with the prespecified protocol.
Tight glycemic control: no benefit in pediatric intensive care either?
http://www.nejm.org/doi/full/10.1056/NEJMoa1612348
Agus et al., NEJM, 2017
Background
In multicenter studies, tight glycemic control targeting a normal blood glucose level has not been shown to improve outcomes in critically ill adults or children after cardiac surgery. Studies involving critically ill children who have not undergone cardiac surgery are lacking.
Methods
In a 35-center trial, we randomly assigned critically ill children with confirmed hyperglycemia (excluding patients who had undergone cardiac surgery) to one of two ranges of glycemic control: 80 to 110 mg per deciliter (4.4 to 6.1 mmol per liter; lower-target group) or 150 to 180 mg per deciliter (8.3 to 10.0 mmol per liter; higher-target group). Clinicians were guided by continuous glucose monitoring and explicit methods for insulin adjustment. The primary outcome was the number of intensive care unit (ICU)–free days to day 28.
Results
The trial was stopped early, on the recommendation of the data and safety monitoring board, owing to a low likelihood of benefit and evidence of the possibility of harm. Of 713 patients, 360 were randomly assigned to the lower-target group and 353 to the higher-target group. In the intention-to-treat analysis, the median number of ICU-free days did not differ significantly between the lower-target group and the higher-target group (19.4 days [interquartile range {IQR}, 0 to 24.2] and 19.4 days [IQR, 6.7 to 23.9], respectively; P=0.58). In per-protocol analyses, the median time-weighted average glucose level was significantly lower in the lower-target group (109 mg per deciliter [IQR, 102 to 118]; 6.1 mmol per liter [IQR, 5.7 to 6.6]) than in the higher-target group (123 mg per deciliter [IQR, 108 to 142]; 6.8 mmol per liter [IQR, 6.0 to 7.9]; P<0.001). Patients in the lower-target group also had higher rates of health care–associated infections than those in the higher-target group (12 of 349 patients [3.4%] vs. 4 of 349 [1.1%], P=0.04), as well as higher rates of severe hypoglycemia, defined as a blood glucose level below 40 mg per deciliter (2.2 mmol per liter) (18 patients [5.2%] vs. 7 [2.0%], P=0.03). No significant differences were observed in mortality, severity of organ dysfunction, or the number of ventilator-free days.
Conclusions
Critically ill children with hyperglycemia did not benefit from tight glycemic control targeted to a blood glucose level of 80 to 110 mg per deciliter, as compared with a level of 150 to 180 mg per deciliter.
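As an aside, the paired mg/dL and mmol/L targets quoted in this abstract are linked by the molar mass of glucose (about 18 mg per mmol). A minimal conversion sketch (the helper names are ours, for illustration only):

```python
# Glucose unit conversion: 1 mmol/L of glucose ≈ 18.016 mg/dL
# (molar mass of glucose ≈ 180.16 g/mol; 1 dL = 0.1 L).
GLUCOSE_MG_PER_MMOL = 18.016

def mgdl_to_mmoll(mgdl: float) -> float:
    """Convert a blood glucose value from mg/dL to mmol/L."""
    return mgdl / GLUCOSE_MG_PER_MMOL

def mmoll_to_mgdl(mmoll: float) -> float:
    """Convert a blood glucose value from mmol/L to mg/dL."""
    return mmoll * GLUCOSE_MG_PER_MMOL

# The trial's lower target, 80-110 mg/dL, maps onto 4.4-6.1 mmol/L:
print(round(mgdl_to_mmoll(80), 1), round(mgdl_to_mmoll(110), 1))   # 4.4 6.1
# and the higher target, 150-180 mg/dL, onto 8.3-10.0 mmol/L:
print(round(mgdl_to_mmoll(150), 1), round(mgdl_to_mmoll(180), 1))  # 8.3 10.0
```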
Should we still use prophylactic LMWH for knee arthroscopy and lower-leg casting?
http://www.nejm.org/doi/full/10.1056/NEJMoa1613303
van Adrichem et al., NEJM, 2017
Background
The use of thromboprophylaxis to prevent clinically apparent venous thromboembolism after knee arthroscopy or casting of the lower leg is disputed. We compared the incidence of symptomatic venous thromboembolism after these procedures between patients who received anticoagulant therapy and those who received no anticoagulant therapy.
Methods
We conducted two parallel, pragmatic, multicenter, randomized, controlled, open-label trials with blinded outcome evaluation: the POT-KAST trial, which included patients undergoing knee arthroscopy, and the POT-CAST trial, which included patients treated with casting of the lower leg. Patients were assigned to receive either a prophylactic dose of low-molecular-weight heparin (for the 8 days after arthroscopy in the POT-KAST trial or during the full period of immobilization due to casting in the POT-CAST trial) or no anticoagulant therapy. The primary outcomes were the cumulative incidences of symptomatic venous thromboembolism and major bleeding within 3 months after the procedure.
Results
In the POT-KAST trial, 1543 patients underwent randomization, of whom 1451 were included in the intention-to-treat population. Venous thromboembolism occurred in 5 of the 731 patients (0.7%) in the treatment group and in 3 of the 720 patients (0.4%) in the control group (relative risk, 1.6; 95% confidence interval [CI], 0.4 to 6.8; absolute difference in risk, 0.3 percentage points; 95% CI, −0.6 to 1.2). Major bleeding occurred in 1 patient (0.1%) in the treatment group and in 1 (0.1%) in the control group (absolute difference in risk, 0 percentage points; 95% CI, −0.6 to 0.7). In the POT-CAST trial, 1519 patients underwent randomization, of whom 1435 were included in the intention-to-treat population. Venous thromboembolism occurred in 10 of the 719 patients (1.4%) in the treatment group and in 13 of the 716 patients (1.8%) in the control group (relative risk, 0.8; 95% CI, 0.3 to 1.7; absolute difference in risk, −0.4 percentage points; 95% CI, −1.8 to 1.0). No major bleeding events occurred. In both trials, the most common adverse event was infection.
Conclusions
The results of our trials showed that prophylaxis with low-molecular-weight heparin for the 8 days after knee arthroscopy or during the full period of immobilization due to casting was not effective for the prevention of symptomatic venous thromboembolism.
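For readers who like to check the numbers, the crude relative risk and its Wald (log-scale) confidence interval quoted for POT-KAST can be reproduced directly from the raw counts. A minimal sketch (the function name is ours):

```python
import math

def relative_risk(a_events, a_n, b_events, b_n, z=1.96):
    """Crude relative risk of group A vs. group B, with a Wald 95% CI."""
    rr = (a_events / a_n) / (b_events / b_n)
    # standard error of log(RR) for two independent proportions
    se = math.sqrt(1/a_events - 1/a_n + 1/b_events - 1/b_n)
    half = z * se
    lo = math.exp(math.log(rr) - half)
    hi = math.exp(math.log(rr) + half)
    return rr, lo, hi

# POT-KAST: VTE in 5/731 LMWH patients vs 3/720 controls
rr, lo, hi = relative_risk(5, 731, 3, 720)
print(f"RR {rr:.1f} (95% CI {lo:.1f} to {hi:.1f})")  # RR 1.6 (95% CI 0.4 to 6.8)
```

The same helper applied to the POT-CAST counts (10/719 vs. 13/716) recovers the reported RR of 0.8.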
The advent of centrifugal-flow cardiac assist devices
Mehra et al., NEJM, 2017
http://www.nejm.org/doi/full/10.1056/NEJMoa1610426
Rogers et al., NEJM, 2017
http://www.nejm.org/doi/full/10.1056/NEJMoa1602954
Biotherapy: a new treatment to know for sickle cell disease?
http://www.nejm.org/doi/full/10.1056/NEJMoa1611770
Ataga et al., NEJM, 2017
Fibrinogen administration in cardiac surgery?
http://jamanetwork.com/journals/jama/article-abstract/2603928
Bilecen et al., JAMA, 2017
Importance
Fibrinogen concentrate might partly restore coagulation defects and reduce intraoperative bleeding.
Objective
To determine whether fibrinogen concentrate infusion dosed to achieve a plasma fibrinogen level of 2.5 g/L in high-risk cardiac surgery patients with intraoperative bleeding reduces intraoperative blood loss.
Design, Setting, and Participants
A randomized, placebo-controlled, double-blind clinical trial conducted in Isala Zwolle, the Netherlands (February 2011-January 2015), involving patients undergoing elective, high-risk cardiac surgery (ie, combined coronary artery bypass graft [CABG] surgery and valve repair or replacement surgery, the replacement of multiple valves, aortic root reconstruction, or reconstruction of the ascending aorta or aortic arch) with intraoperative bleeding (blood volume between 60 and 250 mL suctioned from the thoracic cavity in a period of 5 minutes) were randomized to receive either fibrinogen concentrate or placebo.
Interventions
Intravenous, single-dose administration of fibrinogen concentrate (n = 60) or placebo (n = 60), targeted to achieve a postinfusion plasma fibrinogen level of 2.5 g/L.
Main Outcomes and Measures
The primary outcome was blood loss in milliliters between intervention (ie, after removal of cardiopulmonary bypass) and closure of chest. Safety variables (within 30 days) included: in-hospital mortality, myocardial infarction, cerebrovascular accident or transient ischemic attack, renal insufficiency or failure, venous thromboembolism, pulmonary embolism, and operative complications.
Results
Among 120 patients (mean age, 71 [SD, 10] years; 37 women [31%]) included in the study, combined CABG and valve repair or replacement surgery comprised 72% of procedures, with a mean (SD) cardiopulmonary bypass time of 200 (83) minutes. For the primary outcome, median blood loss in the fibrinogen group was 50 mL (interquartile range [IQR], 29-100 mL) compared with 70 mL (IQR, 33-145 mL) in the control group (P = .19), an absolute difference of 20 mL (95% CI, −13 to 35 mL). There were 6 cases of stroke or transient ischemic attack (4 in the fibrinogen group); 4 myocardial infarctions (3 in the fibrinogen group); 2 deaths (both in the fibrinogen group); 5 cases with renal insufficiency or failure (3 in the fibrinogen group); and 9 cases with reoperative thoracotomy (4 in the fibrinogen group).
Conclusions and Relevance
Among patients with intraoperative bleeding during high-risk cardiac surgery, administration of fibrinogen concentrate, compared with placebo, resulted in no significant difference in the amount of intraoperative blood loss.
Room disinfection: a key element in the fight against multidrug-resistant organisms and C. difficile
http://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736(16)31588-4.pdf
Anderson et al., Lancet, 2017
Background
Patients admitted to hospital can acquire multidrug-resistant organisms and Clostridium difficile from inadequately disinfected environmental surfaces. We determined the effect of three enhanced strategies for terminal room disinfection (disinfection of a room between occupying patients) on acquisition and infection due to meticillin-resistant Staphylococcus aureus, vancomycin-resistant enterococci, C difficile, and multidrug-resistant Acinetobacter.
Methods
We did a pragmatic, cluster-randomised, crossover trial at nine hospitals in the southeastern USA. Rooms from which a patient with infection or colonisation with a target organism was discharged were terminally disinfected with one of four strategies: reference (quaternary ammonium disinfectant except for C difficile, for which bleach was used); UV (quaternary ammonium disinfectant and disinfecting ultraviolet [UV-C] light except for C difficile, for which bleach and UV-C were used); bleach; and bleach and UV-C. The next patient admitted to the targeted room was considered exposed. Every strategy was used at each hospital in four consecutive 7-month periods. We randomly assigned the sequence of strategies for each hospital (1:1:1:1). The primary outcomes were the incidence of infection or colonisation with all target organisms among exposed patients and the incidence of C difficile infection among exposed patients in the intention-to-treat population. This trial is registered with ClinicalTrials.gov, NCT01579370.
Findings
31 226 patients were exposed; 21 395 (69%) met all inclusion criteria, including 4916 in the reference group, 5178 in the UV group, 5438 in the bleach group, and 5863 in the bleach and UV group. 115 patients had the primary outcome during 22 426 exposure days in the reference group (51·3 per 10 000 exposure days). The incidence of target organisms among exposed patients was significantly lower after adding UV to standard cleaning strategies (n=76; 33·9 cases per 10 000 exposure days; relative risk [RR] 0·70, 95% CI 0·50–0·98; p=0·036). The primary outcome was not statistically lower with bleach (n=101; 41·6 cases per 10 000 exposure days; RR 0·85, 95% CI 0·69–1·04; p=0·116), or bleach and UV (n=131; 45·6 cases per 10 000 exposure days; RR 0·91, 95% CI 0·76–1·09; p=0·303) among exposed patients. Similarly, the incidence of C difficile infection among exposed patients was not changed after adding UV to cleaning with bleach (n=38 vs 36; 30·4 cases vs 31·6 cases per 10 000 exposure days; RR 1·0, 95% CI 0·57–1·75; p=0·997).
Interpretation
A contaminated health-care environment is an important source for acquisition of pathogens; enhanced terminal room disinfection decreases this risk.
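The rates quoted in the findings are simple incidence densities, i.e. events divided by total exposure time; the reference-group figure, for instance, follows directly from the counts. A quick sketch (the helper name is ours):

```python
def incidence_per_10k(events, exposure_days):
    """Incidence density: events per 10,000 patient-exposure days."""
    return events / exposure_days * 10_000

# Reference arm: 115 primary-outcome events over 22,426 exposure days
print(round(incidence_per_10k(115, 22_426), 1))  # 51.3
```

Note that the relative risks reported above are model-adjusted, so they do not exactly equal the ratio of these crude rates.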
The value of the laryngeal mask airway in pediatric anesthesia
http://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736(16)31719-6.pdf
Drake-Brockman et al., Lancet, 2017
Background
Perioperative respiratory adverse events (PRAE) are the most common critical incidents in paediatric anaesthesia and occur more often in infants. Use of laryngeal mask airways (LMAs) is associated with reduced PRAE compared with endotracheal tubes in older children (>1 year). We aimed to evaluate the effect of these devices on the incidence of PRAE in infants.
Methods
We did a randomised controlled trial at the Princess Margaret Hospital for Children in Perth (WA, Australia) by recruiting infants (aged 0–12 months) undergoing general (with or without regional or local) anaesthesia with anticipated fentanyl dose 1 μg/kg or lower for minor elective surgery. We excluded patients contraindicated for LMA or endotracheal tube; who had known cardiac disease or airway or thoracic malformations; who were receiving midazolam premedication; who were undergoing airway, thoracic, or abdomen surgery at the time of participation; and if the parents did not speak English. Written parental or guardian consent was obtained before enrolment. Participants were randomly assigned (1:1), by computer-generated variable block randomisation, to receive an LMA (PRO-Breathe, Well Lead Medical Co Ltd, Panyu, China) or an endotracheal tube (Microcuff, Halyard Health Inc, Atlanta, GA, USA). Sealed randomisation envelopes were used to conceal device assignment. An interim analysis was planned once half the number of infants needed (145) had been recruited. The primary outcome was incidence of PRAE, assessed in the intention-to-treat population. The institutional ethics committee at the Princess Margaret Hospital for Children granted ethical approval (1786/EP). The trial is registered with the Australian New Zealand Clinical Trials Registry (ACTRN12610000250033).
Findings
The trial began on July 8, 2010, and was ended early on May 7, 2015, after the interim analysis results met the study stopping rules. During this time, 239 infants were assessed and 181 eligible infants were randomly assigned to receive an LMA (n=85) or an endotracheal tube (n=95). Four infants were not included in the analysis (two due to cancelled procedures, one did not meet inclusion criteria, and one with missing dataset). In the intention-to-treat analysis, PRAE occurred in 50 (53%) infants in the endotracheal tube group and in 15 (18%) infants in the LMA group (risk ratio [RR] 2·94, 95% CI 1·79–4·83, p<0·0001). Laryngospasm and bronchospasm (major PRAE) were recorded in 18 (19%) infants in the endotracheal tube group and in three (4%) infants in the LMA group (RR 5·30, 95% CI 1·62–17·35, p=0·002). No deaths were reported.
Interpretation
In infants undergoing minor elective procedures, LMAs were associated with clinically significantly fewer PRAE and lower occurrence of major PRAE (laryngospasm and bronchospasm) than endotracheal tubes. This difference should be a consideration in airway device selection.
Permissive underfeeding vs. standard feeding in the ICU
http://www.atsjournals.org/doi/pdf/10.1164/rccm.201605-1012OC
Arabi et al., AJRCCM, 2017
Post hoc analysis of the PermiT trial: http://www.nejm.org/doi/full/10.1056/NEJMoa1502826
Rationale: The optimal nutritional strategy for critically ill adults at high nutritional risk is unclear.
Objectives: To examine the effect of permissive underfeeding with full protein intake compared with standard feeding on 90-day mortality in patients with different baseline nutritional risk.
Methods: This is a post hoc analysis of the PermiT (Permissive Underfeeding versus Target Enteral Feeding in Adult Critically Ill Patients) trial.
Measurements and Main Results: Nutritional risk was categorized by the modified Nutrition Risk in Critically Ill score, with high nutritional risk defined as a score of 5–9 and low nutritional risk as a score of 0–4. Additional analyses were performed by categorizing patients by body mass index, prealbumin, transferrin, phosphate, urinary urea nitrogen, and nitrogen balance. Based on the Nutrition Risk in Critically Ill score, 378 of 894 (42.3%) patients were categorized as high nutritional risk and 516 of 894 (57.7%) as low nutritional risk. There was no association between feeding strategy and mortality in the two categories; adjusted odds ratio (aOR) of 0.84 (95% confidence interval [CI], 0.56–1.27) for high nutritional risk and 1.01 (95% CI, 0.64–1.61) for low nutritional risk (interaction P = 0.53). Findings were similar in analyses using other definitions, with the exception of prealbumin. The association of permissive underfeeding versus standard feeding and 90-day mortality differed when patients were categorized by baseline prealbumin level (≤0.10 g/L: aOR, 0.57 [95% CI, 0.31–1.05]; >0.10 and ≤0.15 g/L: aOR, 0.79 [95% CI, 0.42–1.48]; >0.15 g/L: aOR, 1.55 [95% CI, 0.80, 3.01]; interaction P = 0.009).
Conclusions: Among patients with high and low nutritional risk, permissive underfeeding with full protein intake was associated with similar outcomes as standard feeding.
Anti-NMDA receptor encephalitis in the ICU: a good prognosis
http://www.atsjournals.org/doi/full/10.1164/rccm.201603-0507OC
De Montmollin et al., AJRCCM, 2017
Rationale: Encephalitis caused by anti–N-methyl-d-aspartate receptor (NMDAR) antibodies is the leading cause of immune-mediated encephalitis. There are limited data on intensive care unit (ICU) management of these patients.
Objectives: To identify prognostic factors of good neurologic outcome in patients admitted to an ICU with anti-NMDAR encephalitis.
Methods: This was an observational multicenter study of all consecutive adult patients diagnosed with anti-NMDAR encephalitis at the French National Reference Centre, admitted to an ICU between 2008 and 2014. The primary outcome was a good neurologic outcome at 6 months after ICU admission, defined by a modified Rankin Scale score of 0–2.
Measurements and Main Results: Seventy-seven patients were included from 52 ICUs. First-line immunotherapy consisted of steroids (n = 61/74; 82%), intravenous immunoglobulins (n = 71/74; 96%), and plasmapheresis (n = 17/74; 23%). Forty-five (61%) patients received second-line immunotherapy (cyclophosphamide, rituximab, or both). At 6 months, 57% of patients had a good neurologic outcome. Independent factors of good neurologic outcome were early (≤8 d after ICU admission) immunotherapy (odds ratio, 16.16; 95% confidence interval, 3.32–78.64; for combined first-line immunotherapy with steroids and intravenous immunoglobulins vs. late immunotherapy), and a low white blood cell count on the first cerebrospinal examination (odds ratio, 9.83 for <5 vs. >50 cells/mm3; 95% confidence interval, 1.07–90.65). Presence of nonneurologic organ failures at ICU admission and occurrence of status epilepticus during ICU stay were not associated with neurologic outcome.
Conclusions: The prognosis of adult patients with anti-NMDAR encephalitis requiring intensive care is good, especially when immunotherapy is initiated early, advocating for prompt diagnosis and early aggressive treatment.
Gastric ultrasound for assessment of the "full stomach"
Bouvet et al., BJA, 2017
Fasted does not mean "empty stomach"
Van De Putte (no comment) et al., BJA, 2017
Impact of intraoperative tranexamic acid (Exacyl) in spine surgery
Colomina et al., BJA, 2017
Background. Perioperative tranexamic acid (TXA) use can reduce bleeding and transfusion requirements in several types of surgery, but level I evidence proving its effectiveness in major spine surgery is lacking. This study was designed to investigate the hypothesis that TXA reduces perioperative blood loss and transfusion requirements in patients undergoing major spine procedures.
Methods. We conducted a multicentre, prospective, randomized double-blind clinical trial, comparing TXA with placebo in posterior instrumented spine surgery. Efficacy was determined based on the total number of blood units transfused and the perioperative blood loss. Other variables such as the characteristics of surgery, length of hospital stay, and complications were also analysed.
Results. Ninety-five patients undergoing posterior instrumented spine surgery (fusion of >3 segments) were enrolled and randomized: 44 received TXA (TXA group) and 51 received placebo (controls). The groups were comparable for duration of surgery, number of levels fused, and length of hospitalization. Transfusion was not required in 48% of subjects receiving TXA compared with 33% of controls (P = 0.05). Mean number of blood units transfused was 0.85 in the TXA group and 1.42 with placebo (P = 0.06). TXA resulted in a significant decrease in intraoperative bleeding (P = 0.01) and total bleeding (P = 0.01) relative to placebo. The incidence of adverse events was similar in the two groups.
Conclusions. TXA did not significantly reduce transfusion requirements, but significantly reduced perioperative blood loss in adults undergoing major spinal surgery.
Early postoperative apple juice in children
Chauvin et al., BJA, 2017
Background. In children younger than 4 yr, it is difficult to distinguish the cause of postoperative distress, such as thirst, pain, and emergence delirium. This may lead to inappropriate treatment, such as administration of opioids. The aim of this study was to evaluate the influence of early postoperative oral fluid intake on the use of opioid analgesics and the incidence of postoperative vomiting (POV) after paediatric day case surgery.
Methods. After ethics committee approval and with parental informed consent, planned day surgery patients aged 6 months to 4 yr were randomized to the liberal group (LG), in which apple juice (10 ml kg−1) was offered first if the Face Legs Activity Cry COnsolability (FLACC) score was ≥4 in the PACU, or to the control group (CG), in which children were treated after surgery according to the institutional opioid protocol, and drinking was allowed only upon the return to the ward. Bayesian statistical analysis was used to compare POV incidence and opioid use across groups.
Results. Data from 231 patients were analysed. The incidence of POV in the LG and the CG was 11.40 and 23.93%, respectively. An opioid was needed in 14.04% (mean total dose: 0.18 mg kg−1) and 35.89% (mean total dose: 0.20 mg kg−1) of the patients in the LG and the CG. The PACU stay was 53.45 and 65.05 min in the LG and the CG, respectively (all differences were statistically significant).
Conclusions. In our paediatric outpatient setting, early postoperative oral fluid intake was associated with a reduction in opioid use and POV incidence. These results deserve confirmation in other settings.
Propensity-score analysis of intraoperative dexamethasone
Corcoran et al., BJA, 2017
Background. In a post hoc analysis of the ENIGMA-II trial, we sought to determine whether intraoperative dexamethasone was associated with adverse safety outcomes.
Methods. Inverse probability weighting with estimated propensity scores was used to determine the association of dexamethasone administration with postoperative infection, quality of recovery, and adverse safety outcomes for 5499 of the 7112 non-cardiac surgery subjects enrolled in ENIGMA-II.
Results. Dexamethasone was administered to 2178 (40%) of the 5499 subjects included in this analysis and was not associated with wound infection [189 (8.7%) vs 275 (8.3%); propensity score-adjusted relative risk (RR) 1.10; 95% confidence interval (CI) 0.89–1.34; P=0.38], severe postoperative nausea and vomiting on day 1 [242 (7.3%) vs 189 (8.7%); propensity score-adjusted RR 1.06; 95% CI 0.86–1.30; P=0.59], quality of recovery score [median 14, interquartile range (IQR) 12–15, vs median 14, IQR 12–16, P=0.10), length of stay in the postanaesthesia care unit [propensity score-adjusted median (IQR) 2.0 (1.3, 2.9) vs 1.9 (1.3, 3.1), P=0.60], or the primary outcome of the main trial. Dexamethasone administration was associated with a decrease in fever on days 1–3 [182 (8.4%) vs 488 (14.7%); RR 0.61; 95% CI 0.5–0.74; P<0.001] and shorter lengths of stay in hospital [propensity score-adjusted median (IQR) 5.0 (2.9, 8.2) vs 5.3 (3.1, 9.1), P<0.001]. Neither diabetes mellitus nor surgical wound contamination status altered these outcomes.
Conclusion. Dexamethasone administration to high-risk non-cardiac surgical patients did not increase the risk of postoperative wound infection or other adverse events up to day 30, and appears to be safe in patients either with or without diabetes mellitus.
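The ENIGMA-II analysis above relies on inverse probability weighting with estimated propensity scores: each subject is weighted by the inverse of the estimated probability of receiving the treatment they actually received, so that treated and untreated groups become comparable on measured covariates. A toy sketch of the idea (function name and all numbers invented; in practice the propensities come from a fitted model, not hand-set values):

```python
def ipw_mean(outcomes, treated, propensity):
    """IPW estimate of the mean outcome under treatment:
    each treated subject is weighted by 1 / P(treated | covariates)."""
    num = sum(y * t / p for y, t, p in zip(outcomes, treated, propensity))
    den = sum(t / p for t, p in zip(treated, propensity))
    return num / den

# Toy data: binary outcome, treatment flag, estimated propensity of treatment
y = [1, 0, 1, 1, 0, 1]
t = [1, 1, 0, 1, 0, 0]
p = [0.8, 0.4, 0.5, 0.6, 0.3, 0.7]
print(round(ipw_mean(y, t, p), 2))  # 0.54
```

A symmetric estimate for the untreated (weighting by 1/(1−p)) yields the comparison group; in the actual study the propensities were estimated by a model on baseline covariates before weighting.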
Meta-analysis of dexmedetomidine in regional anesthesia
https://academic.oup.com/bja/article/118/2/167/2924212/Evidence-basis-for-using-perineural
Vorobeichik et al., BJA, 2017
Renal effects of cardiopulmonary bypass in cardiac surgery
http://anesthesiology.pubs.asahq.org/article.aspx?articleid=2592738
Lannemyr et al., Anesthesiology, 2017
Background: Acute kidney injury is a common complication after cardiac surgery with cardiopulmonary bypass. The authors evaluated the effects of normothermic cardiopulmonary bypass on renal blood flow, glomerular filtration rate, renal oxygen consumption, and renal oxygen supply/demand relationship, i.e., renal oxygenation (primary outcome) in patients undergoing cardiac surgery.
Methods: Eighteen patients with a normal preoperative serum creatinine undergoing cardiac surgery procedures with normothermic cardiopulmonary bypass (2.5 l · min−1 · m−2) were included after informed consent. Systemic and renal hemodynamic variables were measured by pulmonary artery and renal vein catheters before, during, and after cardiopulmonary bypass. Arterial and renal vein blood samples were taken for measurements of renal oxygen delivery and consumption. Renal oxygenation was estimated from the renal oxygen extraction. Urinary N-acetyl-β-d-glucosaminidase was measured before, during, and after cardiopulmonary bypass.
Results: Cardiopulmonary bypass induced a renal vasoconstriction and redistribution of blood flow away from the kidneys, which in combination with hemodilution decreased renal oxygen delivery by 20%, while glomerular filtration rate and renal oxygen consumption were unchanged. Thus, renal oxygen extraction increased by 39 to 45%, indicating a renal oxygen supply/demand mismatch during cardiopulmonary bypass. After weaning from cardiopulmonary bypass, renal oxygenation was further impaired due to hemodilution and an increase in renal oxygen consumption, accompanied by a seven-fold increase in the urinary N-acetyl-β-d-glucosaminidase/creatinine ratio.
Conclusions: Cardiopulmonary bypass impairs renal oxygenation due to renal vasoconstriction and hemodilution during and after cardiopulmonary bypass, accompanied by increased release of a tubular injury marker.
Complications of ventricular assist devices
http://anesthesiology.pubs.asahq.org/article.aspx?articleid=2598358
Mathis et al., Anesthesiology, 2017
Background: Patients with left ventricular assist devices presenting for noncardiac surgery are increasingly commonplace; however, little is known about their outcomes. Accordingly, the authors sought to determine the frequency of complications, risk factors, and staffing patterns.
Methods: The authors performed a retrospective study at their academic tertiary care center, investigating all adult left ventricular assist device patients undergoing noncardiac surgery from 2006 to 2015. The authors described perioperative profiles of noncardiac surgery cases, including patient, left ventricular assist device, surgical case, and anesthetic characteristics, as well as staffing by cardiac/noncardiac anesthesiologists. Through univariate and multivariable analyses, the authors studied acute kidney injury as a primary outcome; secondary outcomes included elevated serum lactate dehydrogenase suggestive of left ventricular assist device thrombosis, intraoperative bleeding complication, and intraoperative hypotension. The authors additionally studied major perioperative complications and mortality.
Results: Two hundred and forty-six patients underwent 702 procedures. Of 607 index cases, 110 (18%) experienced postoperative acute kidney injury, and 16 (2.6%) had elevated lactate dehydrogenase. Of cases with complete blood pressure data, 176 (27%) experienced intraoperative hypotension. Bleeding complications occurred in 45 cases (6.4%). Thirteen (5.3%) patients died within 30 days of surgery. Independent risk factors associated with acute kidney injury included major surgical procedures (adjusted odds ratio, 4.4; 95% CI, 1.1 to 17.3; P = 0.03) and cases prompting invasive arterial line monitoring (adjusted odds ratio, 3.6; 95% CI, 1.3 to 10.3; P = 0.02) or preoperative fresh frozen plasma transfusion (adjusted odds ratio, 1.7; 95% CI, 1.1 to 2.8; P = 0.02).
Conclusions: Intraoperative hypotension and acute kidney injury were the most common complications in left ventricular assist device patients presenting for noncardiac surgery; perioperative management remains a challenge.
Are perioperative corticosteroids safe?
http://anesthesiology.pubs.asahq.org/article.aspx?articleid=2592740
Toner et al., Anesthesiology, 2017
Background: Glucocorticoids are increasingly used perioperatively, principally to prevent nausea and vomiting. Safety concerns focus on the potential for hyperglycemia and increased infection. The authors hypothesized that glucocorticoids predispose to such adverse outcomes in a dose-dependent fashion after elective noncardiac surgery.
Methods: The authors conducted a systematic literature search of the major medical databases from their inception to April 2016. Randomized glucocorticoid trials in adults specifically reporting on a safety outcome were included and meta-analyzed with Peto odds ratio method or the quality effects model. Subanalyses were performed according to a dexamethasone dose equivalent of low (less than 8 mg), medium (8 to 16 mg), and high (more than 16 mg). The primary endpoints of any wound infection and peak perioperative glucose concentrations were subject to meta-regression.
Results: Fifty-six trials from 18 countries were identified, predominantly assessing dexamethasone. Glucocorticoids did not impact on any wound infection (odds ratio, 0.8; 95% CI, 0.6 to 1.2) but did result in a clinically unimportant increase in peak perioperative glucose concentration (weighted mean difference, 20.0 mg/dl; CI, 11.4 to 28.6; P < 0.001 or 1.1 mM; CI, 0.6 to 1.6). Glucocorticoids reduced peak postoperative C-reactive protein concentrations (weighted mean difference, −22.1 mg/l; CI, −31.7 to −12.5; P < 0.001), but other adverse outcomes and length of stay were unchanged. No dose–effect relationships were apparent.
Conclusions: The evidence at present does not highlight any safety concerns with respect to the use of perioperative glucocorticoids and subsequent infection, hyperglycemia, or other adverse outcomes. Nevertheless, collated trials lacked sufficient surveillance and power to detect clinically important differences in complications such as wound infection.
Predicting fluid responsiveness with a lung recruitment maneuver?
http://anesthesiology.pubs.asahq.org/article.aspx?articleid=2592736
Biais et al., Anesthesiology, 2017
Background: Lung recruitment maneuver induces a decrease in stroke volume, which is more pronounced in hypovolemic patients. The authors hypothesized that the magnitude of stroke volume reduction through lung recruitment maneuver could predict preload responsiveness.
Methods: Twenty-eight mechanically ventilated patients with low tidal volume during general anesthesia were included. Heart rate, mean arterial pressure, stroke volume, and pulse pressure variations were recorded before lung recruitment maneuver (application of continuous positive airway pressure of 30 cm H2O for 30 s), during lung recruitment maneuver when stroke volume reached its minimal value, and before and after volume expansion (250 ml saline, 0.9%, infused during 10 min). Patients were considered as responders to fluid administration if stroke volume increased greater than or equal to 10%.
Results: Sixteen patients were responders. Lung recruitment maneuver induced a significant decrease in mean arterial pressure and stroke volume in both responders and nonresponders. Changes in stroke volume induced by lung recruitment maneuver were correlated with those induced by volume expansion (r2 = 0.56; P < 0.0001). A 30% decrease in stroke volume during lung recruitment maneuver predicted fluid responsiveness with a sensitivity of 88% (95% CI, 62 to 98) and a specificity of 92% (95% CI, 62 to 99). Pulse pressure variations more than 6% before lung recruitment maneuver discriminated responders with a sensitivity of 69% (95% CI, 41 to 89) and a specificity of 75% (95% CI, 42 to 95). The area under the receiver operating characteristic curve generated for changes in stroke volume induced by lung recruitment maneuver (0.96; 95% CI, 0.81 to 0.99) was significantly higher than that for pulse pressure variations (0.72; 95% CI, 0.52 to 0.88; P < 0.05).
Conclusions: The authors’ study suggests that the magnitude of stroke volume decrease during lung recruitment maneuver could predict preload responsiveness in mechanically ventilated patients in the operating room.
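For readers who want to see how the reported diagnostic figures fit together, here is a minimal Python sketch of the standard 2×2-table metrics. The counts below are hypothetical, back-calculated from the reported 88% sensitivity and 92% specificity with 16 responders and 12 nonresponders; they are not taken from the paper's data tables.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Point estimates of standard diagnostic-test metrics from a 2x2 table."""
    sensitivity = tp / (tp + fn)   # true positives among responders
    specificity = tn / (tn + fp)   # true negatives among nonresponders
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts consistent with the reported figures:
# 14 of 16 responders detected, 11 of 12 nonresponders correctly excluded.
sens, spec, ppv, npv = diagnostic_metrics(tp=14, fn=2, tn=11, fp=1)
print(round(sens * 100), round(spec * 100))  # 88 92
```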
Lactate: the best prognostic marker in major trauma patients?
http://anesthesiology.pubs.asahq.org/article.aspx?articleid=2598359
Raux et al., Anesthesiology, 2017
Background: Initial blood lactate and base deficit have been shown to be prognostic biomarkers in trauma, but their respective performances have not been compared.
Methods: Blood lactate levels and base deficit were measured at admission in trauma patients in three level 1 trauma centers. This was a retrospective analysis of prospectively acquired data. The association of initial blood lactate and base deficit with mortality was tested using receiver operating characteristics curve, logistic regression using triage scores (Revised Trauma Score and Mechanism Glasgow scale and Arterial Pressure score), and Trauma Related Injury Severity Score as a reference standard. The authors also used a reclassification method.
Results: The authors evaluated 1,075 trauma patients (mean age, 39 ± 18 yr, with 90% blunt and 10% penetrating injuries and a mortality of 13%). At admission, blood lactate was elevated in 425 (39%) patients and base deficit was elevated in 725 (67%) patients. Blood lactate was correlated with base deficit (R2 = 0.54; P < 0.001). Using logistic regression, blood lactate was a better predictor of death than base deficit when considering its additional predictive value to triage scores and Trauma Related Injury Severity Score. This result was confirmed using a reclassification method but only in the subgroup of normotensive patients (n = 745).
Conclusions: Initial blood lactate should be preferred to base deficit as a biologic variable in scoring systems built to assess the initial severity of trauma patients.
A retrospective study of mesenteric ischemia after cardiac surgery
Guillaume et al., Shock, 2017
Background: Acute mesenteric ischemia (AMI) is a rare but severe complication after cardiac surgery. However, AMI is likely to be more frequent in the subgroup of patients presenting with multiple organ failure after a cardiac surgery. The primary objective of this study was to identify AMI risk factors among patients requiring intensive care unit (ICU) admission after cardiac surgery.
Methods: Retrospective observational study of all the patients requiring admission to two ICUs in a large university hospital after a cardiac surgery procedure. AMI confirmation was based on abdominal computed tomography scan, digestive endoscopy, laparotomy, or postmortem examination. Univariate and multivariate analyses were done to compare pre- and in-ICU characteristics between patients with or without AMI.
Results: Between 2007 and 2013, a cardiac surgery was performed in 4,948 patients, of whom 320 patients (6%) required ICU admission for multiple organ failure. AMI was confirmed in 10% of the patients admitted to the ICU for multiple organ failure (33/320). The prognosis of these patients was extremely poor with 28- and 90-day mortality rates of 64% and 83%, respectively. Nonocclusive mesenteric ischemia (NOMI) was the main mechanism involved in 83% of the patients. Coronary artery bypass graft, need for blood transfusion during cardiopulmonary bypass, aspartate aminotransferase at least 100 UI/L, and Simplified Acute Physiology Score II at least 50 at ICU admission were independently associated with AMI. An AMI risk score based upon these four risk factors was able to identify three classes of risk: low risk (<1%), intermediate risk (9%), and high risk (29%).
Conclusion: AMI is a frequent condition among patients presenting with multiple organ failure after cardiac surgery, occurring in 10% of them. The prognosis of AMI is extremely poor. The main mechanism of AMI is NOMI, occurring in approximately 80% of patients. Further progress is needed in prevention and earlier diagnosis.
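As a toy illustration of how an additive risk score like the one described can work, here is a hypothetical Python sketch: one point per risk factor, with score bands mapped onto the three observed risk classes. The actual point values and band cutoffs of the published score are not given in the abstract, so everything below is an assumption for illustration only.

```python
# Hypothetical 4-factor additive risk score (NOT the published weights):
# one point per risk factor present, banded into the three risk classes
# reported in the abstract (low <1%, intermediate 9%, high 29%).
def ami_risk_class(cabg, transfusion_during_cpb, ast_ge_100, saps2_ge_50):
    score = sum([cabg, transfusion_during_cpb, ast_ge_100, saps2_ge_50])
    if score <= 1:
        return "low (~<1% observed AMI risk)"
    elif score <= 2:
        return "intermediate (~9%)"
    return "high (~29%)"

print(ami_risk_class(True, True, True, False))  # high (~29%)
```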
Meta-analysis of the impact of hypomagnesemia in the ICU
Jiang et al., Shock, 2017
Hypomagnesemia is commonly seen but frequently overlooked in critically ill patients in the intensive care unit (ICU). However, the strength and consistency of the effect of hypomagnesemia on outcomes in critically ill patients remain controversial. In this report, we performed a systematic review and meta-analysis to evaluate the association of serum magnesium level with prognosis of critically ill patients upon admission to the ICU. A comprehensive search for clinical trials was performed, and 10 studies comprising 1,122 cases and 630 controls were finally selected for analysis. The patients with hypomagnesemia had a higher mortality rate (risk ratio [RR] 1.76; 95% confidence interval [CI] 1.54–2.00; P < 0.00001), more frequently had sepsis (RR 2.04; 95% CI 1.21–3.42; P = 0.0007) and more frequently needed ventilatory support (RR 1.36; 95% CI 1.21–1.53; P < 0.00001). Length of ICU stay was also longer in the hypomagnesemia group (RR 1.85; 95% CI 0.43–3.26; P = 0.01). Collectively, our data indicate that hypomagnesemia appears associated with greater risk of mortality, sepsis, mechanical ventilation, and longer ICU stay in patients admitted to the ICU. The role of magnesium therapy in improving outcomes in critically ill patients requires further study.
ESICM recommendations on early enteral nutrition in the ICU
Reintam Blaser et al., ICM, 2017
Purpose
To provide evidence-based guidelines for early enteral nutrition (EEN) during critical illness.
Methods
We aimed to compare EEN vs. early parenteral nutrition (PN) and vs. delayed EN. We defined “early” EN as EN started within 48 h independent of type or amount. We listed, a priori, conditions in which EN is often delayed, and performed systematic reviews in 24 such subtopics. If sufficient evidence was available, we performed meta-analyses; if not, we qualitatively summarized the evidence and based our recommendations on expert opinion. We used the GRADE approach for guideline development. The final recommendations were compiled via Delphi rounds.
Results
We formulated 17 recommendations favouring initiation of EEN and seven recommendations favouring delaying EN. We performed five meta-analyses: in unselected critically ill patients, and specifically in traumatic brain injury, severe acute pancreatitis, gastrointestinal (GI) surgery and abdominal trauma. EEN reduced infectious complications in unselected critically ill patients, in patients with severe acute pancreatitis, and after GI surgery. We did not detect any evidence of superiority for early PN or delayed EN over EEN. All recommendations are weak because of the low quality of evidence, with several based only on expert opinion.
Conclusions
We suggest using EEN in the majority of critically ill patients, with certain precautions. In the absence of evidence, we suggest delaying EN in critically ill patients with uncontrolled shock, uncontrolled hypoxaemia and acidosis, uncontrolled upper GI bleeding, gastric aspirate >500 ml/6 h, bowel ischaemia, bowel obstruction, abdominal compartment syndrome, and high-output fistula without distal feeding access.
Venous thromboembolic events in traumatic brain injury patients
Skrifvars et al., ICM, 2017
Purpose
To estimate the prevalence, risk factors, prophylactic treatment and impact on mortality for venous thromboembolism (VTE) in patients with moderate to severe traumatic brain injury (TBI) treated in the intensive care unit.
Methods
A post hoc analysis of the erythropoietin in traumatic brain injury (EPO-TBI) trial that included twice-weekly lower limb ultrasound screening. Venous thrombotic events were defined as ultrasound-proven proximal deep venous thrombosis (DVT) or clinically detected pulmonary embolism (PE). Results are reported as events, percentages or medians and interquartile range (IQR). Cox regression analysis was used to calculate adjusted hazard ratios (HR) with 95% confidence intervals (CI) for time to VTE and death.
Results
Of 603 patients, 119 (19.7%) developed VTE, mostly comprising DVT (102 patients, 16.9%) with a smaller number of PE events (24 patients, 4.0%). Median time to DVT diagnosis was 6 days (IQR 2–11) and to PE diagnosis 6.5 days (IQR 2–16.5). Mechanical prophylaxis (MP) was used in 91% of patients on day 1, 97% of patients on day 3 and 98% of patients on day 7. Pharmacological prophylaxis was given in 5% of patients on day 1, 30% of patients on day 3 and 57% of patients on day 7. Factors associated with time to VTE were age (HR per year 1.02, 95% CI 1.01–1.03), patient weight (HR per kg 1.01, 95% CI 1–1.02) and TBI severity according to the International Mission for Prognosis and Analysis of Clinical Trials risk of poor outcome (HR per 10% increase 1.12, 95% CI 1.01–1.25). The development of VTE was not associated with mortality (HR 0.92, 95% CI 0.51–1.65).
Conclusions
Despite mechanical and pharmacological prophylaxis, VTE occurs in one out of every five patients with TBI treated in the ICU. Higher age, greater weight and greater severity of TBI increase the risk. The development of VTE was not associated with excess mortality.
Is late-onset ARDS (>48 h) associated with higher mortality?
Zhang et al., ICM, 2017
Purpose
To evaluate the association between acute respiratory distress syndrome (ARDS) onset time and prognosis.
Methods
Patients with moderate to severe ARDS (N = 876) were randomly assigned into derivation (N = 520) and validation (N = 356) datasets. Both 28-day and 60-day survival times after ARDS onset were analyzed. A data-driven cutoff point between early- and late-onset ARDS was determined on the basis of mortality risk effects of onset times. We estimated the hazard ratio (HR) and odds ratio (OR) of late-onset ARDS using a multivariate Cox proportional hazards model of survival time and a multivariate logistic regression model of mortality rate, respectively.
Results
Late-onset ARDS, defined as onset over 48 h after intensive care unit (ICU) admission (N = 273, 31%), was associated with shorter 28-day survival time: HR = 2.24, 95% CI 1.48–3.39, P = 1.24 × 10⁻⁴ (derivation); HR = 2.16, 95% CI 1.33–3.51, P = 1.95 × 10⁻³ (validation); and HR = 2.00, 95% CI 1.47–2.72, P = 1.10 × 10⁻⁵ (combined dataset). Late-onset ARDS was also associated with shorter 60-day survival time: HR = 1.70, 95% CI 1.16–2.48, P = 6.62 × 10⁻³ (derivation); HR = 1.78, 95% CI 1.15–2.75, P = 9.80 × 10⁻³ (validation); and HR = 1.59, 95% CI 1.20–2.10, P = 1.22 × 10⁻³ (combined dataset). Meanwhile, late-onset ARDS was associated with higher 28-day mortality rate (OR = 1.46, 95% CI 1.04–2.06, P = 0.0305) and 60-day mortality rate (OR = 1.44, 95% CI 1.03–2.02, P = 0.0313).
Conclusions
Late-onset moderate to severe ARDS patients had both shorter survival time and higher mortality rate in 28-day and 60-day observations.
Is severe hypercapnia associated with higher mortality?
Nin et al., ICM, 2017
Purpose
To analyze the relationship between hypercapnia developing within the first 48 h after the start of mechanical ventilation and outcome in patients with acute respiratory distress syndrome (ARDS).
Patients and methods
We performed a secondary analysis of three prospective non-interventional cohort studies focusing on ARDS patients from 927 intensive care units (ICUs) in 40 countries. These patients received mechanical ventilation for more than 12 h during 1-month periods in 1998, 2004, and 2010. We used multivariable logistic regression and a propensity score analysis to examine the association between hypercapnia and ICU mortality.
Main outcomes
We included 1899 patients with ARDS in this study. The relationship between maximum PaCO2 in the first 48 h and mortality suggests higher mortality at PaCO2 ≥ 50 mmHg. Patients with severe hypercapnia (PaCO2 ≥ 50 mmHg) had higher complication rates, more organ failures, and worse outcomes. After adjusting for age, SAPS II score, respiratory rate, positive end-expiratory pressure, PaO2/FiO2 ratio, driving pressure, pressure/volume limitation strategy (PLS), corrected minute ventilation, and presence of acidosis, severe hypercapnia was associated with increased risk of ICU mortality [odds ratio (OR) 1.93, 95% confidence interval (CI) 1.32 to 2.81; p = 0.001]. In patients with severe hypercapnia matched for all other variables, ventilation with PLS was associated with higher ICU mortality (OR 1.58, 95% CI 1.04–2.41; p = 0.032).
Conclusions
Severe hypercapnia appears to be independently associated with higher ICU mortality in patients with ARDS.
A predictive score for NIV failure?
Duan et al., ICM, 2017
Purpose
To develop and validate a scale using variables easily obtained at the bedside for prediction of failure of noninvasive ventilation (NIV) in hypoxemic patients.
Methods
The test cohort comprised 449 patients with hypoxemia who were receiving NIV. This cohort was used to develop a scale that considers heart rate, acidosis, consciousness, oxygenation, and respiratory rate (referred to as the HACOR scale) to predict NIV failure, defined as need for intubation after NIV intervention. The highest possible score was 25 points. To validate the scale, a separate group of 358 hypoxemic patients were enrolled in the validation cohort.
Results
The failure rate of NIV was 47.8% and 39.4% in the test and validation cohorts, respectively. In the test cohort, patients with NIV failure had higher HACOR scores at initiation and after 1, 12, 24, and 48 h of NIV than those with successful NIV. At 1 h of NIV the area under the receiver operating characteristic curve was 0.88, showing good predictive power for NIV failure. Using 5 points as the cutoff value, the sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy for NIV failure were 72.6%, 90.2%, 87.2%, 78.1%, and 81.8%, respectively. These results were confirmed in the validation cohort. Moreover, the diagnostic accuracy for NIV failure exceeded 80% in subgroups classified by diagnosis, age, or disease severity and also at 1, 12, 24, and 48 h of NIV. Among patients with NIV failure with a HACOR score of >5 at 1 h of NIV, hospital mortality was lower in those who received intubation at ≤12 h of NIV than in those intubated later [58/88 (66%) vs. 138/175 (79%); p = 0.03].
Conclusions
The HACOR scale variables are easily obtained at the bedside. The scale appears to be an effective way of predicting NIV failure in hypoxemic patients. Early intubation in high-risk patients may reduce hospital mortality.
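A quick sanity check: the 81.8% diagnostic accuracy reported at the 5-point cutoff follows directly from the published sensitivity (72.6%), specificity (90.2%) and NIV failure rate, i.e. prevalence (47.8%), via the standard identity accuracy = Se × prevalence + Sp × (1 − prevalence). A minimal Python check:

```python
def diagnostic_accuracy(sensitivity, specificity, prevalence):
    """Overall accuracy as the prevalence-weighted mix of Se and Sp."""
    return sensitivity * prevalence + specificity * (1 - prevalence)

# Figures taken from the abstract's test cohort at the 5-point cutoff.
acc = diagnostic_accuracy(0.726, 0.902, 0.478)
print(round(acc * 100, 1))  # 81.8
```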
Meta-analysis of conservative fluid strategies
Silversides et al., ICM, 2017
Background
It is unknown whether a conservative approach to fluid administration or deresuscitation (active removal of fluid using diuretics or renal replacement therapy) is beneficial following haemodynamic stabilisation of critically ill patients.
Purpose
To evaluate the efficacy and safety of conservative or deresuscitative fluid strategies in adults and children with acute respiratory distress syndrome (ARDS), sepsis or systemic inflammatory response syndrome (SIRS) in the post-resuscitation phase of critical illness.
Methods
We searched Medline, EMBASE and the Cochrane central register of controlled trials from 1980 to June 2016, and manually reviewed relevant conference proceedings from 2009 to the present. Two reviewers independently assessed search results for inclusion and undertook data extraction and quality appraisal. We included randomised trials comparing fluid regimens with differing fluid balances between groups, and observational studies investigating the relationship between fluid balance and clinical outcomes.
Results
Forty-nine studies met the inclusion criteria. Marked clinical heterogeneity was evident. In a meta-analysis of 11 randomised trials (2051 patients) using a random-effects model, we found no significant difference in mortality with conservative or deresuscitative strategies compared with a liberal strategy or usual care [pooled risk ratio (RR) 0.92, 95 % confidence interval (CI) 0.82–1.02, I2 = 0 %]. A conservative or deresuscitative strategy resulted in increased ventilator-free days (mean difference 1.82 days, 95 % CI 0.53–3.10, I2 = 9 %) and reduced length of ICU stay (mean difference −1.88 days, 95 % CI −0.12 to −3.64, I2 = 75 %) compared with a liberal strategy or standard care.
Conclusions
In adults and children with ARDS, sepsis or SIRS, a conservative or deresuscitative fluid strategy results in an increased number of ventilator-free days and a decreased length of ICU stay compared with a liberal strategy or standard care. The effect on mortality remains uncertain. Large randomised trials are needed to determine optimal fluid strategies in critical illness.
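The pooled risk ratio in a random-effects meta-analysis like this one is typically obtained with the DerSimonian-Laird estimator. Below is a self-contained Python sketch of the method; the per-study log risk ratios and variances are invented for illustration, not extracted from the 11 trials in the review.

```python
import math

def dersimonian_laird(log_rrs, variances):
    """DerSimonian-Laird random-effects pooling of per-study log risk ratios.
    Returns (pooled RR, lower 95% CI, upper 95% CI) on the ratio scale."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    k = len(log_rrs)
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_star = [1 / (v + tau2) for v in variances]         # random-effects weights
    mu = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)

# Toy data: three hypothetical trials, each contributing a log RR and variance.
rr, lo, hi = dersimonian_laird([-0.10, -0.05, 0.02], [0.01, 0.02, 0.015])
```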
Using bundles to improve sepsis care (cost & survival)
Leisman et al., CCM, 2017
Objectives: To determine mortality and costs associated with adherence to an aggressive, 3-hour sepsis bundle versus noncompliance with one or more bundle elements in severe sepsis and septic shock patients.
Design: Prospective, multisite, observational study following three sequential, independent cohorts, from a single U.S. health system, through their hospitalization.
Setting: Cohort 1: five tertiary and six community hospitals. Cohort 2: single tertiary, academic medical center. Cohort 3: five tertiary and four community hospitals.
Patients: Consecutive sample of all severe sepsis and septic shock patients (defined: infection, ≥ 2 systemic inflammatory response syndrome criteria, and hypoperfusive organ dysfunction) identified by a quality initiative. The exposure was full 3-hour bundle compliance. Bundle elements are as follows: 1) blood cultures before antibiotics; 2) parenteral antibiotics administered ≤ 180 minutes from ≥ 2 systemic inflammatory response syndrome criteria “and” lactate ordered, or ≤ 60 minutes from “time-zero,” whichever occurs earlier; 3) lactate result available ≤ 90 minutes postorder; and 4) 30 mL/kg IV crystalloid bolus initiated ≤ 30 minutes from “time-zero.” Main outcomes were in-hospital mortality (all cohorts) and total direct costs (cohorts 2 and 3).
Measurements and Main Results: Cohort 1: 5,819 total patients; 1,050 (18.0%) bundle compliant. Mortality: 604 (22.6%) versus 834 (26.5%); CI, 0.9–7.1%; adjusted odds ratio, 0.72; CI, 0.61–0.86; p < 0.001. Cohort 2: 1,697 total patients; 739 (43.5%) bundle compliant. Mortality: 99 (13.4%) versus 171 (17.8%); CI, 1.0–7.9%; adjusted odds ratio, 0.60; CI, 0.44–0.80; p = 0.001. Mean costs: $14,845 versus $20,056; CI, –$4,798 to –$5,624; adjusted β, –$2,851; CI, –$4,880 to –$822; p = 0.006. Cohort 3: 7,239 total patients; 2,115 (29.2%) bundle compliant. Mortality: 383 (18.1%) versus 1,078 (21.0%); CI, 0.9–4.9%; adjusted odds ratio, 0.84; CI, 0.73–0.96; p = 0.013. Mean costs: $17,885 versus $22,108; CI, –$2,783 to –$5,663; adjusted β, –$1,423; CI, –$2,574 to –$272; p = 0.015.
Conclusions: In three independent cohorts, 3-hour bundle compliance was associated with improved survival and cost savings.
A “Vt challenge” to predict fluid responsiveness?
Myatra et al., CCM, 2017
Objectives: Stroke volume variation and pulse pressure variation do not reliably predict fluid responsiveness during low tidal volume ventilation. We hypothesized that with transient increase in tidal volume from 6 to 8 mL/kg predicted body weight, that is, “tidal volume challenge,” the changes in pulse pressure variation and stroke volume variation will predict fluid responsiveness.
Design: Prospective, single-arm study.
Setting: Medical-surgical ICU in a university hospital.
Patients: Adult patients with acute circulatory failure, having continuous cardiac output monitoring, and receiving controlled low tidal volume ventilation.
Interventions: The pulse pressure variation, stroke volume variation, and cardiac index were recorded at tidal volume 6 mL/kg predicted body weight and 1 minute after the “tidal volume challenge.” The tidal volume was reduced back to 6 mL/kg predicted body weight, and a fluid bolus was given to identify fluid responders (increase in cardiac index > 15%). The end-expiratory occlusion test was performed at tidal volumes 6 and 8 mL/kg predicted body weight and after reducing tidal volume back to 6 mL/kg predicted body weight.
Results: Thirty measurements were obtained in 20 patients. The absolute change in pulse pressure variation and stroke volume variation after increasing tidal volume from 6 to 8 mL/kg predicted body weight predicted fluid responsiveness with areas under the receiver operating characteristic curves (with 95% CIs) being 0.99 (0.98–1.00) and 0.97 (0.92–1.00), respectively. The best cutoff values of the absolute change in pulse pressure variation and stroke volume variation after increasing tidal volume from 6 to 8 mL/kg predicted body weight were 3.5% and 2.5%, respectively. The pulse pressure variation, stroke volume variation, central venous pressure, and end-expiratory occlusion test obtained during tidal volume 6 mL/kg predicted body weight did not predict fluid responsiveness.
Conclusions: The changes in pulse pressure variation or stroke volume variation obtained by transiently increasing tidal volume (tidal volume challenge) are superior to pulse pressure variation and stroke volume variation in predicting fluid responsiveness during low tidal volume ventilation.
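The areas under the ROC curves reported above can be computed nonparametrically: the AUC equals the Mann-Whitney probability that a randomly chosen responder shows a larger change than a randomly chosen nonresponder, with ties counting one half. A small Python sketch with invented measurements (not the study's data):

```python
def auc_mann_whitney(responders, nonresponders):
    """Nonparametric AUC: fraction of responder/nonresponder pairs in which
    the responder's value is larger (ties count 0.5)."""
    pairs = [(r, n) for r in responders for n in nonresponders]
    wins = sum(1.0 if r > n else 0.5 if r == n else 0.0 for r, n in pairs)
    return wins / len(pairs)

# Hypothetical changes in pulse pressure variation after a Vt challenge (%):
auc = auc_mann_whitney([4.0, 5.5, 3.8, 6.1], [1.0, 2.2, 3.9, 0.5])
print(auc)  # 0.9375
```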
Are calcium channel blockers associated with lower mortality in ICU patients with sepsis?
Wiewel et al., CCM, 2017
Objectives: Experimental studies suggest that calcium channel blockers can improve sepsis outcome. The aim of this study was to determine the association between prior use of calcium channel blockers and the outcome of patients admitted to the ICU with sepsis.
Design: A prospective observational study.
Setting: The ICUs of two tertiary care hospitals in the Netherlands.
Patients: In total, 1,060 consecutive patients admitted with sepsis were analyzed, 18.6% of whom used calcium channel blockers.
Interventions: None.
Measurements and Main Results: Considering large baseline differences between calcium channel blocker users and nonusers, a propensity score matched cohort was constructed to account for differential likelihoods of receiving calcium channel blockers. Fifteen plasma biomarkers providing insight into key host responses implicated in sepsis pathogenesis were measured during the first 4 days after admission. Severity of illness over the first 24 hours, sites of infection and causative pathogens were similar in both groups. Prior use of calcium channel blockers was associated with improved 30-day survival in the propensity-matched cohort (mortality 20.2% vs 32.9% in non-calcium channel blocker users; p = 0.009) and in multivariate analysis (odds ratio, 0.48; 95% CI, 0.31–0.74; p = 0.0007). Prior calcium channel blocker use was not associated with changes in the plasma levels of host biomarkers indicative of activation of the cytokine network, the vascular endothelium and the coagulation system, with the exception of antithrombin levels, which were less decreased in calcium channel blocker users.
Conclusions: Prior calcium channel blocker use is associated with reduced mortality in patients following ICU admission with sepsis.
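Propensity-score matching, used here to balance calcium channel blocker users and nonusers, can be sketched as greedy 1:1 nearest-neighbor matching within a caliper. The code below is a generic illustration (the scores and caliper are invented), not the study's actual matching procedure:

```python
def greedy_match(treated_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.
    Returns (treated_index, control_index) pairs within the caliper;
    each control is used at most once."""
    available = dict(enumerate(control_ps))
    pairs = []
    for i, p in sorted(enumerate(treated_ps), key=lambda t: t[1]):
        if not available:
            break
        # Closest still-available control for this treated unit.
        j, q = min(available.items(), key=lambda kv: abs(kv[1] - p))
        if abs(q - p) <= caliper:
            pairs.append((i, j))
            del available[j]
    return pairs

# Toy scores: two treated units each find a close control; 0.50 stays unmatched.
pairs = greedy_match([0.30, 0.70], [0.31, 0.69, 0.50])
print(pairs)  # [(0, 0), (1, 1)]
```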
Muscle weakness at ICU discharge & 5-year mortality
Dinglas et al., CCM, 2017
Objectives: To longitudinally evaluate the association of post-ICU muscle weakness and associated trajectories of weakness over time with 5-year survival.
Design: Longitudinal prospective cohort study over 5 years of follow-up.
Setting: Thirteen ICUs in four hospitals in Baltimore, MD.
Patients: One hundred fifty-six acute respiratory distress syndrome survivors.
Interventions: None.
Measurements and Main Results: Strength was evaluated with standardized manual muscle testing using the Medical Research Council sum score (range, 0–60; higher is better), with post-ICU weakness defined as a sum score less than 48. Muscle strength was assessed at hospital discharge and at 3, 6, 12, 24, 36, and 48 months after acute respiratory distress syndrome. At discharge, 38% of patients had muscle weakness. Every one-point increase in sum score at discharge was associated with improved survival (hazard ratio [95% CI], 0.96 [0.94–0.98]), with similar findings longitudinally (0.95 [0.93–0.98]). Having weakness at discharge was associated with worse 5-year survival (1.75 [1.01–3.03]), but the association was attenuated (1.54 [0.82–2.89]) when evaluated longitudinally over follow-up. Persisting and resolving trajectories of muscle weakness, occurring in 50% of patients during follow-up, were associated with worse survival (3.01 [1.12–8.04] and 3.14 [1.40–7.03], respectively) compared to a trajectory of maintaining no muscle weakness.
Conclusions: At hospital discharge, greater than one third of acute respiratory distress syndrome survivors had muscle weakness. Greater strength at discharge and throughout follow-up was associated with improved 5-year survival. In patients with post-ICU weakness, both persisting and resolving trajectories were commonly experienced and associated with worse survival during follow-up.
Ten-year epidemiology of septic shock
http://journal.publications.chestnet.org/article.aspx?articleID=2540467
Kadri et al., Chest, 2017
Background Reports that septic shock incidence is rising and mortality rates declining may be confounded by improving recognition of sepsis and changing coding practices. We compared trends in septic shock incidence and mortality in academic hospitals using clinical vs claims data.
Methods We identified all patients with concurrent blood cultures, antibiotics, and vasopressors for ≥ two consecutive days, and all patients with International Classification of Diseases, 9th edition (ICD-9) codes for septic shock, at 27 academic hospitals from 2005 to 2014. We compared annual incidence and mortality trends. We reviewed 967 records from three hospitals to estimate the accuracy of each method.
Results Of 6.5 million adult hospitalizations, 99,312 (1.5%) were flagged by clinical criteria, 82,350 (1.3%) by ICD-9 codes, and 44,651 (0.7%) by both. Sensitivity for clinical criteria was higher than claims (74.8% vs 48.3%; P < .01), whereas positive predictive value was comparable (83% vs 89%; P = .23). Septic shock incidence, based on clinical criteria, rose from 12.8 to 18.6 cases per 1,000 hospitalizations (average, 4.9% increase/y; 95% CI, 4.0%-5.9%), while mortality declined from 54.9% to 50.7% (average, 0.6% decline/y; 95% CI, 0.4%-0.8%). In contrast, septic shock incidence, based on ICD-9 codes, increased from 6.7 to 19.3 per 1,000 hospitalizations (19.8% increase/y; 95% CI, 16.6%-20.9%), while mortality decreased from 48.3% to 39.3% (1.2% decline/y; 95% CI, 0.9%-1.6%).
Conclusions A clinical surveillance definition based on concurrent vasopressors, blood cultures, and antibiotics accurately identifies septic shock hospitalizations and suggests that the incidence of patients receiving treatment for septic shock has risen and mortality rates have fallen, but less dramatically than estimated on the basis of ICD-9 codes.
Cost-effectiveness of PCT in the ICU in the United States
http://journal.publications.chestnet.org/article.aspx?articleID=2548134
Balk et al., Chest, 2017
Background There is a growing use of procalcitonin (PCT) to facilitate the diagnosis and management of severe sepsis. We investigated the impact of one to two PCT determinations on ICU day 1 on health-care utilization and cost in a large research database.
Methods A retrospective, propensity score-matched multivariable analysis was performed on the Premier Healthcare Database for patients admitted to the ICU with one to two PCT evaluations on day 1 of ICU admission vs patients who did not have PCT testing.
Results A total of 33,569 PCT-managed patients were compared with 98,543 propensity score-matched non-PCT patients. In multivariable regression analysis, PCT utilization was associated with significantly decreased total length of stay (11.6 days [95% CI, 11.4 to 11.7] vs 12.7 days [95% CI, 12.6 to 12.8]; 95% CI for difference, 1 to 1.3; P < .001) and ICU length of stay (5.1 days [95% CI, 5.1 to 5.2] vs 5.3 days [95% CI, 5.3 to 5.4]; 95% CI for difference, 0.1 to 0.3; P < .03), and lower hospital costs ($30,454 [95% CI, 29,968 to 31,033] vs $33,213 [95% CI, 32,964 to 33,556]; 95% CI for difference, 2,159 to 3,321; P < .001). There was significantly less total antibiotic exposure (16.2 days [95% CI, 16.1 to 16.5] vs 16.9 days [95% CI, 16.8 to 17.1]; 95% CI for difference, –0.9 to 0.4; P = .006) in PCT-managed patients. Patients in the PCT group were more likely to be discharged to home (44.1% [95% CI, 43.7 to 44.6] vs 41.3% [95% CI, 41 to 41.6]; 95% CI for difference, 2.3 to 3.3; P = .006). Mortality was not different in an analysis including the 96% of patients who had an independent measure of mortality risk available (19.1% [95% CI, 18.7 to 19.4] vs 19.1% [95% CI, 18.9 to 19.3]; 95% CI for difference, –0.5 to 0.4; P = .93).
Conclusions Use of PCT testing on the first day of ICU admission was associated with significantly lower hospital and ICU lengths of stay, as well as decreased total, ICU, and pharmacy cost of care. Further elucidation of clinical outcomes requires additional data.