Poor preoperative nutritional status in cardiac patients is a risk factor for acute kidney injury (AKI), morbidity, and mortality, yet data on nutritional status and AKI risk in patients undergoing cardiac surgery are scarce. This study was conducted to assess the nutritional status of cardiac surgery patients using body composition measures (BCM) and other biochemical parameters. The study was conducted at Madras Medical Mission Hospital, Chennai. Informed consent from the patients and ethical authorization were obtained before enrollment. All patients >18 years of age undergoing cardiac surgery had a BCM analysis done preoperatively and on postoperative day 5. A paired t-test was used to compare the pre- and postoperative data. Preoperative body mass index (BMI) showed that the majority of patients were overweight, with a mean BMI of 26.55 kg/m². There were no significant changes in the BCM results for protein weight in either study group (no-AKI group: preop, mean ± SD, 9.0316 ± 2.39, p = 0.67; postop, mean ± SD, 9.1919 ± 2.57, p = 0.77; AKI group: preop, mean ± SD, 9.57 ± 8.00, p = 0.67; postop, mean ± SD, 9.56 ± 8.07, p = 0.77). There was a significant loss of body fat in all patients, but it was greater in patients who developed AKI (preop: mean ± SD, 33.28 ± 10.96, p = 0.11 vs postop: mean ± SD, 31.83 ± 10.94, p = 0.53). Skeletal muscle mass showed no significant change in either group. Patients who developed AKI postoperatively had a higher preoperative visceral fat area (VFA) (mean, 116.87) and percentage body fat (PBF) (33%) than patients who did not develop AKI (VFA, 102.36; PBF, 30%). We found that patients lost body fat after surgery; those diagnosed with AKI had overhydration, a high waist circumference, and a high VFA preoperatively.
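The pre- vs postoperative comparison described above uses a paired t-test. A minimal sketch of that computation follows, implemented from scratch with the standard library; the readings are illustrative placeholders, not the study's actual data.

```python
import math
import statistics

def paired_t(pre, post):
    """Return the paired t statistic and its degrees of freedom."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return statistics.mean(diffs) / se, n - 1

# hypothetical protein-weight readings (kg) for six patients
preop  = [9.1, 8.7, 9.5, 10.2, 8.9, 9.3]
postop = [9.2, 8.8, 9.4, 10.1, 9.0, 9.5]

t, df = paired_t(preop, postop)
print(f"t = {t:.2f} on {df} df")  # compare t against the t distribution for a p-value
```

In practice a library routine such as `scipy.stats.ttest_rel` would also return the p-value directly.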
Publications
2025
AIM: To study the perceptions of nonintensivists in Indian intensive care units (ICUs) about the role of intensivists as leaders of the ICU and their impact on patient outcomes, including length of stay on the ventilator, cost of care, and evidence-driven quality care, using a survey questionnaire.
MATERIALS AND METHODS: This study employed an online survey conducted using a Google Form and distributed via WhatsApp to nonintensivists taking care of ICU/high dependency unit (HDU) patients in public and private hospitals all over India. It consisted of 24 questions related to perceptions about the role of an intensivist in the ICU, their impact on patient-driven outcomes, ICU processes, and ICU structure.
RESULTS: There was a statistically significant difference in responses from respondents working in closed and semi-open ICUs vs open ICUs. Overall, the presence of an intensivist was perceived to be associated with improvements in patient outcomes, smoother decision-making for complex cases, reduced costs by avoiding unnecessary tests, and reduced litigation by patient families, especially in closed and semi-open ICUs vs open ICUs.
CONCLUSION: This is the first-ever survey done to understand the role of an intensivist in the ICU in India in the eyes of a nonintensivist/admitting physician or surgeon. It shows that intensivists are considered to play a significant role in impacting patient outcomes, such as facilitating smoother decision-making in complex cases, improving decision-making efficiency, reducing costs associated with unnecessary tests, and preventing litigation by families. The survey results are very encouraging and should pave the way for conducting large-scale surveys in the developing world.
BACKGROUND: Sepsis is a leading cause of mortality globally, yet obtaining accurate population-level data remains challenging. According to a 2020 report, there were approximately 48.9 million cases of sepsis and 11 million sepsis-related fatalities worldwide, accounting for 20% of all deaths globally. This study aims to assess the diagnostic efficacy of patient evaluation in comparison with the Sequential Organ Failure Assessment (SOFA), Acute Physiology and Chronic Health Evaluation II (APACHE II), and Simplified Acute Physiology Score (SAPS) indices, in a quaternary care hospital, and to analyze the impact of various clinical parameters and comorbidities on patient outcomes.
MATERIALS AND METHODS: The study was conducted at Hindu Mission Hospital in Chennai and used a retrospective design to analyze septicemia patients' data from June 2018 to January 2020. The database included clinical presentation, vital signs, comorbidities, laboratory values, and septicemia features. Specimens underwent smear microscopy and mycobacterial culture analysis.
RESULTS: The study found that elevated SOFA and APACHE II scores, comorbidities, prompt antibiotic administration, and infection characteristics significantly impact sepsis patient outcomes, emphasizing the importance of timely intervention and comprehensive scoring systems.
CONCLUSION: The study emphasizes the significance of a comprehensive approach to sepsis management, including early detection, prompt intervention, and managing comorbid conditions, and suggests future research should focus on accurate predictive models and personalized medicine approaches.
BACKGROUND: Contrast-induced nephropathy (CIN) is an iatrogenic impairment of the kidneys that can occur in susceptible persons after intravascular injection of contrast agents. Individuals undergoing percutaneous coronary intervention (PCI) for acute coronary syndrome (ACS) often bear the risk of developing CIN. The likelihood of CIN can be predicted using several techniques, although none of them is very accurate. The CHA2DS2-VASc score is used to predict unfavorable clinical outcomes in patients with ACS and atrial fibrillation. The score comprises preprocedural variables, is simple to calculate, and can be used for predicting CIN. This study aims to validate the CHA2DS2-VASc score for predicting the occurrence of CIN among patients undergoing PCI.
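The CHA2DS2-VASc score mentioned above is a simple sum of preprocedural variables. The sketch below encodes the standard scoring rules (it is not taken from the study itself):

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_tia, vascular_disease):
    """Return the CHA2DS2-VASc score (0-9) from its standard components."""
    score = 0
    score += 1 if chf else 0               # C:  congestive heart failure
    score += 1 if hypertension else 0      # H:  hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A: age bands
    score += 1 if diabetes else 0          # D:  diabetes mellitus
    score += 2 if stroke_tia else 0        # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0  # V:  vascular disease
    score += 1 if female else 0            # Sc: sex category (female)
    return score

# e.g., a 70-year-old man with hypertension and diabetes scores 3,
# above the >2 threshold the study associates with higher CIN risk
print(cha2ds2_vasc(age=70, female=False, chf=False, hypertension=True,
                   diabetes=True, stroke_tia=False, vascular_disease=False))
```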
MATERIALS AND METHODS: This cross-sectional research has been carried out at a tertiary care hospital. The study comprised a total of 182 patients who were admitted with ACS and underwent PCI. CIN incidence was computed. The study population was divided into two groups (the CIN group and the non-CIN group) based on the incidence of CIN. The CHA2DS2-VASc score was computed for every patient. The best cutoff values of the CHA2DS2-VASc score to predict the development of CIN were found using receiver operating characteristic (ROC) curve analysis. The incidence of CIN was computed both above and below the CHA2DS2-VASc score's optimal cutoff point.
RESULTS: The incidence of CIN among patients undergoing PCI was 14.3%, and the ROC value for the CHA2DS2-VASc score was 0.896. Statistically significant increases in the incidence of CIN were observed in patients undergoing PCI who had a CHA2DS2-VASc score of >2. Additionally, a significant relationship was discovered between CIN and age, diabetes, hypertension, prior coronary artery disease (CAD), and Killip class ≥2.
CONCLUSION: Patients with a CHA2DS2-VASc score of >2 had a higher incidence of CIN. The CHA2DS2-VASc score was found to be useful in predicting contrast nephropathy among patients with acute myocardial infarction undergoing angiography.
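The optimal-cutoff step of the ROC analysis described above is commonly done by maximizing Youden's J (sensitivity + specificity − 1); the sketch below assumes that approach, with illustrative scores and CIN labels rather than the study's data.

```python
def best_cutoff(scores, labels):
    """Return (cutoff, J): the cutoff maximizing Youden's J, where 'positive'
    means score > cutoff and labels are 1 for CIN, 0 for no CIN."""
    best = (None, -1.0)
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s > c and y)
        fn = sum(1 for s, y in zip(scores, labels) if s <= c and y)
        fp = sum(1 for s, y in zip(scores, labels) if s > c and not y)
        tn = sum(1 for s, y in zip(scores, labels) if s <= c and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best[1]:
            best = (c, j)
    return best

# hypothetical patients: CHA2DS2-VASc scores and whether CIN developed
scores = [1, 1, 2, 2, 3, 3, 4, 5]
cin    = [0, 0, 0, 0, 1, 0, 1, 1]
print(best_cutoff(scores, cin))  # here the best cutoff is >2
```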
AIM: To ascertain the function of neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) as biomarkers in evaluation of disease activity in rheumatoid arthritis (RA) patients.
MATERIALS AND METHODS: This cross-sectional research was performed in a hospital and included 381 patients who met the 2010 ACR/EULAR criteria for RA. The Clinical Disease Activity Index (CDAI) was used to evaluate disease activity in addition to demographic and disease-related variables. Based on preestablished CDAI cutoff values, the participants were categorized into four groups. For each patient, laboratory analysis included C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), and complete blood count (CBC). PLR and NLR were computed following the conventional procedure. The NLR and PLR values of the four patient groups were compared, and the relationship between disease activity indices and NLR and PLR was investigated using Pearson correlation analysis.
RESULTS: The mean PLR was 132.8 ± 127.7 and the mean NLR was 3.66 ± 2.6. Patients with low disease activity had a substantially lower mean PLR (p = 0.021) than those with higher disease activity. The mean NLR in relation to CDAI did not differ significantly (p = 0.69) across the four groups. While there was a weak positive association between PLR and the physician visual analog scale (VAS) (r = 0.22), patient VAS (r = 0.12), and CDAI (r = 0.17), there was no correlation between CDAI or specific disease indices and NLR on Pearson correlation analysis.
CONCLUSION: PLR, but not NLR, may be an effective biomarker for evaluating the disease activity level in RA patients, particularly higher disease activity.
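The Pearson correlation analysis used above can be sketched from scratch as below; the PLR and CDAI values are illustrative, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # unscaled covariance
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical PLR and CDAI values for eight patients
plr  = [90, 110, 120, 135, 150, 160, 180, 200]
cdai = [10, 4, 14, 9, 30, 12, 20, 25]
print(f"r = {pearson_r(plr, cdai):.2f}")
```

A library routine such as `scipy.stats.pearsonr` additionally returns the associated p-value.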
OBJECTIVE: To assess the prevalence of guideline-directed medical therapy (GDMT) and identify reasons for nonprescription and dose optimization in heart failure patients with reduced ejection fraction (HFrEF) in a tertiary care hospital in southern India.
METHODS: A cross-sectional study was conducted in a tertiary care hospital involving HFrEF patients. Patients with heart failure were categorized based on GDMT prescriptions. Reasons for nonprescription and suboptimal dosing were identified.
RESULTS: The study included 102 HFrEF patients with a mean age of 54 ± 11.7 years, predominantly male (89%). Only 10.8% of patients received GDMT at optimal doses. Although 62% were on triple therapy, many had one or more medications at suboptimal doses. Additionally, 26% of patients were not prescribed all recommended drug classes. Notably, the majority of patients with renal impairment failed to receive triple therapy. Barriers identified included hemodynamic issues and renal dysfunction.
CONCLUSION: GDMT adherence in HFrEF patients is significantly lower than expected, with only 10.8% receiving therapy at recommended doses. Key issues include suboptimal dosing and incomplete prescription of drug classes, influenced by patient-specific factors and systemic barriers.
BACKGROUND: The number of people living with multiple chronic medical conditions has risen, and with it, the number of medications they take. In addition to adherence to medications, it is extremely important that patients correctly identify their medications. Medication errors occur at all steps, with polypharmacy, low literacy, language barriers, old age, and lack of communication as contributing factors. Many patients may not be identifying medications themselves or may be doing so incorrectly. Hence, this study aimed to examine the methods used by patients to identify their medications.
MATERIALS AND METHODS: A total of 150 patients attending the outpatient department (OPD) of the medicine department were interviewed using a structured questionnaire, which had multiple-choice questions and one open-ended question. Sociodemographic data, level of education, data on type and number of clinical conditions, groups of medications taken, and methods used for identification of medications were collected. Statistical analysis was done using Stata 14.2.
RESULTS: Most (85.33%) of the patients had a chronic medical condition, of whom 37.33% had two or more clinical conditions. Physical attributes of the tablets (60%) and packaging (39.33%) were most commonly used to identify medications. About 10.67% did not identify the medications themselves. In addition, 45.33% of the patients depended on the doctor's prescription for the dosing of medications. Patients felt that identification of medications would be easier if the content on packaging included the indication, was written in the local language, and was in bold font. They also felt it would help if healthcare professionals spent more time explaining.
CONCLUSION: Irrespective of the level of education, language known, and number of comorbidities, physical attributes and packaging were most commonly used to identify medications.
PURPOSE: To compare the visibility of fundus lesions between RGB and RG images obtained with an ultra-widefield (UWF) fundus imaging device (Optos) for 10 types of fundus lesions.
METHODS: UWF images from 30 patients representing 10 types of fundus lesions were analyzed: vessel sheathing, optic disc cupping, cotton wool spots, epiretinal membrane, laser photocoagulation scars, retinal drusen, retinal hemorrhage, retinal/choroidal detachment, chorioretinal atrophy, and macular degeneration. Three images of each type of lesion were used, and 26 board-certified ophthalmologists compared them. The raters compared the visibility of lesions on a five-point scale: RG significantly better = -2; RG slightly better = -1; equal = 0; RGB slightly better = +1; and RGB significantly better = +2. The Wilcoxon signed-rank test was used to determine the significance of the differences.
RESULTS: RGB images were rated significantly more visually favorable than the RG images for all 10 lesions (P < 0.01). The greatest improvements in perceived visibility in RGB images were observed for vessel sheathing (50.7%), optic disc cupping (49.8%), cotton wool spots (46.9%), and an epiretinal membrane (46.7%). Conversely, macular degeneration (22.7%) and chorioretinal atrophy (25.1%) had minimal advantages in RGB images.
CONCLUSIONS: RGB imaging improves the visibility of white and superficial fundus lesions but adds little benefit for deeper lesions.
TRANSLATIONAL RELEVANCE: The results indicate that RGB imaging, which includes blue laser light, improves the visibility of superficial and white retinal lesions. These findings support the optimized use of color imaging modalities in clinical practice based on lesion characteristics.
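The rating comparison described above scores each lesion on a −2 to +2 scale and tests the scores with a Wilcoxon signed-rank test. A minimal sketch of the test's rank-sum statistic follows, with illustrative ratings rather than the study's data.

```python
def wilcoxon_w(diffs):
    """Return (W+, W-): rank sums of positive and negative differences.
    Zero differences are discarded; tied magnitudes get average ranks."""
    d = [x for x in diffs if x != 0]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1                      # extend over the block of tied magnitudes
        avg = (i + j) / 2 + 1           # average 1-based rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    w_minus = sum(r for r, x in zip(ranks, d) if x < 0)
    return w_plus, w_minus

# hypothetical ratings from ten raters (+ favors RGB, - favors RG, 0 = equal)
ratings = [2, 1, 1, 0, 2, 1, -1, 2, 1, 1]
print(wilcoxon_w(ratings))
```

In practice `scipy.stats.wilcoxon` computes the statistic and p-value directly, including the tie correction.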
Visual crowding is the disruptive effect of nearby details on the perception of a target. This influence is dependent on both spatial separation and perceived similarity between target and flanker elements. However, it is not clear how these simultaneous influences combine to produce the final "crowded" percept as flankers traverse the limits of the crowding zone. We investigated the reported appearance of a peripherally presented Landolt-C target flanked by a pair of simultaneously presented Landolt-Cs across different levels of target-flanker similarity (relative orientation), spatial separation, and target eccentricity. The distributions of errors in reported target orientation were fitted with a pooling model that simulated errors using a weighted combination of target and flanker orientation signals. The change in error distribution with target-flanker spacing (the "spatial profile") was fitted with a logistic function, estimating both the rate at which target- and flanker-signal weighting varies as target-flanker spatial separation decreases (slope) and the spatial separation at which signals were balanced (midpoint). We found that the slope of the spatial profile increases as target-flanker similarity decreases, with similar modulation patterns across target eccentricities. In contrast, spatial profile midpoints increased linearly with eccentricity, in line with Bouma's law, but were invariant of target-flanker similarity. This suggests similarity-related modulation may operate within a fixed spatial extent at each eccentricity. Investigating the spatial profile of crowding disentangles effects related to the appearance of targets and flankers (i.e., similarity) from appearance-independent influences, which can be confounded when using other common measures to define crowding zone extent.
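The "spatial profile" fitted above treats the weight given to flanker signals as a logistic function of target-flanker separation, parameterized by a slope and a midpoint. A minimal sketch, with illustrative parameter values (the study's fits are not reproduced here):

```python
import math

def flanker_weight(separation, midpoint, slope):
    """Logistic weight on flanker signals: near 1 at zero separation,
    0.5 at the midpoint, and falling toward 0 at large separations."""
    return 1.0 / (1.0 + math.exp(slope * (separation - midpoint)))

# at the midpoint, target and flanker signals are balanced (weight = 0.5);
# a steeper slope means the weighting changes faster with separation
print(round(flanker_weight(2.0, midpoint=2.0, slope=1.5), 2))
```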
What is the impact of dynamic changes in facial visibility on identifying the eye and forehead region? This study examines how wearing or removing a mask affects the ability to visually identify the eyes. We investigate whether these changes impact the recognition of upper facial features and alter sensitivity to misalignment of the face's upper and lower halves, which disrupts holistic face processing. Results show that removing a mask generally impairs visual identification, suggesting that perception of the whole face hinders recognition of the upper half; consistent with this, the interference decreases when the face is misaligned. In contrast, the impairment in identification caused by adding a mask to a target face arises from losing the original support of holistic processing, given that it was not diminished when the upper and lower halves were misaligned. Additional findings show that misalignment negatively affects the identification of faces where a mask was either maintained or added, suggesting that masks may actually help direct attention to relevant facial features rather than being integrated into a holistic representation. We discuss these results in light of their theoretical and practical implications for visual identification, particularly in the context of dynamic changes to facial appearance.