Use this identifier to cite or link to this item: http://repo.saocamilo-sp.br:8080/jspui/handle/123456789/2152
Title: An algorithm for the National Institute of Health Stroke Scale assessment: a multicenter, two-arm and cluster randomized study
Author(s): Andrade, João Brainer Clares de
Pacheco, Evelyn de Paula
Camilo, Millene Rodrigues
Rodriguez, Carlos Eduardo Lenis
Nascimento, Paula Sanchez
Oliveira, Nathalia Souza de
Carneiro, Thiago S.
Oliveira, Renato Andre Castro de
Silva, Gisele Sampaio
Keywords: Telemedicine
Stroke
Digital health
Issue date: 2024
Publisher: Elsevier
Citation: Andrade, João Brainer Clares de et al. An algorithm for the National Institute of Health Stroke Scale assessment: a multicenter, two-arm and cluster randomized study. Journal of Stroke and Cerebrovascular Diseases, v. 33, n. 7, p. 107723, July 2024.
Abstract: Background: The NIH Stroke Scale (NIHSS) is a validated tool for assessing stroke severity, increasingly used by general practitioners in telemedicine services. Mobile apps may enhance its reliability. We aim to validate a digital platform (SPOKES) for NIHSS assessment in telemedicine and healthcare settings. Methods: Hospitals using a telemedicine service were randomly allocated to control or SPOKES-user groups. The discrepancy between the NIHSS scores reported and those confirmed by experts was evaluated. Healthcare providers from comprehensive stroke centers were invited for interrater validation. Participants were randomized to assess the NIHSS using videos of real patients. Weighted kappa (wk) statistics analyzed the agreement, and logistic regression determined the association with complete congruency. Results: A total of 299 telemedicine consultations from 12 hospitals were included. The difference between the NIHSS scores reported and double-checked was lower in the SPOKES group (p = 0.03), with a significantly higher level of complete agreement (72.5% vs. 50.4%, p = 0.005). Adoption of SPOKES was associated with complete congruency (OR 4.01, 95% CI 1.42–11.35, p = 0.009). For interrater validation, 20 participants were considered. In the SPOKES group, almost-perfect and strong agreement occurred in 13.3% (n = 6/45) and 84.4% (n = 38/45) of ratings, respectively; in the control group, 6.7% (n = 3/45) were almost-perfect, 28.9% (n = 13/45) strong, and 51% (n = 23/45) minimal. Conclusion: A free and reliable mobile application for NIHSS assessment can significantly improve interrater agreement between healthcare professionals, and between NIHSS-certified neurologists and general practitioners. Our results underscore the importance of ongoing training and education in enhancing the consistency and reliability of NIHSS scores.
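
As an illustration only (not the authors' code or data), the following Python sketch shows how a quadratic-weighted Cohen's kappa, the agreement statistic named in the abstract, could be computed for two raters' NIHSS scores using scikit-learn; the scores and variable names below are hypothetical.

    # Minimal sketch: quadratic-weighted Cohen's kappa between two raters.
    # The NIHSS scores below are made-up example values, not study data.
    from sklearn.metrics import cohen_kappa_score

    rater_a = [4, 12, 7, 0, 21, 15, 3, 9]   # hypothetical NIHSS scores (0-42)
    rater_b = [5, 12, 6, 0, 19, 15, 3, 10]  # same patients, second rater

    # Quadratic weights penalize larger score discrepancies more heavily,
    # which suits an ordinal scale such as the NIHSS.
    wk = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
    print(f"Weighted kappa: {wk:.2f}")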
URI: http://repo.saocamilo-sp.br:8080/jspui/handle/123456789/2152
ISSN: 1052-3057
Appears in collections: Journal Articles

Files associated with this item:
There are no files associated with this item.

