Deep Learning for Classification and Localization of COVID-19 Markers in Point-of-Care Lung Ultrasound

Abstract: Deep learning (DL) has proved successful in medical imaging and, in the wake of the recent COVID-19 pandemic, some works have started to investigate DL-based solutions for the assisted diagnosis of lung diseases. While existing works focus on CT scans, this paper studies the application of DL techniques to the analysis of lung ultrasonography (LUS) images. Specifically, we present a novel fully-annotated dataset of LUS images collected from several Italian hospitals, with labels indicating the degree of disease severity at the frame level, video level, and pixel level (segmentation masks). Leveraging these data, we introduce several deep models that address relevant tasks for the automatic analysis of LUS images. In particular, we present a novel deep network, derived from Spatial Transformer Networks, which simultaneously predicts the disease severity score associated with an input frame and localizes pathological artefacts in a weakly-supervised way. Furthermore, we introduce a new method based on uninorms for effective frame-score aggregation at the video level. Finally, we benchmark state-of-the-art deep models for estimating pixel-level segmentations of COVID-19 imaging biomarkers. Experiments on the proposed dataset demonstrate satisfactory results on all the considered tasks, paving the way for future research on DL for the assisted diagnosis of COVID-19 from LUS data.
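The abstract mentions a uninorm-based rule for aggregating frame-level scores into a single video-level score. As a rough illustration of how such an aggregation behaves, the sketch below uses a standard representable uninorm with neutral element e: probabilities above e reinforce each other, while probabilities below e pull the aggregate down. The specific generator and the default e = 0.5 are assumptions for illustration, not necessarily the exact operator used in the paper.

```python
# Minimal sketch of uninorm-based aggregation of per-frame probabilities
# into one video-level score (illustrative; not the paper's exact operator).
import numpy as np

def uninorm_aggregate(frame_probs, e=0.5, eps=1e-6):
    """Aggregate probabilities in (0, 1) with a representable uninorm.

    Values above the neutral element e act disjunctively (reinforce the
    score), values below e act conjunctively (lower the score).
    """
    p = np.clip(np.asarray(frame_probs, dtype=float), eps, 1.0 - eps)
    # Additive generator h(x) = ln( x * (1 - e) / (e * (1 - x)) ), h(e) = 0.
    h = np.log(p * (1.0 - e) / (e * (1.0 - p)))
    s = h.sum()
    # Inverse generator maps the summed value back into (0, 1).
    return 1.0 / (1.0 + ((1.0 - e) / e) * np.exp(-s))

video_score = uninorm_aggregate([0.2, 0.7, 0.9])  # single score for the clip
print(video_score)
```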
EXISTING SYSTEM:
Interstitial lung disease: when the state of aeration decreases due to the accumulation of fluid or cells, the ultrasound beam travels deeper in the lung. This phenomenon creates vertical reverberation lines known as B-lines. Hyperechoic B-lines start at the pleural line, extend to the bottom of the image without fading, and move with lung sliding. The lower the air content in the lung, the more B-lines are visible in the image. Multiple B-lines in certain regions indicate lung interstitial syndrome. This paper provides a review of contemporary methods for both the segmentation and classification of LUS and is organized as follows: the next section reviews the manual diagnostic techniques currently employed around the world.
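As a naive illustration of the B-line description above, the sketch below flags image columns below the pleural line that are markedly brighter than the surrounding tissue, which is roughly how vertical hyperechoic artefacts appear in a frame. The pleural-line row and brightness factor are assumed values; this heuristic is only illustrative and is not the detection method used in the paper.

```python
# Crude B-line proxy: count bright vertical columns below the pleural line.
import numpy as np

def count_bline_columns(frame: np.ndarray, pleural_row: int = 60,
                        brightness_factor: float = 1.5) -> int:
    """Count columns whose mean intensity below the assumed pleural line
    exceeds brightness_factor times the regional mean intensity."""
    region = frame[pleural_row:, :].astype(float)   # tissue below the pleura
    col_mean = region.mean(axis=0)                  # per-column brightness
    threshold = brightness_factor * region.mean()   # global reference level
    return int((col_mean > threshold).sum())

demo = np.random.randint(0, 255, size=(256, 256))   # stand-in for a LUS frame
print(count_bline_columns(demo))
```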
DISADVANTAGE:
LUS offers a supplementary screening tool available in any healthcare center and allows a first screening to discriminate between low- and high-risk patients:
- Routine LUS is much easier to implement as a screening tool than other imaging methods, so earlier and more frequent lung examinations can be offered, even directly in COVID-19 assessment centers outside of hospitals.
- In the absence of sufficient COVID-19 testing kits, LUS can assist in diagnosing patients. LUS images can be obtained directly at the bedside, reducing the number of health workers potentially exposed to the patient. Currently, the use of chest X-ray or CT requires the patient to be moved to the radiology unit, potentially exposing several people to the virus. With LUS, the same clinician can visit the patient and perform all required tests. This is a primary point, since recent data show that in severely affected countries about 3–10% of infected patients are health workers, worsening the serious problem of health professionals' shortage Buonsenso et al. (2020).
- Discharged patients can be actively monitored with LUS imaging directly in their homes. This is crucial in long-term care homes and in regions where hospital bed capacity is saturated.
- Portable ultrasound machines are easier to sterilize than CT scanners due to their smaller surface area.
- LUS is radiation free and can be performed every 12–24 h, allowing close monitoring of clinical conditions and detection of very early changes in lung involvement.
- LUS can easily be performed in the outpatient setting by general practitioners, which also allows better pre-triage to determine which patients should be sent to a hospital.
- Lastly, LUS is inexpensive and can be easily deployed in resource-deprived settings. In case of a massive spread, traditional imaging such as CT is much more difficult to perform than LUS.
PROPOSED SYSTEM:
VGG16 is a convolutional neural network model proposed by K. Simonyan and A. Zisserman of the University of Oxford in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition". The model achieves 92.7% top-5 test accuracy on ImageNet, a dataset of over 14 million images belonging to 1000 classes. It was one of the well-known models submitted to ILSVRC-2014. It improves over AlexNet by replacing the large kernel-sized filters (11×11 and 5×5 in the first and second convolutional layers, respectively) with stacks of consecutive 3×3 kernel-sized filters. VGG16 was trained for weeks on NVIDIA Titan Black GPUs.
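As a hedged sketch of how a VGG16 backbone could be adapted to LUS frame scoring, the code below swaps the 1000-way ImageNet head for a small classification head. The 4-class severity scale, the 224×224 input size, and the use of torchvision are assumptions for illustration, not details taken from the paper.

```python
# Sketch: adapting a VGG16 backbone to 4-class LUS frame scoring (assumed 0-3
# severity scale). Uses torchvision; ImageNet weights can be loaded by passing
# weights=models.VGG16_Weights.IMAGENET1K_V1 (torchvision >= 0.13).
import torch
import torch.nn as nn
from torchvision import models

NUM_SEVERITY_CLASSES = 4  # assumed number of severity classes

def build_vgg16_classifier(num_classes: int = NUM_SEVERITY_CLASSES) -> nn.Module:
    backbone = models.vgg16(weights=None)  # or pretrained ImageNet weights
    # Replace the final 1000-way ImageNet layer with a num_classes head.
    in_features = backbone.classifier[-1].in_features
    backbone.classifier[-1] = nn.Linear(in_features, num_classes)
    return backbone

model = build_vgg16_classifier()
frames = torch.randn(8, 3, 224, 224)   # dummy batch of LUS frames
logits = model(frames)                 # shape: (8, NUM_SEVERITY_CLASSES)
print(logits.shape)
```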
ADVANTAGE:
In this paper, we present all results omitting the uninformative class, as it is not relevant for the analysis of differential-diagnosis performance and would bias the results, i.e., lead to a higher classification accuracy because recall and precision for the uninformative class are almost 100% (please refer to Appendix C.1 for results including uninformative data). All hidden layers are equipped with the rectification (ReLU) non-linearity. None of the networks (except for one) contain Local Response Normalisation (LRN): such normalization does not improve performance on the ILSVRC dataset, but increases memory consumption and computation time. VGG16 significantly outperforms the previous generation of models from the ILSVRC-2012 and ILSVRC-2013 competitions. The VGG16 result is also competitive with the classification task winner (GoogLeNet, with 6.7% error) and substantially outperforms the ILSVRC-2013 winning submission Clarifai, which achieved 11.2% with external training data and 11.7% without it. Concerning single-net performance, the VGG16 architecture achieves the best result (7.0% test error), outperforming a single GoogLeNet by 0.9%.
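The evaluation described above drops the uninformative class before computing metrics. A minimal sketch of that filtering step is shown below; the label encoding (uninformative = -1) and the toy arrays are assumptions for illustration, not the dataset's actual values.

```python
# Sketch: compute per-class precision/recall after discarding frames whose
# ground-truth label is "uninformative" (assumed encoding: -1).
import numpy as np
from sklearn.metrics import classification_report

UNINFORMATIVE = -1  # assumed label for uninformative frames

y_true = np.array([0, 1, UNINFORMATIVE, 2, 3, UNINFORMATIVE, 1])
y_pred = np.array([0, 1, UNINFORMATIVE, 2, 1, UNINFORMATIVE, 1])

keep = y_true != UNINFORMATIVE          # drop uninformative ground truth
print(classification_report(y_true[keep], y_pred[keep], digits=3))
```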
