Friday, September 8

IEEE IUS 2023 includes a satellite symposium on artificial intelligence (AI) in ultrasound, recognizing the increasing use of AI in ultrasound imaging. The symposium will cover fundamental aspects of AI, such as image formation and analysis, as well as clinical applications of AI in ultrasound imaging across organ systems from head to toe, with use-case scenarios presented from the perspective of clinicians. The emphasis will be on educational reviews.

Schedule
Each talk is allotted 30 minutes, plus 10 minutes for questions.

8:30 - Muyinatu Bell
9:10 - Hassan Rivaz
9:50 - Coffee Break
10:20 - Olivier Bernard
11:00 - Bryce Eakin
12:20 - Lunch
13:40 - Ruud van Sloun
14:20 - James Y. Zou
15:00 - Coffee Break
15:30 - Hervé Delingette
16:10 - An Tang
16:50 - End

Speakers

  • Ultrasound Image Formation in the Deep Learning Age

    The success of diagnostic and interventional medical procedures is deeply rooted in the ability of modern imaging systems to deliver clear and interpretable information. In ultrasound and photoacoustic imaging systems in particular, the beamforming process applied to raw sensor data is often the first line of software defense against poor-quality images. Yet, with today's state-of-the-art beamformers, ultrasound and photoacoustic images remain challenged by channel noise, reflection artifacts, and acoustic clutter, which combine to complicate segmentation tasks and confuse overall image interpretation. These challenges exist because traditional beamforming and image formation steps are based on flawed assumptions in the presence of significant inter- and intrapatient variations.

    In this talk, I will introduce the PULSE Lab’s novel alternative to beamforming, which improves ultrasound and photoacoustic image quality by learning from the physics of sound wave propagation. We replace traditional beamforming steps with deep neural networks that only display segmented details, structures, and physical properties of interest. Initial applications to be discussed include image-guided interventions, breast mass detection, COVID-19 feature segmentation, and non-invasive photoacoustic imaging. I will additionally describe new resources for the entire community to standardize and accelerate research at the intersection of ultrasound beamforming and deep learning. 
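
    For readers unfamiliar with the conventional pipeline this talk proposes to replace, the sketch below shows a minimal delay-and-sum (DAS) beamformer in NumPy. It is an illustrative baseline only: the plane-wave transmit geometry, element positions, sampling rate, and speed of sound are assumptions made for the example, not parameters from the talk.

    import numpy as np

    def delay_and_sum(channel_data, element_x, x, z, fs, c=1540.0):
        """Beamform one pixel at (x, z) from single plane-wave RF data.

        channel_data: (n_elements, n_samples) raw RF channel data
        element_x:    (n_elements,) lateral element positions [m]
        x, z:         pixel coordinates [m] (z is depth)
        fs:           sampling frequency [Hz]
        c:            assumed speed of sound [m/s]
        """
        t_tx = z / c                                     # plane wave straight down
        t_rx = np.sqrt(z**2 + (element_x - x) ** 2) / c  # echo back to each element
        idx = np.rint((t_tx + t_rx) * fs).astype(int)
        idx = np.clip(idx, 0, channel_data.shape[1] - 1)
        # Coherently sum the time-aligned samples across the aperture.
        return channel_data[np.arange(len(element_x)), idx].sum()

    # Toy usage with random data standing in for real RF channels:
    rng = np.random.default_rng(0)
    rf = rng.standard_normal((128, 2048))
    elems = np.linspace(-19e-3, 19e-3, 128)
    pixel = delay_and_sum(rf, elems, x=0.0, z=30e-3, fs=40e6)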

  • AI-Powered Ultrasound: Physically-Inspired, Semi-Supervised, and Self-Supervised Learning for Improved Training Efficiency and Robustness to Out-of-Distribution Data

    This talk focuses on developing AI-driven image analysis techniques that reveal otherwise hidden information in clinical ultrasound signals. Ultrasound is one of the most widely used imaging modalities because of its low cost and ease of use. However, it has two main drawbacks. First, raw ultrasound data is not suitable for visualization and is therefore converted to the familiar grey-scale images, a conversion that discards most of the information in the signal. Second, these grey-scale images are hard to interpret because they are noisy and collected at oblique angles. In this talk, we tackle these issues by developing techniques that extract clinically important information, such as tissue elasticity and attenuation, from the complex raw ultrasound signals and register the results to other modalities, such as Magnetic Resonance Imaging (MRI), to aid interpretation.

    Real ultrasound data lacks ground truth for training deep neural networks. While it is feasible to train on simulated ultrasound images, simulations are oversimplified, which results in domain shift. Even if we estimate ground truth within real ultrasound data, variations in imaging settings introduce another layer of domain shift. Meanwhile, our databases frequently include thousands of genuine ultrasound images that lack associated ground truth. In this presentation, I will introduce methodologies centered on self-supervision and unsupervised learning, aimed at harnessing the potential of these unlabeled datasets.
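
    As a concrete illustration of the self-supervised theme, the PyTorch sketch below implements a generic masked-reconstruction pretext task on unlabeled patches: hide random pixels, reconstruct them, and score the model only on the hidden region. The tiny autoencoder, masking ratio, and random data are placeholder assumptions, not the methods presented in the talk.

    import torch
    import torch.nn as nn

    class TinyAutoencoder(nn.Module):
        """Toy convolutional autoencoder standing in for a real backbone."""
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2))

        def forward(self, x):
            return self.dec(self.enc(x))

    def masked_reconstruction_step(model, images, optimizer, mask_ratio=0.5):
        """One self-supervised step: hide random pixels, reconstruct them."""
        mask = (torch.rand_like(images) > mask_ratio).float()
        recon = model(images * mask)
        # Score the model only on the pixels it never saw.
        loss = ((recon - images) ** 2 * (1 - mask)).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    model = TinyAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    batch = torch.randn(8, 1, 64, 64)   # stand-in for unlabeled B-mode patches
    print(masked_reconstruction_step(model, batch, opt))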

  • Physical simulations for deep learning: applications to image formation and motion estimation

    In recent years, artificial intelligence (AI) techniques have shown remarkable success in medical imaging across various modalities, including echocardiography. While supervised learning approaches currently stand as the most effective methods, they rely heavily on ground truth usually obtained from manual annotations, which poses challenges across diverse applications. In my presentation, I will introduce a comprehensive pipeline for generating realistic simulations of echocardiographic image sequences to be used as inputs for supervised learning algorithms. To demonstrate the potential of these synthetic datasets, I will showcase their application in two areas: tissue motion estimation in conventional imaging and ultrafast cardiac image reconstruction. These realistic simulations hold great promise for improving the performance of AI algorithms by reducing their dependence on manually annotated data. Such synthetic datasets can enhance the robustness and efficacy of supervised learning techniques, opening doors to novel opportunities in echocardiographic image analysis and reconstruction.
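
    The idea of letting a simulator supply ground truth can be sketched in a few lines: generate a texture, deform it with a known displacement field, and keep that field as the training label for a motion-estimation network. The NumPy/SciPy example below is a toy stand-in for the physics-based pipeline described in the talk; the speckle model and smooth random motion are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def simulate_training_pair(shape=(128, 128), max_disp=2.0, seed=0):
        """Build one supervised sample: (frame1, frame2, true displacement).

        A smoothed random texture stands in for speckle, and a smooth random
        motion field (in pixels) serves as the exact ground-truth label.
        """
        rng = np.random.default_rng(seed)
        frame1 = gaussian_filter(rng.standard_normal(shape), sigma=1.0)

        # Smooth random displacement field (dy, dx): the training label.
        disp = np.stack([gaussian_filter(rng.standard_normal(shape), sigma=8.0)
                         for _ in range(2)])
        disp *= max_disp / (np.abs(disp).max() + 1e-8)

        # Warp frame1 by the known field to obtain the second frame.
        yy, xx = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]),
                             indexing="ij")
        frame2 = map_coordinates(frame1, [yy + disp[0], xx + disp[1]], order=1)
        return frame1, frame2, disp

    f1, f2, gt_flow = simulate_training_pair()   # network maps (f1, f2) -> gt_flow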

  • In Practice: An Industry Perspective on Applying AI in Ultrasound

    The state of AI technology has evolved dramatically over the past several years. Tools such as ChatGPT take the spotlight, but a vast array of less visible applications has quietly revolutionized seemingly every facet of our daily lives. Despite this, deployment of modern AI tools in support of ultrasound practitioners in the field is comparatively still in its infancy. This talk will delve into major technical and non-technical obstacles to AI deployment in ultrasonography, including regulatory considerations (focused on the United States), and discuss how projects can be engineered to overcome them. It will conclude by touching on lessons that have enabled the successful deployment of AI models into clinical practice with Butterfly ultrasound devices around the world.

  • Driving ultrasound imaging by Deep Generative AI

    Generative modeling has been widely theorized as the main driver behind decision making in autonomous intelligent agents. This is not surprising: the ability to accurately infer scene states from imperfect observations and to predict plausible hypothetical futures is powerful. In this talk, we will discuss how generative modeling, and in particular deep generative modeling, can play a similar role in future ultrasound imaging. Starting from the Bayesian brain hypothesis, we integrate state-of-the-art deep generative modeling methods, such as score-based diffusion models, to derive inference engines for image formation and reconstruction, with applications in echocardiography. Finally, we will show how generative predictions about plausible futures can guide decision making for adaptive image acquisition based on information-theoretic principles.
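
    As a schematic of the machinery involved, the sketch below shows an annealed Langevin sampler of the kind used with score-based diffusion models. The score_net placeholder, noise schedule, and step-size rule are illustrative assumptions, not the talk's method; for image reconstruction, a measurement-consistency gradient would be added to each update.

    import math
    import torch

    def reverse_diffusion(score_net, shape, n_steps=100,
                          sigma_max=1.0, sigma_min=0.01):
        """Schematic annealed Langevin sampler for a score-based model.

        score_net(x, sigma) is assumed to approximate the score
        grad_x log p(x; sigma); here it is a caller-supplied placeholder.
        """
        sigmas = torch.exp(torch.linspace(math.log(sigma_max),
                                          math.log(sigma_min), n_steps))
        x = torch.randn(shape) * sigma_max           # start from pure noise
        for sigma in sigmas:
            step = 0.1 * sigma ** 2                  # noise-level-scaled step size
            # Langevin update: drift along the score, inject fresh noise.
            # Conditioning on measurements would add a data-consistency
            # gradient to the score term here.
            x = x + step * score_net(x, sigma) \
                  + torch.sqrt(2 * step) * torch.randn_like(x)
        return x

    # Toy score for a unit Gaussian prior: score(x; sigma) = -x / (1 + sigma^2).
    toy_score = lambda x, sigma: -x / (1.0 + sigma ** 2)
    sample = reverse_diffusion(toy_score, shape=(1, 1, 64, 64))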

  • EchoNet: evaluating cardiac ultrasound AI with a randomized clinical trial

    I will present EchoNet, an AI model for assessing ejection fraction from cardiac ultrasound videos. In particular, I will share our recent experience conducting a blinded, randomized clinical trial evaluating the efficacy of EchoNet in the clinical workflow. I will conclude by sharing lessons we learned about how to make medical AI broadly more trustworthy and user-friendly.

  • A view on the 3 Labours of AI in US imaging: Data Anonymization, Annotation, and Pathology Detection

    I will present our recent advances in detecting pathologies such as focal liver lesions in abdominal echographic images. While much attention is devoted to solving narrow clinical tasks in computer-aided screening or diagnosis, those tasks require the creation of potentially large annotated databases of US images. I will present the tools and approaches that we have developed to anonymize and standardize US images and to partially annotate them with radiological reports. Finally, I will present the localization and classification performance obtained on a static US database for detecting focal liver lesions.
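
    As an illustration of the anonymization labour, the sketch below uses pydicom to blank common metadata tags and mask the banner region where scanners often burn patient text into the pixels. The tag list and fixed crop height are illustrative assumptions; a production pipeline would follow the full DICOM de-identification profile and locate burnt-in text automatically.

    import pydicom

    # Tags that commonly carry protected health information; a real pipeline
    # would follow the full DICOM PS3.15 de-identification profile.
    PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                "InstitutionName", "ReferringPhysicianName"]

    def anonymize(in_path, out_path, header_rows=60):
        """Blank PHI metadata and mask burnt-in text at the top of the image.

        header_rows is an illustrative guess at the banner height; assumes
        uncompressed pixel data.
        """
        ds = pydicom.dcmread(in_path)
        for tag in PHI_TAGS:
            if tag in ds:
                setattr(ds, tag, "")
        pixels = ds.pixel_array.copy()
        pixels[:header_rows, :] = 0       # black out the banner region
        ds.PixelData = pixels.tobytes()
        ds.save_as(out_path)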

  • Imaging of diffuse liver disease and focal liver lesions with AI

    Ultrasound is frequently used to assess 1) diffuse liver disease and 2) focal liver lesions. This educational lecture will be presented from the perspective of a radiologist specialized in liver imaging. For each topic, we will discuss the clinical framework, investigation workflow, disease classification, and potential role of AI in liver ultrasound based on recent publications. For chronic liver disease, we will discuss AI-based grading of liver fat, inflammation, and fibrosis on ultrasound. For focal liver lesions, we will discuss AI-based detection, segmentation, and classification of these lesions.

    Learning Objectives
    1. Discuss the investigation workflow in liver imaging.
    2. Explain the approach to classification of diffuse liver disease and focal liver lesions.
    3. Recognize the role of AI for detection, segmentation, and classification of liver disease.