Towards Computational Acoustic Cameras: Neural Deconvolution and Rendering for Synthetic Aperture Sonar, May 27, 2024

Suren Jayasuriya

Ikoma : Nara Institute of Science and Technology, 2024.5

Lecture Archive
Contents Intro.

Acoustic imaging leverages sound to form visual products, with applications including biomedical ultrasound and sonar. In particular, synthetic aperture sonar (SAS) has been developed to generate high-resolution imagery of both in-air and underwater environments. In this talk, we explore the application of implicit neural representations and neural rendering to SAS imaging and highlight how such techniques can enhance acoustic imaging for both 2D and 3D reconstructions. Specifically, we discuss challenges of neural rendering applied to acoustic imaging, especially when handling the phase of reflected acoustic waves, which is critical for high spatial resolution in beamforming. We present two recent works: enhanced 2D circular SAS deconvolution in air, and a general neural rendering framework for 3D volumetric SAS. This research is a starting point for realizing the next generation of acoustic cameras for a variety of future applications in air and water environments.
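The abstract notes that preserving the phase of reflected acoustic waves is critical for high spatial resolution in beamforming. The following minimal sketch (not from the talk; all parameters are illustrative) shows why: in delay-and-sum backprojection over a circular aperture, the complex returns add coherently only at the true scatterer position, where the re-applied round-trip phases align.

```python
import numpy as np

# Illustrative delay-and-sum (backprojection) sketch for a monostatic
# sonar with a single point scatterer, imaged from sensor positions on
# a circular aperture (as in circular SAS). Hypothetical parameters.

c = 343.0               # speed of sound in air (m/s)
f = 40e3                # transmit frequency (Hz)
k = 2 * np.pi * f / c   # wavenumber

scatterer = np.array([0.0, 0.5])  # true scatterer position (m)

# Sensor positions on a circle of radius 0.3 m.
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
sensors = 0.3 * np.c_[np.cos(angles), np.sin(angles)]

# Complex return at each sensor: phase accumulated over round trip 2*r.
r = np.linalg.norm(sensors - scatterer, axis=1)
returns = np.exp(-1j * k * 2 * r)

def image_pixel(p):
    """Backproject: re-apply each round-trip phase and sum coherently."""
    d = np.linalg.norm(sensors - p, axis=1)
    return np.abs(np.sum(returns * np.exp(1j * k * 2 * d)))

on_target = image_pixel(scatterer)               # phases align: large sum
off_target = image_pixel(np.array([0.2, 0.1]))   # phases cancel: small sum
print(on_target, off_target)
```

At the scatterer, each of the 64 terms is exactly 1, so the coherent sum peaks at 64; elsewhere the phases are effectively random and largely cancel. Discarding phase (e.g., summing magnitudes only) would flatten this contrast and destroy the spatial resolution the abstract refers to.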

Volume No.

May 27, 2024
No.

1

Material ID

P000011

Details

Publication year

2024

Form

Digitized video material (1 hour, 16 minutes, 14 seconds)

Series title

Division of Information Science Colloquium ; 2024 academic year

Note

Speaker affiliation: Arizona State University

Lecture date: May 27, 2024

Lecture venue: AI Large Lecture Hall, AI Inc. Seminar Hall (L1)

Country of publication

Japan

Title language

English (eng)

Language of texts

English (eng)

Author information

Suren Jayasuriya