Language-Guided Vision for Remote Sensing

Event Time

Originally Aired: Tuesday, May 7, 7:30 AM - 8:30 AM


Event Location

Location: Miami 1


Event Information

Title: Language-Guided Vision for Remote Sensing

Description:

Training Summary: Recent advances in large, multimodal image and language models enable object detection, classification, segmentation, and tracking using only text prompts and imagery. Tasks that once required large-scale databases of labeled image chips can now be performed without any labeled in-domain data by leveraging vision-language models pre-trained on massive foundational datasets. This training session will introduce you to the latest research in language-guided data annotation, object detection, classification, segmentation, and tracking in remote sensing. We will emphasize functional demonstrations to highlight what works now. We will also introduce the transformer architecture, explain how it has revolutionized language and vision AI, and show how it led to the cross-trained, multimodal systems that are now enabling new language-guided vision applications.
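As a rough illustration of the text-prompted, zero-shot approach described above: a vision-language model such as CLIP embeds an image and a set of text prompts into a shared space, and the prompt with the highest cosine similarity to the image embedding becomes the predicted label. The sketch below uses toy hand-written vectors in place of real encoder outputs; the prompt strings and vector values are illustrative assumptions, not outputs of any actual model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def zero_shot_classify(image_embedding, text_embeddings):
    """Return the prompt whose text embedding is most similar to the image embedding."""
    return max(text_embeddings,
               key=lambda label: cosine(image_embedding, text_embeddings[label]))

# Toy embeddings standing in for real image/text encoder outputs.
image_vec = [0.9, 0.1, 0.2]
prompts = {
    "a satellite image of a ship":   [0.8, 0.2, 0.1],
    "a satellite image of a runway": [0.1, 0.9, 0.3],
}
print(zero_shot_classify(image_vec, prompts))  # → a satellite image of a ship
```

In a real pipeline the toy vectors would come from the model's image and text encoders, and the same similarity ranking extends naturally to detection and segmentation prompts.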

Learning Outcomes: Attendees will:

  • Learn about, and see examples of, the latest multimodal image and language models and their application to remote sensing.
  • Run examples of these architectures during the tutorial through Google Colaboratory; no special hardware or software is necessary, just a web browser.
  • Understand the underlying architectures and datasets that make these models work.
  • Understand the limitations of a language-guided approach, particularly with respect to the kinds of training data required.
  • Appreciate future research directions for the field and how they may influence AI for remote sensing.
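The transformer architecture covered in the session is built around scaled dot-product attention. A minimal single-head sketch in plain Python, without learned projection matrices (an assumption made here to keep it self-contained), looks like this:

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V, in plain Python.

    queries, keys: lists of d-dimensional vectors; values: list of vectors
    (one per key). Returns one output vector per query.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        # Numerically stable softmax over the scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Weighted average of the value vectors.
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# A query aligned with the first key attends mostly to the first value.
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [0.0]])
```

Real transformer layers add learned query/key/value projections, multiple heads, and feed-forward blocks on top of this core operation.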

Prerequisites: A basic understanding of machine learning is assumed, including training data curation, model training and testing. No specific knowledge of language-guided vision models or transformers is expected. No specific software knowledge is required. This training is designed for remote sensing practitioners/analysts as well as scientists and engineers working on new remote sensing AI models.

Type: Training



