2024 MSS Joint Conference

October 4, 2024
MSS Joint Conference, October 28-November 1, 2024, in Laurel, Maryland

The 2024 MSS Joint (BAMS and NSSDF) Conference offers a secure platform for the defense, intelligence, and homeland security communities to explore cutting-edge advancements in sensor technology and data fusion. The BAMS committee will focus on innovations in acoustic, seismic, magnetic, and electric-field sensing and their applications, while the NSSDF committee will delve into integrated information operations and multisource data fusion. Attendees will have the opportunity to engage with leading experts, discuss evolving requirements, and explore new applications that enhance national security.

Kitware is proud to bring its extensive expertise in Large Pre-Trained Models (LPTMs) and ethical and responsible AI to the conference. As sponsors of MSS, we are committed to advancing research and fostering collaboration within the defense and security communities. Our contributions this year include a paper presentation at BAMS, two tutorials at NSSDF, and active participation on the NSSDF planning committee. We look forward to engaging with fellow experts and sharing our latest advancements in leveraging AI to enhance sensor fusion, improve data processing capabilities, and ensure the ethical application of technology in support of national security and defense.

For more information, please contact our computer vision team.

Kitware’s Activities and Involvement

Large Pretrained Models for Multi-modal Data Annotation in FuelAI
Paper
Authors: Josh Anderson, Albert Reed, Bryan Barrent, Caitlin Genna, Dennis Bowen, Sean Fee, Morgan Bishop, Daniel Davila

This paper describes the development of multimodal data curation and annotation tools using Large Pre-Trained Models (LPTMs) to accelerate the machine learning (ML) development lifecycle for defense applications. These tools enable analysts and ML practitioners to quickly find relevant data samples within large un-annotated datasets and conduct open-ended text searches across multiple modalities like image and audio using models like CLIP and BEATs. The paper also introduces a rapid data annotation tool utilizing the zero-shot segmentation model, Segment Anything (SAM), for automatic object labeling and integration into tracking algorithms.
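At a high level, the retrieval approach described above works by mapping both a text query and unlabeled samples into a shared embedding space with a model such as CLIP, then ranking samples by cosine similarity to the query. The sketch below illustrates only that ranking step, using random vectors as stand-ins for real CLIP embeddings; the function names are illustrative and not part of Kitware's actual tooling.

```python
import numpy as np

def normalize(v):
    """L2-normalize embeddings along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def rank_by_similarity(query_emb, sample_embs, top_k=3):
    """Rank dataset samples by cosine similarity to a query embedding."""
    sims = normalize(sample_embs) @ normalize(query_emb)
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]

# Stand-ins for embeddings a model like CLIP would produce
rng = np.random.default_rng(0)
sample_embs = rng.normal(size=(1000, 512))             # un-annotated dataset
query_emb = sample_embs[42] + 0.05 * rng.normal(size=512)  # a "text query" near sample 42

top_idx, top_sims = rank_by_similarity(query_emb, sample_embs)
print(top_idx[0])  # sample 42 should rank first
```

In a real pipeline, `sample_embs` would be precomputed once over the whole dataset, so each new analyst query costs only a single matrix-vector product.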

Fusing Visual Data, Text, and Other Modalities in Large Pre-Trained Models
Tutorial
Presenter: Daniel Davila

This tutorial will explore cutting-edge techniques for integrating diverse data types to unlock new capabilities in machine learning. Participants will learn how to leverage LPTMs like CLIP and BEATs to connect images, text, audio, and more, demonstrating the transformative potential of multimodal AI. We will highlight practical applications that accelerate data analysis, enhance decision-making, and empower users to tackle complex challenges with modern AI solutions.
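One practical pattern in this space is zero-shot labeling: embed a text prompt for each candidate class with a model like CLIP, then assign each image or audio sample the label whose prompt embedding is closest. The minimal sketch below again uses synthetic vectors in place of real model embeddings, and the class names in the comment are hypothetical examples.

```python
import numpy as np

def normalize(v):
    """L2-normalize embeddings along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def zero_shot_labels(sample_embs, prompt_embs):
    """Assign each sample the index of its most similar text prompt."""
    sims = normalize(sample_embs) @ normalize(prompt_embs).T
    return sims.argmax(axis=1)

rng = np.random.default_rng(1)
prompt_embs = rng.normal(size=(3, 256))  # e.g. prompts for "vehicle", "person", "animal"
# Synthetic samples clustered tightly around their class's prompt embedding
true_labels = rng.integers(0, 3, size=20)
sample_embs = prompt_embs[true_labels] + 0.1 * rng.normal(size=(20, 256))

pred = zero_shot_labels(sample_embs, prompt_embs)
print((pred == true_labels).mean())
```

Because the only trainable component is the pretrained embedding model, new classes can be added by writing a new prompt rather than collecting and annotating new training data.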

Ethical and Responsible AI for Defense Applications
Tutorial
Presenters: Brian Hu and Anthony Hoogs

This tutorial will provide a comprehensive exploration of what it means to develop and deploy AI technologies with integrity in today’s rapidly evolving landscape. Participants will learn about frameworks and best practices for defining and operationalizing ethical principles in AI, including fairness, accountability, transparency, and mitigating unintended consequences. We will showcase strategies for building AI systems that not only meet mission objectives but also uphold the highest standards of ethical responsibility.

Computer Vision at Kitware

Kitware is a leader in leveraging artificial intelligence and machine learning for computer vision. Our technical areas of focus include:

  • Generative AI
  • Multimodal large language models
  • Deep learning
  • Dataset collection and annotation
  • Interactive Do-It-Yourself AI
  • Explainable and ethical AI
  • Object detection, classification, and tracking
  • Complex activity, event, and threat detection
  • Cyber-physical systems
  • Disinformation detection
  • 3D vision
  • Super-resolution and enhancement
  • Semantic segmentation
  • Computational imaging

Kitware continuously explores and participates in other research and development areas as well. We are always ready to apply our technologies and tools across all domains, from undersea to space, to meet our customers’ needs.

We recognize the value of leveraging our advanced computer vision and deep learning capabilities to support academia, industry, and the DoD and intelligence communities. We work with various government agencies, such as the Defense Advanced Research Projects Agency (DARPA), the Air Force Research Laboratory (AFRL), the Office of Naval Research (ONR), the Intelligence Advanced Research Projects Activity (IARPA), the National Geospatial-Intelligence Agency (NGA), the U.S. Army, and the U.S. Air Force. We also partner with prestigious academic institutions on government contracts.

Kitware can help you solve your challenging computer vision problems using our software R&D expertise. Contact our team to learn more about how we can partner with you.
