THE 37TH ANNUAL AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE
Hosted by the Association for the Advancement of Artificial Intelligence (AAAI)
February 7-14, 2023, at the Walter E. Washington Convention Center in Washington, D.C.
AAAI 2023 promotes AI research and scientific exchange among researchers, practitioners, scientists, students, and engineers across the field. It is an innovative technical conference featuring paper presentations, workshops, tutorials, exhibit programs, and more. Content is selected according to the highest standards and ties in with this year’s theme of creating collaborative bridges within and beyond AI.
Kitware is pleased to be actively involved in AAAI this year. Our paper, “xaitk-saliency: An Open Source Explainable AI Toolkit for Saliency,” will be presented in the Innovative Applications of AI (IAAI) track. In addition, we co-organized one of the workshops and will be participating in the AI job fair.

Our activities this year focus on ethical and explainable AI. As more people integrate AI into their systems, products, and processes, it is essential that users have a solid foundation of justified, appropriate trust in the technology. We believe you must understand the factors an AI considers when making decisions in order to feel confident in those decisions. That’s why our team has developed powerful tools, such as the Explainable AI Toolkit (XAITK), for analyzing AI models and datasets. XAITK is used to explore, quantify, and monitor the behavior of deep learning systems.

Kitware also understands the need to consider the ethical concerns, impacts, and risks of using AI. That’s why we are developing methods to understand, formulate, and test ethical reasoning algorithms for semi-autonomous applications. To learn more about Kitware’s AI expertise and how you can leverage our open source tools, contact our computer vision team.
Kitware’s Activities and Involvement
We are proud to have our paper accepted at AAAI 2023 in the IAAI track. “xaitk-saliency: An Open Source Explainable AI Toolkit for Saliency,” written by Brian Hu, Paul Tunison, Brandon RichardWebster, and Anthony Hoogs (all from Kitware), introduces the open source xaitk-saliency package, an XAI framework and toolkit for saliency. While AI algorithms have advanced in fields such as computer vision, they are often viewed as “black boxes” that cannot easily explain how they arrived at their final output decisions. Saliency maps are a common approach to explainable AI (XAI): they indicate the input features an algorithm attended to during its decision-making process. Our paper demonstrates the toolkit’s modular, flexible design by highlighting two example use cases for saliency maps: (1) object detection model comparison and (2) doppelganger saliency for person re-identification. We also show how the xaitk-saliency package can be paired with visualization tools to support interactive exploration of saliency maps. Our results suggest that saliency maps can play a critical role in the verification and validation of AI models, supporting their trusted use and deployment. The code is publicly available at https://github.com/xaitk/xaitk-saliency. The paper presentation will take place on Friday, February 10, from 3:45 to 5:00 PM (Track: Deployed, 5217).
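To give a concrete sense of the perturbation-based, black-box saliency techniques that xaitk-saliency implements behind a common interface, here is a minimal occlusion-based saliency sketch in plain NumPy. Note that this illustrates the underlying idea only, not the package’s actual API; the `predict` callable and its batch-of-probabilities signature are assumptions made for the example.

```python
import numpy as np

def occlusion_saliency(image, predict, window=16, stride=8, target_class=0):
    """Estimate a saliency map by sliding an occluding patch over the image
    and recording how much the model's confidence in `target_class` drops.

    `image` is an HxWxC float array; `predict` (assumed here) maps a batch
    of images to class-probability vectors of shape [N, num_classes].
    """
    h, w = image.shape[:2]
    base_score = predict(image[None])[0, target_class]
    saliency = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)

    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            occluded = image.copy()
            # Zero out one patch and re-run the model on the perturbed image.
            occluded[y:y + window, x:x + window] = 0.0
            score = predict(occluded[None])[0, target_class]
            # The confidence drop measures how important the region was.
            saliency[y:y + window, x:x + window] += base_score - score
            counts[y:y + window, x:x + window] += 1.0

    # Average overlapping windows so stride < window does not inflate values.
    return saliency / np.maximum(counts, 1.0)
```

Regions whose occlusion causes a large confidence drop receive high saliency values; visualizing the returned map as a heatmap overlaid on the input image is the kind of interactive exploration the paper describes.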
In addition to this paper, Kitware is co-organizing the following workshop:
Uncertainty Reasoning and Quantification in Decision Making
Monday, February 13, Half-day (2-6 PM)
Workshop Chairs:
- Xujiang Zhao (NEC Laboratories America)
- Chen Zhao (Kitware Inc.)
- Feng Chen (The University of Texas at Dallas)
- Jin-Hee Cho (Virginia Tech)
- Haifeng Chen (NEC Laboratories America)
Deep Neural Networks (DNNs) have been very successful in applications such as image and video analysis, natural language processing (NLP), recommendation systems, and drug discovery. However, it is inherently difficult for DNNs to find robust and trustworthy solutions to real-world problems, and they can overlook the uncertainties that come with operating in the real world, which can lead to unnecessary risk. For example, an autonomous car may misclassify a human on the road and delay applying the brakes, and a deep learning-based medical assistant may misdiagnose cancer as a benign tumor. As AI is used more frequently in real-world applications, this uncertainty has begun attracting attention in both industry and academia, particularly for decision-making problems such as autonomous driving and diagnosis systems. This workshop will explore the wave of research at the intersection of uncertainty reasoning and quantification in data mining and machine learning.
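As a small illustration of the kind of uncertainty quantification the workshop covers (this sketch is our own example, not material from the workshop), the snippet below decomposes an ensemble’s predictive uncertainty into aleatoric (data) and epistemic (model) components via predictive entropy; the two-class probability vectors are made-up inputs.

```python
import numpy as np

def ensemble_uncertainty(prob_stack):
    """Decompose predictive uncertainty for an ensemble.

    `prob_stack` has shape [num_models, num_classes]: each row is one
    model's predicted class-probability vector for the same input.
    Returns (total, aleatoric, epistemic) uncertainty in nats.
    """
    eps = 1e-12  # guard against log(0)
    mean_p = prob_stack.mean(axis=0)
    # Total uncertainty: entropy of the ensemble-averaged prediction.
    total = -np.sum(mean_p * np.log(mean_p + eps))
    # Aleatoric uncertainty: average entropy of each ensemble member.
    aleatoric = -np.mean(np.sum(prob_stack * np.log(prob_stack + eps), axis=1))
    # Epistemic uncertainty: mutual information = total - aleatoric.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Three models that agree on a confident prediction vs. three that disagree.
agree = np.array([[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]])
disagree = np.array([[0.90, 0.10], [0.10, 0.90], [0.50, 0.50]])
print(ensemble_uncertainty(agree))     # low epistemic uncertainty
print(ensemble_uncertainty(disagree))  # high epistemic uncertainty
```

High epistemic uncertainty, as in the disagreeing ensemble, is exactly the signal a safety-critical system (such as an autonomous car or diagnostic assistant) could use to defer to a human rather than act on an unreliable prediction.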
Ready to join the Kitware team?
Kitware is looking for passionate people to join our Computer Vision Team. We are leaders in creating cutting-edge algorithms and software for automated image and video analysis. Our solutions embrace deep learning and add measurable value to government agencies, commercial organizations, and academic institutions worldwide. View a list of our open positions, and apply today!
Physical Event
Walter E. Washington Convention Center
801 Mount Vernon Place NW, Washington, D.C.