Best Practices in Responsible AI
As artificial intelligence continues to play an expanding role in critical applications, ensuring its ethical deployment is more important than ever. Join Kitware for a webinar on best practices for responsible AI. We will share insights from the development of the open source Natural Robustness and Explainable AI Toolkits, built as part of the CDAO Joint AI Test Infrastructure Capability, and from over six years of research conducted across multiple DARPA programs, including Explainable AI, Urban Reconnaissance through Supervised Autonomy, In the Moment, and the upcoming Autonomy Standards and Ideals with Military Operational Values.
We will explore how AI can be designed and trained to be ethical by explaining its decisions, reasoning through human values and culture, and navigating morally ambiguous situations. Our experts will discuss the complexities of information fusion, such as dealing with observational uncertainty and missing data, and how these factors influence ethical decision-making in AI systems. Whether you are a researcher, developer, or policy maker, this webinar will provide practical strategies and lessons learned for integrating responsible AI practices into your own projects.
During this webinar, Kitware’s AI experts will:
- Explain how different organizations, including the DoD and Intelligence Community, are handling the ethical use of AI.
- Discuss the challenges and importance of rigorous testing of AI algorithms before deploying them into operational use.
- Share existing AI assurance tools to help you adopt responsible AI practices.
Join us to learn from industry experts about best practices for responsible AI!