Generative AI is growing rapidly.
Thanks to Large Language Models (LLMs) like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, users are integrating Generative AI into their everyday lives.
Getting answers to simple questions, translating words or sentences, writing research papers, developing custom computer code, and even generating images or artwork are all possible thanks to Generative AI and LLMs.
It’s in our web browsers, emails, SaaS products, social media platforms, and file systems.
Even the Department of Defense (DOD) is navigating the rapid advancements of Generative AI.
With Generative AI accessible everywhere and users inputting business data into various GPTs, an important question arises for commercial enterprise organizations and the DOD:
Is there a way to make Generative AI more secure and trustworthy?
Generative AI and LLMs have significant limitations that open commercial enterprises and DOD organizations to risk.
Models can return copyright-protected data, rely on outdated data, or even hallucinate, giving inaccurate answers to mission-critical questions.
Users do not have visibility into why an AI model provided a specific response, nor do they have any way to trace the output of an AI model. This means it’s nearly impossible to ensure the data you receive is accurate, let alone validate the response provided by the model.
Source: https://www.darpa.mil/program/explainable-artificial-intelligence
Our research within the DOD has uncovered the following areas of risk with the deployment of Generative AI models:
Many organizations already have Generative AI technologies within their processes but lack guidelines or policies on how those capabilities should be used.
Take Microsoft Office and GitHub as examples.
Both products have Generative AI capabilities embedded in them. Users can access tools and wizards that accelerate their daily tasks but are unaware of how the model uses the data they provide.
This lack of visibility has several implications. For some organizations, it may not be an issue. For many commercial organizations and the DOD, however, it poses a significant safety concern.
So, how do you mitigate the risks associated with Generative AI?
Simple: implement an Explainable AI model.
Many organizations are investing in Explainable AI (XAI) to make their Generative AI models more secure.
Simply put, Explainable AI gives human users transparency and visibility into all aspects of the AI model. This allows them to understand and trust interactions with AI models, especially the model outputs.
Explainable AI covers seven key areas for understanding AI models.
Source: https://www.researchgate.net/publication/365954123_Explainable_AI_A_review_of_applications_to_neuroimaging_data
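Feature attribution is one common technique behind several of these areas. The sketch below uses the open-source shap library with a placeholder scikit-learn model to show how a single prediction can be broken down into per-feature contributions. The dataset, model, and library choice are illustrative assumptions, not tools named in this article.

```python
# A minimal feature-attribution sketch, assuming a tabular scikit-learn model.
# The dataset, model, and library choice (shap) are illustrative placeholders.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple stand-in model on a public dataset.
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explain individual predictions: each feature gets a signed contribution
# to the model's output, so a user can see why a prediction was made.
explainer = shap.Explainer(lambda rows: model.predict_proba(rows)[:, 1], X[:100])
explanation = explainer(X[:5])

# Show which features drove the first prediction, and by how much.
for name, contribution in zip(data.feature_names, explanation[0].values):
    print(f"{name}: {contribution:+.4f}")
```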
Additional research, such as Investigating Explainability of Generative AI for Code through Scenario-based Design by Jiao Sun et al., provides goals, frameworks, and live interviews with end users covering all aspects of developing, deploying, and operating trustworthy AI capabilities.
Users within the DOD and commercial enterprises who need to trust the outputs of Generative AI models require a detailed understanding of the model itself.
The image below details how Explainable AI provides high-level capabilities to explain the model output to the end user.
Source: https://www.darpa.mil/program/explainable-artificial-intelligence
The DOD, in particular, requires applications that support mission-critical capabilities across all aspects of its daily operations and, more importantly, mission planning and execution.
Any errors or interruptions can drastically impact operations, giving our adversaries a tactical advantage.
DOD end users require visibility and transparency for all interactions with generative AI models.
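To make that concrete, below is a minimal sketch of the kind of audit trail that supports traceability: every prompt and response is recorded along with the model version and source material, so any answer can later be traced and reviewed. The function, field names, and model identifier are hypothetical, not part of any DOD or Ulap system.

```python
# A hypothetical audit-trail wrapper (illustrative only): every prompt and
# response is logged with enough metadata to trace an answer back to the
# model version and source material that produced it.
import hashlib
import json
from datetime import datetime, timezone

def log_interaction(prompt: str, response: str, model_id: str,
                    source_documents: list[str],
                    log_path: str = "genai_audit.jsonl") -> None:
    """Append one generative AI interaction to an append-only audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,                      # exact model/version that answered
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "response": response,
        "source_documents": source_documents,      # provenance for the answer
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a single interaction so it can later be audited or replayed.
log_interaction(
    prompt="Summarize the maintenance schedule for unit A.",
    response="Unit A is serviced every 90 days...",
    model_id="internal-llm-v1.2",
    source_documents=["maintenance_manual_rev3.pdf"],
)
```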
The Financial Services industry offers another example. Analysts and individual traders are always looking for assets that will perform better in their portfolios.
An accomplished analyst isn’t going to take an AI model’s stock recommendation at face value before adding it to client portfolios.
They want the background on the suggested stock, how it was selected, what other stocks were evaluated, and how confident the model is in the recommendation.
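As a rough sketch of what that could look like in code, an explainable recommendation might carry its rationale, the alternatives considered, and the model’s confidence alongside the pick itself. The structure, field names, and values below are hypothetical and purely illustrative.

```python
# A hypothetical shape for an "explainable" recommendation (illustrative only):
# the model returns not just a pick, but the rationale an analyst would demand.
from dataclasses import dataclass, field

@dataclass
class ExplainedRecommendation:
    ticker: str                              # the suggested asset
    rationale: str                           # why it was selected
    alternatives_considered: list[str]       # other assets that were evaluated
    confidence: float                        # model's confidence in the pick (0-1)
    key_factors: dict[str, float] = field(default_factory=dict)  # factor -> weight

rec = ExplainedRecommendation(
    ticker="EXAMPLE",
    rationale="Strong earnings growth and low correlation with the rest of the portfolio.",
    alternatives_considered=["ALT1", "ALT2"],
    confidence=0.72,
    key_factors={"earnings_growth": 0.45, "volatility": -0.10, "sector_momentum": 0.25},
)

# An analyst can review the rationale and confidence before acting on the pick.
print(f"{rec.ticker}: confidence {rec.confidence:.0%}, factors {rec.key_factors}")
```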
Explainable AI provides several foundational capabilities to help organizations build, deploy, and operate AI models.
Source: arXiv:2001.02478 [cs.HC]
Generative AI is a powerful tool for commercial enterprises and Department of Defense organizations, but it does come with serious risks.
Implementing an Explainable AI model provides transparency into the generative AI model, allowing users to understand how trustworthy AI capabilities are developed, deployed, and operated.
Here at Ulap, we are updating our Machine Learning Workspace to include critical Explainable AI capabilities.
The goal, as always, is to deliver an AI/ML Platform that brings trustworthy generative AI to DOD and enterprise users.