DARPA wants to make sure artificial intelligence is trustworthy and speaks the truth

DARPA now looks for ways to make sure it can trust machine learning. (DARPA illustration)

WASHINGTON — In the race for supremacy in artificial intelligence, the Pentagon faces two challenges: scientific gains by adversaries and the uncertainty of developing technology that can be trusted.

The former is being addressed through a multitude of budget paths. Now the Pentagon’s innovation wing is focusing on how to ensure that machine learning and AI entities can work well — and be trusted — with humans.

“This competency-awareness capability contributes to the goal of transforming autonomous systems from tools into trusted, collaborative partners,” DARPA said in a recent solicitation for its Competency-Aware Machine Learning Initiative.

“Proposed research should investigate innovative approaches that enable revolutionary advances in science,” DARPA said. It specifically warned potential vendors that it was not interested in any proposal that offered “evolutionary improvement” to the current technology.

DARPA stands for the Defense Advanced Research Projects Agency.

Machine learning is a subset of artificial intelligence in which computers use algorithms to learn from data and progressively improve their results.

The 48-month project will give successful vendors 36 months to research and construct their ideas, then 12 months to demonstrate and tweak the technology, DARPA said. First proposals are due next week.

Part of that trust will come from the machine having its own version of confidence: the ability to tell the human when it can or cannot do something, eventually through machine-to-human language, DARPA said.

“If the machine can say, ‘I do well in these conditions, but I don’t have a lot of experience in those conditions,’ that will allow a better human-machine teaming,” Jiangying Zhou, a program manager in DARPA’s Defense Sciences Office, said in a release. “The partner then can make a more informed choice.”

That would be starkly different from what exists today in machine-learning autonomous systems, which cannot assess or communicate their competence in rapidly changing situations, Zhou said in the release.

As an example, Zhou said the proposed technology could help an individual select the best self-driving vehicle for a night trip in the rain.

That calculus would then be extrapolated to the critical, split-second decisions of military use, DARPA said.
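Zhou's description can be sketched in code. The following is a minimal illustration only, with hypothetical names and logic, not DARPA's actual system: a model wrapper that tracks how much training experience it has in each operating condition and reports its competence before acting, in the spirit of the "I do well in these conditions" quote above.

```python
# Illustrative sketch of competency-aware behavior (hypothetical, not DARPA's design):
# the model records its training experience per named condition and can
# state, in plain language, where it is or is not competent.

class CompetencyAwareModel:
    def __init__(self, min_experience=10):
        self.min_experience = min_experience  # threshold for claiming competence
        self.experience = {}                  # condition name -> training examples seen

    def train(self, condition, examples):
        """Record training experience under a named condition (e.g. 'night-rain')."""
        self.experience[condition] = self.experience.get(condition, 0) + examples

    def assess(self, condition):
        """Return a human-readable statement of competence for a condition."""
        seen = self.experience.get(condition, 0)
        if seen >= self.min_experience:
            return f"I do well in {condition} ({seen} training examples)."
        return f"I don't have a lot of experience in {condition} ({seen} examples)."

model = CompetencyAwareModel()
model.train("daylight-clear", 500)
model.train("night-rain", 3)
print(model.assess("daylight-clear"))
print(model.assess("night-rain"))
```

A human partner (or a vehicle-selection system, as in Zhou's self-driving example) could then choose the model whose reported competence matches the conditions at hand.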

The new initiative further underscores DARPA’s recognition of the growing role of machine intelligence in warfare. DARPA also is exploring how machine-learning algorithms could improve battlefield awareness to better train military personnel.

DARPA also is researching ways to protect machine-learning platforms from deception.
