The future of warfare involves humans and machines working together, Lockheed Martin's chief technology officer Craig Martell predicted on Wednesday at Axios' AI+DC Summit.
Why it matters: As the military expands autonomous weapon use, debates are intensifying over when or whether to trust these systems and who is accountable for mistakes or misfires.
What they're saying: "I want us to really focus on human-machine teaming, because ... I don't believe statistics at scale is going to get us to cognitive machines," Martell said to Axios' Colin Demarest.
- He said it's a human's job to train with the AI system they plan to deploy and to learn its errors and limitations.
- "Then you can make the rational decision if you want to take responsibility, to deploy that device, to deploy that platform."
- "I choose to use it," he said. "And if it gets it wrong, my fault."
Case in point: Martell envisions a dream team of a pilot flying alongside a swarm of autonomous aircraft that help protect them.
Catch up quick: Martell previously served as the Defense Department's first chief digital and AI officer and understands how militaries are integrating AI into their classified systems.
Zoom in: The Army just received its first autonomous Black Hawk helicopter, which can complete missions independently or with remote supervision from a secure offsite location.
- The chopper, developed with a Lockheed Martin subsidiary, is undergoing "rigorous" testing.
- The delivery reflects a broader push toward autonomous systems as drone warfare and unmanned vehicles become increasingly central to modern combat.
Go deeper: Anthropic ban may threaten the military's AI advantage over China