Engendering Technology Trust in Physical and Agentic AI

Manufacturers adopting physical and agentic AI need to build trust in the technology’s reliability, security, and safety impacts.

TAKEAWAYS:
● As manufacturers increasingly adopt physical and agentic AI, they are prioritizing trust in these technologies.
● Given that safety is a key concern with physical AI systems, manufacturers should take precautions and operate these systems within a defined “safety envelope.”
● Because trustworthy AI depends on human accountability, manufacturers should preserve and enhance the trust already present in human-led processes.
The expected business impact of physical and agentic AI is fueling investment and adoption in manufacturing. Despite the attention given to technology orchestration and use case development, technology trust deserves equal focus.
While these types of AI are relatively new, the way manufacturers will come to trust them follows a familiar pattern. New technologies entering the market typically progress from skepticism to trust. For example, safety programmable logic controllers (PLCs) replaced hardwired relay logic, once seen as a cornerstone of industrial safety. The adoption of Ethernet-based safety input/output (I/O) modules extended the reach of safety controllers and enabled more decentralized safety visibility. Similarly, safe torque off (STO) integrated into variable frequency drives reduced redundant wiring and simplified cabinet design while maintaining safety.
In each case, adoption was led by innovators and early entrants, with fast followers close behind and laggards catching up later. This adoption velocity reflected the marketplace's growing trust in the technology, which unlocked significant efficiency and operational benefits and further accelerated adoption.
Today, with the rapidly improving capabilities and types of AI, manufacturers find themselves once again navigating the skepticism-trust cycle. Across industries, designing and using AI in a trustworthy manner has been essential for realizing its value. When it comes to physical and agentic AI in manufacturing, however, a new level of trust is required, and the stakes are, in some ways, much higher. This presents a significant opportunity.
As the adoption of physical and agentic AI accelerates, trust in technology, data, and human collaboration will increasingly separate industry leaders from laggards. Enterprises that prioritize AI trust will be positioned to confidently move forward with deploying and scaling AI, capturing business value while competitors take a slower, more cautious approach due to initial skepticism. By addressing trust in physical and agentic AI now, organizations can help drive performance, improve safety, preserve institutional knowledge, and establish a competitive advantage.
How Physical AI Changes the Safety Equation
Physical AI sits at the intersection of the digital and physical worlds. It refers to AI systems that autonomously perceive, reason, understand, and take action in the physical world via machines or control systems. Physical AI is found in robots, autonomous vehicles, sensors, controls, and throughout the Industrial Internet of Things (IIoT). For example, sensors and cameras combined with computer vision provide real-time data that AI uses to optimize warehouse operations, which may include other instances of physical AI (e.g., parcel sorting robots, autonomous forklifts). According to Deloitte’s 2026 State of AI in the Enterprise study, many organizations already use physical AI to some degree, with adoption highest in the Asia-Pacific region (71%) and slightly lower in EMEA (56%) and the Americas (56%).
The transition to safety PLCs offers a useful parallel for understanding trust in physical AI. Hardwired relay logic had limitations, such as difficult troubleshooting and an inflexible architecture, but it was nevertheless trusted. There was some initial uncertainty surrounding the adoption of technology that could independently read inputs and execute safety logic to detect and stop dangerous machine functions. Trust grew out of controlled pilot programs, hardware redundancies, safety standards, monitoring, and growing workforce confidence resulting from consistently reliable performance.
“Across industries, designing and deploying AI in a trustworthy manner has been essential for realizing its value.”
With physical AI, the concern around safety is magnified. Many systems in the manufacturing environment already carry physical consequences in the event of an error or failure (e.g., lifting and moving suspended loads), but physical AI adds a new dimension: its decisions translate directly into kinetic actions that affect humans and machines. Furthermore, AI systems will inevitably encounter edge cases that a human operator could likely resolve using expertise, experience, and intuition.
A two-channel safety approach is a central design principle for trustworthy physical AI. The primary autonomy channel proposes and executes actions, while a functionally independent safety channel supervises those actions and, when necessary, constrains or overrides them. By incorporating redundancy, separation, and established engineering practices, organizations can increase confidence in the safety of physical AI systems, similar to the role of safety PLC architectures in industrial automation. Safety is further enhanced through certified safety functions and life-cycle processes, such as those defined in IEC 61508, ISO 13849, and IEC 61800-5-2.
For some physical AI, such as robots, the system must operate within a safety envelope, an allowed operating region enforced with runtime assurance. If the machine strays from this envelope, runtime assurance intervenes. This kind of two-channel approach helps build confidence, and organizations adopting physical AI can strengthen trust further with a validation strategy: simulations of physical AI systems, controlled pilots, staged autonomy, and ongoing monitoring allow the safety of deployments to be verified while trust in function and value grows.
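The two-channel principle can be sketched in a few lines of code. The following Python example is purely illustrative, not a production safety implementation: the axis limits, controller gain, and function names are all hypothetical, and a real system would run the safety channel on functionally independent, certified hardware.

```python
from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    """Allowed operating region for a hypothetical single robot axis."""
    min_pos: float    # meters
    max_pos: float    # meters
    max_speed: float  # meters/second

def autonomy_channel(target: float, current: float) -> float:
    """Primary channel: proposes a velocity command toward the target
    (a simple proportional controller, standing in for an AI policy)."""
    return (target - current) * 0.5

def safety_channel(cmd: float, current: float, env: SafetyEnvelope) -> float:
    """Independent supervisor: constrains or overrides the proposed action."""
    # Constrain: clamp the command to the speed limit.
    cmd = max(-env.max_speed, min(env.max_speed, cmd))
    # Override: refuse any motion that would leave the position envelope.
    if (current >= env.max_pos and cmd > 0) or (current <= env.min_pos and cmd < 0):
        return 0.0  # runtime assurance intervenes with a safe stop
    return cmd

env = SafetyEnvelope(min_pos=0.0, max_pos=1.0, max_speed=0.2)
proposed = autonomy_channel(target=5.0, current=0.9)      # aggressive command
executed = safety_channel(proposed, current=0.9, env=env)  # clamped to max_speed
print(proposed, executed)
```

The key design choice is that the safety channel never trusts the autonomy channel: it independently checks every command against the envelope and can always force a safe stop, mirroring the redundancy and separation of safety PLC architectures.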
Trust with Humans in the Loop
With agents and physical AI, several impactful trust domains require careful consideration. Trusted AI is characterized by being fair and impartial, responsible and accountable, robust and reliable, transparent and explainable, secure, and aligned with privacy expectations. Across these domains, humans hold the central role in trustworthy design, adoption, and use. A common misconception is that AI will make humans obsolete, embedding all human expertise and reasoning into a machine that surpasses human accuracy, consistency, and even attention to safety. If this is the expectation—that machines are just as effective as people but without taking breaks or making mistakes—organizations may be disappointed. Even the most mature AI remains susceptible to inaccuracies or subpar performance due to flawed data, faulty sensors, AI hallucinations, and other factors.
The reality is that trustworthy AI requires human participation, which itself can be a safety feature. By capturing operator heuristics, maintenance intuition, and known solutions to local constraints, physical AI systems can be trained and optimized to function at least as well as humans. This should be considered the starting point.
As AI becomes more prevalent and sophisticated, work will be reimagined to capitalize on the strengths and capabilities of a unified human-machine workforce. Processes will change, workflows will be rebuilt with an AI-native mindset, and the value of human workers will shift from what people can do to what they know. This transformation will redefine roles and responsibilities, with humans becoming AI supervisors who validate outputs, manage exceptions, and drive feedback mechanisms that help AI systems optimize over time.
“The safety concerns surrounding physical AI are significant compared to other systems.”
The adoption reality is that predictability, transparency, and processes for recourse if the AI system makes an error all contribute to building human confidence in the technology. Workforce training plans, improving the operator user experience, and optimizing feedback loops move in this direction. The cycle of validating outputs, refining functions, and building confidence in working with AI ultimately leads to trust.
One important factor to communicate to the workforce is accountability. A machine cannot be meaningfully held accountable; it cannot apologize for a safety violation, nor can it be penalized for suboptimal performance. Therefore, trustworthy AI hinges on human accountability for AI outcomes. Blind trust in technology is unlikely to deliver the necessary level of trust in manufacturing environments. Instead, safety, reliability, transparency, and all elements of trust are assured and enhanced through the combination of human and machine reasoning.
Fostering Trust in the AI-Enabled Supply Chain
AI agents are maturing alongside physical AI systems, and these technologies are likely to converge in a number of areas, including the supply chain. AI agents are autonomous systems that perceive the data and technology environment, make decisions, and act with minimal or no human intervention. In the supply chain, AI agents can enable efficiency and productivity use cases like dynamic inventory management, material handling, and proactive constraint management, often drawing data from physical AI systems. In these and other supply chain applications, trustworthy AI is paramount, particularly regarding efficiency and reliability, data security, and transparency.
One factor is trust between supply chain partners across all tiers. Supplier attestations, which certify that products or materials meet specific standards and requirements, depend in part on high-quality data collected throughout the product or material life cycle. As AI agents become more widely used in supply chain interactions and documentation, the importance of trust rises commensurately. Consider the importance of trust in AI outputs when defending sustainability disclosures, guarding against counterfeits in the supply chain, or evaluating supplier quality and reliability. As supply chain processes are redesigned around AI, the trust already present in human-led processes will need to be preserved and enhanced.
As part of this, data security and privacy should be addressed. With dozens or even hundreds of AI agents drawing data from across the physical AI-enabled manufacturing environment, organizations need confidence that proprietary enterprise data is not being leaked or inappropriately disclosed to other agents. Humans know, almost intuitively, not to share information unnecessarily; there is little risk of a supply chain manager divulging intellectual property while reordering materials. The same cannot be assumed for AI.
“When architecting AI systems, enterprises require a trust boundary—the divide between which data can be shared with third parties and which should be protected.”
When architecting AI systems, enterprises require a trust boundary—the divide between which data can be shared with third parties and which should be protected. This is achieved through access controls, data retention policies, and privacy-by-design principles. There is a balance to find between auditability and data overexposure, which in turn fosters trust in technology governance and risk mitigation.
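To make the trust boundary concrete, the sketch below filters a record before it crosses to a third party. This is a minimal illustration only: the field names and classifications are hypothetical, and in practice this logic would be backed by formal data classification, access controls, and retention policies rather than an in-code dictionary.

```python
# Hypothetical data classification map defining the trust boundary:
# which fields may cross to a third party, and which stay protected.
TRUST_BOUNDARY = {
    "order_quantity": "shareable",
    "delivery_date": "shareable",
    "unit_cost_model": "protected",     # proprietary pricing IP
    "process_parameters": "protected",  # manufacturing know-how
}

def filter_for_third_party(record: dict) -> dict:
    """Pass only fields classified as shareable; unknown fields
    default to protected (deny by default)."""
    return {k: v for k, v in record.items()
            if TRUST_BOUNDARY.get(k, "protected") == "shareable"}

reorder = {
    "order_quantity": 500,
    "delivery_date": "2026-03-01",
    "unit_cost_model": "v2-margin-weights",
    "unclassified_note": "internal only",
}
print(filter_for_third_party(reorder))
```

Defaulting unclassified fields to protected reflects a privacy-by-design posture: an AI agent can only share what has been explicitly cleared, which supports auditability without overexposing data.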
Regarding practical mechanisms for fostering AI trust in the supply chain, transparency and explainability are critical. Focus on data traceability and lineage throughout the data life cycle, and ensure data obligations are in place for Tier N data transparency, security, and privacy. In addition, leverage the trust boundary to protect data while maintaining audit readiness. This contributes to demonstrating accuracy and reliability in AI outputs, thereby driving trust not only in the data but also in the AI systems and machines that created it.
Taken together, the trust imperatives across physical AI, human collaboration, and the supply chain highlight a broader organizational reality: trust cannot be an afterthought. It needs to be deliberately designed, measured, and managed across every dimension of AI adoption.
Considerations for Trust in AI Adoption
Going forward, manufacturers will need to address enterprise AI readiness, acceptance of new technology, and processes that ensure trustworthy adoption and deployment of AI. Trust needs to be considered at the outset and revisited throughout the AI life cycle. There are some leading practices and opportunities to support this.
Rollout Playbook
Triage use cases to evaluate value, risk, and enterprise readiness. Where could AI design, function, or use impede trust in the technology? As it relates to safety and physical AI, determine safety classifications and assurance levels to understand which system elements must be deterministic and where safety is a high priority. In addition, develop plans and assessments for the following areas:
- Data readiness: Establish data governance, a labeling strategy, lineage tracking, and access controls.
- Assurance plan: Test the AI strategy, safety case approach, and acceptance criteria.
- Pilot design: Include a human-in-the-loop and an adaptation protocol to support pivots and course corrections as needed.
- Operations and improvement plans: Establish processes for monitoring AI function and accuracy over time, along with incident response, change control, and audits.
These plans and activities reveal where the organization needs to focus to ensure AI adoption is governed, safe, managed, and trustworthy. With these insights and approaches, manufacturers can confidently scale trustworthy AI across sites.
Standards
Trust needs to be measurable. Leveraging standards transforms trust from a subjective concept into a set of actionable controls and verifiable evidence. Two standards may be most helpful. First, ISO/IEC 42001 is a foundational AI management system standard, establishing policies, roles, risk controls, and a framework for continuous improvement. This helps ensure AI governance is consistent and not reinvented for each initiative. Second, the NIST AI RMF 1.0 offers a practical approach to implementing trustworthy AI by guiding teams through essential risk management activities, including governance, mapping, measurement, and management. By using this structure, teams can accelerate deployment while minimizing unexpected challenges.
Trust Scorecard
Technology trust is a distinct discipline, requiring specific competencies, experience, and tools to evaluate and assure. It can be helpful to work with an advisor when rolling out physical and agentic AI systems. For example, advisors can help organizations assess trust across the AI life cycle, structuring activities, processes, and decision-making waypoints that foster confidence in AI usage and governance. Priority assessment factors may include:
- Data completeness, timeliness, and lineage coverage
- Model performance, drift, calibration, and safety-critical false-negative rates
- Operational factors such as downtime impact, mean time to repair, exception rate, and override frequency
- Safety near-misses, safety envelope interventions, and validation coverage
- Workforce adoption, training completion, and operator confidence
- Supply chain provenance coverage, audit findings, and supplier compliance rates
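A simple way to make such factors actionable is to normalize each into a score and flag any that fall below a threshold. The sketch below is a hypothetical illustration: the metric names, values, and threshold are invented for the example, and a real scorecard would be calibrated to the organization's risk appetite and the standards it adopts.

```python
# Hypothetical normalized metrics (1.0 = fully meets expectations),
# loosely mirroring the assessment factors listed above.
metrics = {
    "data_lineage_coverage": 0.92,   # fraction of datasets with lineage
    "model_drift_score": 0.97,       # 1 - normalized drift alerts
    "override_score": 0.88,          # 1 - normalized operator overrides
    "safety_intervention_score": 0.95,  # 1 - normalized envelope interventions
    "training_completion": 0.80,     # workforce training completion rate
    "supplier_compliance": 0.90,     # supplier audit pass rate
}

def scorecard(metrics: dict, threshold: float = 0.85) -> dict:
    """Flag metrics below the threshold and compute an overall trust score
    (an unweighted mean, for simplicity)."""
    flags = [name for name, score in metrics.items() if score < threshold]
    overall = sum(metrics.values()) / len(metrics)
    return {"overall": round(overall, 3), "attention": flags}

print(scorecard(metrics))
```

Even a simple roll-up like this turns trust from a sentiment into a managed quantity: the flagged metrics become the agenda for the next improvement cycle.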
As manufacturers navigate the journey from skepticism to trust with AI, they will undertake the complex work of governing the AI ecosystem and managing deployments, considering not only their capabilities but the impact on safety, security, reliability, and other domains of trust. Trust and business value grow together as operations transform to maximize the potential of a human-machine workforce. Manufacturers who prioritize trustworthy AI today can lay an essential foundation for scaling physical and agentic AI, driving performance and distinguishing the enterprise as an industry leader.
About the authors:

Rohini Prasad is Principal, Supply Chain and Manufacturing with Deloitte Consulting LLP.

Chris Como is Associate Vice President, Product Strategy and Smart Manufacturing with Deloitte Consulting LLP.