Beyond the Chatbot: Why 2026 is the Year of Physical AI
Feb 6, 2026
Ishtiaque Mohammad

After 25 years building infrastructure and evaluating billions in technology investments, here's the complete analysis of why infrastructure—not just AI models—will determine which Physical AI companies scale from pilots to production.
Introduction: The Infrastructure Challenge Nobody's Solving
While venture capitalists debate foundation model capabilities and enterprises rush to implement generative AI, a far more consequential transformation is unfolding in labs, factories, and warehouses around the world.
Physical AI—artificial intelligence that perceives, reasons about, and acts upon the physical world through robotics, autonomous systems, and edge devices—is crossing the threshold from research demonstrations to commercial deployment. And 2026 will mark the inflection point.
But here's what most investors miss: Physical AI has an infrastructure problem that's fundamentally different from the challenges facing generative AI. The compute infrastructure that powers ChatGPT and Midjourney won't work for robots navigating warehouses, autonomous vehicles making split-second decisions, or surgical systems requiring microsecond precision.
After 25 years building infrastructure at Intel and evaluating billions in technology investments, I've learned to recognize the early signals of major technology transitions. Physical AI is showing all of them—but the investment opportunity isn't where most capital is flowing.
This isn't about replacing workers or sci-fi fantasies. It's about the infrastructure enablement stack—silicon, software, and system integration—that needs to exist before humanoid robots can work alongside humans, before autonomous delivery can scale beyond pilots, and before edge AI can process the physical world in real-time.
Let me explain why 2026 matters, what's missing from today's infrastructure landscape, and where strategic capital should be flowing.
What is Physical AI? (And Why It's Different from Generative AI)
Defining Physical AI
Physical AI refers to artificial intelligence systems that perceive, reason about, and act upon the physical world. Unlike generative AI (which creates text, images, or code) or analytical AI (which makes predictions from data), Physical AI must:
Sense the environment through cameras, LiDAR, radar, tactile sensors, and other perception systems
Process in real-time with latency measured in milliseconds, not seconds
Make decisions under uncertainty with incomplete information and dynamic conditions
Act physically through robotic manipulators, autonomous vehicles, or control systems
Operate at the edge where cloud connectivity is unreliable, slow, or impossible
Examples of Physical AI in deployment:
Humanoid robots in manufacturing and warehousing (Figure AI, 1X Technologies, Apptronik)
Autonomous delivery vehicles (Nuro, Waymo Via, Gatik)
Warehouse automation and picking robots (Covariant, Dexterity, RightHand Robotics)
Surgical robotics with AI-assisted control (Intuitive Surgical, CMR Surgical, Vicarious Surgical)
Agricultural robots for precision farming (John Deere, Bear Flag Robotics, Small Robot Company)
Infrastructure inspection systems (Skydio, Zipline, Gecko Robotics)
Why Generative AI Infrastructure Doesn't Work for Physical AI
The entire infrastructure ecosystem built for generative AI is optimized for the wrong problem:
Generative AI (LLMs, Diffusion Models):
Training: Massive parallel compute in data centers, measured in petaflops
Inference: Cloud-based, latency-tolerant (200-500ms acceptable for chat)
Power: Essentially unlimited in data centers (300-700W GPUs standard in H100 deployments)
Connectivity: Constant high-bandwidth connection assumed
Failure mode: Incorrect output, user can retry
Deployment: Centralized infrastructure, controlled environment
Physical AI (Robotics, Autonomous Systems):
Training: Data center-based initially (can leverage some GenAI infrastructure)
Inference: Edge-based, latency-critical (5-50ms required for safety)
Power: Severely constrained (5-50W for mobile systems, 100W maximum for stationary)
Connectivity: Intermittent or nonexistent (must work offline)
Failure mode: Physical damage, injury, system failure, safety incidents
Deployment: Distributed edge infrastructure, uncontrolled environments
The fundamental mismatch: You can't put an NVIDIA H100 (700W data center GPU) into a humanoid robot or delivery vehicle. You need specialized infrastructure—silicon, software, and systems—that largely doesn't exist yet at the economics required for commercial scale.
The Infrastructure Stack Physical AI Requires (That Doesn't Exist at Scale)
After evaluating dozens of robotics and autonomous systems companies, I've identified three critical infrastructure layers that are bottlenecks to Physical AI scaling. These aren't incremental improvements—they're foundational capabilities that need to be built, validated, and deployed at volume.
Layer 1: Silicon & Hardware Infrastructure
Real-Time Perception Processors
The Problem: Physical AI systems need to understand their environment through vision, LiDAR, radar, and other sensors—and process this data in real-time with incredibly low latency for safety-critical decisions.
Current solutions rely on:
GPU-based processing: Too power-hungry (50-300W) for mobile systems
General-purpose processors: Too slow for real-time requirements (<10ms)
FPGA-based accelerators: Expensive, complex to program, difficult to scale
What's Required:
Processing: 10-100 TOPS for sensor fusion and multi-modal perception
Latency: <10ms for complete perception pipeline (sensor → processed understanding)
Power envelope: 5-15W for mobile applications, 20-30W for stationary
Cost target: <$300 at volume (10K+ units) for commercial viability
Workloads: SLAM, object detection/tracking, semantic segmentation, depth estimation, sensor fusion
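The latency requirement above is a budget shared across every stage of the pipeline. A minimal sketch of that budgeting arithmetic, with purely illustrative per-stage latencies (not measured figures):

```python
# Sketch: checking a serial perception pipeline against a 10 ms end-to-end
# budget. Stage names and latencies are illustrative assumptions.

PIPELINE_BUDGET_MS = 10.0

stage_latencies_ms = {
    "sensor_readout": 1.5,
    "preprocessing": 0.8,
    "object_detection": 3.0,
    "semantic_segmentation": 2.5,
    "sensor_fusion": 1.2,
}

def pipeline_latency_ms(stages):
    """Total latency for serial stages (parallel stages would take the max)."""
    return sum(stages.values())

def meets_budget(stages, budget_ms=PIPELINE_BUDGET_MS):
    return pipeline_latency_ms(stages) <= budget_ms

total = pipeline_latency_ms(stage_latencies_ms)
print(f"total: {total:.1f} ms, within budget: {meets_budget(stage_latencies_ms)}")
```

The point of the exercise: at a 10 ms budget, any single stage that takes 3-4 ms on a general-purpose processor consumes most of the headroom, which is why a dedicated ASIC pipeline matters.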
Why It Doesn't Exist at Scale: This requires ASICs optimized for transformer-based vision models and multi-modal sensor fusion—not the CNN-focused architectures that existing vision chips target. The economics only work at 50K-100K+ units, but no single robotics company has reached that volume yet, creating a chicken-and-egg problem.
Investment Signal: Companies building perception ASICs that have design partnerships with 2-3 robotics OEMs and credible paths to TSMC or Samsung manufacturing at 7nm or 5nm nodes.

Multi-DOF Control Processors
The Problem: Controlling a humanoid robot with 40+ degrees of freedom, each requiring real-time motor control with feedback loops running at 1-10 kHz, is a compute problem that existing solutions handle inefficiently.
Current approaches:
Microcontrollers: Handle 5-10 DOF but don't scale to humanoid complexity
Real-time compute modules: Based on automotive chips (over-engineered, expensive for robotics)
Custom control boards: One-off designs that can't scale to production economics
What's Required:
Real-time guarantees: Deterministic compute with <1ms jitter for safety-critical control
Parallel processing: Managing 40-100+ control loops simultaneously
Sensor integration: Direct interfaces for encoders, force/torque sensors, IMUs
Power efficiency: 3-10W for the control subsystem
Safety certification: Functional safety standards (ISO 13849, IEC 61508) for human collaboration
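To make the 1-10 kHz requirement concrete, here is a sketch of a single-joint PD control loop at a 1 kHz timestep, the kind of computation a humanoid must run in parallel for 40+ joints. The gains and the unit-inertia joint model are illustrative assumptions, not a real controller:

```python
# Sketch: one joint's PD control loop at 1 kHz (semi-implicit Euler).
# KP/KD and the unit-inertia model are illustrative assumptions.

DT = 0.001          # 1 kHz control period, in seconds
KP, KD = 80.0, 6.0  # illustrative proportional and derivative gains

def pd_step(position, velocity, target):
    """One control tick: PD torque on a unit-inertia joint, integrated over DT."""
    torque = KP * (target - position) + KD * (0.0 - velocity)
    velocity += torque * DT   # unit inertia: acceleration equals torque
    position += velocity * DT
    return position, velocity

pos, vel = 0.0, 0.0
for _ in range(2000):         # 2 seconds of simulated control
    pos, vel = pd_step(pos, vel, target=1.0)

print(f"position after 2 s: {pos:.3f}")
```

Running 40-100 of these loops with sub-millisecond jitter is trivial in simulation but hard on real hardware, which is exactly the deterministic-compute gap the section describes.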
Why It Doesn't Exist at Scale: Automotive chips have some required features but are over-engineered and over-priced for non-automotive robotics applications. Industrial automation chips lack the compute density and real-time guarantees. Nobody's building a robotics-specific control processor at volume.
Investment Signal: System-on-chip (SoC) opportunities combining ARM cores, DSPs, and custom accelerators with robotics OEM partnerships and automotive chip team experience.
Edge Inference ASICs for Physical AI Models
The Problem: Foundation models and vision-language models are moving from cloud to edge for Physical AI applications (robots need autonomous operation), but current edge AI chips weren't designed for the workloads Physical AI requires.
Current solutions:
Edge TPUs, NPUs: Optimized for CNNs and older architectures, not transformers
Embedded GPUs: Too power-hungry (15-50W) for battery-powered systems
Quantized models on general CPUs: Too slow for real-time decision-making
What's Required:
Model support: Native acceleration for transformers, vision-language models, diffusion models
Inference speed: 30-100 inferences per second for real-time operation
Power budget: <5W for battery-powered systems, <20W for mains-powered
Memory bandwidth: High-bandwidth memory or advanced packaging for large model weights
Cost target: <$200 for commercial robotics, <$500 for specialized applications
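Whether a chip can hit the 30-100 inferences/second target reduces to back-of-envelope arithmetic. A sketch, using a hypothetical chip and model (the TOPS figure, utilization, and per-inference cost are illustrative assumptions):

```python
# Back-of-envelope: can an edge chip sustain real-time inference?
# All numbers are illustrative assumptions, not vendor figures.

def inferences_per_second(chip_tops, utilization, model_gops_per_inference):
    """Achievable inference rate given peak TOPS and realistic utilization."""
    effective_gops = chip_tops * 1000.0 * utilization   # TOPS -> GOPS
    return effective_gops / model_gops_per_inference

# Hypothetical 20 TOPS chip at 30% real-world utilization,
# running a model costing ~100 GOPs per inference.
rate = inferences_per_second(chip_tops=20, utilization=0.3,
                             model_gops_per_inference=100)
print(f"{rate:.0f} inferences/s")
```

The utilization term is where older CNN-era chips lose: transformer attention and large weight transfers can push real-world utilization far below peak, which is the architectural mismatch described above.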
Why It Doesn't Exist at Scale: Most edge AI chips were designed 3-5 years ago when CNNs dominated computer vision. Transformer-based architectures and multi-modal models have fundamentally different compute patterns and memory requirements. Retrofitting old architectures is inefficient—new silicon designed for these workloads is needed.
Investment Signal: Companies building transformer-optimized edge inference chips with robotics customer design partnerships and paths to 7nm or 5nm manufacturing nodes.
Layer 2: Software Infrastructure & Development Tools
Robotics Operating Systems & Middleware
The Challenge: The robotics software ecosystem is fragmented across ROS (Robot Operating System), ROS2, proprietary stacks, and hybrid approaches. This fragmentation creates massive friction in scaling deployments and sharing capabilities across platforms.
Current State:
ROS/ROS2: Open-source standard with broad adoption but varying implementations
Proprietary stacks: Custom middleware from companies like Tesla, Boston Dynamics
Hybrid approaches: ROS for development, proprietary for production deployment
What's Required:
Production-grade reliability: Not research code, but hardened systems for 24/7 operation
Real-time guarantees: Deterministic performance for safety-critical systems
Hardware abstraction: Work across different sensor and actuator platforms
Fleet management integration: Deploy, monitor, and update thousands of robots
Developer ecosystem: Tools, libraries, and frameworks that don't require a robotics PhD
The Gap: Most robotics software is optimized for research and prototyping, not production deployment at scale. Moving from 10 robots in a lab to 10,000 robots in warehouses exposes brittleness in software architecture, fleet management, and operational tooling.
Investment Opportunity: Companies building production-grade robotics middleware, deployment frameworks, and developer tools that work across hardware platforms and scale to fleet deployments.
Sim-to-Real Transfer Pipelines
The Challenge: Training AI models for Physical AI in simulation and deploying them to real-world environments remains one of the hardest unsolved problems in robotics. The "reality gap" between simulation and deployment causes performance degradation that limits commercial viability.
Why Simulation is Critical:
Training in real world: Slow, expensive, dangerous, limited scenarios
Training in simulation: Fast, cheap, safe, unlimited scenarios
The problem: Models trained in simulation often fail in real-world deployment
What's Required:
High-fidelity physics simulation: Accurate modeling of dynamics, friction, contact
Domain randomization: Training across diverse environments to improve generalization
Real-world validation loops: Systematic collection of deployment data for model improvement
Continuous improvement pipelines: Not one-time training, but ongoing learning from fleet
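Domain randomization, the second requirement above, is conceptually simple: each simulated training episode draws its physics parameters from ranges rather than using fixed values, so the policy never overfits one set of dynamics. A minimal sketch (the parameter names and ranges are illustrative assumptions):

```python
# Sketch of domain randomization: each training episode samples a new
# physics configuration. Parameter ranges are illustrative assumptions.
import random

PARAM_RANGES = {
    "friction":       (0.4, 1.2),
    "object_mass_kg": (0.1, 2.0),
    "motor_gain":     (0.85, 1.15),
    "sensor_noise":   (0.0, 0.02),
}

def sample_episode_params(rng):
    """One randomized physics configuration for a simulation episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

rng = random.Random(0)   # seeded for reproducibility
episodes = [sample_episode_params(rng) for _ in range(1000)]
frictions = [e["friction"] for e in episodes]
print(f"friction range seen: {min(frictions):.2f}-{max(frictions):.2f}")
```

The hard production problem is not the sampling itself but choosing ranges wide enough to cover real-world variation without making the task unlearnable, and closing the loop with deployment data.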
Current Solutions:
NVIDIA Isaac Sim, Unity, Unreal Engine for simulation
Varying quality of sim-to-real transfer depending on application
Most companies underinvest in validation pipelines
The Gap: Companies demonstrate impressive simulated performance but struggle with real-world deployment because the sim-to-real pipeline isn't production-grade. The gap only becomes apparent at scale.
Investment Opportunity: Simulation platforms specifically designed for Physical AI, domain randomization frameworks, and companies solving systematic sim-to-real transfer with proven deployment track records.
Fleet Management & Observability Platforms
The Challenge: Managing, monitoring, debugging, and updating thousands of deployed robots requires infrastructure that most robotics companies are building as an afterthought.
What's Required:
Real-time telemetry: Health monitoring, performance metrics, operational status
Remote diagnostics: Debug issues without physical access
Over-the-air updates: Deploy software and model updates safely at scale
Data collection pipelines: Systematic gathering of operational data for improvement
Incident management: Detect, diagnose, and resolve issues quickly
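At the core of the telemetry and incident-management requirements is something as simple as a heartbeat check: flag any robot whose last report is older than a threshold. A sketch, with hypothetical robot IDs and a 30-second staleness threshold chosen for illustration:

```python
# Sketch: the heartbeat check underlying fleet observability. The robot
# names and the 30 s threshold are illustrative assumptions.

STALE_AFTER_S = 30.0

def find_unhealthy(last_seen, now, stale_after=STALE_AFTER_S):
    """Return robot IDs whose most recent heartbeat is stale."""
    return sorted(rid for rid, ts in last_seen.items() if now - ts > stale_after)

# last heartbeat timestamp (seconds) per robot
last_seen = {"robot-001": 995.0, "robot-002": 940.0, "robot-003": 999.0}
alerts = find_unhealthy(last_seen, now=1000.0)
print(alerts)   # robot-002 has been silent for 60 s
```

The logic is trivial at 50 robots; the infrastructure problem is running it reliably against telemetry from 10,000 robots on intermittent connections, with alert routing and remote diagnostics attached.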
Current State: Most robotics companies have fleet management built for 10-100 robots, then discover it doesn't scale to 1,000+ robots. The infrastructure breaks, and operational costs explode.
The Gap: Fleet management is treated as operational tooling, not core infrastructure. Companies underestimate the complexity until they're trying to debug issues across distributed deployments.
Investment Opportunity: Fleet management platforms designed for Physical AI from the ground up, with observability, diagnostics, and update orchestration as first-class capabilities.
Layer 3: System Integration & Production Infrastructure
Hardware-Software Co-Design
The Challenge: Physical AI requires tight integration between hardware and software that most companies struggle to achieve. Treating them as separate concerns leads to suboptimal performance and production issues.
What Co-Design Means:
Joint optimization: Hardware architecture informed by software workloads
Interface definition: Clean abstractions that allow software portability
Performance tuning: System-level optimization across hardware and software
Production coordination: Hardware and software teams aligned on deployment
Why It's Hard: Different skill sets, different timelines (hardware is slower), different failure modes. Most companies have hardware teams and software teams that don't collaborate deeply until integration, when it's too late to fix fundamental mismatches.
The Gap: Companies excel at either hardware or software, but few have the culture and processes for true co-design. This shows up as performance bottlenecks, power inefficiency, or difficult manufacturability.
Investment Signal: Teams with demonstrated hardware-software co-design experience (often from automotive, consumer electronics, or prior robotics companies) and organizational structures that support deep collaboration.
Manufacturing & Deployment Tooling
The Challenge: Getting from "working prototype" to "manufacturable at volume" requires infrastructure that most robotics companies underestimate by 2-3 years and 3-5x in cost.
What's Required:
Design for manufacturing (DFM): Products designed to be built at scale, not one-offs
Automated testing and calibration: Quality assurance at production volumes
Supply chain management: Long-lead components, second sources, inventory
Production tooling and fixtures: Manufacturing equipment and processes
Deployment and commissioning: Installing and configuring systems at customer sites
Current State: Most robotics companies optimize for prototype performance, then discover their designs can't be manufactured economically at scale. Redesign cycles add 12-18 months and significant capital burn.
The Gap: Manufacturing and deployment are afterthoughts, not integrated into product development from the beginning. Companies discover this gap when trying to fulfill their first 1,000-unit order.
Investment Opportunity: Companies with manufacturing expertise embedded in product development, contract manufacturers specializing in robotics, and deployment tooling platforms.
Safety Certification & Regulatory Compliance
The Challenge: Physical AI systems operating around humans require safety certification (ISO 13849, IEC 61508, UL, CE marking) that adds 12-24 months and significant cost if not planned from the beginning.
What's Required:
Safety architecture: Redundancy, fail-safes, emergency stops designed into system
Risk assessment: FMEA (Failure Mode and Effects Analysis), hazard analysis
Documentation: Design specifications, test records, traceability
Third-party certification: Testing and validation by accredited bodies
Current State: Many robotics companies design first, then discover certification requirements force fundamental redesigns. This is particularly critical for humanoid robots working alongside humans.
The Gap: Safety and certification are treated as final steps rather than design constraints. Companies learn too late that their architecture can't be certified without major changes.
Investment Signal: Teams with regulatory and safety certification experience (automotive, medical devices, industrial automation) who design for certification from day one.
Why 2026 is the Inflection Point
The convergence of five trends makes 2026 the year Physical AI moves from pilots to production at scale:
1. Foundation Models Achieve Multi-Modal Capabilities
OpenAI's GPT-4V (vision), Google's Gemini, Anthropic's Claude (vision), and specialized models from companies like Covariant and Physical Intelligence now demonstrate vision-language understanding that enables reasoning about the physical world—understanding spatial relationships, object interactions, and task planning. This level of capability wasn't available 18 months ago.
Why it matters: The "brains" of Physical AI systems now exist with sufficient capability. The bottleneck has shifted from AI algorithms to infrastructure enablement.
What's Different: Previous generations of computer vision were pattern matching. Current vision-language models can reason about what they see, plan multi-step actions, and adapt to novel situations—crossing a threshold of capability required for autonomous operation.
2. Humanoid Robotics Reaches Commercial Prototypes
Figure AI ($675M raised), 1X Technologies ($125M raised), Apptronik, Agility Robotics (Digit), and Unitree are moving from research demonstrations to commercial pilots. Multiple companies will deploy 500-5,000 units in real-world applications in 2025-2026, according to industry estimates (International Federation of Robotics, company announcements).
Why it matters: This creates the volume demand signal that justifies custom infrastructure development. At 10,000+ cumulative units across the industry, ASICs become economically viable. Software platforms can be amortized across customers. Ecosystem investments make sense.
The Timeline:
2024: 100-500 units deployed (pilot phase)
2025: 1,000-5,000 units deployed (early commercial)
2026: 10,000-50,000 units deployed (scaling phase)
2027+: 100,000+ units (mass production)
3. Autonomous Delivery Crosses Economic Viability
Companies like Nuro, Waymo Via, Gatik, and Serve Robotics are transitioning from subsidized pilots to revenue-generating operations with improving unit economics. Regulatory frameworks have matured significantly in key markets (California, Texas, Arizona) following 2024-2025 policy developments.
Why it matters: Autonomous delivery is the first Physical AI application with a clear path to profitability at scale, creating a pull signal for specialized infrastructure. Once one application crosses this threshold, investment follows.
The Regulatory Context: State and federal AV regulations matured significantly in 2024-2025, creating clearer paths to deployment. 2026 benefits from this regulatory foundation.
4. Manufacturing Labor Dynamics Accelerate Automation Adoption
Post-pandemic labor markets have fundamentally shifted. Manufacturing and logistics companies face persistent labor shortages, with manufacturing vacancy rates remaining elevated at 3-4% in the United States (U.S. Bureau of Labor Statistics, 2024), making robotics and automation economically necessary, not just cost-optimization.
Why it matters: ROI timelines for robotics have compressed from 5-7 years to 2-3 years, according to industry surveys (Robotics Industries Association, McKinsey). Economic pressure overcomes typical inertia around adopting new automation solutions. Customers are actively seeking Physical AI solutions rather than vendors needing to create demand.
The Numbers: Warehouse labor costs have increased 20-30% since 2020 (BLS data). The business case for automation is compelling.
5. Infrastructure Technologies Mature Simultaneously
Edge AI architectures, 5G connectivity, advanced battery technologies, and manufacturing techniques have all advanced to the point where Physical AI deployment is feasible. This isn't a single technology breakthrough, but rather the convergence of multiple enabling technologies.
Why it matters: Previous waves of robotics automation failed because one or more enabling technologies weren't ready. This time, the infrastructure stack is maturing simultaneously, reducing deployment risk.
What's Aligned:
Compute: Edge inference capable of running foundation models
Connectivity: Low-latency wireless for fleet coordination
Power: Battery energy density sufficient for 8+ hour operation
Manufacturing: Advanced packaging and assembly for complex systems
The Infrastructure Investment Opportunity
While billions flow into Physical AI application companies (robotics, autonomous vehicles) and foundation model developers, the infrastructure enablement layer is dramatically underfunded relative to its criticality.
Market Sizing: Conservative Estimate
Addressable Markets (2026-2030):
Humanoid Robotics:
Manufacturing, warehousing, logistics: 500K units by 2030 (International Federation of Robotics projections, industry analysis)
Infrastructure per robot: $800-2,000 (silicon + software + systems)
Market size: $400M-1B annually by 2030
Autonomous Delivery & Logistics:
Light delivery vehicles and mobile robots: 150K units by 2030 (ABI Research, McKinsey)
Infrastructure per vehicle: $2,000-5,000 (higher requirements than humanoids)
Market size: $300M-750M annually by 2030
Industrial & Collaborative Robotics:
Advanced perception and adaptive robots: 300K units by 2030 (IFR)
Infrastructure per robot: $500-1,500
Market size: $150M-450M annually by 2030
Agricultural Robotics:
Autonomous tractors, field robots, harvesters: 100K units by 2030
Infrastructure per unit: $1,500-4,000
Market size: $150M-400M annually by 2030
Infrastructure & Inspection Systems:
Drones, crawlers, climbing robots: 200K units by 2030
Infrastructure per unit: $300-1,000
Market size: $60M-200M annually by 2030
Total Physical AI-Specific Infrastructure: $1.1B - $2.8B annually by 2030
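The headline range is just the sum of the five segments above, which is easy to verify:

```python
# Arithmetic check of the segment totals above: (low, high) in $M/year by 2030.
segments = {
    "humanoid":     (400, 1000),
    "delivery":     (300, 750),
    "industrial":   (150, 450),
    "agricultural": (150, 400),
    "inspection":   (60, 200),
}

low = sum(lo for lo, _ in segments.values())
high = sum(hi for _, hi in segments.values())
print(f"${low / 1000:.1f}B - ${high / 1000:.1f}B annually")   # $1.1B - $2.8B
```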
But this understates the opportunity. The broader ecosystem includes:
Supporting Infrastructure Components:
Specialized cameras and sensors for robotics: $3-5B
LiDAR and advanced perception systems: $2-4B
Edge computing and networking equipment: $2-3B
Software platforms, tools, and services: $1-2B
Manufacturing and deployment tooling: $500M-1B
Total Ecosystem Opportunity: $10-18B by 2030
(Market sizing based on IFR World Robotics Report, ABI Research Edge AI forecasts, McKinsey robotics analysis, and SiliconEdge Partners proprietary analysis of 50+ Physical AI investments)
Where Capital Should Flow (But Isn't)
Current Investment Allocation (2023-2025):
Physical AI application companies (robotics, AVs): $10-15B annually (PitchBook, CB Insights data)
Foundation model companies: $5-8B annually (PitchBook)
Infrastructure enablement (silicon, software, systems): $1-2B annually (SiliconEdge analysis based on disclosed rounds)
The Gap: Most capital goes to application companies (the "Uber of robotics") rather than the enabling infrastructure. But without the specialized infrastructure, these applications can't scale economically.
Optimal Capital Allocation for Infrastructure Layer:
Silicon & Hardware (40% - $400M-800M annually):
Perception ASICs: Highest near-term ROI, clear customer demand
Edge inference chips: Largest long-term TAM
Control processors: Niche but critical for humanoids
Novel sensors: High risk but transformative if successful
Software Infrastructure (35% - $350M-700M annually):
Production robotics middleware and frameworks
Sim-to-real platforms and tools
Fleet management and observability
Development tools and platforms
System Integration & Production (25% - $250M-500M annually):
Manufacturing tooling and automation
Deployment and commissioning platforms
Safety and certification services
Hardware-software integration tools
Investment Returns Profile
Silicon Infrastructure:
Time to revenue: 3-4 years (tape-out + production ramp)
Capital intensity: High ($50-100M to production)
Gross margins: 50-70% at scale
Exit multiples: 5-10x revenue (strategic acquirers value highly)
Software Infrastructure:
Time to revenue: 18-24 months
Capital intensity: Medium ($10-30M to scale)
Gross margins: 70-85%
Exit multiples: 8-15x revenue (SaaS-like economics)
System Integration:
Time to revenue: 12-18 months
Capital intensity: Low-Medium ($5-20M)
Gross margins: 40-60%
Exit multiples: 3-6x revenue (services-oriented)
Infrastructure Due Diligence: What Investors Must Validate
When evaluating Physical AI infrastructure investments, here's the framework developed from $2B+ in technology investment decisions:
1. Technical Capability & Differentiation
For Silicon Companies:
Architecture innovation that matters:
Specific workload optimizations (not generic "AI acceleration")
Demonstrable advantages in power efficiency, latency, or cost
Credible path to 2-3x improvement over existing solutions in metrics that matter
Critical questions:
What specific operations does your architecture accelerate?
How does performance scale with power consumption and silicon area?
What's your advantage vs. incumbents if they optimize for this workload?
Can you show working silicon or verified simulation results?
Red flags:
Claims of 10x improvement across all dimensions (power, performance, cost)
No clear architectural differentiation from existing edge AI accelerators
Assuming Moore's Law scaling will solve fundamental architectural limitations
For Software Infrastructure:
Production readiness vs. research code:
Can it handle 10,000 deployed robots or just 10 in a lab?
Real-time guarantees for safety-critical systems?
Observability, debugging, and update capabilities for fleet deployments?
Critical questions:
What's the largest deployment you've supported?
How does the system behave under failure conditions?
What's your strategy for backwards compatibility as the platform evolves?
Do you have production SLAs or just best-effort?
Red flags:
Research code being repackaged as production platform
No fleet management or observability capabilities
"We'll add those features later" for critical infrastructure
2. Team Capability & Execution Experience
Silicon teams need:
At least one founder who's taken a chip from architecture to production before
Design team with relevant process node experience (7nm, 5nm if targeting leading edge)
Relationships with foundries (not just "planning to use TSMC")
Understanding of DFM, yield, packaging, and testing
Software teams need:
Production robotics experience (not just ROS contributions)
Large-scale distributed systems experience
Real-time systems expertise if safety-critical
DevOps and fleet management background
System integration teams need:
Manufacturing experience at volume (1,000+ units)
Safety certification background (automotive, medical, industrial)
Supply chain and operations expertise
Customer deployment experience
The pattern: Teams that have done it before, even if at different companies, have dramatically higher success rates than first-time teams, regardless of how impressive the technology looks.
3. Customer Development & Go-to-Market
For infrastructure companies, the critical milestone is design partnerships:
What credible customer development looks like:
Named design partners (not just "in discussions with major OEMs")
Joint development agreements or paid NRE from customers
Products being designed around your infrastructure
Volume commitments or LOIs for future purchases
Red flags:
"We'll build it and they will come" approach
No customer conversations until product is complete
Targeting dozens of applications without focus
No specific names when asked about customer pipeline
The reality: Infrastructure companies that succeed have 2-3 lead customers involved from early design stages, providing requirements, funding development, and committing to volume purchases.
4. Manufacturing & Scaling Strategy
For silicon companies, this is often where deals break:
Critical path analysis:
Which foundry, which process node, and what's the actual relationship status?
Tape-out timeline realistic? (18-24 months for experienced teams, 24-36 for first-timers)
NRE budget realistic? ($30-50M for 7nm/5nm, $50-80M for 3nm/2nm)
Yield assumptions grounded in reality? (40-60% first silicon, ramping to 70-80%)
Second-source strategy in case the primary foundry has capacity issues?
Red flags:
No specific foundry named or just "evaluating options"
Assuming access to leading-edge nodes (3nm, 2nm) without foundry relationships
NRE budget 50%+ below realistic estimates
No packaging strategy for thermal management or HBM integration
First-time team underestimating complexity
For software companies:
Cloud infrastructure costs at scale modeled?
Support and customer success resources planned?
Documentation and developer resources sufficient?
Backwards compatibility and migration strategy?
5. Business Model & Unit Economics
The questions that determine viability:
For infrastructure companies:
At what volume do you break even on NRE? (silicon)
What's the attach rate to end products? (software)
Are you a must-have or a nice-to-have in the customer's BOM?
How do you capture value as customers scale?
Pricing strategy:
One-time licenses vs. recurring revenue?
Per-unit fees vs. platform subscriptions?
How does pricing scale with customer volumes?
Competitive dynamics:
What prevents customers from building in-house at scale?
What prevents incumbents from replicating in 18-24 months?
Where's the sustainable competitive advantage?
Case Study: The Infrastructure Stack for Humanoid Robotics
Let me illustrate why infrastructure matters through a composite example based on real companies:
The Scenario
Company: Humanoid robotics startup
Capital raised: $150M
Achievement: Impressive humanoid demos, 50 prototype robots deployed
Plan: Manufacture 1,000 units in 2025, 10,000 units in 2026
Current Infrastructure Stack
Compute:
Perception: NVIDIA Jetson Orin ($800, 60W)
Control: Industrial PC with RTOS ($600, 40W)
Edge inference: Google Coral TPU ($150, 2W)
Total compute cost: $1,550 per robot
Total power: 102W
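The per-robot totals above are just the sum of the three compute modules, and tallying them shows how the cost compounds at the 10,000-unit target:

```python
# Tallying the case study's compute stack: (cost in $, power in W) per module.
compute_stack = {
    "perception (Jetson Orin)":   (800, 60),
    "control (industrial PC)":    (600, 40),
    "edge inference (Coral TPU)": (150, 2),
}

cost = sum(c for c, _ in compute_stack.values())
power = sum(w for _, w in compute_stack.values())
print(f"${cost} per robot, {power} W")                    # $1550, 102 W
print(f"at 10,000 units: ${cost * 10_000 / 1e6:.2f}M")    # $15.50M
```

This is the arithmetic behind the scaling problem in the next section: off-the-shelf modules that are fine at 50 prototypes become a $15.5M line item and a 100 W battery drain at volume.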
Software:
ROS2 with custom modifications
Proprietary control algorithms
Cloud-based fleet management (built for 50 robots)
Manual deployment and configuration
Manufacturing:
Contract manufacturer for electronics
In-house assembly for mechanical systems
Manual calibration and testing
Lead time: 8-12 weeks per unit
The Problem (Emerges at Scale)
At 1,000 units (2025):
Compute hardware cost: $1.55M (15-20% of total BOM)
Power consumption limits operational runtime to 4-6 hours
Fleet management system crashes under load
Manufacturing lead time extends to 16 weeks
Manual calibration becomes bottleneck
Margins: Negative even before R&D allocation
At 10,000 units (2026):
Compute hardware cost: $15.5M
Battery requirements drive up weight and cost
Fleet management completely breaks
Manufacturing can't scale without redesign
Support costs explode (no automated diagnostics)
Project fails or requires emergency down-round
What They Actually Need
Silicon & Hardware:
Integrated SoC combining perception + control + inference
Target specs: $300-500 BOM, 20-30W power, 10,000+ volume
Custom ASIC or partnership with chip company building for robotics
Timeline: Design in 2024-2025, production in 2026-2027
Software Infrastructure:
Production-grade robotics middleware (not ROS2 research builds)
Fleet management designed for 10,000+ robots from day one
Over-the-air update capability
Automated deployment and configuration
Real-time diagnostics and remote debugging
Manufacturing & Systems:
Automated testing and calibration
Design for manufacturing (DFM) optimization
Supply chain for long-lead components
Contract manufacturer with robotics experience
Deployment tooling and customer training
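To see what the integrated SoC is worth, compare the current three-board stack against the target specs. A rough sketch; the $400 / 25 W point is my assumed midpoint of the $300-500 BOM and 20-30W ranges stated above:

```python
# Sketch: what the target integrated SoC saves versus the current three-board
# stack at the 2026 volume. The $400 / 25 W point is an assumed midpoint of the
# stated $300-500 BOM and 20-30 W target ranges.

current_bom, current_power = 1_550, 102   # Jetson Orin + industrial PC + Coral TPU
target_bom, target_power = 400, 25        # integrated SoC target (midpoints)
volume = 10_000

savings = (current_bom - target_bom) * volume
print(f"BOM savings at {volume:,} units: ${savings / 1e6:.1f}M")  # $11.5M
print(f"Power: {current_power} W -> {target_power} W "
      f"({1 - target_power / current_power:.0%} reduction)")
```

Roughly $11.5M of BOM savings at 10,000 units, plus a ~75% power reduction that translates directly into runtime and battery weight, is the economic case for the silicon partnership.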
The Investment Opportunity
For the robotics company:
They need infrastructure partners, not DIY solutions
Custom silicon: 24-36 months, $50-80M (they don't have time or capital)
Production middleware: 18-24 months, $10-20M (not their core competency)
Manufacturing tooling: 12-18 months, $5-10M (requires specialized expertise)
For infrastructure companies: A chip or software platform company that can deliver this infrastructure by 2026 could capture:
40-60% of compute/software BOM across multiple robotics companies
Platform position as the "infrastructure for humanoid robotics"
Expansion into adjacent Physical AI categories
Strategic acquisition potential from incumbents
The Pattern
This pattern is repeating across Physical AI: application companies excel at AI and robotics algorithms, then hit infrastructure walls when scaling. The companies that solve infrastructure early—either by building, acquiring, or partnering—will be the ones that reach commercial scale.
What to Watch in 2026
Based on current trajectories and industry conversations, here are the milestones and inflection points to track:
Q1-Q2 2026
Silicon Milestones:
2-3 perception ASIC startups announce first silicon availability
At least one major robotics company announces custom chip development (likely partnership with semiconductor company)
Edge inference chips optimized for vision-language models begin customer sampling
First production deployments of robotics-specific control processors
Market Developments:
First commercial deployments of 500-1,000 humanoid robots in manufacturing/warehousing
Autonomous delivery surpasses 5M cumulative deliveries (inflection point for network effects)
At least one Physical AI infrastructure company (silicon or software) raises $100M+ Series B
Major cloud providers announce Physical AI infrastructure offerings
Q3-Q4 2026
Production Ramps:
First perception ASICs in production with lead customers at volume
Volume orders (25K+ units) placed for 2027 delivery across robotics industry
Price compression begins in legacy edge AI chips as specialized solutions launch
Infrastructure software platforms reach 1,000+ robot deployments
M&A Activity:
Traditional semiconductor companies (Qualcomm, NXP, Renesas) acquire Physical AI chip startups
Robotics companies with strong balance sheets acquire infrastructure teams
Strategic investments from automotive Tier 1s into robotics silicon and software
Platform consolidation in robotics middleware and fleet management
Risks and Wildcards
What could delay this timeline:
TSMC/Samsung capacity constraints push tape-outs out by 12-18 months
Key robotics companies fail or pivot, reducing infrastructure demand signal
Macroeconomic downturn slows robotics adoption and capital deployment
Regulatory setbacks in key markets (safety, labor, privacy)
Breakthrough in software-based approaches reduces need for custom silicon
What could accelerate it:
Major automotive OEM (Tesla, Mercedes, BMW) commits to humanoid robots at 50K+ volume
Amazon announces large-scale warehouse robot deployment with custom infrastructure
Government industrial policy (US CHIPS Act, EU Chips Act) funds Physical AI infrastructure
Unexpected breakthrough in chip architecture (analog compute, photonics, neuromorphic)
Foundation model capabilities advance faster than expected, pulling infrastructure forward
Conclusion: The Strategic Advantage
We're entering the most consequential phase of the AI revolution. While generative AI captured attention and capital with its immediate applications, Physical AI represents the next frontier—and the infrastructure required is fundamentally different.
The companies and investors who understand this distinction will have a decisive advantage:
For investors: The ability to evaluate not just the AI models and robotics applications, but the complete infrastructure stack—silicon, software, and systems—that determines commercial viability at scale.
For infrastructure companies: The opportunity to build foundational platforms that enable an entire industry, capturing value across multiple customers and applications.
For robotics companies: Understanding that infrastructure is not just a technical detail but a strategic imperative that determines whether impressive demos become commercial products.
The opportunity is time-sensitive. Infrastructure that needs to exist in 2027 must be designed in 2025, taped out or released in 2026, and ramped to volume in 2027. Miss this window, and you're either stuck with suboptimal infrastructure or accepting unfavorable terms from whoever built it.
The companies that will win combine:
Deep understanding of Physical AI applications and workload requirements
Proven infrastructure design and manufacturing/deployment expertise
Strong customer relationships with lead robotics and Physical AI companies
Sufficient capital to execute without cutting corners
Speed and focus to deliver in a 24-36 month window
At SiliconEdge Partners, this is precisely where we operate: helping investors validate the complete infrastructure stack—silicon, software, and systems integration—for Physical AI investments.
The question isn't whether Physical AI will transform industries. That's inevitable. The question is which companies will build the infrastructure that makes it possible, and which investors will back them at the right time.
2026 is that inflection point. The infrastructure is being designed and built right now. The winners are being determined.
About the Author
Ishtiaque Mohammad is the founder of SiliconEdge Partners, providing infrastructure and enablement stack advisory for Physical AI investors. He evaluates the complete stack of semiconductors, software, and system integration that determines whether Physical AI companies can scale from pilots to production.
He spent 25 years building infrastructure in semiconductors and systems, including as Director of Xeon CPU Product Management and Optane Strategic Planning at Intel Corporation, where he was responsible for $2B+ in strategic investment decisions. He also held senior positions at Broadcom, LSI Corporation, and Synopsys.
Ishtiaque holds an MBA from Cornell University and dual engineering degrees from the University of Louisiana at Lafayette and Osmania University. He founded SowFin Corporation, which operates VentureScope, an AI-powered due diligence platform for venture capital firms.
Sources & References
Market Data & Industry Analysis:
CB Insights, "The Physical AI Models Market Map"
International Federation of Robotics, "World Robotics Report"
PitchBook, "Physical AI & Robotics Investment Report"
McKinsey & Company, "The Future of Robotics in Manufacturing"
ABI Research, "Edge AI for Robotics Market Forecast"
Labor & Economic Data:
U.S. Bureau of Labor Statistics, "Manufacturing Sector Employment Data"
Robotic Industries Association, "Automation ROI Studies"
National Association of Manufacturers, "Labor Market Analysis"
Technology & Company Information:
Technical documentation and product specifications from OpenAI, Google, Anthropic, Figure AI, 1X Technologies, NVIDIA, TSMC
IEEE publications on robotics and embedded systems
Semiconductor Industry Association reports and standards
Investment & Funding Data:
PitchBook venture capital database
Crunchbase funding announcements
Public company filings and investor presentations
Analysis & Frameworks:
SiliconEdge Partners proprietary analysis based on 25+ years of industry experience
Evaluation of 50+ Physical AI and infrastructure investments
Disclaimer: Market projections and investment figures represent informed estimates synthesized from the sources listed above and SiliconEdge Partners' proprietary analysis. Individual figures should be considered directional estimates rather than precise forecasts, as the Physical AI infrastructure market is rapidly evolving. Readers should conduct their own due diligence and consult with qualified advisors before making investment decisions.
