Ask HN: Are we forcing LLMs to be State Machines?

I'm building a customer service platform and I've hit a wall of frustration. The "AI Agent" demos and tutorials are always sleek, but the reality of bridging messy, unstructured user intent with rigid, transactional internal processes has been a nightmare of edge cases.

It feels like I spend 80% of my engineering effort building guardrails to prevent hallucinations or catastrophic logic failures, and only 20% actually shipping features.

My question for those with actual skin in the game (production-grade only, please):

Have you found a legitimate architectural "sweet spot" between strict business determinism and the probabilistic nature of LLMs?

Or are we just shoehorning a stochastic token predictor into acting like a Finite State Machine, and is this, deep down, unsustainable hype for mission-critical workflows?

I’m looking for war stories and reality checks, not theoretical pitches.
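
For concreteness, the pattern I keep converging on looks roughly like the sketch below (every name here is illustrative, not our actual code): the model only proposes a transition from free text, and deterministic business logic decides whether that transition is legal.

    from enum import Enum, auto

    class State(Enum):
        GREETING = auto()
        IDENTIFY_ISSUE = auto()
        PROPOSE_REFUND = auto()
        ESCALATE = auto()
        CLOSED = auto()

    # The business owns this transition table, not the model.
    ALLOWED = {
        State.GREETING: {State.IDENTIFY_ISSUE},
        State.IDENTIFY_ISSUE: {State.PROPOSE_REFUND, State.ESCALATE},
        State.PROPOSE_REFUND: {State.CLOSED, State.ESCALATE},
        State.ESCALATE: {State.CLOSED},
        State.CLOSED: set(),
    }

    def propose_next_state(message: str, current: State) -> State:
        # Stand-in for the LLM call (intent classification / structured output).
        # Keyword matching here only so the sketch runs without an API key.
        text = message.lower()
        if "refund" in text:
            return State.PROPOSE_REFUND
        if "manager" in text or "human" in text:
            return State.ESCALATE
        return State.IDENTIFY_ISSUE

    def step(current: State, message: str) -> State:
        proposed = propose_next_state(message, current)
        # Guardrail: the model proposes, the FSM disposes.
        if proposed in ALLOWED[current]:
            return proposed
        return current  # illegal transition: stay put or hand off to a human

    print(step(State.GREETING, "I want a refund now"))  # State.GREETING (jump not allowed yet)

It works, but every new edge case means another state or another allowed transition, which is where that 80/20 split comes from.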

2 points | by kodiyak 10 hours ago

1 comment

  • aebtebeten 4 hours ago
    pedantry: stochastic token predictors are already finite state machines; they just aren't deterministic finite state machines
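    A toy version of that distinction (states and probabilities made up): a deterministic FSM maps (state, symbol) to exactly one next state, while a sampled token predictor maps it to a distribution over next states.

        import random

        # Deterministic FSM: (state, symbol) -> exactly one next state.
        DFA = {("S0", "a"): "S1", ("S1", "a"): "S0"}

        # Stochastic "FSM" (a token predictor sampled at temperature > 0):
        # (state, symbol) -> a distribution over next states.
        PFA = {("S0", "a"): [("S1", 0.9), ("S0", 0.1)],
               ("S1", "a"): [("S0", 1.0)]}

        def dfa_step(state, symbol):
            return DFA[(state, symbol)]

        def pfa_step(state, symbol):
            states, weights = zip(*PFA[(state, symbol)])
            return random.choices(states, weights=weights, k=1)[0]

        print(dfa_step("S0", "a"))  # always "S1"
        print(pfa_step("S0", "a"))  # "S1" about 90% of the time, "S0" otherwise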