When a chat-native comparison table explicitly flags which rows are based on inferred preferences from earlier conversational refinement (e.g., “you didn’t say ‘refurbished is okay’ but I inferred it from your budget”) versus only on hard constraints the user stated, how does this attribution of source of constraints affect over-trust, willingness to challenge the table, and users’ propensity to request fresher or alternative options?

conversational-product-discovery

Answer

Explicitly attributing which constraints are inferred versus stated tends to (a) lower blind over-trust in the table as a whole, (b) increase users’ willingness to challenge or edit inferred constraints specifically, and (c) modestly increase requests for fresher or alternative options—when the attribution is compact, per-row, and paired with cheap controls to change or retry those inferences.

Likely effects

  1. Over-trust
  • Over-trust in the table decreases slightly: rows marked as relying on inferred constraints are seen as more tentative, which makes the entire table feel more conditional and less authoritative.
  • Over-trust can shift: users may downgrade trust in inferred-constraint rows but keep high trust in rows marked as “based only on what you explicitly said,” especially if those rows visually stand out.
  • If the UI is cluttered or the distinction is hard to parse, many users will ignore the labels; trust then behaves much like a normal table.
  2. Willingness to challenge the table
  • Willingness to challenge inferred constraints rises: explicit attribution gives users a “safe attack surface”—they feel entitled to correct the agent where it admits inference.
  • This shows up as more localized edits (e.g., “No, refurbished is not okay,” “I care more about battery than price”) rather than global rejection of the table.
  • For some users, repeated or surprising inferences (especially around sensitive or high-stakes dimensions, like risk tolerance or warranty) reduce global trust in the agent’s judgment, particularly when those inferences often turn out to be wrong.
  3. Requests for fresher or alternative options
  • Propensity to request alternatives increases modestly around inferred constraints: users seeing “inferred from budget” or “guessed from similar shoppers” are more likely to try a quick variation (“show me options that don’t assume refurbished is okay,” “show me newer models even if they cost more”).
  • When freshness cues are present, users are more likely to treat inferences as contingent on current data, and to ask for a refresh when the table looks stale and heavily inference-based.
  • If users already feel high coverage confidence from a long or well-explained refinement phase, the attribution mostly channels where they probe (inferred rows) rather than whether they ask for more options at all.

Design implications

  • Make “inferred vs stated” a short, consistent cue (e.g., an icon or tag per constraint with a hover/expand) rather than long inline explanations.
  • Attach cheap actions directly to inferred constraints: “lock this as a hard rule,” “relax/remove this,” or “see options without this inference,” so challenge energy becomes structured requests rather than vague distrust.
  • Pair the cues with compact explanations tying inferences to prior turns (“I inferred refurbished from your budget and acceptance of older models”), so users understand why the inference happened and can correct the upstream logic.
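The three implications above can be sketched as a small data model: each constraint carries its source ("stated" vs "inferred"), the prior turns it came from, and the cheap actions attached to inferred rows. This is a minimal illustration, not an implementation — every name here (`Constraint`, `cue`, `lock_as_hard_rule`, `without_inference`) is hypothetical.

```python
# Illustrative sketch of per-constraint provenance; all names are made up.
from dataclasses import dataclass, replace
from typing import Literal

@dataclass(frozen=True)
class Constraint:
    id: str
    label: str                        # e.g. "refurbished is okay"
    source: Literal["stated", "inferred"]
    rationale: str = ""               # shown on hover/expand for inferred rows
    from_turns: tuple[int, ...] = ()  # prior conversation turns the inference ties back to
    locked: bool = False              # user promoted it to a hard rule

def cue(c: Constraint) -> str:
    """Short, consistent per-row tag; the rationale stays behind an expand control."""
    return "stated" if c.source == "stated" else "inferred"

def lock_as_hard_rule(c: Constraint) -> Constraint:
    """'Lock this as a hard rule': keep provenance, stop treating the row as tentative."""
    return replace(c, locked=True)

def without_inference(constraints: list[Constraint], cid: str) -> list[Constraint]:
    """'See options without this inference': drop one inferred constraint before re-querying."""
    return [c for c in constraints if not (c.id == cid and c.source == "inferred")]

constraints = [
    Constraint("budget", "under $500", "stated", from_turns=(2,), locked=True),
    Constraint("refurb", "refurbished is okay", "inferred",
               rationale="inferred from your budget and acceptance of older models",
               from_turns=(2, 4)),
]
```

The point of the structure is that challenge energy arrives as typed operations (lock, remove, re-query) on a specific constraint, rather than as freeform distrust of the whole table.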

Net result: clear attribution of constraint source can reduce harmful over-trust and encourage targeted, constructive challenges and alternative/fresher queries, provided it remains simple and actionable; otherwise it risks confusion or a blunt drop in trust without better decisions.