Interesting approach. The core problem you're solving - "not enough real users yet but need behavioral feedback" - is very real for early-stage products.
One gap I'd watch for: AI personas will tend to be too rational. Real first-time users do things no persona model would predict. They misread buttons, get distracted halfway through a flow, form completely wrong mental models of what your product does. The "impatient user" persona is a good start, but actual impatient humans are more creatively confused than any simulation.
We're working on the other side of this at Test by Human (testbyhuman.com) - actual people screen-recording their first time through your product with voice narration. The combination could be strong: use Acceptify for fast iteration loops during development, then validate with a real human before shipping.
Curious how the agent decides when to "give up" on a confused flow vs. keep trying. That threshold seems like it would matter a lot for the quality of the feedback.
Great question, and I agree that combining AI and human validation could be a strong pairing.
As for the 'give up' signal, the agent has several triggers that stop it from executing, plus a maximum number of actions it can take before giving up. Example triggers: the page hasn't loaded after 30s, or button clicks and inputs aren't responding. If the agent is given a task like 'complete the onboarding process', it evaluates after each action whether it has sufficient information or needs to continue. Once it determines the task is complete, it summarizes the actions taken into a final report, including screenshots where applicable.
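In simplified pseudocode the loop looks roughly like this (names like run_task and StepResult are illustrative, not our actual API):

    from dataclasses import dataclass, field

    @dataclass
    class StepResult:
        action: str                     # what the agent did, e.g. "clicked 'Sign up'"
        screenshot: str | None = None   # path to a screenshot, if one was taken
        page_loaded: bool = True        # False if the page didn't load in time
        element_responded: bool = True  # False if a click/input had no visible effect

    @dataclass
    class RunReport:
        completed: bool
        reason: str
        steps: list[StepResult] = field(default_factory=list)

    def run_task(agent, task: str, max_steps: int = 20, load_timeout_s: float = 30.0) -> RunReport:
        """Act until the task is done, a give-up trigger fires, or max_steps is reached."""
        steps: list[StepResult] = []
        for _ in range(max_steps):
            # The agent picks and performs one action, waiting up to load_timeout_s
            # for the page to settle.
            result = agent.next_action(task, steps, load_timeout_s)
            steps.append(result)

            # Give-up triggers.
            if not result.page_loaded:
                return RunReport(False, f"page did not load within {load_timeout_s}s", steps)
            if not result.element_responded:
                return RunReport(False, "button click or input did not respond", steps)

            # After each action, decide whether the task is complete or more is needed.
            if agent.is_task_complete(task, steps):
                return RunReport(True, "task completed", steps)

        return RunReport(False, f"gave up after {max_steps} actions", steps)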
Are there additional triggers you think would be helpful? And should 'max steps' be user-customizable (e.g. allow up to 50 actions before failing instead of 20)?
Thank you for your input! It's much appreciated!