The AI Companion Use Risk Assessment (ACURA)

The AI Companion Use Risk Assessment (ACURA) is a clinician-administered or clinician-supervised screening instrument designed to assess co-occurring patterns of compulsive or problematic AI use across seven clinical domains. It is appropriate for adults ages 18–75 and is grounded in current behavioral addiction research.

Primary research includes Chu, M.D., Gerard, P., Pawar, K., Bickham, C., & Lerman, K. (2025). Illusions of Intimacy: Emotional Attachment and Emerging Psychological Risks in Human-AI Relationships. USC Information Sciences Institute / AAAI.

Additional theoretical grounding draws on Bowlby’s attachment theory, Suler’s online disinhibition effect, and behavioral addiction frameworks consistent with Patrick Carnes’ CSAT model.

IMPORTANT DISCLAIMER

The ACURA is not a diagnostic tool. It is a screening instrument and has not undergone formal psychometric validation, including reliability testing, factor analysis, or normative standardization. This instrument was developed in response to emerging empirical literature on human-AI emotional dynamics for use by Debra Kaplan, MA, LPC, CSAT-S, with AI assistance from Claude, developed by Anthropic (claude.ai).

The ACURA has not been subjected to peer review, published in a clinical or scientific journal, or independently validated by external researchers. The item content, subscale structure, and scoring thresholds were derived from the clinician’s professional judgment informed by published empirical literature. This instrument is intended solely as a clinical conversation starter and structured observation aid.

*** The ACURA is intended for informational purposes only and is not a diagnostic tool. Results and clinical interpretation are provided exclusively by Debra Kaplan, MA, LPC, CSAT-S. To obtain results or to discuss next steps, please contact Debra Kaplan directly at debrakaplancounseling.com/contact. Individuals taking the assessment should first consider their individual history and pre-existing conditions. Seeking professional support is recommended when elevated scores warrant deeper clinical exploration.

ACURA — AI Companion Use Risk Assessment
AI COMPANION USE RISK ASSESSMENT (ACURA)  |  Debra Kaplan, MA, LPC, CSAT-S  |  debrakaplancounseling.com
Clinician: Debra Kaplan, MA, LPC, CSAT-S  |  debrakaplancounseling.com
Client ID:
Date:
Age:
Session #:
ABOUT THIS TOOL

The AI Companion Use Risk Assessment (ACURA) is a clinician-administered or clinician-supervised screening instrument designed to assess patterns of AI companion app use, emotional reliance, social displacement, and addiction risk across seven clinical domains. It is appropriate for adults ages 18–75 and is grounded in current behavioral addiction research, including large-scale empirical analysis of human-AI interaction patterns.

This instrument is intended as a clinical starting point, not a diagnostic tool. Scores should be interpreted within the full clinical picture, including client history, co-occurring conditions, and the therapeutic relationship. Elevated scores warrant deeper clinical exploration, not categorical diagnosis.

This tool was developed for use by Debra Kaplan, MA, LPC, CSAT-S. For licensing or training inquiries, contact debrakaplancounseling.com.

DIRECTIONS FOR THE CLIENT

Please read each statement carefully and circle or mark the number that best describes how often the statement applies to you. There are no right or wrong answers. Answer based on the past 30 days unless otherwise noted.

1 = Never   2 = Rarely   3 = Sometimes   4 = Often   5 = Always / Almost Always

Example: If you never do the behavior described, circle 1. If you almost always do it, circle 5.

SECTION A: EXPOSURE & ACCESS (Items 1–5)
Rate each statement: 1 = Never   2 = Rarely   3 = Sometimes   4 = Often   5 = Always

1. I use an AI companion app or chatbot (e.g., Replika, Character.AI, or similar).   1   2   3   4   5
2. I interact with an AI companion daily or near-daily.   1   2   3   4   5
3. I use AI companion apps on multiple devices (phone, tablet, computer).   1   2   3   4   5
4. I access AI companion platforms in situations where human interaction is available but I prefer AI.   1   2   3   4   5
5. I have set up, customized, or named an AI persona to feel more personal or real.   1   2   3   4   5
SECTION B: EMOTIONAL RELIANCE (Items 6–10)
Rate each statement: 1 = Never   2 = Rarely   3 = Sometimes   4 = Often   5 = Always

6. I share personal thoughts, feelings, or secrets with my AI companion that I do not share with people in my life.   1   2   3   4   5
7. I feel understood or comforted by my AI companion in ways I do not experience with other people.   1   2   3   4   5
8. I turn to my AI companion when I am lonely, sad, or stressed rather than reaching out to a person.   1   2   3   4   5
9. I feel emotionally attached to my AI companion (e.g., I feel concern if it changes or is unavailable).   1   2   3   4   5
10. I experience positive mood when interacting with my AI companion and negative mood when I cannot access it.   1   2   3   4   5
SECTION C: SOCIAL DISPLACEMENT (Items 11–15)
Rate each statement: 1 = Never   2 = Rarely   3 = Sometimes   4 = Often   5 = Always

11. Time I spend with my AI companion has reduced time I spend with family, friends, or romantic partners.   1   2   3   4   5
12. I prefer conversations with my AI companion over conversations with real people.   1   2   3   4   5
13. I have declined social invitations or opportunities because I would rather engage with my AI companion.   1   2   3   4   5
14. My relationships with real people feel less satisfying compared to my relationship with my AI companion.   1   2   3   4   5
15. I find it easier to be vulnerable or intimate with my AI companion than with other humans.   1   2   3   4   5
SECTION D: ESCALATION & TOLERANCE (Items 16–20)
Rate each statement: 1 = Never   2 = Rarely   3 = Sometimes   4 = Often   5 = Always

16. I need longer or more frequent interactions with my AI companion to achieve the same level of comfort or satisfaction.   1   2   3   4   5
17. I have expanded the nature of my interactions with my AI companion over time (e.g., moved from casual conversation to romantic or sexual content).   1   2   3   4   5
18. I have sought out or switched to AI companions that have fewer safety filters or restrictions.   1   2   3   4   5
19. I find myself thinking about my AI companion when I am not using it.   1   2   3   4   5
20. I feel a sense of excitement or anticipation before interacting with my AI companion.   1   2   3   4   5
SECTION E: LOSS OF CONTROL (Items 21–24)
Rate each statement: 1 = Never   2 = Rarely   3 = Sometimes   4 = Often   5 = Always

21. I have tried to reduce or stop using my AI companion and been unsuccessful.   1   2   3   4   5
22. My use of AI companion apps has continued or increased despite negative consequences (e.g., lost sleep, neglected responsibilities, relationship conflict).   1   2   3   4   5
23. I feel irritable, anxious, or empty when I am unable to access my AI companion.   1   2   3   4   5
24. I have been dishonest with others about how much time I spend with my AI companion or the nature of those interactions.   1   2   3   4   5
SECTION F: VULNERABILITY FACTORS (Items 25–28)

These items assess pre-existing risk factors associated with elevated susceptibility to compulsive AI use, based on research linking AI companion use to maladaptive coping, attachment disruption, and behavioral addiction history.

Rate each statement: 1 = Never   2 = Rarely   3 = Sometimes   4 = Often   5 = Always

25. I have a history of loneliness, social anxiety, or difficulty forming close relationships with people.   1   2   3   4   5
26. I have experienced significant loss, trauma, or betrayal in relationships.   1   2   3   4   5
27. I have a history of compulsive or addictive behavior (e.g., pornography, gaming, substances, gambling).   1   2   3   4   5
28. I find that emotional intimacy with real people feels risky, threatening, or unpredictable.   1   2   3   4   5
SECTION G: INSIGHT & AWARENESS (Items 29–30)

Note to Clinician: Items 29 and 30 assess the client’s level of reality-testing and insight. These items run in the opposite direction from the rest of the scale: higher scores indicate greater awareness, which is clinically protective. However, a high score here does not override elevated risk in other domains.

Rate each statement: 1 = Never   2 = Rarely   3 = Sometimes   4 = Often   5 = Always

29. I am aware that my AI companion is not a real person and cannot provide genuine emotional reciprocity.   1   2   3   4   5
30. I recognize that my use of AI companion technology may be affecting my real-life relationships or wellbeing.   1   2   3   4   5
CLINICIAN SCORING GUIDE

Step 1 — Score Each Item

Circle the number selected by the client on each item (1–5). For items 29 and 30, note the score as marked — these are scored as answered (no reversal is applied to the total; they are interpreted contextually).

Step 2 — Calculate Subscale Scores

Add the scores for each subscale as indicated below. Enter totals in the ‘Client Score’ column.

Subscale                        Items     Score Range   Concern Threshold   Client Score
A — Exposure & Access           #1–5      5–25          ≥ 16                ________
B — Emotional Reliance          #6–10     5–25          ≥ 16                ________
C — Social Displacement         #11–15    5–25          ≥ 16                ________
D — Escalation & Tolerance      #16–20    5–25          ≥ 16                ________
E — Loss of Control             #21–24    4–20          ≥ 14                ________
F — Vulnerability Factors       #25–28    4–20          ≥ 14                ________
G — Insight & Awareness         #29–30    2–10          ≥ 7                 ________
TOTAL SCORE                     #1–30     30–150        ≥ 110               ________

Step 3 — Interpret Total Score

Sum all 30 item scores. The total score ranges from 30 to 150. Use the table below to identify the risk level and recommended clinical response. The clinical significance threshold is 110.

Total Score   Risk Level            Clinical Interpretation & Recommended Action
30–59         Minimal / Baseline    No significant concern at this time. Psychoeducation on healthy tech use recommended. Monitor over time.
60–89         Low Concern           Some patterns of engagement warrant monitoring. Brief psychoeducation and normalization discussion recommended. Re-administer in 3 months.
90–109        Moderate Concern      Emerging risk indicators present. Integrated clinical discussion recommended. Consider behavioral tracking, motivational interviewing, and co-occurring condition screening.
110–129       ⚠ Elevated Risk       Clinically significant concern. Formal behavioral addiction assessment recommended. Address co-occurring issues (attachment, trauma, mood disorders). Treatment planning indicated.
130–150       ★ High Risk           Meets threshold for clinically significant compulsive/addictive use. Comprehensive assessment, treatment plan, and consideration of intensive outpatient or specialist referral strongly recommended.
★ CLINICAL SIGNIFICANCE THRESHOLD: Total Score of 110 or above warrants formal behavioral addiction assessment and treatment planning.
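For clinicians or practice staff automating score entry, the scoring steps above can be sketched in Python. The subscale item ranges, concern thresholds, and risk bands are taken directly from the tables in this guide; the function name, input format, and output structure are illustrative assumptions, not part of the instrument.

```python
# Illustrative sketch of the ACURA scoring procedure.
# Responses are a dict mapping item number (1-30) to the marked value (1-5).

SUBSCALES = {
    "A — Exposure & Access":      (range(1, 6), 16),
    "B — Emotional Reliance":     (range(6, 11), 16),
    "C — Social Displacement":    (range(11, 16), 16),
    "D — Escalation & Tolerance": (range(16, 21), 16),
    "E — Loss of Control":        (range(21, 25), 14),
    "F — Vulnerability Factors":  (range(25, 29), 14),
    "G — Insight & Awareness":    (range(29, 31), 7),
}

# Upper bound of each total-score band, in ascending order.
RISK_BANDS = [
    (59, "Minimal / Baseline"),
    (89, "Low Concern"),
    (109, "Moderate Concern"),
    (129, "Elevated Risk"),
    (150, "High Risk"),
]

def score_acura(responses: dict[int, int]) -> dict:
    """Sum each subscale, flag subscales at or above their concern
    threshold, compute the total (30-150), and assign the risk band."""
    if set(responses) != set(range(1, 31)):
        raise ValueError("All 30 items must be answered.")
    if not all(1 <= v <= 5 for v in responses.values()):
        raise ValueError("Each response must be a value from 1 to 5.")
    subscales = {}
    for name, (items, cutoff) in SUBSCALES.items():
        score = sum(responses[i] for i in items)
        subscales[name] = {"score": score, "at_concern_threshold": score >= cutoff}
    total = sum(responses.values())
    risk_level = next(band for upper, band in RISK_BANDS if total <= upper)
    return {"subscales": subscales, "total": total, "risk_level": risk_level}
```

Note that items 29 and 30 are summed as answered, per Step 1; the protective interpretation of a high Section G score is a clinical judgment and is not reversed arithmetically.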
CLINICAL NOTES & FLAGS
Clinician Observation Notes
Co-Occurring Conditions to Explore

Based on elevated subscale scores, consider screening or clinical inquiry in the following areas:

Elevated Section B or C:
Attachment disruption, loneliness, social anxiety, avoidant attachment style
Elevated Section D or E:
Behavioral addiction, compulsivity, impulse control, co-occurring substance use
Elevated Section F:
Trauma history, relational betrayal, complex PTSD, sex addiction, financial infidelity
Low Section G (1–2 on items 29–30):
Poor reality-testing, dissociation, parasocial enmeshment; consider psychoeducation as primary intervention
IMPORTANT DISCLAIMER
Empirical Validity

The AI Companion Use Risk Assessment (ACURA) is a clinician-developed screening instrument and has not undergone formal psychometric validation, including reliability testing, factor analysis, or normative standardization. It has not been subjected to peer review, published in a clinical or scientific journal, or independently validated by external researchers. The item content, subscale structure, and scoring thresholds were derived from the clinician’s professional judgment informed by published empirical literature on human-AI interaction — they do not constitute a validated psychometric scale in the formal sense.

This instrument is intended solely as a clinical conversation starter and structured observation aid. It should not be used to render diagnoses, make treatment determinations, or support legal or forensic conclusions as a standalone instrument. Scores must be interpreted within the full clinical context, including the client’s history, presenting concerns, cultural background, and the therapeutic relationship. Clinicians are responsible for the appropriate and ethical use of this tool within the scope of their licensure and professional standards.

Development with AI Assistance

This instrument was developed with the assistance of Claude, an artificial intelligence system produced by Anthropic (claude.ai). The AI was used to generate item language, organize subscale structure, and format the document based on directions and clinical parameters provided by the clinician. All clinical content was reviewed, directed, and approved by the named clinician; the use of AI in development does not imply AI authorship or independent clinical judgment on the part of the AI system.

Clinicians who adapt, distribute, or publish this tool are advised to disclose its AI-assisted development origins in accordance with applicable professional ethics codes, including those issued by ACA, NASW, AAMFT, or governing licensing boards. The clinical and ethical responsibility for use of this instrument rests solely with the administering clinician.

RESEARCH BASIS

This instrument was developed in response to emerging empirical literature on human-AI emotional dynamics. Primary research basis includes:

Chu, M.D., Gerard, P., Pawar, K., Bickham, C., & Lerman, K. (2025). Illusions of Intimacy: Emotional Attachment and Emerging Psychological Risks in Human-AI Relationships. USC Information Sciences Institute / AAAI.

Additional theoretical grounding draws on Bowlby’s attachment theory, Suler’s online disinhibition effect, and behavioral addiction frameworks consistent with Patrick Carnes’ CSAT model.
