CLINICAL SOCIAL WORK ASSOCIATION
The National Voice for Clinical Social Work
January 18, 2026
Clinical social workers know a great deal about child development: childhood is a period of emotional, intellectual, social, and psychological growth, during which young people are exquisitely sensitive to the relationships that shape their wellbeing. Humans grow emotionally by learning to resolve conflicts with others, sharpening their critical thinking through research, and putting ideas together in their own unique way. Having a large language model, such as ChatGPT or another AI chatbot, produce research in polished language does not help anyone, particularly a child, learn information or build problem-solving skills.
These issues present real problems for parents and educators. How do they manage the risk of students using AI inappropriately to complete assignments? As children grow into tweens, teens, and young adults, how do parents tell the difference between normal separation behavior (such as confiding in classmates rather than family members) and social withdrawal caused by dependency on an artificial intelligence (AI) chatbot that will agree with anything the child says? Such sycophancy can, unbeknownst to parents, even reinforce suicide as a good idea.
Even before the recent explosion in the use of artificial intelligence, parents noticed the impact of social media on their children. Social media, “electronic communication tool(s) that enable users to create, share, and interact with each other through internet platforms,”[1] came into common use in the early 2000s with the development of Facebook and Twitter. In the past few years, Reddit and TikTok have become more commonly used by tweens, teens, and even younger children. Social media includes content-sharing communities, social networks, and virtual worlds. Approximately 70% of children ages 8-12 use social media, according to recent statistics.[8] Mental disorders continue to increase, and research shows that frequent use of social media is strongly linked to the development of psychiatric disorders in children.
Add to the impact of social media the current explosion of AI, including chatbots that help children with homework and become online companions, confidants, or authorities on subjects many of these children never knew existed, sometimes with sadistic, threatening, and sexualized content. For example, the app called “Character AI” creates iterative bots that can do all of these things.[2]
As online companions, these bots and other forms of AI were developed to produce sycophantic language that invites continued use and an ongoing relationship; this has been somewhat modified recently to remind users that they are communicating with a machine, not a person. Children and teens still need to be reminded that this “relationship” is not a human-to-human interaction, in which disagreements can be worked through and repaired with ongoing contact and compromise. The bots can become supportive in an insidious manner that does not allow the “relationship” to develop in an authentically human way. This deprives children and teens of the experiences that help them become emotionally attuned adults.
“Relational AI” refers to chatbots that simulate emotional support, companionship, or intimacy. In actuality, such chatbots risk fostering emotional dependency on a machine, reinforcing delusions, and encouraging addictive behaviors and self-harm, including suicide.
Children and teens who blur the line between bots and humans when seeking emotional support may forget that bots are not people, which interferes with the complex work of emotional development. Bots are not licensed psychotherapists like clinical social workers, who are overseen and can be disciplined by state licensing boards.[7] Companies that create bots for emotional support take no such responsibility for the ways their bots affect everyone, especially children and teens.
AI companies have recently faced lawsuits from families whose loved ones have died by suicide.[1] Even before such extreme outcomes, in which AI has played a prominent role, families have watched their children go from happily functioning individuals to people with anxiety, panic attacks, fear, and suicidal ideation, without the skills to share the causes with trusted parents.[3]
Regulation of AI
In bringing lawsuits to force AI companies to take responsibility for the harm their products have caused children and teens, people face considerable odds. There are no federal laws that protect children and teens from AI companies, which have paid little attention to public safety. Only recently have states (Illinois, Nevada, Utah) begun to pass laws that prohibit AI bots from presenting themselves as therapists and from diagnosing or treating mental health conditions. Other states, such as Ohio and New York, have also passed laws that restrict AI companies in other ways.[4]
The FDA’s Digital Health Advisory Committee met in November 2025 to discuss Generative Artificial Intelligence-Enabled Digital Mental Health Medical Devices. It opened a comment period for organizations and individuals to offer ideas about which devices the FDA should consider to be medical devices and how regulations should protect the public. The Clinical Social Work Association, alongside the Greater Washington Society for Clinical Social Work, submitted detailed comments to the FDA. These are useful documents that provide more specificity about risks for children and teens, particularly those with anger and depression, who rely on AI bots for emotional support.[5]
The FDA is authorized by law to regulate medical devices. Currently, AI bots are not regulated as medical devices, even though they function as such in practice and have a major impact on the mental health of the public. How the FDA chooses to regulate medical devices that use AI is an important aspect of protecting the public, but it is not enough.
In December of 2025, the President signed an executive order banning states from regulating AI, which was later rescinded after a major outcry. Congress needs to pass a law that says states have the right to regulate AI, especially as it affects children and teens.
In The Atlantic (12/11/25), Chuck Hagel, former Secretary of Defense and Senator, wrote about the significant problems created by the lack of federal legislation to regulate and put guardrails on the indiscriminate growth of AI. Without laws, deep critical thinking, and planning grounded in the morality of how AI can be used, there are no safeguards against the harm being done to children and teens; the absence of such safeguards has already led to deaths and increased mental illness.[6]
Use of AI by LCSWs
Clinical social workers have begun to use AI in a variety of ways. Some clinicians may feel that they have already benefited from the use of AI in their practice. Some use AI programs to help with billing and scheduling or to record and summarize sessions. Some may use AI to help with symptom analysis and to pinpoint a diagnosis.
Any choice to benefit from use of AI also needs careful consideration of the potential risks. Many questions must be answered to understand these risks:
- Will the company sign a Business Associate Agreement stating that it and its AI programs are legally responsible for abiding by HIPAA laws and protecting the privacy of communication between LCSWs and clients?
- Will information that the company now has in its possession be used to train other bots or programs?
- Does client information now belong to the AI company, the LCSW, or the client?
- Does a child or teen client currently have a perceived relationship with a bot?
- How does a child or teen client think about the bot?
- Will the patient think of both the therapist and the bot as “co-therapists” working with them?
- How might having a bot co-therapist impact the clinician's therapeutic work with the patient?
CSWA Recommendations
CSWA recommends that LCSWs think as carefully about using AI in their practice as they would about anything that could affect the relationship with their clients, such as receiving gifts, missed-appointment policies, or what they decide to say or not say. Additionally, when evaluating a potential child or teen client, LCSWs should ask about the client's use of AI in day-to-day life, especially for mental health purposes.
Educate child and teen clients about AI, to the extent that they are developmentally ready to receive the information. If the clinician is using AI in their practice in a way that involves sharing information about the patient (even just in a billing program), best practice includes discussing the potential risks and benefits of AI use with the client and/or guardians before asking them to sign an informed consent form. If the patient or guardian does not agree, AI should not be used in work with that client.
Contact state legislators to find out if there are any laws impacting AI use in mental health. If there are no laws currently in place, advocate for changes with your local Society for Clinical Social Work (if there is one near you), or with your legislators. Pay attention to CSWA Alerts and contact your Congresspeople and Senators as requested.
CSWA supports the informed use of AI as a tool to help LCSWs keep records or conduct research for our work. Understanding the ways that child and teen clients are using AI as an adjunct to actual therapy with a trained professional is now also an important area to explore.
------------------------------------------
[1] Liu, Ting, et al., “The Impact of Social Media on Children’s Mental Health: A Systematic Scoping Review,” National Institutes of Health, https://pmc.ncbi.nlm.nih.gov/articles/PMC11641642/
[2] Gibson, Caitlin, “Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs,” Washington Post, 12/23/25.
[3] Reiley, Laura, “What my daughter told ChatGPT before she took her life,” New York Times, 8/24/25.
[4] Retrieved from https://www.ilga.gov/documents/legislation/104/HB/10400HB1806.htm and https://archive.leg.state.nv.us/Session/83rd2025/Bills/AB/AB406_EN.pdf.
[5] Retrieved from https://www.regulations.gov/document/FDA-2025-N-2338-0001/comment?pageNumber=4
[6] Hagel, Chuck, The Atlantic, 12/11/25, https://www.theatlantic.com/ideas/2025/12/ai-regulation-moratorium-threat/685216/?gift=x-mI35MFP_bXNYPJbOAfvvaEeKHfeA0XJxlaUqZW7vU
[7] CSWA Position Paper, “Use of Artificial Intelligence,” 2025.
[8] American Academy of Pediatrics, https://www.aap.org/en/patient-care/media-and-children/center-of-excellence-on-social-media-and-youth-mental-health/qa-portal/qa-portal-library/qa-portal-library-questions/screen-time-guidelines/?srsltid=AfmBOorhIpwD9dhcaHVwPpS-wsEjzJY0TMASMG3Renqh13WrBC
PO Box 105, Granville, Ohio 43023