Risks and Regulations with the Use of AI in Behavioral Health

Written by: Kirti Vaidya Reddy, Quarles and Robert Hinyub III, Breazeale, Sachse & Wilson, L.L.P.

With some studies estimating that nearly 23% of the adult population lives with a mental illness, the integration of artificial intelligence (AI) into mental health care has transformative potential in terms of accessibility, cost reduction, personalization, and provider efficiency. To improve the prediction of mental health disorder risk and the treatment of mental health conditions, AI is commonly used for: (1) AI therapy; (2) wearables that interpret bodily signals using sensors and provide assistance when needed; (3) diagnosing and predicting outcomes by analyzing patient data; (4) improving adherence to treatment by using AI to predict when a patient is likely to slip into noncompliance or to issue reminders for medication or provider appointments; and (5) personalizing treatments and adjusting individual treatment plans. To support these advancements, the American Medical Association Current Procedural Terminology (CPT) Editorial Board has incorporated billing codes applicable to the use of AI, as well as an AI taxonomy that provides guidance for classifying various AI-powered medical service applications. While AI has the potential to improve behavioral health care, it also presents challenges, as the technology is advancing at a much faster pace than the regulatory controls that ensure safety and efficacy. This article discusses various challenges with the use of AI in the behavioral health setting and the regulatory developments that are attempting to provide safeguards in this dynamic space.

Challenges and Limitations of AI

As with any developing technology, AI in behavioral health presents several challenges and limitations. For example, bias may exist within AI systems for a variety of reasons. AI systems are trained on large amounts of data, such as medical records or biomarker records, detecting and incorporating patterns and connections within that data. AI systems may produce biased applications if built from historical data that is biased, imbalanced, or otherwise incomplete. 1 Another source of potential bias in AI is a lack of diverse representation among AI developers and participants in medical research, which may cause algorithms to perpetuate false assumptions. 1 If the data misrepresents population variability, AI will reinforce those biases, resulting in misdiagnoses and poor outcomes. As such, guidance is necessary to ensure fairness and avoid discrimination in AI models.

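As a concrete illustration of how an underrepresented subgroup in training data can translate into uneven clinical performance, the short Python sketch below trains a simple classifier on synthetic data in which one demographic group is both underrepresented and distributed differently, then compares missed-diagnosis rates by group. All data, group labels, and thresholds here are hypothetical, invented for illustration; this is the kind of subgroup audit a fairness guideline might require, not any cited study's method.

```python
# Hypothetical illustration: an imbalanced training set can yield
# uneven error rates across subgroups. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'symptom score' features; label 1 = condition present."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A is well represented; group B is underrepresented and its
# features are distributed differently (a proxy for population variability).
Xa, ya = make_group(5000, shift=1.0)
Xb, yb = make_group(250, shift=-0.5)

X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
group = np.array(["A"] * len(ya) + ["B"] * len(yb))

# A single model fit on the pooled data is dominated by the majority group.
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Audit: compare false-negative rates (missed diagnoses) by subgroup.
for g in ("A", "B"):
    mask = (group == g) & (y == 1)
    fnr = np.mean(pred[mask] == 0)
    print(f"Group {g}: false-negative rate = {fnr:.2%} (positives = {mask.sum()})")
```

Running this sketch shows a far higher false-negative rate for the underrepresented group, even though the model's overall accuracy looks acceptable, which is precisely why aggregate performance metrics alone cannot establish fairness.
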
AI chatbots that serve as therapists by simulating conversation and offering general support are another area that may pose challenges, particularly for vulnerable populations such as children and individuals with mental health conditions. AI chatbots allow for immediate interaction, which may make them a more attractive alternative to human-to-human therapy, which is often less convenient. Additionally, vulnerable individuals may feel more comfortable forming relationships with bots than engaging in real-life interactions, in part because they can express their thoughts and feelings without fear of judgment. Despite these advantages, however, individuals may become over-reliant on AI chatbots, which may ultimately exacerbate a user's isolation and social avoidance. Thus, guardrails are needed to balance time spent with AI against real-life human socialization.

Additionally, chatbots that mimic human therapists may repeatedly affirm the user, even if the person says things that are harmful or misguided. 4 While therapists are trained to ask questions about things they do not know and to avoid making certain assumptions, chatbots give the illusion of having mental health expertise in all circumstances even though they do not. 5 Chatbots may fail to recognize irony, complex thoughts, or emergency situations. Limitations on AI are necessary to ensure that it is developed and used responsibly, especially when interacting with people who are in a vulnerable state.

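One form such a limitation could take is a routing layer that diverts possible emergencies to a human before the chatbot responds. The minimal Python sketch below shows the idea; the marker list, function name, and handler labels are entirely hypothetical, and a production system would need clinically validated detection rather than simple keyword matching.

```python
# Hypothetical guardrail sketch: escalate to a human clinician when a
# message suggests a possible emergency, instead of letting the chatbot
# keep affirming the user. Markers and handler names are illustrative.
CRISIS_MARKERS = ("suicide", "kill myself", "end my life", "hurt someone")

def route_message(message: str) -> str:
    """Return which handler should respond to the user's message."""
    lowered = message.lower()
    if any(marker in lowered for marker in CRISIS_MARKERS):
        return "escalate_to_human"   # hand off; do not auto-respond
    return "chatbot_reply"

print(route_message("I've been feeling a bit down lately"))  # chatbot_reply
print(route_message("I want to end my life"))                # escalate_to_human
```
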
Further, AI creates complexity regarding professional responsibility and accountability. Providers may be held liable not only for improper use of AI tools but also for issues that appear beyond the physician's direct control, such as using an AI tool that lacks clinical validation or control over the underlying data. Guardrails such as requiring AI developers to disclose how their AI models are trained and tasked may allow providers to more thoroughly evaluate the liability risk of any AI tool prior to use.

Finally, AI presents inherent data privacy concerns because AI relies on large amounts of data; how an AI tool stores, transfers, retains, and uses patient data creates risks of breaches and misuse. Additionally, many AI tools require third-party integrations, and patient data risks may extend to such third-party vendors. This is particularly important in the behavioral health space, where individuals may share sensitive information such as mental health conditions, addiction, suicidal tendencies, or the presence of disabilities. Notably, while the Health Insurance Portability and Accountability Act of 1996 (HIPAA) protects individuals' health information, HIPAA may be inadequate to safeguard protected health information that is ingested by AI. To address this concern, the government should consider requiring AI companies to provide explicit privacy policies that clearly educate users on how their data will be used and to offer users a clear option to decline consent for their personal data to be used in the development and application of AI systems.

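To make the suggested opt-out concrete, here is a minimal, hypothetical sketch of how a vendor might record a patient's consent decision and enforce it before records are used for model development. The record fields, function name, and default-deny rule are illustrative assumptions, not drawn from any statute, regulation, or real system.

```python
# Hypothetical sketch: enforcing an explicit opt-in before patient data
# is used for AI model development. All names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    patient_id: str
    allows_ai_training: bool   # explicit, affirmative choice by the patient
    policy_version: str        # which privacy policy the patient was shown
    recorded_at: datetime

def filter_training_data(records, consents):
    """Keep only records whose patient affirmatively consented.

    Absence of a consent record is treated as a 'no' -- data is excluded
    by default rather than included by default.
    """
    consent_by_patient = {c.patient_id: c for c in consents}
    allowed = []
    for rec in records:
        consent = consent_by_patient.get(rec["patient_id"])
        if consent is not None and consent.allows_ai_training:
            allowed.append(rec)
    return allowed

# Example: only the patient with an explicit opt-in is retained.
consents = [
    ConsentRecord("p-001", True,  "2025-01", datetime.now(timezone.utc)),
    ConsentRecord("p-002", False, "2025-01", datetime.now(timezone.utc)),
]
records = [{"patient_id": "p-001", "note": "..."},
           {"patient_id": "p-002", "note": "..."},
           {"patient_id": "p-003", "note": "..."}]
print([r["patient_id"] for r in filter_training_data(records, consents)])
# -> ['p-001']
```

The design choice worth noting is the default: a patient with no recorded decision is excluded, which mirrors the opt-in posture the article recommends rather than the opt-out posture common in commercial privacy policies.
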
Regulatory Advancement of AI in Behavioral Health

Under the new Trump administration, there has been significant change in the AI regulatory landscape at the federal level. On his first day in office, President Trump signed a broad Executive Order (EO), "Initial Rescissions of Harmful Executive Orders and Actions," which pulled back numerous Biden administration objectives, including President Biden's EO for "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The Biden EO set new standards for AI safety and security, which encouraged transparency in the development of AI technology and addressed civil rights issues and biases that AI could be prone to perpetuating. 2 However, the Trump administration has been critical of the now-revoked EO, asserting that it hindered AI innovation and placed unnecessary government control over the development of AI.

On January 23, 2025, President Trump signed an EO entitled "Removing Barriers to American Leadership in Artificial Intelligence," which calls for the extensive development of AI that is "free from ideological bias or engineered social agendas." 2 Furthermore, the EO mandates the creation of an AI Action Plan within 180 days of the order to promote and improve AI innovation in the U.S. private sector without imposing burdensome federal requirements. Given both this order and the Trump administration's generally expressed desire to shrink federal regulation and expand business and innovation, it seems likely that state legislatures will become the primary source of AI regulation.

AI governance on the state level varies widely. Some states have already enacted or proposed legislation pertaining to AI regulation, while others have yet to make any substantive moves toward regulating AI. Massachusetts, California, Illinois, New York, and Utah are some of the states at the forefront of AI regulation for general health care treatment and other purposes.

In September 2024, California passed 18 laws with an eye towards regulating AI in numerous spheres, including AI development, risk management, privacy, and, notably, health care. In the context of health care, AB-3030 and SB-1120 were enacted to protect patients from risks associated with providers using generative AI and to mandate effective oversight of AI-driven decisions made in the course of patient care.

Illinois lawmakers also recognized a need for AI legislation and passed House Bill 3773, which amended the Illinois Human Rights Act around the same time California passed its series of AI laws. Although the Illinois AI measure does not explicitly address health care, it affects the health care industry by regulating the manner in which employers use AI software to both recruit and manage employees. New York has also taken significant steps toward AI regulation, notably in the public sphere. Taking effect in December 2025, State Technology Law Section 103-E will govern how AI is used in state agencies, including state-level health care regulators such as the New York Department of Health.

Utah recently passed one of the most robust AI-specific laws in the country, specifically targeting consumer protection and privacy risks. Effective May 1, 2024, S.B. 149 created rigorous safeguards for consumers both using and interfacing with AI software. 17 Specifically, in the health care sector, the bill tasks providers with "prominently" disclosing the use of generative AI in the treatment of patients before it is used. These states are just a few of the many making significant strides towards regulating the use of AI in the delivery of health care.

In addition to AI-related regulations applicable to health care generally, states are beginning to focus on regulations specific to the use of AI in the behavioral health space. Specifically, Massachusetts has put forth a bill, titled "An Act Regulating the use of artificial intelligence in providing mental health services," that requires any licensed mental health professional who wants to use AI to provide mental health services to seek approval from the relevant licensing board. Additionally, the bill requires that professionals licensed to do so disclose the use of AI to their patients and obtain informed consent, as well as offer the option to receive treatment from a human instead. The measure also seeks to maintain a human dimension in the therapist-patient relationship by requiring that "[a]ny AI system used to provide mental health services must be designed to prioritize the safety and well-being of individuals seeking treatment and must be continuously monitored by a licensed mental health professional to ensure its safety and effectiveness." 19 Proposed legislation in Rhode Island uses the same language as Massachusetts relating to continuous monitoring by a licensed mental health professional.

Conclusion

While the states discussed above are advancing towards a regulatory framework for the safe and effective use of AI, many other states are also making progress or are on track to pass AI-tailored legislation. As governments continue to implement regulatory changes that may ultimately make the use of AI for mental health treatment safer and more effective, AI developers, health care providers, and attorneys who advise them should be prepared to adapt to a rapidly shifting patchwork of federal and state laws.

Copyright 2025, American Health Law Association, Washington, DC. Reprint permission granted.


Regulatory Advancement of Al in Behavioral Health

Under the new Trump administration, there has been significant change in the AI regulatory landscape at the federal level. On his first day in office, President Trump signed a broad Executive Order (EO), "Initial Rescissions of Harmful Executive Orders and Actions," which pulled back numerous Biden administration objectives, including President Biden's EO for "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The Biden EO set new standards for AI safety and security, which encouraged transparency in the development of AI technology and addressed civil rights issues and biases that AI could be prone to perpetuating. 2 However, the Trump administration has been critical of the now-revoked EO, asserting that it hindered AI innovation and placed unnecessary government control over the development of Al. ti

On January 23, 2025, President Trump signed an EO entitled "Removing Barriers to American Leadership in Artificial Intelligence," which calls for the extensive development of AI that is "free from ideological bias or engineered social agendas." 2 Furthermore, the EO mandates the creation of an AI Action Plan within 180 days of the order to promote and improve AI innovation in the U.S. private sector without imposing burdensome federal requirements. 'I;) Given both this order and the Trump administration's generally expressed desire to shrink federal regulation and expand business and innovation, it seems likely that state legislatures will become the primary source of AI regulation.

AI governance on the state level varies widely. Some states have already enacted or proposed legislation pertaining to AI regulation, while others have yet to make any substantive moves toward regulating AI. Massachusetts, California, Illinois, New York, and Utah are some of the states at the forefront of AI regulation for general health care treatment and other purposes.

In September 2024, California passed 18 laws with an eye towards regulating AI in numerous spheres, including AI development, risk management, privacy, and, notably, health care. n In the context of health care, AB-3030 and SB-1120 were enacted to protect patients from risks associated with providers using generative AI i • and mandate effective oversight of AI-driven mete--course of patient care. 

Illinois lawmakers also recognized a need for AI legislation and passed House Bill 3773, which amended the Illinois Human Rights Act near the same time California passed its series of AI laws. Although the Illinois AI measure does not explicitly address health care, it affects the health care industry by regulating the manner in which employers use the help of AI software to both recruit and manage employees. is New York has also taken significant steps toward AI regulation, notably in the public sphere. Taking effect in December 2025, State Technology Law Section 103-E will order how AI is used in state agencies, which includes state-level health care regulators like the New York Department of Health.

Utah recently passed one of the most robust AI-specific laws in the country, specifically targeting consumer protection and privacy risks. Effective on May 1, 2024, S.B. 149 created rigorous safeguards for consumers both using and interfacing with AI software. 17 Specifically, regarding the health care sector, the bill tasks providers with "prominently" disclosing the use of generative AI to aid in the treatment of patients before being used. k3 These states are just a few of the many making significant strides towards regulating the use of AI in the delivery of health care.

In addition to AI-related regulations that are applicable to health care generally, states are beginning to focus on regulations specific to the use of AI in the behavioral health space. Specifically, Massachusetts has put forth a bill—titled "An Act Regulating the use of artificial intelligence in providing mental health services"—that requires any licensed mental health professional who wants to use AI to provide mental health services to seek approval from the relevant licensing board. Additionally, the bill requires that those licensed to do so must disclose the use of AI to their patients and provide informed consent, as well as provide the option to receive treatment from a human instead. The measure also seeks to maintain a human dimension to the therapist-patient relationship by requiring that "[a]ny AI system used to provide mental health services must be designed to prioritize the safety and well-being of individuals seeking treatment and must be continuously monitored by a licensed mental health professional to ensure its safety and effectiveness." 19 Proposed legislation in Rhode Island uses the same language as Massachusetts relating to continuous monitoring by a licensed mental health professional.

Conclusion

While the states discussed above are advancing towards a regulatory framework for the safe and effective use of AI, many other states are also making progress or are on track to pass AI-tailored legislation. As governments continue to implement regulatory changes that may ultimately make the use of AI for mental health treatment safer and more effective, AI developers, health care providers, and attorneys who advise them should be prepared to adapt to a rapidly shifting patchwork of federal and state laws.

Copyright 2025, American Health Law Association, Washington, DC. Reprint permission granted.