AI Use Policy

Purpose
This policy outlines how artificial intelligence (AI) tools are used within this business. It is designed to ensure that all use of AI supports client wellbeing, protects confidentiality, and maintains the integrity of professional judgement and human connection.

Scope
This policy applies to all services, including training, consulting, content creation, and client communications.

Guiding principles
AI is used to support, not replace, human expertise.
Client safety, privacy, and trust come first.
Transparency is maintained about where and how AI is used.
All outputs are reviewed and refined through human judgement.
The voice, values, and professional standards of the business are never delegated to AI.

How AI is used

AI tools may be used in the following ways:

Content development
Drafting outlines, summaries, and early versions of written materials such as presentations, articles, and educational resources.

Idea generation
Supporting brainstorming, identifying patterns, and exploring angles that are then evaluated and refined.

Editing support
Improving clarity, structure, and readability of content that has already been created by a human.

Administrative efficiency
Assisting with internal workflows such as note organization, task planning, or general business operations.

AI is not used to provide therapy, clinical advice, or direct client care.

Human oversight

All AI-generated or AI-assisted work is reviewed, edited, and approved by Elaine Alexander before it is shared or delivered.

Human oversight includes:

  • Critical review for accuracy, bias, and appropriateness
  • Alignment with professional ethics and scope of practice
  • Ensuring the tone, voice, and relational quality remain human and grounded
  • Contextual judgement that AI cannot provide

No AI output is used in its raw form.

Transparency

Transparency is maintained in the following ways:

Clients and participants are informed when AI tools are used in the creation of materials or processes.

Examples of AI tools that may be used include ChatGPT, Claude, Granola, Gamma, Canva, Scite.ai, Perplexity, and other reputable AI-assisted writing or research tools.

AI is used as a support tool. Final outputs reflect human judgement, lived experience, and professional expertise.

If a client has questions about how AI is used in their specific engagement, clear answers will be provided.

Confidentiality and data protection

No identifying client information is entered into AI tools.

Sensitive, personal, or clinical information is never shared with AI systems.

All client work follows applicable privacy standards and ethical guidelines for counselling and professional practice in Ontario.

Where possible, AI tools are used in ways that minimize data retention and exposure.

Limitations of AI

AI tools can produce inaccuracies, incomplete information, or biased outputs.

AI tools do not understand nuance, context, or relational dynamics in the way a human practitioner does.

For this reason, AI is never relied on for decision-making, clinical interpretation, or ethical judgement.

Client choice

Clients are not required to engage with AI-supported processes.

If a client prefers that no AI tools be used in their project or materials, that preference will be respected.

Ongoing review

This policy is reviewed and updated regularly as AI tools, regulations, and ethical standards evolve.