The BC College of Oral Health Professionals (BCCOHP) has released a proposed Quality Assurance Program (QAP) framework intended to align all oral health professionals under a single program, replacing the legacy QAPs of the former regulatory colleges. This alignment is a requirement under the Health Professions and Occupations Act (HPOA), which was passed by the provincial government in late 2022 and will come into force on April 1, 2026.
Although the proposed framework eliminates the former QAP exam that many dental hygienists opposed, BCDHA is concerned that the newly proposed QAP framework introduces a program that may be even more complex and burdensome for dental hygienists to navigate.
BCDHA has shared our concerns; however, BCCOHP is most interested in receiving feedback from individual registrants and the public. Each of your voices matters in shaping how quality assurance will be implemented for our profession in the years ahead. BCDHA cannot do this work alone, but together with feedback and support from individual members across BC, the voice of dental hygienists can help ensure that any Quality Assurance Program is practical, meaningful, and equitable.
We strongly encourage all BCDHA members to review the proposed framework and respond to the consultation survey that was sent to each of you before the deadline on August 5, 2025. To access the proposed framework, visit: https://oralhealthbc.ca/get-involved-engage-with-us/qa-consult/.
If you prefer to send your feedback in letter form rather than, or in addition to, completing the survey, you may email your comments directly to feedback@pivotalresearch.ca.
We understand that many details of the proposed framework are still conceptual. However, we invite you to consider the questions and concerns that BCDHA has identified below when forming your response.
General Concerns Related to the BCCOHP QAP Framework
1. Clarity and Transparency
Registrants are being asked to offer feedback on the framework without essential details being provided. For feedback to be informed and valid, oral health professionals (OHPs) require additional information on many elements of the framework: for example, the purpose and use of aggregate data from the self-inventory survey, continuing professional development requirements, and how written quiz results will be used by the College. These are only a few of the issues that require clarification for feedback to be meaningful and complete.
2. Use of Patient Experience Surveys
It is unclear what purpose patient surveys will serve in the QAP process. It is the responsibility of each registrant to understand and practice within professional standards, not the responsibility of the recipient of those services to know what those standards are. Feedback from uninformed patients may misrepresent an OHP's actual performance because of personal bias or other factors unrelated to the provider's competency. Feedback from a patient who received competent oral health care but is unhappy with the cost or another personal issue has the potential to significantly and unfairly impact a registrant. There is already a discipline process in place that allows patients to express concerns about the competency of their OHP. More clarity is needed on how patient feedback will be obtained, how it will be used, what influence it will have on a registrant's QAP assessment and/or plan for improvement, and what safeguards are in place to ensure patient feedback does not misrepresent the OHP's competence or become punitive.
3. Audit Process
It is unclear how an OHP might be identified as "higher risk" and what role the QA Assessor would play in ensuring compliance. Also, under a lottery system for random audits, it is conceivable that certain registrants could be selected for audit on multiple occasions while others are never audited. How does the framework address equity and fairness for all OHPs?
4. Implementation and Future Revisions
It is unclear how the new QAP would be implemented, or how the timeline for the initial rollout and for subsequent changes as the program evolves will ensure consistency and fairness. For example, how would implementation of the new framework affect registrants who are currently in the middle of their legacy QAP cycle? Does BCCOHP foresee a phased approach to implementation based on current QAP cycles, or does it anticipate all OHPs starting a new cycle at the same time?
Questions and Concerns about the Framework
Component A: Self-Assessment
- If the self-inventory is designed to produce a profession-wide analysis, what influence will the aggregate data have on individual assessments?
- What supports or tools will be recommended based on the aggregate data? Who will develop these tools?
- How does the self-assessment guide the OHP’s continuing professional development if there are no minimum requirements for CE?
- How is individual data from the self-assessment used?
- What happens if an OHP identifies that they are completely competent in all standards and do not require professional development?
- How will changes to scope of practice be addressed and supported in the self-inventory?
- Will SMART goals be automatically generated once the standards self-assessment is submitted, or will registrants need to create their own?
- How will SMART goals be assessed for quality?
- Who decides if the type and level of CE satisfies a learning goal?
Our concerns: How data from the annual self-inventory will be collected, stored, and used remains unclear. While the use of aggregate data for profession-wide analysis is understandable, it is not evident whether individual responses could influence a registrant's QAP record or trigger follow-up.
The framework states that the self-assessment may be used by BCCOHP staff and QA Assessors to follow up as needed. This suggests that the self-assessment could be used as a performance evaluation rather than a reflective tool. If registrants believe their individual data may be used for assessment purposes rather than confidential self-reflection, they may be less likely to respond honestly.
If trends are identified at the aggregate level, more information is needed about what kinds of supports or tools will be offered in response, and whether those supports will be optional or required.
The lack of minimum CE requirements could diminish the perception of competency and fail to address the professional development needs of certain OHPs. Does this not put the public at greater risk of receiving care from an OHP who fails to recognize the need for continuing professional development?
By not quantifying continuing professional development hours, there is potential for OHPs in BC to have difficulty proving they have met continuing competency requirements in other jurisdictions.
Who determines whether an individual's SMART goal meets quality standards? If a SMART goal is deemed inappropriate, does this constitute "non-compliance" and trigger an audit (i.e., the OHP becomes "at risk")? There is too much ambiguity in how OHPs are expected to demonstrate their learning.
The patient experience survey component of the framework lacks clarity. Without clear information on how patients (or professionals, for non-clinical registrants) are selected to complete the survey, there is concern about fairness or bias in the feedback collected. If registrants are not provided with individual or meaningful summaries of the feedback, it limits their ability to reflect and grow professionally.
We also question how this feedback will be used, whether it is strictly for self-improvement or if it could influence a registrant’s assessment or compliance status. Most significantly, registrants may be concerned about whether negative survey results could trigger a formal follow-up or investigation, and what is in place to ensure that the feedback is accurate, constructive, and representative.
Component B: Education & Knowledge Application
- If Registrants are able to attempt module quizzes an unlimited number of times, how does this fit in with “instant feedback that may impact Component C”? Does this mean that if an OHP performs poorly on their first attempt at the quiz, they may be subject to follow-up assessment?
- How many modules will be available, and will registrants have the option to choose which ones they complete or will the modules be pre-selected based on results from Component A?
- Are modules standardized or tailored by professional designation?
- Will there be a cost to access the modules?
- Will dental hygienists still be required to complete the current jurisprudence and LA modules within a standard timeframe, in addition to the modules identified from Component A?
Our concerns: Without clarity on how many modules will be available and whether they are pre-selected based on Component A (aggregate results or individual assessment) or self-selected by the registrant, there is concern over the lack of autonomy and personal control over individual professional development needs. Conversely, if there are no required modules specific to the scope of practice in BC, dental hygienists coming from other jurisdictions may not be fully versed in BC-specific standards, which could have a negative impact on public safety.
The proposed pace of up to three modules per year represents a significant increase in workload compared to the current five-year cycle for dental hygienists. This equates to as many as 15 assessments versus the current single assessment within the same time frame. This is significant not only in terms of time spent on assessment, but also if there is a cost associated with the assessment modules.
Component C: Follow-Up
- How will bylaw changes affect the QAP framework?
- What is the definition of “higher risk” as outlined in Component C?
- Are the follow-up/audit criteria the same for Registrants who are randomly chosen for audit vs. those who are flagged as “higher risk”?
- What aspects of the program are considered adaptable and what would trigger a review or update?
- Are there additional costs associated with QA Assessor engagement?
- Could unsatisfactory engagement with a QA Assessor trigger a formal investigation?
- How are patients (or professionals, for non-clinical registrants) selected for the Patient Experience Survey?
- Will registrants receive individual or summarized feedback from the Patient Surveys?
- How will this feedback be used in assessments or professional development?
- Can a patient experience survey trigger an investigation?
Our concerns: A truly random audit system could result in some registrants being selected multiple times while others are never audited, raising questions about fairness and equity.
The lack of a clear definition of what constitutes "higher risk" creates uncertainty for registrants about what actions could trigger an audit or additional scrutiny. It is important that the criteria for audits, both random and risk-based, are clearly defined, applied consistently, and communicated openly to all registrants.
The evolving nature and adaptability of the program creates a risk of inconsistent experiences and QAP requirements for registrants. It is concerning that some registrants may be held to different standards or expectations than others, depending on when or how they engage in the program. This is particularly true if a phased implementation of the new framework is not considered.
The lack of clarity on what determines successful compliance for OHPs engaged in Component C is concerning. Clear guidelines and criteria need to be established.
We hope this information will help you provide feedback on the issues that are important to you. If you have identified other concerns that are not included in this list, please share them with us. We can also answer your questions and help you navigate the consultation – just reach out to us at practicesupport@bcdha.com.

