Project File Details


Original Author (Copyright Owner):

NWONU, EUNICE IFEYINWA


  • Name: DEVELOPMENT AND VALIDATION OF A STRUCTURED CLINICAL ASSESSMENT TOOL FOR ASSESSING STUDENT NURSES’ CLINICAL COMPETENCE
  • Type: PDF and MS Word (DOC)
  • Size: [780 KB]
  • Length: [270] Pages

 

ABSTRACT

Assessment of clinical performance contributes to academic qualifications that incorporate professional awards. The administrators of nursing schools face the problem of subjectivity in the practical examination of student nurses. This is evident in examination situations in which the examiner assigns any task of choice to the student and scores the student based on his/her perception of the student's competence in performing the task. By this, some students are exposed to more difficult tasks than others and to subjective scoring, all depending on the inclination of the examiner. In response to this problem, the study developed and validated a Structured Clinical Assessment Tool (SCAT) that makes it possible for all the students to be examined on the same tasks for any examination episode and judged on the same premise. An instrumentation research design was used. One hundred and thirty-seven student nurses from three Schools of Nursing in the South East Zone of Nigeria formed the sample for the study. Prior to developing the tool, a competency assessment framework was developed based on the nursing process model, with the five steps of the process serving as the core competencies and sub-skills identified for each of the core competencies. The appropriateness of the sub-skills was verified using 52 nurse educators. The care sub-skills were pooled to form the model for SCAT. The model consists of twelve activity stations, which are examination points where students perform specified nursing tasks and are scored against a predetermined standard. Initially, 48 items (four per station) and their scoring guide were generated, and four experienced nurse educators/managers were used to verify their appropriateness. Thirty-six items survived the validation exercise using the average congruency percentage. Data collected were analysed using the alpha coefficient, the t-test and analysis of variance. The results of the analysis confirmed the validity of the 36 items and showed that the items were able to discriminate between the high and low achievers. The high reliability indices (0.84-0.99) for most of the procedure station items and moderate reliability indices (0.69-0.78) for the others confirm that the instrument has good inter-scorer consistency and is therefore reliable. Based on these findings, the SCAT has the potential to reduce the subjectivity inherent in clinical assessments that are based on observation and is therefore recommended for assessing the clinical competence of student nurses.

TABLE OF CONTENTS

TITLE PAGE ii
APPROVAL iii
CERTIFICATION iv
DEDICATION v
ACKNOWLEDGEMENT vi
ABSTRACT vii
TABLE OF CONTENTS viii
LIST OF TABLES xi

CHAPTER ONE: INTRODUCTION
Background to the study 1
Statement of the problem 8
Purpose of the study 11
Significance of the study 12
Scope of the study 14
Research questions 15
Research hypothesis 16

CHAPTER TWO: REVIEW OF RELATED LITERATURE
Introduction 17
Conceptual framework 18
– Definition of Nursing 18
– Clinical competence in nursing 24
– Nursing process 24
– Competency outcome performance assessment 32
– Competency assessment framework 34
Theoretical framework 36
Organizational theories 36
– Max Weber theory of bureaucracy 36
– Getzel and Guba theory of organizational behaviour 38
Developing criterion-referenced measures 41
– Determining conceptual framework 43
– Explicating objectives or domain definition 44
– Preparing of test specifications 46
Validating clinical competency assessment tool 50
Empirical studies on instrumentation 63
Summary of reviewed literature 74

CHAPTER THREE: RESEARCH METHODOLOGY
Introduction
Research Design 77
Area of the Study 78
Population of the Study 78
Sample and Sampling Procedure 80
Instrument for Data Collection 81
Development of SCAT 84
Validity of the Instrument 100
Trial testing of the Instrument 101
Reliability testing of the Instrument 103
Method for Data Collection 104
Method of Analysis of Data 106

CHAPTER FOUR: PRESENTATION AND ANALYSIS OF DATA
Presentation of Data 109
Research Question 1 109
Research Question 2 118
Research Question 3 121
Research Question 4 125
Hypothesis 1 127
Hypothesis 2 128
Summary of findings 132

CHAPTER FIVE: DISCUSSION OF RESULTS, CONCLUSION AND RECOMMENDATION
Introduction 134
Appropriateness of Tasks and the Activities for Assessing Competence 134
Validity of SCAT 135
Reliability of SCAT 136
Hypotheses testing 136
Conclusion 138
Educational Implication of the Study 138
Recommendations 140
Limitations of the Study 140
Summary 141

REFERENCES 145
APPENDICES 150

CHAPTER ONE

INTRODUCTION
Background to the Study
Effective administration requires rational decision making that leads to the selection of the best way to reach the anticipated goal. The educational administrator, in trying to achieve the ultimate goal of improving learning and learning opportunities so as to ensure competent products, is faced with the responsibility of making decisions on such issues as selecting an appropriate curriculum, appropriate teaching methods, and appropriate methods for assessing the student's progress. If appropriate decisions are made on these issues, appropriate educational policies will be made and the goals of education will be met. However, if inappropriate decisions are made, particularly on methods of assessing students, society will be exposed to the danger of incompetent practice. This is so because learners who have not acquired the necessary knowledge and skills for competent practice may be certified as qualified to practice and yet may not give quality and safe care.
Generally, the school curriculum is organized to expose students to subjects that provide opportunities for them to acquire the knowledge and skills that should help them practice. Sometimes, students who have passed written examinations and been certified fit to practice fail to do so. Considering the legal and financial implications of employee performance and safe practice in a rapidly changing environment, a major concern of the educational administrator of an institution should be to produce manpower that is competent. It is therefore important, in assessing students for certification to practice, in this case in a health care institution, to generate appropriate data that will help in making decisions on whether they are able to perform the tasks that the knowledge they have acquired should help them to accomplish. This can be done if an appropriate assessment tool is in place.
Stressing the importance of assessing what nursing care providers can do, not merely what they know, Del Bueno (1990) cited situations in which people who had performed excellently in examinations had difficulty performing a procedure or recognizing warning signs in patients experiencing difficulty. This kind of situation is unacceptable and informed the reforms in nursing education which led to calls for assessment of clinical performance to contribute to academic qualifications that incorporate professional awards. In response to this call, training institutions have developed clinical assessment tools. However, Redfern, Norman, Calman, Watson & Murrels (2002) expressed some concern about the psychometric quality of the tools that are available and the ability of the tools to distinguish between different levels of practice. They analyzed some tools for assessing competence to practice in nursing, while Norman, Watson, Murrels, Calman and Redfern (2002) tested selected nursing and midwifery competence assessment tools for reliability and validity. Both teams of researchers concluded that a multi-method approach, which enhances validity and ensures comprehensive assessment, is needed for clinical competence assessment in nursing and midwifery.
In order to ensure such a tool, Lenburg (2006) created a constellation of ten basic concepts and suggested that they should be adapted for developing and implementing objective performance examination. They include:

• Concept of examination
• Dimensions of practice
• Required critical elements
• Objectivity of the assessment process
• Sampling critical skills for the testing period
• Level of acceptability
• Comparability in extent, difficulty and requirements
• Consistency in implementation
• Flexibility in actual clinical environment
• Systematized conditions.

These concepts are very useful to the development of accurate
assessment instruments. Thus far, in the nursing context in Nigeria, such a tool does not exist. The administrators of nursing schools are facing the problem of subjectivity in the practical examination of student nurses. This is evident in situations where students are given different tasks to perform during clinical examinations and are awarded grades based on the tasks they perform. By this, some students are exposed to more difficult tasks than others, all depending on the inclination of the examiner, and yet they are judged on the same maximum score. This is unfair. It is therefore necessary to develop an assessment tool that will examine all the students on the same tasks for a particular examination episode.
In order to accomplish this, consideration should be given to the concepts proposed by Lenburg (2006) which were mentioned earlier. To achieve objectivity in an assessment process, two components must be considered. First, the content (skills and critical elements) for the particular assessment should be specified in writing; second, there should be consensual agreement among everyone directly involved in any aspect of the examination process. When individual examiners begin to digress from the established standards and protocols, objectivity erodes back into subjectivity and inconsistency. This regression destroys both the process and the purpose.
To prevent this from occurring, the educational administrator
should ensure that the content of the examination is specified by the list
of the dimensions of practice, that is, the skills and competencies and
their required critical elements that determine the extent and conditions of
competence.
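Purely as an illustration of what such a written specification might look like (the station name, skill, critical elements and marks below are hypothetical, not drawn from the study), the content of one activity station could be recorded along these lines:

```python
# Hypothetical specification for one activity station (illustrative sketch only).
# Each critical element is a concrete, observable behaviour the examiner must see.
station_spec = {
    "station": "Vital signs",                      # assumed station name
    "skill": "Measuring blood pressure",           # dimension of practice being examined
    "critical_elements": [                         # predetermined conditions of competence
        "Explains the procedure to the patient",
        "Positions the arm at heart level",
        "Applies a cuff of the correct size",
        "Records the reading accurately",
    ],
    "marks_per_element": 1,                        # assumed scoring weight
}
```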
The use of a conceptual framework to systematically guide the
assessment process increases the likelihood that concepts and variables
universally salient to nursing and health care practice will be identified
and explicated (Waltz, Strickland & Lenz, 2005).
Concepts of interest to nurses and other health professionals are usually difficult to operationalise, that is, to render measurable. This is partly because nurses and other health professionals deal with a multiplicity of complex variables in diverse settings, employing a myriad of roles as they collaborate with a variety of others to attain their own and others' goals. Hence, the dilemma that they are apt to encounter in measuring concepts is twofold: first, the significant variables to be measured must somehow be isolated, and second, very ambiguous and abstract notions must be reduced to a set of concrete behavioural indicators. It is therefore the responsibility of the educational administrator, who knows the goals that are intended and who selected the content that should help in the achievement of those goals, to select the variables that must be measured and to reduce them to concrete behavioural indicators of competence. These should be incorporated into a protocol that will guide the assessor.
Protocols ensure that each test episode for a given group is comparable in extent, difficulty and requirements. Protocols also ensure that the process is implemented consistently, regardless of who administers the examination or when it is conducted. When performance examinations are administered in the actual clinical environment, not in simulation, the concept of flexibility is essential because each client is different. The responsible educational administrator, who prepares students for professional practice, is therefore challenged to develop appropriate competency-based assessment tools for use in the assessment of students' clinical competence.
A competency-based assessment tool focuses on measuring the actual performance of what a person can do rather than what the person knows. It is based on criterion-referenced assessment methods in which the learner's performance is assessed against a set of criteria provided so that both the learner and the assessor are clear on what performance is required. The competency-based assessment technique addresses the psychomotor, cognitive and affective domains of learning, and its goal is to assess performance for the effective application of knowledge and skill in the practice setting. The competencies can be generic to clinical practice in any setting, specific to a clinical specialty, basic or advanced (Benner, 1982; Gurvis & Grey, 1995).
Criterion-referenced measures are particularly useful in the clinical area when the concern is the measurement of process and outcome variables, as applies in nursing. A criterion-referenced measure of process, according to Waltz, Strickland & Lenz (2005), requires that one identify standards for the client care intervention and compare the subjects' clinical performance with the standard of performance, which is the predetermined target behaviour. When all these are taken into consideration in developing a clinical assessment tool, the tool is bound to be authentic.
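As a minimal sketch of that comparison (the element names and the pass criterion below are assumptions for illustration, not the study's actual standard), a criterion-referenced score for one station could be computed as follows:

```python
from typing import Dict, List

def score_station(critical_elements: List[str],
                  observed: Dict[str, bool],
                  pass_fraction: float = 0.7) -> dict:
    """Compare observed performance with the predetermined standard:
    one point for every critical element the assessor saw performed."""
    points = sum(1 for element in critical_elements if observed.get(element, False))
    total = len(critical_elements)
    return {
        "score": points,
        "max_score": total,
        "competent": points / total >= pass_fraction,  # assumed criterion level
    }

# Usage: the assessor ticks each predetermined element as performed or not.
elements = ["Washes hands", "Explains the procedure", "Documents the care given"]
result = score_station(elements, {"Washes hands": True, "Explains the procedure": True})
print(result)  # {'score': 2, 'max_score': 3, 'competent': False}
```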
Statement of Problem
In Nigeria, assessment of clinical performance contributes to the academic qualification for a professional award. The Nursing and Midwifery Council of Nigeria (NMCN) has adopted the Objective Structured Clinical Examination (OSCE) for midwifery but has not done the same for the general nursing examination. The tool that is currently in use for clinical assessment in the general nursing examination leaves a lot to be desired. It lacks the comparability and consistency that are required to make an assessment tool objective and fair, hence the need for a structured clinical assessment tool. Some of the pitfalls of the tool include:
• The tool allows the assessor to select the procedure to be performed by the candidate, and this selection varies from one candidate to another. The implication is that the candidates do not all perform the same tasks, the tasks they perform are not comparable, and since task difficulty is not the same for all tasks, the candidates are neither examined nor judged on the same premise. This is unfair.
• Another problem closely linked with not specifying tasks that all candidates must perform is that the mark allotted to the item "procedure" is the same for all procedures, whether simple or complex. Since some candidates are assigned simpler tasks than others and are judged on the same optimal score for less work, the tool is unfair. Again, because the activities expected to be carried out for each procedure are not specified, the scoring of the candidates' performance is based on what the scorer thinks is right, and this may vary from one scorer to another. The implication is that, most times, the scoring is subjective.
• Sometimes, the length of time required to accomplish the task the assessor has assigned to a candidate may not allow the assessor the opportunity to assess the candidate on all the areas listed on the clinical performance assessment guide. Since all the items sum up to give the maximum score, this creates the difficulty of deciding how to score those items, particularly as it was not the fault of the candidate that he was not examined in those areas by the particular assessor.
• Again, some of the criteria on which the candidates are judged are not stated in specific terms. For example, such statements as "handles patients gently and skillfully" and "adapts the environment for the patient's comfort" are not specific enough about what the candidate is expected to do and therefore leave room for the assessor's subjective conclusions. The implication of all these is that some of the results of assessments using this kind of tool are not valid and may have a negative impact on the candidate who failed when he/she should actually have passed, and on the consumers of nursing care where a candidate who had not acquired the necessary skills for competent and safe practice passed when he/she should have failed.
In view of this problem, there is the need to develop a clinical assessment tool that is objective and fair. This is the intent of this study.
Purpose of the Study
The main purpose of the study is to develop and validate a Structured Clinical Assessment Tool which will provide the opportunity for all the students to be examined on the same tasks for a particular examination period and to be scored on predetermined performance criteria. This will ensure a fair, objective and valid assessment of student nurses' clinical performance.
Specifically, the objectives are to:
1. develop appropriate tasks for assessing student nurses' clinical competence;
2. develop appropriate activities for determining competency in the tasks;
3. determine the content validity of the Structured Clinical Assessment Tool (SCAT) that was developed;
4. determine the construct validity of the Structured Clinical Assessment Tool (SCAT);
5. determine the inter-rater reliability of the SCAT.
Significance of the Study
The study will result in the availability of an instrument for a more
comprehensive and objective clinical assessment of student nurses.
Because the instrument will cover the core practice competency areas in
nursing, it will be useful in determining whether or not student nurses
have acquired the complex repertoire of knowledge, skills and attitudes
required for competent practice before they enter the profession. The
instrument will be useful to nurse educators and clinical
supervisors/managers of health care institutions who are preparing
students for practice because it will show them the core elements of
competence in nursing and thus help them to guide the students
appropriately to acquire the skills necessary to become competent and
safe. It will also be useful to the students because they will know from the
start what is expected of them, and being focused, they will work toward
success.
The instrument will eliminate the problem of leaving the candidates to the whims and caprices of their assessors, which results in some candidates carrying out more complex tasks than others and yet being judged on an equal score. Instead, the candidates will perform the same, specified tasks. This way, the candidates will be examined on the same premise and any judgment made on the results generated by the instrument will be worthy and valid.
Again, because the instrument has broken down the elements of competence into performance criteria against which performance can be judged acceptable, scoring of students' performance during assessment will be easier and devoid of subjectivity, and the results will therefore be more authentic. The tool will serve as an impetus for the Nursing and Midwifery Council of Nigeria (NMCN) to revise the tool that is currently in use for the final qualifying examination to make it more objective and fair. If this is done, only those who have acquired the necessary knowledge and skill will be certified competent and licensed to practice, and the consumers of nursing care will be sure to receive quality and safe care. The tool will also be a reference for other researchers who may want to develop tools that address procedures not accommodated in the present study.
The Scope of the Study
The study is delimited to developing a structured clinical assessment tool, developing a scoring scheme for the tool, establishing the content and construct validity of the tool, and determining the internal consistency reliability of the tool. Only the average congruency percentage for determining content validity, the means and standard deviations of contrasted groups for determining construct validity, and the internal consistency reliability using the index of inter-rater agreement were determined.
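For illustration only, the average congruency percentage could be computed from experts' congruent/not-congruent judgements along the lines sketched below; the ratings shown are invented, and the study's own data and cut-off are not reproduced here.

```python
from typing import List

def average_congruency_percentage(ratings: List[List[int]]) -> float:
    """ratings: one row per expert, one column per item;
    1 = item judged congruent with its objective, 0 = not congruent.
    Returns the mean of each expert's percentage of congruent ratings."""
    per_expert = [100.0 * sum(row) / len(row) for row in ratings]
    return sum(per_expert) / len(per_expert)

# Hypothetical judgements from three experts on four items.
ratings = [
    [1, 1, 1, 0],  # expert A: 75%
    [1, 1, 1, 1],  # expert B: 100%
    [1, 0, 1, 1],  # expert C: 75%
]
print(round(average_congruency_percentage(ratings), 1))  # 83.3
```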
The clinical events that were assessed were limited to procedures that could be completed within 5 minutes. This was to ensure that the students are assessed on a good variety of events within the one hour during which they are normally assessed in practical examinations. Exposing them to procedures that take longer would limit the number of events on which they could be assessed. The tool, however, presupposes that the students would have been assessed (using a structured assessment tool) on those procedures that take a longer time to accomplish prior to this final assessment.
Though the tool was developed for assessing the clinical competence of student nurses in Nigeria, the validation of the instrument was conducted in the South East zone of Nigeria using three randomly selected Schools of Nursing.
Research Questions
The study is guided by the following research questions:
1. How appropriate are the tasks of SCAT for assessing student
nurses’ clinical competence?
2. How appropriate are the activities for determining competence
in the selected items?
3. How valid is the content of SCAT?
4. What is the inter-rater reliability coefficient of SCAT?
Hypotheses
The following hypotheses were tested at an alpha level of 0.05:
Ho1: There is no significant difference in the mean scores on SCAT of high and low achievers.
Ho2: There is no significant difference in the scores of the students on any of the procedure stations of SCAT as determined by the three assessors.
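Purely as an illustration of how these hypotheses could be tested at the 0.05 level (the scores below are invented; the t-test for Ho1 and analysis of variance for Ho2 follow the methods named in the abstract, but the study's own analysis may differ in detail):

```python
from scipy import stats

alpha = 0.05

# Ho1: compare mean SCAT scores of high and low achievers (independent t-test).
high_achievers = [82, 79, 88, 91, 76, 85]   # hypothetical scores
low_achievers = [61, 58, 70, 64, 59, 66]
t_stat, p1 = stats.ttest_ind(high_achievers, low_achievers)
print("Ho1 rejected" if p1 < alpha else "Ho1 retained", round(p1, 4))

# Ho2: compare scores awarded to the same students by three assessors
# on one procedure station (one-way analysis of variance).
assessor_1 = [14, 15, 13, 16, 14]           # hypothetical station scores
assessor_2 = [15, 14, 13, 15, 14]
assessor_3 = [14, 16, 13, 15, 15]
f_stat, p2 = stats.f_oneway(assessor_1, assessor_2, assessor_3)
print("Ho2 rejected" if p2 < alpha else "Ho2 retained", round(p2, 4))
```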
