
Establishing Quality Indicators for Medical Imaging and the Basic Quality Management Toolbox

Andrea Stevens, MS, RT(R)(QM)

     *Quality Assurance Coordinator, Department of Radiology, Medical Center of Central Georgia, Macon, Georgia.
    Address correspondence to: Andrea T. Stevens, MS, RT(R)(QM), 752 Forest Lake Drive South, Macon, GA 31210. E-mail: astevens@earthlink.net.


Quality management is a term used to encompass quality control, quality assurance, and quality improvement/performance improvement. This article discusses the concepts of quality, the meaning of quality, how medical imaging indicators are selected and established, and ways to easily handle them once they are selected. In addition, an extensive discussion of the tools found in the quality management toolbox is included.


The article is divided into 2 main parts. The first part discusses the concepts of quality, defining quality, selecting and establishing quality indicators for medical imaging, the 3 M's of quality indicators, methods of grouping indicators into specific categories, and the use of dashboards and scorecards.

The second part of the article discusses the basic tools found in the basic quality management toolbox. These tools include types of data, data collection methods, populations, samples, sampling techniques, and basic statistical tools.

Selecting and Establishing Indicators

In order to select, develop, and measure quality indicators, we need to determine, "What is quality?" Quality is a subjective term for which each person has his or her own definition.1 Even the American Society for Quality (ASQ), the Institute of Medicine (IOM), and the National Association for Healthcare Quality (NAHQ) cannot agree on a specific definition for quality.1-3 The ASQ defines quality as follows: "In technical usage, quality can have two meanings: (1) the characteristics of a product or service that bear on its ability to satisfy stated or implied needs; and (2) a product or service free of deficiencies."1,4 The IOM defines quality in terms of healthcare: "Healthcare quality is the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge."3 Interestingly, NAHQ resources do not attempt to define quality because of its subjective nature (Personal communication with NAHQ). The ASQ and IOM definitions are acceptable and have a place in healthcare because quality is so subjective and based in perception. Each imaging department should establish a definition that is based on its mission, vision, and values statement. This type of definition provides a framework to start with when tracking, selecting, and establishing medical imaging indicators.

Quality is a continuum5,6 that has a beginning but no end. It is an entity that will always be changing. In essence, once a decision has been made to do something with quality it will become perpetual, and it would then be called quality improvement. Quality improvement does not require a cast of thousands, but it does require a core of dedicated professionals seeking a common goal—the goal being what the department has defined as quality.

When discussing quality, it is important to understand that there are 3 separate and distinct levels of quality,5,6 and each level plays an important role in the overall scheme of quality. Quality control can be considered the first level of quality because it sets the baseline. The second level is quality assurance, which involves raising the baseline to a higher standard. The third and final level of quality is quality improvement. This is a proactive measure that involves analyzing, developing, and implementing ongoing improvement measures throughout the medical imaging arena. Figure 1 illustrates where ongoing quality improvement should be occurring. It is not limited to a specific area. Beginning quality managers, practicing quality managers, and senior management should be aware that quality improvement is interdisciplinary.

In order to understand the process by which indicators are selected and established, it is necessary to define quality and consider the levels of quality that exist in medical imaging. Once this is accomplished, one can begin seriously selecting and establishing medical imaging indicators. When selecting indicators, it is important to understand the relationship between indicators and outcomes. Indicators are used to measure over time the performance of functions, processes, and outcomes of an organization.7,8 The term performance measure has now become synonymous with indicator.7 It is important for a beginning or practicing quality manager to remember that an indicator or performance measure is a valid and reliable quantitative process or outcome measure related to 1 or more dimensions of performance. Whenever an indicator is selected for measurement, there will be an outcome. Outcomes are the result of the performance or nonperformance of a function or process.6,7 An outcome may be either positive or negative. A negative outcome is not necessarily bad because it provides a basis for starting the performance improvement process.

A basic question that arises is, how are indicators established or selected? There are 3 basic ways that indicators are selected: a priori, reactive, and proactive.9 A priori is a decision that is characterized by or derived by reasoning from self-evident propositions. Basically it is a decision that is made independent of experience. Another way of selecting or establishing indicators is by reacting to an unexpected event. The reactive method really is not a good way to select indicators because there is not a lot of thought put into which indicator would be appropriate to the particular situation. The most logical and appropriate way to select an indicator or several indicators is the proactive way. The proactive selection of indicators requires a great deal of thought and time because the indicator or indicators need to be representative of the imaging services department. Regardless of the method used to select indicators, it is important that they be meaningful, manageable, and measurable. This process is known as the 3 M's of indicator selection. Each of the 3 M's is important, but it is critical for any indicator selected for monitoring to be meaningful and manageable.

Performance/quality improvement can be enhanced when indicators are selected to focus on objective data, which in turn will heighten the understanding of the variation that exists in a process.9 Indicators also provide a common frame of reference, which provides a more accurate basis for prediction.

When selecting an indicator or indicators, there are 10 basic questions that need to be asked before the actual tracking or monitoring process can commence.9 Failure to completely answer any of the questions can lead to long-term problems. These 10 questions are outlined in the following sections.

Question 1: How Many Indicators Are You Selecting?
There are no clear-cut parameters on the number of indicators that need to be tracked. Research conducted by Ondategui-Parra et al found that academic radiology departments track anywhere from 0 to 21 indicators.10 Whatever the number, the indicators selected need to be meaningful, measurable, and manageable. It is better to start with 4 or 5 indicators and progress to more over time. As a cautionary note, there should probably never be more than 20 indicators because at some point data overload will occur, and the indicators become less meaningful and manageable.

Question 2: What Are You Trying to Do with the Selected Indicators?
It is very important for the leadership team to communicate what they are trying to achieve by tracking certain indicators to the appropriate staff. Clearly communicating the intent and expectations to everyone involved will reduce the possibility of misunderstanding and increase the possibility of obtaining accurate data.

Question 3: How Often and for How Long Will You Collect Data?
This is one of the most important questions that needs to be asked before data collection takes place. Data can be collected daily, weekly, biweekly, monthly, or quarterly. The frequency will be determined by the type of indicator that is selected and what is being monitored.

Once the frequency of data collection is determined, it is critical to define how long the particular indicator will be monitored. There are no guidelines that determine this, but typically, indicators should not be tracked for more than 2 years. An indicator that is monitored for more than 2 years loses its impact value, which makes it less meaningful and manageable. Typically, an institution will have all the information it needs to make a decision regarding a particular process within 2 years.

Question 4: What Are the Current Baselines or Benchmarks for the Selected Indicators?
Baselines or benchmarks are measures that provide a gauge by which data results from selected indicators can be compared. If such measures are readily available, then they should be used.

Question 5: Do You Have Targets and Goals?
If baselines or benchmarks are not readily available, the management team will need to establish targets and goals for the selected indicators. It is very important to remember that once a target or goal has been set, it can be either raised or lowered. It would not be a bad idea to initially set a target or goal a little high to compensate for unexpected data results.

Question 6: What Impact Will the Selected Indicators Have on Your Facility?
When the management team selects a group of indicators for monitoring, it is important to weigh the impact that they will have on the staff. First, failure to inform the staff and get their input has the potential to create resentment and resistance to the collection of data. This in turn may result in false assumptions being made by management. Also, the tracking of selected indicators may yield data that have a negative outcome, which may contradict preconceived notions.

Question 7: Are You Prepared to Accept the Results (Positive or Negative) of Data Analysis?
Once the data from the selected indicators have been collected and analyzed, the results may be positive or negative. Positive results from an indicator or a group of indicators would please the majority of managers because it is an indication that things are going well within the department. If the results are positive it does not mean that the monitoring of a certain indicator or a group of indicators should necessarily be stopped. The positive results can be used as a baseline for future monitoring endeavors. However, if the results are negative then managers would not be pleased. In reality, negative results can be an indirect positive because there is an indication that something is wrong in the process that the indicator is monitoring. With this knowledge, the management team can take a proactive approach to finding a solution and fixing the process.

Question 8: Have You Adequately Defined All Terms Associated with the Indicator?
It is critical that all the terms associated with a particular indicator or set of indicators be clearly defined to ensure there is no confusion about the meaning of the terms. Failure to adequately define all the terms can result in the erroneous interpretation of data. As an example, consider wait time—what is it exactly? In a literal sense, it is the time a customer waits for a service. However, from a medical imaging point of view, wait time would include the time spent waiting for registration, waiting to be called back for the procedure, the time spent in the dressing room, the time for the actual procedure, the time waiting to be released, and quite possibly the time spent waiting for the physician to be notified of the results.

Question 9: How Accurate Are the Data or the Data Source?
This is a topic that will be discussed in more detail later in this article. Needless to say, if the data source (radiology information systems [RIS]/hospital information systems [HIS]) and the resulting data are not accurate for whatever reason, the management team may get the wrong impression of what is actually happening with a process.

Question 10: Have You Taken into Consideration the Overall Design of the Indicator Project and All the Factors That Can Influence and Affect the Monitoring Project?
The majority of indicators are selected by a senior manager or the senior management team, and this includes the actual design of the project and the mechanism by which data will be collected. Failure to properly design the project can lead to misinterpretation of the results, which may produce a false sense of well-being when the data indicate the exact opposite. Additionally, taking into account all factors and/or parameters affecting the overall design will have a tremendous impact on the outcome of the project. It is critical that checks and balances be incorporated early in the design and implementation of the indicator monitoring process. Checks and balances provide a means of correcting errors while the project is in its developmental stages. The concepts mentioned here will be discussed further in the data collection section.

Types of Indicators

There are potentially hundreds of indicators that could be monitored in a medical imaging department. The tracking of indicators plays a significant role in the operation of a medical imaging department. The use of quality indicators is one of many ways that medical imaging departments can successfully monitor a variety of processes,11 and with the current emphasis on patient safety and standard of patient care, it is critical. Table 1 shows a variety of indicators that may have been routinely monitored in a 20th century imaging department. Table 2 shows a range of indicators that might be tracked in a 21st century imaging department.11 Close examination of both tables reveals that many of the indicators found in Table 1 are also in Table 2. The major difference in Table 2 is the removal of indicators that would have tracked processes occurring in a film library or file room.

Methods of Grouping or Categorizing Indicators

As previously stated, medical imaging indicators are typically selected by a senior manager or the senior management team. Earlier it was stated that depending on the number of indicators selected, it is critical that they be meaningful, manageable, and measurable. If more than 6 indicators are selected for monitoring, then it would be in the best interest of senior management to consider grouping or categorizing them for manageability. Examples of categorization or grouping models can be found in a variety of sources.4,9,12 In Crossing the Quality Chasm: A New Health System for the 21st Century, the IOM looks at 6 aims for improvement, including safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity.3 In the article, Reading the Right Signals: How to Strategically Manage with Scorecards, Pieper gave an example of Silver Cross Hospital's 2005 strategic balanced scorecard.12 The scorecard developed by Silver Cross Hospital used quality, financial performance, operational effectiveness, and workplace excellence as a way for categorizing key performance indicators. In his book, Quality Health Care: A Guide to Developing and Using Indicators, Lloyd mentions that the Joint Commission on Accreditation of Healthcare Organizations identified 9 dimensions of clinical performance that could effectively be used to group or categorize indicators.9 The dimensions are appropriateness, availability, continuity, effectiveness, efficacy, efficiency, respect and caring, safety, and timeliness. The Silver Cross Hospital's model is easily adapted for use in medical imaging. Using the 4 categories established by Silver Cross Hospital, medical imaging indicators can be readily grouped as follows:


Quality

  • Image quality issues
    • Wrong patient
    • Wrong examination
    • Misidentified images
    • Image quality audit
  • Patient safety
    • Falls
    • Skin tears
  • Critical incidents
    • Sentinel events
    • Near misses
  • Complications
    • Intravenous extravasations
    • Pneumothorax rates
    • Post-procedure hematomas
    • Contrast media reactions
    • Adverse drug reactions
  • Picture archiving and communication systems-related issues
    • Images not sent
    • Network interface issues
  • Peer review
    • Radiologist
    • Staff
  • Report issues
    • Report accuracy
    • Dictated but not signed
  • Radiation safety issues
    • Patient exposure
    • Staff exposure
  • Satisfaction
    • Internal/external customers
  • Examination issues
    • Scheduling
    • Ordered but not performed
    • Completed but not interpreted

Financial Performance

  • Net operating margin
  • Net revenue
  • Number of procedures
  • No shows
  • Cost per examination
  • Salary expenses
  • Non-salary expenses
  • Cost analysis
  • Current Procedural Terminology coding accuracy
  • Medical necessity review

Operational Effectiveness

  • Productivity
    • Procedures/full-time equivalent (FTE)
    • Technical relative value units (RVU)/FTE
    • Professional RVUs/FTE
  • Timeliness of service
    • Wait time
    • Report turnaround time (TAT)
    • Report availability
    • Unsigned reports
  • Length of stay in the department
  • Equipment issues
    • Repairs
    • Preventive maintenances missed

Workplace Excellence

  • Turnover rate/vacancy rate
  • Employee injury
  • Productivity
    • Staffing
    • Workload

When categorizing medical imaging indicators it should be mentioned that indicators can fall into multiple categories, and there is nothing wrong with this.

Dashboards and Scorecards

Over the past decade there has been a lot of discussion in the literature about the use of dashboards and scorecards in healthcare.12-16 There is some confusion regarding dashboards and scorecards because many people use the terms interchangeably, which compounds the confusion. A dashboard is a graphic representation of the key performance indicators that an institution monitors. The scorecard is "a straightforward exhibit of certain measures of performance within the organization."11 It is a way of presenting selected key performance indicators in a non-graphic format. The scorecard shows the key performance indicator and its associated metrics.11 Kaplan and Norton stress that a single measure cannot provide a clear performance target or focus attention on the critical areas of business.9,13 An organization should monitor a set of indicators that represent the key strategic areas in the organization's business plan. Dashboards and scorecards are representative of "the data collected which should be tied to strategic objectives and represents a balanced set of measures that cut across the full range and scope of services being delivered."9
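As a concrete illustration of the non-graphic scorecard described above, each key performance indicator can be listed with its target and the latest measured value. The indicator names and numbers in this minimal Python sketch are invented for illustration, not taken from any real facility:

```python
# Minimal scorecard sketch: one row per key performance indicator,
# showing the target, the latest actual value, and a met/missed flag.
# All indicator names and values below are hypothetical.

def build_scorecard(rows):
    """Return one formatted scorecard line per (name, target, actual) row."""
    lines = []
    for name, target, actual in rows:
        status = "met" if actual >= target else "below target"
        lines.append(f"{name:<28}{target:>8}{actual:>8}  {status}")
    return lines

indicators = [
    ("Report TAT <= 24 h (%)", 90, 93),
    ("Patient satisfaction (%)", 85, 82),
    ("Exams completed/ordered (%)", 95, 96),
]

for line in build_scorecard(indicators):
    print(line)
```

A dashboard would present the same balanced set of measures graphically; the tabular form above is the scorecard view.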

The Basic Quality Management Toolbox

Once the quality indicators have been established it becomes necessary to collect the required data and measure the results. This can be achieved by using the tools found in the basic quality management toolbox. The quality management toolbox will include a variety of items, such as the computer, data collection methods, and statistical tools.

The Computer
The computer is a critical tool because it has the ability to store a tremendous amount of information that is readily available for the quality manager to access. Quality managers should be equipped from a hardware perspective with a minimum of 2 GB of RAM, a 100-GB hard drive, a CD burner, and network capability. Additionally, the computer should be equipped with a productivity package that consists of a word processing program, a spreadsheet program, and a database program. Optional software includes a comprehensive statistical package and a flowcharting package. The medical imaging quality manager's computer will be the most valuable tool found in the quality management toolbox.

Data Collection Methods
In quality management, data collection and data analysis are the essence of the job, and it is an ongoing task for all quality managers. But what are data? Hansen defines data "as collections of numbers gathered for the purpose of assessing some activity."17,18 Brown defines data as "uninterpreted clinical observations, facts, or material, usually collected as a result of assessment activities."8

Data can be collected or captured in a variety of ways, including benchmarking, manually, and electronically. Benchmarking is the continuous process of measuring products, services, and/or practices against the competition in order to find and implement best practices. One of the most common ways to collect data is manually; typically the data are obtained by observation, logs, and surveys. Collecting data manually is acceptable but very labor intensive, and because it relies on humans, a certain amount of error is introduced with this method of data collection. A more accurate method of collecting data is electronically, because the chance of human error is greatly reduced. Electronic sources of data include the HIS, RIS, and outside databases.

Data can be classified as either quantitative or categorical.7-9 Quantitative data are also referred to as measurement data. Measurement data yield a measurement or number for each observation or unit. This type of data can be subdivided, often indefinitely. Examples of measurement data are time, weight, costs, and temperature. Synonyms that are typically associated with measurement data are variable, analog, and continuous. Categorical data are also referred to as count data. Categorical data consist of counts, observations, or incidents falling into categories. Examples of categorical data are the number of errors, daily census, number of injuries, or the number of no shows. Synonyms that are associated with categorical data are attribute, digital, and discrete.

There are 2 measurement scales that are used with data collection. They are referred to as categorical and continuous.7-9,19,20 Categorical measurement can be either nominal or ordinal. The nominal scale includes count data, discrete data, or qualitative data. Examples of nominal data are the number of patients imaged during an 8-hour shift, the gender of the patient, and patient safety inservices. The ordinal scale is used when characteristics are put into categories that are ordered. Examples of ordinal data are rank in the department, level of education, and years of service. Continuous measurement data are measured on scales that theoretically have no gaps. They are often referred to as interval data because the distance between each point is equal. Examples of interval data are the numeric values on a thermometer and ratios. An important point about continuous measurement data is that the distance between each point is equal and, on a ratio scale, there is a true zero.

The purpose of data collection is to acquire information to affect change in a process or processes; it is not to make someone feel good about the job they are doing if they are not doing it well. Data collection can be hampered by faulty design and the failure to precisely define all the terms associated with a particular indicator. The following are 2 examples of how this may occur.

In the first example, there were physician complaints about the amount of time it took to fulfill their requests for film jackets and the failure to obtain all the requested jackets. Senior management made the decision to monitor the overall process for a period of approximately 90 days as a means to determine the validity of the complaints. The management team decided that data would be collected by observation for 1 hour, 2 or 3 days each week during peak times. Peak times were identified as being 9 AM to 11 AM. The individual assigned to collect the data sat in the area where physicians made their requests for films. Data were collected by observation using a wristwatch to capture the time interval from when the physicians made their requests to when the requests were fulfilled. Additionally, the file room clerk would write on a slip of paper the number of film jackets requested and the number of film jackets given to the physician. All the data were recorded and given to the quality manager for analysis. After analyzing the collected data, the quality manager reported that 45% of the physicians' requests were fulfilled, and the average time to fulfill a request was 10 minutes. The reactions to the results were very surprising, and they ranged from anger to denial. The most shocking response was the demand to change the results of the investigation. In utter disbelief, the quality manager asked why anyone would want the results altered. The file room supervisor said that the results made the file room personnel look bad. The basic premise for tracking was valid because there was a valid complaint. However, the design for collecting the data was flawed, and the accuracy of the collection was called into question because of how the data were captured: instead of data being collected 2 or 3 times a week, they were actually collected twice a month.
The actual collection of data could have been facilitated by using a preprinted form and a time stamp. Finally, the entire design could have been improved if the quality manager had been involved in the process from the start.

In the second example the emergency department (ED) wanted the hospital to purchase a computed tomography (CT) scanner for the ED and wanted to justify the capital expenditure by having imaging services track the TAT for CT scans originating from the ED. It was decided by a joint task force to focus on several aspects of TAT. The verbal order to arrival time was 1 component. Another component was the arrival time to release time. The final component was verbal order to release time. The CT personnel manually collected the data and entered them in a log, and the quality manager would collect the logs monthly and tabulate the data. The entire project covered a period of approximately 6 years. The basic idea of tracking CT TAT was sound but there were several things that were inherently wrong with the overall design and execution of the project. The TAT components were not adequately defined by the joint task force. The following TAT components should have been included: arrival to examination start time, examination start to examination complete time, and examination complete to patient release time. The project should have been terminated at the end of 2 years because sufficient data had been collected to answer the initial question. Two critical factors that had a tremendous impact on the project were how the data were collected and a decision made by the CT supervisor. Originally the quality manager was going to collect the data using the RIS but was told that the RIS had limited capabilities with respect to collecting this type of data. As a result of this limitation, data collection was done by hand, which introduced the human element and potential error because the times entered by the staff were subjective. Finally, at some point in the process, the CT supervisor arbitrarily made the decision to change verbal order time to order print time without checking with the joint task force. 
This changed the dynamics of the process and resulted in the data being skewed. Shortly after this was discovered, the project was terminated.
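The TAT components the task force should have defined can be sketched in a few lines of Python. The timestamps and interval names below are hypothetical, chosen only to illustrate the arrival-to-start, start-to-complete, and complete-to-release breakdown described above:

```python
# Hedged sketch: compute the three TAT intervals (plus the total) from
# four hypothetical timestamps. In practice these times would come from
# the RIS rather than hand-written logs, avoiding the subjectivity the
# example describes.
from datetime import datetime

def tat_components(arrival, start, complete, release, fmt="%H:%M"):
    """Return each turnaround-time interval in minutes."""
    a, s, c, r = (datetime.strptime(t, fmt) for t in (arrival, start, complete, release))
    minutes = lambda delta: delta.total_seconds() / 60
    return {
        "arrival_to_start": minutes(s - a),
        "start_to_complete": minutes(c - s),
        "complete_to_release": minutes(r - c),
        "total": minutes(r - a),
    }

# Invented ED CT visit: arrives 09:10, scan starts 09:25,
# completes 09:40, patient released 10:05.
print(tat_components("09:10", "09:25", "09:40", "10:05"))
```

Defining each interval explicitly, as this function forces you to do, is exactly the term-definition step (Question 8) that the joint task force skipped.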

The moral of these 2 examples is to adequately design the monitoring process and to adequately define all terms and parameters that will be used. Of paramount importance is to do a trial run or several trial runs to work out any bugs in the process.

Types of Data
Population versus sample
It is important to briefly discuss the concepts of population and sample because they are often confused. The population is the largest collection of entities that is being studied at a particular time.7,8,17-19 The population is always determined by the investigator. Examples of a population could be people or procedures. A sample is a random collection of entities taken from a population that is being studied at a particular time.7-9,17-20 Examples of samples would be every 10th entity or 500 out of 40 000 entities.

In medical imaging, populations could be defined as all patients or all procedures. If the population were all patients, then the sample could be either all inpatients or all outpatients. Another example of a medical imaging population is CT scans performed in a year and the sample could be the CT scans performed during a specified period of time.

When discussing samples, it is important to understand that there are 2 techniques that are used to obtain a sample. They are called probability sampling and non-probability sampling. Probability sampling requires that every element in the population has an equal chance of being selected for the sample.9,18 There are 4 types of probability sampling: simple random sampling, systematic sampling, stratified random sampling, and cluster sampling.9,20 Simple random sampling ensures that each individual has an equal chance of being selected. Systematic random sampling requires drawing every nth element of the population. Stratified random sampling requires that the population be divided into strata or subgroups; each member of a stratum has an equal chance of being drawn. Cluster random sampling requires that the population be divided into clusters in which sampling takes place in each group.
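Three of the four probability sampling techniques can be sketched with the Python standard library alone. The "population" of 40 accession numbers and the inpatient/outpatient strata are invented for illustration:

```python
# Probability sampling sketch using only the standard library.
# The population of 40 accession numbers is fabricated.
import random

random.seed(42)  # fixed seed so the sketch is reproducible
population = [f"ACC{i:04d}" for i in range(1, 41)]

# Simple random sampling: every element has an equal chance of selection.
simple = random.sample(population, 8)

# Systematic sampling: draw every nth element after a random start.
n = 5
start = random.randrange(n)
systematic = population[start::n]

# Stratified random sampling: divide the population into strata
# (hypothetically, inpatients and outpatients), then sample each stratum.
strata = {"inpatient": population[:20], "outpatient": population[20:]}
stratified = {name: random.sample(group, 4) for name, group in strata.items()}

print(len(simple), len(systematic), sum(len(v) for v in stratified.values()))
```

Cluster sampling would follow the same pattern: partition the population into clusters, then sample within each cluster.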

Non-probability sampling refers to those types of sampling in which the probability of a subject being selected is unknown.18 The major types of non-probability sampling are convenience sampling, judgment sampling, expert sampling, and quota sampling.9,18 Convenience sampling allows for the use of any available group of subjects. Judgment sampling is a subjective method that selects a particular group or groups based on certain criteria. Expert sampling is a type of purposive sampling, which involves selecting experts in a given area because of their access to information of relevance to the study. Quota sampling occurs when a decision is made by the investigator based on judgment about the best type of sample for the investigation.

It should be noted that probability sampling is the most commonly used form of sampling in medicine. One of the most frequently asked questions regarding sample size is how large the sample should be. This is a valid question that does not have a clear answer. The sample needs to be large enough to ensure that the results provide the desired data. Typically, a sample should not be less than 100 because anything less may lead to inaccurate assumptions regarding the data.

Another question that occasionally comes up deals with the idea of the population and sample being the same. An example of this would be when a satisfaction survey is conducted on all customers receiving a mammogram; in that case, the sample is the entire population.

Stratification is the separation and classification of data according to selected identifiers. It allows for the creation of strata or categories within the data.9 Also, it provides a method for the discovery of patterns that would not otherwise be observed if the data were all aggregated together. Data can be stratified by the day of the week, time of day, time of year, shift, type of order (stat vs routine), or type of procedure.9 Figure 2 illustrates non-stratified data and Figure 3 provides an example of data after stratification.
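The value of stratification described above can be shown with a small sketch. The wait-time log entries below are fabricated; stratifying them by day of week reveals a pattern that the aggregated mean hides:

```python
# Stratification sketch: the same fabricated wait-time data, first
# aggregated, then stratified by day of the week.
from collections import defaultdict
from statistics import mean

log = [  # (day of week, wait time in minutes) -- invented values
    ("Mon", 12), ("Mon", 18), ("Tue", 9),
    ("Tue", 11), ("Fri", 25), ("Fri", 31),
]

by_day = defaultdict(list)
for day, wait in log:
    by_day[day].append(wait)

# Aggregated, the overall mean looks unremarkable ...
print(round(mean(w for _, w in log), 1))   # 17.7
# ... stratified, Friday stands out as the problem day.
for day, waits in by_day.items():
    print(day, round(mean(waits), 1))      # Mon 15.0, Tue 10.0, Fri 28.0
```

The same grouping could be done by time of day, shift, order type (stat vs routine), or procedure, as the article notes.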

Statistical Tools
The quality manager has a vast arsenal of statistical tools available, including measures of central tendency, measures of dispersion (spread), analysis of variance, correlation and regression, the t-distribution, the t-test, and chi-square. It is beyond the scope of this article to discuss the majority of these statistics; the discussion will therefore focus on measures of central tendency and measures of dispersion, the statistical tools most commonly used in healthcare.

Any discussion of statistical tools requires an understanding of the 2 basic branches of statistics: descriptive and inferential.16 Descriptive statistics summarize the values of some variable (the mean, standard deviation [SD], or another statistic such as a proportion or ratio) for a population or for a sample. Inferential statistics draw conclusions about a population based on information obtained from a sample of that population.
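The distinction can be sketched in a few lines of Python. The population values below are simulated and purely hypothetical; the point is only that descriptive statistics summarize the sample we have, while inference uses that summary to estimate a population value we normally cannot measure.

```python
import random
from statistics import mean, stdev

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical population: exam times (minutes) for all 5000 exams in a year.
population = [random.gauss(50, 10) for _ in range(5000)]

# Descriptive statistics: summarize the sample actually collected.
sample = random.sample(population, 200)
sample_mean = mean(sample)
sample_sd = stdev(sample)

# Inferential statistics: use the sample to draw a conclusion about the
# whole population; here the sample mean estimates the population mean,
# which in practice would be unknown.
population_mean = mean(population)
```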

Measures of central tendency
A measure of central tendency is a number that describes or characterizes a distribution of numbers.11 The 3 measures commonly used to describe central tendency are the mean, the median, and the mode.

Mean. The mean is the average of a group of numbers. It is the most commonly used central tendency measure in healthcare. The mean is determined by the following formula:
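The formula referenced above appears as a figure in the original article; the standard arithmetic mean it describes is:

```latex
\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}
```

where the x_i are the individual values and n is the number of values in the group.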

Figure 4 is an example of how to determine a mean.

Median. The median is the midpoint of a group of numbers. It is the 50th percentile of a group of numbers. Figures 5A and 5B illustrate the concept of median.

Mode. The mode is the most frequently observed number in a set of numbers. On some occasions there may be more than 1 mode in a set of numbers. The mode is rarely used in healthcare. Figures 6A and 6B illustrate how the mode is determined.
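The three measures of central tendency are all available in Python's standard `statistics` module. The wait times below are hypothetical and chosen to show how each measure responds to an outlier.

```python
from statistics import mean, median, mode, multimode

# Hypothetical set of patient wait times, in minutes (95 is an outlier).
waits = [12, 15, 15, 18, 20, 22, 95]

avg = mean(waits)     # the average; pulled upward by the outlier
mid = median(waits)   # the midpoint (50th percentile); resistant to the outlier
most = mode(waits)    # the most frequently observed value

# A data set can have more than one mode:
modes = multimode([5, 5, 7, 7, 9])  # [5, 7]
```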

Measures of dispersion
A measure of dispersion (spread) is a number that describes the variability of the data around the mean.11 The 3 measures commonly used to describe dispersion, spread, or variability are the range, the percentile, and the SD.

Range. The range is the difference between the lowest and highest number in a group of numbers. Figure 7 illustrates the concept of range.

Percentile. A percentile is the value at or below which a given percentage of the observations fall. The most commonly used percentiles are the 25th, 50th, and 75th.

SD. The SD describes the variation of the values around the mean. After the mean, it is the most commonly used statistic in healthcare. The SD is calculated using the following formula:
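The formula referenced above appears as a figure in the original article. A common form, the sample standard deviation, is:

```latex
s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}}
```

where x̄ is the mean and n is the number of values; some texts divide by n rather than n − 1 when the data represent the entire population.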

Figure 8 provides an example of how to calculate SD.12

When discussing SD, there are 3 basic facts with which the quality manager should be acquainted. First, the SD can never be a negative number. Second, the smaller the SD, the more homogeneous the data. Finally, the SD is strongly affected by outliers, whether extremely small or extremely large.
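The three measures of dispersion, and the three facts about SD, can be demonstrated with the standard `statistics` module. The data set is hypothetical.

```python
from statistics import stdev, quantiles

# Hypothetical exam times, in minutes.
data = [40, 42, 44, 45, 47, 48, 50, 52]

# Range: difference between the highest and lowest values.
rng = max(data) - min(data)  # 52 - 40 = 12

# Percentiles: quantiles(n=4) returns the 25th, 50th, and 75th percentiles.
q1, q2, q3 = quantiles(data, n=4)

# Sample SD: never negative; a small SD means homogeneous data.
s = stdev(data)

# A single extreme outlier inflates the SD sharply.
s_outlier = stdev(data + [200])
```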


Conclusion

In the 21st century, quality management is an all-encompassing term that covers not only quality control and quality assurance, but also performance improvement and process improvement. The current emphasis on patient safety and patient care standards throughout healthcare has forced medical imaging to rethink performance and process improvement. The use of quality indicators is an integral part of performance and process improvement in medical imaging. Quality indicators are typically selected by senior management and should be chosen to meet the needs of the medical imaging department.

When selecting quality indicators, it is crucial to ask and answer the 10 basic questions that pertain to indicator selection. Throughout the selection process, it is important to remember that quality indicators should be meaningful, manageable, and measurable. Depending on the number of indicators selected, it may be necessary to categorize or group them to facilitate their manageability. The measurability of indicators is affected by the design of the data collection process and by how all of its components are defined. Indicators become meaningful when they have a positive impact on the overall operation of the medical imaging department. When data collection begins, it is important that the population be defined and an adequate sample size determined. Finally, it should be remembered that the most commonly used statistical tools are the measures of central tendency and the measures of dispersion.


1. American Society for Quality. Glossary. Available at: www.asq.org/glossary/q.html. Accessed November 7, 2007.

2. Burhans LD. What is quality? Do we agree, and does it matter? J Healthc Qual. 2007;29:39-44, 54.

3. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.

4. Bauer JE, Duffy, GL, Westcott RT, eds. The Quality Improvement Handbook. Milwaukee, WI: ASQ Quality Press; 2002.

5. Lau LS. A continuum of quality in radiology. J Am Coll Radiol. 2006;3:233-239.

6. Johnson CD, Swensen SJ, Applegate KE, et al. Quality improvement in radiology: white paper report of the Sun Valley Group meeting. J Am Coll Radiol. 2006;3:544-549.

7. Claflin N, Clinefelter K, DeMert L, et al. NAHQ Guide to Quality Management. 8th ed. Glenview, IL: National Association for Healthcare Quality; 1998.

8. Brown JA. The Healthcare Quality Handbook: A Professional Resource and Study Guide. 17th ed. Pasadena, CA: JB Quality Solutions; 2002.

9. Lloyd R. Quality Health Care: A Guide to Developing and Using Indicators. Boston, MA: Jones and Bartlett; 2004.

10. Ondategui-Parra S, Bhagwat JG, Zou KH, et al. Use of productivity and financial indicators for monitoring performance in academic radiology departments: U.S. nationwide survey. Radiology. 2005;236:214-219.

11. Stevens AT. Quality Management for Radiographic Imaging. New York, NY: McGraw-Hill Medical Publishing; 2001.

12. Pieper SK. Reading the right signals: how to strategically manage with scorecards. Healthc Exec. 2005;20:9-14.

13. Kaplan RS, Norton DP. Transforming the balanced scorecard from performance measurement to strategic management: part I. Account Horiz. 2001;15:87-104.

14. Kelley E, McNeill D, Moy E, et al. Balancing the nation's health care scorecard: the national healthcare quality and disparities reports. J Qual Patient Saf. 2005;31:622-630.

15. Cleverly WO, Cleverly JO. Scorecards and dashboards: using financial metrics to improve performance. Healthc Financ Manage. 2005;May:64-69.

16. Rothman J. Successful dashboards for healthcare. Bus Integration J. 2005;February:36-39.

17. Hansen J. Can't miss-conquer any number task by making important statistics simple. Part 1. Types of variables, mean, median, variance, and standard deviation. J Healthc Qual. 2003;25:19-24.

18. Hansen J. Can't miss-conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions. J Healthc Qual. 2003;25:25-33.

19. Dawson-Saunders B, Trapp RG. Basic & Clinical Biostatistics. 2nd ed. Norwalk, CT: Appleton & Lange; 1994.

20. Spatz C, Johnson JO. Basic Statistics: Tales of Distribution. 4th ed. Pacific Grove, CA: Brooks/Cole Publishing; 1989.
